
A reverse proxy is a server that sits between the client's browser and your final web server. As the name implies, it works in the opposite direction of a conventional (forward) proxy. The main goals of a reverse proxy are to speed up content delivery and to add security. Because it does not generate the content itself, it can concentrate on tasks such as:

  • Caching: a reverse proxy can offload your main HTTP servers by serving frequently requested static content itself.
  • Load balancing: you can run a farm of HTTP servers and let the reverse proxy balance the load among them.
  • Anti-DoS: reverse proxies are designed to handle a high volume of TCP connections. With the right configuration, they are therefore a first line of defense against DoS attacks.
  • Authentication (where applicable): you can add security layers in front of your application.
  • Content alteration: you can modify responses without touching your web application.

In this article, I am going to focus on caching. As the title says, the goal is to improve your SEO ranking: a site that responds faster gets crawled and ranked better.
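To see whether caching actually improves your response times, you can measure them yourself. A minimal sketch using `curl`'s timing variables (`example.com` is a placeholder for your own domain):

```shell
# Measure total response time for a page (run before and after enabling the cache).
# example.com is a placeholder -- use your own site's URL.
curl -s -o /dev/null -w "time_total: %{time_total}s\n" http://example.com/

# Repeat the request a few times: once the object is cached by the
# reverse proxy, subsequent requests should come back noticeably faster.
for i in 1 2 3; do
  curl -s -o /dev/null -w "request $i: %{time_total}s\n" http://example.com/
done
```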

Requirements and Assumptions for your Reverse Proxy

  • You are going to use Squid as your reverse proxy
  • You know how to work with the Linux CLI
  • Your HTTP server is configured on the same machine and listens on the loopback interface (127.0.0.1)
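You can quickly confirm the last assumption from a shell. A minimal check, assuming the origin server serves plain HTTP on port 80 of the loopback interface:

```shell
# Confirm something is listening on 127.0.0.1:80 (the origin HTTP server).
ss -tln | grep '127.0.0.1:80'

# Confirm it answers HTTP requests locally (prints the status code, e.g. 200).
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1/
```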

Configuration of your Reverse Proxy

The following configuration is a basic one to get you started. Adjust it to your needs.

acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl proto_navegador     proto   HTTP
acl proto_navegador     proto   HTTPS
acl proto_navegador     proto   FTP
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
acl to_rproxy localip PUBLIC_IP
http_access allow to_rproxy proto_navegador
http_access allow localnet
http_access allow localhost
http_access deny all
http_port  PUBLIC_IP:80  accel
cache_dir ufs /var/spool/squid 4096 16 256
coredump_dir /var/spool/squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
cache_peer 127.0.0.1 parent 80  0 no-query originserver name=localhost1 login=PASSTHRU no-digest
cache_peer_access localhost1 allow to_rproxy Safe_ports
cache_peer_access localhost1 deny all
always_direct deny to_rproxy
cache_mem                       64 MB
memory_replacement_policy       heap GDSF
maximum_object_size_in_memory   1024 KB
cache_replacement_policy        heap LFUDA
maximum_object_size             8192 KB
strip_query_terms       off

Remember to run squid -z to create the on-disk cache structure. After that, just start Squid. On its next crawl, Google will see better response times and, as a consequence, you will get a better SEO ranking.
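Before starting Squid, it is worth validating the configuration, and afterwards confirming that objects are really being served from cache. A sketch of that workflow (PUBLIC_IP is the same placeholder used in the configuration above):

```shell
# Validate the configuration file syntax before starting the service.
squid -k parse

# Create the on-disk cache structure (only needed once), then start Squid.
squid -z
systemctl start squid

# Request the same object twice; on the second request, Squid's X-Cache
# response header should report a HIT, meaning it was served from cache.
curl -sI http://PUBLIC_IP/ | grep -i 'X-Cache'
curl -sI http://PUBLIC_IP/ | grep -i 'X-Cache'
```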

Good luck!
