nginx Load Balancing

nginx is well known as a front-end load balancer. It sits between the clients accessing the web site and the PHP back ends that process your dynamic pages, and while doing this it also performs admirably as a static content server. Here's how to get this going.

For this setup we'll assume there are three separate servers connected by LAN. One will run nginx and the other two will run whatever back end you need; nginx doesn't care as long as it speaks HTTP, since it's just a proxy.

The most basic section of the site's configuration is the "upstream" block, where you define the hosts being proxied to. The second parameter is the name of the upstream group. It can be anything unique; here it is "proxiedhosts".

upstream proxiedhosts {
	server 172.31.1.90:80;
	server 172.31.1.91:80;
}


The rest is all within the "server" section of the configuration.





location / {
	if (-f $document_root/maintenance.html) {
		return 503;
	}
	if ($host != www.yoursite.com) {
		rewrite ^(.*)$ http://www.yoursite.com$1 permanent;
	}
	proxy_set_header X-Forwarded-Host $http_host;
	proxy_set_header X-Forwarded-Server $host;
	proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	proxy_set_header X-Real-IP $remote_addr;
	proxy_pass http://proxiedhosts;
}
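Note that "return 503" only sends the status code; on its own it will show the client nginx's default error page. To actually display maintenance.html, you would typically pair it with an "error_page" directive in the same "server" block. A sketch, assuming maintenance.html sits in the server's document root:

error_page 503 /maintenance.html;

location = /maintenance.html {
	# Served as a static file from the document root, not proxied.
}

The exact-match location exists so that the internal redirect to /maintenance.html is handled locally rather than falling into "location /" and being proxied to a back end.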


There is some handy boilerplate in this example configuration. The first "if" clause checks whether the site is down for maintenance and, if it is, returns a 503 Service Unavailable status, following best practices. The second "if" clause says that if anyone visits the site without using "www.yoursite.com", they are redirected with a 301 Moved Permanently code, also in line with current best practices; this will update search engines. None of this is related to proxying back ends, but it is useful configuration to know and have in your toolbox.

The real magic begins with the lines that start with "proxy". The "proxy_set_header" lines set headers on the proxied request so that they match what is coming in from the client. Without them, the back end servers will "see" all requests as coming from the same machine, the front-end balancer. This is of vital importance: otherwise the first five people to get their password wrong would lock everybody out of the system, since every request appears to come from the same IP. The "proxy_pass" line simply tells nginx the protocol and which upstream group to use. If you only have one back end, you can leave out the "upstream" section and put the IP address of the host here instead.

That's the basics of it all. Now we'll talk a bit about tuning. In the following configuration, requests will hit the first listed server twice as often as the second. It is a ratio, so change it to taste. This is only useful if your hardware is not identical, or the loads on the hardware aren't identical, such as when one web server is pulling double duty as the database server.

upstream proxiedhosts {
	server 172.31.1.90:80 weight=2;
	server 172.31.1.91:80 weight=1;
}

In the next configuration, all requests from the same client IP will continue to go to the same back end host. Some back end applications are picky about this sort of thing, though most now store session data in a database, so it is becoming less of an issue. If you have a problem where you log in and are then suddenly logged out, this will probably fix it.

upstream proxiedhosts {
	ip_hash;
	server 172.31.1.90:80;
	server 172.31.1.91:80;
}