Load balancing with high availability can be tough to set up. Fortunately, the Varnish HTTP Cache server provides a dead simple, highly available load balancer that will also work as a caching server.
The modern use of SSL/TLS for all traffic makes this a little harder, as Varnish can only cache unencrypted traffic. This means that we will need to terminate and decrypt the HTTPS connections before they are handed off to Varnish.
We will do this with Apache2.
This means that the HTTPS requests will arrive at the load balancer, where Apache2 terminates the TLS connection and passes the decrypted requests on to Varnish for caching and distributing to the web front ends.
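Put another way, a request flows like this, with Apache2 and Varnish running together on the load balancer host:
client --HTTPS--> Apache2 (:443) --HTTP--> Varnish (127.0.0.1:8080) --HTTP--> web1/web2 (:80)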
This guide will use the following three servers:
| Function | Name | IP | Listen Port |
|---|---|---|---|
| Varnish load balancer | varnish | 1.1.1.1 | 443 |
| Web server | web1 | 2.2.2.2 | 80 |
| Web server | web2 | 3.3.3.3 | 80 |
You should already have web servers configured to serve your site over HTTP (port 80) on your web backends.
I recommend not exposing the web servers to the internet, as they are not using HTTPS. Attach all the servers to a private network and configure the web servers to listen for HTTP traffic only on their private interfaces, as sketched below.
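As a sketch of the backend side, assuming web1 has a hypothetical private address of 10.0.0.2, you would change the Listen directive in /etc/httpd/conf/httpd.conf on that server so Apache binds only to the private interface:
# /etc/httpd/conf/httpd.conf on a web backend
# 10.0.0.2 is a placeholder; use your server's private address
Listen 10.0.0.2:80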
Install Varnish and Apache2
Log into the CentOS 8 server that you want to use as the load balancer and install Varnish and Apache2 with DNF:
dnf install varnish httpd
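You will probably also want both services to start on boot. Enabling them with --now also starts them immediately with their default settings, which we will change below:
systemctl enable --now varnish httpd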
Configure Apache2
First, install the Apache2 module that enables HTTPS:
dnf install mod_ssl
Then restart Apache2:
systemctl restart httpd.service
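You can verify that the SSL module was picked up by listing the loaded modules:
$ httpd -M | grep ssl
 ssl_module (shared)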
Next, create a VirtualHost file in /etc/httpd/conf.d/ that will accept the HTTPS connections on the public IP address on port 443. Give it the following contents:
<VirtualHost *:443>
    ServerName <DOMAIN>

    ErrorLog /var/log/httpd/<DOMAIN>-https_error.log
    CustomLog /var/log/httpd/<DOMAIN>-https_access.log combined

    SSLEngine on
    SSLCertificateFile <PATH/TO/CERT>/<DOMAIN>.crt
    SSLCertificateKeyFile <PATH/TO/KEY>/<DOMAIN>.key

    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
You will need to edit this to match your domain. As you can see, you also need an SSL certificate for your website. If you already have one, edit the SSLCertificateFile and SSLCertificateKeyFile lines to point to your certificate and key files.
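If you do not have a certificate yet, a free one from Let's Encrypt is an option. The following is a sketch, assuming the certbot client from the EPEL repository and that this server is reachable from the internet so the challenge can complete; the certificate and key will land under /etc/letsencrypt/live/<DOMAIN>/:
dnf install epel-release
dnf install certbot python3-certbot-apache
certbot certonly --apache -d <DOMAIN>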
Restart Apache2 to load the new configuration:
systemctl restart httpd.service
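You can sanity-check the configuration and confirm that Apache2 is now listening on port 443 (mod_proxy and mod_proxy_http, which the ProxyPass directives need, are loaded by default on CentOS 8):
apachectl configtest
ss -tlnp | grep ':443'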
Apache2 is now configured to terminate the HTTPS requests and pass them off to Varnish, which will listen on 127.0.0.1:8080 for HTTP requests from Apache2.
Configure Varnish
The first job is to configure Varnish to listen on 127.0.0.1:8080. This is done by modifying the startup parameters that are given to systemd.
First, create the following directory:
mkdir /etc/systemd/system/varnish.service.d
Next, create and edit the file /etc/systemd/system/varnish.service.d/override.conf with the following contents:
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a 127.0.0.1:8080 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m
Next, reload systemd:
systemctl daemon-reload
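To confirm that systemd has picked up the drop-in, print the full unit; the override should appear below the original ExecStart:
systemctl cat varnish.service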
Now that Varnish is listening on the correct port and IP you can create the load balancing configuration. Begin by moving to /etc/varnish/, then rename the supplied configuration file:
mv default.vcl default.vcl.original
Then create and edit a new default.vcl file by opening it with a text editor:
nano default.vcl
Then copy and paste the following configuration:
vcl 4.0;

import directors;

backend web1 {
    .host = "2.2.2.2";
    .port = "80";
    .probe = {
        .url = "/";
        .timeout = 1s;
        .interval = 5s;
        .window = 5;
        .threshold = 3;
    }
}

backend web2 {
    .host = "3.3.3.3";
    .port = "80";
    .probe = {
        .url = "/";
        .timeout = 1s;
        .interval = 5s;
        .window = 5;
        .threshold = 3;
    }
}

sub vcl_init {
    new balancer = directors.round_robin();
    balancer.add_backend(web1);
    balancer.add_backend(web2);
}

sub vcl_recv {
    set req.backend_hint = balancer.backend();
}
Let’s break down these configuration blocks. The first two sections define the web backends:
backend web1 {
    .host = "2.2.2.2";
    .port = "80";
    .probe = {
        .url = "/";
        .timeout = 1s;
        .interval = 5s;
        .window = 5;
        .threshold = 3;
    }
}
The .host can be the web server's IP address or a domain name that resolves to it. The .probe section is the health check that Varnish performs to determine whether the web server is online: every 5 seconds it requests / and expects an HTTP response within 1 second. The .window and .threshold settings mean that at least 3 of the last 5 probes must succeed for the backend to count as healthy; if they don't, Varnish considers it offline and routes traffic to the other backends.
Varnish will continue to probe the server, and when it comes back online Varnish will direct traffic to it again.
The second section:
sub vcl_init {
    new balancer = directors.round_robin();
    balancer.add_backend(web1);
    balancer.add_backend(web2);
}
tells Varnish to create a load balancer called balancer. The traffic is divided among the backends by round_robin, which means that web requests will be sent to the backends in turn.
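round_robin is not the only distribution strategy; the directors VMOD provides others. For example, if you wanted an active/passive setup rather than load balancing, a fallback director sends all traffic to the first healthy backend in its list. A minimal sketch:
sub vcl_init {
    # send everything to web1; use web2 only while web1 is unhealthy
    new balancer = directors.fallback();
    balancer.add_backend(web1);
    balancer.add_backend(web2);
}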
The last section:
sub vcl_recv {
    set req.backend_hint = balancer.backend();
}
routes all inbound traffic to the load balancer.
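Scaling out later only requires defining another backend and registering it with the director. A sketch with a hypothetical third web server, web3, at 4.4.4.4:
backend web3 {
    .host = "4.4.4.4";    # hypothetical third backend
    .port = "80";
    .probe = {
        .url = "/";
        .timeout = 1s;
        .interval = 5s;
        .window = 5;
        .threshold = 3;
    }
}

sub vcl_init {
    new balancer = directors.round_robin();
    balancer.add_backend(web1);
    balancer.add_backend(web2);
    balancer.add_backend(web3);
}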
Finally, restart Varnish:
systemctl restart varnish.service
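Varnish should now be bound to the loopback interface only; you can confirm this before testing:
ss -tlnp | grep varnishd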
Testing
First, check that Varnish can communicate with the backends:
$ varnishadm backend.list
Backend name    Admin    Probe       Last change
boot.web1       probe    5/5 good    Mon, 07 Dec 2020 14:30:40 GMT
boot.web2       probe    5/5 good    Mon, 07 Dec 2020 14:30:40 GMT
boot.balancer   probe    healthy     Mon, 07 Dec 2020 14:30:40 GMT
Stop Apache2 on one of the web servers, wait a few seconds and try again:
$ varnishadm backend.list
Backend name    Admin    Probe       Last change
boot.web1       probe    1/5 bad     Mon, 07 Dec 2020 15:09:15 GMT
boot.web2       probe    5/5 good    Mon, 07 Dec 2020 15:07:15 GMT
boot.balancer   probe    healthy     Mon, 07 Dec 2020 15:07:15 GMT
Varnish has detected that web1 is down and is now ignoring it. You can now restart Apache2 and watch Varnish accept it back into the cluster.
I also recommend putting different index.html pages on the web servers during testing so you can tell which backend served the page.
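You can then drive a few requests through the load balancer from another machine and watch which page comes back. Note that once Varnish has cached a response, repeated requests may be served from the cache rather than alternating between the backends:
for i in 1 2 3 4; do curl -s https://<DOMAIN>/; done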