
Friday, June 14, 2019

HAProxy - TCP/HTTP Load Balancer

HAProxy (High Availability Proxy) is a TCP/HTTP load balancer and proxy server that distributes incoming requests across multiple endpoints. It can also act as a basic HTTP reverse proxy. This is useful when too many concurrent connections over-saturate the capacity of a single server. Instead of a client connecting to a single server that processes all of the requests, the client connects to an HAProxy instance, which forwards each request to one of the available endpoints based on a load-balancing algorithm.


The following are some of the advantages of HAProxy:
Pluggable architecture
Ability to hot restart
HTTP/2 support


In this article, we will see how to configure HAProxy to load balance requests across two nginx endpoints. For the demo we will use 3 machines:
172.31.31.127 - manager
172.31.16.186 - worker1
172.31.21.109 - worker2


Manager Configuration - On the manager side,
1. Set the hostname to “manager” using “hostnamectl set-hostname manager”.
2. Now in the /etc/hosts file on the manager, add entries for the other 2 machines as below,


[root@ip-172-31-31-127 centos]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
172.31.16.186      worker1
172.31.21.109      worker2


3. Install HAProxy using,
yum update -y
yum -y install haproxy


4. Configure HAProxy on the manager node by adding the following content to the /etc/haproxy/haproxy.cfg file,
listen haproxy3-monitoring *:8080
   mode     http
   option   forwardfor
   option   httpclose
   stats    enable
   stats show-legends
   stats refresh 5s
   stats  uri /stats
   stats realm Haproxy\ Statistics
   stats auth Password123:Password123
   stats admin if TRUE
   default_backend app-main

frontend  main
   bind *:80
   option http-server-close
   option forwardfor
   default_backend app-main

backend app-main
    balance roundrobin
    option httpchk HEAD / HTTP/1.1\r\nHost:\ localhost
    server web-server-1 172.31.16.186:80 check                 #Nginx1
    server web-server-2 172.31.21.109:80 check                 #Nginx2
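The backend server lines can optionally be tuned further. As a hypothetical variant (the weight values below are made up for illustration, not part of this setup), you can skew the round-robin distribution so one endpoint receives a larger share of traffic:

```
backend app-main
    balance roundrobin
    option httpchk HEAD / HTTP/1.1\r\nHost:\ localhost
    server web-server-1 172.31.16.186:80 check weight 2   # gets roughly twice the requests
    server web-server-2 172.31.21.109:80 check weight 1
```

With equal weights (the default), round robin simply alternates between the two servers, which is what we will observe later with curl.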


Make sure there is no other front-end element using port 80. In this case we are binding our front end to port 80, which means we will access the load balancer on port 80.


Now once we access this on port 80, the request goes to the back end defined by app-main. The linking between the frontend and the backend is defined by the default_backend app-main element.
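The balance roundrobin directive in the backend cycles through the servers in order. A minimal Python sketch of that selection logic (illustrative only; real HAProxy also honors weights and server health):

```python
# Sketch of "balance roundrobin": hand each request to the next server in turn.
from itertools import cycle

backends = ["172.31.16.186:80", "172.31.21.109:80"]  # web-server-1, web-server-2
pick_next = cycle(backends).__next__

# Four incoming requests alternate between the two backends:
picks = [pick_next() for _ in range(4)]
print(picks)
```

This is exactly the alternating pattern we will see in the curl output at the end of the article.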


In our backend configuration we have defined two endpoints as,
server web-server-1 172.31.16.186:80 check                 #Nginx1
server web-server-2 172.31.21.109:80 check                 #Nginx2


In both of the above cases, nginx is configured and running on port 80. So now if we access <Manager IP>:80, the request will be routed to either the web-server-1 or web-server-2 node defined above.
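The check keyword on each server line makes HAProxy probe the endpoint, and option httpchk turns that probe into an HTTP HEAD request for "/". A rough Python sketch of the idea (not HAProxy's actual implementation; the throwaway local server below just stands in for an nginx endpoint):

```python
# Sketch of an httpchk-style health probe: HEAD "/" and treat 2xx/3xx as healthy.
import http.client
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

def is_healthy(host, port, timeout=2.0):
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/", headers={"Host": "localhost"})
            return 200 <= conn.getresponse().status < 400
        finally:
            conn.close()
    except OSError:
        return False  # connection refused / timed out -> mark server down

# Demo against a throwaway local HTTP server standing in for nginx:
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(is_healthy("127.0.0.1", server.server_port))  # healthy -> kept in rotation
server.shutdown()
```

A server that fails this probe is taken out of the rotation until it passes again, which is why the `check` keyword matters on the server lines.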


5. Validate the configuration with “haproxy -c -f /etc/haproxy/haproxy.cfg”, then restart HAProxy with “service haproxy restart”.


On both the worker nodes
1. Update the /etc/hosts file with the manager node entry as below,
[root@ip-172-31-21-109 centos]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1            localhost localhost.localdomain localhost6 localhost6.localdomain6
172.31.31.127  manager


2. Install nginx, modify the index.html in the /usr/share/nginx/html directory as below, and restart the nginx service,
[root@ip-172-31-16-186 ~]# cd /usr/share/nginx/html/
[root@ip-172-31-16-186 html]# mv index.html index.html.bak
[root@ip-172-31-16-186 html]# echo "web-server-1. Hey ! This is your first web server" > index.html
[root@ip-172-31-16-186 html]# service nginx restart


Similarly, on the other node, modify the index.html with its own content,
[root@ip-172-31-21-109 centos]# cd /usr/share/nginx/html/
[root@ip-172-31-21-109 html]# mv index.html index.html.bak
[root@ip-172-31-21-109 html]# echo "web-server-2. Hey ! This is your first web server" > index.html
[root@ip-172-31-21-109 html]# service nginx restart


Access the URL on the manager node
Now that both nodes have nginx installed and running, let's hit the manager node on port 80 to see whether the requests are being load balanced,
[root@ip-172-31-31-127 haproxy]# curl 172.31.31.127
web-server-2. Hey ! This is your first web server
[root@ip-172-31-31-127 haproxy]# curl 172.31.31.127
web-server-1. Hey ! This is your first web server
[root@ip-172-31-31-127 haproxy]# curl 172.31.31.127
web-server-2. Hey ! This is your first web server
[root@ip-172-31-31-127 haproxy]# curl 172.31.31.127
web-server-1. Hey ! This is your first web server
[root@ip-172-31-31-127 haproxy]# curl 172.31.31.127
web-server-2. Hey ! This is your first web server
[root@ip-172-31-31-127 haproxy]# curl 172.31.31.127
web-server-1. Hey ! This is your first web server


We can see that the requests are being load balanced when we hit <manager>:80. Each request is routed to one of the two nodes, and the response we see is the index.html defined on that node. Hope this helps in understanding HAProxy.
