
How to configure load balancing using Nginx

mediasat
Extended Member · Joined: Oct 20, 2019 · Messages: 3 · Reaction score: 18 · Points: 11 · Location: Morocco
Advantages of load balancing

Load balancing is an excellent way to scale out your application and increase its performance and redundancy. Nginx, a popular web server, can be configured as a simple yet powerful load balancer to improve your servers' resource availability and efficiency. Nginx acts as a single entry point to a distributed web application running on multiple separate servers.

[Image: load balancer diagram]



This guide describes the advantages of load balancing. Learn how to set up load balancing with nginx for your cloud servers.
As a prerequisite, you'll need at least two hosts with web server software installed and configured to see the benefit of the load balancer. If you already have one web host set up, duplicate it by creating a custom image and deploying it onto a new server at your UpCloud control panel.




Installing nginx
-----------------------------------
# Debian and Ubuntu
sudo apt-get update
# Then install the Nginx Open Source edition
sudo apt-get install nginx


# CentOS
# Install the extra packages repository
sudo yum install epel-release
# Update the repositories and install Nginx
sudo yum update
sudo yum install nginx

Once installed, change directory into the nginx main configuration folder.
-------------------------------------------------------------------------------------------------
cd /etc/nginx/

Now depending on your OS, the web server configuration files will be in one of two places.
Ubuntu and Debian follow a rule for storing virtual host files in /etc/nginx/sites-available/, which are enabled through symbolic links to /etc/nginx/sites-enabled/. You can use the command below to enable any new virtual host files.
-----------------------------------------------------------------------
sudo ln -s /etc/nginx/sites-available/vhost /etc/nginx/sites-enabled/vhost


CentOS users can find their host configuration files under /etc/nginx/conf.d/, from which any virtual host file ending in .conf gets loaded.
Check that at least the default configuration is present, then restart nginx.
------------------------------------------------------------------------------------------------------
sudo systemctl restart nginx


Test that the server replies to HTTP requests. Open the load balancer server's public IP address in your web browser. If you see the default nginx welcome page, the installation was successful.
[Image: nginx default welcome page]
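You can also check from the command line, for example with curl (the IP below is just a placeholder for your load balancer's public address):
----------------------------------------------------
# Replace 192.0.2.10 with your load balancer's public IP
curl -I http://192.0.2.10/
# A "HTTP/1.1 200 OK" response with a "Server: nginx" header confirms nginx is answering.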



If you have trouble loading the page, check that a firewall is not blocking your connection. For example, on CentOS 7 the default firewall rules do not allow HTTP traffic; enable it with the commands below.
-------------------------------
sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --reload
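If your load balancer runs Ubuntu or Debian with the ufw firewall enabled (an assumption, as the guide only covers firewalld on CentOS), the equivalent would look roughly like this:
----------------------------------------------------
# Allow incoming HTTP traffic through ufw
sudo ufw allow 80/tcp
# Check the current rules
sudo ufw status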


Then try reloading your browser.

Configuring nginx as a load balancer

Once nginx is installed and tested, start configuring it for load balancing. In essence, all you need to do is set up nginx with instructions on which type of connections to listen for and where to redirect them. Create a new configuration file using whichever text editor you prefer, for example with nano:
--------------------------------------------------------------------------------------------------------------------------------------------------------------
sudo nano /etc/nginx/conf.d/load-balancer.conf

In load-balancer.conf you'll need to define the following two segments, upstream and server; see the example below.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
# Define which servers to include in the load balancing scheme.
# It's best to use the servers' private IPs for better performance and security.
# You can find the private IPs at your UpCloud control panel Network section.
http {
   upstream backend {
      server 10.1.0.101;
      server 10.1.0.102;
      server 10.1.0.103;
   }

   # This server accepts all traffic to port 80 and passes it to the upstream.
   # Notice that the upstream name and the proxy_pass need to match.

   server {
      listen 80;

      location / {
         proxy_pass http://backend;
      }
   }
}

Then save the file and exit the editor.
Next, disable the default server configuration that you verified was working earlier after the installation. Again, depending on your OS, this part differs slightly.
On Debian and Ubuntu systems you’ll need to remove the default symbolic link from the sites-enabled folder.
----------------------------------------------------------------------------------------------------------------------------------------------------
sudo rm /etc/nginx/sites-enabled/default

CentOS hosts don’t use the same linking. Instead, simply rename the default.conf in the conf.d/ directory to something that doesn’t end with .conf, for example:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
sudo mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.disabled

Then use the following to restart nginx.
----------------------------------------------------
sudo systemctl restart nginx

Check that nginx starts successfully. If the restart fails, take a look at the /etc/nginx/conf.d/load-balancer.conf you just created to make sure there are no typos or missing semicolons.
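You can also have nginx validate the configuration syntax before restarting, using its built-in configuration test:
----------------------------------------------------
# Test the configuration files for syntax errors without restarting
sudo nginx -t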
When you enter the load balancer's public IP address in your web browser, you should be passed through to one of your back-end servers.

Load balancing methods

Load balancing with nginx uses a round-robin algorithm by default if no other method is defined, as in the first example above. With the round-robin scheme, each server is selected in turn according to the order you list them in the load-balancer.conf file. This balances the number of requests equally for short operations.
Least connections based load balancing is another straightforward method. As the name suggests, this method directs requests to the server with the fewest active connections at that time. It works more fairly than round-robin for applications where requests can sometimes take longer to complete.
To enable the least connections balancing method, add the least_conn parameter to your upstream section as shown in the example below.
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
upstream backend {
   least_conn;
   server 10.1.0.101;
   server 10.1.0.102;
   server 10.1.0.103;
}

Round-robin and least connections balancing schemes are fair and have their uses. However, they cannot provide session persistence. If your web application requires that users are subsequently directed to the same back-end server they used during their previous connection, use the IP hashing method instead. IP hashing uses the visitor's IP address as a key to determine which host should be selected to service the request. This directs each visitor to the same server every time, provided that the server is available and the visitor's IP address hasn't changed.
To use this method, add the ip_hash parameter to your upstream segment as in the example below.
-------------------------------------------------------------------------------------------------------------------------------------------------
upstream backend {
   ip_hash;
   server 10.1.0.101;
   server 10.1.0.102;
   server 10.1.0.103;
}

In a server setup where the available resources between different hosts are not equal, it might be desirable to favour some servers over others. Defining server weights allows you to further fine-tune load balancing with nginx. The server with the highest weight in the load balancer is selected the most often.
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
upstream backend {
   server 10.1.0.101 weight=4;
   server 10.1.0.102 weight=2;
   server 10.1.0.103;
}

For example, in the configuration shown above the first server is selected twice as often as the second, which in turn gets twice as many requests as the third: with weights of 4, 2 and 1 (the default), roughly four out of every seven requests go to the first server, two to the second and one to the third.

Load balancing with HTTPS enabled

Enabling HTTPS for your site is a great way to protect your visitors and their data. If you haven't yet implemented encryption on your web hosts, we highly recommend you take a look at our guide on how to install Let's Encrypt on nginx.
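As a rough sketch only (assuming you use the certbot client with its nginx plugin on a Debian/Ubuntu host; package names and steps differ on other systems), obtaining a certificate could look like this:
----------------------------------------------------
# Install certbot and its nginx plugin (Debian/Ubuntu package names assumed)
sudo apt-get install certbot python3-certbot-nginx
# Request and install a certificate, replacing example.com with your own domain
sudo certbot --nginx -d example.com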
Using encryption with a load balancer is easier than you might think. All you need to do is add another server section to your load balancer configuration file that listens for HTTPS traffic on port 443 with SSL, and then set up a proxy_pass to your upstream segment just like for HTTP in the previous example.
Open your configuration file again for editing.
--------------------------------------------------------

sudo nano /etc/nginx/conf.d/load-balancer.conf


Then add the following server segment to the end of the file.
----------------------------------------------------------------------------------
server {
   listen 443 ssl;
   server_name domain_name;
   ssl_certificate /etc/letsencrypt/live/domain_name/cert.pem;
   ssl_certificate_key /etc/letsencrypt/live/domain_name/privkey.pem;
   ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

   location / {
      proxy_pass http://backend;
   }
}

Then save the file, exit the editor and restart nginx again.
----------------------------------------------------------------------------
sudo systemctl restart nginx

Setting up encryption at your load balancer, when you are using private network connections to your back-end, has some great advantages.
  • As only your UpCloud servers have access to your private network, it allows you to terminate SSL at the load balancer and only pass plain HTTP connections onwards.
  • It also greatly simplifies your certificate management. You can obtain and renew the certificates from a single host.
With HTTPS enabled, you also have the option to enforce encryption for all connections to your load balancer. Simply update the server segment listening on port 80 with a server name and a redirection to your HTTPS port, then remove or comment out the location block as it is no longer needed. See the example below.
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
server {
   listen 80;
   server_name domain_name;
   return 301 https://$server_name$request_uri;

   #location / {
   #   proxy_pass http://backend;
   #}
}

Save the file again after you have made the changes. Then restart nginx.
-------------------------------------------------------------------------------------------------
sudo systemctl restart nginx

Now all connections to your load balancer will be served over an encrypted HTTPS connection. Requests to unencrypted HTTP will be redirected to HTTPS as well. This provides a seamless transition into encryption; nothing is required from your visitors.
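You can verify the redirect from the command line, for example with curl (domain_name stands for the placeholder domain used in the configuration above):
----------------------------------------------------
# Request the plain HTTP address and inspect the response headers
curl -I http://domain_name/
# Expect a "301 Moved Permanently" status with a "Location: https://..." header.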

Health checks

In order to know which servers are available, nginx's reverse proxy implementation includes passive server health checks. If a server fails to respond to a request or replies with an error, nginx notes that the server has failed and will try to avoid forwarding connections to it for a time.
The number of consecutive unsuccessful connection attempts within a certain time period is defined in the load balancer configuration file by setting the max_fails parameter on the server lines. By default, when no max_fails is specified, this value is set to 1. Setting max_fails to 0 disables health checks for that server.

If max_fails is set to a value greater than 1, the subsequent failures must happen within a specific time frame for them to count. This time frame is specified by the fail_timeout parameter, which also defines how long the server should be considered failed. By default, fail_timeout is set to 10 seconds.
After a server is marked failed and the time set by fail_timeout has passed, nginx will begin to gracefully probe the server with client requests. If the probes succeed, the server is again marked live and included in the load balancing as normal.
----------------------------------------------------------------------------------------

upstream backend {
   server 10.1.0.101 weight=5;
   server 10.1.0.102 max_fails=3 fail_timeout=30s;
   server 10.1.0.103;
}


Health checks allow you to adapt your server back-end to current demand by powering hosts up or down as required. When you start additional servers during high traffic, the new resources automatically become available to your load balancer, which can easily increase your application's performance.

Conclusions
If you wish to improve your web application's performance and availability, a load balancer is definitely something to consider. Nginx is powerful yet relatively simple to set up as a load balancer. Together with an easy encryption solution, such as the Let's Encrypt client, it makes for a great front end to your web farm. Check out the documentation for upstream over at nginx.org to learn more.
While using multiple hosts protects your web service with redundancy, the load balancer itself can still be a single point of failure. You can further improve high availability by setting up a floating IP between multiple load balancers. Find out more in our article on floating IPs on UpCloud.




Moved from a forum
 

redhat
Administrator · Staff member · Chief Moderator · Joined: Jun 19, 2019 · Messages: 3,074 · Reaction score: 14,825 · Points: 134 · Location: root[@]woi

Please edit your post and use the BBcodes where the config codes are.
 

workline17
Extended Member · Joined: Sep 26, 2019 · Messages: 22 · Reaction score: 63 · Points: 24 · Location: UK
Hi,
this is what I was looking for.
I have a few questions, if possible.
1- If I have my main web server A with its database, and I set up a second clean server B as a load balancer to handle the responses, will B be able to serve answers from the database even though the database is not installed on it?
Example:
A = web + database
B = load balancer

A user requests example.com/user.php (which runs a query on the database).
Can B answer that user? Does it need any connection of its own, or is the load balancer configuration alone enough?

I am asking because I have high traffic and the database also needs to be managed.
Is there a different solution for that?
Thanks
 

kiwisports
Extended Member · Joined: Sep 10, 2019 · Messages: 3 · Reaction score: 3 · Points: 11 · Location: segovira
A question (maybe it's silly, but I am new to this): do I also have to point the web domain to the IP of the load balancer?
 

dgndnzcm
Extended Member · Joined: Sep 5, 2019 · Messages: 14 · Reaction score: 261 · Points: 59 · Location: Istanbul

upstream backend {
   least_conn;
   server 10.1.0.101;
   server 10.1.0.102;
   server 10.1.0.103;
}

server {
   listen 80;
   server_name domain_name;
   location / {
      proxy_pass http://backend;
      return 301 https://$server_name$request_uri;
      ↑↑↑ How can we add upstream servers to $server_name?
   }
}
 