2025-03-19 · 5579 words · 12 min read

1. Environment Overview

Building on the environment from the previous posts: in practice, more than one server sends requests to the proxy server (31), so today we again clone two machines from 71, jx-busi-21 and jx-busi-22, to act as application servers. Since these are linked clones, we delete everything in the original /opt directory. Then, from the ops server (81), we use Ansible to deploy the application to jx-busi-21 and jx-busi-22 in one pass.

2. Deploying the Application with Ansible

Deployment environment

  • Control node: ops server (81)

  • Managed nodes: jx-busi-21 and jx-busi-22

About Ansible

Ansible is an automation tool written in Python that combines the strengths of earlier tools (Puppet, Chef, Func, Fabric) to provide batch system configuration, batch software deployment, and batch command execution. It communicates over SSH: for each command it logs in to the managed node via SSH, runs the command, and then disconnects.
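All the ad-hoc commands in this post follow one pattern: `ansible <group> -m <module> -a "<arguments>"`. A quick sketch (the `jxbusi` group name matches the one used throughout this post; the file paths in the copy line are hypothetical):

```
ansible jxbusi -m ping                             # SSH + module round trip, expects "pong"
ansible jxbusi -m shell -a "ls /tmp"               # run an arbitrary command
ansible jxbusi -m copy -a "src=/opt/x dest=/opt"   # push a file (hypothetical paths)
```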

Install Ansible on the control node:

yum install ansible -y 

Goal:

From the control node, distribute the software package to the managed nodes jx-busi-21 and jx-busi-22, then extract it and configure environment variables.

First, configure the Ansible inventory:
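A minimal sketch of what /etc/ansible/hosts would contain, assuming the two managed nodes are grouped as jxbusi and reached with an SSH password (consistent with the SSH-password error that follows; the exact variable values are assumptions):

```
# /etc/ansible/hosts -- minimal sketch; credentials are hypothetical
[jxbusi]
10.10.10.21
10.10.10.22

[jxbusi:vars]
ansible_ssh_user=root
ansible_ssh_pass=123456
```

Password-based connections like this require the sshpass utility on the control node, which is exactly what the error below complains about.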


A test run fails:

[root@jx-ops-81 ansible]# ansible jxbusi -m shell -a "ls /tmp"
10.10.10.21 | FAILED | rc=-1 >>
Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this.  Please add this host's fingerprint to your known_hosts file to manage this host.

10.10.10.22 | FAILED | rc=-1 >>
Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this.  Please add this host's fingerprint to your known_hosts file to manage this host.

Cause of the error: SSH host key checking is enabled by default.

[root@jx-ops-81 ansible]# ssh 10.10.10.21 
The authenticity of host '10.10.10.21 (10.10.10.21)' can't be established.
ECDSA key fingerprint is SHA256:DD1XCbgvMyHExH5xCNV7ofghwH2pEey2a9RnSsBt0Co.
Are you sure you want to continue connecting (yes/no/[fingerprint])?

Disable the check:

vim /etc/ansible/ansible.cfg

host_key_checking = False

Running the command again succeeds, which confirms the control node can now talk to the managed nodes and we can continue.
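Note that `host_key_checking = False` trades away a safety check for every host. A less invasive alternative, in line with what the error message itself suggests, is to pre-populate known_hosts (a sketch for this lab's two hosts):

```
# Record the managed nodes' host keys instead of disabling the check globally
ssh-keyscan 10.10.10.21 10.10.10.22 >> ~/.ssh/known_hosts

# Or disable checking for the current shell session only
export ANSIBLE_HOST_KEY_CHECKING=False
```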

[root@jx-ops-81 ansible]# ansible jxbusi -m shell -a "ls /tmp"
[WARNING]: Platform linux on host 10.10.10.21 is using the discovered Python interpreter at /usr/bin/python3.7, but future installation of another Python interpreter
could change this. See https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information.

10.10.10.21 | CHANGED | rc=0 >>
ansible_command_payload_0q6k2tlh
systemd-private-d7a0e27ce8ad451ab4d963e639867ff0-chronyd.service-XTeKsb
systemd-private-d7a0e27ce8ad451ab4d963e639867ff0-systemd-logind.service-WUFqgT

[WARNING]: Platform linux on host 10.10.10.22 is using the discovered Python interpreter at /usr/bin/python3.7, but future installation of another Python interpreter
could change this. See https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information.

10.10.10.22 | CHANGED | rc=0 >>
ansible_command_payload_cp1oa154
systemd-private-a444e71b97e04aa2b5a4e683ad063d2d-chronyd.service-Mp5vVg
systemd-private-a444e71b97e04aa2b5a4e683ad063d2d-systemd-logind.service-s7Nb4D

First, check what is in /opt on 21 and 22:

[root@jx-ops-81 opt]# ansible jxbusi -m shell -a "ls /opt"
[WARNING]: Platform linux on host 10.10.10.21 is using the discovered Python interpreter at /usr/bin/python3.7, but future installation of another Python interpreter
could change this. See https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information.

10.10.10.21 | CHANGED | rc=0 >>
kylin-sm-package

[WARNING]: Platform linux on host 10.10.10.22 is using the discovered Python interpreter at /usr/bin/python3.7, but future installation of another Python interpreter
could change this. See https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information.

10.10.10.22 | CHANGED | rc=0 >>
kylin-sm-package

Copy the file from the control node to the managed nodes. (This takes a while...)

[root@jx-ops-81 opt]# ansible jxbusi -m copy -a "src=/opt/jdk-8u151-linux-x64.tar.gz dest=/opt"
[WARNING]: Platform linux on host 10.10.10.21 is using the discovered Python interpreter at /usr/bin/python3.7, but future installation of another Python interpreter
could change this. See https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information.

10.10.10.21 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3.7"
    },
    "changed": true,
    "checksum": "5598835566f55c1a785892d9e4e7868179987ed3",
    "dest": "/opt/jdk-8u151-linux-x64.tar.gz",
    "gid": 0,
    "group": "root",
    "md5sum": "774d8cb584d9ebedef8eba9ee2dfe113",
    "mode": "0644",
    "owner": "root",
    "size": 189736377,
    "src": "/root/.ansible/tmp/ansible-tmp-1733729480.176557-170987315504901/source",
    "state": "file",
    "uid": 0
}
[WARNING]: Platform linux on host 10.10.10.22 is using the discovered Python interpreter at /usr/bin/python3.7, but future installation of another Python interpreter
could change this. See https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information.

10.10.10.22 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3.7"
    },
    "changed": true,
    "checksum": "5598835566f55c1a785892d9e4e7868179987ed3",
    "dest": "/opt/jdk-8u151-linux-x64.tar.gz",
    "gid": 0,
    "group": "root",
    "md5sum": "774d8cb584d9ebedef8eba9ee2dfe113",
    "mode": "0644",
    "owner": "root",
    "size": 189736377,
    "src": "/root/.ansible/tmp/ansible-tmp-1733729480.176511-128929210165902/source",
    "state": "file",
    "uid": 0
}

Check that the file is now in the target directory on the managed nodes:

[root@jx-ops-81 opt]# ansible jxbusi -m shell -a "ls /opt"
[WARNING]: Platform linux on host 10.10.10.21 is using the discovered Python interpreter at /usr/bin/python3.7, but future installation of another Python interpreter
could change this. See https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information.

10.10.10.21 | CHANGED | rc=0 >>
jdk-8u151-linux-x64.tar.gz
kylin-sm-package

[WARNING]: Platform linux on host 10.10.10.22 is using the discovered Python interpreter at /usr/bin/python3.7, but future installation of another Python interpreter
could change this. See https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information.

10.10.10.22 | CHANGED | rc=0 >>
jdk-8u151-linux-x64.tar.gz
kylin-sm-package

Extract the archive on the managed nodes, then check that it unpacked correctly:

# ansible jxbusi -m shell -a "tar -xvf /opt/jdk-8u151-linux-x64.tar.gz -C /opt"
[root@jx-ops-81 opt]# ansible jxbusi -m shell -a "ls /opt"
[WARNING]: Platform linux on host 10.10.10.21 is using the discovered Python interpreter at /usr/bin/python3.7, but future installation of another Python interpreter
could change this. See https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information.

10.10.10.21 | CHANGED | rc=0 >>
jdk1.8.0_151
jdk-8u151-linux-x64.tar.gz
kylin-sm-package

[WARNING]: Platform linux on host 10.10.10.22 is using the discovered Python interpreter at /usr/bin/python3.7, but future installation of another Python interpreter
could change this. See https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information.

10.10.10.22 | CHANGED | rc=0 >>
jdk1.8.0_151
jdk-8u151-linux-x64.tar.gz
kylin-sm-package
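Incidentally, the copy + shell-tar combination above can be collapsed into one step with Ansible's unarchive module, which by default copies the archive from the control node and extracts it on the target (a sketch using the same paths):

```
# unarchive copies src from the control node, then extracts it at dest
ansible jxbusi -m unarchive -a "src=/opt/jdk-8u151-linux-x64.tar.gz dest=/opt"
```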

Installing Tomcat on the managed nodes follows the same pattern. Afterwards, /opt on 21 and 22 contains:

[root@jx-ops-81 opt]# ansible jxbusi -m shell -a "ls /opt"
[WARNING]: Platform linux on host 10.10.10.21 is using the discovered Python interpreter at /usr/bin/python3.7, but future installation of another Python interpreter
could change this. See https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information.

10.10.10.21 | CHANGED | rc=0 >>
jdk1.8.0_151
jdk-8u151-linux-x64.tar.gz
kylin-sm-package
tomcat8
tomcat8-cgi.tar.gz

[WARNING]: Platform linux on host 10.10.10.22 is using the discovered Python interpreter at /usr/bin/python3.7, but future installation of another Python interpreter
could change this. See https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information.

10.10.10.22 | CHANGED | rc=0 >>
jdk1.8.0_151
jdk-8u151-linux-x64.tar.gz
kylin-sm-package
tomcat8
tomcat8-cgi.tar.gz

Next, configure environment variables on the managed nodes:

The simplest approach: inspect the current environment file on 21 and 22, edit a copy locally, then push the edited file back to both managed nodes in one go.

[root@jx-ops-81 opt]# ansible jxbusi -m shell -a "cat /etc/profile"
[root@jx-ops-81 opt]# ansible jxbusi -m copy -a "src=/opt/aa dest=/etc/profile"

In effect, this adds these two lines:

export JAVA_HOME=/opt/jdk1.8.0_151
export PATH=$PATH:$JAVA_HOME/bin
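Overwriting all of /etc/profile works here, but it silently discards any per-host differences. A gentler sketch using Ansible's lineinfile module appends only the two needed lines (each line is added only if not already present):

```
ansible jxbusi -m lineinfile -a "path=/etc/profile line='export JAVA_HOME=/opt/jdk1.8.0_151'"
ansible jxbusi -m lineinfile -a "path=/etc/profile line='export PATH=\$PATH:\$JAVA_HOME/bin'"
```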

Now use Ansible to run a deployment script that deploys and starts the application on every managed node at once.

The script, pro.sh:

Note: the script must export the environment variables itself. Ansible runs it in a non-login shell that does not source /etc/profile, so without them the script would fail.

#!/bin/bash

export JAVA_HOME=/opt/jdk1.8.0_151
export PATH=$PATH:$JAVA_HOME/bin

# Stop the running Java process
killall java
# Remove the old webapp
rm -rf /opt/tomcat8/webapps/ROOT/*
# Fetch the new war package
curl http://10.10.10.81:88/virstu.war -o /opt/tomcat8/webapps/ROOT.war
# Loop until port 80 is released
while true; do
 have=$(netstat -anp | grep -w 80)
 if [ -n "$have" ]; then
  sleep 2
 else
  break
 fi
done
# Start Tomcat
nohup /opt/tomcat8/bin/startup.sh &

# Smoke test:
# keep requesting until a request succeeds, then exit
# (success means curl's exit status $? is 0)
while true; do
 curl 127.0.0.1/a.html
 if [ $? -eq 0 ]; then
  break
 fi
 sleep 3
done

echo "deploy done..."

The script downloads a file from our local server (81) with curl, so on 81 we need to serve the /root/jx1206/target directory over HTTP, with that directory as the web root. Python can do this.
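Python's built-in http.server module serves a directory over HTTP. Since pro.sh fetches the war from port 88, a sketch would be:

```shell
cd /root/jx1206/target
# Serve the current directory over HTTP on port 88 (matching the curl URL in pro.sh)
nohup python3 -m http.server 88 &
```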


With the share running, run the script from the control node (81):

[root@jx-ops-81 jx1206]# ansible jxbusi -m script -a "/opt/jx1206/pro.sh" 

Once it finishes, test whether the application deployed and started successfully:

[root@jx-ops-81 target]# curl 10.10.10.21/a.html 
<p>ffffffffff</p>
[root@jx-ops-81 target]# curl 10.10.10.22/a.html 
<p>ffffffffff</p>

3. Nginx Load Balancing

When accessing the application we usually want a load balancer in front to distribute traffic. We can put an nginx load balancer in front of 21 and 22 so that all requests are dispatched by that one server. Again we linked-clone a machine from 71: jx-nginx-11.

Deployment method: build Nginx from source

Download and extract the Nginx source:

curl -o nginx.tar.gz https://nginx.org/download/nginx-1.24.0.tar.gz
tar -xvf nginx.tar.gz

Enter the directory and check the build environment:

[root@jx-nginx-11 opt]# cd nginx-1.24.0/
[root@jx-nginx-11 nginx-1.24.0]# ls
auto  CHANGES  CHANGES.ru  conf  configure  contrib  html  LICENSE  man  README  src
[root@jx-nginx-11 nginx-1.24.0]# ./configure 
checking for OS
 + Linux 4.19.90-52.22.v2207.ky10.x86_64 x86_64
checking for C compiler ... not found

./configure: error: C compiler cc is not found

The system is missing the gcc compiler; install it with yum:

yum install gcc -y

Checking again shows the PCRE library is missing:

[root@jx-nginx-11 nginx-1.24.0]# ./configure

Install the library:

[root@jx-nginx-11 nginx-1.24.0]# yum install pcre2-devel -y

As an aside, this is how to find which package provides it:

[root@jx-nginx-11 nginx-1.24.0]# yum provides "*pcre*"

After installing, the check finds that zlib is missing too:

[root@jx-nginx-11 nginx-1.24.0]# ./configure
./configure: error: the HTTP gzip module requires the zlib library.
You can either disable the module by using --without-http_gzip_module
option, or install the zlib library into the system, or build the zlib library
statically from the source with nginx by using --with-zlib=<path> option.

Install that library as well:

[root@jx-nginx-11 nginx-1.24.0]# yum install zlib-devel -y

Check the build environment once more.

When a Makefile appears in the directory, the pre-build configuration is complete:

[root@jx-nginx-11 nginx-1.24.0]# ls
auto  CHANGES  CHANGES.ru  conf  configure  contrib  html  LICENSE  Makefile  man  objs  README  src

Start compiling. The make command is missing, so install it:

[root@jx-nginx-11 nginx-1.24.0]# make 
-bash: make: command not found
[root@jx-nginx-11 nginx-1.24.0]# yum install make -y

Run make again. Output like the following means the build succeeded:

objs/ngx_modules.o \
-ldl -lpthread -lcrypt -lpcre2-8 -lz \
-Wl,-E
sed -e "s|%%PREFIX%%|/usr/local/nginx|" \
	-e "s|%%PID_PATH%%|/usr/local/nginx/logs/nginx.pid|" \
	-e "s|%%CONF_PATH%%|/usr/local/nginx/conf/nginx.conf|" \
	-e "s|%%ERROR_LOG_PATH%%|/usr/local/nginx/logs/error.log|" \
	< man/nginx.8 > objs/nginx.8
make[1]: Leaving directory '/opt/nginx-1.24.0'

Install:

[root@jx-nginx-11 nginx-1.24.0]# make install 
make -f objs/Makefile install

Note: the install step follows the rules written into the Makefile.

After installation:

the binary lives in /usr/local/nginx/sbin

the configuration lives in /usr/local/nginx/conf
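A few management commands for a source-built nginx worth knowing at this point (standard nginx CLI flags):

```
/usr/local/nginx/sbin/nginx -t          # validate the configuration
/usr/local/nginx/sbin/nginx             # start
/usr/local/nginx/sbin/nginx -s reload   # reload config without dropping connections
/usr/local/nginx/sbin/nginx -s stop     # fast shutdown
```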

Now configure load balancing:

vim /usr/local/nginx/conf/nginx.conf

#user  nobody;
worker_processes  2;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  65535;
}


http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    upstream jxbusi {
        server 10.10.10.21:80;
        server 10.10.10.22:80;
    }
    server {
        listen       80;
        server_name  _;
        location / {
                proxy_pass http://jxbusi;
        }
     }
}

Start nginx:

[root@jx-nginx-11 conf]# /usr/local/nginx/sbin/nginx 
[root@jx-nginx-11 conf]# ps aux | grep nginx 
root        6892  0.0  0.0   4436   396 ?        Ss   19:11   0:00 nginx: master process /usr/local/nginx/sbin/nginx
nobody      6893  0.1  1.4  31660 29316 ?        S    19:11   0:00 nginx: worker process
nobody      6894  0.0  1.4  31660 29316 ?        S    19:11   0:00 nginx: worker process
root        6896  0.0  0.0 213136   892 pts/0    S+   19:12   0:00 grep nginx
[root@jx-nginx-11 conf]# 

Test:

Send requests from the gateway server (254) while capturing port-80 traffic on the nginx server (11). After a few requests you can see 11 alternating between backends 21 and 22 in round-robin order.

[root@jx-gateway-254 opt]# curl 10.10.10.11/cgi-bin/a.sh 
aaaaaaaaaaaaaaa
[root@jx-nginx-11 ~]# tcpdump -i enp0s3 -p port 80 -vv -nn 

4. Deploying a New Project

Overall deployment and access flow

The project is deployed on the application servers (21, 22) and its SQL runs on the database server (36). From a laptop we reach the nginx load balancer (11), which picks an application server; on login, that server sends the query to the MySQL server, which then returns the response.

Pull the demo project onto the ops server (81):

git clone https://gitee.com/laoyang103/jxsy.git

First install the dependencies:

[root@jx-ops-81 jxsy]# cat doc/qdcloud.sh  | grep jar 
mvn install:install-file -Dfile=lib/tangyuan-0.9.0.jar -DgroupId=org.xson -DartifactId=tangyuan -Dversion=0.9.0 -Dpackaging=jar
mvn install:install-file -Dfile=lib/rpc-util-1.0.jar -DgroupId=cn.gatherlife -DartifactId=rpc-util -Dversion=1.0 -Dpackaging=jar
mvn install:install-file -Dfile=lib/patchca-0.5.0-SNAPSHOT.jar -DgroupId=net.pusuo -DartifactId=patchca -Dversion=0.5.0-SNAPSHOT -Dpackaging=jar
mvn install:install-file -Dfile=lib/common-object-0.0.1-SNAPSHOT.jar -DgroupId=org.xson -DartifactId=common-object -Dversion=0.0.1-SNAPSHOT -Dpackaging=jar

Package the project:

[root@jx-ops-81 jxsy]# cat doc/qdcloud.sh  | grep package
mvn package -Dmaven.test.skip=true
[root@jx-ops-81 jxsy]# mvn package -Dmaven.test.skip=true

The application performs database queries, so first locate the schema file and upload it to the MySQL server (36):

[root@jx-ops-81 jxsy]# cat doc/jxcms.sql | grep CREATE | wc -l 
29
# upload the tables to database server 36
[root@jx-ops-81 jxsy]# scp doc/jxcms.sql [email protected]:/root

Log in to the database and create the jxsy database:

[root@jx-mysql-master-36 ~]# mysql -uroot -p123456 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 9
Server version: 10.3.39-MariaDB-log MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create database jxsy;

Import the tables into the jxsy database:

[root@jx-mysql-master-36 ~]# mysql -uroot -p123456 jxsy < jxcms.sql

Update the load-balancing configuration on the nginx server (11):

vim /usr/local/nginx/conf/nginx.conf
[root@jx-nginx-11 conf]# cat  nginx.conf
#user  nobody;
worker_processes  2;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  65535;
}


http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    upstream jxsy {
        server 10.10.10.21:81;
        server 10.10.10.22:81;
    }
    server {
        listen       80;
        server_name  jxsy.jxit.net.cn;
        location / {
		proxy_pass http://jxsy;
        }
     }
    upstream jxbusi {
        server 10.10.10.21:80;
        server 10.10.10.22:80;
    }
    server {
        listen       80;
        server_name  jxcms.jxit.net.cn;
        location / {
                proxy_pass http://jxbusi;
        }
}
}

Reload nginx:

[root@jx-nginx-11 conf]# /usr/local/nginx/sbin/nginx -s reload 
[root@jx-nginx-11 conf]# ps -ef | grep nginx 
root         777       1  0 17:17 ?        00:00:00 nginx: master process /usr/local/nginx/sbin/nginx
nobody      1516     777  0 17:58 ?        00:00:00 nginx: worker process
nobody      1517     777  0 17:58 ?        00:00:00 nginx: worker process
root        1519    1437  0 17:58 pts/0    00:00:00 grep nginx

On the ops server, find the project's MySQL configuration file:

[root@jx-ops-81 jxsy]# grep mysql -rwn | grep -v target

Update the MySQL connection settings:

[root@jx-ops-81 jxsy]# vim src/main/resources/tangyuan-configuration.xml

        <property name="username" value="jxadmin"/>
        <property name="password" value="123456"/>
        <property name="url" value="jdbc:mysql://10.10.10.36:3306/jxsy?Unicode=true&amp;characterEncoding=utf8"/>

Rebuild to regenerate the war package:

[root@jx-ops-81 jxsy]# mvn package -Dmaven.test.skip=true

Copy the package to application server 21 and deploy it there. Make sure the /data/tomcat8/webapps directory is empty, then copy qdcloud.war into webapps:

[root@jx-ops-81 jxsy]# scp target/qdcloud.war  [email protected]:/tmp
[root@jx-busi-21 webapps]# /data/tomcat8/bin/startup.sh 
[root@jx-busi-21 webapps]# ls 
ROOT  ROOT.war

We need one more mapping chain.

127.0.0.1:80 → gateway server (254):80 → nginx (11):80 → application server 21 (:81) / application server 22 (:81)

Then map the domain jxsy.jxit.net.cn to 127.0.0.1 in the Windows hosts file.

After that the service can be reached at http://jxsy.jxit.net.cn.

The mapping rule on the gateway server:

iptables -t nat -A PREROUTING -d 10.0.3.15/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.10.10.11:80

With the hosts entry in place on Windows, pinging jxsy.jxit.net.cn from the laptop returns 127.0.0.1, so the mapping works.

127.0.0.1 jxsy.jxit.net.cn

Login test:

http://jxsy.jxit.net.cn/

jx00000003/123456

After logging in, the captcha is rejected. This is a consequence of nginx's load-balancing algorithm: the captcha is generated by one backend but verified by another. We need requests from the same client IP to always be forwarded to the same backend server, i.e. session persistence, so that every request from a given client lands on the same machine.

Edit nginx.conf and add the ip_hash directive:

vim /usr/local/nginx/conf/nginx.conf
[root@jx-nginx-11 conf]# cat  nginx.conf
#user  nobody;
worker_processes  2;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  65535;
}


http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    upstream jxsy {
        server 10.10.10.21:81;
        server 10.10.10.22:81;
        ip_hash;
    }
    server {
        listen       80;
        server_name  jxsy.jxit.net.cn;
        location / {
		proxy_pass http://jxsy;
        }
     }
    upstream jxbusi {
        server 10.10.10.21:80;
        server 10.10.10.22:80;
    }
    server {
        listen       80;
        server_name  jxcms.jxit.net.cn;
        location / {
                proxy_pass http://jxbusi;
        }
}
}

Reload the nginx service:
[root@jx-nginx-11 conf]# /usr/local/nginx/sbin/nginx -s reload

Testing again, login and access now work normally.

Summary: common nginx load-balancing algorithms:

Round robin (the default): the first request goes to the first backend, the second to the second, and so on; when the list is exhausted, distribution restarts from the top.

Weight (weight): assign each backend a weight; the higher the number, the larger its share of requests.

IP hash (ip_hash): requests from the same client IP always go to the same backend, giving session persistence.
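For completeness, a weight sketch (not used in this lab): with the values below, 21 would receive roughly three out of every four requests:

```
upstream jxsy {
    server 10.10.10.21:81 weight=3;   # ~3 of every 4 requests
    server 10.10.10.22:81 weight=1;
}
```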

5. Using Nginx as a WAF (Web Application Firewall)

Large-file sharing, unthrottled.

Define the shared directory /opt/jxdl on nginx (11):

mkdir -p /opt/jxdl
vim /usr/local/nginx/conf/nginx.conf
    server {
        listen       80;
        server_name  jxdl.jxit.net.cn;
        location / {
                autoindex on;
                root /opt/jxdl;
        }
    }

Reload the service:

[root@jx-nginx-11 ~]# /usr/local/nginx/sbin/nginx -s reload 

Visit http://jxdl.jxit.net.cn and download a file: the speed is high, since no limit is applied yet.

Large-file sharing, limited to 1 KB/s.

Add limit_rate 1k; to the location and reload the service:

vim /usr/local/nginx/conf/nginx.conf
    server {
        listen       80;
        server_name  jxdl.jxit.net.cn;
        location / {
                limit_rate 1k;
                autoindex on;
                root /opt/jxdl;
        }
    }

Downloads are now very slow, so the rate limit is working.

Sometimes a download starts fast and then slows down partway through. Nginx can do this too: for example, leave the first 50 MB unthrottled and cap the rest at 1 MB/s.

The configuration:

vim /usr/local/nginx/conf/nginx.conf
    server {
        listen       80;
        server_name  jxdl.jxit.net.cn;
        location / {
                limit_rate 1m;
                limit_rate_after 50m;
                autoindex on;
                root /opt/jxdl;
        }
    }

Reload the service.

Download the file again and watch the speed change.

We can also limit how frequently files under a path may be requested.

vim /usr/local/nginx/conf/nginx.conf

    # limit the request rate per client IP to prevent excessive requests
    limit_req_zone $binary_remote_addr zone=jxlimit:1m rate=1r/s;

Then apply the limit inside the server block that needs it:
    server {
        listen       80;
        server_name  jxcms.jxit.net.cn;
        location / {
                limit_req zone=jxlimit;
                proxy_pass http://jxbusi;
        }
    }

This limits each IP address to at most 1 request per second. Requests above that rate are handled according to the limit_req rules (here, rejected with a 503 error). The mechanism is commonly used to mitigate DoS (denial-of-service) attacks and abuse.
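A bare `limit_req zone=jxlimit;` rejects anything above 1 r/s immediately, which is harsh for pages that load several assets at once. nginx's `burst` and `nodelay` parameters let short spikes queue instead; a sketch:

```
location / {
        # allow up to 5 queued requests above the 1 r/s rate;
        # nodelay serves queued requests immediately instead of pacing them
        limit_req zone=jxlimit burst=5 nodelay;
        proxy_pass http://jxbusi;
}
```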

Refreshing the resource repeatedly in a browser soon returns 503, confirming each IP's request rate is now limited.

http://jxcms.jxit.net.cn/a.html

Nginx can also limit the number of concurrent connections.

    limit_conn_zone $binary_remote_addr zone=addr:1m;
    server {
        listen       80;
        server_name  jxdl.jxit.net.cn;
        location / {
                limit_conn addr 2;
                limit_rate 1m;
                limit_rate_after 50m;
                autoindex on;
                root /opt/jxdl;
        }
    }

Test: start several downloads at once; when more than 2 connections are open, further requests fail with 503, so the limit works.

The complete nginx configuration for this lab:

[root@jx-nginx-11 ~]# cat /usr/local/nginx/conf/nginx.conf
#user  nobody;
worker_processes  2;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  65535;
}


http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    limit_req_zone $binary_remote_addr zone=jxlimit:1m rate=1r/s;
    #gzip  on;
    upstream jxsy {
        server 10.10.10.21:81;
        server 10.10.10.22:81;
        ip_hash;
    }
    server {
        listen       80;
        server_name  jxsy.jxit.net.cn;
        location / {
		proxy_pass http://jxsy;
        }
     }
    upstream jxbusi {
        server 10.10.10.21:80;
        server 10.10.10.22:80;
    }
    server {
        listen       80;
        server_name  jxcms.jxit.net.cn;
        location / {
		limit_req zone=jxlimit;
                proxy_pass http://jxbusi;
        }
    }

    limit_conn_zone $binary_remote_addr zone=addr:1m;
    server {
        listen       80;
        server_name  jxdl.jxit.net.cn;
        location / {
		limit_conn addr 2;
	        limit_rate 1m;
                limit_rate_after 50m;
                autoindex on;
		root /opt/jxdl;
        }
    }
}

Windows hosts file:

127.0.0.1 jxsy.jxit.net.cn
127.0.0.1 jxcms.jxit.net.cn
127.0.0.1 jxdl.jxit.net.cn

Testing mainly means visiting these URLs:

http://jxcms.jxit.net.cn/a.html

http://jxdl.jxit.net.cn

Just place a large file in /opt/jxdl on the nginx server (11).

Summary:

1. limit_conn (connection limiting)

Purpose: the limit_conn directive caps how many simultaneous connections a single client (keyed here by IP address) may hold open.

Typical uses:

  • Prevent a single client (IP address) from consuming too many connection resources.

  • Reduce the impact of malicious crawlers or other bad actors.

2. limit_req (request rate limiting)

Purpose: the limit_req directive caps how many requests a client may issue per unit of time, preventing bursts of frequent requests from overloading the server.

Typical uses:

  • Stop a single client from making excessively frequent requests (malicious traffic, scrapers, etc.).

  • Protect specific APIs or pages from being over-requested.