Keepalived + Nginx + Tomcat: Building a Highly Available Web Cluster

(Cluster planning diagram)

① Nginx installation

1. Download the Nginx source package and install its build dependencies

(1) Install the GCC build environment

yum -y install gcc

(2) Install pcre

yum  -y install pcre-devel

(3) Install zlib

yum  -y install  zlib-devel

(4) Build and install Nginx

Change into the directory where the nginx source was extracted, then run the configure, build, and install commands:

[root@localhost nginx-1.12.2]# pwd
/usr/local/nginx/nginx-1.12.2
[root@localhost nginx-1.12.2]# ./configure  && make && make install

(5) Start Nginx

After installation, first locate the directory Nginx was installed to:

[root@localhost nginx-1.12.2]# whereis nginx
nginx: /usr/local/nginx
[root@localhost nginx-1.12.2]# 

Enter the sbin subdirectory under the Nginx installation and start Nginx:

[root@localhost sbin]# ls
nginx
[root@localhost sbin]# ./nginx &
[1] 5768
[root@localhost sbin]# 

Check whether Nginx is running.

(Screenshot: Nginx started successfully)

Alternatively, check Nginx's status through its processes:

[root@localhost sbin]# ps -aux|grep nginx
root       5769  0.0  0.0  20484   608 ?        Ss   14:03   0:00 nginx: master process ./nginx
nobody     5770  0.0  0.0  23012  1620 ?        S    14:03   0:00 nginx: worker process
root       5796  0.0  0.0 112668   972 pts/0    R+   14:07   0:00 grep --color=auto nginx
[1]+  Done                  ./nginx
[root@localhost sbin]# 

At this point, Nginx is installed and running.

(6) Nginx service script and start-on-boot configuration

Create an Nginx service script (note: adjust the Nginx paths below to match your own installation):

[root@localhost init.d]# vim /etc/rc.d/init.d/nginx

#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig: - 85 15
# description: Nginx is an HTTP(S) server, HTTP(S) reverse \
# proxy and IMAP/POP3 proxy server
# processname: nginx
# config: /etc/nginx/nginx.conf
# config: /usr/local/nginx/conf/nginx.conf
# pidfile: /usr/local/nginx/logs/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0
nginx="/usr/local/nginx/sbin/nginx"
prog=$(basename $nginx)
NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"
[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx
lockfile=/var/lock/subsys/nginx

make_dirs() {
    # make required directories
    user=`$nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
    if [ -z "`grep $user /etc/passwd`" ]; then
    useradd -M -s /bin/nologin $user
    fi
    options=`$nginx -V 2>&1 | grep 'configure arguments:'`
    for opt in $options; do
    if [ `echo $opt | grep '.*-temp-path'` ]; then
    value=`echo $opt | cut -d "=" -f 2`
    if [ ! -d "$value" ]; then
    # echo "creating" $value
    mkdir -p $value && chown -R $user $value
    fi
    fi
    done
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    #configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    #configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
start)
rh_status_q && exit 0
$1
;;
stop)

rh_status_q || exit 0
$1
;;
restart|configtest)
$1
;;
reload)
rh_status_q || exit 7
$1
;;
force-reload)
force_reload
;;
status)
rh_status
;;
condrestart|try-restart)
rh_status_q || exit 0
;;
*)
echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
exit 2
esac
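
The make_dirs helper above scrapes the build-time --user and *-temp-path options out of `nginx -V`. A minimal sketch of that extraction, run against a sample configure line rather than a live binary (the sample values below are hypothetical, for illustration only):

```shell
#!/bin/sh
# Sample `nginx -V` configure line (hypothetical values)
line='configure arguments: --user=nginx --http-client-body-temp-path=/var/tmp/nginx/client'
# The same sed expression make_dirs uses to pull out the run-as user
user=$(printf '%s\n' "$line" | sed 's/[^*]*--user=\([^ ]*\).*/\1/')
echo "user=$user"
# Pull the value out of each *-temp-path option, as the for-loop does
for opt in $line; do
    case $opt in
        *-temp-path=*) echo "dir=${opt#*=}" ;;
    esac
done
```

On a real machine, `line` would come from `$nginx -V 2>&1 | grep 'configure arguments:'` as in the script above.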

Grant the service script execute permission and register it to start on boot:

[root@localhost init.d]# chmod -R 777 /etc/rc.d/init.d/nginx 
[root@localhost init.d]# chkconfig --add nginx
[root@localhost init.d]# chkconfig nginx on

Start Nginx:

[root@localhost init.d]# ./nginx start

Add Nginx to the system PATH:

[root@localhost init.d]# echo 'export PATH=$PATH:/usr/local/nginx/sbin'>>/etc/profile && source /etc/profile

Nginx can now be managed with the service command [ service nginx (start|stop|restart) ]:

[root@localhost init.d]# service nginx start
Starting nginx (via systemctl):                            [  OK  ]

Tip: quick commands

service nginx (start|stop|restart)
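
Since CentOS 7 forwards the service command to systemctl anyway (as the output above shows), a native systemd unit is an alternative to the SysV script. A minimal sketch, assuming the /usr/local/nginx paths used in this article:

```ini
# /usr/lib/systemd/system/nginx.service (sketch)
[Unit]
Description=nginx - high performance web server
After=network.target

[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
ExecStartPre=/usr/local/nginx/sbin/nginx -t -c /usr/local/nginx/conf/nginx.conf
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s quit

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable nginx` and `systemctl start nginx`; do not mix this with the SysV script, pick one mechanism.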

② Installing and configuring Keepalived

1. Install Keepalived's dependencies

yum install -y popt-devel     
yum install  -y ipvsadm
yum install -y libnl*
yum install -y libnf*
yum install -y openssl-devel

2. Build and install Keepalived

[root@localhost keepalived-1.3.9]# ./configure
[root@localhost keepalived-1.3.9]#  make && make install

3. Install Keepalived as a system service

Manually copy the default configuration files to their standard paths:

[root@localhost etc]# mkdir /etc/keepalived
[root@localhost etc]# cp /usr/local/keepalived/etc/sysconfig/keepalived  /etc/sysconfig/
[root@localhost etc]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/

Create a symlink for the keepalived binary:

[root@localhost sysconfig]# ln -s /usr/local/keepalived/sbin/keepalived  /usr/sbin/

Enable Keepalived to start on boot:

[root@localhost sysconfig]# chkconfig keepalived  on
Note: Forwarding request to 'systemctl enable keepalived.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service

Start the Keepalived service:

[root@localhost keepalived]# keepalived -D  -f /etc/keepalived/keepalived.conf

Stop the Keepalived service:

[root@localhost keepalived]# killall keepalived

③ Cluster planning and setup

(Cluster planning diagram)

Environment:

CentOS 7.2

Keepalived   Version 1.4.0 – December 29, 2017

Nginx        Version: nginx/1.12.2

Tomcat       Version: 8


Cluster plan

VM                         IP              Description
Keepalived+Nginx1[Master]  192.168.43.101  Nginx Server 01
Keepalived+Nginx2[Backup]  192.168.43.102  Nginx Server 02
Tomcat01                   192.168.43.103  Tomcat Web Server 01
Tomcat02                   192.168.43.104  Tomcat Web Server 02
VIP                        192.168.43.150  Floating virtual IP

1. Modify Tomcat's default welcome page to identify which web server responds

Modify ROOT/index.jsp on the TomcatServer01 node: add the Tomcat IP address and the Nginx header value, i.e. change node 192.168.43.103 as follows:

<div id="asf-box">
    <h1>${pageContext.servletContext.serverInfo}(192.168.43.103)<%=request.getHeader("X-NGINX")%></h1>
</div>

Modify ROOT/index.jsp on the TomcatServer02 node the same way: add the Tomcat IP address and the Nginx header value, i.e. change node 192.168.43.104 as follows:

<div id="asf-box">
    <h1>${pageContext.servletContext.serverInfo}(192.168.43.104)<%=request.getHeader("X-NGINX")%></h1>
</div>

2. Start the Tomcat services and check each Tomcat's IP information. Nginx is not running yet, so the request header carries no Nginx value.

(Screenshot: Tomcat startup information)

3. Configure the Nginx proxy

1. Configure the proxy on the Master node [192.168.43.101]:

upstream tomcat {
    server 192.168.43.103:8080 weight=1;
    server 192.168.43.104:8080 weight=1;
}
server {
    location / {
        proxy_pass http://tomcat;
        proxy_set_header X-NGINX "NGINX-1";
    }
    # ... other directives omitted
}

2. Configure the proxy on the Backup node [192.168.43.102]:

upstream tomcat {
    server 192.168.43.103:8080 weight=1;
    server 192.168.43.104:8080 weight=1;
}
server {
    location / {
        proxy_pass http://tomcat;
        proxy_set_header X-NGINX "NGINX-2";
    }
    # ... other directives omitted
}

3. Start the Nginx service on the Master node

[root@localhost init.d]# service nginx start
Starting nginx (via systemctl):                            [  OK  ]

Now visit 192.168.43.101: the pages from nodes 103 and 104 are served alternately, showing that Nginx is load-balancing requests across the two Tomcats.

(Screenshot: Nginx load balancing in effect)

4. Configure the Backup node [192.168.43.102] Nginx the same way. After starting Nginx, visiting 192.168.43.102 shows that the Backup node load-balances as well.

(Screenshot: Backup node load balancing)

4. Configure the Keepalived scripts

1. On both the Master and Backup nodes, add a check_nginx.sh file under /etc/keepalived to detect whether Nginx is alive, together with a keepalived.conf file.

check_nginx.sh contains:

#!/bin/bash
# Timestamp variable, used for logging
d=`date --date today +%Y%m%d_%H:%M:%S`
# Count the nginx processes
n=`ps -C nginx --no-heading|wc -l`
# If the count is 0, start nginx and count again;
# if it is still 0, nginx cannot start, so stop keepalived
if [ $n -eq "0" ]; then
        /etc/rc.d/init.d/nginx start
        n2=`ps -C nginx --no-heading|wc -l`
        if [ $n2 -eq "0"  ]; then
                echo "$d nginx down,keepalived will stop" >> /var/log/check_ng.log
                systemctl stop keepalived
        fi
fi
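
The core of check_nginx.sh is the process count from `ps -C nginx --no-heading | wc -l`. The decision logic can be exercised off-cluster by feeding it captured ps output instead of a live process list (the sample text below is illustrative):

```shell
#!/bin/sh
# Captured sample of `ps -C nginx --no-heading` output (illustrative)
sample=' 5769 ?        Ss     0:00 nginx: master process ./nginx
 5770 ?        S      0:00 nginx: worker process'
# Count non-empty lines, as `wc -l` does on the real ps output
n=$(printf '%s\n' "$sample" | grep -c .)
if [ "$n" -eq 0 ]; then
    echo "nginx down, keepalived would be stopped"
else
    echo "nginx running: $n processes"
fi
```

With an empty sample the first branch fires, which is exactly the case where the real script falls back to starting nginx and, failing that, stopping keepalived.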

After adding it, grant check_nginx.sh execute permission so the script can run:

[root@localhost keepalived]# chmod -R 777 /etc/keepalived/check_nginx.sh 

2. On the Master node, add the keepalived.conf file under /etc/keepalived with the following contents:

vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"   # script that checks the nginx process
    interval 2
    weight -20
}

global_defs {
    notification_email {
        # email notifications can be added here
    }
}
vrrp_instance VI_1 {
    state MASTER                  # MASTER here; the backup machine uses BACKUP
    interface ens33               # NIC the instance binds to (check with `ip addr`; use your own NIC name)
    virtual_router_id 51          # must be identical on every node of the same instance
    mcast_src_ip 192.168.43.101
    priority 250                  # MASTER's priority must be higher than BACKUP's, e.g. BACKUP uses 240
    advert_int 1                  # interval, in seconds, between MASTER/BACKUP sync checks
    nopreempt                     # non-preemptive mode
    authentication {              # authentication settings
        auth_type PASS            # master/backup authentication method
        auth_pass 123456
    }
    track_script {
        check_nginx
    }
    virtual_ipaddress {           # the VIP
        192.168.43.150            # multiple VIPs may be listed, one per line
    }
}

3. On the Backup node, add the keepalived.conf configuration file under /etc/keepalived with the following contents:

vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"   # script that checks the nginx process
    interval 2
    weight -20
}

global_defs {
    notification_email {
        # email notifications can be added here
    }
}
vrrp_instance VI_1 {
    state BACKUP                  # BACKUP here; the master machine uses MASTER
    interface ens33               # NIC the instance binds to (check with `ip addr`)
    virtual_router_id 51          # must be identical on every node of the same instance
    mcast_src_ip 192.168.43.102
    priority 240                  # lower than the MASTER's priority of 250
    advert_int 1                  # interval, in seconds, between MASTER/BACKUP sync checks
    nopreempt                     # non-preemptive mode
    authentication {              # authentication settings
        auth_type PASS            # master/backup authentication method
        auth_pass 123456
    }
    track_script {
        check_nginx
    }
    virtual_ipaddress {           # the VIP
        192.168.43.150            # multiple VIPs may be listed, one per line
    }
}

Tips: a few notes on the configuration

  • state – the master server must be set to MASTER, the backup server to BACKUP
  • interface – the NIC name; this setup runs on VMware 12.0, so the NIC is ens33
  • mcast_src_ip – each node's own real IP address
  • priority – the master's priority must be higher than the backup's; here the master uses 250 and the backup 240
  • virtual_ipaddress – the virtual IP (192.168.43.150)
  • authentication – auth_pass must be identical on master and backup; keepalived relies on it to communicate
  • virtual_router_id – must be identical on master and backup

5. Verifying cluster high availability (HA)

  • Step 1: Start the Keepalived and Nginx services on the Master machine

[root@localhost keepalived]# keepalived  -D -f /etc/keepalived/keepalived.conf
[root@localhost keepalived]# service nginx start

Check the Nginx processes:

[root@localhost keepalived]# ps -aux|grep nginx
root       6390  0.0  0.0  20484   612 ?        Ss   19:13   0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nobody     6392  0.0  0.0  23008  1628 ?        S    19:13   0:00 nginx: worker process
root       6978  0.0  0.0 112672   968 pts/0    S+   20:08   0:00 grep --color=auto nginx

Check the Keepalived processes:

[root@localhost keepalived]# ps -aux|grep keepalived
root       6402  0.0  0.0  45920  1016 ?        Ss   19:13   0:00 keepalived -D -f /etc/keepalived/keepalived.conf
root       6403  0.0  0.0  48044  1468 ?        S    19:13   0:00 keepalived -D -f /etc/keepalived/keepalived.conf
root       6404  0.0  0.0  50128  1780 ?        S    19:13   0:00 keepalived -D -f /etc/keepalived/keepalived.conf
root       7004  0.0  0.0 112672   976 pts/0    S+   20:10   0:00 grep --color=auto keepalived

Use `ip addr` to check the VIP binding; if 192.168.43.150 appears in the output, the VIP is bound to the Master node.

[root@localhost keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:91:bf:59 brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.101/24 brd 192.168.43.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.43.150/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::9abb:4544:f6db:8255/64 scope link 
       valid_lft forever preferred_lft forever
    inet6 fe80::b0b3:d0ca:7382:2779/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::314f:5fe7:4e4b:64ed/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff
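
The "which node holds the VIP" check above can be scripted by grepping `ip addr` output for the VIP. The sketch below runs against a captured sample so the logic works anywhere; on a real node, replace $sample with "$(ip addr)":

```shell
#!/bin/sh
# Decide from `ip addr` output whether this node currently holds the VIP.
VIP="192.168.43.150"
# Captured sample of `ip addr` output (abridged, from the Master above)
sample='2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 192.168.43.101/24 brd 192.168.43.255 scope global ens33
    inet 192.168.43.150/32 scope global ens33'
if printf '%s\n' "$sample" | grep -q "inet ${VIP}/"; then
    echo "VIP ${VIP} is bound to this node"
else
    echo "VIP ${VIP} is not on this node"
fi
```

Run on both nodes, exactly one of them should report the VIP; both reporting it is the split-brain symptom described in Step 2.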
  • Step 2: Start the Nginx and Keepalived services on the Backup node and check their status. If the VIP also appears on the Backup node, the Keepalived configuration is faulty; that situation is called split-brain.

[root@localhost keepalived]# clear
[root@localhost keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:14:df:79 brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.102/24 brd 192.168.43.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::314f:5fe7:4e4b:64ed/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff
  • Step 3: Verify the service

    Browse to http://192.168.43.150 and force-refresh repeatedly: the 103 and 104 pages alternate and show NGINX-1, proving that the Master node is forwarding the web traffic.

  • Step 4: Stop the Master's Keepalived and Nginx services, then access the web service and watch the failover

[root@localhost keepalived]# killall keepalived
[root@localhost keepalived]# service nginx stop

Force-refreshing 192.168.43.150 now shows 103 and 104 alternating with NGINX-2: the VIP has moved to 192.168.43.102, proving that the service failed over automatically to the backup node.

  • Step 5: Start the Master's Keepalived and Nginx services again

    Verifying once more, the VIP has been reclaimed by the Master and the pages again alternate between 103 and 104, now showing NGINX-1.

④ Keepalived preemptive and non-preemptive modes

Keepalived HA has a preemptive mode and a non-preemptive mode. In preemptive mode, when the MASTER recovers from a failure it takes the VIP back from the BACKUP node. In non-preemptive mode, the recovered MASTER does not reclaim the VIP from the BACKUP that was promoted to MASTER.

Non-preemptive mode configuration:

  • 1> Add the nopreempt directive to the vrrp_instance block on both nodes, so that neither seizes the VIP
  • 2> Set state to BACKUP on both nodes
    After both keepalived nodes start, both default to the BACKUP state; after exchanging multicast messages they elect a MASTER by priority. Because both are configured with nopreempt, a MASTER recovering from a failure will not take the VIP back, which avoids the service interruption a VIP switchover could cause.
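
Concretely, a non-preemptive pair differs from the configuration shown in section ③ only in these lines of each node's vrrp_instance block (sketch):

```
vrrp_instance VI_1 {
    state BACKUP       # both nodes start as BACKUP; a MASTER is elected by priority
    nopreempt          # a recovered node does not take the VIP back
    priority 250       # priorities still differ, e.g. 250 on one node, 240 on the other
    # ... remaining settings unchanged
}
```

Note that keepalived honors nopreempt only when state is BACKUP, which is why both changes go together.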
    几个keepalived节点都运营后,暗许都是BACKUP状态,双方在殡葬组播信息后,会依据优先级来推举一个MASTE奔驰M级出来。由于两岸都配备了nopreempt,所以MASTELacrosse从故障中平复后,不会抢占vip。那样会防止VIP切换只怕引致的服务延迟。