OpenStack High-Availability Cluster Deployment Design (Train Release)
I. Hardware Configuration Reference
12 servers in total: IP + hostname + CPU count + cores + disk capacity
Nodes are assigned to roles by hardware capacity, not by IP order
10.15.253.225 cs8srv01-c2m8h300.esxi01.rd.zxbj01
10.15.253.226 cs8srv02-c2m16h600.esxi01.rd.zxbj01
10.15.253.227 cs8srv03-c2m32h1200.esxi01.rd.zxbj01
10.15.253.193 cs8srv01-c2m8h300.esxi02.rd.zxbj01
10.15.253.194 cs8srv02-c2m16h600.esxi02.rd.zxbj01
10.15.253.195 cs8srv03-c2m32h1200.esxi02.rd.zxbj01
10.15.253.161 cs8srv01-c2m8h300.esxi03.rd.zxbj01
10.15.253.162 cs8srv02-c2m16h600.esxi03.rd.zxbj01
10.15.253.163 cs8srv03-c2m32h1200.esxi03.rd.zxbj01
10.15.253.129 cs8srv01-c2m8h300.esxi04.rd.zxbj01 × unavailable
10.15.253.130 cs8srv02-c2m16h600.esxi04.rd.zxbj01 × unavailable
10.15.253.131 cs8srv03-c2m32h1200.esxi04.rd.zxbj01 × unavailable
#root password; delete this after deployment is complete
Zx******
System environment
#Kernel version on all VMs
[root@cs8srv01 ~]# uname -r
4.18.0-193.14.2.el8_2.x86_64
#OS release on all VMs
[root@cs8srv01 ~]# cat /etc/redhat-release
CentOS Linux release 8.2.2004 (Core)
II. Overall Node Plan
The OpenStack high-availability test environment needs 9 virtual machines: controller, compute, network, storage, and the ceph shared-storage cluster together total 9 nodes. When more resources become available, the network and storage roles can be split out onto dedicated nodes.
Because the controller nodes run the most services, the VMs with the largest memory are chosen for them. In production, mounting the large disks on ceph storage is recommended.
host | IP | Service | Notes |
---|---|---|---|
controller01 | ens192: 10.15.253.163 management & external network; ens224: 172.31.253.163 VLAN network | 1. keystone 2. glance-api, glance-registry 3. nova-api, nova-conductor, nova-consoleauth, nova-scheduler, nova-novncproxy 4. neutron-api, neutron-linuxbridge-agent, neutron-dhcp-agent, neutron-metadata-agent, neutron-l3-agent 5. cinder-api, cinder-scheduler 6. dashboard 7. mariadb, rabbitmq, memcached, haproxy, etc. | 1. Controller node: keystone, glance, horizon, nova & neutron management components; 2. Network node: VM networking, L2/L3, DHCP, routing, NAT, etc.; 2 vCPU, 32G RAM, 1.2T disk 3. Storage node: scheduling and monitoring (ceph) components; 2 vCPU, 32G RAM, 1.2T disk 4. OpenStack base services |
controller02 | ens192: 10.15.253.195 management & external network; ens224: 172.31.253.195 VLAN network | 1. keystone 2. glance-api, glance-registry 3. nova-api, nova-conductor, nova-consoleauth, nova-scheduler, nova-novncproxy 4. neutron-api, neutron-linuxbridge-agent, neutron-dhcp-agent, neutron-metadata-agent, neutron-l3-agent 5. cinder-api, cinder-scheduler 6. dashboard 7. mariadb, rabbitmq, memcached, haproxy, etc. | 1. Controller node: keystone, glance, horizon, nova & neutron management components; 2. Network node: VM networking, L2/L3, DHCP, routing, NAT, etc.; 2 vCPU, 32G RAM, 1.2T disk 3. Storage node: scheduling and monitoring (ceph) components; 2 vCPU, 32G RAM, 1.2T disk 4. OpenStack base services |
controller03 | ens192: 10.15.253.227 management & external network; ens224: 172.31.253.227 VLAN network | 1. keystone 2. glance-api, glance-registry 3. nova-api, nova-conductor, nova-consoleauth, nova-scheduler, nova-novncproxy 4. neutron-api, neutron-linuxbridge-agent, neutron-dhcp-agent, neutron-metadata-agent, neutron-l3-agent 5. cinder-api, cinder-scheduler 6. dashboard 7. mariadb, rabbitmq, memcached, haproxy, etc. | 1. Controller node: keystone, glance, horizon, nova & neutron management components; 2. Network node: VM networking, L2/L3, DHCP, routing, NAT, etc.; 2 vCPU, 32G RAM, 1.2T disk 3. Storage node: scheduling and monitoring (ceph) components; 2 vCPU, 32G RAM, 1.2T disk 4. OpenStack base services |
compute01 | ens192: 10.15.253.162 management & external network; ens224: 172.31.253.162 VLAN network | 1. nova-compute 2. neutron-linuxbridge-agent 3. cinder-volume (if the backend uses shared storage, deploying it on the controller nodes is recommended) | 1. Compute node: hypervisor (KVM); 2. Network node: VM networking, etc.; 3. Storage node: volume service and related components |
compute02 | ens192: 10.15.253.194 management & external network; ens224: 172.31.253.194 VLAN network | 1. nova-compute 2. neutron-linuxbridge-agent 3. cinder-volume (if the backend uses shared storage, deploying it on the controller nodes is recommended) | 1. Compute node: hypervisor (KVM); 2. Network node: VM networking, etc.; 3. Storage node: volume service and related components |
compute03 | ens192: 10.15.253.226 management & external network; ens224: 172.31.253.226 VLAN network | 1. nova-compute 2. neutron-linuxbridge-agent 3. cinder-volume (if the backend uses shared storage, deploying it on the controller nodes is recommended) | 1. Compute node: hypervisor (KVM); 2. Network node: VM networking, etc.; 3. Storage node: volume service and related components |
cephnode01 | ens192: 10.15.253.161 ens224: 172.31.253.161 | ceph-mon, ceph-mgr | Volume service, block storage, and related components |
cephnode02 | ens192: 10.15.253.193 ens224: 172.31.253.193 | ceph-mon, ceph-mgr, ceph-osd | Volume service, block storage, and related components |
cephnode03 | ens192: 10.15.253.225 ens224: 172.31.253.225 | ceph-mon, ceph-mgr, ceph-osd | Volume service, block storage, and related components |
The advantage of load balancing with HAProxy is that the instances sit on different physical servers: when one physical server goes down, stateless applications on the others keep serving, achieving HA.
Controller / network / storage, 3 nodes
10.15.253.163 c2m32h1200 controller01
10.15.253.195 c2m32h1200 controller02
10.15.253.227 c2m32h1200 controller03
Compute / network / storage, 3 nodes
10.15.253.162 c2m16h600 compute01
10.15.253.194 c2m16h600 compute02
10.15.253.226 c2m16h600 compute03
Ceph shared storage, 3 nodes
10.15.253.161 c2m8h300 cephnode01
10.15.253.193 c2m8h300 cephnode02
10.15.253.225 c2m8h300 cephnode03
Load balancer, 1 node, placed on one of the existing hosts
10.15.253.225 c2m8h300 cephnode03
High-availability virtual IP
10.15.253.88 is used as the VIP
III. Cluster High-Availability Notes
Official documentation:
https://docs.openstack.org/ha-guide/
https://docs.openstack.org/arch-design/design-control-plane.html
https://docs.openstack.org/arch-design/design-control-plane.html#table-deployment-scenarios
https://docs.openstack.org/ha-guide/intro-os-ha-cluster.html
https://docs.openstack.org/ha-guide/storage-ha.html
OpenStack Architecture Design Guide: https://docs.openstack.org/arch-design/
OpenStack API documentation: https://docs.openstack.org/api-quick-start/
Stateless services
A stateless service answers a request and needs no further attention afterwards. To make stateless services highly available, deploy redundant instances and load-balance them. They include nova-api, nova-conductor, glance-api, keystone-api, neutron-api, and nova-scheduler.
Stateful services
A stateful service is one where later requests depend on the result of an earlier one. Stateful services are harder to manage because a single action typically spans multiple requests. Whether a stateful service can be made highly available may depend on choosing an active/passive or an active/active configuration. They include the OpenStack database and message queue.
High-availability scheme for the OpenStack cluster
The three controller nodes each run the OpenStack services, sharing the database and message queue, with haproxy load-balancing requests to the backends.
Front-end proxy
- The front-end proxy can be HAProxy + KeepAlived or HAProxy + Pacemaker; the controller-node services expose a VIP for API access. Deploying HAProxy separately is recommended.
- The OpenStack documentation uses the open-source pacemaker cluster stack as the cluster high-availability resource manager.
Database cluster
https://docs.openstack.org/ha-guide/control-plane-stateful.html
https://blog.csdn.net/educast/article/details/78678152
MariaDB + Galera form three active nodes; external access is proxied by haproxy in active + backup mode. Normally node A is the primary; when A fails, traffic switches to node B or C. In this test the three MariaDB nodes are deployed on the controller nodes.
Official recommendation: for a three-node MariaDB/Galera cluster, give each node 4 vCPUs and 8 GB of RAM.
RabbitMQ cluster
RabbitMQ uses its native cluster mode, with mirrored queues synchronized across all nodes. Of the three hosts, 2 RAM nodes handle most of the traffic and 1 disc node persists messages; clients configure master/slave policies as needed.
In this test the three RabbitMQ nodes are deployed on the controller nodes.
IV. Base Environment
1. SSH key distribution and the hosts file
ssh
#Set up passwordless SSH from controller01, which acts as the management host
yum install sshpass -y
mkdir -p /extend/shell
#Create and run the key-distribution script
cat >>/extend/shell/fenfa_pub.sh<< EOF
#!/bin/bash
ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
for ip in 161 162 163 193 194 195 225 226 227
do
sshpass -pZx****** ssh-copy-id -o StrictHostKeyChecking=no 10.15.253.\$ip
done
EOF
sh /extend/shell/fenfa_pub.sh
#Test
[root@controller01 ~]# ssh controller03 hostname
controller03
[root@controller01 ~]# ssh compute03 hostname
compute03
[root@controller01 ~]# ssh cephnode03 hostname
cephnode03
hosts
#Keep an identical hosts file on all nodes
cat >>/etc/hosts <<EOF
10.15.253.163 controller01
10.15.253.195 controller02
10.15.253.227 controller03
10.15.253.162 compute01
10.15.253.194 compute02
10.15.253.226 compute03
10.15.253.161 cephnode01
10.15.253.193 cephnode02
10.15.253.225 cephnode03
EOF
#Push to all nodes
for ip in 161 162 163 193 194 195 225 226 227 ;do scp -rp /etc/hosts root@10.15.253.$ip:/etc/hosts ;done
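The hosts block and the push loop above can be combined into one script; a minimal sketch (the demo writes to /tmp/hosts rather than /etc/hosts; the node list comes from the plan in section II):

```shell
#!/bin/sh
# Sketch: write the hosts block from the node plan and sanity-check it
# (every line must be "<ipv4> <hostname>") before pushing it out.
HOSTS_FILE=/tmp/hosts        # use /etc/hosts on the real nodes
cat > "$HOSTS_FILE" <<'EOF'
10.15.253.163 controller01
10.15.253.195 controller02
10.15.253.227 controller03
10.15.253.162 compute01
10.15.253.194 compute02
10.15.253.226 compute03
10.15.253.161 cephnode01
10.15.253.193 cephnode02
10.15.253.225 cephnode03
EOF
awk 'NF != 2 || $1 !~ /^([0-9]+\.){3}[0-9]+$/ {bad=1} END {exit bad}' "$HOSTS_FILE" \
    && echo "hosts file OK: $(wc -l < "$HOSTS_FILE") entries"
# Push (same loop as above):
# for ip in 161 162 163 193 194 195 225 226 227; do scp -rp "$HOSTS_FILE" root@10.15.253.$ip:/etc/hosts; done
```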
2. Time synchronization
#chrony time sync: controller01 acts as the server node
yum install chrony -y
vim /etc/chrony.conf
server ntp1.aliyun.com iburst
allow 10.15.253.163/12
systemctl restart chronyd.service
systemctl enable chronyd.service
chronyc sources
#All other nodes sync time from controller01
yum install chrony -y
vim /etc/chrony.conf
server controller01 iburst
systemctl restart chronyd.service
systemctl enable chronyd.service
chronyc sources
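The client-side chrony edit can be scripted instead of done in vim; a minimal sketch against a sample file (the stock pool line shown is an assumption about the default CentOS 8 chrony.conf):

```shell
#!/bin/sh
# Sketch: point a client node's chrony.conf at controller01 by replacing
# the stock pool line (demo on a sample copy; use /etc/chrony.conf on nodes).
CONF=/tmp/chrony.conf
cat > "$CONF" <<'EOF'
pool 2.centos.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
EOF
sed -i 's/^pool .*/server controller01 iburst/' "$CONF"
grep '^server' "$CONF"   # -> server controller01 iburst
```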
3. Kernel parameters, SELinux, firewall
Note: in production, open the required ports with iptables rules instead of disabling the firewall
#Kernel parameter tuning
echo 'net.ipv4.ip_forward = 1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-iptables=1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables=1' >>/etc/sysctl.conf
#On the controller nodes, allow non-local IP binding so a running HAProxy instance can bind to the VIP
echo 'net.ipv4.ip_nonlocal_bind = 1' >>/etc/sysctl.conf
sysctl -p
#Disable SELinux
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
#Disable firewalld
systemctl disable firewalld.service
systemctl stop firewalld.service
4. Install the Train release packages
Install the Train yum repository; on CentOS 8 the PowerTools and HighAvailability repositories must also be enabled
yum install centos-release-openstack-train -y
#Enable the HighAvailability repo
yum install yum-utils -y
yum config-manager --set-enabled HighAvailability
yum config-manager --set-enabled PowerTools
#Install the EPEL repo
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
yum clean all
yum makecache
Install the client; on CentOS 8 it has been renamed python3-openstackclient
yum install python3-openstackclient -y
openstack-utils makes OpenStack installation easier by letting you edit configuration files directly from the command line (all nodes)
#Create a directory for downloaded packages
mkdir -p /opt/tools
#Install the required dependency
yum install crudini -y
wget -P /opt/tools https://cbs.centos.org/kojifiles/packages/openstack-utils/2017.1/1.el7/noarch/openstack-utils-2017.1-1.el7.noarch.rpm
rpm -ivh /opt/tools/openstack-utils-2017.1-1.el7.noarch.rpm
Optional: openstack-selinux is required when SELinux is enabled; SELinux is disabled in this environment
yum install openstack-selinux -y
#If errors appear after reconnecting, installing this package resolves them
yum install libibverbs -y
5. Passwords for each service component
Password | Description |
---|---|
Zx***** | admin user password |
Zx***** | Block Storage service database password |
Zx***** | Block Storage service cinder user password |
Zx***** | Dashboard database password |
Zx***** | Image service database password |
Zx***** | Image service glance user password |
Zx***** | Identity service database password |
Zx***** | Metadata proxy password |
Zx***** | Networking service database password |
Zx***** | Networking service neutron user password |
Zx***** | Compute service database password |
Zx***** | Compute service nova user password |
Zx***** | Placement service placement user password |
Zx***** | RabbitMQ openstack user password |
Zx***** | Pacemaker hacluster user password |
V. MariaDB Cluster (Controller Nodes)
1. Installation and configuration
1.1 Install mariadb on all controller nodes; controller01 as the example
yum install mariadb mariadb-server python3-PyMySQL -y
1.2 Install the Galera plugins used to build the cluster
yum install mariadb-server-galera mariadb-galera-common galera xinetd rsync -y
systemctl restart mariadb.service
systemctl enable mariadb.service
1.3 Initialize mariadb: set the database root password on all controller nodes; controller01 as the example
[root@controller01 ~]# mysql_secure_installation
#Enter the current root password (none is set, just press Enter)
Enter current password for root (enter for none):
#Set a root password?
Set root password? [Y/n] y
#New password:
New password:
#Re-enter the new password:
Re-enter new password:
#Remove anonymous users?
Remove anonymous users? [Y/n] y
#Disallow remote root login?
Disallow root login remotely? [Y/n] n
#Remove the test database and access to it?
Remove test database and access to it? [Y/n] y
#Reload the privilege tables now?
Reload privilege tables now? [Y/n] y
1.4 Edit the mariadb configuration file
On all controller nodes, add an openstack.cnf file under /etc/my.cnf.d/ that mainly sets the cluster replication parameters; controller01 as the example. Adjust the node-specific parameters (IP address, hostname) to each node.
Create and edit the file /etc/my.cnf.d/openstack.cnf
#bind-address: the host's IP
#wsrep_node_name: the hostname
#wsrep_node_address: the host's IP
[root@controller01 ~]# cat /etc/my.cnf.d/openstack.cnf
[server]
[mysqld]
bind-address = 10.15.253.163
max_connections = 1000
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mariadb/mariadb.log
pid-file=/run/mariadb/mariadb.pid
[galera]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="mariadb_galera_cluster"
wsrep_cluster_address="gcomm://controller01,controller02,controller03"
wsrep_node_name="db01"
wsrep_node_address="10.15.253.163"
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_slave_threads=4
innodb_flush_log_at_trx_commit=2
innodb_buffer_pool_size=1024M
wsrep_sst_method=rsync
[embedded]
[mariadb]
[mariadb-10.3]
wsrep_sync_wait: defaults to 0; set it to 1 if read-after-write consistency is required, at the cost of extra latency
1.5 Copy controller01's configuration file to the other two nodes
Change the node-specific values on each: wsrep_node_name, wsrep_node_address, and bind-address
scp -rp /etc/my.cnf.d/openstack.cnf controller02:/etc/my.cnf.d/openstack.cnf
scp -rp /etc/my.cnf.d/openstack.cnf controller03:/etc/my.cnf.d/openstack.cnf
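The per-node edits in step 1.5 can be scripted; a minimal sketch (the db02 node name follows the db01 naming used in openstack.cnf above, and the demo runs against a throwaway template rather than the real file):

```shell
#!/bin/sh
# Sketch: render a per-node openstack.cnf by rewriting the three
# node-specific keys in controller01's copy.
render_node_cnf() {
    template=$1 ip=$2 name=$3 out=$4
    sed -e "s/^bind-address = .*/bind-address = $ip/" \
        -e "s/^wsrep_node_name=.*/wsrep_node_name=\"$name\"/" \
        -e "s/^wsrep_node_address=.*/wsrep_node_address=\"$ip\"/" \
        "$template" > "$out"
}

# Demo on a minimal template (in practice use /etc/my.cnf.d/openstack.cnf):
printf '%s\n' 'bind-address = 10.15.253.163' \
              'wsrep_node_name="db01"' \
              'wsrep_node_address="10.15.253.163"' > /tmp/openstack.cnf.tpl
render_node_cnf /tmp/openstack.cnf.tpl 10.15.253.195 db02 /tmp/openstack.cnf.controller02
# scp /tmp/openstack.cnf.controller02 controller02:/etc/my.cnf.d/openstack.cnf
grep wsrep_node_address /tmp/openstack.cnf.controller02   # -> wsrep_node_address="10.15.253.195"
```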
Once the installation and configuration above are done on all controller nodes, the cluster can be built from any one of them
2. Build the cluster
2.1 Stop the mariadb service on all controller nodes; controller01 as the example
systemctl stop mariadb
2.2 Start the mariadb service on controller01 as follows
[root@controller01 ~]# /usr/libexec/mysqld --wsrep-new-cluster --user=root &
[1] 8255
[root@controller01 ~]# 2020-08-28 14:02:44 0 [Note] /usr/libexec/mysqld (mysqld 10.3.20-MariaDB) starting as process 8255 ...
2.3 Join the other controller nodes to the mariadb cluster
controller02 as the example; after starting, it joins the cluster and syncs data from controller01. Progress can also be followed in the mariadb log /var/log/mariadb/mariadb.log
[root@controller02 ~]# systemctl start mariadb.service
2.4 Go back to controller01 and restart mariadb normally
#Restart controller01's mariadb; delete controller01's old data before starting
[root@controller01 ~]# pkill -9 mysqld
[root@controller01 ~]# rm -rf /var/lib/mysql/*
#Mind the file ownership when starting mariadb as a systemd unit
[root@controller01 ~]# chown mysql:mysql /var/run/mariadb/mariadb.pid
## After starting, check the service status; controller01 syncs its data from controller02
[root@controller01 ~]# systemctl start mariadb.service
[root@controller01 ~]# systemctl status mariadb.service
2.5 Check the cluster status
[root@controller01 ~]# mysql -uroot -p
MariaDB [(none)]> show status like "wsrep_cluster_size";
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
1 row in set (0.001 sec)
MariaDB [(none)]> SHOW status LIKE 'wsrep_ready';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wsrep_ready | ON |
+---------------+-------+
1 row in set (0.001 sec)
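A small readiness check can wrap the query above; the sketch below parses the wsrep_cluster_size value out of SHOW STATUS output (the expected size of 3 matches this plan; the real mysql call is left as a comment and the demo feeds in mocked output):

```shell
#!/bin/sh
# Sketch: extract wsrep_cluster_size from "SHOW STATUS"-style output on
# stdin and compare it against the expected node count (3 in this plan).
cluster_size_ok() {
    expected=$1
    # In practice: mysql -uroot -pZx***** -Nse "show status like 'wsrep_cluster_size'"
    # The output format fed in on stdin is: "wsrep_cluster_size<TAB>3"
    size=$(awk '$1 == "wsrep_cluster_size" {print $2}')
    [ "$size" = "$expected" ]
}

# Demo with mocked mysql output:
printf 'wsrep_cluster_size\t3\n' | cluster_size_ok 3 && echo "cluster complete"
```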
2.6 Create a database on controller01 and verify it replicates to the other two nodes
[root@controller01 ~]# mysql -uroot -p
MariaDB [(none)]> create database cluster_test charset utf8mb4;
Query OK, 1 row affected (0.005 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| cluster_test |
| information_schema |
| mysql |
| performance_schema |
+--------------------+
另外两台查看
[root@controller02 ~]# mysql -uroot -pZx***** -e 'show databases'
+--------------------+
| Database |
+--------------------+
| cluster_test | √
| information_schema |
| mysql |
| performance_schema |
+--------------------+
[root@controller03 ~]# mysql -uroot -pZx***** -e 'show databases'
+--------------------+
| Database |
+--------------------+
| cluster_test | √
| information_schema |
| mysql |
| performance_schema |
+--------------------+
3. Set up heartbeat checks with clustercheck
3.1 Download the clustercheck script
Download and edit the script on all controller nodes
wget -P /extend/shell/ https://raw.githubusercontent.com/olafz/percona-clustercheck/master/clustercheck
Make the account/password in the script match the database user created below: the default clustercheck account name is kept here, but the password is changed, so the script must be edited accordingly
[root@controller01 ~]# vim /extend/shell/clustercheck
MYSQL_USERNAME="clustercheck"
MYSQL_PASSWORD="Zx*****"
MYSQL_HOST="localhost"
MYSQL_PORT="3306"
...
#Make it executable and copy it to /usr/bin/
[root@controller01 ~]# chmod +x /extend/shell/clustercheck
[root@controller01 ~]# \cp /extend/shell/clustercheck /usr/bin/
3.2 Create the heartbeat check user
Create the clustercheck user and grant it privileges in the database on any one controller node; the other two nodes sync it automatically
GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' IDENTIFIED BY 'Zx*****';
flush privileges;
3.3 Create the heartbeat check service file
On all controller nodes, add the heartbeat-check service file /etc/xinetd.d/galera-monitor; controller01 as the example
[root@controller01 ~]# touch /etc/xinetd.d/galera-monitor
[root@controller01 ~]# cat >/etc/xinetd.d/galera-monitor <<EOF
# default:on
# description: galera-monitor
service galera-monitor
{
port = 9200
disable = no
socket_type = stream
protocol = tcp
wait = no
user = root
group = root
groups = yes
server = /usr/bin/clustercheck
type = UNLISTED
per_source = UNLIMITED
log_on_success =
log_on_failure = HOST
flags = REUSE
}
EOF
3.4 Start the heartbeat check service
On all controller nodes, edit /etc/services to repurpose TCP port 9200; controller01 as the example
[root@controller01 ~]# vim /etc/services
...
#wap-wsp 9200/tcp # WAP connectionless session service
galera-monitor 9200/tcp # galera-monitor
Start the xinetd service
#Start it on all controller nodes
systemctl daemon-reload
systemctl enable xinetd
systemctl start xinetd
3.5 Test the heartbeat check script
Verify on all controller nodes; controller01 as the example
[root@controller01 ~]# /usr/bin/clustercheck
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40
Percona XtraDB Cluster Node is synced.
4. Recovery after abnormal shutdown or power loss
After a sudden power loss, all Galera hosts shut down uncleanly, and when power returns the Galera cluster services will fail to start normally. Handle it as follows:
Step 1: start the mariadb service on the cluster's bootstrap (primary) host.
Step 2: start the mariadb service on the member hosts.
Exception: what if the mysql service will not start on either the bootstrap host or the member hosts?
#Option 1:
Step 1: delete the state file /var/lib/mysql/grastate.dat on the bootstrap host,
then start the service with /bin/galera_new_cluster. It starts normally; log in and check the wsrep status.
Step 2: delete /var/lib/mysql/grastate.dat on the member hosts,
then restart the service with systemctl restart mariadb. It starts normally; log in and check the wsrep status.
#Option 2:
Step 1: in /var/lib/mysql/grastate.dat on the bootstrap host, change the safe_to_bootstrap value from 0 to 1,
then start the service with /bin/galera_new_cluster. It starts normally; log in and check the wsrep status.
Step 2: change the 0 to 1 in /var/lib/mysql/grastate.dat on the member hosts as well,
then restart the service with systemctl restart mariadb. It starts normally; log in and check the wsrep status.
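Option 2's edit of grastate.dat can be done with sed rather than by hand; a minimal sketch (the sample file layout follows Galera's standard grastate.dat, with a placeholder uuid):

```shell
#!/bin/sh
# Sketch: mark a node safe to bootstrap by flipping safe_to_bootstrap
# to 1 in grastate.dat. Run only on the one node chosen as bootstrap.
mark_safe_to_bootstrap() {
    sed -i 's/^safe_to_bootstrap: *0/safe_to_bootstrap: 1/' "$1"
}

# Demo on a copy of a typical grastate.dat (uuid is a placeholder):
cat > /tmp/grastate.dat <<'EOF'
# GALERA saved state
version: 2.1
uuid:    6bb1f3cb-0000-0000-0000-000000000000
seqno:   -1
safe_to_bootstrap: 0
EOF
mark_safe_to_bootstrap /tmp/grastate.dat
grep safe_to_bootstrap /tmp/grastate.dat   # -> safe_to_bootstrap: 1
```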
VI. RabbitMQ Cluster (Controller Nodes)
https://www.rabbitmq.com/which-erlang.html
1. Install the packages (all nodes)
controller01 as the example; RabbitMQ is built on Erlang, so Erlang is installed first, here via yum
[root@controller01 ~]# yum install erlang rabbitmq-server -y
[root@controller01 ~]# systemctl enable rabbitmq-server.service
2. Build the rabbitmq cluster
2.1 Start the rabbitmq service on any one controller node first
controller01 is used here
[root@controller01 ~]# systemctl start rabbitmq-server.service
[root@controller01 ~]# rabbitmqctl cluster_status
2.2 Distribute .erlang.cookie to the other controller nodes
scp /var/lib/rabbitmq/.erlang.cookie controller02:/var/lib/rabbitmq/
scp /var/lib/rabbitmq/.erlang.cookie controller03:/var/lib/rabbitmq/
2.3 Fix the user/group of the .erlang.cookie file on controller02 and controller03
[root@controller02 ~]# chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
[root@controller03 ~]# chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
Note: check the permissions of the .erlang.cookie file on all controller nodes; the default 400 can be left unchanged
2.4 Start the rabbitmq service on controller02 and controller03
[root@controller02 ~]# systemctl start rabbitmq-server
[root@controller03 ~]# systemctl start rabbitmq-server
2.5 Build the cluster: controller02 and controller03 join as RAM nodes
[root@controller02 ~]# rabbitmqctl stop_app
[root@controller02 ~]# rabbitmqctl join_cluster --ram rabbit@controller01
[root@controller02 ~]# rabbitmqctl start_app
[root@controller03 ~]# rabbitmqctl stop_app
[root@controller03 ~]# rabbitmqctl join_cluster --ram rabbit@controller01
[root@controller03 ~]# rabbitmqctl start_app
2.6 Check the RabbitMQ cluster status from any controller node
[root@controller01 ~]# rabbitmqctl cluster_status
Basics
Cluster name: rabbit@controller01
Disk Nodes
rabbit@controller01
RAM Nodes
rabbit@controller02
rabbit@controller03
Running Nodes
rabbit@controller01
rabbit@controller02
rabbit@controller03
Versions
rabbit@controller01: RabbitMQ 3.8.3 on Erlang 22.3.4.1
rabbit@controller02: RabbitMQ 3.8.3 on Erlang 22.3.4.1
rabbit@controller03: RabbitMQ 3.8.3 on Erlang 22.3.4.1
.....
2.7 Create the rabbitmq administrator account
# Create the account and set its password on any node; controller01 as the example
[root@controller01 ~]# rabbitmqctl add_user openstack Zx*****
# Set the new account's tags
[root@controller01 ~]# rabbitmqctl set_user_tags openstack administrator
# Set the new account's permissions
[root@controller01 ~]# rabbitmqctl set_permissions -p "/" openstack ".*" ".*" ".*"
# List accounts
[root@controller01 ~]# rabbitmqctl list_users
Listing users ...
user tags
openstack [administrator]
guest [administrator]
2.8 Mirrored-queue HA
Enable mirrored-queue high availability
[root@controller01 ~]# rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
Check the mirrored-queue policy from any controller node
[root@controller01 ~]# rabbitmqctl list_policies
Listing policies for vhost "/" ...
vhost name pattern apply-to definition priority
/ ha-all ^ all {"ha-mode":"all"} 0
2.9 Install the web management plugin
Install the web management plugin on all controller nodes; controller01 as the example
[root@controller01 ~]# rabbitmq-plugins enable rabbitmq_management
[root@controller01 ~]# netstat -lntup|grep 5672
tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 10461/beam.smp
tcp 0 0 0.0.0.0:15672 0.0.0.0:* LISTEN 10461/beam.smp
tcp6 0 0 :::5672 :::* LISTEN 10461/beam.smp
Browse to any node, e.g. http://10.15.253.163:15672
VII. Memcached Cluster (Controller Nodes)
- Memcached is an open-source, high-performance, distributed in-memory object caching system. It fits any scenario that needs caching; its main purpose is to speed up web applications by reducing database access.
- The typical use case: cache database query results to cut the number of database hits and improve the speed and scalability of dynamic web applications.
- Essentially, memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) produced by database calls, API calls, or page rendering.
- Memcached is stateless: each controller node runs its own independent instance, and the OpenStack services are simply configured with the memcached endpoints of all the controller nodes.
1 Install the memcached packages
Install on all controller nodes; on CentOS 8 the Python binding has become python3-memcached
yum install memcached python3-memcached -y
2 Configure memcached
On every node running memcached, set the service to listen on all addresses (the stock configuration listens only on localhost)
sed -i 's|127.0.0.1,::1|0.0.0.0|g' /etc/sysconfig/memcached
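The sed edit above can be sanity-checked against a copy of the stock file; a minimal sketch (the sample content is an assumption about the default CentOS /etc/sysconfig/memcached):

```shell
#!/bin/sh
# Sketch: apply the listen-address change to a sample of the stock
# CentOS /etc/sysconfig/memcached and show the result.
cat > /tmp/memcached.sysconfig <<'EOF'
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1"
EOF
sed -i 's|127.0.0.1,::1|0.0.0.0|g' /tmp/memcached.sysconfig
grep OPTIONS /tmp/memcached.sysconfig   # -> OPTIONS="-l 0.0.0.0"
```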
3 Enable at boot and start
systemctl enable memcached.service
systemctl start memcached.service
systemctl status memcached.service
[root@controller01 ~]# netstat -lntup|grep memcached
tcp 0 0 0.0.0.0:11211 0.0.0.0:* LISTEN 13982/memcached
VIII. Configure the Pacemaker High-Availability Cluster
https://docs.openstack.org/ha-guide/index.html
Service | Role |
---|---|
pacemaker | Cluster resource manager (CRM): starts and stops services; occupies the resource-management and resource-agent layers of the HA cluster architecture |
corosync | Messaging layer: manages membership, messaging, and quorum; sits at the bottom of the HA architecture and provides heartbeat between the nodes |
resource-agents | Resource agents: tools (usually scripts) on each node that take instructions from the CRM to manage a given resource |
pcs | Command-line toolset |
fence-agents | Shuts down a node that is unstable or unresponsive so it cannot damage other cluster resources; its main purpose is to prevent split-brain |
The OpenStack documentation uses the open-source pacemaker cluster stack as the cluster high-availability resource manager.
1 Install the packages
Install on all controller nodes; controller01 as the example
[root@controller01 ~]# yum install pacemaker pcs corosync fence-agents resource-agents -y
2 Build the cluster
2.1 Start the pcsd service
Run on all controller nodes; controller01 as the example
[root@controller01 ~]# systemctl enable pcsd
[root@controller01 ~]# systemctl start pcsd
2.2 Set the password of the (auto-generated) cluster administrator hacluster
Run on all controller nodes; controller01 as the example
[root@controller01 ~]# echo Zx***** | passwd --stdin hacluster
2.3 Authenticate the nodes
Run the authentication on any one node; controller01 as the example;
Node authentication, which forms the cluster, uses the password set in the previous step
[root@controller01 ~]# pcs host auth controller01 controller02 controller03 -u hacluster -p Zx*****
controller01: Authorized
controller03: Authorized
controller02: Authorized
#CentOS 7 equivalent (recorded for reference only)
pcs cluster auth controller01 controller02 controller03 -u hacluster -p Zx***** --force
2.4 Create and name the cluster
Run on any one node; controller01 as the example
[root@controller01 ~]# pcs cluster setup openstack-cluster-01 --start controller01 controller02 controller03
No addresses specified for host 'controller01', using 'controller01'
No addresses specified for host 'controller02', using 'controller02'
No addresses specified for host 'controller03', using 'controller03'
Destroying cluster on hosts: 'controller01', 'controller02', 'controller03'...
controller02: Successfully destroyed cluster
controller03: Successfully destroyed cluster
controller01: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'controller01', 'controller02', 'controller03'
controller01: successful removal of the file 'pcsd settings'
controller02: successful removal of the file 'pcsd settings'
controller03: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'controller01', 'controller02', 'controller03'
controller01: successful distribution of the file 'corosync authkey'
controller01: successful distribution of the file 'pacemaker authkey'
controller02: successful distribution of the file 'corosync authkey'
controller02: successful distribution of the file 'pacemaker authkey'
controller03: successful distribution of the file 'corosync authkey'
controller03: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'controller01', 'controller02', 'controller03'
controller01: successful distribution of the file 'corosync.conf'
controller02: successful distribution of the file 'corosync.conf'
controller03: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
Starting cluster on hosts: 'controller01', 'controller02', 'controller03'...
#CentOS 7 equivalent (recorded for reference only)
pcs cluster setup --force --name openstack-cluster-01 controller01 controller02 controller03
2.5 Start the pacemaker cluster
[root@controller01 ~]# pcs cluster start --all
controller03: Starting Cluster...
controller01: Starting Cluster...
controller02: Starting Cluster...
[root@controller01 ~]# pcs cluster enable --all
controller01: Cluster Enabled
controller02: Cluster Enabled
controller03: Cluster Enabled
2.6 Check the pacemaker cluster status
The cluster status can also be viewed with the crm_mon -1 command:
[root@controller01 ~]# pcs cluster status
Cluster Status:
Cluster Summary:
* Stack: corosync
* Current DC: controller02 (version 2.0.3-5.el8_2.1-4b1f869f0f) - partition with quorum
* Last updated: Sat Aug 29 00:37:11 2020
* Last change: Sat Aug 29 00:31:57 2020 by hacluster via crmd on controller02
* 3 nodes configured
* 0 resource instances configured
Node List:
* Online: [ controller01 controller02 controller03 ]
PCSD Status:
controller01: Online
controller03: Online
controller02: Online
Node configuration can be viewed with cibadmin --query --scope nodes
[root@controller01 ~]# cibadmin --query --scope nodes
<nodes>
<node id="1" uname="controller01"/>
<node id="2" uname="controller02"/>
<node id="3" uname="controller03"/>
</nodes>
2.7 Check the corosync status
corosync synchronizes low-level state information between the nodes
[root@controller01 ~]# pcs status corosync
Membership information
----------------------
Nodeid Votes Name
1 1 controller01 (local)
2 1 controller02
3 1 controller03
2.8 Check nodes and resources
#List the members
[root@controller01 ~]# corosync-cmapctl | grep members
runtime.members.1.config_version (u64) = 0
runtime.members.1.ip (str) = r(0) ip(10.15.253.163)
runtime.members.1.join_count (u32) = 1
runtime.members.1.status (str) = joined
runtime.members.2.config_version (u64) = 0
runtime.members.2.ip (str) = r(0) ip(10.15.253.195)
runtime.members.2.join_count (u32) = 1
runtime.members.2.status (str) = joined
runtime.members.3.config_version (u64) = 0
runtime.members.3.ip (str) = r(0) ip(10.15.253.227)
runtime.members.3.join_count (u32) = 1
runtime.members.3.status (str) = joined
#List the resources
[root@controller01 ~]# pcs resource
NO resources configured
2.9 Access pacemaker through the web UI
Browse to any controller node: https://10.15.253.163:2224
Account/password (set when building the cluster): hacluster/Zx*****
2.10 Set the high-availability properties
Set the properties on any one controller node; controller01 as the example.
- Raise the retention limits for the policy engine's inputs, warnings, and errors; the history is useful for troubleshooting
[root@controller01 ~]# pcs property set pe-warn-series-max=1000 \
pe-input-series-max=1000 \
pe-error-series-max=1000
- pacemaker handles state in a time-driven way; cluster-recheck-interval defines the interval at which certain pacemaker operations run, 15min by default. Setting it to 5min or 3min is recommended
[root@controller01 ~]# pcs property set cluster-recheck-interval=5
- corosync enables stonith by default, but the stonith mechanism (which powers nodes off via IPMI or SSH) has no stonith device configured here (check the configuration with crm_verify -L -V; no output means it is valid), and pacemaker will refuse to start any resource in that state. Adjust as appropriate in production; in a test environment it can be disabled
[root@controller01 ~]# pcs property set stonith-enabled=false
- By default the cluster considers itself quorate ("legal") when more than half the nodes are online, i.e. it satisfies total_nodes < 2 * active_nodes;
- For a 3-node cluster, losing 2 nodes violates that formula and the cluster loses quorum; for a 2-node cluster, losing 1 node does, which is why a plain "two-node cluster" is of limited value;
- In production, a 2-node cluster that cannot reach quorum may choose to ignore it; for a 3-node cluster, set this according to the availability threshold required
[root@controller01 ~]# pcs property set no-quorum-policy=ignore
- To support multi-node clusters, heartbeat v2 introduced a scoring policy that controls where resources run; each node's total score is computed, and the highest-scoring node becomes active for a given resource (or resource group);
- Each resource's initial score (the global default-resource-stickiness, visible with "pcs property list --all") defaults to 0, as does the score deducted on each failure (default-resource-failure-stickiness); with both at 0, a failing resource is simply restarted in place, never failed over;
- Setting resource-stickiness or resource-failure-stickiness on an individual resource overrides the global defaults for that resource;
- Typically resource-stickiness is positive and resource-failure-stickiness is negative; the special values INFINITY and -INFINITY express the simple extremes "never move" and "move on any failure";
- A node with a negative score will never take over a resource (a cold standby); if some node's score exceeds that of the node currently running the resource, the current node releases it and the higher-scoring node takes over;
- pcs property list shows only the properties changed from their defaults; add --all to include default values;
- The settings can also be inspected in /var/lib/pacemaker/cib/cib.xml, via "pcs cluster cib", or with "cibadmin --query --scope crm_config"; use "cibadmin --query --scope resources" for the resource configuration
[root@controller01 ~]# pcs property list
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: openstack-cluster-01
cluster-recheck-interval: 5
dc-version: 2.0.3-5.el8_2.1-4b1f869f0f
have-watchdog: false
no-quorum-policy: ignore
pe-error-series-max: 1000
pe-input-series-max: 1000
pe-warn-series-max: 1000
stonith-enabled: false
3. Configure the VIP
- Create the VIP resource (its resource_id) on any one controller node; it is named vip here;
- ocf (the standard attribute) is one kind of resource agent; others include systemd, lsb, and service;
- heartbeat is the provider attribute: the OCF spec allows multiple vendors to ship the same resource agent, and most OCF agents use heartbeat as the provider;
- IPaddr2 is the resource agent's name (the type attribute);
- cidr_netmask: the prefix length of the subnet mask;
- The resource attributes (standard:provider:type) locate the RA script for the vip resource; on CentOS, OCF-compliant RA scripts live under /usr/lib/ocf/resource.d/, with one directory per provider, each holding multiple types;
- op stands for operations (here, monitoring with a 30s interval)
[root@controller01 ~]# pcs resource create vip ocf:heartbeat:IPaddr2 ip=10.15.253.88 cidr_netmask=24 op monitor interval=30s
Check the cluster resources
Querying with pcs resource shows the vip resource on controller01; the VIP itself is visible with ip a show
[root@controller01 ~]# pcs resource
* vip (ocf::heartbeat:IPaddr2): Started controller01
[root@controller01 ~]# ip a show ens192
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:82:82:40 brd ff:ff:ff:ff:ff:ff
inet 10.15.253.163/12 brd 10.15.255.255 scope global noprefixroute ens192
valid_lft forever preferred_lft forever
inet 10.15.253.88/24 brd 10.15.255.255 scope global ens192
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe82:8240/64 scope link
valid_lft forever preferred_lft forever
Optional (depending on whether the deployment distinguishes them):
If the APIs separate admin/internal/public endpoints and only the public endpoint is exposed to clients, two VIPs are usually created, named for example vip_management and vip_public.
It is advisable to colocate vip_management and vip_public on one node
[root@controller01 ~]# pcs constraint colocation add vip_management with vip_public
4. High-availability management
Browse to any controller node: https://10.15.253.163:2224
Account/password (set when building the cluster): hacluster/Zx*****
Although the cluster was built from the command line, the web UI does not show it by default; add the existing cluster manually by entering any one of its member nodes.
IX. Deploy HAProxy
https://docs.openstack.org/ha-guide/control-plane-stateless.html#load-balancer
1. Install haproxy (controller nodes)
Install haproxy on all controller nodes; controller01 as the example
[root@controller01 ~]# yum install haproxy -y
2. Configure haproxy.cfg
Configure on all controller nodes; controller01 as the example
Create the HAProxy log directory and grant write access
Enabling haproxy's log is recommended; it helps with later troubleshooting
[root@controller01 ~]# mkdir /var/log/haproxy
[root@controller01 ~]# chmod a+w /var/log/haproxy
Modify the following fields in the rsyslog configuration
#Uncomment these lines (adding any that are missing)
[root@controller01 ~]# vim /etc/rsyslog.conf
19 module(load="imudp") # needs to be done just once
20 input(type="imudp" port="514")
24 module(load="imtcp") # needs to be done just once
25 input(type="imtcp" port="514")
#Append the haproxy log configuration at the end of the file
local0.=info -/var/log/haproxy/haproxy-info.log
local0.=err -/var/log/haproxy/haproxy-err.log
local0.notice;local0.!=err -/var/log/haproxy/haproxy-notice.log
#Restart rsyslog
[root@controller01 ~]# systemctl restart rsyslog
The cluster's haproxy configuration touches many services; all the OpenStack services involved are configured here in one pass,
using the VIP 10.15.253.88
[root@controller01 ~]# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
[root@controller01 ~]# cat /etc/haproxy/haproxy.cfg
global
log 127.0.0.1 local0
chroot /var/lib/haproxy
daemon
group haproxy
user haproxy
maxconn 4000
pidfile /var/run/haproxy.pid
stats socket /var/lib/haproxy/stats
defaults
mode http
log global
maxconn 4000 # max connections
option httplog
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout check 10s
# haproxy stats page
listen stats
bind 0.0.0.0:1080
mode http
stats enable
stats uri /
stats realm OpenStack\ Haproxy
stats auth admin:admin
stats refresh 30s
stats show-node
stats show-legends
stats hide-version
# horizon service
listen dashboard_cluster
bind 10.15.253.88:80
balance source
option tcpka
option httpchk
option tcplog
server controller01 10.15.253.163:80 check inter 2000 rise 2 fall 5
server controller02 10.15.253.195:80 check inter 2000 rise 2 fall 5
server controller03 10.15.253.227:80 check inter 2000 rise 2 fall 5
# mariadb service;
#controller01 is set as master and controller02/03 as backup; a one-master-multiple-backup layout avoids data inconsistency;
#The official example health-checks port 9200 (the heartbeat), but in testing, with the mariadb service down, the /usr/bin/clustercheck script correctly detected the failure while the xinetd-controlled port 9200 still accepted connections, so haproxy kept forwarding requests to the dead mariadb node; port 3306 is checked instead for now
listen galera_cluster
bind 10.15.253.88:3306
balance source
mode tcp
server controller01 10.15.253.163:3306 check inter 2000 rise 2 fall 5
server controller02 10.15.253.195:3306 backup check inter 2000 rise 2 fall 5
server controller03 10.15.253.227:3306 backup check inter 2000 rise 2 fall 5
#Provide an HA cluster access port for rabbitmq, used by the OpenStack services;
#If the OpenStack services connect to the rabbitmq cluster directly, this rabbitmq load balancing can be omitted
listen rabbitmq_cluster
bind 10.15.253.88:5673
mode tcp
option tcpka
balance roundrobin
timeout client 3h
timeout server 3h
option clitcpka
server controller01 10.15.253.163:5672 check inter 10s rise 2 fall 5
server controller02 10.15.253.195:5672 check inter 10s rise 2 fall 5
server controller03 10.15.253.227:5672 check inter 10s rise 2 fall 5
# glance_api service
listen glance_api_cluster
bind 10.15.253.88:9292
balance source
option tcpka
option httpchk
option tcplog
server controller01 10.15.253.163:9292 check inter 2000 rise 2 fall 5
server controller02 10.15.253.195:9292 check inter 2000 rise 2 fall 5
server controller03 10.15.253.227:9292 check inter 2000 rise 2 fall 5
# keystone_public_api service
listen keystone_public_cluster
bind 10.15.253.88:5000
balance source
option tcpka
option httpchk
option tcplog
server controller01 10.15.253.163:5000 check inter 2000 rise 2 fall 5
server controller02 10.15.253.195:5000 check inter 2000 rise 2 fall 5
server controller03 10.15.253.227:5000 check inter 2000 rise 2 fall 5
listen nova_compute_api_cluster
bind 10.15.253.88:8774
balance source
option tcpka
option httpchk
option tcplog
server controller01 10.15.253.163:8774 check inter 2000 rise 2 fall 5
server controller02 10.15.253.195:8774 check inter 2000 rise 2 fall 5
server controller03 10.15.253.227:8774 check inter 2000 rise 2 fall 5
listen nova_placement_cluster
bind 10.15.253.88:8778
balance source
option tcpka
option tcplog
server controller01 10.15.253.163:8778 check inter 2000 rise 2 fall 5
server controller02 10.15.253.195:8778 check inter 2000 rise 2 fall 5
server controller03 10.15.253.227:8778 check inter 2000 rise 2 fall 5
listen nova_metadata_api_cluster
bind 10.15.253.88:8775
balance source
option tcpka
option tcplog
server controller01 10.15.253.163:8775 check inter 2000 rise 2 fall 5
server controller02 10.15.253.195:8775 check inter 2000 rise 2 fall 5
server controller03 10.15.253.227:8775 check inter 2000 rise 2 fall 5
listen nova_vncproxy_cluster
bind 10.15.253.88:6080
balance source
option tcpka
option tcplog
server controller01 10.15.253.163:6080 check inter 2000 rise 2 fall 5
server controller02 10.15.253.195:6080 check inter 2000 rise 2 fall 5
server controller03 10.15.253.227:6080 check inter 2000 rise 2 fall 5
listen neutron_api_cluster
bind 10.15.253.88:9696
balance source
option tcpka
option httpchk
option tcplog
server controller01 10.15.253.163:9696 check inter 2000 rise 2 fall 5
server controller02 10.15.253.195:9696 check inter 2000 rise 2 fall 5
server controller03 10.15.253.227:9696 check inter 2000 rise 2 fall 5
listen cinder_api_cluster
bind 10.15.253.88:8776
balance source
option tcpka
option httpchk
option tcplog
server controller01 10.15.253.163:8776 check inter 2000 rise 2 fall 5
server controller02 10.15.253.195:8776 check inter 2000 rise 2 fall 5
server controller03 10.15.253.227:8776 check inter 2000 rise 2 fall 5
Copy the configuration file to the other nodes:
scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg
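The API listen blocks above are identical apart from name and port, so they can be generated to avoid copy/paste errors; a minimal sketch (per-service extras such as option httpchk are omitted, and the demo writes to /tmp):

```shell
#!/bin/sh
# Sketch: emit a haproxy "listen" block per API service, mirroring the
# pattern used above (VIP bind, balance source, per-controller servers).
VIP=10.15.253.88
NODES="controller01:10.15.253.163 controller02:10.15.253.195 controller03:10.15.253.227"

emit_listen() {                      # emit_listen <name> <port>
    printf 'listen %s\n  bind %s:%s\n  balance source\n  option tcpka\n  option tcplog\n' "$1" "$VIP" "$2"
    for n in $NODES; do
        printf '  server %s %s:%s check inter 2000 rise 2 fall 5\n' "${n%%:*}" "${n#*:}" "$2"
    done
}

emit_listen nova_compute_api_cluster 8774 > /tmp/haproxy-api.cfg
emit_listen neutron_api_cluster 9696 >> /tmp/haproxy-api.cfg
head -3 /tmp/haproxy-api.cfg
```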
3. Configure kernel parameters
These were already set during base environment preparation; recorded again here, controller01 as the example;
- net.ipv4.ip_nonlocal_bind: whether binding to non-local IPs is allowed; this determines whether the haproxy instances can bind to, and fail over, the VIP
- net.ipv4.ip_forward: whether forwarding is allowed
echo 'net.ipv4.ip_nonlocal_bind = 1' >>/etc/sysctl.conf
echo "net.ipv4.ip_forward = 1" >>/etc/sysctl.conf
sysctl -p
4. Start the service
Enabling the service at boot is optional: once the haproxy resource is managed by pacemaker, pacemaker controls whether haproxy runs on each node
systemctl enable haproxy
systemctl restart haproxy
systemctl status haproxy
5. Check the stats page
Browse to http://10.15.253.88:1080 with username/password admin/admin
6. Configure the pcs resources
6.1 Add the lb-haproxy-clone resource
Run on any controller node; controller01 as the example
[root@controller01 ~]# pcs resource create lb-haproxy systemd:haproxy clone
[root@controller01 ~]# pcs resource
* vip (ocf::heartbeat:IPaddr2): Started controller01
* Clone Set: lb-haproxy-clone [lb-haproxy]:
* Started: [ controller01 ]
6.2 Set the resource start order: vip first, then lb-haproxy-clone
Resource constraints can be viewed with cibadmin --query --scope constraints
[root@controller01 ~]# pcs constraint order start vip then lb-haproxy-clone kind=Optional
Adding vip lb-haproxy-clone (kind: Optional) (Options: first-action=start then-action=start)
6.3 Colocate the two resources on one node
The official guidance is to run the VIP on the node where haproxy is active; binding lb-haproxy-clone to the vip resource colocates them on one node. After the constraint is added, from the resource point of view, pcs stops haproxy on the nodes that do not currently hold the VIP
[root@controller01 ~]# pcs constraint colocation add lb-haproxy-clone with vip
[root@controller01 ~]# pcs resource
* vip (ocf::heartbeat:IPaddr2): Started controller01
* Clone Set: lb-haproxy-clone [lb-haproxy]:
* Started: [ controller01 ]
* Stopped: [ controller02 controller03 ]
6.4 Review the resource settings in the pacemaker high-availability web UI
The high-availability layer (pacemaker & haproxy) is now fully deployed.