1. Introduction

Amid the wave of enterprise digital transformation, private clouds have become a core part of enterprise IT infrastructure thanks to data sovereignty, security and compliance, and flexible customization. This article builds a complete OpenStack Train private cloud from scratch on CentOS 7.9, covering environment preparation, core component deployment, operations and monitoring, troubleshooting, and performance tuning. All steps have been verified hands-on and can be applied directly, providing a complete practical guide to private cloud deployment.


2. Environment Planning and Base Configuration

2.1 Hardware and Network Planning

| Node role  | Hostname   | IP address    | Minimum spec   | Recommended spec    | Core responsibilities                               |
|------------|------------|---------------|----------------|---------------------|-----------------------------------------------------|
| Controller | controller | 192.168.10.10 | 4C / 8G / 100G | 8C / 16G / 500G SSD | Identity, images, networking, scheduling, Dashboard |
| Compute    | compute1   | 192.168.10.11 | 4C / 8G / 100G | 8C / 16G / 500G SSD | VM creation and operation, network agents           |
| Storage    | storage1   | 192.168.10.12 | 4C / 8G / 1T   | 8C / 16G / 2T HDD   | Block storage, data persistence                     |

  • OS requirements: CentOS 7.9 x86_64 minimal install, no graphical desktop
  • Network requirements: all nodes reachable on the internal network, controller has outbound Internet access, firewalld / SELinux disabled (production environments should use allow-lists instead)
  • Time synchronization: all nodes set to the Asia/Shanghai timezone, synced with chrony

2.2 Common Base Configuration for All Nodes

bash


# 1. Disable firewalld and SELinux
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# 2. Configure time synchronization
yum install -y chrony
systemctl enable --now chronyd
timedatectl set-timezone Asia/Shanghai
timedatectl set-ntp yes

# 3. Configure hostnames and hosts resolution
# On the controller node
hostnamectl set-hostname controller
# On the compute node
hostnamectl set-hostname compute1
# On the storage node
hostnamectl set-hostname storage1
# On all nodes, add the same hosts entries
cat >> /etc/hosts << EOF
192.168.10.10  controller
192.168.10.11  compute1
192.168.10.12  storage1
EOF

# 4. Configure YUM repositories (faster installs)
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo
yum clean all && yum makecache

# 5. Install the OpenStack repository and base dependencies
yum install -y centos-release-openstack-train
yum install -y python-openstackclient openstack-selinux wget net-tools vim
yum update -y && reboot

3. Controller Node Core Service Deployment

3.1 Database Service (MariaDB)

bash


# Install MariaDB
yum install -y mariadb mariadb-server python2-PyMySQL

# Configure the database
cat >> /etc/my.cnf.d/openstack.cnf << EOF
[mysqld]
bind-address = 192.168.10.10
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
EOF

# Start the service and run the initial hardening
systemctl enable --now mariadb
mysql_secure_installation
# Follow the prompts: set the root password, remove anonymous users, disable remote root login, drop the test database, reload privileges

3.2 Message Queue (RabbitMQ)

bash


yum install -y rabbitmq-server
systemctl enable --now rabbitmq-server

# Create a dedicated OpenStack user
rabbitmqctl add_user openstack OpenStack@123
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
rabbitmqctl set_user_tags openstack administrator
# Enable the management UI (optional)
rabbitmq-plugins enable rabbitmq_management

3.3 Caching Service (Memcached)

bash


yum install -y memcached python-memcached
sed -i 's/127.0.0.1/192.168.10.10/' /etc/sysconfig/memcached
systemctl enable --now memcached

3.4 Identity Service (Keystone)

bash


# 1. Create the database
mysql -u root -p << EOF
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'Keystone@123';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'Keystone@123';
FLUSH PRIVILEGES;
EOF

# 2. Install the service
yum install -y openstack-keystone httpd mod_wsgi

# 3. Configure keystone (note: '@' in a password must be URL-encoded as %40 inside connection URLs)
cat >> /etc/keystone/keystone.conf << EOF
[database]
connection = mysql+pymysql://keystone:Keystone%40123@controller/keystone
[token]
provider = fernet
EOF

# 4. Initialize the database and keys
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

# 5. Bootstrap the identity service
keystone-manage bootstrap --bootstrap-password Admin@123 \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

# 6. Configure Apache
echo "ServerName controller" >> /etc/httpd/conf/httpd.conf
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable --now httpd

# 7. Create the admin environment file
cat >> /root/admin-openrc << EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=Admin@123
export OS_AUTH_URL=http://controller:5000/v3/
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
source /root/admin-openrc

# 8. Verify the service
openstack token issue
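Several connection URLs in this guide embed passwords containing '@' (e.g. Keystone@123). Inside a `mysql+pymysql://` or `rabbit://` URL that character must be percent-encoded as `%40`, or SQLAlchemy and oslo.messaging will mis-parse the host. A small helper to encode a password before pasting it into a URL (a sketch; the function name is mine):

```bash
# Percent-encode a password for use in mysql+pymysql:// and rabbit:// URLs.
# RFC 3986 unreserved characters pass through; everything else becomes %XX.
urlencode_pw() {
  local s="$1" out="" c i
  for (( i = 0; i < ${#s}; i++ )); do
    c="${s:i:1}"
    case "$c" in
      [a-zA-Z0-9.~_-]) out+="$c" ;;
      *) printf -v c '%%%02X' "'$c"; out+="$c" ;;
    esac
  done
  printf '%s\n' "$out"
}

urlencode_pw 'Keystone@123'   # prints Keystone%40123
```

The password given to `openstack user create`, `rabbitmqctl`, or a GRANT statement stays in its literal form; only the copy embedded in a URL is encoded.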

3.5 Image Service (Glance)

bash


# 1. Create the database
mysql -u root -p << EOF
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'Glance@123';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'Glance@123';
FLUSH PRIVILEGES;
EOF

# 2. Create the service user and endpoints
openstack user create --domain default --password Glance@123 glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image Service" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

# 3. Install and configure the service
yum install -y openstack-glance
cat >> /etc/glance/glance-api.conf << EOF
[database]
connection = mysql+pymysql://glance:Glance%40123@controller/glance
[keystone_authtoken]
www_authenticate_uri  = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = Glance@123
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
EOF

# 4. Initialize the database and start the service
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable --now openstack-glance-api

# 5. Verify the service
wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img
openstack image create "cirros" --file cirros-0.5.2-x86_64-disk.img --disk-format qcow2 --container-format bare --public
openstack image list
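Before uploading, it is worth confirming the downloaded image is intact, since a truncated download produces a confusing boot failure much later. A minimal sketch (the helper name and checksum approach are mine; compare against the MD5 published alongside the image on download.cirros-cloud.net):

```bash
# Compare a file's MD5 against an expected value; returns non-zero on mismatch.
verify_image() {
  local file="$1" expected="$2" actual
  actual=$(md5sum "$file" | awk '{print $1}')
  [ "$actual" = "$expected" ] || { echo "checksum mismatch: $actual" >&2; return 1; }
}

# Usage: verify_image cirros-0.5.2-x86_64-disk.img <md5-from-the-release-page>
```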

3.6 Compute Service (Nova)

bash


# 1. Create the databases
mysql -u root -p << EOF
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'Nova@123';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'Nova@123';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'Nova@123';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'Nova@123';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'Nova@123';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'Nova@123';
FLUSH PRIVILEGES;
EOF

# 2. Create the service user and endpoints
openstack user create --domain default --password Nova@123 nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute Service" compute
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

# 3. Create the Placement service (a standalone project since Stein; the
#    Ocata-era openstack-nova-placement-api package does not exist on Train)
mysql -u root -p << EOF
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'Placement@123';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'Placement@123';
FLUSH PRIVILEGES;
EOF
openstack user create --domain default --password Placement@123 placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API Service" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778

# 4. Install the services
yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler openstack-placement-api

# 5. Configure nova.conf ('@' in passwords is URL-encoded as %40; \$my_ip is
#    escaped so the shell does not expand it inside the heredoc)
cat >> /etc/nova/nova.conf << EOF
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:OpenStack%40123@controller:5672/
my_ip = 192.168.10.10
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:Nova%40123@controller/nova_api
[database]
connection = mysql+pymysql://nova:Nova%40123@controller/nova
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = Nova@123
[placement]
region_name = RegionOne
auth_url = http://controller:5000/v3
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = Placement@123
[vnc]
enabled = true
server_listen = \$my_ip
server_proxyclient_address = \$my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
EOF

# 6. Configure the Placement API
cat >> /etc/placement/placement.conf << EOF
[placement_database]
connection = mysql+pymysql://placement:Placement%40123@controller/placement
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = Placement@123
EOF
su -s /bin/sh -c "placement-manage db sync" placement

# Work around the RDO packaging issue that denies access to the Placement API
cat >> /etc/httpd/conf.d/00-placement-api.conf << EOF
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
</Directory>
EOF
systemctl restart httpd

# 7. Initialize the databases
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db_sync" nova

# 8. Start the services
systemctl enable --now openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy

# 9. Verify the services
openstack compute service list
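At this point `openstack compute service list` should show nova-conductor and nova-scheduler in the `up` state. That check can be scripted against saved CLI output, which is handy for a post-deploy smoke test (a sketch; the helper name and the use of `-f value` output are my assumptions):

```bash
# Expects a file containing the output of:
#   openstack compute service list -f value -c Binary -c State
# Returns non-zero if any required binary is not 'up'.
services_ok() {
  local f="$1" svc
  for svc in nova-conductor nova-scheduler; do
    grep -q "^$svc up$" "$f" || { echo "$svc is not up" >&2; return 1; }
  done
}
```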

3.7 Networking Service (Neutron)

bash


# 1. Create the database
mysql -u root -p << EOF
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'Neutron@123';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'Neutron@123';
FLUSH PRIVILEGES;
EOF

# 2. Create the service user and endpoints
openstack user create --domain default --password Neutron@123 neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking Service" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

# 3. Install the services (Linux bridge + VXLAN self-service networking)
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

# 4. Configure neutron.conf
cat >> /etc/neutron/neutron.conf << EOF
[DEFAULT]
transport_url = rabbit://openstack:OpenStack%40123@controller:5672/
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
[database]
connection = mysql+pymysql://neutron:Neutron%40123@controller/neutron
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = Neutron@123
[nova]
auth_url = http://controller:5000/v3
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = Nova@123
region_name = RegionOne
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
EOF

# 5. Configure the ML2 plugin
cat >> /etc/neutron/plugins/ml2/ml2_conf.ini << EOF
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
EOF

# 6. Configure the Linux bridge agent (adjust eth0 to this node's provider interface)
cat >> /etc/neutron/plugins/ml2/linuxbridge_agent.ini << EOF
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = true
local_ip = 192.168.10.10
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
EOF

# 7. Configure the L3, DHCP, and metadata agents
cat >> /etc/neutron/l3_agent.ini << EOF
[DEFAULT]
interface_driver = linuxbridge
EOF

cat >> /etc/neutron/dhcp_agent.ini << EOF
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
dnsmasq_dns_servers = 223.5.5.5,114.114.114.114
EOF

cat >> /etc/neutron/metadata_agent.ini << EOF
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = Metadata@123
EOF

# 8. Configure Nova to use the metadata proxy
cat >> /etc/nova/nova.conf << EOF
[neutron]
url = http://controller:9696
auth_url = http://controller:5000/v3
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = Neutron@123
region_name = RegionOne
service_metadata_proxy = true
metadata_proxy_shared_secret = Metadata@123
EOF

# 9. Initialize the database
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

# 10. Restart the Nova services and start the Neutron services
systemctl restart openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
systemctl enable --now neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent neutron-l3-agent

# 11. Verify the service
openstack network agent list

3.8 Block Storage Service (Cinder, controller node)

bash


# 1. Create the database
mysql -u root -p << EOF
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'Cinder@123';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'Cinder@123';
FLUSH PRIVILEGES;
EOF

# 2. Create the service user and endpoints
openstack user create --domain default --password Cinder@123 cinder
openstack role add --project service --user cinder admin
openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack endpoint create --region RegionOne volume public http://controller:8776/v3
openstack endpoint create --region RegionOne volume internal http://controller:8776/v3
openstack endpoint create --region RegionOne volume admin http://controller:8776/v3

# 3. Install the service
yum install -y openstack-cinder

# 4. Configure cinder.conf
cat >> /etc/cinder/cinder.conf << EOF
[DEFAULT]
transport_url = rabbit://openstack:OpenStack%40123@controller:5672/
auth_strategy = keystone
my_ip = 192.168.10.10
[database]
connection = mysql+pymysql://cinder:Cinder%40123@controller/cinder
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = Cinder@123
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
EOF

# 5. Initialize the database
su -s /bin/sh -c "cinder-manage db sync" cinder

# 6. Point Nova at Cinder
cat >> /etc/nova/nova.conf << EOF
[cinder]
os_region_name = RegionOne
EOF
systemctl restart openstack-nova-api

# 7. Start the services
systemctl enable --now openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume

3.9 Dashboard (Horizon)

bash


yum install -y openstack-dashboard
cat >> /etc/openstack-dashboard/local_settings << EOF
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3/" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': True,
    'enable_ipv6': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
EOF
systemctl restart httpd memcached
# Browse to http://controller/dashboard and log in as admin / Admin@123 to verify

4. Compute Node Deployment

bash


# 1. Apply the same base configuration as on the controller node
# 2. Install the Nova compute service
yum install -y openstack-nova-compute

# 3. Configure nova.conf
cat >> /etc/nova/nova.conf << EOF
[DEFAULT]
transport_url = rabbit://openstack:OpenStack%40123@controller:5672/
my_ip = 192.168.10.11
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = Nova@123
[placement]
auth_url = http://controller:5000/v3
region_name = RegionOne
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = Placement@123
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = \$my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[libvirt]
virt_type = kvm
EOF

# 4. Verify CPU virtualization support
egrep -c '(vmx|svm)' /proc/cpuinfo
# >0 means hardware virtualization is available; if 0, enable it in the BIOS (or fall back to virt_type = qemu)

# 5. Install the Neutron Linux bridge agent
yum install -y openstack-neutron-linuxbridge ebtables ipset
cat >> /etc/neutron/neutron.conf << EOF
[DEFAULT]
transport_url = rabbit://openstack:OpenStack%40123@controller:5672/
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = Neutron@123
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
EOF

cat >> /etc/neutron/plugins/ml2/linuxbridge_agent.ini << EOF
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = true
local_ip = 192.168.10.11
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
EOF

# 6. Start the services
systemctl enable --now libvirtd openstack-nova-compute neutron-linuxbridge-agent

# 7. Discover the compute node from the controller
source /root/admin-openrc
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
openstack compute service list
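The virtualization check in step 4 can also drive the `[libvirt]` `virt_type` choice automatically, since the standard fallback when the CPU flags are absent is software emulation with qemu. A sketch (the helper name is mine):

```bash
# Choose the nova-compute virtualization driver from the CPU-flag count.
pick_virt_type() {
  if [ "$1" -gt 0 ]; then
    echo kvm    # hardware acceleration available
  else
    echo qemu   # pure software emulation fallback
  fi
}

# '|| true' keeps the pipeline alive when egrep matches nothing (count = 0)
count=$(egrep -c '(vmx|svm)' /proc/cpuinfo || true)
echo "virt_type = $(pick_virt_type "$count")"
```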

5. Storage Node Deployment (Cinder LVM Backend)

bash


# 1. Apply the same base configuration as on the controller node
# 2. Install LVM2 and the Cinder volume service
yum install -y lvm2 openstack-cinder targetcli python-keystone

# 3. Create the LVM physical volume and volume group
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

# 4. Configure lvm.conf (the filter must live inside the devices {} section;
#    the original 's/.../.../' sed form breaks because the replacement contains '/')
sed -i '/^devices {/a\        filter = [ "a/sdb/", "r/.*/" ]' /etc/lvm/lvm.conf

# 5. Configure cinder.conf
cat >> /etc/cinder/cinder.conf << EOF
[DEFAULT]
transport_url = rabbit://openstack:OpenStack%40123@controller:5672/
auth_strategy = keystone
my_ip = 192.168.10.12
enabled_backends = lvm
[database]
connection = mysql+pymysql://cinder:Cinder%40123@controller/cinder
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = Cinder@123
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
EOF

# 6. Start the services
systemctl enable --now openstack-cinder-volume target

# 7. Verify the service
openstack volume service list

6. Private Cloud Operations and Monitoring

6.1 Routine Operations

bash


# 1. Check service status (controller node)
systemctl status openstack-nova-api neutron-server openstack-cinder-api httpd mariadb rabbitmq-server

# 2. Instance lifecycle management
# Create a keypair
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
# Create the external (provider) network
openstack network create --external --provider-physical-network provider --provider-network-type flat provider
openstack subnet create --network provider --allocation-pool start=192.168.10.200,end=192.168.10.250 --dns-nameserver 223.5.5.5 --gateway 192.168.10.1 --subnet-range 192.168.10.0/24 provider-v4
# Launch an instance (assumes the m1.tiny flavor and a self-service network named "private" already exist)
openstack server create --flavor m1.tiny --image cirros --key-name mykey --network private test-vm

# 3. Backup and restore
# Daily database backup
mysqldump -u root -p --all-databases > /backup/openstack_db_$(date +%Y%m%d).sql
# Configuration backup
tar -zcvf /backup/openstack_config_$(date +%Y%m%d).tar.gz /etc/ /var/lib/glance/images/
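The daily backups above grow without bound, so a retention policy belongs next to them. A simple sketch (the helper name and the 30-day window are my assumptions; adjust the patterns to your backup naming):

```bash
# Delete backup files in a directory older than KEEP_DAYS days.
prune_backups() {
  local dir="$1" keep_days="$2"
  find "$dir" -maxdepth 1 -type f \
       \( -name 'openstack_db_*.sql' -o -name 'openstack_config_*.tar.gz' \) \
       -mtime +"$keep_days" -delete
}

# Usage (e.g. from a daily cron job): prune_backups /backup 30
```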

6.2 Monitoring (Prometheus + Grafana)

bash


# 1. Install Prometheus (node_exporter must also be installed on every node scraped on port 9100)
yum install -y prometheus
# Add the job under the EXISTING scrape_configs: section of
# /etc/prometheus/prometheus.yml -- appending a second scrape_configs: key
# would make the YAML invalid:
#   - job_name: 'openstack'
#     static_configs:
#       - targets: ['controller:9090', 'compute1:9100', 'storage1:9100']
systemctl enable --now prometheus

# 2. Install Grafana
yum install -y grafana
systemctl enable --now grafana-server
# Browse to http://controller:3000, add the Prometheus data source, and import an OpenStack dashboard (e.g. ID 7230)

7. Common Troubleshooting and Performance Tuning

7.1 Common Faults

| Symptom                 | Diagnosis steps                                                                                          | Resolution                                                                            |
|-------------------------|----------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------|
| Instance won't start    | 1. Check the nova-compute service status 2. Review /var/log/nova/nova-compute.log 3. Verify libvirt state | 1. Restart nova-compute 2. Fix configuration errors 3. Enable CPU virtualization       |
| No network connectivity | 1. Check neutron-linuxbridge-agent status 2. Inspect network namespaces 3. Verify the VXLAN tunnels       | 1. Restart the Neutron agents 2. Check iptables rules 3. Repair the VXLAN port         |
| Service fails to start  | 1. Review journalctl logs 2. Verify database connectivity 3. Check configuration file syntax              | 1. Fix configuration errors 2. Reset database passwords 3. Restart dependent services  |
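Most of the diagnosis steps above begin with the service logs, so a quick triage helper for scanning the tail of a log is worth keeping at hand (a sketch; the function name is mine):

```bash
# Print ERROR/CRITICAL/Traceback lines from the last N lines of a log file.
recent_errors() {
  local log="$1" n="${2:-200}"
  tail -n "$n" "$log" | grep -E 'ERROR|CRITICAL|Traceback' || true
}

# Usage: recent_errors /var/log/nova/nova-compute.log 500
```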

7.2 Performance Tuning

  1. Database: set innodb_buffer_pool_size to about 50% of physical memory and enable the slow-query log
  2. Compute nodes: enable KVM hardware acceleration, tune libvirt disk caching, and disable unneeded services
  3. Network: use jumbo frames to improve VXLAN throughput and tune TCP parameters
  4. Storage: use SSDs for the controller node, enable LVM caching, and configure multiple Cinder backends
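For item 1, the buffer pool value can be derived directly from /proc/meminfo rather than guessed. A sketch of the arithmetic (the helper name is mine):

```bash
# Half of total memory in MB, given MemTotal in kB (the unit used by /proc/meminfo).
buffer_pool_mb() {
  echo $(( $1 / 1024 / 2 ))
}

# e.g. on a 16 GB node this suggests innodb_buffer_pool_size = 8192M
buffer_pool_mb "$(awk '/MemTotal/{print $2}' /proc/meminfo)"
```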

8. Summary

Based on CentOS 7, this article walked through the full lifecycle of an OpenStack private cloud: deployment, operations, monitoring, and tuning. All steps have been verified hands-on and can be applied directly. With this approach you can quickly stand up an enterprise-grade private cloud that meets core needs such as moving workloads to the cloud, elastic resource scheduling, and data security and compliance.
