Deploying OpenStack (Ussuri) on CentOS 8

it · 2025-08-11

Introduction

OpenStack is an open-source cloud computing management platform: a collection of open-source software projects jointly initiated by NASA and Rackspace and released under the Apache license. OpenStack provides scalable, elastic cloud computing services for both private and public clouds; its goal is a cloud management platform that is simple to deploy, massively scalable, feature-rich, and standardized. OpenStack spans networking, virtualization, operating systems, servers, and more. It is an actively developed platform whose projects are grouped by maturity and importance into core, incubated, supporting, and related projects. Each project has its own committee and project technical lead, and the grouping is not fixed: an incubated project can be promoted to a core project as it matures.

Core components:

1. Compute (Nova): a set of controllers that manage the entire lifecycle of virtual machine instances for users and groups, providing virtual servers on demand. Nova handles creating, booting, shutting down, suspending, pausing, resizing, migrating, rebooting, and destroying instances, and configures specifications such as CPU and memory.
2. Object Storage (Swift): object storage for large-scale, highly scalable systems with built-in redundancy and fault tolerance. It stores and retrieves files, and can provide image storage for Glance and volume backup for Cinder.
3. Image Service (Glance): a lookup and retrieval system for virtual machine images supporting many formats (AKI, AMI, ARI, ISO, QCOW2, Raw, VDI, VHD, VMDK), with functions for creating, uploading, and deleting images and editing basic image metadata.
4. Identity Service (Keystone): provides authentication, service rules, and service tokens for the other OpenStack services, and manages domains, projects, users, groups, and roles.
5. Networking (Neutron): network virtualization and connectivity for the other OpenStack services. Through its API users can define networks, subnets, and routers, and configure DHCP, DNS, load balancing, and L3 services. Networks support GRE and VLAN, and the plugin architecture supports mainstream vendors and technologies such as Open vSwitch.
6. Block Storage (Cinder): stable block storage for running instances. Its plugin driver architecture simplifies creating and managing block devices: creating and deleting volumes, and attaching and detaching them from instances.
7. Dashboard (Horizon): the web management portal for the various OpenStack services, simplifying operations such as launching instances, assigning IP addresses, and configuring access control.
8. Metering (Ceilometer): collects nearly every event that occurs inside OpenStack and supplies the data for billing, monitoring, and other services.
9. Orchestration (Heat): template-driven orchestration that automates the deployment of cloud infrastructure (compute, storage, and network resources).
10. Database Service (Trove): scalable, reliable relational and non-relational database engines in an OpenStack environment.

Prerequisites

Prepare two CentOS 8 virtual machines, each with two disks. On both, configure the IP address and hostname, synchronize the system time, disable the firewall and SELinux, and add the IP-to-hostname mappings below to /etc/hosts.

IP              hostname
192.168.29.148  controller
192.168.29.149  computer
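It is easy to miss one of the preparation steps above, so here is a sketch of a pre-flight check script (the `check_prereqs` helper name is made up; addresses and hostnames are taken from the table, adjust for your environment). It prints a warning for each prerequisite that is not met:

```shell
#!/bin/sh
# Hypothetical pre-flight check: run on both nodes before installing packages.
check_prereqs() {
    # The hostname mappings must be present in /etc/hosts on both nodes
    for entry in "192.168.29.148 controller" "192.168.29.149 computer"; do
        grep -q "$entry" /etc/hosts || echo "missing /etc/hosts entry: $entry"
    done
    # SELinux should be permissive or disabled
    [ "$(getenforce 2>/dev/null)" = "Enforcing" ] && echo "SELinux is still enforcing"
    # firewalld should be stopped
    systemctl is-active --quiet firewalld 2>/dev/null && echo "firewalld is still running"
    # Time must be synchronized (chrony)
    chronyc tracking >/dev/null 2>&1 || echo "chrony is not running or not synchronized"
}
check_prereqs
```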

Installing the services

Install the EPEL repository

[root@controller ~]# yum install epel-release -y
[root@computer ~]# yum install epel-release -y

Install the OpenStack repository

[root@controller ~]# yum install centos-release-openstack-ussuri -y
[root@controller ~]# yum config-manager --set-enabled PowerTools
[root@controller ~]# dnf install https://www.rdoproject.org/repos/rdo-release.el8.rpm
[root@computer ~]# yum install centos-release-openstack-ussuri -y
[root@computer ~]# yum config-manager --set-enabled PowerTools
[root@computer ~]# dnf install https://www.rdoproject.org/repos/rdo-release.el8.rpm

Install the OpenStack client and SELinux support

[root@controller ~]# yum install python3-openstackclient openstack-selinux -y
[root@computer ~]# yum install python3-openstackclient openstack-selinux -y

Install MariaDB and memcached

[root@controller ~]# yum install mariadb mariadb-server python3-PyMySQL memcached python3-memcached -y

Install the message queue service

[root@controller ~]# yum install rabbitmq-server -y

Install the Keystone service

[root@controller ~]# yum install openstack-keystone httpd python3-mod_wsgi -y

Install the Glance service

[root@controller ~]# yum install openstack-glance -y

Install the Placement service

[root@controller ~]# yum install openstack-placement-api -y

Install the Nova services on controller

[root@controller ~]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y

Install the Nova compute service on computer

[root@computer ~]# yum install openstack-nova-compute -y

Install the Neutron services on controller

[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

Install the Neutron services on computer

[root@computer ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y

Install the Dashboard component

[root@controller ~]# yum install openstack-dashboard -y

Install the Cinder services on controller

[root@controller ~]# yum install openstack-cinder -y

Install the Cinder and LVM services on computer

[root@computer ~]# yum install lvm2 device-mapper-persistent-data openstack-cinder targetcli python3-keystone -y

Enable hardware acceleration

[root@controller ~]# modprobe kvm-intel
[root@computer ~]# modprobe kvm-intel

Configure the message queue service

Start the service

[root@controller ~]# systemctl start rabbitmq-server.service
[root@controller ~]# systemctl enable rabbitmq-server.service

Add a user

#Creates user openstack with password openstack
[root@controller ~]# rabbitmqctl add_user openstack openstack

Grant permissions (the three ".*" patterns are configure, write, and read on all resources)

[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Configure the memcached service

Edit the configuration file

[root@controller ~]# vi /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,controller"

Start the service

[root@controller ~]# systemctl start memcached.service
[root@controller ~]# systemctl enable memcached.service

Configure the database service

Edit the configuration file

[root@controller ~]# vi /etc/my.cnf.d/mariadb-server.cnf
#Add under the existing [mysqld] section
bind-address = 192.168.29.148
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Start the service

[root@controller ~]# systemctl start mariadb.service
[root@controller ~]# systemctl enable mariadb.service

Create the databases

[root@controller ~]# mysql -u root -p
MariaDB [(none)]> create database keystone;
MariaDB [(none)]> create database glance;
MariaDB [(none)]> create database nova;
MariaDB [(none)]> create database nova_api;
MariaDB [(none)]> create database nova_cell0;
MariaDB [(none)]> create database neutron;
MariaDB [(none)]> create database cinder;
MariaDB [(none)]> create database placement;

Grant database privileges

MariaDB [(none)]> grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'your_password';
MariaDB [(none)]> grant all privileges on keystone.* to 'keystone'@'%' identified by 'your_password';
MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'localhost' identified by 'your_password';
MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'%' identified by 'your_password';
MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'localhost' identified by 'your_password';
MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'%' identified by 'your_password';
MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'localhost' identified by 'your_password';
MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'%' identified by 'your_password';
MariaDB [(none)]> grant all privileges on nova_cell0.* to 'nova'@'localhost' identified by 'your_password';
MariaDB [(none)]> grant all privileges on nova_cell0.* to 'nova'@'%' identified by 'your_password';
MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'your_password';
MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'%' identified by 'your_password';
MariaDB [(none)]> grant all privileges on cinder.* to 'cinder'@'localhost' identified by 'your_password';
MariaDB [(none)]> grant all privileges on cinder.* to 'cinder'@'%' identified by 'your_password';
MariaDB [(none)]> grant all privileges on placement.* to 'placement'@'localhost' identified by 'your_password';
MariaDB [(none)]> grant all privileges on placement.* to 'placement'@'%' identified by 'your_password';
MariaDB [(none)]> flush privileges;
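Every service follows the same database/user/grant pattern, so the statements above can be generated with a short shell loop. This is just a sketch (the `gen_sql` helper is made up, and your_password remains a placeholder), but it keeps the password in one place:

```shell
#!/bin/sh
# Sketch: emit the CREATE DATABASE and GRANT statements for all services.
DB_PASS=your_password
gen_sql() {
    for db in keystone glance nova nova_api nova_cell0 neutron cinder placement; do
        echo "CREATE DATABASE IF NOT EXISTS ${db};"
        # nova_api and nova_cell0 are owned by the nova user; the rest match their db
        case $db in
            nova_api|nova_cell0) user=nova ;;
            *) user=$db ;;
        esac
        for host in localhost '%'; do
            echo "GRANT ALL PRIVILEGES ON ${db}.* TO '${user}'@'${host}' IDENTIFIED BY '${DB_PASS}';"
        done
    done
    echo "FLUSH PRIVILEGES;"
}
gen_sql
```

The output can be piped straight into the server, e.g. `gen_sql | mysql -u root -p`.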

Configure the Keystone service

Edit the configuration file

[root@controller ~]# vi /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:your_password@controller/keystone
[token]
provider = fernet

Synchronize the database

[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone

Initialize the key repositories

[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage bootstrap --bootstrap-password openstack \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

Configure the httpd service

#Edit the configuration file
[root@controller ~]# vi /etc/httpd/conf/httpd.conf
ServerName controller
#Create a symbolic link
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
#Start the service
[root@controller ~]# systemctl start httpd
[root@controller ~]# systemctl enable httpd

Create the admin credentials script

[root@controller ~]# vi admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_VOLUME_API_VERSION=2

Verify the environment

[root@controller ~]# source admin-openrc
[root@controller ~]# openstack token issue

Create the service project

[root@controller ~]# openstack project create --domain default --description "Service Project" service

Create the demo project

[root@controller ~]# openstack project create --domain default --description "Demo Project" demo

Create the demo user

[root@controller ~]# openstack user create --domain default --password-prompt demo

Create the user role

[root@controller ~]# openstack role create user

Add the user role to the demo project and user

[root@controller ~]# openstack role add --project demo --user demo user

Create the demo credentials script

[root@controller ~]# vi demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Configure the Glance service

Create and configure the glance user

[root@controller ~]# openstack user create --domain default --password-prompt glance
[root@controller ~]# openstack role add --project service --user glance admin

Create the glance service entity

[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image

Create the glance service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
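Every service in this guide registers the same URL for the public, internal, and admin interfaces. A small helper can generate the three commands; this is a sketch (the `create_endpoints` name is made up) that echoes the commands as a dry run, and dropping the `echo` would execute them for real:

```shell
#!/bin/sh
# Sketch: print the three endpoint-create commands for one service.
# $1 = service type, $2 = URL.
create_endpoints() {
    svc=$1; url=$2
    for iface in public internal admin; do
        echo "openstack endpoint create --region RegionOne $svc $iface $url"
    done
}
create_endpoints image http://controller:9292
```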

Edit the configuration file

[root@controller ~]# vi /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:your_password@controller/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Synchronize the database

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance

Start the service

[root@controller ~]# systemctl enable openstack-glance-api.service
[root@controller ~]# systemctl start openstack-glance-api.service

Upload an image

[root@controller ~]# openstack image create "Centos7" --file CentOS-7-x86_64-GenericCloud-1907.qcow2 --disk-format qcow2 --container-format bare --public
#List images
[root@controller ~]# openstack image list

Configure the Placement service on controller

Create and configure the placement user

[root@controller ~]# openstack user create --domain default --password-prompt placement
[root@controller ~]# openstack role add --project service --user placement admin

Create the placement service entity

[root@controller ~]# openstack service create --name placement --description "Placement API" placement

Create the placement service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778

Edit the configuration file

[root@controller ~]# vi /etc/placement/placement.conf
[placement_database]
connection = mysql+pymysql://placement:your_password@controller/placement
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = placement

Synchronize the database

[root@controller ~]# su -s /bin/sh -c "placement-manage db sync" placement

Restart the service

[root@controller ~]# systemctl restart httpd

Configure the Nova services on controller

Create and configure the nova user

[root@controller ~]# openstack user create --domain default --password-prompt nova
[root@controller ~]# openstack role add --project service --user nova admin

Create the nova service entity

[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute

Create the nova service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

Edit the configuration files

[root@controller ~]# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
my_ip = 192.168.29.148
transport_url = rabbit://openstack:openstack@controller:5672/
[api_database]
connection = mysql+pymysql://nova:your_password@controller/nova_api
[database]
connection = mysql+pymysql://nova:your_password@controller/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement

#On CentOS 8, httpd denies access to /usr/bin by default, which breaks the
#packaged Placement WSGI application; add this inside the existing
#<VirtualHost *:8778> block shipped by openstack-placement-api:
[root@controller ~]# vi /etc/httpd/conf.d/00-placement-api.conf
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>

Restart the httpd service

[root@controller ~]# systemctl restart httpd

Synchronize the databases

[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova [root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova [root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova [root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova

Verify

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

Start the services

[root@controller ~]# systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@controller ~]# systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Configure the Nova service on computer

Edit the configuration file

[root@computer ~]# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@controller
my_ip = 192.168.29.149
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
[libvirt]
virt_type = kvm
#Use qemu instead when the compute node is itself a virtual machine
#virt_type = qemu

Start the services

[root@computer ~]# systemctl start libvirtd.service openstack-nova-compute.service
[root@computer ~]# systemctl enable libvirtd.service openstack-nova-compute.service

Register the compute node in the cell database on controller

#List the nova-compute services
[root@controller ~]# openstack compute service list --service nova-compute
#Register the new host in the database
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
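Rather than rerunning discover_hosts by hand every time a compute node is added, nova-scheduler can discover new hosts periodically via a standard nova option. A sketch of the setting (the 300-second interval is an example value, not from the original guide):

```ini
# /etc/nova/nova.conf on controller
[scheduler]
discover_hosts_in_cells_interval = 300
```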

Configure the Neutron services on controller

Create and configure the neutron user

[root@controller ~]# openstack user create --domain default --password-prompt neutron
[root@controller ~]# openstack role add --project service --user neutron admin

Create the neutron service entity

[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network

Create the neutron service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696

Edit the configuration files (Linux bridge architecture)

[root@controller ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
connection = mysql+pymysql://neutron:your_password@controller/neutron
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true

[root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens160
[vxlan]
enable_vxlan = true
local_ip = 192.168.29.148
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[root@controller ~]# vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge

[root@controller ~]# vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

[root@controller ~]# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 000000

[root@controller ~]# vi /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = 000000

Create a symbolic link

[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Synchronize the database

[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Start the services

#Restart the nova-api service
[root@controller ~]# systemctl restart openstack-nova-api.service
#Linux bridge architecture
[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
[root@controller ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

Configure the Neutron service on computer

Edit the configuration files (Linux bridge architecture)

[root@computer ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

[root@computer ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens160
[vxlan]
enable_vxlan = true
local_ip = 192.168.29.149
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[root@computer ~]# vi /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron

Start the services

#Restart the nova-compute service
[root@computer ~]# systemctl stop openstack-nova-compute.service
[root@computer ~]# systemctl start openstack-nova-compute.service
#Note: a plain "systemctl restart" here has been known to fail
#Linux bridge architecture
[root@computer ~]# systemctl start neutron-linuxbridge-agent.service
[root@computer ~]# systemctl enable neutron-linuxbridge-agent.service

Verify

[root@controller ~]# openstack network agent list
#Check the logs
[root@computer ~]# tail /var/log/nova/nova-compute.log

Configure the Dashboard component

Edit the configuration files

[root@controller ~]# vi /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
WEBROOT = '/dashboard/'
ALLOWED_HOSTS = ['*']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

[root@controller ~]# vi /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}

Restart the services

[root@controller ~]# systemctl restart httpd.service memcached.service

Access the web UI by browsing to http://ip/dashboard (substitute the controller's IP address).

Configure the Cinder services on controller

Create and configure the cinder user

[root@controller ~]# openstack user create --domain default --password-prompt cinder
[root@controller ~]# openstack role add --project service --user cinder admin

Create the cinder service entities

[root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
[root@controller ~]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

Create the cinder service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s

Edit the configuration files

[root@controller ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 192.168.29.148
[database]
connection = mysql+pymysql://cinder:your_password@controller/cinder
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[root@controller ~]# vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne

Synchronize the database

[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder

Start the services

[root@controller ~]# systemctl restart openstack-nova-api.service
[root@controller ~]# systemctl restart httpd memcached
[root@controller ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

Configure the Cinder service on computer

Prepare the Cinder data disk

[root@computer ~]# mkfs.xfs -f /dev/nvme0n2p1

Create the LVM volume group

[root@computer ~]# pvcreate /dev/nvme0n2p1
[root@computer ~]# vgcreate cinder-volumes /dev/nvme0n2p1

Edit the configuration file

[root@computer ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 192.168.29.149
enabled_backends = lvm
glance_api_servers = http://controller:9292
[database]
connection = mysql+pymysql://cinder:your_password@controller/cinder
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
#Add the [lvm] section if it does not already exist
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

Start the services

[root@computer ~]# systemctl start openstack-cinder-volume.service target.service
[root@computer ~]# systemctl enable openstack-cinder-volume.service target.service

Check the service status

[root@controller ~]# openstack volume service list

Create a volume

#Create a 1 GB volume
[root@controller ~]# cinder create --name demo_volume 1

Attach a volume

#Find the volume id
[root@controller ~]# cinder list
#Attach the volume to an instance
[root@controller ~]# nova volume-attach mycentos e9804810-9dce-47f6-84f7-25a8da672800