OpenStack Ocata Deployment on CentOS 7¶
Environment Preparation¶
OpenStack supported systems: openSUSE Leap 42.1, SUSE Linux Enterprise Server 12 SP1, Red Hat Enterprise Linux 7, CentOS 7, and Ubuntu 14.04 (LTS). CentOS 7 is used for this deployment.
Security¶
In an internal (trusted) network environment, SELinux and firewalld can be disabled:
# setenforce 0 (temporarily disables SELinux)
# systemctl stop firewalld
# systemctl disable firewalld
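Note that setenforce 0 only lasts until the next reboot. To make the change persistent, one common approach (a sketch, not part of the original steps) is to edit /etc/selinux/config:
# vim /etc/selinux/config
SELINUX=permissive (or disabled; takes effect at the next boot)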
Hostname¶
Edit the /etc/hosts file on all nodes.
Add entries for your environment, as below (the cinder node may also be called block):
controller_IP controller
compute1_IP compute1
cinder_IP cinder
ceph_IP ceph
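For example, with the hypothetical management addresses used by the official guide's convention (the ceph entry is likewise illustrative), /etc/hosts would contain:
10.0.0.11 controller
10.0.0.31 compute1
10.0.0.41 cinder
10.0.0.51 ceph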
Set the hostname on each node:
# hostnamectl set-hostname controller
Time Synchronization (NTP)¶
Time synchronization matters because when clocks drift, several components, including Nova, Neutron, and Cinder, will show their services as down.
# yum install chrony
# vim /etc/chrony.conf
server NTP_SERVER iburst (add this line and replace NTP_SERVER with the actual server IP)
allow 0.0.0.0 (set on the NTP server itself; allows the given network to synchronize)
# systemctl enable chronyd.service
# systemctl start chronyd.service
Verify:
# chronyc sources
210 Number of sources = 2
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller 3 9 377 421 +15us[ -87us] +/- 15ms
OpenStack Yum Packages¶
Install the OpenStack yum repository:
# yum install centos-release-openstack-ocata (skipped here: a local repository was used because of the customer's network environment)
# yum install python-openstackclient
# yum install openstack-selinux
Database Service (MariaDB)¶
MariaDB installed from yum is used here (the CentOS 7 repositories do not carry MySQL itself, but the largely compatible MariaDB can be installed with yum).
# yum install mariadb mariadb-server python2-PyMySQL
# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 10.0.0.11 (the controller node's management IP address)
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
# systemctl enable mariadb.service
# systemctl start mariadb.service
Initialize the database:
# mysql_secure_installation
Note
Check here whether the database configuration actually took effect (with the official configuration alone it did not in this deployment; the additional settings below were needed).
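A quick way to check is to query the variables set in openstack.cnf, for example (standard MariaDB statements, not part of the original steps):
MariaDB [(none)]> SHOW VARIABLES LIKE 'character_set_server';
MariaDB [(none)]> SHOW VARIABLES LIKE 'innodb_file_per_table';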
Adjusting the MariaDB maximum connection count
During this deployment, errors caused by the MariaDB connection limit came up repeatedly. Although max_connections was set to 4096 as the official docs describe, querying showed only 214 (check max_connections):
MariaDB [(none)]> show global variables like '%connect%';
MariaDB has a default open-file limit, so the number of open files must be raised in the mariadb.service unit:
# vim /usr/lib/systemd/system/mariadb.service
Add the following two lines under the [Service] section:
LimitNOFILE=10000
LimitNPROC=10000
Reload the systemd units and restart the MariaDB service:
# systemctl --system daemon-reload
# systemctl restart mariadb.service
Check the maximum connection count again:
MariaDB [(none)]> show variables like 'max_connections';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 4096  |
+-----------------+-------+
Message Queue (RabbitMQ)¶
# yum install rabbitmq-server
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
# rabbitmqctl add_user openstack RABBIT_PASS (replace RABBIT_PASS with the RabbitMQ password)
Creating user "openstack" ...
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
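To confirm the account and its permissions, the standard rabbitmqctl listing commands can be used as an optional check:
# rabbitmqctl list_users
# rabbitmqctl list_permissions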
Caching Service (Memcached)¶
# yum install memcached python-memcached
# vim /etc/sysconfig/memcached
OPTIONS="-l 127.0.0.1,::1,controller"
# systemctl enable memcached.service
# systemctl start memcached.service
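As an optional sanity check, verify that memcached is listening on the configured addresses (using the standard ss utility):
# ss -tlnp | grep memcached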
Installing the OpenStack Components¶
Identity Service (Keystone)¶
1. Configure the database
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller' IDENTIFIED BY 'KEYSTONE_DBPASS';
Note: granting access only for localhost or only for controller can leave the database unreachable, so three users are usually added, for 'controller', 'localhost', and '%'. After configuring, verify that each of these users can actually log in.
2. Install the packages
# yum install openstack-keystone httpd mod_wsgi
# vim /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
[token]
provider = fernet
3. Populate the Identity service database
# su -s /bin/sh -c "keystone-manage db_sync" keystone
4. Initialize the Fernet and credential key repositories
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
5. Bootstrap the Identity service
# keystone-manage bootstrap --bootstrap-password ADMIN_PASS --bootstrap-admin-url http://controller:35357/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne
Replace ADMIN_PASS with the admin password you want to set.
6. Configure the Apache HTTP server
In Ocata, keystone runs under httpd and starts and stops with it.
# vim /etc/httpd/conf/httpd.conf
ServerName controller
# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
# systemctl enable httpd.service
# systemctl start httpd.service
7. Configure environment variables
$ export OS_USERNAME=admin
$ export OS_PASSWORD=ADMIN_PASS (set ADMIN_PASS to the password chosen during bootstrap)
$ export OS_PROJECT_NAME=admin
$ export OS_USER_DOMAIN_NAME=Default
$ export OS_PROJECT_DOMAIN_NAME=Default
$ export OS_AUTH_URL=http://controller:35357/v3
$ export OS_IDENTITY_API_VERSION=3
8. Create the service project
$ openstack project create --domain default --description "Service Project" service
9. Create the demo project
$ openstack project create --domain default --description "Demo Project" demo
10. Create the demo user
$ openstack user create --domain default --password-prompt demo
User Password:
Repeat User Password:
11. Create the user role
$ openstack role create user
12. Add the user role to the demo project and user
$ openstack role add --project demo --user demo user
13. Verification
For security, disable the temporary admin-token authentication mechanism:
# vim /etc/keystone/keystone-paste.ini
Remove the admin_token_auth entry from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.
Verify token issuance:
$ unset OS_AUTH_URL OS_PASSWORD
$ openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
Password:
$ openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue
Password:
14. For convenience, write the credentials to environment files; replace the PASS values with the passwords you set
# vim /root/admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
# vim /root/demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
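The files can then be sourced before running client commands, for example:
$ . admin-openrc
$ openstack token issue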
Image Service (Glance)¶
1. Configure the database
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller' IDENTIFIED BY 'GLANCE_DBPASS';
2. Create the service entity and API endpoints
$ . admin-openrc
$ openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
$ openstack role add --project service --user glance admin
$ openstack service create --name glance --description "OpenStack Image" image
$ openstack endpoint create --region RegionOne image public http://controller:9292
$ openstack endpoint create --region RegionOne image internal http://controller:9292
$ openstack endpoint create --region RegionOne image admin http://controller:9292
3. Install and configure the components
# yum install openstack-glance
# vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
# vim /etc/glance/glance-registry.conf
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
flavor = keystone
4. Populate the Image service database
# su -s /bin/sh -c "glance-manage db_sync" glance
5. Start the services and enable them at boot
# systemctl enable openstack-glance-api.service openstack-glance-registry.service
# systemctl start openstack-glance-api.service openstack-glance-registry.service
6. Download images. The official test image:
# wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
The official CentOS cloud image:
# wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1705.qcow2.xz
Official images: http://cloud.centos.org/centos/7/images/
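Note that the CentOS image above is xz-compressed, and Glance needs the plain qcow2 file, so decompress it before uploading:
# xz -d CentOS-7-x86_64-GenericCloud-1705.qcow2.xz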
7. Upload an image to Glance (using the cirros image as an example)
# . admin-openrc
# openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
$ openstack image list
Compute Service (Nova)¶
Controller Node¶
1. Configure the database
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'controller' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'controller' IDENTIFIED BY 'NOVA_DBPASS';
2. Create the service entities and API endpoints
$ . admin-openrc
$ openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
$ openstack role add --project service --user nova admin
$ openstack service create --name nova --description "OpenStack Compute" compute
$ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
$ openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
$ openstack role add --project service --user placement admin
$ openstack service create --name placement --description "Placement API" placement
$ openstack endpoint create --region RegionOne placement public http://CONTROLLER_IP:8778
$ openstack endpoint create --region RegionOne placement internal http://CONTROLLER_IP:8778
$ openstack endpoint create --region RegionOne placement admin http://CONTROLLER_IP:8778
Placement is a component newly added in Ocata. The official docs use the controller hostname directly here, but following that causes problems in later steps, so it is best to use the controller's IP address.
3. Install and configure the components
# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 10.141.128.11 (use the controller node's IP address here)
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
# vim /etc/httpd/conf.d/00-nova-placement-api.conf
Add:
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
# systemctl restart httpd
# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
109e1d4b-536a-40d0-83c6-5f121b82b650
# su -s /bin/sh -c "nova-manage db sync" nova
# nova-manage cell_v2 list_cells
4. Start the services and enable them at boot
# systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Compute Node¶
1. Install the packages
# yum install openstack-nova-compute
2. Configure the service
# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS (use the compute node's IP address here)
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html (using controller here produces no error, but the VNC console in the dashboard will not work later, so using the controller's IP is recommended)
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
3. Check for hardware acceleration support
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If the result is 0, edit /etc/nova/nova.conf:
[libvirt]
virt_type = qemu
4. Start the services
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
5. Add the compute node to the cell database
Cells is one of the components newly added in Ocata. According to the official description it is currently a transitional feature supporting only a single cell; a distributed cells component is expected in Pike or later.
$ . admin-openrc
$ openstack hypervisor list
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Alternatively, to have new compute hosts discovered automatically, set the discovery interval:
# vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300
6. Verification
$ . admin-openrc
$ openstack compute service list
The output should look like the following:
+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1  | nova-consoleauth | controller | internal | enabled | up    | 2016-02-09T23:11:15.000000 |
| 2  | nova-scheduler   | controller | internal | enabled | up    | 2016-02-09T23:11:15.000000 |
| 3  | nova-conductor   | controller | internal | enabled | up    | 2016-02-09T23:11:16.000000 |
| 4  | nova-compute     | compute1   | nova     | enabled | up    | 2016-02-09T23:11:20.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
With multiple compute nodes there will be more nova-compute entries. If the output differs, check time synchronization first; clock skew will affect the state shown here.
$ openstack catalog list
$ openstack image list
# nova-status upgrade check
Note that if you follow the official docs, the last check fails at this point. The reason is simple: httpd needs to be restarted, which the official docs omit:
# systemctl restart httpd
Networking Service (Neutron)¶
Controller Node¶
1. Configure the database
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller' IDENTIFIED BY 'NEUTRON_DBPASS';
2. Create the service entity and API endpoints
$ . admin-openrc
$ openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
$ openstack role add --project service --user neutron admin
$ openstack service create --name neutron --description "OpenStack Networking" network
$ openstack endpoint create --region RegionOne network public http://controller:9696
$ openstack endpoint create --region RegionOne network internal http://controller:9696
$ openstack endpoint create --region RegionOne network admin http://controller:9696
3. Install and configure the components
The official docs offer two options here: Networking Option 1 (Provider networks) and Networking Option 2 (Self-service networks). Option 2 is a superset of Option 1, so we choose Networking Option 2: Self-service networks.
# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
# vim /etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME (replace with the name of the NIC in use, e.g. em1)
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS (replace with the controller's IP address)
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET (set the metadata proxy secret here)
# vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
4. Start the services and enable them at boot
# systemctl restart openstack-nova-api.service
# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service
Compute Node¶
1. Install and configure the components
# yum install openstack-neutron-linuxbridge ebtables ipset
# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
As on the controller, the official docs offer two options here, and the choice must match the controller's, so choose Networking Option 2: Self-service networks.
# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME (replace with the NIC name used by this node, e.g. em1; check with ip a)
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS (replace with this node's IP address)
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
2. Start the service and enable it at boot
# systemctl restart openstack-nova-compute.service
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service
3. Verification
$ . admin-openrc
$ openstack extension list --network
$ openstack network agent list (the result should show four agents up on the controller plus one up per compute node)
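With a single compute node the output looks roughly as follows (ID column omitted; hostnames and values are illustrative):
+--------------------+------------+-------------------+-------+-------+---------------------------+
| Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------+------------+-------------------+-------+-------+---------------------------+
| Metadata agent     | controller | None              | True  | UP    | neutron-metadata-agent    |
| Linux bridge agent | controller | None              | True  | UP    | neutron-linuxbridge-agent |
| L3 agent           | controller | nova              | True  | UP    | neutron-l3-agent          |
| DHCP agent         | controller | nova              | True  | UP    | neutron-dhcp-agent        |
| Linux bridge agent | compute1   | None              | True  | UP    | neutron-linuxbridge-agent |
+--------------------+------------+-------------------+-------+-------+---------------------------+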
Dashboard (Horizon)¶
1. Install the packages
# yum install openstack-dashboard
# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['one.example.com', 'two.example.com'] (in an internal network this can be set to ['*'] to allow all hosts)
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
If SESSION_ENGINE cannot be found, look for CACHES instead: comment out the default CACHES block and paste the above in (the SESSION_ENGINE line belongs with it).
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
TIME_ZONE = "TIME_ZONE" (the default is UTC; it can be changed to, e.g., Asia/Shanghai, but httpd may then fail to restart. If that happens, keep the default UTC or comment out TIME_ZONE. Keep the quotation marks.)
2. Restart the httpd and memcached services
# systemctl restart httpd.service memcached.service
3. Verification. Browse to http://CONTROLLER_IP/dashboard. Log in as admin or demo with the ADMIN_PASS or DEMO_PASS set during the Identity service configuration.
Block Storage (Cinder)¶
Controller Node¶
1. Configure the database
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'controller' IDENTIFIED BY 'CINDER_DBPASS';
2. Create the service entities and API endpoints
$ . admin-openrc
$ openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
$ openstack role add --project service --user cinder admin
$ openstack service create --name cinderv2 \
--description "OpenStack Block Storage" volumev2
$ openstack service create --name cinderv3 \
--description "OpenStack Block Storage" volumev3
$ openstack endpoint create --region RegionOne \
volumev2 public http://controller:8776/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne \
volumev2 internal http://controller:8776/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne \
volumev2 admin http://controller:8776/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne \
volumev3 public http://controller:8776/v3/%\(project_id\)s
$ openstack endpoint create --region RegionOne \
volumev3 internal http://controller:8776/v3/%\(project_id\)s
$ openstack endpoint create --region RegionOne \
volumev3 admin http://controller:8776/v3/%\(project_id\)s
3. Install and configure the components
# yum install openstack-cinder
# vim /etc/cinder/cinder.conf
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11 (the controller's IP address)
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
# su -s /bin/sh -c "cinder-manage db sync" cinder
# vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
4. Start the services and enable them at boot
# systemctl restart openstack-nova-api.service
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
Storage Node¶
1. Install and start the LVM service
# yum install lvm2
# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service
2. Create the LVM physical volume and volume group on /dev/sdb
# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
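Optionally confirm the result with the standard LVM listing commands:
# pvs
# vgs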
3. Configure the LVM filter
# vim /etc/lvm/lvm.conf
devices {
...
filter = [ "a/sdb/", "r/.*/" ]
}
If errors occur here, repartitioning and reformatting the attached disk is recommended. Note also that if the operating system disk uses LVM as well, it must be accepted by the filter too (e.g. "a/sda/").
4. Install and configure the Cinder service
# yum install openstack-cinder targetcli python-keystone
# vim /etc/cinder/cinder.conf
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS (replace with the Cinder node's IP address)
enabled_backends = lvm
glance_api_servers = http://controller:9292
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[lvm] (this section does not exist by default; append it at the end of the file)
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
5. Start the services and enable them at boot
# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
6. Verification
$ . admin-openrc
$ openstack volume service list
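Both services should be up; the output looks roughly like this (hostnames are illustrative, and the volume host is reported as host@backend):
+------------------+------------+------+---------+-------+
| Binary           | Host       | Zone | Status  | State |
+------------------+------------+------+---------+-------+
| cinder-scheduler | controller | nova | enabled | up    |
| cinder-volume    | cinder@lvm | nova | enabled | up    |
+------------------+------------+------+---------+-------+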
Validating the OpenStack Environment¶
Use Heat to create a tenant subnet, create a router connecting the tenant subnet to the provider network, and create two VMs with attached Cinder volumes and floating IPs.
validation.env.yaml
parameters:
  key_name: key-renbin
  image: cirros-0.3.5-x86_64
  flavor: m1.tiny
  volume_size_server1: 5
  volume_size_server2: 10
  public_net: provider-net
  private_net_name: valid
  private_net_cidr: 192.168.0.0/24
  private_net_gateway: 192.168.0.254
  private_net_pool_start: 192.168.0.100
  private_net_pool_end: 192.168.0.200
validation.hot.yaml
heat_template_version: 2013-05-23

description: >
  Create a new neutron network plus a router to the public network,
  deploy two servers into the new network, assign floating IP addresses
  to each server, and attach Cinder volumes to each server.

parameters:
  key_name:
    type: string
    description: keypair name
  image:
    type: string
    description: image name
  flavor:
    type: string
    description: flavor name
  volume_size_server1:
    type: number
    description: cinder volume size server-1
  volume_size_server2:
    type: number
    description: cinder volume size server-2
  public_net:
    type: string
    description: provider network
  private_net_name:
    type: string
    description: private network name
  private_net_cidr:
    type: string
    description: private network CIDR
  private_net_gateway:
    type: string
    description: private network gateway
  private_net_pool_start:
    type: string
    description: start of private network IP address allocation pool
  private_net_pool_end:
    type: string
    description: end of private network IP address allocation pool

resources:
  private_net:
    type: OS::Neutron::Net
    properties:
      name: { get_param: private_net_name }

  private_subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: private_net }
      cidr: { get_param: private_net_cidr }
      gateway_ip: { get_param: private_net_gateway }
      allocation_pools:
        - start: { get_param: private_net_pool_start }
          end: { get_param: private_net_pool_end }

  router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info:
        network: { get_param: public_net }

  router_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: router }
      subnet_id: { get_resource: private_subnet }

  server1:
    type: OS::Nova::Server
    properties:
      name: Server1
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key_name }
      networks:
        - port: { get_resource: server1_port }

  server1_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: private_net }
      fixed_ips:
        - subnet_id: { get_resource: private_subnet }

  server1_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: { get_param: public_net }
      port_id: { get_resource: server1_port }

  server2:
    type: OS::Nova::Server
    properties:
      name: Server2
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key_name }
      networks:
        - port: { get_resource: server2_port }

  server2_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: private_net }
      fixed_ips:
        - subnet_id: { get_resource: private_subnet }

  server2_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: { get_param: public_net }
      port_id: { get_resource: server2_port }

  volume_server1:
    type: OS::Cinder::Volume
    properties:
      size: { get_param: volume_size_server1 }

  volume_server2:
    type: OS::Cinder::Volume
    properties:
      size: { get_param: volume_size_server2 }

  volume_attach_server1:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: server1 }
      volume_id: { get_resource: volume_server1 }
      mountpoint: /dev/vdb

  volume_attach_server2:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: server2 }
      volume_id: { get_resource: volume_server2 }
      mountpoint: /dev/vdb

outputs:
  server1_private_ip:
    description: IP address of server1 in private network
    value: { get_attr: [ server1, first_address ] }
  server1_public_ip:
    description: Floating IP address of server1 in public network
    value: { get_attr: [ server1_floating_ip, floating_ip_address ] }
  server2_private_ip:
    description: IP address of server2 in private network
    value: { get_attr: [ server2, first_address ] }
  server2_public_ip:
    description: Floating IP address of server2 in public network
    value: { get_attr: [ server2_floating_ip, floating_ip_address ] }
Create the stack with:
$ openstack stack create -e validation.env.yaml -f validation.hot.yaml test_stack
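The stack status and its outputs (including the assigned floating IPs) can then be checked with the standard Heat client commands:
$ openstack stack list
$ openstack stack output show test_stack --all
$ openstack server list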