OpenStack Pike Installation
OpenStack is still being tested here, and normal operation is not guaranteed.
Please note that this post may be revised.
Document update history
Initial draft v0.0 2017-11-23
Document version v0.1 2017-11-30: OpenStack Cinder installation completed
Document version v0.2 2017-12-01: fixed the linuxbridge_agent eth device; added SELinux enablement and iptables rules
Document version v0.3 2017-12-02: tested running nova after switching from iptables to firewalld
Document version v0.4 2017-12-03: tested after changing the KVM network
For a Korean installation guide, see the Ocata guide:
https://docs.openstack.org/ocata/ko_KR/install-guide-rdo/common/conventions.html#
For the English installation guide, see the Pike guide:
https://docs.openstack.org/install-guide/common/conventions.html#notices
The network and KVM setup will be written up later.
Some of the content differs from the official installation guide.
The network layout is still being tested, and the network sections of this document may change as a result.
Two networks were configured with KVM before starting:
controller node / compute node
Network layout
eth0 is the 40 network used as the controller (management) network, and eth1 is the 192 network that can reach the outside through NAT.
eth1, the provider network, is temporarily assigned an IP and used for the OpenStack installation.
This layout is still under test.
controller = eth0 + eth1
Management network eth0 = IP 40.0.0.101/24 (management network)
Provider network eth1 = 192.168.122.0/24
(uses the default NAT network)
compute = eth0 + eth1
Management network eth0 = IP 40.0.0.102/24 (management network); temporarily uses gateway 192.168.122.1
Provider network eth1 = 192.168.122.0/24 (uses the default NAT network)
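For reference, the controller's management interface could be configured as in the sketch below. This is only an assumption based on the addresses above (CentOS 7 network-scripts syntax); adjust the device name and address to your environment.

[root@controller ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth0
ONBOOT=yes
IPADDR=40.0.0.101
NETMASK=255.255.255.0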
iptables
[root@controller ~]# yum install -y iptables-services
[root@controller ~]# systemctl enable iptables
[root@controller ~]# systemctl enable ip6tables
Controller node
Edit /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
40.0.0.101  controller
Edit /etc/hostname
[root@controller ~]# vi /etc/hostname
controller
Install chrony
[root@controller ~]# yum install -y chrony
Change the /etc/chrony.conf settings
[root@controller ~]# vi /etc/chrony.conf
allow 40.0.0.0/24
[root@controller ~]# systemctl enable chronyd
[root@controller ~]# systemctl start chronyd
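To check that chrony is actually synchronizing, you can query its sources (the output will vary by environment):

[root@controller ~]# chronyc sources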
Work on the controller server
Enable the repository
[root@controller ~]# yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-pike/rdo-release-pike-1.noarch.rpm
Install the openstack-pike release
[root@controller ~]# yum install -y centos-release-openstack-pike
[root@controller ~]# yum install -y openstack-utils
openstack-utils is installed so that the openstack-config command can be used.
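For example, openstack-config reads and writes ini-style options in place; the token provider option below is only an illustration:

[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf token provider fernet
[root@controller ~]# openstack-config --get /etc/keystone/keystone.conf token provider
fernet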
Upgrade packages and reboot
[root@controller ~]# yum update -y
[root@controller ~]# init 6
Install python-openstackclient
[root@controller ~]# yum install -y python-openstackclient
Install openstack-selinux
[root@controller ~]# yum install -y openstack-selinux
[root@controller ~]# init 6
openstack site: https://docs.openstack.org/install-guide/environment-sql-database-rdo.html
Install mariadb
[root@controller ~]# yum install -y mariadb mariadb-server python2-PyMySQL
Create the openstack.cnf file
[root@controller ~]# vi /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 40.0.0.101
default-storage-engine = innodb
innodb_file_per_table = on
collation-server = utf8_general_ci
character-set-server = utf8
max_connections = 32000
max_connect_errors = 1000
Start mariadb
[root@controller ~]# systemctl enable mariadb.service
[root@controller ~]# systemctl start mariadb.service
Configure mariadb
[root@controller ~]# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
[root@controller ~]#
Install rabbitmq-server
[root@controller ~]# yum install -y rabbitmq-server
Enable and start the rabbitmq-server service
[root@controller ~]# systemctl enable rabbitmq-server.service
[root@controller ~]# systemctl start rabbitmq-server.service
Add a rabbitmq account and set its permissions
[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
[root@controller ~]#
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
[root@controller ~]#
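To double-check the account and its permissions (optional):

[root@controller ~]# rabbitmqctl list_users
[root@controller ~]# rabbitmqctl list_permissions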
Install memcached
[root@controller ~]# yum install -y memcached python-memcached
Edit the /etc/sysconfig/memcached file
[root@controller ~]# vi /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,controller"
Enable and start memcached
[root@controller ~]# systemctl enable memcached.service
[root@controller ~]# systemctl start memcached.service
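You can confirm memcached is listening (11211 here is simply the port configured above):

[root@controller ~]# ss -tnlp | grep 11211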
Create the keystone DB
[root@controller ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 10
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    IDENTIFIED BY 'KEYSTONE_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    IDENTIFIED BY 'KEYSTONE_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> quit;
Bye
[root@controller ~]#
Identity Service Keystone
openstack site :https://docs.openstack.org/keystone/pike/install/
Install keystone
[root@controller ~]# yum install -y openstack-keystone httpd mod_wsgi
Edit /etc/keystone/keystone.conf
[root@controller ~]# vi /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
Editing /etc/keystone/keystone.conf with openstack-config // under test
openstack-config --set /etc/keystone/keystone.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
openstack-config --set /etc/keystone/keystone.conf cache backend oslo_cache.memcache_pool
openstack-config --set /etc/keystone/keystone.conf cache enabled true
openstack-config --set /etc/keystone/keystone.conf cache memcache_servers controller:11211
openstack-config --set /etc/keystone/keystone.conf memcache servers controller:11211
openstack-config --set /etc/keystone/keystone.conf token expiration 3600
openstack-config --set /etc/keystone/keystone.conf token provider fernet
Populate the Identity service database
[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
Initialize the Fernet key repositories
[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
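The two commands create key repositories under /etc/keystone; a quick sanity check (keys named 0 and 1 are the usual initial state):

[root@controller ~]# ls /etc/keystone/fernet-keys/
0  1
[root@controller ~]# ls /etc/keystone/credential-keys/
0  1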
Bootstrap the Identity service
# keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
Configure the Apache HTTP server
Edit httpd.conf
[root@controller ~]# vi /etc/httpd/conf/httpd.conf
ServerName controller
Create a link to the wsgi-keystone.conf file
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Enable and start the httpd service
[root@controller ~]# systemctl enable httpd.service
[root@controller ~]# systemctl start httpd.service
Set temporary environment variables
[root@controller ~]# export OS_USERNAME=admin
[root@controller ~]# export OS_PASSWORD=ADMIN_PASS
[root@controller ~]# export OS_PROJECT_NAME=admin
[root@controller ~]# export OS_USER_DOMAIN_NAME=Default
[root@controller ~]# export OS_PROJECT_DOMAIN_NAME=Default
[root@controller ~]# export OS_AUTH_URL=http://controller:35357/v3
[root@controller ~]# export OS_IDENTITY_API_VERSION=3
Create the service project
[root@controller ~]# openstack project create --domain default \
  --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | e02fcf8fd5204f5293eadf3a65d14f76 |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | default                          |
+-------------+----------------------------------+
[root@controller ~]#
Create the demo project
[root@controller ~]# openstack project create --domain default \
  --description "Demo Project" demo
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Demo Project                     |
| domain_id   | default                          |
| enabled     | True                             |
| id          | e267fd20523944178b7e7c443b3b0190 |
| is_domain   | False                            |
| name        | demo                             |
| parent_id   | default                          |
+-------------+----------------------------------+
[root@controller ~]#
Create the demo user
Enter DEMO_PASS
[root@controller ~]# openstack user create --domain default \
  --password-prompt demo
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | fd8fdd89e95c47e4bbf808622e8712af |
| name                | demo                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@controller ~]#
Create the user role
[root@controller ~]# openstack role create user
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | None                             |
| id        | aaf7906d0fc844959527cfcde3e1dab4 |
| name      | user                             |
+-----------+----------------------------------+
[root@controller ~]#
Add the user role to the demo user on the demo project
[root@controller ~]# openstack role add --project demo --user demo user
[root@controller ~]#
Unset the environment variables in order to test the users.
[root@controller ~]# unset OS_AUTH_URL OS_PASSWORD
Enter ADMIN_PASS
[root@controller ~]# openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
Password:
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2017-11-28T15:30:29+0000 |
| id         | gAAAAABaHXMFaPfvhX4e6R7Jb3jThwrnctwrEZQKV6Xfv5X-FcQXQuWU-sTQbOrYpPvJuye1LM1yNuUQXZXgK8zL0yPW6xjcPaFGU5wjNp7HiVIXcJQyi5dTd9kFwMNpB94Zoqe4uHLyFR-VGzcKFueUTuPhEOrRRtITmE2vzlfog23ZrQZM6vk |
| project_id | 95d905a1b12544b48ba6e4b43a85ef0b |
| user_id    | b9a2717e56244411a9cf7b1b52073602 |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@controller ~]#
Enter DEMO_PASS
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name demo --os-username demo token issue
Password:
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2017-11-28T15:31:39+0000 |
| id         | gAAAAABaHXNLqqz_zEJ2Sjs8dV4AwZo9UvX5Gj2ImioeA6dFeFo0Vg2b2DSu_B3sgEJRTU49sF3e2WaweASbYSg_wFlGfdPA6N7HG2KZclpRrL3x94RFqpdpRfkq61N0qLm1JImL4sXCbNRLmHMZSfzxEIAFkxgYs1hgqFv2mzvS3a-dz0XHq4E |
| project_id | 25325c467e544a599764f51260b05d1a |
| user_id    | 6478fac787444bb98385714d76a21a4a |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@controller ~]#
Create client environment scripts for the admin and demo users
Create the admin-openrc file
[root@controller ~]# vi admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Create the demo-openrc file
[root@controller ~]# vi demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Reconnect the session and test
root@controll's password:
Last login: Tue Nov 28 23:05:01 2017 from gateway
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2017-11-28T15:34:34+0000 |
| id         | gAAAAABaHXP6JvekS_E1r7rZWI4d89Pb-rwfVOiZ-5Aqm8dRGzFuCfaDGK-HoshnbLlSUmrEU-Rr4zi9a9WXbpQSbbXnj1W3VwuJDz2tMluB9X2y5R7eUvxHkQHC_TBgtzCbxnwjG30fndFdbWqOvtIw2ZQUjzJOA3i9URQoHAkldEzsp9D0Vwo |
| project_id | 95d905a1b12544b48ba6e4b43a85ef0b |
| user_id    | b9a2717e56244411a9cf7b1b52073602 |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@controller ~]#
Image Service glance
openstack site : https://docs.openstack.org/glance/pike/install/
Create the glance DB
[root@controller ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 19
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    IDENTIFIED BY 'GLANCE_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    IDENTIFIED BY 'GLANCE_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> quit;
Bye
[root@controller ~]#
Create the OpenStack glance user
Enter GLANCE_PASS
[root@controller ~]# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 7490f9bcc3894b5d8344f586b8f861b6 |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@controller ~]#
Add the admin role to the glance user on the service project
[root@controller ~]# openstack role add --project service --user glance admin
Create the glance service entity.
[root@controller ~]# openstack service create --name glance \
  --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | 540df478eb89412d86534ef55831423a |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
[root@controller ~]#
Create the Image service API endpoints #3-1
[root@controller ~]# openstack endpoint create --region RegionOne \
  image public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | a394bb427fb7408c970181dc46f1a6c0 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 540df478eb89412d86534ef55831423a |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]#
Create the Image service API endpoints #3-2
[root@controller ~]# openstack endpoint create --region RegionOne \
  image internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | e6713b032e4a480b81ce16593f274d02 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 540df478eb89412d86534ef55831423a |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]#
Create the Image service API endpoints #3-3
[root@controller ~]# openstack endpoint create --region RegionOne \
  image admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | dc473a0788e04f6baad4310177f4bad8 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 540df478eb89412d86534ef55831423a |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]#
Install the openstack-glance package
[root@controller ~]# yum install -y openstack-glance
Edit the glance-api.conf file (database, keystone_authtoken, paste_deploy, glance_store)
[root@controller ~]# vi /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
openstack-config equivalent (under test)
openstack-config --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
Edit the glance-registry.conf file (database, keystone_authtoken, paste_deploy)
[root@controller ~]# vi /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone
openstack-config equivalent (under test)
openstack-config --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
Populate the Image service database
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1328: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade expire_on_commit=expire_on_commit, _conf=conf) INFO [alembic.runtime.migration] Context impl MySQLImpl. INFO [alembic.runtime.migration] Will assume non-transactional DDL. INFO [alembic.runtime.migration] Running upgrade -> liberty, liberty initial INFO [alembic.runtime.migration] Running upgrade liberty -> mitaka01, add index on created_at and updated_at columns of 'images' table INFO [alembic.runtime.migration] Running upgrade mitaka01 -> mitaka02, update metadef os_nova_server INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata01, add visibility to and remove is_public from images INFO [alembic.runtime.migration] Running upgrade ocata01 -> pike01, drop glare artifacts tables INFO [alembic.runtime.migration] Context impl MySQLImpl. INFO [alembic.runtime.migration] Will assume non-transactional DDL. Upgraded database to: pike01, current revision(s): pike01 [root@controller ~]#
The deprecation message can safely be ignored.
Enable and start the openstack-glance services
[root@controller ~]# systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service
[root@controller ~]# systemctl start openstack-glance-api.service \
  openstack-glance-registry.service
[root@controller ~]#
Register a glance image
[root@controller ~]# yum install -y wget
[root@controller ~]# wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
[root@controller ~]# openstack image create "cirros" \
  --file cirros-0.3.5-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | f8ab98ff5e73ebab884d80c9dc9c7290                     |
| container_format | bare                                                 |
| created_at       | 2017-11-28T14:56:00Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/3e4448dd-5399-4fff-934e-bde198f6d9fa/file |
| id               | 3e4448dd-5399-4fff-934e-bde198f6d9fa                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros                                               |
| owner            | 95d905a1b12544b48ba6e4b43a85ef0b                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13267968                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2017-11-28T14:56:00Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
[root@controller ~]#
Verify the image
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 3e4448dd-5399-4fff-934e-bde198f6d9fa | cirros | active |
+--------------------------------------+--------+--------+
[root@controller ~]#
Compute Service
openstack site : https://docs.openstack.org/nova/pike/install/
If you are going to disable firewalld, disable it before starting this section.
Create the nova databases
[root@controller ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 23
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]> quit;
Bye
[root@controller ~]#
Create the nova user
Enter NOVA_PASS
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | afcfca3df8034c25b0fd8f834bd08259 |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@controller ~]#
Add the admin role to the nova user
[root@controller ~]# openstack role add --project service --user nova admin
Create the nova service entity
[root@controller ~]# openstack service create --name nova \
  --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | f281d4d6ea18441ca117a57f5b9dcb5b |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
[root@controller ~]#
Create the Compute API service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 75521f33cb1744f4b573a3e6b90e1acc |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f281d4d6ea18441ca117a57f5b9dcb5b |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 48313bf38bf74559bc5dcc346f2a1309 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 78883460454a4fc6940ef56d6acf57e7 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@controller ~]#
[root@controller ~]# openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 20c9795a55a3457c9e817c96b5518df3 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f281d4d6ea18441ca117a57f5b9dcb5b |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@controller ~]#
Enter PLACEMENT_PASS / create the placement service user
[root@controller ~]# openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | df2929587a264f0ebc914ac010f39538 |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@controller ~]#
Add the placement user to the service project with the admin role
[root@controller ~]# openstack role add --project service --user placement admin
Create the Placement API entry in the service catalog
[root@controller ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Placement API                    |
| enabled     | True                             |
| id          | e9eae3013f2e4f7cb9a97676c6dff706 |
| name        | placement                        |
| type        | placement                        |
+-------------+----------------------------------+
[root@controller ~]#
Create the Placement API service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | a0a6d335c64b45be982917837fc22bc9 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e9eae3013f2e4f7cb9a97676c6dff706 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 628ce318919a49c491422f02e22a361d |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e9eae3013f2e4f7cb9a97676c6dff706 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 781854c9caa34d2495104d9640353f49 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e9eae3013f2e4f7cb9a97676c6dff706 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@controller ~]#
Install the nova packages
[root@controller ~]# yum install -y openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api
Edit nova.conf
[root@controller ~]# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 40.0.0.101
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
Edit the 00-nova-placement-api file (if you skip this, the symptom below occurs)
Due to a packaging bug, you must enable access to the Placement API by adding the following configuration to /etc/httpd/conf.d/00-nova-placement-api.conf:
[root@controller ~]# vi /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
What you will see if the 00-nova-placement-api file was not edited
Restart httpd
[root@controller ~]# systemctl restart httpd
The following occurs when 00-nova-placement-api has not been configured.
Denied messages appear in nova-placement-api.log
[root@controller ~]# tail -f /var/log/nova/nova-placement-api.log
AH01630: client denied by server configuration: /usr/bin/nova-placement-api
AH01630: client denied by server configuration: /usr/bin/nova-placement-api
AH01630: client denied by server configuration: /usr/bin/nova-placement-api
AH01630: client denied by server configuration: /usr/bin/nova-placement-api
AH01630: client denied by server configuration: /usr/bin/nova-placement-api
AH01630: client denied by server configuration: /usr/bin/nova-placement-api
^C
A Result: Warning message appears when checking on the controller before the compute node has been rebooted
[root@controller ~]# nova-status upgrade check
+-------------------------------------------------------------------+
| Upgrade Check Results                                             |
+-------------------------------------------------------------------+
| Check: Cells v2                                                   |
| Result: Success                                                   |
| Details: None                                                     |
+-------------------------------------------------------------------+
| Check: Placement API                                              |
| Result: Success                                                   |
| Details: None                                                     |
+-------------------------------------------------------------------+
| Check: Resource Providers                                         |
| Result: Warning                                                   |
| Details: There are no compute resource providers in the Placement |
|   service but there are 1 compute nodes in the deployment.        |
|   This means no compute nodes are reporting into the              |
|   Placement service and need to be upgraded and/or fixed.         |
|   See                                                             |
|   http://docs.openstack.org/developer/nova/placement.html         |
|   for more details.                                               |
+-------------------------------------------------------------------+
[root@controller ~]#
(This can be resolved by disabling firewalld, but...)
After disabling firewalld and rebooting the system, run the nova-status / nova-manage commands again
[root@controller ~]# nova-status upgrade check
+------------------------------------------------------------------+
| Upgrade Check Results                                            |
+------------------------------------------------------------------+
| Check: Cells v2                                                  |
| Result: Failure                                                  |
| Details: No host mappings found but there are compute nodes. Run |
|   command 'nova-manage cell_v2 simple_cell_setup' and then       |
|   retry.                                                         |
+------------------------------------------------------------------+
| Check: Placement API                                             |
| Result: Success                                                  |
| Details: None                                                    |
+------------------------------------------------------------------+
| Check: Resource Providers                                        |
| Result: Success                                                  |
| Details: None                                                    |
+------------------------------------------------------------------+
[root@controller ~]# nova-manage cell_v2 simple_cell_setup
Cell0 is already setup
[root@controller ~]# nova-status upgrade check
+---------------------------+
| Upgrade Check Results     |
+---------------------------+
| Check: Cells v2           |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Placement API      |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Resource Providers |
| Result: Success           |
| Details: None             |
+---------------------------+
[root@controller ~]#
If you see the warning message, see: https://docs.openstack.org/releasenotes/keystonemiddleware/ocata.html
Confirmed that service_token_roles_required was set to false.
Does setting it to True fix it? // The same problem occurred in testing
# openstack-config --set /etc/nova/nova.conf keystone_authtoken service_token_roles_required True
The openstack-config command requires the openstack-utils package.
Further testing is needed with firewalld disabled and iptables set up before installing nova!!
Checking after rebooting the compute node:
[root@controller ~]# nova-status upgrade check
+---------------------------+
| Upgrade Check Results     |
+---------------------------+
| Check: Cells v2           |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Placement API      |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Resource Providers |
| Result: Success           |
| Details: None             |
+---------------------------+
[root@controller ~]#
The above is only verification; the actual setup continues from here.
Populate the nova-api database
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
Map the cell0 database
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Populate the nova database
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova /usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.') result = self._query(query) /usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.') result = self._query(query) [root@controller ~]#
(Could not find the cause of these messages)
Verify that cell0 and cell1 are registered correctly
[root@controller ~]# nova-manage cell_v2 list_cells
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| Name  | UUID                                 | Transport URL                      | Database Connection                             |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/                             | mysql+pymysql://nova:****@controller/nova_cell0 |
| cell1 | fbfd4a7d-3bcb-478a-b13c-6557c7c9acfe | rabbit://openstack:****@controller | mysql+pymysql://nova:****@controller/nova       |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
[root@controller ~]#
firewall-cmd commands
[root@controller ~]# firewall-cmd --permanent --add-port=5672/tcp
success
[root@controller ~]# firewall-cmd --reload
success
[root@controller ~]#
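If you plan to keep firewalld enabled rather than disabling it, the other controller services need their ports opened as well. The port list below is my assumption based on the services installed in this guide (MariaDB 3306, memcached 11211, keystone 5000/35357, glance 9292, nova 8774, placement 8778, neutron 9696, noVNC 6080):

[root@controller ~]# firewall-cmd --permanent --add-port={3306,11211,5000,35357,9292,8774,8778,9696,6080}/tcp
[root@controller ~]# firewall-cmd --reload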
Enable and start the nova services
[root@controller ~]# systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@controller ~]# systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@controller ~]#
Work on the compute node
If firewalld is disabled, proceed from this point.
Enable the repository and update
[root@compute ~]# yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-pike/rdo-release-pike-1.noarch.rpm
[root@compute ~]# yum update -y
[root@compute ~]# init 6
Install the openstack-selinux package
[root@compute ~]# yum install -y openstack-selinux
[root@compute ~]# init 6
Install openstack-nova-compute
[root@compute ~]# yum install -y openstack-nova-compute
Edit nova.conf
[root@compute ~]# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 40.0.0.102
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS

[libvirt]
virt_type = qemu
You can determine the right virt_type as follows:
[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
0
If the result is 0, set virt_type to qemu; for any value other than 0, set it to kvm.
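If you want to script this choice, a small sketch like the one below would work (an assumption on my part, reusing the openstack-config command from the openstack-utils package installed earlier):

#!/bin/sh
# Sketch: pick virt_type from the CPU flags; 0 means no hardware
# virtualization flags were found, so fall back to qemu.
count=$(egrep -c '(vmx|svm)' /proc/cpuinfo)
if [ "$count" -eq 0 ]; then
    openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
else
    openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
fi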
When using qemu:
[root@compute ~]# yum install libguestfs-tools
If you add iptables rules the old way, configure the following; note that in a firewalld environment these rules are removed when the system reboots.
Add the iptables rule, then save and restart iptables (this must be done on both the controller and compute nodes)
[root@compute ~]# iptables -A IN_public_allow -p tcp -m tcp --dport 5672 -m conntrack --ctstate NEW -j ACCEPT
[root@controller ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
[root@controller ~]#
[root@controller ~]# systemctl restart iptables
or use the firewall-cmd commands:
[root@controller ~]# firewall-cmd --permanent --add-port=5672/tcp
success
[root@controller ~]# firewall-cmd --reload
success
[root@controller ~]#
Package to install when using iptables rules
controller node
[root@controller ~]# yum install -y iptables-services
Why does iptables not work after a system reboot???
firewalld needs to be disabled and replaced with iptables (set iptables up first, then test nova)
[root@compute ~]# systemctl mask firewalld
[root@compute ~]# systemctl enable iptables
[root@compute ~]# systemctl enable ip6tables
Enable and start openstack-nova-compute
[root@compute ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@compute ~]# systemctl start libvirtd.service openstack-nova-compute.service
When nova-compute does not start properly:
[root@compute ~]# tail -f /var/log/nova/nova-compute.log
2017-11-29 01:41:31.993 1840 ERROR oslo.messaging._drivers.impl_rabbit [req-bc6c79e0-5d57-454c-b8bb-c1fd04706b18 - - - - -] [bf6d14bf-42f7-4112-b9df-467e4f511594] AMQP server on controller:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 32 seconds. Client port: None: error: [Errno 113] EHOSTUNREACH
2017-11-29 01:42:04.042 1840 ERROR oslo.messaging._drivers.impl_rabbit [req-bc6c79e0-5d57-454c-b8bb-c1fd04706b18 - - - - -] [bf6d14bf-42f7-4112-b9df-467e4f511594] AMQP server on controller:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 32 seconds. Client port: None: error: [Errno 113] EHOSTUNREACH
2017-11-29 01:42:36.089 1840 ERROR oslo.messaging._drivers.impl_rabbit [req-bc6c79e0-5d57-454c-b8bb-c1fd04706b18 - - - - -] [bf6d14bf-42f7-4112-b9df-467e4f511594] AMQP server on controller:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 32 seconds. Client port: None: error: [Errno 113] EHOSTUNREACH
2017-11-29 01:43:08.141 1840 INFO oslo.messaging._drivers.impl_rabbit [req-bc6c79e0-5d57-454c-b8bb-c1fd04706b18 - - - - -] [bf6d14bf-42f7-4112-b9df-467e4f511594] Reconnected to AMQP server on controller:5672 via [amqp] client with port 53352.
With firewalld disabled it does not work properly; the openstack-nova-compute daemon only starts normally after the iptables rules are added!
It seems to be affected by firewalld only on first start; after a reboot, openstack-nova-compute also runs normally with firewalld disabled.
[root@compute ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
[root@compute ~]# systemctl status openstack-nova-compute.service
● openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-12-02 07:47:01 KST; 21s ago
 Main PID: 1066 (nova-compute)
   CGroup: /system.slice/openstack-nova-compute.service
           └─1066 /usr/bin/python2 /usr/bin/nova-compute

Dec 02 07:46:56 compute systemd[1]: Starting OpenStack Nova Compute Server...
Dec 02 07:47:01 compute systemd[1]: Started OpenStack Nova Compute Server.
[root@compute ~]#
Test after rebooting the controller (with firewalld enabled on the controller but the rule missing, it does not work; after disabling firewalld on the controller and rebooting, it works normally)
[root@compute ~]# systemctl status openstack-nova-compute.service
● openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-12-02 07:56:26 KST; 3s ago
 Main PID: 1461 (nova-compute)
   CGroup: /system.slice/openstack-nova-compute.service
           └─1461 /usr/bin/python2 /usr/bin/nova-compute

Dec 02 07:56:22 compute systemd[1]: Starting OpenStack Nova Compute Server...
Dec 02 07:56:26 compute systemd[1]: Started OpenStack Nova Compute Server.
[root@compute ~]#
Verify on the controller node
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack compute service list --service nova-compute
+----+--------------+---------+------+---------+-------+----------------------------+
| ID | Binary       | Host    | Zone | Status  | State | Updated At                 |
+----+--------------+---------+------+---------+-------+----------------------------+
|  6 | nova-compute | compute | nova | enabled | up    | 2017-11-28T16:47:09.000000 |
+----+--------------+---------+------+---------+-------+----------------------------+
[root@controller ~]#
Discover compute hosts
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova Found 2 cell mappings. Skipping cell0 since it does not contain hosts. Getting compute nodes from cell 'cell1': fbfd4a7d-3bcb-478a-b13c-6557c7c9acfe Found 1 unmapped computes in cell: fbfd4a7d-3bcb-478a-b13c-6557c7c9acfe Checking host mapping for compute host 'compute': f6f67c1c-f7c0-4d52-b521-30ee36610c60 Creating host mapping for compute host 'compute': f6f67c1c-f7c0-4d52-b521-30ee36610c60 [root@controller ~]#
Edit the discovery interval
[root@controller ~]# vi /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300
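After changing this, restart the scheduler so the new interval takes effect (the host discovery periodic task runs inside nova-scheduler):

[root@controller ~]# systemctl restart openstack-nova-scheduler.service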
Verify the service components
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor   | controller | internal | enabled | up    | 2017-11-28T16:52:33.000000 |
|  2 | nova-consoleauth | controller | internal | enabled | up    | 2017-11-28T16:52:34.000000 |
|  3 | nova-scheduler   | controller | internal | enabled | up    | 2017-11-28T16:52:33.000000 |
|  6 | nova-compute     | compute    | nova     | enabled | up    | 2017-11-28T16:52:29.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
[root@controller ~]#
Check the endpoint list
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| glance    | image     | RegionOne                               |
|           |           |   public: http://controller:9292       |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:9292        |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:9292     |
|           |           |                                         |
| keystone  | identity  | RegionOne                               |
|           |           |   internal: http://controller:5000/v3/ |
|           |           | RegionOne                               |
|           |           |   public: http://controller:5000/v3/   |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:35357/v3/   |
|           |           |                                         |
| placement | placement | RegionOne                               |
|           |           |   internal: http://controller:8778     |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8778        |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8778       |
|           |           |                                         |
| nova      | compute   | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1   |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1  |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1  |
|           |           |                                         |
+-----------+-----------+-----------------------------------------+
[root@controller ~]#
Verify the image in the Image service
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 3e4448dd-5399-4fff-934e-bde198f6d9fa | cirros | active |
+--------------------------------------+--------+--------+
[root@controller ~]#
Verify that the cells and the placement API are working
[root@controller ~]# nova-status upgrade check
+---------------------------+
| Upgrade Check Results     |
+---------------------------+
| Check: Cells v2           |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Placement API      |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Resource Providers |
| Result: Success           |
| Details: None             |
+---------------------------+
[root@controller ~]#
Before testing, back up the controller and compute node VMs with virt-clone
Networking Service neutron
https://docs.openstack.org/ocata/ko_KR/install-guide-rdo/neutron.html
https://docs.openstack.org/neutron/pike/install/
Work on the controller system
Create the neutron DB
[root@controller ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 18
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    IDENTIFIED BY 'NEUTRON_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    IDENTIFIED BY 'NEUTRON_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.01 sec)

MariaDB [(none)]> quit;
Bye
[root@controller ~]#
Load the admin environment file.
[root@controller ~]# . admin-openrc
Create the neutron user.
Enter NEUTRON_PASS
[root@controller ~]# openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | d989d834135e40ecbb461c622bfb36bd |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@controller ~]#
Add the admin role to the neutron user
[root@controller ~]# openstack role add --project service --user neutron admin
Create the service entity
[root@controller ~]# openstack service create --name neutron \
  --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | a714327b7ff14147811588e03e0572ff |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
[root@controller ~]#
Create the Networking service API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne \
  network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | fb0b35d86a5c48f39916835f429fc9e4 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | a714327b7ff14147811588e03e0572ff |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
  network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 54310c0061b345919558e80df87f7a55 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | a714327b7ff14147811588e03e0572ff |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
  network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | d0234bd35b844d8da24bd806bc4fa6a9 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | a714327b7ff14147811588e03e0572ff |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]#
Configure networking options
You can deploy the Networking service using one of two architectures, represented by options 1 and 2.
Option 1 deploys the simplest possible architecture, which only supports attaching instances to provider (external) networks.
There are no self-service (private) networks, routers, or floating IP addresses; only the admin or other privileged users can manage provider networks.
Option 2 augments option 1 with layer-3 services that support attaching instances to self-service networks.
The demo user, or other unprivileged users, can manage self-service networks, including routers that provide connectivity between self-service and provider networks.
Additionally, floating IP addresses provide connectivity to instances using self-service networks from external networks such as the Internet.
Self-service networks typically use overlay networks.
Overlay network protocols such as VXLAN include additional packet headers that increase overhead and reduce the capacity available for payload or user data.
Without knowledge of the virtual network infrastructure, instances attempt to send packets using the default 1500-byte Ethernet maximum transmission unit (MTU).
The Networking service automatically provides the correct MTU value to instances via DHCP.
However, some cloud images do not use DHCP, or ignore the DHCP MTU option, and require configuration via metadata or a script.
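For reference, the MTU that neutron advertises is derived from the physical network MTU, which can also be set explicitly in neutron.conf. The value below is only an illustration, assuming plain 1500-byte Ethernet (VXLAN networks would then be advertised 1450, i.e. 1500 minus the VXLAN overhead):

[root@controller ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
global_physnet_mtu = 1500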
Reference pages:
Networking Option 1: Provider networks
Networking Option 2: Self-service networks
This guide uses Networking Option 2: Self-service networks.
Install the components
[root@controller ~]# yum install -y openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables
Edit the neutron.conf file
[root@controller ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Edit the ml2_conf.ini file
[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
Configure the Linux bridge agent
The Linux bridge agent builds the layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Configuration reference: https://docs.openstack.org/ocata/ko_KR/install-guide-rdo/neutron-controller-install-option2.html
Edit the linuxbridge_agent.ini file
[root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] physical_interface_mappings = provider:eth1 [vxlan] enable_vxlan = true local_ip = 40.0.0.101 l2_population = true [securitygroup] enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
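The iptables firewall driver configured above relies on bridge netfilter. The official guide recommends verifying that the relevant sysctl values are 1; the agent's neutron-enable-bridge-firewall.sh start script (visible in the systemctl status output later in this document) normally handles this, but a manual check, assuming the br_netfilter module is available, looks like:
[root@controller ~]# modprobe br_netfilter
[root@controller ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
Both values should print as 1.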
Configure the layer-3 agent
Edit the l3_agent.ini file
[root@controller ~]# vi /etc/neutron/l3_agent.ini [DEFAULT] interface_driver = linuxbridge
Edit the dhcp_agent.ini file
[root@controller ~]# vi /etc/neutron/dhcp_agent.ini [DEFAULT] interface_driver = linuxbridge dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq enable_isolated_metadata = true
Work on the compute node
[root@compute ~]# yum install -y openstack-neutron-linuxbridge ebtables ipset
Edit the neutron.conf file
[root@compute ~]# vi /etc/neutron/neutron.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone [keystone_authtoken] auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = neutron password = NEUTRON_PASS [oslo_concurrency] lock_path = /var/lib/neutron/tmp
Configure networking options
Reference: https://docs.openstack.org/ocata/ko_KR/install-guide-rdo/neutron-compute-install-option2.html
Networking option 2: configure self-service networks on the compute node
[root@compute ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] physical_interface_mappings = provider:eth1 [vxlan] enable_vxlan = true local_ip = 40.0.0.102 l2_population = true [securitygroup] enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the Compute service to use the Networking service.
Edit nova.conf
[root@compute ~]# vi /etc/nova/nova.conf [neutron] url = http://controller:9696 auth_url = http://controller:35357 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS
Restart the Compute service
[root@compute ~]# systemctl restart openstack-nova-compute.service
Enable and start the Linux bridge agent
[root@compute ~]# systemctl enable neutron-linuxbridge-agent.service [root@compute ~]# systemctl start neutron-linuxbridge-agent.service
Work on the controller node
Configure the metadata agent
Before starting the Linux bridge agent, edit the ifcfg-eth1 file (on the compute node, as shown below) (still needs verification?)
[root@compute ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1 TYPE=Ethernet BOOTPROTO=none DEVICE=eth1 ONBOOT=yes #IPADDR=192.168.122.102 #NETMASK=255.255.255.0 #GATEWAY=192.168.122.1 [root@compute ~]# systemctl restart network
https://docs.openstack.org/neutron/pike/install/controller-install-obs.html
The metadata agent provides configuration information, such as credentials, to instances.
[root@controller ~]# vi /etc/neutron/metadata_agent.ini [DEFAULT] nova_metadata_host = controller metadata_proxy_shared_secret = METADATA_SECRET
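METADATA_SECRET is a placeholder that must match between the metadata agent and nova (see the nova.conf edit below). One way to generate a suitable random value, as suggested in the official guide's security section:
[root@controller ~]# openssl rand -hex 10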
Edit nova.conf
[root@controller ~]# vi /etc/nova/nova.conf [neutron] url = http://controller:9696 auth_url = http://controller:35357 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true metadata_proxy_shared_secret = METADATA_SECRET
Finalize the installation
The Networking service initialization scripts expect a symbolic link, /etc/neutron/plugin.ini, pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini.
If the symbolic link was not created, create it with the following command.
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the neutron database
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the Compute API service.
[root@controller ~]# systemctl restart openstack-nova-api.service
Enable and start the Networking services
[root@controller ~]# systemctl enable neutron-server.service \ neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ neutron-metadata-agent.service [root@controller ~]# systemctl start neutron-server.service \ neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ neutron-metadata-agent.service
For networking option 2, also enable and start the layer-3 service
[root@controller ~]# systemctl enable neutron-l3-agent.service [root@controller ~]# systemctl start neutron-l3-agent.service
Verification
[root@controller ~]# . admin-openrc [root@controller ~]# openstack extension list --network +----------------------------------------------------------------------------------------------+---------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+ | Name | Alias | Description | +----------------------------------------------------------------------------------------------+---------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+ | Default Subnetpools | default-subnetpools | Provides ability to mark and use a subnetpool as the default | | Network IP Availability | network-ip-availability | Provides IP availability data for each network and subnet. | | Network Availability Zone | network_availability_zone | Availability zone support for network. | | Auto Allocated Topology Services | auto-allocated-topology | Auto Allocated Topology Services. | | Neutron L3 Configurable external gateway mode | ext-gw-mode | Extension of the router abstraction for specifying whether SNAT should occur on the external gateway | | Port Binding | binding | Expose port bindings of a virtual port to external application | | agent | agent | The agent management extension. | | Subnet Allocation | subnet_allocation | Enables allocation of subnets from a subnet pool | | L3 Agent Scheduler | l3_agent_scheduler | Schedule routers among l3 agents | | Tag support | tag | Enables to set tag on resources. | | Neutron external network | external-net | Adds external network attribute to network resource. | | Tag support for resources with standard attribute: trunk, policy, security_group, floatingip | standard-attr-tag | Enables to set tag on resources with standard attribute. | | Neutron Service Flavors | flavors | Flavor specification for Neutron advanced services | | Network MTU | net-mtu | Provides MTU attribute for a network resource. | | Availability Zone | availability_zone | The availability zone extension. | | Quota management support | quotas | Expose functions for quotas management per tenant | | If-Match constraints based on revision_number | revision-if-match | Extension indicating that If-Match based on revision_number is supported. | | HA Router extension | l3-ha | Add HA capability to routers. | | Provider Network | provider | Expose mapping of virtual networks to physical networks | | Multi Provider Network | multi-provider | Expose mapping of virtual networks to multiple physical networks | | Quota details management support | quota_details | Expose functions for quotas usage statistics per project | | Address scope | address-scope | Address scopes extension. | | Neutron Extra Route | extraroute | Extra routes configuration for L3 router | | Network MTU (writable) | net-mtu-writable | Provides a writable MTU attribute for a network resource. | | Subnet service types | subnet-service-types | Provides ability to set the subnet service_types field | | Resource timestamps | standard-attr-timestamp | Adds created_at and updated_at fields to all Neutron resources that have Neutron standard attributes. | | Neutron Service Type Management | service-type | API for retrieving service providers for Neutron advanced services | | Router Flavor Extension | l3-flavors | Flavor support for routers. 
| | Port Security | port-security | Provides port security | | Neutron Extra DHCP options | extra_dhcp_opt | Extra options configuration for DHCP. For example PXE boot options to DHCP clients can be specified (e.g. tftp-server, server-ip-address, bootfile-name) | | Resource revision numbers | standard-attr-revisions | This extension will display the revision number of neutron resources. | | Pagination support | pagination | Extension that indicates that pagination is enabled. | | Sorting support | sorting | Extension that indicates that sorting is enabled. | | security-group | security-group | The security groups extension. | | DHCP Agent Scheduler | dhcp_agent_scheduler | Schedule networks among dhcp agents | | Router Availability Zone | router_availability_zone | Availability zone support for router. | | RBAC Policies | rbac-policies | Allows creation and modification of policies that control tenant access to resources. | | Tag support for resources: subnet, subnetpool, port, router | tag-ext | Extends tag support to more L2 and L3 resources. | | standard-attr-description | standard-attr-description | Extension to add descriptions to standard attributes | | Neutron L3 Router | router | Router abstraction for basic L3 forwarding between L2 Neutron networks and access to external networks via a NAT gateway. | | Allowed Address Pairs | allowed-address-pairs | Provides allowed address pairs | | project_id field enabled | project-id | Extension that indicates that project_id field is enabled. | | Distributed Virtual Router | dvr | Enables configuration of Distributed Virtual Routers. | +----------------------------------------------------------------------------------------------+---------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+ [root@controller ~]#
Check the agent list
[root@controller ~]# openstack network agent list +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ | 20957f8f-f380-4f16-b953-6cf7b15838ba | L3 agent | controller | nova | :-) | UP | neutron-l3-agent | | 22649ee4-bb5d-4caf-b2fb-660477c54bd1 | Metadata agent | controller | None | :-) | UP | neutron-metadata-agent | | 869eb101-a59b-4088-87ab-f17d1634efd3 | DHCP agent | controller | nova | :-) | UP | neutron-dhcp-agent | | e827c41e-b4a0-4e3a-8ca6-4cd8bea7a3bf | Linux bridge agent | controller | None | :-) | UP | neutron-linuxbridge-agent | +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ [root@controller ~]#
Expected output once all agents have registered (synchronization takes some time?!)
[root@controller ~]# openstack network agent list +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ | 20957f8f-f380-4f16-b953-6cf7b15838ba | L3 agent | controller | nova | :-) | UP | neutron-l3-agent | | 22649ee4-bb5d-4caf-b2fb-660477c54bd1 | Metadata agent | controller | None | :-) | UP | neutron-metadata-agent | | 6d3a2576-00c1-4dba-82e6-b37355e870cf | Linux bridge agent | compute | None | :-) | UP | neutron-linuxbridge-agent | | 869eb101-a59b-4088-87ab-f17d1634efd3 | DHCP agent | controller | nova | :-) | UP | neutron-dhcp-agent | | e827c41e-b4a0-4e3a-8ca6-4cd8bea7a3bf | Linux bridge agent | controller | None | :-) | UP | neutron-linuxbridge-agent | +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
Items that need further investigation:
What causes the following messages??
controller node
2017-12-02 10:43:01.242 975 WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.agent._common_agent.CommonAgentLoop._report_state' run outlasted interval by 30.00 sec 2017-12-02 10:43:01.280 975 INFO neutron.plugins.ml2.drivers.agent._common_agent [-] Linux bridge agent Agent has just been revived. Doing a full sync. 2017-12-02 10:43:01.306 975 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 7794b67aae68435ca91fd55654b06482 2017-12-02 10:43:01.315 975 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 3bc8ea170a464c3e96c0ebd6dc6bfabe 2017-12-02 10:43:01.330 975 INFO neutron.plugins.ml2.drivers.agent._common_agent [req-b6886195-b847-460d-ab64-7c3d17c6af87 - - - - -] Linux bridge agent Agent RPC Daemon Started! 2017-12-02 10:43:01.331 975 INFO neutron.plugins.ml2.drivers.agent._common_agent [req-b6886195-b847-460d-ab64-7c3d17c6af87 - - - - -] Linux bridge agent Agent out of sync with plugin! 2017-12-02 10:43:01.338 975 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect [req-b6886195-b847-460d-ab64-7c3d17c6af87 - - - - -] Clearing orphaned ARP spoofing entries for devices []
Linux bridge agent Agent out of sync with plugin!
WARNING oslo.service.loopingcall
compute node
2017-12-02 10:43:50.012 956 ERROR neutron.plugins.ml2.drivers.agent._common_agent message = self.waiters.get(msg_id, timeout=timeout) 2017-12-02 10:43:50.012 956 ERROR neutron.plugins.ml2.drivers.agent._common_agent File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 347, in get 2017-12-02 10:43:50.012 956 ERROR neutron.plugins.ml2.drivers.agent._common_agent 'to message ID %s' % msg_id) 2017-12-02 10:43:50.012 956 ERROR neutron.plugins.ml2.drivers.agent._common_agent MessagingTimeout: Timed out waiting for a reply to message ID 086fad94db8c45ac842aa74d46bc27ad 2017-12-02 10:43:50.012 956 ERROR neutron.plugins.ml2.drivers.agent._common_agent 2017-12-02 10:43:50.013 956 WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.agent._common_agent.CommonAgentLoop._report_state' run outlasted interval by 30.00 sec 2017-12-02 10:43:50.047 956 INFO neutron.plugins.ml2.drivers.agent._common_agent [-] Linux bridge agent Agent has just been revived. Doing a full sync. 2017-12-02 10:43:50.075 956 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 086fad94db8c45ac842aa74d46bc27ad 2017-12-02 10:43:50.084 956 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 1404bc1930c1477cac6ed8fdc1794f9d 2017-12-02 10:43:50.194 956 INFO neutron.plugins.ml2.drivers.agent._common_agent [req-f7ecd86f-c1cf-4a74-bb67-84bde93d17b6 - - - - -] Linux bridge agent Agent out of sync with plugin!
ERROR neutron.plugins.ml2.drivers.agent._common_agent
WARNING oslo.service.loopingcall [-] Function
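A hedged guess rather than a confirmed diagnosis: the "run outlasted interval" warnings and the MessagingTimeout suggest the agents' periodic state reports to RabbitMQ on the controller are being delayed, so time synchronization and AMQP connectivity are worth ruling out first:
# on both nodes: confirm the clocks are synchronized via chrony
[root@compute ~]# chronyc sources
# on the compute node: confirm there is an established connection to the controller's AMQP port (5672)
[root@compute ~]# ss -tn state established '( dport = :5672 )'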
Dashboard Horizon
https://docs.openstack.org/ocata/ko_KR/install-guide-rdo/horizon-install.html
https://docs.openstack.org/horizon/pike/install/install-rdo.html
Work on the controller node
Install openstack-dashboard
[root@controller ~]# yum install -y openstack-dashboard
Edit the local_settings file
[root@controller ~]# vi /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
# allow access from any host
ALLOWED_HOSTS = ['*', ]
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
# when using networking option 2
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': True,
    'enable_ipv6': True,
    'enable_distributed_router': True,
    'enable_ha_router': False,
    'enable_fip_topology_check': True,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': True,
}
# when using networking option 1, uncomment and use these values instead
#OPENSTACK_NEUTRON_NETWORK = {
#    'enable_router': False,
#    'enable_quotas': False,
#    'enable_ipv6': True,
#    'enable_distributed_router': False,
#    'enable_ha_router': False,
#    'enable_fip_topology_check': False,
#}
TIME_ZONE = "Asia/Seoul"
# comment out the existing session/cache entries and use the following
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
Edit the openstack-dashboard.conf file
[root@controller ~]# vi /etc/httpd/conf.d/openstack-dashboard.conf
WSGIDaemonProcess dashboard
WSGIProcessGroup dashboard
WSGISocketPrefix run/wsgi
WSGIApplicationGroup %{GLOBAL}
After adding WSGIApplicationGroup %{GLOBAL}, restart httpd
Restart the httpd and memcached services
[root@controller ~]# systemctl restart httpd memcached
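To confirm the dashboard responds before opening a browser, a quick check with curl (assuming curl is installed; a 200, or a 3xx redirect to the login page, means httpd is serving the WSGI application):
[root@controller ~]# curl -s -o /dev/null -w "%{http_code}\n" http://controller/dashboard/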
If using firewalld
[root@controller ~]# firewall-cmd --permanent --add-port=80/tcp success [root@controller ~]# firewall-cmd --reload success [root@controller ~]#
If using iptables
[root@controller ~]# iptables -A IN_public_allow -p tcp -m tcp --dport 80 -m conntrack --ctstate NEW -j ACCEPT [root@controller ~]# service iptables save iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ] [root@controller ~]# iptables -L
If messages like the following appear in /var/log/httpd/error_log when accessing http://controller/dashboard
[Thu Nov 30 13:45:57.003586 2017] [core:error] [pid 2015] [client 40.0.0.1:55710] Script timed out before returning headers: django.wsgi
[Thu Nov 30 13:45:58.178883 2017] [core:error] [pid 2012] [client 40.0.0.1:55712] Script timed out before returning headers: django.wsgi
[Thu Nov 30 13:47:35.981195 2017] [core:error] [pid 2013] [client 40.0.0.1:55714] Script timed out before returning headers: django.wsgi
openstack-status
[root@controller ~]# openstack-status == Nova services == openstack-nova-api: active openstack-nova-compute: inactive (disabled on boot) openstack-nova-network: inactive (disabled on boot) openstack-nova-scheduler: active openstack-nova-conductor: active openstack-nova-console: inactive (disabled on boot) openstack-nova-consoleauth: active openstack-nova-xvpvncproxy: inactive (disabled on boot) == Glance services == openstack-glance-api: active openstack-glance-registry: active == Keystone service == openstack-keystone: inactive (disabled on boot) == Horizon service == openstack-dashboard: active == neutron services == neutron-server: active neutron-dhcp-agent: active neutron-l3-agent: active neutron-metadata-agent: active neutron-linuxbridge-agent: active == Support services == mariadb: active dbus: active rabbitmq-server: active memcached: active == Keystone users == +----------------------------------+-----------+ | ID | Name | +----------------------------------+-----------+ | 1697cb80fbed4140bbde5cf1f2fcafc1 | admin | | 1a77bd31ec4744bf82fab8fcff5fa561 | nova | | b316501ef28c4b28a44c64e65315ef83 | glance | | d979c3d5c3e44d6bac0225361907eff3 | placement | | dd785cc29b9a4db8bdded1cb8ff056cc | demo | | f23bc457da1b44bb86712308af8e52b9 | neutron | +----------------------------------+-----------+ == Glance images == +--------------------------------------+--------+ | ID | Name | +--------------------------------------+--------+ | 8a830f40-8837-4efb-a8e4-550c85bfa9c8 | cirros | +--------------------------------------+--------+ == Nova managed services == +--------------------------------------+------------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+ | Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | Forced down | +--------------------------------------+------------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+ | b4331f58-93ac-4310-9af5-c7ed14ac7931 | nova-conductor | controller | internal | enabled | up | 2017-12-01T07:20:49.000000 | - | False | | 98a118aa-28ce-45bd-b468-f5a4862be5e3 | nova-consoleauth | controller | internal | enabled | up | 2017-12-01T07:20:51.000000 | - | False | | 5801fdd3-864d-4d46-b7c5-fac6b32a6687 | nova-scheduler | controller | internal | enabled | up | 2017-12-01T07:20:50.000000 | - | False | | 4e43a928-879c-46d5-a045-6aa8a705c363 | nova-compute | compute | nova | enabled | up | 2017-12-01T07:20:48.000000 | - | False | +--------------------------------------+------------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+ == Nova networks == usage: nova [--version] [--debug] [--os-cache] [--timings] [--os-region-name <region-name>] [--service-type <service-type>] [--service-name <service-name>] [--os-endpoint-type <endpoint-type>] [--os-compute-api-version <compute-api-ver>] [--endpoint-override <bypass-url>] [--profile HMAC_KEY] [--insecure] [--os-cacert <ca-certificate>] [--os-cert <certificate>] [--os-key <key>] [--timeout <seconds>] [--os-auth-type <name>] [--os-auth-url OS_AUTH_URL] [--os-domain-id OS_DOMAIN_ID] [--os-domain-name OS_DOMAIN_NAME] [--os-project-id OS_PROJECT_ID] [--os-project-name OS_PROJECT_NAME] [--os-project-domain-id OS_PROJECT_DOMAIN_ID] [--os-project-domain-name OS_PROJECT_DOMAIN_NAME] [--os-trust-id OS_TRUST_ID] [--os-default-domain-id OS_DEFAULT_DOMAIN_ID] [--os-default-domain-name OS_DEFAULT_DOMAIN_NAME] [--os-user-id 
OS_USER_ID] [--os-username OS_USERNAME] [--os-user-domain-id OS_USER_DOMAIN_ID] [--os-user-domain-name OS_USER_DOMAIN_NAME] [--os-password OS_PASSWORD] <subcommand> ... error: argument <subcommand>: invalid choice: u'network-list' Try 'nova help ' for more information. == Nova instance flavors == +----+------+-----------+------+-----------+------+-------+-------------+-----------+ | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | +----+------+-----------+------+-----------+------+-------+-------------+-----------+ +----+------+-----------+------+-----------+------+-------+-------------+-----------+ == Nova instances == +----+------+-----------+--------+------------+-------------+----------+ | ID | Name | Tenant ID | Status | Task State | Power State | Networks | +----+------+-----------+--------+------------+-------------+----------+ +----+------+-----------+--------+------------+-------------+----------+ [root@controller ~]#
SELinux work (needs verification)
[root@controller ~]# setsebool -P httpd_can_network_connect on
The Domain field is not visible on the login page.
// This looks like it is caused by OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True; testing needed!
With iptables disabled, the Domain field appears normally.
Domain : default
User Name : admin
Password : ADMIN_PASS
After logging in
Block Storage service Cinder
If you have a separate storage node, this work must be done on that node.
This test setup consists of a controller node and a compute node; everything except nova-compute was tested on the controller node.
https://docs.openstack.org/ocata/ko_KR/install-guide-rdo/cinder.html
https://docs.openstack.org/cinder/pike/install/
Work on the controller node
Create the cinder database and database user
[root@controller ~]# mysql -uroot -p Enter password: Welcome to the MariaDB monitor. Commands end with ; or \g. Your MariaDB connection id is 26 Server version: 10.1.20-MariaDB MariaDB Server Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. MariaDB [(none)]> CREATE DATABASE cinder; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> flush privileges; MariaDB [(none)]> quit; Bye [root@controller ~]#
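To confirm the grants took effect, you can connect as the cinder user over the network interface (exercising the 'cinder'@'%' grant); a quick check, assuming the CINDER_DBPASS set above:
[root@controller ~]# mysql -ucinder -pCINDER_DBPASS -h controller -e "SHOW DATABASES;"
The cinder database should appear in the output.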
Create the OpenStack cinder user; enter CINDER_PASS at the prompt
[root@controller ~]# openstack user create --domain default --password-prompt cinder User Password: Repeat User Password: +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | 64384bdac78b4188a0eebd245015cb72 | | name | cinder | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ [root@controller ~]#
Add the admin role
[root@controller ~]# openstack role add --project service --user cinder admin
Create the cinderv2 and cinderv3 service entities
[root@controller ~]# openstack service create --name cinderv2 \ --description "OpenStack Block Storage" volumev2 +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | OpenStack Block Storage | | enabled | True | | id | 063e877108da473a9100a2720bd60aa7 | | name | cinderv2 | | type | volumev2 | +-------------+----------------------------------+ [root@controller ~]# openstack service create --name cinderv3 \ --description "OpenStack Block Storage" volumev3 +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | OpenStack Block Storage | | enabled | True | | id | 69309bb71f9641d69ba1f02619640b44 | | name | cinderv3 | | type | volumev3 | +-------------+----------------------------------+ [root@controller ~]#
Create the Block Storage service API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne \ volumev2 public http://controller:8776/v2/%\(project_id\)s +--------------+------------------------------------------+ | Field | Value | +--------------+------------------------------------------+ | enabled | True | | id | 549a0aaedc01449d8cd5c32ecc99417b | | interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | 063e877108da473a9100a2720bd60aa7 | | service_name | cinderv2 | | service_type | volumev2 | | url | http://controller:8776/v2/%(project_id)s | +--------------+------------------------------------------+ [root@controller ~]# openstack endpoint create --region RegionOne \ volumev2 internal http://controller:8776/v2/%\(project_id\)s +--------------+------------------------------------------+ | Field | Value | +--------------+------------------------------------------+ | enabled | True | | id | 71d6d8fc61c248e98a658a04c52e4053 | | interface | internal | | region | RegionOne | | region_id | RegionOne | | service_id | 063e877108da473a9100a2720bd60aa7 | | service_name | cinderv2 | | service_type | volumev2 | | url | http://controller:8776/v2/%(project_id)s | +--------------+------------------------------------------+ [root@controller ~]# openstack endpoint create --region RegionOne \ volumev2 admin http://controller:8776/v2/%\(project_id\)s +--------------+------------------------------------------+ | Field | Value | +--------------+------------------------------------------+ | enabled | True | | id | 5df1ebc0ff964941ba20ce9741bb54fa | | interface | admin | | region | RegionOne | | region_id | RegionOne | | service_id | 063e877108da473a9100a2720bd60aa7 | | service_name | cinderv2 | | service_type | volumev2 | | url | http://controller:8776/v2/%(project_id)s | +--------------+------------------------------------------+ [root@controller ~]#
[root@controller ~]# openstack endpoint create --region RegionOne \ volumev3 public http://controller:8776/v3/%\(project_id\)s +--------------+------------------------------------------+ | Field | Value | +--------------+------------------------------------------+ | enabled | True | | id | feb6fadb6f3c44328cb1e21b9a95cf8c | | interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | 69309bb71f9641d69ba1f02619640b44 | | service_name | cinderv3 | | service_type | volumev3 | | url | http://controller:8776/v3/%(project_id)s | +--------------+------------------------------------------+ [root@controller ~]# openstack endpoint create --region RegionOne \ volumev3 internal http://controller:8776/v3/%\(project_id\)s +--------------+------------------------------------------+ | Field | Value | +--------------+------------------------------------------+ | enabled | True | | id | f75311f0f96243a3b9dad05f624aabad | | interface | internal | | region | RegionOne | | region_id | RegionOne | | service_id | 69309bb71f9641d69ba1f02619640b44 | | service_name | cinderv3 | | service_type | volumev3 | | url | http://controller:8776/v3/%(project_id)s | +--------------+------------------------------------------+ [root@controller ~]# openstack endpoint create --region RegionOne \ volumev3 admin http://controller:8776/v3/%\(project_id\)s +--------------+------------------------------------------+ | Field | Value | +--------------+------------------------------------------+ | enabled | True | | id | 389f36c141ba4b08b7699e164610aba4 | | interface | admin | | region | RegionOne | | region_id | RegionOne | | service_id | 69309bb71f9641d69ba1f02619640b44 | | service_name | cinderv3 | | service_type | volumev3 | | url | http://controller:8776/v3/%(project_id)s | +--------------+------------------------------------------+ [root@controller ~]#
Install openstack-cinder
[root@controller ~]# yum install -y openstack-cinder
Edit cinder.conf
[root@controller ~]# vi /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 40.0.0.101 [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp
Populate the Block Storage database
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
The message 'Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT"' is printed. Should logdir be changed to log-dir? (to be tested later)
Edit nova.conf
[root@controller ~]# vi /etc/nova/nova.conf [cinder] os_region_name = RegionOne
Restart the Compute API service
[root@controller ~]# systemctl restart openstack-nova-api
Enable and start the Block Storage services
[root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service [root@controller ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
The local cinder volume still needs checking: note that only cinder-scheduler appears below; cinder-volume is expected to register once the LVM backend is configured.
[root@controller ~]# openstack volume service list +------------------+------------+------+---------+-------+----------------------------+ | Binary | Host | Zone | Status | State | Updated At | +------------------+------------+------+---------+-------+----------------------------+ | cinder-scheduler | controller | nova | enabled | up | 2017-11-30T09:01:45.000000 | +------------------+------------+------+---------+-------+----------------------------+ [root@controller ~]#
Work after adding a separate disk for cinder volumes to the cinder node
Reference pages: https://docs.openstack.org/ocata/ko_KR/install-guide-rdo/cinder-storage-install.html
https://docs.openstack.org/cinder/pike/install/cinder-storage-install-rdo.html
Notes for configuring a separate storage node
If you configure Cinder on the controller node, you can skip the following.
Work on the cinder node
A storage node requires additional work: using iSCSI and configuring LVM.
Install LVM
[root@cinder ~]# yum install -y lvm2
Enable and start LVM
[root@cinder ~]# systemctl enable lvm2-lvmetad.service [root@cinder ~]# systemctl start lvm2-lvmetad.service
Check the disk information on the storage node
[root@cinder ~]# fdisk -l Disk /dev/vda: 107.4 GB, 107374182400 bytes, 209715200 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk label type: dos Disk identifier: 0x0009fed1 Device Boot Start End Blocks Id System /dev/vda1 * 2048 4196351 2097152 83 Linux /dev/vda2 4196352 12584959 4194304 82 Linux swap / Solaris /dev/vda3 12584960 209715199 98565120 83 Linux Disk /dev/vdb: 53.7 GB, 53687091200 bytes, 104857600 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes [root@cinder ~]#
The newly added disk is /dev/vdb
Create the physical volume
[root@cinder ~]# pvcreate /dev/vdb Physical volume "/dev/vdb" successfully created. [root@cinder ~]#
Create the cinder-volumes volume group
[root@cinder ~]# vgcreate cinder-volumes /dev/vdb Volume group "cinder-volumes" successfully created [root@cinder ~]#
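The new physical volume and volume group can be verified with pvs and vgs; /dev/vdb should show as a PV belonging to cinder-volumes:
[root@cinder ~]# pvs
[root@cinder ~]# vgs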
Edit the LVM filter in lvm.conf
[root@cinder ~]# vi /etc/lvm/lvm.conf # Cinder Settings filter = [ "a/vdb/", "r|.*/|" ]
If the operating system disk also uses LVM, it must be added to the filter as well. On this test machine the OS disk is a plain (non-LVM) volume.
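For reference, a hypothetical filter for the case where the OS disk is also an LVM physical volume on /dev/vda, following the pattern in the official guide:
filter = [ "a/vda/", "a/vdb/", "r/.*/" ]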
lvmdiskscan output before setting the filter
[root@cinder ~]# lvmdiskscan /dev/vda1 [ 2.00 GiB] /dev/vda2 [ 4.00 GiB] /dev/vda3 [ <94.00 GiB] /dev/vdb [ 50.00 GiB] LVM physical volume 0 disks 3 partitions 1 LVM physical volume whole disk 0 LVM physical volumes [root@cinder ~]#
After setting the filter
[root@cinder ~]# lvmdiskscan /dev/vdb [ 50.00 GiB] LVM physical volume 0 disks 0 partitions 1 LVM physical volume whole disk 0 LVM physical volumes [root@cinder ~]#
Install the packages
[root@cinder ~]# yum install -y openstack-cinder targetcli python-keystone
Edit cinder.conf
[root@cinder ~]# vi /etc/cinder/cinder.conf # my_ip is the management interface IP; here the controller node IP is used [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 40.0.0.101 enabled_backends = lvm glance_api_servers = http://controller:9292 [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = cinder password = CINDER_PASS # the [lvm] section does not exist by default; create it [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver volume_group = cinder-volumes iscsi_protocol = iscsi iscsi_helper = lioadm [oslo_concurrency] lock_path = /var/lib/cinder/tmp
Enable and start the cinder volume service
[root@cinder ~]# systemctl enable openstack-cinder-volume.service target.service [root@cinder ~]# systemctl start openstack-cinder-volume.service target.service
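Once the volume service is up, a cinder-volume entry (with a host of the form <hostname>@lvm) should join cinder-scheduler in the service list on the controller; a quick check:
[root@controller ~]# openstack volume service list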
— Testing in progress
Change the provider network device from eth0 to eth1
Controller node work: set BOOTPROTO=none and comment out the network address information
[root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini physical_interface_mappings = provider:eth1 [root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1 TYPE=Ethernet BOOTPROTO=none DEVICE=eth1 ONBOOT=yes #IPADDR=192.168.122.101 #NETMASK=255.255.255.0 #GATEWAY=192.168.122.1 [root@controller ~]# [root@controller ~]# init 6
Compute node work: set BOOTPROTO=none and comment out the network address information
[root@compute ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini physical_interface_mappings = provider:eth1 [root@compute ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1 TYPE=Ethernet BOOTPROTO=none DEVICE=eth1 ONBOOT=yes #IPADDR=192.168.122.102 #NETMASK=255.255.255.0 #GATEWAY=192.168.122.1 [root@compute ~]# init 6
After the systems reboot, verify that neutron-linuxbridge-agent is running
controller node [root@controller ~]# systemctl status neutron-linuxbridge-agent ● neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2017-12-01 08:50:02 KST; 5min ago Process: 795 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS) Main PID: 829 (neutron-linuxbr) CGroup: /system.slice/neutron-linuxbridge-agent.service ├─ 829 /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent --confi... ├─1378 sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf └─1379 /usr/bin/python2 /usr/bin/neutron-rootwrap-daemon /etc/neut... Dec 01 08:50:02 controller systemd[1]: Starting OpenStack Neutron Linux Bri..... Dec 01 08:50:02 controller neutron-enable-bridge-firewall.sh[795]: net.bridge... Dec 01 08:50:02 controller neutron-enable-bridge-firewall.sh[795]: net.bridge... Dec 01 08:50:02 controller systemd[1]: Started OpenStack Neutron Linux Brid...t. Dec 01 08:50:06 controller neutron-linuxbridge-agent[829]: Guru meditation no... Dec 01 08:50:22 controller sudo[1378]: neutron : TTY=unknown ; PWD=/ ; USE...nf Hint: Some lines were ellipsized, use -l to show in full. [root@controller ~]# compute node [root@compute ~]# systemctl status neutron-linuxbridge-agent ● neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2017-12-01 08:49:35 KST; 6min ago Process: 785 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS) Main PID: 804 (neutron-linuxbr) CGroup: /system.slice/neutron-linuxbridge-agent.service ├─804 /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent --config... ├─963 sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf └─964 /usr/bin/python2 /usr/bin/neutron-rootwrap-daemon /etc/neutr... Dec 01 08:49:35 compute systemd[1]: Starting OpenStack Neutron Linux Bridge..... Dec 01 08:49:35 compute neutron-enable-bridge-firewall.sh[785]: net.bridge.br... Dec 01 08:49:35 compute neutron-enable-bridge-firewall.sh[785]: net.bridge.br... Dec 01 08:49:35 compute systemd[1]: Started OpenStack Neutron Linux Bridge ...t. Dec 01 08:49:36 compute neutron-linuxbridge-agent[804]: Guru meditation now r... Dec 01 08:49:38 compute sudo[963]: neutron : TTY=unknown ; PWD=/ ; USER=ro...nf Hint: Some lines were ellipsized, use -l to show in full. [root@compute ~]#