Reference: https://www.freebsd.org/doc/handbook/network-dns.html

 

Unbound is available in the base system, but by default it serves the local machine only.

DNS Server Configuration in FreeBSD 10.0 and Later
In FreeBSD 10.0, BIND has been replaced with Unbound. Unbound is a validating caching resolver only. If an authoritative server is needed, many are available from the Ports Collection.

Unbound is provided in the FreeBSD base system. By default, it will provide DNS resolution to the local machine only. While the base system package can be configured to provide resolution services beyond the local machine, it is recommended that such requirements be addressed by installing Unbound from the FreeBSD Ports Collection.

To enable Unbound, add the following to /etc/rc.conf:

local_unbound_enable="YES"
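
After enabling it (a reboot also works), the local resolver can be started and tested. A minimal sketch — service(8) and drill(1) are part of the FreeBSD base system:

root@bsd10:~ # service local_unbound start
root@bsd10:~ # drill @127.0.0.1 www.freebsd.org
# drill should return an answer section served by the local caching resolver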

 

 

bind99 install

root@bsd10:~ # whereis bind99
bind99: /usr/ports/dns/bind99
root@bsd10:~ # cd /usr/ports/dns/bind99
root@bsd10:/usr/ports/dns/bind99 # make install clean
root@bsd10:/usr/ports/dns/bind99 # vi /etc/rc.conf

named_enable="YES"
root@bsd10:/usr/ports/dns/bind99 # init 6

 

Verifying that the named daemon is running

root@bsd10:~ # sockstat -4 |grep -i named
bind     named      464   20 tcp4   127.0.0.1:53          *:*
bind     named      464   21 tcp4   127.0.0.1:953         *:*
bind     named      464   512 udp4  127.0.0.1:53          *:*
root@bsd10:~ #

 

The namedb directory

/usr/local/etc/namedb
root@bsd10:~ # cd /usr/local/etc/namedb/
root@bsd10:/usr/local/etc/namedb # vi named.conf
//      listen-on       { 127.0.0.1; };

Comment out the default listen-on directive (or list the server's own address) so that named answers queries from other hosts, not just loopback.

Add the following include at the very bottom of the file:
include "/usr/local/etc/namedb/named.conf.local";

 

Creating named.conf.local and the zone file

root@bsd10:/usr/local/etc/namedb # vi named.conf.local
zone "test.com" {
type master;
file "/usr/local/etc/namedb/working/test.com";
};
root@bsd10:/usr/local/etc/namedb # cd working/
root@bsd10:/usr/local/etc/namedb/working # vi test.com
$TTL 3600        ; 1 hour default TTL
@               IN      SOA      ns.test.com. mail.test.com. (
                                2006051501      ; Serial
                                10800           ; Refresh
                                3600            ; Retry
                                604800          ; Expire
                                300             ; Negative Response TTL
                        )
; DNS Servers
                IN      NS      ns.test.com.
                IN      MX 10   mail.test.com.
                IN      A       192.168.192.200

; Machine Names
ns              IN      A       192.168.192.200
mail            IN      A       192.168.192.200


; Aliases
www             IN      CNAME   test.com.
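
Before restarting named, the configuration and the new zone can be validated. A hedged sketch — named-checkconf and named-checkzone are installed by the bind99 port (the /usr/local/sbin paths are assumed):

root@bsd10:/usr/local/etc/namedb # /usr/local/sbin/named-checkconf named.conf
# no output means the configuration syntax is OK
root@bsd10:/usr/local/etc/namedb # /usr/local/sbin/named-checkzone test.com working/test.com
# should report the loaded serial (2006051501) followed by OK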

 

Updating resolv.conf and running a ping test

root@bsd10:~ # vi /etc/resolv.conf
nameserver 192.168.192.200
root@bsd10:~ # service named restart
Stopping named.
Waiting for PIDS: 2540.
Starting named.
root@bsd10:~ #
root@bsd10:~ # ping test.com
PING test.com (192.168.192.200): 56 data bytes
64 bytes from 192.168.192.200: icmp_seq=0 ttl=64 time=0.023 ms
64 bytes from 192.168.192.200: icmp_seq=1 ttl=64 time=0.036 ms
^C
--- test.com ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.023/0.029/0.036/0.006 ms
root@bsd10:~ #

 

On another system, update resolv.conf and run a dig test.

[root@centos74 named]# dig www.test.com

; <<>> DiG 9.9.4-RedHat-9.9.4-51.el7_4.1 <<>> www.test.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55568
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 1, ADDITIONAL: 2

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.test.com.                  IN      A

;; ANSWER SECTION:
www.test.com.           3600    IN      CNAME   test.com.
test.com.               3600    IN      A       192.168.192.200

;; AUTHORITY SECTION:
test.com.               3600    IN      NS      ns.test.com.

;; ADDITIONAL SECTION:
ns.test.com.            3600    IN      A       192.168.192.200

;; Query time: 0 msec
;; SERVER: 192.168.192.200#53(192.168.192.200)
;; WHEN: Fri Dec 22 01:15:34 KST 2017
;; MSG SIZE  rcvd: 104

[root@centos74 named]#

 

 

DNS Server

BIND Open Source DNS Server

Testing was done on CentOS 7.

 

 

Installing BIND

[root@centos74 ~]# yum install bind-*

 

Configuring BIND

[root@centos74 ~]# vi /etc/named.conf
options {
        listen-on port 53 { any; };
        listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query     { any; };
        recursion       yes;

 

Adding domains (zones)

[root@centos74 ~]# vi /etc/named.rfc1912.zones

zone "test.com" In {
        type master;
        file "test.com";
        allow-update { none; };
};

zone "192.168.192.in-addr.arpa" IN {
        type master;
        file "test.com.rev";
        allow-update { none; };
};

zone "a.com" In {
        type master;
        file "a.com";
        allow-update { none; };
};

#zone "192.168.191.in-addr.arpa" IN {
#       type master;
#       file "a.com.rev";
#       allow-update { none; };
#};

zone "b.com" In {
        type master;
        file "b.com";
        allow-update { none; };
};

 

Creating the zone files

[root@centos74 ~]# cd /var/named/
[root@centos74 named]# cp named.empty test.com
[root@centos74 named]# cp named.empty a.com
[root@centos74 named]# cp named.empty b.com
[root@centos74 named]# vi test.com
$TTL 3H
@       IN SOA  @ ns.test.com. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        IN      NS      ns.test.com.
        IN      A       192.168.192.100
ns      IN      A       192.168.192.100
www     IN      A       192.168.192.100

[root@centos74 named]# vi a.com
$TTL 3H
@       IN SOA  @ ns.a.com. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        IN      NS      ns.a.com.
        IN      A       192.168.192.100
ns      IN      A       192.168.192.100
www     IN      A       192.168.192.100

[root@centos74 named]# vi b.com
$TTL 3H
@       IN SOA  @ ns.b.com. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        IN      NS      ns.b.com.
        IN      A       192.168.192.100
ns      IN      A       192.168.192.100
www     IN      A       192.168.192.100

 

Reverse zone configuration

[root@centos74 named]# vi test.com.rev
$TTL 1D
@       IN SOA  @ ns.test.com. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        IN      NS      ns.test.com.
100     IN      PTR     ns.test.com.

 

Changing ownership

[root@centos74 named]# chown root:named a.com
[root@centos74 named]# chown root:named b.com
[root@centos74 named]# chown root:named test.com
[root@centos74 named]# chown root:named test.com.rev
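
Before starting the service, the configuration and every zone can be checked in one pass. A sketch using named-checkconf from the bind package:

[root@centos74 named]# named-checkconf -z /etc/named.conf
# -z also loads each zone listed in the config; every zone should report "loaded serial 0"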

 

 

named enable & start

[root@centos74 named]# systemctl enable named
Created symlink from /etc/systemd/system/multi-user.target.wants/named.service to /usr/lib/systemd/system/named.service.
[root@centos74 named]# systemctl start named
[root@centos74 named]# systemctl status named
● named.service - Berkeley Internet Name Domain (DNS)
   Loaded: loaded (/usr/lib/systemd/system/named.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2017-12-20 10:18:27 KST; 4s ago
  Process: 10934 ExecStop=/bin/sh -c /usr/sbin/rndc stop > /dev/null 2>&1 || /bin/kill -TERM $MAINPID (code=exited, status=0/SUCCESS)
  Process: 10949 ExecStart=/usr/sbin/named -u named -c ${NAMEDCONF} $OPTIONS (code=exited, status=0/SUCCESS)
  Process: 10946 ExecStartPre=/bin/bash -c if [ ! "$DISABLE_ZONE_CHECKING" == "yes" ]; then /usr/sbin/named-checkconf -z "$NAMEDCONF"; else echo "Checking of zone files is disabled"; fi (code=exited, status=0/SUCCESS)
 Main PID: 10951 (named)
   CGroup: /system.slice/named.service
           └─10951 /usr/sbin/named -u named -c /etc/named.conf

Dec 20 10:18:27 centos74 named[10951]: zone test.com/IN: loaded serial 0
Dec 20 10:18:27 centos74 named[10951]: zone 1.0.0.127.in-addr.arpa/IN: loaded serial 0
Dec 20 10:18:27 centos74 named[10951]: zone 192.168.192.in-addr.arpa/IN: loaded serial 0
Dec 20 10:18:27 centos74 named[10951]: zone b.com/IN: loaded serial 0
Dec 20 10:18:27 centos74 named[10951]: all zones loaded
Dec 20 10:18:27 centos74 named[10951]: running
Dec 20 10:18:27 centos74 named[10951]: zone test.com/IN: sending notifies (serial 0)
Dec 20 10:18:27 centos74 named[10951]: zone b.com/IN: sending notifies (serial 0)
Dec 20 10:18:27 centos74 named[10951]: zone 192.168.192.in-addr.arpa/IN: sending notifies (serial 0)
Dec 20 10:18:27 centos74 named[10951]: zone a.com/IN: sending notifies (serial 0)
[root@centos74 named]#

 

Ping Test

[root@centos74 named]# vi /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search localdomain
nameserver 192.168.192.100
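
A direct lookup against the new server is a quick sanity check before relying on ping. A sketch, assuming the zone data above:

[root@centos74 named]# dig @192.168.192.100 www.test.com +short
# expect the A record 192.168.192.100
[root@centos74 named]# ping -c 2 www.test.com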

 

 

bind-chroot — to be tested later

 

 

 

CentOS 7: Installing Apache from Source

 

Reference: http://publib.boulder.ibm.com/httpserv/manual60/install.html

https://mbrownnyc.wordpress.com/technology-solutions/create-a-secure-linux-web-server/install-and-configure-apache-from-source/

 

 

Remove any previously installed Apache packages.

[root@centos74 ~]# yum remove -y httpd httpd-*

 

Install the packages required for the source build.

[root@centos74 ~]# yum install -y make gcc gcc-c++ autoconf automake libtool pkgconfig findutils openssl openssl-devel openldap-devel pcre-devel libxml2-devel lua-devel curl curl-devel libcurl-devel expat-devel flex

 

Download the packages (http://mirror.apache-kr.org/httpd/ , http://mirror.apache-kr.org/apr/ )

[root@centos74 ~]# wget http://mirror.apache-kr.org/httpd/httpd-2.2.34.tar.gz
[root@centos74 ~]# wget http://mirror.apache-kr.org/apr/apr-1.6.3.tar.gz
[root@centos74 ~]# wget http://mirror.apache-kr.org/apr/apr-util-1.6.1.tar.gz
[root@centos74 ~]# wget http://downloads.sourceforge.net/project/pcre/pcre/8.41/pcre-8.41.tar.gz

 

Installation

Installing apr-1.6.3
[root@centos74 ~]# tar xvf apr-1.6.3.tar.gz
[root@centos74 ~]# cd apr-1.6.3/
[root@centos74 apr-1.6.3]# ./configure --prefix=/usr/local/apr
[root@centos74 apr-1.6.3]# make && make install

Installing apr-util-1.6.1
[root@centos74 ~]# tar xvf apr-util-1.6.1.tar.gz
[root@centos74 ~]# cd apr-util-1.6.1/
[root@centos74 apr-util-1.6.1]# ./configure --with-apr=/usr/local/apr/
[root@centos74 apr-util-1.6.1]# make && make install

Installing pcre-8.41
[root@centos74 ~]# tar xvf pcre-8.41.tar.gz
[root@centos74 ~]# cd pcre-8.41/
[root@centos74 pcre-8.41]# ./configure --prefix=/usr/local/pcre
[root@centos74 pcre-8.41]# make && make install

Installing httpd-2.2.34
[root@centos74 ~]# tar xvf httpd-2.2.34.tar.gz
[root@centos74 ~]# cd httpd-2.2.34/
[root@centos74 httpd-2.2.34]# ./configure --enable-so --enable-mods-shared=most --enable-maintainer-mode --enable-deflate --enable-headers --enable-rewrite --enable-ssl --enable-proxy --enable-proxy-http --enable-proxy-ajp --enable-proxy-balancer --with-apr=/usr/local/apr --with-apr-util=/usr/local/apr --with-pcre=/usr/local/pcre --prefix=/usr/local/apache2
[root@centos74 httpd-2.2.34]# make && make install

 

Editing httpd.conf

[root@centos74 ~]# vi /usr/local/apache2/conf/httpd.conf
ServerName www.example.com:80
[root@centos74 ~]# /usr/local/apache2/bin/apachectl restart
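
The configuration can also be syntax-checked, and the default page fetched, before trusting the restart. A sketch:

[root@centos74 ~]# /usr/local/apache2/bin/apachectl -t
# "Syntax OK" is expected
[root@centos74 ~]# curl -I http://localhost/
# an HTTP/1.1 200 OK response for the default "It works!" page is expected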

 

Registering with systemd

[root@centos74 ~]# vi /etc/systemd/system/apache.service
[Unit]
Description=The Apache HTTP Server

[Service]
Type=forking
#EnvironmentFile=/usr/local/apache2/bin/envvars
PIDFile=/usr/local/apache2/logs/httpd.pid
ExecStart=/usr/local/apache2/bin/apachectl start
ExecReload=/usr/local/apache2/bin/apachectl graceful
ExecStop=/usr/local/apache2/bin/apachectl stop
KillSignal=SIGCONT
PrivateTmp=true


[Install]
WantedBy=multi-user.target

[root@centos74 ~]# systemctl daemon-reload
[root@centos74 ~]# systemctl enable apache

 

Starting Apache and checking its status

[root@centos74 bin]# systemctl start apache
[root@centos74 bin]# systemctl status apache
● apache.service - The Apache HTTP Server
   Loaded: loaded (/etc/systemd/system/apache.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-12-16 00:00:26 KST; 21s ago
  Process: 2534 ExecStop=/usr/local/apache2/bin/apachectl stop (code=exited, status=0/SUCCESS)
  Process: 2541 ExecStart=/usr/local/apache2/bin/apachectl start (code=exited, status=0/SUCCESS)
 Main PID: 2544 (httpd)
   CGroup: /system.slice/apache.service
           ├─2544 /usr/local/apache2/bin/httpd -k start
           ├─2545 /usr/local/apache2/bin/httpd -k start
           ├─2546 /usr/local/apache2/bin/httpd -k start
           ├─2547 /usr/local/apache2/bin/httpd -k start
           ├─2548 /usr/local/apache2/bin/httpd -k start
           └─2549 /usr/local/apache2/bin/httpd -k start

Dec 16 00:00:26 centos74 systemd[1]: Starting The Apache HTTP Server...
Dec 16 00:00:26 centos74 systemd[1]: PID file /usr/local/apache2/logs/httpd.pid not readable (yet?) after start.
Dec 16 00:00:26 centos74 systemd[1]: Started The Apache HTTP Server.
[root@centos74 bin]#

 

 

 

Reference: https://wiki.rvijay.in/index.php/Configuring_iSCSI_target_using_targetcli

https://www.rootusers.com/how-to-configure-an-iscsi-target-and-initiator-in-linux/

http://atodorov.org/blog/2015/04/07/how-to-configure-iscsi-target-on-red-hat-enterprise-linux-7/

 

iscsi-server

Installing targetcli

[root@centos74 ~]# yum install -y targetcli

 

Starting the target service

[root@centos74 ~]# systemctl start target

 

[root@centos74 ~]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb46
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/>

 

There are a few more backstore types, but only the most commonly used ones are covered here.

fileio: backed by a regular file, e.g. disk.img

block: backed by a block device, e.g. an LVM volume or /dev/sdb1

Running the fileio test

[root@centos74 ~]# mkdir /images
[root@centos74 ~]# targetcli
/> backstores/fileio create disk1 /images/disk1.img 10G
Created fileio disk1 with size 10737418240
/>

Objects must be created from the /> location; you can move around the tree with the cd command.

/> iscsi/ create iqn.2017-12.com.example:target1
Created target iqn.2017-12.com.example:target1.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/> cd iscsi/iqn.2017-12.com.example:target1/tpg1/
/iscsi/iqn.20...:target1/tpg1> ls
o- tpg1 ................................................. [no-gen-acls, no-auth]
  o- acls ............................................................ [ACLs: 0]
  o- luns ............................................................ [LUNs: 0]
  o- portals ...................................................... [Portals: 1]
    o- 0.0.0.0:3260 ....................................................... [OK]
/iscsi/iqn.20...:target1/tpg1> portals/ create
Using default IP port 3260
Binding to INADDR_ANY (0.0.0.0)
This NetworkPortal already exists in configFS
/iscsi/iqn.20...:target1/tpg1> luns/ create /backstores/fileio/disk1
Created LUN 0.

 

Disabling ACLs and enabling read/write mode

/iscsi/iqn.20...:target1/tpg1> set attribute authentication=0
Parameter authentication is now '0'.
/iscsi/iqn.20...:target1/tpg1> set attribute demo_mode_write_protect=0
Parameter demo_mode_write_protect is now '0'.
/iscsi/iqn.20...:target1/tpg1> set attribute generate_node_acls=1
Parameter generate_node_acls is now '1'.
/iscsi/iqn.20...:target1/tpg1> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
[root@centos74 ~]#
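
The saved layout can be reviewed without re-entering the shell. A sketch — targetcli also accepts a command directly on the command line:

[root@centos74 ~]# targetcli ls /iscsi
# prints the target, TPG, ACLs, LUNs, and portals in the same tree form as ls inside the shell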

 

Running the block test

Partition /dev/sdb as the Linux LVM type (creating the LVM volumes without changing the partition type also works).

[root@centos74 ~]# fdisk  -l

Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000bfdf3

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     4196351     2097152   83  Linux
/dev/sda2         4196352    12584959     4194304   82  Linux swap / Solaris
/dev/sda3        12584960    83886079    35650560   83  Linux
[root@centos74 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x4f132fd5.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): wq
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@centos74 ~]#

 

Creating the LVM volumes

[root@centos74 ~]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created.
[root@centos74 ~]# vgcreate vg00 /dev/sdb1
  Volume group "vg00" successfully created
[root@centos74 ~]#
[root@centos74 ~]# lvcreate -L 1G -n lv-vol1 vg00
  Logical volume "lv-vol1" created.
[root@centos74 ~]# lvcreate -L 1G -n lv-vol2 vg00
  Logical volume "lv-vol2" created.
[root@centos74 ~]# lvcreate -L 1G -n lv-vol3 vg00
  Logical volume "lv-vol3" created.
[root@centos74 ~]# lvs
  LV      VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv-vol1 vg00 -wi-a----- 1.00g
  lv-vol2 vg00 -wi-a----- 1.00g
  lv-vol3 vg00 -wi-a----- 1.00g
[root@centos74 ~]#

 

Creating the iSCSI block backstores

[root@centos74 ~]# targetcli
targetcli shell version 2.1.fb46
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/> cd /backstores/block
/backstores/block> create lv-vol1 /dev/vg00/lv-vol1
Created block storage object lv-vol1 using /dev/vg00/lv-vol1.
/backstores/block> create lv-vol2 /dev/vg00/lv-vol2
Created block storage object lv-vol2 using /dev/vg00/lv-vol2.
/backstores/block> create lv-vol3 /dev/vg00/lv-vol3
Created block storage object lv-vol3 using /dev/vg00/lv-vol3.
/backstores/block>

 

Creating the target WWN (cd iscsi, or iscsi/ create)

/iscsi> create iqn.2017-12.com.example:target2
Created target iqn.2017-12.com.example:target2.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi>

 

Creating the LUNs

/iscsi> cd iqn.2017-12.com.example:target2/
/iscsi/iqn.20...ample:target2> ls
o- iqn.2017-12.com.example:target2 .......................................................................... [TPGs: 1]
  o- tpg1 ...................................................................................... [no-gen-acls, no-auth]
    o- acls ................................................................................................. [ACLs: 0]
    o- luns ................................................................................................. [LUNs: 0]
    o- portals ........................................................................................... [Portals: 1]
      o- 0.0.0.0:3260 ............................................................................................ [OK]
/iscsi/iqn.20...ample:target2> cd tpg1/luns
/iscsi/iqn.20...et2/tpg1/luns> create /backstores/block/lv-vol1
Created LUN 0.
/iscsi/iqn.20...et2/tpg1/luns> create /backstores/block/lv-vol2
Created LUN 1.
/iscsi/iqn.20...et2/tpg1/luns> create /backstores/block/lv-vol3
Created LUN 2.
/iscsi/iqn.20...et2/tpg1/luns>
/iscsi/iqn.20...et2/tpg1/luns> ls
o- luns ..................................................................................................... [LUNs: 3]
  o- lun0 ...................................................... [block/lv-vol1 (/dev/vg00/lv-vol1) (default_tg_pt_gp)]
  o- lun1 ...................................................... [block/lv-vol2 (/dev/vg00/lv-vol2) (default_tg_pt_gp)]
  o- lun2 ...................................................... [block/lv-vol3 (/dev/vg00/lv-vol3) (default_tg_pt_gp)]
/iscsi/iqn.20...et2/tpg1/luns>

 

Disabling ACLs and enabling read/write mode

/iscsi/iqn.20...et2/tpg1/luns> cd

/iscsi/iqn.20...:target2/tpg1> set attribute authentication=0
Parameter authentication is now '0'.
/iscsi/iqn.20...:target2/tpg1> set attribute demo_mode_write_protect=0
Parameter demo_mode_write_protect is now '0'.
/iscsi/iqn.20...:target2/tpg1> set attribute generate_node_acls=1
Parameter generate_node_acls is now '1'.
/iscsi/iqn.20...:target2/tpg1> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
[root@centos74 ~]#

 

 

iscsi-Client

[root@centos7-test ~]# yum install -y iscsi-initiator-utils

 

fileio Test

Discovering the iSCSI targets

[root@centos7-test ~]# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.186.100 --discover
192.168.186.100:3260,1 iqn.2017-12.com.example:target1
[root@centos7-test ~]#

 

iscsi login 

[root@centos7-test ~]# iscsiadm --mode node --targetname iqn.2017-12.com.example:target1 --portal 192.168.186.100 --login
Logging in to [iface: default, target: iqn.2017-12.com.example:target1, portal: 192.168.186.100,3260] (multiple)
Login to [iface: default, target: iqn.2017-12.com.example:target1, portal: 192.168.186.100,3260] successful.
[root@centos7-test ~]#

 

Checking SCSI devices

[root@centos7-test ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: VMware,  Model: VMware Virtual S Rev: 1.0
  Type:   Direct-Access                    ANSI  SCSI revision: 02
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: NECVMWar Model: VMware SATA CD01 Rev: 1.00
  Type:   CD-ROM                           ANSI  SCSI revision: 05
Host: scsi34 Channel: 00 Id: 00 Lun: 00
  Vendor: LIO-ORG  Model: disk1            Rev: 4.0
  Type:   Direct-Access                    ANSI  SCSI revision: 05
[root@centos7-test ~]#

 

Checking with fdisk

[root@centos7-test ~]# fdisk  -l

Disk /dev/sda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000bfdf3

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     4196351     2097152   83  Linux
/dev/sda2         4196352    12584959     4194304   82  Linux swap / Solaris
/dev/sda3        12584960    83886079    35650560   83  Linux

Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes

[root@centos7-test ~]#

 

iscsi logout

[root@centos7-test ~]# iscsiadm --mode node --targetname iqn.2017-12.com.example:target1 --portal 192.168.186.100 -u
Logging out of session [sid: 2, target: iqn.2017-12.com.example:target1, portal: 192.168.186.100,3260]
Logout of [sid: 2, target: iqn.2017-12.com.example:target1, portal: 192.168.186.100,3260] successful.
[root@centos7-test ~]# iscsiadm --mode node --targetname iqn.2017-12.com.example:target1 --portal 192.168.186.100 -o delete

 

Checking with fdisk

[root@centos7-test ~]# fdisk  -l

Disk /dev/sda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000bfdf3

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     4196351     2097152   83  Linux
/dev/sda2         4196352    12584959     4194304   82  Linux swap / Solaris
/dev/sda3        12584960    83886079    35650560   83  Linux
[root@centos7-test ~]#

 

block Test

Discovering the iSCSI targets

[root@centos7-test ~]# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.186.100 --discover
192.168.186.100:3260,1 iqn.2017-12.com.example:target1
192.168.186.100:3260,1 iqn.2017-12.com.example:target2
[root@centos7-test ~]#

 

iscsi login 

[root@centos7-test ~]# iscsiadm --mode node --targetname iqn.2017-12.com.example:target2 --portal 192.168.186.100 --login
Logging in to [iface: default, target: iqn.2017-12.com.example:target2, portal: 192.168.186.100,3260] (multiple)
Login to [iface: default, target: iqn.2017-12.com.example:target2, portal: 192.168.186.100,3260] successful.
[root@centos7-test ~]#

 

Checking SCSI devices

[root@centos7-test ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: VMware,  Model: VMware Virtual S Rev: 1.0
  Type:   Direct-Access                    ANSI  SCSI revision: 02
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: NECVMWar Model: VMware SATA CD01 Rev: 1.00
  Type:   CD-ROM                           ANSI  SCSI revision: 05
Host: scsi35 Channel: 00 Id: 00 Lun: 00
  Vendor: LIO-ORG  Model: lv-vol1          Rev: 4.0
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi35 Channel: 00 Id: 00 Lun: 01
  Vendor: LIO-ORG  Model: lv-vol2          Rev: 4.0
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi35 Channel: 00 Id: 00 Lun: 02
  Vendor: LIO-ORG  Model: lv-vol3          Rev: 4.0
  Type:   Direct-Access                    ANSI  SCSI revision: 05
[root@centos7-test ~]#

 

Checking with fdisk

[root@centos7-test ~]# fdisk -l

Disk /dev/sda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000bfdf3

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     4196351     2097152   83  Linux
/dev/sda2         4196352    12584959     4194304   82  Linux swap / Solaris
/dev/sda3        12584960    83886079    35650560   83  Linux

Disk /dev/sdb: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes


Disk /dev/sdc: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes


Disk /dev/sdd: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes

[root@centos7-test ~]#

 

iscsi logout

[root@centos7-test ~]# iscsiadm --mode node --targetname iqn.2017-12.com.example:target2 --portal 192.168.186.100 -u
Logging out of session [sid: 3, target: iqn.2017-12.com.example:target2, portal: 192.168.186.100,3260]
Logout of [sid: 3, target: iqn.2017-12.com.example:target2, portal: 192.168.186.100,3260] successful.
[root@centos7-test ~]# iscsiadm --mode node --targetname iqn.2017-12.com.example:target2 --portal 192.168.186.100 -o delete
[root@centos7-test ~]#

 

Checking with fdisk

[root@centos7-test ~]# fdisk -l

Disk /dev/sda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000bfdf3

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     4196351     2097152   83  Linux
/dev/sda2         4196352    12584959     4194304   82  Linux swap / Solaris
/dev/sda3        12584960    83886079    35650560   83  Linux
[root@centos7-test ~]#

 

If a firewall is in use

[root@centos74 ~]# firewall-cmd --add-service=iscsi-target --permanent 
success
[root@centos74 ~]# firewall-cmd --reload
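
Whether the rule took effect can be confirmed afterwards. A sketch:

[root@centos74 ~]# firewall-cmd --list-services
# iscsi-target (port 3260/tcp) should now appear in the list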

 

Additional details will be written up later.

Linux Security Configuration Notes

This is a rough summary of commonly used Linux security settings.

If a host is exposed to an external network, be sure to change the sshd port and restrict root logins.

Just setting PermitRootLogin no and changing the port number in sshd_config noticeably reduces dictionary attacks against root and other accounts.

A host set up with root/root credentials on a public IP can be compromised within a day.

iptables will be covered later together with CentOS firewalld.

Test OS: CentOS 7.4

  • TCP Wrapper
  • login.defs 
  • pam_tally2
  • sshd_config changes

 

TCP Wrapper

With hosts.allow / hosts.deny, the usual pattern is to set ALL:ALL in hosts.deny and then open access per service in hosts.allow.

 

Example scenario: ssh from the centos7-test system to centos74.

Hosts: centos74, centos7-test

 

Verifying the connection

[root@centos74 ~]# tail -f /var/log/secure
Nov 30 22:30:21 centos74 sshd[2233]: Accepted password for root from 192.168.186.130 port 48408 ssh2
Nov 30 22:30:21 centos74 sshd[2233]: pam_unix(sshd:session): session opened for user root by (uid=0)

The connection succeeds normally.

 

Because the test host was open to the public internet, a very large number of ssh login attempts as root can be seen.

The first octet of the attacking IPs has been masked with x. This is exactly why PermitRootLogin no and a port change are needed in sshd_config.

Nov 30 22:30:38 centos74 sshd[2235]: Failed password for root from x.38.145.226 port 52215 ssh2
Nov 30 22:30:38 centos74 sshd[2235]: pam_succeed_if(sshd:auth): requirement "uid >= 1000" not met by user "root"
Nov 30 22:30:40 centos74 sshd[2235]: Failed password for root from x.38.145.226 port 52215 ssh2
Nov 30 22:30:40 centos74 sshd[2235]: pam_succeed_if(sshd:auth): requirement "uid >= 1000" not met by user "root"
Nov 30 22:30:42 centos74 sshd[2235]: Failed password for root from x.38.145.226 port 52215 ssh2
Nov 30 22:30:43 centos74 sshd[2235]: pam_succeed_if(sshd:auth): requirement "uid >= 1000" not met by user "root"
Nov 30 22:30:45 centos74 sshd[2235]: Failed password for root from x.38.145.226 port 52215 ssh2
Nov 30 22:30:45 centos74 sshd[2235]: error: maximum authentication attempts exceeded for root from x.38.145.226 port 52215 ssh2 [preauth]
Nov 30 22:30:45 centos74 sshd[2235]: Disconnecting: Too many authentication failures [preauth]
Nov 30 22:30:45 centos74 sshd[2235]: PAM 5 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=x.38.145.226  user=root
Nov 30 22:30:45 centos74 sshd[2235]: PAM service(sshd) ignoring max retries; 6 > 3

 

TCP Wrapper configuration

[root@centos74 ~]# vi /etc/hosts.deny
ALL:ALL

 

Checking /var/log/secure

[root@centos74 ~]# tail -f /var/log/secure
Nov 30 22:42:45 centos74 sshd[2537]: refused connect from 192.168.186.130 (192.168.186.130)

 

Verifying from centos7-test (ssh client)

[root@centos7-test ~]# ssh centos74
ssh_exchange_identification: read: Connection reset by peer
[root@centos7-test ~]#

The connection is now refused.

 

Allowing the 192.168.186 network for sshd

When allowing a network range, the entry must end with a dot after the last octet, e.g. 192.168.0.

To allow only a specific IP, use the full address, e.g. 192.168.0.10.

[root@centos74 ~]# vi /etc/hosts.allow
sshd:192.168.186.
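
To allow a single client instead of the whole range, use the full address — a variant of the same file (192.168.186.130 is the test client above):

[root@centos74 ~]# vi /etc/hosts.allow
sshd:192.168.186.130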


[root@centos74 ~]# tail -f /var/log/secure
Nov 30 22:45:59 centos74 sshd[2579]: Accepted password for root from 192.168.186.130 port 48416 ssh2
Nov 30 22:45:59 centos74 sshd[2579]: pam_unix(sshd:session): session opened for user root by (uid=0)

 

Even though hosts.deny blocks everything, the web site on this host is still reachable from the VM host with no extra configuration.

For Apache, Allow and Deny must instead be configured with mod_access.

 

The ldd command shows whether a binary is linked against the libwrap library.

Checking httpd

[root@centos74 ~]# which httpd
/usr/sbin/httpd
[root@centos74 ~]# ldd /usr/sbin/httpd |grep libwrap
[root@centos74 ~]#

 

Checking sshd

[root@centos74 ~]# ldd /usr/sbin/sshd
        linux-vdso.so.1 =>  (0x00007ffe8f1e9000)
        libfipscheck.so.1 => /lib64/libfipscheck.so.1 (0x00007ff0e08ee000)
        libwrap.so.0 => /lib64/libwrap.so.0 (0x00007ff0e06e3000)
        libaudit.so.1 => /lib64/libaudit.so.1 (0x00007ff0e04ba000)
        libpam.so.0 => /lib64/libpam.so.0 (0x00007ff0e02ab000)
        libselinux.so.1 => /lib64/libselinux.so.1 (0x00007ff0e0084000)
        libsystemd.so.0 => /lib64/libsystemd.so.0 (0x00007ff0e005b000)
        libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007ff0dfbfa000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007ff0df9f6000)
        libldap-2.4.so.2 => /lib64/libldap-2.4.so.2 (0x00007ff0df7a1000)
        liblber-2.4.so.2 => /lib64/liblber-2.4.so.2 (0x00007ff0df592000)
        libutil.so.1 => /lib64/libutil.so.1 (0x00007ff0df38f000)
        libz.so.1 => /lib64/libz.so.1 (0x00007ff0df178000)
        libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007ff0def41000)
        libresolv.so.2 => /lib64/libresolv.so.2 (0x00007ff0ded27000)
        libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007ff0dead9000)
        libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007ff0de7f1000)
        libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007ff0de5be000)
        libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007ff0de3b9000)
        libc.so.6 => /lib64/libc.so.6 (0x00007ff0ddff6000)
        libnsl.so.1 => /lib64/libnsl.so.1 (0x00007ff0ddddd000)
        libcap-ng.so.0 => /lib64/libcap-ng.so.0 (0x00007ff0ddbd6000)
        libpcre.so.1 => /lib64/libpcre.so.1 (0x00007ff0dd974000)
        /lib64/ld-linux-x86-64.so.2 (0x000055f3da440000)
        libcap.so.2 => /lib64/libcap.so.2 (0x00007ff0dd76f000)
        libm.so.6 => /lib64/libm.so.6 (0x00007ff0dd46c000)
        librt.so.1 => /lib64/librt.so.1 (0x00007ff0dd264000)
        liblzma.so.5 => /lib64/liblzma.so.5 (0x00007ff0dd03e000)
        libgcrypt.so.11 => /lib64/libgcrypt.so.11 (0x00007ff0dcdbc000)
        libgpg-error.so.0 => /lib64/libgpg-error.so.0 (0x00007ff0dcbb7000)
        libdw.so.1 => /lib64/libdw.so.1 (0x00007ff0dc970000)
        libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007ff0dc759000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007ff0dc53d000)
        libsasl2.so.3 => /lib64/libsasl2.so.3 (0x00007ff0dc320000)
        libssl3.so => /lib64/libssl3.so (0x00007ff0dc0d3000)
        libsmime3.so => /lib64/libsmime3.so (0x00007ff0dbeac000)
        libnss3.so => /lib64/libnss3.so (0x00007ff0dbb82000)
        libnssutil3.so => /lib64/libnssutil3.so (0x00007ff0db954000)
        libplds4.so => /lib64/libplds4.so (0x00007ff0db750000)
        libplc4.so => /lib64/libplc4.so (0x00007ff0db54b000)
        libnspr4.so => /lib64/libnspr4.so (0x00007ff0db30c000)
        libfreebl3.so => /lib64/libfreebl3.so (0x00007ff0db109000)
        libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007ff0daefa000)
        libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007ff0dacf6000)
        libattr.so.1 => /lib64/libattr.so.1 (0x00007ff0daaf0000)
        libelf.so.1 => /lib64/libelf.so.1 (0x00007ff0da8d8000)
        libbz2.so.1 => /lib64/libbz2.so.1 (0x00007ff0da6c7000)
[root@centos74 ~]#

 

More on this is available at the site below; it is well organized, so only the link is added here.

TCP Wrapper is generally understood to control access for services run from xinetd (and for daemons linked against libwrap).

TCP Wrapper: https://www.joinc.co.kr/w/man/12/tcpwrapper

On FreeBSD, it looks like Apache could be put behind TCP Wrapper via an inetd configuration.

The material is quite old; it would be interesting to test, but since it is unlikely to see real use, only the link is left here.

FreeBSD inetd-based Apache: http://freebsdhowtos.com/113.html

 

login.defs 

The password aging policy is set in /etc/login.defs.

PASS_MAX_DAYS   99999   maximum password age (days)

PASS_MIN_DAYS   0       minimum interval between password changes (days)

PASS_MIN_LEN    0       minimum password length

PASS_WARN_AGE   7       days of warning before a password expires

Editing login.defs does not affect existing users; it applies only to accounts created after the change.
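
Accounts that already exist can be brought in line with chage. A sketch — testuser is a placeholder name:

[root@centos74 ~]# chage -l testuser                 # show the current aging values
[root@centos74 ~]# chage -m 7 -M 90 -W 7 testuser    # min 7 days, max 90 days, warn 7 days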

 

/etc/shadow file format

[root@centos74 ~]# cat /etc/shadow
(fields:  1 : 2 : 3 : 4 : 5 : 6 : 7 : 8 : 9)
root:$6$ZayMBeKp$aTokocQJQg77pDbkUGqYuBC21ESGCkKafchr2OMMWzplyQnid4ECxNPkNFIXd8K0vkDiVJvQv0nJDpq4Hb3qh/:17497:0:99999:7:::
bin:*:17110:0:99999:7:::
daemon:*:17110:0:99999:7:::
adm:*:17110:0:99999:7:::
lp:*:17110:0:99999:7:::
test:$6$S9l9DJ9Q$SswkqlquRVyZOUZVETnrn1HJCjW3FQS9AvWSFe.ZUtSvfOJPnjgkc7XxHq4kdKqoe0StGEmJrqeZoZPYpw6Ig/:17500:0:99999:7:::
test1:$6$/wcpiL/o$vd.Gsw/aehbJ6WTgWSjohq0A9W3ks/5PA12SLA7MlVqdLxl0iJv8MkdmfThsb2s.Ux4mo1.QyleHgrfsNNmxt0:17500:0:99999:7:::
test2:$6$SWU0NjnK$8LA3TVRXCnveva/kETFn4vhaRL6tQooaGoaH9wT/mdD0CW6oVPA7f8z/vjGJL.p37HRjxkYRRmhpEgjQScMAr1:17500:0:99999:7:::
test3:!!:17500:0:99999:7:::
[root@centos74 ~]#

Field description

1. Login name: the user account

2. Encrypted: the hashed password

3. Last changed: days since January 1, 1970 that the password was last changed

4. Minimum: minimum days before the password may be changed again

5. Maximum: maximum days the password remains valid

6. Warn: days before expiry that the user is warned

7. Inactive: days after expiry before the account is disabled

8. Expire: date (in days since the epoch) when the account is disabled

9. Reserved: unused

 

The test user has the default values set in /etc/login.defs.

 

test:$6$S9l9DJ9Q$SswkqlquRVyZOUZVETnrn1HJCjW3FQS9AvWSFe.ZUtSvfOJPnjgkc7XxHq4kdKqoe0StGEmJrqeZoZPYpw6Ig/:17500:0:99999:7:::

PASS_MIN_DAYS: 0

PASS_MAX_DAYS: 99999

PASS_WARN_AGE: 7

 

Creating user test4 after editing /etc/login.defs

[root@centos74 ~]# vi /etc/login.defs
PASS_MAX_DAYS   90
PASS_MIN_DAYS   7
PASS_MIN_LEN    5

Maximum age changed to 90 days

Minimum age set to 7 days

Minimum password length 5

 

test4:$6$QVwi5qjn$2g4IuWJOdmgOkXxwZFFGFdxroOFhtcKa0v.7.5NmHFl0PCXbK5km3yb1.MIHR9/m4GxXqOcLvkqbk8qEg5tDG/:17500:7:90:7:::

PASS_MIN_DAYS: 7

PASS_MAX_DAYS: 90

PASS_WARN_AGE: 7

The minimum password length is enforced only when a user changes their own password (e.g., when test3 sets a new password for themselves); it is not enforced when root changes the password.

Be aware that with login.defs expiry settings, cron jobs stop running once an account's password expires.

A separate test note on this is written further below.

 

pam_tally2

pam_tally2 is a login counter (tallying) module.

Within the PAM stack it is used to lock accounts after repeated ssh login failures; password complexity rules (minimum length, upper/lowercase letters, digits, special characters) are handled by other modules in the same stack.

Details: http://www.linux-pam.org/Linux-PAM-html/sag-pam_tally2.html

As pam_tally was replaced by pam_tally2 and some of its settings moved into authconfig, authconfig will get a separate post.

This post covers only the pam_tally2 onerr=fail, deny=, and unlock_time= options.

 

Checking that PAM is installed

[root@centos74 ~]# rpm -aq |grep -i pam
pam-1.1.8-18.el7.x86_64

 

Editing /etc/pam.d/system-auth (console logins and su)

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        required      pam_tally2.so deny=5 unlock_time=1200
auth        required      pam_faildelay.so delay=2000000
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        required      pam_deny.so

account     required      pam_unix.so
account     required      pam_tally2.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 1000 quiet
account     required      pam_permit.so

password    requisite     pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     optional      pam_systemd.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so

 

Editing password-auth (remote and X11 logins)

[root@centos74 ~]# vi /etc/pam.d/password-auth
#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        required      pam_tally2.so deny=5 unlock_time=1200
auth        required      pam_faildelay.so delay=2000000
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        required      pam_deny.so

account     required      pam_unix.so
account     required      pam_tally2.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 1000 quiet
account     required      pam_permit.so

password    requisite     pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     optional      pam_systemd.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so

deny= sets the failure count, and unlock_time= sets how long (in seconds) the account stays locked.

The pam_cracklib settings from CentOS 5/6 appear to have been replaced by authconfig.

 

Watching the ssh login failures — once the count passes deny=5, the test4 user can no longer log in.

[root@centos74 ~]# pam_tally2
Login           Failures Latest failure     From
test4               1    12/01/17 00:07:11  192.168.186.1
[root@centos74 ~]# pam_tally2
Login           Failures Latest failure     From
test4               2    12/01/17 00:07:15  192.168.186.1
[root@centos74 ~]# pam_tally2
Login           Failures Latest failure     From
test4               3    12/01/17 00:07:19  192.168.186.1
[root@centos74 ~]# pam_tally2
Login           Failures Latest failure     From
test4               4    12/01/17 00:07:26  192.168.186.1
[root@centos74 ~]#

 

test4 User unlock

[root@centos74 ~]# pam_tally2 -r -u test4
Login           Failures Latest failure     From
test4               5    12/01/17 00:07:31  192.168.186.1
[root@centos74 ~]# pam_tally2

 

Error message when a regular user changes their password

[test4@centos74 ~]$ passwd
Changing password for user test4.
Changing password for test4.
(current) UNIX password:
You must wait longer to change your password
passwd: Authentication token manipulation error

The minimum-age field (PASS_MIN_DAYS) in /etc/shadow must be lowered before the password can be changed normally.

The root user is not subject to the password policy, but regular users are blocked by it.

 

Testing password remember in /etc/pam.d/password-auth

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        required      pam_tally2.so deny=5 unlock_time=1200
auth        required      pam_faildelay.so delay=2000000
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        required      pam_deny.so

account     required      pam_unix.so
account     required      pam_tally2.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 1000 quiet
account     required      pam_permit.so

password    requisite     pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok remember=2

 

Testing with remember=2

[test4@centos74 ~]$ passwd
Changing password for user test4.
Changing password for test4.
(current) UNIX password:
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[test4@centos74 ~]$

Log in over ssh with the new password,
then try to set the previous password again.

[test4@centos74 ~]$ passwd
Changing password for user test4.
Changing password for test4.
(current) UNIX password:
New password:
Retype new password:
Password has been already used. Choose another.
passwd: Authentication token manipulation error
[test4@centos74 ~]$

The old password cannot be set again.

 

Testing after commenting out the pam_tally2 lines

[root@centos74 ~]# cat /etc/pam.d/system-auth
#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
#auth       required      pam_tally2.so deny=5 unlock_time=1200
auth        required      pam_faildelay.so delay=2000000
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        required      pam_deny.so

account     required      pam_unix.so
#account     required      pam_tally2.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 1000 quiet
account     required      pam_permit.so

password    requisite     pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok remember=2
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     optional      pam_systemd.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so
[root@centos74 ~]#

Changing the password

[test4@centos74 ~]$ passwd
Changing password for user test4.
Changing password for test4.
(current) UNIX password:
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[test4@centos74 ~]$

The password can now be changed normally.

The remaining options are documented at http://www.linux-pam.org/Linux-PAM-html/sag-pam_tally2.html

 

sshd_config settings

 

 

[root@centos74 ~]# cat /etc/ssh/sshd_config
#       $OpenBSD: sshd_config,v 1.100 2016/08/15 12:32:04 naddy Exp $

# This is the sshd server system-wide configuration file.  See
# sshd_config(5) for more information.

# This sshd was compiled with PATH=/usr/local/bin:/usr/bin

# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented.  Uncommented options override the
# default value.

# If you want to change the port on a SELinux system, you have to tell
# SELinux about this change.
# semanage port -a -t ssh_port_t -p tcp #PORTNUMBER
#
#Port 22          / with the default setting, port 22 is used
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::

HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key

# Ciphers and keying
#RekeyLimit default none

# Logging
#SyslogFacility AUTH
SyslogFacility AUTHPRIV
#LogLevel INFO

# Authentication:

#LoginGraceTime 2m         / time after which the server disconnects if the user has not logged in; 0 means no limit
PermitRootLogin yes

/ whether root may log in; here it is yes, so root logins are allowed

#StrictModes yes

/ sshd checks file modes and ownership of the user's files and home directory before allowing login

#MaxAuthTries 6
#MaxSessions 10

#PubkeyAuthentication yes

# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2
# but this is overridden so installations will only check .ssh/authorized_keys
AuthorizedKeysFile      .ssh/authorized_keys

#AuthorizedPrincipalsFile none

#AuthorizedKeysCommand none
#AuthorizedKeysCommandUser nobody

# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
#HostbasedAuthentication no
# Change to yes if you don't trust ~/.ssh/known_hosts for
# HostbasedAuthentication
#IgnoreUserKnownHosts no
# Don't read the user's ~/.rhosts and ~/.shosts files
#IgnoreRhosts yes

/ whether to ignore .rhosts files; the default is yes, so .rhosts values are not used

# To disable tunneled clear text passwords, change to no here!
#PasswordAuthentication yes
#PermitEmptyPasswords no
PasswordAuthentication yes

# Change to no to disable s/key passwords
#ChallengeResponseAuthentication yes
ChallengeResponseAuthentication no

# Kerberos options
#KerberosAuthentication no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
#KerberosGetAFSToken no
#KerberosUseKuserok yes

# GSSAPI options
GSSAPIAuthentication yes
GSSAPICleanupCredentials no
#GSSAPIStrictAcceptorCheck yes
#GSSAPIKeyExchange no
#GSSAPIEnablek5users no

# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication.  Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
# WARNING: 'UsePAM no' is not supported in Red Hat Enterprise Linux and may cause several
# problems.
UsePAM yes

/ whether sshd uses PAM; here it is yes. With no, the PAM configuration is ignored

#AllowAgentForwarding yes
#AllowTcpForwarding yes
#GatewayPorts no
X11Forwarding yes

/ whether X11 forwarding is allowed; set to yes here

#X11DisplayOffset 10
#X11UseLocalhost yes
#PermitTTY yes
#PrintMotd yes
#PrintLastLog yes
#TCPKeepAlive yes                 / the server sends periodic keepalive messages to detect a dead client connection
#UseLogin no
#UsePrivilegeSeparation sandbox
#PermitUserEnvironment no
#Compression delayed
#ClientAliveInterval 0

/ client-alive interval setting

#ClientAliveCountMax 3

/ ClientAliveInterval * ClientAliveCountMax = how long an unresponsive session is kept alive (idle timeouts are usually set with TMOUT in .bash_profile, in seconds, e.g. TMOUT=600 for 10 minutes)

#ShowPatchLevel no
#UseDNS yes
#PidFile /var/run/sshd.pid
#MaxStartups 10:30:100
#PermitTunnel no
#ChrootDirectory none
#VersionAddendum none

# no default banner path
#Banner none

/ the default is none; e.g. set Banner /etc/issue.net and put the banner text in that file

 

# Accept locale-related environment variables
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
AcceptEnv XMODIFIERS

# override default of no subsystems
Subsystem       sftp    /usr/libexec/openssh/sftp-server

# Example of overriding settings on a per-user basis
#Match User anoncvs
#       X11Forwarding no
#       AllowTcpForwarding no
#       PermitTTY no
#       ForceCommand cvs server
[root@centos74 ~]#

PermitRootLogin no / forbid root logins

Changing Port 22 to a non-standard port, e.g. 4320, is recommended.

Additional sshd_config options are described at https://linux.die.net/man/5/sshd_config
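
Putting the recommendations together — a hedged sketch (port 4320 is only an example; the semanage step is required on SELinux-enabled hosts, as the comment inside sshd_config itself notes):

[root@centos74 ~]# vi /etc/ssh/sshd_config
Port 4320
PermitRootLogin no
[root@centos74 ~]# semanage port -a -t ssh_port_t -p tcp 4320
[root@centos74 ~]# firewall-cmd --add-port=4320/tcp --permanent && firewall-cmd --reload
[root@centos74 ~]# systemctl restart sshd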

 

 

 

 

In the previous post, two nodes were built with KVM.

controller node / compute node

On the controller node,

the Identity service (Keystone), Image service (Glance), Dashboard (Horizon), and Block Storage (Cinder) were set up,

and the remaining node, the compute node, runs Nova. The local-LVM part of Cinder will need further testing.

 

Using KVM's VNC console or X11 forwarding, open http://controller/dashboard.

Horizon login

Creating a project

Under Identity -> Projects there is a Create Project button.

Click that button to create a project.

Identity -> Projects

 

Creating the project

(After creation, any changes can be made with the Edit button.)

Project Information: the Name and Description can be changed.

Project Members: users from the list can be added as project members.

Project Groups: projects can be gathered into a project group.

Quotas: limits on virtual resources can be set.

 

Creating a user

Identity -> Users

Click Create User to create the user.

User creation screen

Set User Name and Password, and add devop1 as the Primary Project.

Sign out of the dashboard and log in as the test user.

When the login succeeds, a screen like the one below appears.

 

Creating a user from the CLI

Using the default domain created at install time, create the devop2 project with --description "devop2 test":

[root@controller ~]# . admin-openrc
[root@controller ~]# openstack project create --domain default --description "devop2 test" devop2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | devop2 test                      |
| domain_id   | default                          |
| enabled     | True                             |
| id          | ce7f352aca0a43bfacdb3ce4a9e0bc28 |
| is_domain   | False                            |
| name        | devop2                           |
| parent_id   | default                          |
+-------------+----------------------------------+
[root@controller ~]#

 

Creating the test2 user

[root@controller ~]# openstack user create --domain default --password-prompt test2
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | cd2b7e39737341a580d0498070fd552b |
| name                | test2                            |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@controller ~]#

 

Checking the roles that can be assigned

[root@controller ~]# openstack role list
+----------------------------------+----------+
| ID                               | Name     |
+----------------------------------+----------+
| 533ffbca67d344c999b9fe46e59f815d | user     |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
| ddc592d225f14fa8b643fd55b76e43db | admin    |
+----------------------------------+----------+
[root@controller ~]#

 

Adding the user role and checking the user list

[root@controller ~]# openstack role add --project devop2 --user test2 user
[root@controller ~]# openstack user list
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| 086236627c6840d6ae7476e6241ab30d | placement |
| 0b1296a18d5f4eeaabe2bea439ff38d3 | neutron   |
| 64384bdac78b4188a0eebd245015cb72 | cinder    |
| 6733baa0f87648ff8d7c9faa8fbd4c1b | nova      |
| b5c44ea3ba234938b7701192a53a2494 | admin     |
| cd2b7e39737341a580d0498070fd552b | test2     |
| d1f577fea8ca42b183d99b586f2bf023 | glance    |
| d726706bc404497697c36518a144bea4 | test      |
| ea81fbf3c60c46b38767fc318b90eb18 | demo      |
+----------------------------------+-----------+
[root@controller ~]#

 

Installing OpenStack Pike

OpenStack is still under test here, and correct operation is not guaranteed.

Note that this post may be revised.

Document history

v0.0 2017-11-23 initial draft

v0.1 2017-11-30 OpenStack Cinder installation completed

v0.2 2017-12-01 fixed the linuxbridge_agent eth device; enabled SELinux and added iptables rules

v0.3 2017-12-02 switched from iptables to firewalld when running nova

v0.4 2017-12-03 retested after changing the KVM network

 

For the Korean install guide, see Ocata:

https://docs.openstack.org/ocata/ko_KR/install-guide-rdo/common/conventions.html#

For the English install guide, see Pike:

https://docs.openstack.org/install-guide/common/conventions.html#notices

 

The network and KVM layouts will be written up separately.

Some parts differ from the official install guide.

The network design is still being tested, and the network sections of this document may change accordingly.

 

Work proceeds after building two networks with KVM.

controller node / compute node

 

Network layout

eth0 is the management network (the 40.0.0.0/24 range) and eth1 is the 192.168.122.0/24 network, reachable externally through KVM's NAT.

The provider network interface, eth1, is given a temporary IP for use during the OpenStack installation.

This layout is still being tested.

 

controller = eth0 + eth1

management network eth0 = 40.0.0.101/24 (attached to the management network)

provider network eth1 = 192.168.122.0/24

(using the default NAT)

compute = eth0 + eth1

management network eth0 = 40.0.0.102/24 (attached to the management network), temporary gateway 192.168.122.1

provider network eth1 = 192.168.122.0/24 (using the default NAT)

 

iptables

[root@controller ~]# yum install -y iptables-services
[root@controller ~]# systemctl enable iptables
[root@controller ~]# systemctl enable ip6tables

 

 

 

Controller node

 

Editing /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
40.0.0.101      controller

 

Editing /etc/hostname

[root@controller ~]# vi /etc/hostname
controller

 

Installing chrony

[root@controller ~]# yum install -y chrony

 

Editing /etc/chrony.conf

[root@controller ~]# vi /etc/chrony.conf

allow 40.0.0.0/24

[root@controller ~]# systemctl enable chronyd
[root@controller ~]# systemctl start chronyd
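
Synchronization can then be verified. A sketch:

[root@controller ~]# chronyc sources
# the selected time source is marked with ^* once the clock is synchronized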

 

Work on the controller server

Enabling the repository

[root@controller ~]# yum install  -y https://repos.fedorapeople.org/repos/openstack/openstack-pike/rdo-release-pike-1.noarch.rpm

 

Installing the OpenStack Pike repository and utilities

[root@controller ~]# yum install -y centos-release-openstack-pike
[root@controller ~]# yum install -y openstack-utils

openstack-utils is installed to make the openstack-config command available.

 

Package upgrade and reboot

[root@controller ~]# yum update -y
[root@controller ~]# init 6

 

Installing python-openstackclient

[root@controller ~]# yum install -y python-openstackclient

 

Installing openstack-selinux

[root@controller ~]# yum install -y openstack-selinux
[root@controller ~]# init 6

 

openstack site: https://docs.openstack.org/install-guide/environment-sql-database-rdo.html

Installing MariaDB

[root@controller ~]# yum install -y mariadb mariadb-server python2-PyMySQL

 

Creating openstack.cnf

[root@controller ~]# vi /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 40.0.0.101

default-storage-engine = innodb
innodb_file_per_table = on
collation-server = utf8_general_ci
character-set-server = utf8
max_connections=32000
max_connect_errors=1000

 

Enable and start mariadb

[root@controller ~]# systemctl enable mariadb.service
[root@controller ~]# systemctl start mariadb.service

 

Configure mariadb

[root@controller ~]# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
... Success!

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
... Success!

Cleaning up...

All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
[root@controller ~]#

 

Install rabbitmq-server

[root@controller ~]# yum install -y rabbitmq-server

 

Enable and start the rabbitmq-server service

[root@controller ~]# systemctl enable rabbitmq-server.service
[root@controller ~]# systemctl start rabbitmq-server.service

 

Add a rabbitmq account and set its permissions

[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
[root@controller ~]# 
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
[root@controller ~]#
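Optionally, the account and its permissions can be verified with rabbitmqctl:

[root@controller ~]# rabbitmqctl list_users
[root@controller ~]# rabbitmqctl list_permissions -p /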

 

Install memcached

[root@controller ~]# yum install -y memcached python-memcached

 

Edit the /etc/sysconfig/memcached file

[root@controller ~]# vi /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,controller"

 

Enable and start memcached

[root@controller ~]# systemctl enable memcached.service
[root@controller ~]# systemctl start memcached.service
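A quick way to confirm memcached is listening on the addresses configured above (optional check):

[root@controller ~]# ss -tlnp | grep 11211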

 

Create the keystone DB

[root@controller ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 10
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> quit;
Bye
[root@controller ~]#

 

Identity Service Keystone

 

openstack site: https://docs.openstack.org/keystone/pike/install/

Install keystone

[root@controller ~]# yum install -y openstack-keystone httpd mod_wsgi


Edit /etc/keystone/keystone.conf

[root@controller ~]# vi /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone



[token]
provider = fernet

 

The same /etc/keystone/keystone.conf settings via openstack-config // under test

openstack-config --set /etc/keystone/keystone.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
openstack-config --set /etc/keystone/keystone.conf cache backend oslo_cache.memcache_pool
openstack-config --set /etc/keystone/keystone.conf cache enabled true
openstack-config --set /etc/keystone/keystone.conf cache memcache_servers controller:11211
openstack-config --set /etc/keystone/keystone.conf memcache servers controller:11211
openstack-config --set /etc/keystone/keystone.conf token expiration 3600
openstack-config --set /etc/keystone/keystone.conf token provider fernet

 

 

 

Populate the Identity service database

[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
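Optionally, confirm that the tables were created; this check assumes the KEYSTONE_DBPASS granted above:

[root@controller ~]# mysql -ukeystone -pKEYSTONE_DBPASS keystone -e "SHOW TABLES;" | head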

 

Initialize the Fernet key repositories

[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
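If the setup succeeded, the key repositories should now contain keys (typically 0 and 1); this is just an optional sanity check:

[root@controller ~]# ls /etc/keystone/fernet-keys/ /etc/keystone/credential-keys/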

 

Bootstrap the Identity service

# keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

 

Configure the Apache server

 

Edit httpd.conf

[root@controller ~]# vi /etc/httpd/conf/httpd.conf
ServerName controller

 

Create a link to the wsgi-keystone.conf file

[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

 

Enable and start the httpd service

[root@controller ~]# systemctl enable httpd.service
[root@controller ~]# systemctl start httpd.service

 

Set temporary environment variables

[root@controller ~]# export OS_USERNAME=admin
[root@controller ~]# export OS_PASSWORD=ADMIN_PASS
[root@controller ~]# export OS_PROJECT_NAME=admin
[root@controller ~]# export OS_USER_DOMAIN_NAME=Default
[root@controller ~]# export OS_PROJECT_DOMAIN_NAME=Default
[root@controller ~]# export OS_AUTH_URL=http://controller:35357/v3
[root@controller ~]# export OS_IDENTITY_API_VERSION=3

 

Create the service project

[root@controller ~]# openstack project create --domain default \
 --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | e02fcf8fd5204f5293eadf3a65d14f76 |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | default                          |
+-------------+----------------------------------+
[root@controller ~]#

 

Create the demo project

[root@controller ~]# openstack project create --domain default \
 --description "Demo Project" demo
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Demo Project                     |
| domain_id   | default                          |
| enabled     | True                             |
| id          | e267fd20523944178b7e7c443b3b0190 |
| is_domain   | False                            |
| name        | demo                             |
| parent_id   | default                          |
+-------------+----------------------------------+
[root@controller ~]#

 

Create the demo user

Enter DEMO_PASS

[root@controller ~]# openstack user create --domain default \
 --password-prompt demo
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | fd8fdd89e95c47e4bbf808622e8712af |
| name                | demo                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@controller ~]#

 

Create the user role

[root@controller ~]# openstack role create user
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | None                             |
| id        | aaf7906d0fc844959527cfcde3e1dab4 |
| name      | user                             |
+-----------+----------------------------------+
[root@controller ~]#

 

Add the user role to the demo user in the demo project

[root@controller ~]# openstack role add --project demo --user demo user
[root@controller ~]#

 

Unset the environment variables to test user authentication.

[root@controller ~]# unset OS_AUTH_URL OS_PASSWORD

 

 

Enter ADMIN_PASS

[root@controller ~]# openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
Password:
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2017-11-28T15:30:29+0000                                                                                                                                                                |
| id         | gAAAAABaHXMFaPfvhX4e6R7Jb3jThwrnctwrEZQKV6Xfv5X-FcQXQuWU-sTQbOrYpPvJuye1LM1yNuUQXZXgK8zL0yPW6xjcPaFGU5wjNp7HiVIXcJQyi5dTd9kFwMNpB94Zoqe4uHLyFR-VGzcKFueUTuPhEOrRRtITmE2vzlfog23ZrQZM6vk |
| project_id | 95d905a1b12544b48ba6e4b43a85ef0b                                                                                                                                                        |
| user_id    | b9a2717e56244411a9cf7b1b52073602                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@controller ~]#

 

Enter DEMO_PASS

[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name demo --os-username demo token issue
Password:
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2017-11-28T15:31:39+0000                                                                                                                                                                |
| id         | gAAAAABaHXNLqqz_zEJ2Sjs8dV4AwZo9UvX5Gj2ImioeA6dFeFo0Vg2b2DSu_B3sgEJRTU49sF3e2WaweASbYSg_wFlGfdPA6N7HG2KZclpRrL3x94RFqpdpRfkq61N0qLm1JImL4sXCbNRLmHMZSfzxEIAFkxgYs1hgqFv2mzvS3a-dz0XHq4E |
| project_id | 25325c467e544a599764f51260b05d1a                                                                                                                                                        |
| user_id    | 6478fac787444bb98385714d76a21a4a                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@controller ~]#

 

Create client environment scripts for the admin and demo users

Create the admin-openrc file

[root@controller ~]# vi admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

 

Create the demo-openrc file

[root@controller ~]# vi demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

 

Reconnect the session and test

root@controller's password:
Last login: Tue Nov 28 23:05:01 2017 from gateway
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2017-11-28T15:34:34+0000                                                                                                                                                                |
| id         | gAAAAABaHXP6JvekS_E1r7rZWI4d89Pb-rwfVOiZ-5Aqm8dRGzFuCfaDGK-HoshnbLlSUmrEU-Rr4zi9a9WXbpQSbbXnj1W3VwuJDz2tMluB9X2y5R7eUvxHkQHC_TBgtzCbxnwjG30fndFdbWqOvtIw2ZQUjzJOA3i9URQoHAkldEzsp9D0Vwo |
| project_id | 95d905a1b12544b48ba6e4b43a85ef0b                                                                                                                                                        |
| user_id    | b9a2717e56244411a9cf7b1b52073602                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@controller ~]#

 

Image Service glance

 

openstack site : https://docs.openstack.org/glance/pike/install/

 

Create the glance DB

[root@controller ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 19
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>  CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> quit;
Bye
[root@controller ~]#

 

Create the OpenStack glance user

Enter GLANCE_PASS

[root@controller ~]# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 7490f9bcc3894b5d8344f586b8f861b6 |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@controller ~]#

 

Add the glance user to the service project with the admin role

[root@controller ~]# openstack role add --project service --user glance admin

 

Create the glance service entity.

[root@controller ~]# openstack service create --name glance \
   --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | 540df478eb89412d86534ef55831423a |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
[root@controller ~]#

 

Create the Image service API endpoint (public) #3-1

[root@controller ~]# openstack endpoint create --region RegionOne \
   image public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | a394bb427fb7408c970181dc46f1a6c0 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 540df478eb89412d86534ef55831423a |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]#

 

Create the Image service API endpoint (internal) #3-2

[root@controller ~]# openstack endpoint create --region RegionOne \
   image internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | e6713b032e4a480b81ce16593f274d02 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 540df478eb89412d86534ef55831423a |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]#

 

Create the Image service API endpoint (admin) #3-3

[root@controller ~]# openstack endpoint create --region RegionOne \
   image admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | dc473a0788e04f6baad4310177f4bad8 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 540df478eb89412d86534ef55831423a |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]#

 

Install the openstack-glance package

[root@controller ~]# yum install -y openstack-glance

 

Edit the glance-api.conf file (database, keystone_authtoken, paste_deploy, glance_store)

[root@controller ~]# vi /etc/glance/glance-api.conf


[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance


[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS


[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

 

The same settings via openstack-config // under test

openstack-config --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance 
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000 
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357 
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller:11211 
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password 
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default 
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default 
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance 
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone 
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http 
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file 
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/

 

 

Edit the glance-registry.conf file (database, keystone_authtoken, paste_deploy)

[root@controller ~]# vi /etc/glance/glance-registry.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance


[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS


[paste_deploy]
flavor = keystone

 

The same settings via openstack-config // under test

openstack-config --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller

openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance 

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000 

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357 

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_serverscontroller:11211 

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password 

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default 

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default 

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service 

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance 

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password GLANCE_PASS

openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

 

 

 

Populate the Image service database

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1328: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade
  expire_on_commit=expire_on_commit, _conf=conf)
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> liberty, liberty initial
INFO  [alembic.runtime.migration] Running upgrade liberty -> mitaka01, add index on created_at and updated_at columns of 'images' table
INFO  [alembic.runtime.migration] Running upgrade mitaka01 -> mitaka02, update metadef os_nova_server
INFO  [alembic.runtime.migration] Running upgrade mitaka02 -> ocata01, add visibility to and remove is_public from images
INFO  [alembic.runtime.migration] Running upgrade ocata01 -> pike01, drop glare artifacts tables
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: pike01, current revision(s): pike01
[root@controller ~]#

The deprecation message can safely be ignored.

 

Enable and start the openstack-glance services

[root@controller ~]# systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service
[root@controller ~]# systemctl start openstack-glance-api.service \
  openstack-glance-registry.service
[root@controller ~]#
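As an optional check that glance-api is up, requesting the API root should return a small JSON document listing the API versions:

[root@controller ~]# curl http://controller:9292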

 

Register a glance image

[root@controller ~]# yum install -y wget
[root@controller ~]# wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
[root@controller ~]# openstack image create "cirros" \
  --file cirros-0.3.5-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | f8ab98ff5e73ebab884d80c9dc9c7290                     |
| container_format | bare                                                 |
| created_at       | 2017-11-28T14:56:00Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/3e4448dd-5399-4fff-934e-bde198f6d9fa/file |
| id               | 3e4448dd-5399-4fff-934e-bde198f6d9fa                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros                                               |
| owner            | 95d905a1b12544b48ba6e4b43a85ef0b                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13267968                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2017-11-28T14:56:00Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
[root@controller ~]#

 

Verify the image

[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 3e4448dd-5399-4fff-934e-bde198f6d9fa | cirros | active |
+--------------------------------------+--------+--------+
[root@controller ~]#

 

3. Compute Service

openstack site : https://docs.openstack.org/nova/pike/install/

If you plan to disable firewalld, disable it starting from this point.

Create the nova DBs

[root@controller ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 23
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE nova_api;

MariaDB [(none)]> CREATE DATABASE nova;

MariaDB [(none)]> CREATE DATABASE nova_cell0;



MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

MariaDB [(none)]> flush privileges;

MariaDB [(none)]> quit;
Bye
[root@controller ~]#

 

Create the nova user

Enter NOVA_PASS

[root@controller ~]# . admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | afcfca3df8034c25b0fd8f834bd08259 |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@controller ~]#

 

Add the admin role to the nova user

[root@controller ~]# openstack role add --project service --user nova admin

 

Create the nova service entity

[root@controller ~]# openstack service create --name nova \
   --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | f281d4d6ea18441ca117a57f5b9dcb5b |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
[root@controller ~]#

 

Create the Compute API service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne \
   compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 75521f33cb1744f4b573a3e6b90e1acc |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f281d4d6ea18441ca117a57f5b9dcb5b |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
   compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 48313bf38bf74559bc5dcc346f2a1309 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 78883460454a4fc6940ef56d6acf57e7 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@controller ~]#
[root@controller ~]# openstack endpoint create --region RegionOne \
   compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 20c9795a55a3457c9e817c96b5518df3 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f281d4d6ea18441ca117a57f5b9dcb5b |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@controller ~]#

 

Enter PLACEMENT_PASS / create the placement service user

[root@controller ~]# openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | df2929587a264f0ebc914ac010f39538 |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@controller ~]#

 

Add the placement user to the service project with the admin role

[root@controller ~]# openstack role add --project service --user placement admin

 

Create the Placement API entry in the service catalog

[root@controller ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Placement API                    |
| enabled     | True                             |
| id          | e9eae3013f2e4f7cb9a97676c6dff706 |
| name        | placement                        |
| type        | placement                        |
+-------------+----------------------------------+
[root@controller ~]#

 

Create the Placement API service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | a0a6d335c64b45be982917837fc22bc9 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e9eae3013f2e4f7cb9a97676c6dff706 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 628ce318919a49c491422f02e22a361d |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e9eae3013f2e4f7cb9a97676c6dff706 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 781854c9caa34d2495104d9640353f49 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e9eae3013f2e4f7cb9a97676c6dff706 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@controller ~]#

 

Install the nova packages

[root@controller ~]# yum install -y openstack-nova-api openstack-nova-conductor \
   openstack-nova-console openstack-nova-novncproxy \
   openstack-nova-scheduler openstack-nova-placement-api

 

Edit nova.conf

[root@controller ~]# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 40.0.0.101
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver


[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api


[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova


[api]
auth_strategy = keystone


[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS


[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip


[glance]
api_servers = http://controller:9292


[oslo_concurrency]
lock_path = /var/lib/nova/tmp


[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS




 

Edit the 00-nova-placement-api file (if it is not edited, the symptom below occurs)

Due to a packaging bug, you must enable access to the Placement API by adding the following configuration to /etc/httpd/conf.d/00-nova-placement-api.conf:

[root@controller ~]# vi /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

Restart httpd

[root@controller ~]# systemctl restart httpd
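As an optional check, requesting the Placement API root should now return a small JSON document listing the API versions; if the response is still a permission error, the Directory fix above has not taken effect:

[root@controller ~]# curl http://controller:8778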

 

The following is what you will see when 00-nova-placement-api has not been configured.

Denied messages appear in nova-placement-api.log:

[root@controller ~]# tail -f /var/log/nova/nova-placement-api.log
AH01630: client denied by server configuration: /usr/bin/nova-placement-api
AH01630: client denied by server configuration: /usr/bin/nova-placement-api
AH01630: client denied by server configuration: /usr/bin/nova-placement-api
AH01630: client denied by server configuration: /usr/bin/nova-placement-api
AH01630: client denied by server configuration: /usr/bin/nova-placement-api
AH01630: client denied by server configuration: /usr/bin/nova-placement-api
^C

 

Before the compute node is rebooted, checking from the controller shows a Result: Warning message

[root@controller ~]# nova-status upgrade check
+-------------------------------------------------------------------+
| Upgrade Check Results                                             |
+-------------------------------------------------------------------+
| Check: Cells v2                                                   |
| Result: Success                                                   |
| Details: None                                                     |
+-------------------------------------------------------------------+
| Check: Placement API                                              |
| Result: Success                                                   |
| Details: None                                                     |
+-------------------------------------------------------------------+
| Check: Resource Providers                                         |
| Result: Warning                                                   |
| Details: There are no compute resource providers in the Placement |
|   service but there are 1 compute nodes in the deployment.        |
|   This means no compute nodes are reporting into the              |
|   Placement service and need to be upgraded and/or fixed.         |
|   See                                                             |
|   http://docs.openstack.org/developer/nova/placement.html         |
|   for more details.                                               |
+-------------------------------------------------------------------+
[root@controller ~]#

(This can be resolved by disabling firewalld, but...)

 

After disabling firewalld and rebooting the system, run the nova-status / nova-manage commands again

[root@controller ~]# nova-status upgrade check
+------------------------------------------------------------------+
| Upgrade Check Results                                            |
+------------------------------------------------------------------+
| Check: Cells v2                                                  |
| Result: Failure                                                  |
| Details: No host mappings found but there are compute nodes. Run |
|   command 'nova-manage cell_v2 simple_cell_setup' and then       |
|   retry.                                                         |
+------------------------------------------------------------------+
| Check: Placement API                                             |
| Result: Success                                                  |
| Details: None                                                    |
+------------------------------------------------------------------+
| Check: Resource Providers                                        |
| Result: Success                                                  |
| Details: None                                                    |
+------------------------------------------------------------------+
[root@controller ~]# nova-manage cell_v2 simple_cell_setup
Cell0 is already setup
[root@controller ~]# nova-status upgrade check
+---------------------------+
| Upgrade Check Results     |
+---------------------------+
| Check: Cells v2           |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Placement API      |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Resource Providers |
| Result: Success           |
| Details: None             |
+---------------------------+
[root@controller ~]#

 

 

If you see the warning message, see: https://docs.openstack.org/releasenotes/keystonemiddleware/ocata.html

service_token_roles_required was found set to false.
Would setting it to True fix this? // The same problem occurred during testing.

# openstack-config --set /etc/nova/nova.conf keystone_authtoken service_token_roles_required True

The openstack-config command requires the openstack-utils package.

Needs retesting later: disable firewalld and set up iptables first, then install nova!!

 

Checking after rebooting the compute node

[root@controller ~]# nova-status upgrade check
+---------------------------+
| Upgrade Check Results     |
+---------------------------+
| Check: Cells v2           |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Placement API      |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Resource Providers |
| Result: Success           |
| Details: None             |
+---------------------------+
[root@controller ~]#

The output above is only for verification; continue the actual setup from the steps below.

 

Populate the nova-api database

[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova

 

Register the cell0 database

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

 

Create the cell1 cell

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

 

Populate the nova database

[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')
  result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.')
  result = self._query(query)
[root@controller ~]#

(The cause of these duplicate-index warning messages could not be determined.)

 

Verify that the nova cell0 and cell1 cells were registered correctly

[root@controller ~]# nova-manage cell_v2 list_cells
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
|  Name |                 UUID                 |           Transport URL            |               Database Connection               |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 |               none:/               | mysql+pymysql://nova:****@controller/nova_cell0 |
| cell1 | fbfd4a7d-3bcb-478a-b13c-6557c7c9acfe | rabbit://openstack:****@controller |    mysql+pymysql://nova:****@controller/nova    |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
[root@controller ~]#

 

firewall-cmd commands

[root@controller ~]# firewall-cmd --permanent --add-port=5672/tcp
success
[root@controller ~]# firewall-cmd --reload
success
[root@controller ~]#
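Port 5672 (RabbitMQ) is only one of the ports used in this guide. If you keep firewalld enabled on the controller, the other service ports that appear above also need to be opened; a sketch based on the endpoints used so far (adjust to your deployment):

[root@controller ~]# for port in 80 3306 5000 35357 8774 8778 9292 11211 6080 9696; do \
    firewall-cmd --permanent --add-port=${port}/tcp; done
[root@controller ~]# firewall-cmd --reload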

 

Enable and start the openstack-nova-api and related services

[root@controller ~]# systemctl enable openstack-nova-api.service \
   openstack-nova-consoleauth.service openstack-nova-scheduler.service \
   openstack-nova-conductor.service openstack-nova-novncproxy.service

[root@controller ~]# systemctl start openstack-nova-api.service \
   openstack-nova-consoleauth.service openstack-nova-scheduler.service \
   openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@controller ~]#

 

 

 

 

 

Work on the compute node

If you are running with firewalld disabled, proceed with the steps from here.

Enable the repository and update

[root@compute ~]# yum install  -y https://repos.fedorapeople.org/repos/openstack/openstack-pike/rdo-release-pike-1.noarch.rpm
[root@compute ~]# yum update -y
[root@compute ~]# init 6

 

Install the openstack-selinux package

[root@compute ~]# yum install -y openstack-selinux
[root@compute ~]# init 6

 

Install openstack-nova-compute

[root@compute ~]# yum install -y openstack-nova-compute

 

Edit nova.conf

[root@compute ~]# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 40.0.0.102
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver


[api]
auth_strategy = keystone


[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS


[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html


[glance]
api_servers = http://controller:9292


[oslo_concurrency]
lock_path = /var/lib/nova/tmp


[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS

[libvirt]
virt_type = qemu

 

You can determine the right virt_type as follows:

[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
0

If the result is 0, set qemu; otherwise, set kvm.
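The same rule can be applied automatically; this is a sketch that assumes openstack-utils (for openstack-config) is installed on the compute node:

[root@compute ~]# if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
>   openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
> else
>   openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
> fi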

 

When using qemu:

[root@compute ~]# yum install libguestfs-tools

 

 

If you add iptables rules the old way, configure the following; in a firewalld environment these rules are removed when the system reboots.

Add the iptables rule, then save and restart iptables (this must be done on both the controller node and the compute node).

[root@compute ~]# iptables -A IN_public_allow -p tcp -m tcp --dport 5672 -m conntrack --ctstate NEW -j ACCEPT
[root@controller ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
[root@controller ~]#
[root@controller ~]# systemctl restart iptables

 

Or use the firewall-cmd command:

[root@controller ~]# firewall-cmd --permanent --add-port=5672/tcp
success
[root@controller ~]# firewall-cmd --reload
success
[root@controller ~]#

 

Packages to install when using iptables rules

controller node

[root@controller ~]# yum install -y iptables-services

 

Why does iptables not work after a system reboot???

firewalld needs to be disabled and replaced with iptables (iptables should be set up first, then nova tested)

[root@compute ~]# systemctl mask firewalld
[root@compute ~]# systemctl enable iptables
[root@compute ~]# systemctl enable ip6tables

 

Enable and start openstack-nova-compute

[root@compute ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@compute ~]# systemctl start libvirtd.service openstack-nova-compute.service

 

When nova-compute does not start properly:

[root@compute ~]# tail -f /var/log/nova/nova-compute.log
2017-11-29 01:41:31.993 1840 ERROR oslo.messaging._drivers.impl_rabbit [req-bc6c79e0-5d57-454c-b8bb-c1fd04706b18 - - - - -] [bf6d14bf-42f7-4112-b9df-467e4f511594] AMQP server on controller:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 32 seconds. Client port: None: error: [Errno 113] EHOSTUNREACH
2017-11-29 01:42:04.042 1840 ERROR oslo.messaging._drivers.impl_rabbit [req-bc6c79e0-5d57-454c-b8bb-c1fd04706b18 - - - - -] [bf6d14bf-42f7-4112-b9df-467e4f511594] AMQP server on controller:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 32 seconds. Client port: None: error: [Errno 113] EHOSTUNREACH
2017-11-29 01:42:36.089 1840 ERROR oslo.messaging._drivers.impl_rabbit [req-bc6c79e0-5d57-454c-b8bb-c1fd04706b18 - - - - -] [bf6d14bf-42f7-4112-b9df-467e4f511594] AMQP server on controller:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 32 seconds. Client port: None: error: [Errno 113] EHOSTUNREACH
2017-11-29 01:43:08.141 1840 INFO oslo.messaging._drivers.impl_rabbit [req-bc6c79e0-5d57-454c-b8bb-c1fd04706b18 - - - - -] [bf6d14bf-42f7-4112-b9df-467e4f511594] Reconnected to AMQP server on controller:5672 via [amqp] client with port 53352.

With firewalld simply disabled it did not work properly; the openstack-nova-compute daemon only started normally after the iptables rule was added!

 

It seems to be affected by firewalld only on the first run; after a reboot, openstack-nova-compute works normally even with firewalld disabled.

[root@compute ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
[root@compute ~]# systemctl status openstack-nova-compute.service
● openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-12-02 07:47:01 KST; 21s ago
 Main PID: 1066 (nova-compute)
   CGroup: /system.slice/openstack-nova-compute.service
           └─1066 /usr/bin/python2 /usr/bin/nova-compute

Dec 02 07:46:56 compute systemd[1]: Starting OpenStack Nova Compute Server...
Dec 02 07:47:01 compute systemd[1]: Started OpenStack Nova Compute Server.
[root@compute ~]#

 

Test after rebooting the controller (with firewalld enabled on the controller and the rule missing, it does not work normally; after disabling firewalld on the controller and rebooting, it works normally)

[root@compute ~]# systemctl status openstack-nova-compute.service
● openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-12-02 07:56:26 KST; 3s ago
 Main PID: 1461 (nova-compute)
   CGroup: /system.slice/openstack-nova-compute.service
           └─1461 /usr/bin/python2 /usr/bin/nova-compute

Dec 02 07:56:22 compute systemd[1]: Starting OpenStack Nova Compute Server...
Dec 02 07:56:26 compute systemd[1]: Started OpenStack Nova Compute Server.
[root@compute ~]#

 

 

Verify on the controller node

[root@controller ~]# . admin-openrc
[root@controller ~]# openstack compute service list --service nova-compute
+----+--------------+---------+------+---------+-------+----------------------------+
| ID | Binary       | Host    | Zone | Status  | State | Updated At                 |
+----+--------------+---------+------+---------+-------+----------------------------+
|  6 | nova-compute | compute | nova | enabled | up    | 2017-11-28T16:47:09.000000 |
+----+--------------+---------+------+---------+-------+----------------------------+
[root@controller ~]#

 

Discover compute hosts

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': fbfd4a7d-3bcb-478a-b13c-6557c7c9acfe
Found 1 unmapped computes in cell: fbfd4a7d-3bcb-478a-b13c-6557c7c9acfe
Checking host mapping for compute host 'compute': f6f67c1c-f7c0-4d52-b521-30ee36610c60
Creating host mapping for compute host 'compute': f6f67c1c-f7c0-4d52-b521-30ee36610c60
[root@controller ~]#

 

Set the discovery interval

[root@controller ~]# vi /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300
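The scheduler presumably has to re-read its configuration before the new interval takes effect, so restart it afterwards:

[root@controller ~]# systemctl restart openstack-nova-scheduler.service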

 

Verify the service components

[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor   | controller | internal | enabled | up    | 2017-11-28T16:52:33.000000 |
|  2 | nova-consoleauth | controller | internal | enabled | up    | 2017-11-28T16:52:34.000000 |
|  3 | nova-scheduler   | controller | internal | enabled | up    | 2017-11-28T16:52:33.000000 |
|  6 | nova-compute     | compute    | nova     | enabled | up    | 2017-11-28T16:52:29.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
[root@controller ~]#

 

Check the endpoint list

[root@controller ~]# openstack catalog list
+-----------+-----------+----------------------------------------+
| Name      | Type      | Endpoints                              |
+-----------+-----------+----------------------------------------+
| glance    | image     | RegionOne                              |
|           |           |   public: http://controller:9292       |
|           |           | RegionOne                              |
|           |           |   admin: http://controller:9292        |
|           |           | RegionOne                              |
|           |           |   internal: http://controller:9292     |
|           |           |                                        |
| keystone  | identity  | RegionOne                              |
|           |           |   internal: http://controller:5000/v3/ |
|           |           | RegionOne                              |
|           |           |   public: http://controller:5000/v3/   |
|           |           | RegionOne                              |
|           |           |   admin: http://controller:35357/v3/   |
|           |           |                                        |
| placement | placement | RegionOne                              |
|           |           |   internal: http://controller:8778     |
|           |           | RegionOne                              |
|           |           |   admin: http://controller:8778        |
|           |           | RegionOne                              |
|           |           |   public: http://controller:8778       |
|           |           |                                        |
| nova      | compute   | RegionOne                              |
|           |           |   admin: http://controller:8774/v2.1   |
|           |           | RegionOne                              |
|           |           |   public: http://controller:8774/v2.1  |
|           |           | RegionOne                              |
|           |           |   public: http://controller:8774/v2.1  |
|           |           |                                        |
+-----------+-----------+----------------------------------------+
[root@controller ~]#

 

Verify the Image service images

[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 3e4448dd-5399-4fff-934e-bde198f6d9fa | cirros | active |
+--------------------------------------+--------+--------+
[root@controller ~]#

 

Verify that cells and the placement API are working

[root@controller ~]# nova-status upgrade check
+---------------------------+
| Upgrade Check Results     |
+---------------------------+
| Check: Cells v2           |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Placement API      |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Resource Providers |
| Result: Success           |
| Details: None             |
+---------------------------+
[root@controller ~]#

 

Before testing, back up the controller node / compute node VMs with virt-clone

Networking Service neutron

https://docs.openstack.org/ocata/ko_KR/install-guide-rdo/neutron.html

https://docs.openstack.org/neutron/pike/install/

 

Work on the controller system

 

Create the neutron database

[root@controller ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 18
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost'
       IDENTIFIED BY 'NEUTRON_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%'
       IDENTIFIED BY 'NEUTRON_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.01 sec)

MariaDB [(none)]> quit;
Bye
[root@controller ~]#
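
As a quick, hedged sanity check of the grants (using the NEUTRON_DBPASS placeholder above), the new account can list the databases it is allowed to see; neutron should appear, and the -h controller flag exercises the '%' grant:

[root@controller ~]# mysql -u neutron -pNEUTRON_DBPASS -h controller -e "SHOW DATABASES;"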

 

Source the admin credentials file.

[root@controller ~]# . admin-openrc

 

Create the neutron user.

Enter NEUTRON_PASS at the password prompt.

[root@controller ~]# openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | d989d834135e40ecbb461c622bfb36bd |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@controller ~]#

 

Add the admin role to the neutron user

[root@controller ~]# openstack role add --project service --user neutron admin

 

Create the neutron service entity

[root@controller ~]# openstack service create --name neutron \
   --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | a714327b7ff14147811588e03e0572ff |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
[root@controller ~]#

 

Create the Networking service API endpoints

[root@controller ~]# openstack endpoint create --region RegionOne \
   network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | fb0b35d86a5c48f39916835f429fc9e4 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | a714327b7ff14147811588e03e0572ff |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
   network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 54310c0061b345919558e80df87f7a55 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | a714327b7ff14147811588e03e0572ff |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
   network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | d0234bd35b844d8da24bd806bc4fa6a9 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | a714327b7ff14147811588e03e0572ff |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]#


Configure networking options

You can deploy the Networking service using one of the two architectures represented by options 1 and 2.

Option 1 deploys the simplest possible architecture, which supports attaching instances only to provider (external) networks.
There are no self-service (private) networks, routers, or floating IP addresses; only the admin or other privileged users can manage provider networks.

Option 2 augments option 1 with layer-3 services that support attaching instances to self-service networks.
The demo user, or other users without administrative privileges, can manage self-service networks, including routers that provide connectivity between self-service and provider networks.
Additionally, floating IP addresses provide connectivity to instances on self-service networks from external networks such as the Internet.

Self-service networks typically use overlay networks.
Overlay network protocols such as VXLAN include additional packet headers that increase overhead and decrease the capacity available for the payload or user data.
Without knowledge of the virtual network infrastructure, instances attempt to send packets using the default Ethernet maximum transmission unit (MTU) of 1500 bytes.
The Networking service automatically provides the correct MTU value to instances via DHCP.
However, some cloud images do not use DHCP or ignore the DHCP MTU option, and require configuration via metadata or a script.
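
For images that ignore the DHCP MTU option, the value can be set by hand inside the instance. A minimal sketch, assuming the instance NIC is eth0 and a VXLAN overlay on a 1500-byte underlay (VXLAN adds roughly 50 bytes of headers, leaving 1450 bytes for the instance):

# inside the instance: lower the interface MTU to fit under the VXLAN overlay
$ sudo ip link set dev eth0 mtu 1450
$ ip link show eth0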

 

Reference pages

Networking Option 1: Provider networks

Networking Option 2: Self-service networks

 

This guide configures Networking Option 2: self-service networks

 

Install the components

[root@controller ~]# yum install -y openstack-neutron openstack-neutron-ml2 \
   openstack-neutron-linuxbridge ebtables

 

Edit the neutron.conf file

[root@controller ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true


[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron


[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS


[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS


[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

 

Edit the ml2_conf.ini file

[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security


[ml2_type_flat]
flat_networks = provider


[ml2_type_vxlan]
vni_ranges = 1:1000


[securitygroup]
enable_ipset = true



 

Configure the Linux bridge agent

The Linux bridge agent builds the layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.

Configuration reference: https://docs.openstack.org/ocata/ko_KR/install-guide-rdo/neutron-controller-install-option2.html

Edit the linuxbridge_agent.ini file

[root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth1


[vxlan]
enable_vxlan = true
local_ip = 40.0.0.101
l2_population = true



[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
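
Before relying on the iptables firewall driver, it may be worth verifying that the kernel supports network bridge filters; both sysctl values below should report 1 (the br_netfilter module may need to be loaded first):

[root@controller ~]# modprobe br_netfilter
[root@controller ~]# sysctl net.bridge.bridge-nf-call-iptables
[root@controller ~]# sysctl net.bridge.bridge-nf-call-ip6tables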

 

Configure the layer-3 agent

Edit the l3_agent.ini file

[root@controller ~]# vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge

 

Configure the DHCP agent: edit the dhcp_agent.ini file

[root@controller ~]# vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

 

Work on the compute node

[root@compute ~]# yum install -y openstack-neutron-linuxbridge ebtables ipset

 

Edit the neutron.conf file

[root@compute ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone


[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS


[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

 

Configure networking options

Reference: https://docs.openstack.org/ocata/ko_KR/install-guide-rdo/neutron-compute-install-option2.html

Networking option 2: configure self-service networks on the compute node

[root@compute ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth1


[vxlan]
enable_vxlan = true
local_ip = 40.0.0.102
l2_population = true


[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

 

Configure the Compute service to use the Networking service.

Edit nova.conf

[root@compute ~]# vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS

 

Restart the Compute service

[root@compute ~]# systemctl restart openstack-nova-compute.service

 

Enable and start the Linux bridge agent

[root@compute ~]# systemctl enable neutron-linuxbridge-agent.service
[root@compute ~]# systemctl start neutron-linuxbridge-agent.service

 

Work on the controller node

Configure the metadata agent

 

Before starting the Linux bridge agent, modify the ifcfg-eth1 file (needs verification? note the prompt below shows this step on the compute node)

[root@compute ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth1
ONBOOT=yes
#IPADDR=192.168.122.102
#NETMASK=255.255.255.0
#GATEWAY=192.168.122.1
[root@compute ~]# systemctl restart network

 

https://docs.openstack.org/neutron/pike/install/controller-install-obs.html

The metadata agent provides configuration information, such as credentials, to instances.

[root@controller ~]# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
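
Replace METADATA_SECRET with a suitable secret; the same value must be reused in nova.conf below. One hedged way to generate such a value:

[root@controller ~]# openssl rand -hex 10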

 

Edit nova.conf

[root@controller ~]# vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET

 

Finalize the installation

The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If the symbolic link is not created automatically, create it with the following command.

[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

 

Populate the neutron database

[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
   --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
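
If the migration output is hard to read, a hedged way to confirm the schema reached the latest revision is neutron-db-manage's current subcommand:

[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
   --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current" neutron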

 

Restart the Compute API service.

[root@controller ~]# systemctl restart openstack-nova-api.service

 

Enable and start the Networking services

[root@controller ~]#  systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
[root@controller ~]#  systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service

 

For networking option 2, enable and start the layer-3 service

[root@controller ~]# systemctl enable neutron-l3-agent.service
[root@controller ~]# systemctl start neutron-l3-agent.service

 

Verification

[root@controller ~]# . admin-openrc
[root@controller ~]# openstack extension list --network
+----------------------------------------------------------------------------------------------+---------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Name                                                                                         | Alias                     | Description                                                                                                                                              |
+----------------------------------------------------------------------------------------------+---------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Default Subnetpools                                                                          | default-subnetpools       | Provides ability to mark and use a subnetpool as the default                                                                                             |
| Network IP Availability                                                                      | network-ip-availability   | Provides IP availability data for each network and subnet.                                                                                               |
| Network Availability Zone                                                                    | network_availability_zone | Availability zone support for network.                                                                                                                   |
| Auto Allocated Topology Services                                                             | auto-allocated-topology   | Auto Allocated Topology Services.                                                                                                                        |
| Neutron L3 Configurable external gateway mode                                                | ext-gw-mode               | Extension of the router abstraction for specifying whether SNAT should occur on the external gateway                                                     |
| Port Binding                                                                                 | binding                   | Expose port bindings of a virtual port to external application                                                                                           |
| agent                                                                                        | agent                     | The agent management extension.                                                                                                                          |
| Subnet Allocation                                                                            | subnet_allocation         | Enables allocation of subnets from a subnet pool                                                                                                         |
| L3 Agent Scheduler                                                                           | l3_agent_scheduler        | Schedule routers among l3 agents                                                                                                                         |
| Tag support                                                                                  | tag                       | Enables to set tag on resources.                                                                                                                         |
| Neutron external network                                                                     | external-net              | Adds external network attribute to network resource.                                                                                                     |
| Tag support for resources with standard attribute: trunk, policy, security_group, floatingip | standard-attr-tag         | Enables to set tag on resources with standard attribute.                                                                                                 |
| Neutron Service Flavors                                                                      | flavors                   | Flavor specification for Neutron advanced services                                                                                                       |
| Network MTU                                                                                  | net-mtu                   | Provides MTU attribute for a network resource.                                                                                                           |
| Availability Zone                                                                            | availability_zone         | The availability zone extension.                                                                                                                         |
| Quota management support                                                                     | quotas                    | Expose functions for quotas management per tenant                                                                                                        |
| If-Match constraints based on revision_number                                                | revision-if-match         | Extension indicating that If-Match based on revision_number is supported.                                                                                |
| HA Router extension                                                                          | l3-ha                     | Add HA capability to routers.                                                                                                                            |
| Provider Network                                                                             | provider                  | Expose mapping of virtual networks to physical networks                                                                                                  |
| Multi Provider Network                                                                       | multi-provider            | Expose mapping of virtual networks to multiple physical networks                                                                                         |
| Quota details management support                                                             | quota_details             | Expose functions for quotas usage statistics per project                                                                                                 |
| Address scope                                                                                | address-scope             | Address scopes extension.                                                                                                                                |
| Neutron Extra Route                                                                          | extraroute                | Extra routes configuration for L3 router                                                                                                                 |
| Network MTU (writable)                                                                       | net-mtu-writable          | Provides a writable MTU attribute for a network resource.                                                                                                |
| Subnet service types                                                                         | subnet-service-types      | Provides ability to set the subnet service_types field                                                                                                   |
| Resource timestamps                                                                          | standard-attr-timestamp   | Adds created_at and updated_at fields to all Neutron resources that have Neutron standard attributes.                                                    |
| Neutron Service Type Management                                                              | service-type              | API for retrieving service providers for Neutron advanced services                                                                                       |
| Router Flavor Extension                                                                      | l3-flavors                | Flavor support for routers.                                                                                                                              |
| Port Security                                                                                | port-security             | Provides port security                                                                                                                                   |
| Neutron Extra DHCP options                                                                   | extra_dhcp_opt            | Extra options configuration for DHCP. For example PXE boot options to DHCP clients can be specified (e.g. tftp-server, server-ip-address, bootfile-name) |
| Resource revision numbers                                                                    | standard-attr-revisions   | This extension will display the revision number of neutron resources.                                                                                    |
| Pagination support                                                                           | pagination                | Extension that indicates that pagination is enabled.                                                                                                     |
| Sorting support                                                                              | sorting                   | Extension that indicates that sorting is enabled.                                                                                                        |
| security-group                                                                               | security-group            | The security groups extension.                                                                                                                           |
| DHCP Agent Scheduler                                                                         | dhcp_agent_scheduler      | Schedule networks among dhcp agents                                                                                                                      |
| Router Availability Zone                                                                     | router_availability_zone  | Availability zone support for router.                                                                                                                    |
| RBAC Policies                                                                                | rbac-policies             | Allows creation and modification of policies that control tenant access to resources.                                                                    |
| Tag support for resources: subnet, subnetpool, port, router                                  | tag-ext                   | Extends tag support to more L2 and L3 resources.                                                                                                         |
| standard-attr-description                                                                    | standard-attr-description | Extension to add descriptions to standard attributes                                                                                                     |
| Neutron L3 Router                                                                            | router                    | Router abstraction for basic L3 forwarding between L2 Neutron networks and access to external networks via a NAT gateway.                                |
| Allowed Address Pairs                                                                        | allowed-address-pairs     | Provides allowed address pairs                                                                                                                           |
| project_id field enabled                                                                     | project-id                | Extension that indicates that project_id field is enabled.                                                                                               |
| Distributed Virtual Router                                                                   | dvr                       | Enables configuration of Distributed Virtual Routers.                                                                                                    |
+----------------------------------------------------------------------------------------------+---------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@controller ~]#

 

Check the agent list

[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 20957f8f-f380-4f16-b953-6cf7b15838ba | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent          |
| 22649ee4-bb5d-4caf-b2fb-660477c54bd1 | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| 869eb101-a59b-4088-87ab-f17d1634efd3 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| e827c41e-b4a0-4e3a-8ca6-4cd8bea7a3bf | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
[root@controller ~]#

 

Normal output once the compute node's agent registers (synchronization takes some time?!)

[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 20957f8f-f380-4f16-b953-6cf7b15838ba | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent          |
| 22649ee4-bb5d-4caf-b2fb-660477c54bd1 | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| 6d3a2576-00c1-4dba-82e6-b37355e870cf | Linux bridge agent | compute    | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 869eb101-a59b-4088-87ab-f17d1634efd3 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| e827c41e-b4a0-4e3a-8ca6-4cd8bea7a3bf | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

Items needing further verification:

Cause of these messages??

controller node

2017-12-02 10:43:01.242 975 WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.agent._common_agent.CommonAgentLoop._report_state' run outlasted interval by 30.00 sec
2017-12-02 10:43:01.280 975 INFO neutron.plugins.ml2.drivers.agent._common_agent [-] Linux bridge agent Agent has just been revived. Doing a full sync.
2017-12-02 10:43:01.306 975 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 7794b67aae68435ca91fd55654b06482
2017-12-02 10:43:01.315 975 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 3bc8ea170a464c3e96c0ebd6dc6bfabe
2017-12-02 10:43:01.330 975 INFO neutron.plugins.ml2.drivers.agent._common_agent [req-b6886195-b847-460d-ab64-7c3d17c6af87 - - - - -] Linux bridge agent Agent RPC Daemon Started!
2017-12-02 10:43:01.331 975 INFO neutron.plugins.ml2.drivers.agent._common_agent [req-b6886195-b847-460d-ab64-7c3d17c6af87 - - - - -] Linux bridge agent Agent out of sync with plugin!
2017-12-02 10:43:01.338 975 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect [req-b6886195-b847-460d-ab64-7c3d17c6af87 - - - - -] Clearing orphaned ARP spoofing entries for devices []

Linux bridge agent Agent out of sync with plugin!

WARNING oslo.service.loopingcall

compute node

2017-12-02 10:43:50.012 956 ERROR neutron.plugins.ml2.drivers.agent._common_agent     message = self.waiters.get(msg_id, timeout=timeout)
2017-12-02 10:43:50.012 956 ERROR neutron.plugins.ml2.drivers.agent._common_agent   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 347, in get
2017-12-02 10:43:50.012 956 ERROR neutron.plugins.ml2.drivers.agent._common_agent     'to message ID %s' % msg_id)
2017-12-02 10:43:50.012 956 ERROR neutron.plugins.ml2.drivers.agent._common_agent MessagingTimeout: Timed out waiting for a reply to message ID 086fad94db8c45ac842aa74d46bc27ad
2017-12-02 10:43:50.012 956 ERROR neutron.plugins.ml2.drivers.agent._common_agent
2017-12-02 10:43:50.013 956 WARNING oslo.service.loopingcall [-] Function 'neutron.plugins.ml2.drivers.agent._common_agent.CommonAgentLoop._report_state' run outlasted interval by 30.00 sec
2017-12-02 10:43:50.047 956 INFO neutron.plugins.ml2.drivers.agent._common_agent [-] Linux bridge agent Agent has just been revived. Doing a full sync.
2017-12-02 10:43:50.075 956 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 086fad94db8c45ac842aa74d46bc27ad
2017-12-02 10:43:50.084 956 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 1404bc1930c1477cac6ed8fdc1794f9d
2017-12-02 10:43:50.194 956 INFO neutron.plugins.ml2.drivers.agent._common_agent [req-f7ecd86f-c1cf-4a74-bb67-84bde93d17b6 - - - - -] Linux bridge agent Agent out of sync with plugin!

ERROR neutron.plugins.ml2.drivers.agent._common_agent

WARNING oslo.service.loopingcall [-] Function
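
As a hedged first pass at these messages: MessagingTimeout and "Agent out of sync with plugin!" usually point at RabbitMQ reachability or clock skew between the nodes, since agent heartbeats are time-sensitive. Some checks that may help narrow it down:

# on the controller: is RabbitMQ running and listening on 5672?
[root@controller ~]# systemctl status rabbitmq-server
[root@controller ~]# ss -tnlp | grep 5672

# on both nodes: are the clocks in sync?
[root@controller ~]# chronyc sources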

 

 

 

Dashboard Horizon

https://docs.openstack.org/ocata/ko_KR/install-guide-rdo/horizon-install.html

https://docs.openstack.org/horizon/pike/install/install-rdo.html

 

Work on the controller node

Install openstack-dashboard

[root@controller ~]# yum install -y openstack-dashboard

 

Edit the local_settings file

[root@controller ~]# vi /etc/openstack-dashboard/local_settings


OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# Allow access from all hosts
ALLOWED_HOSTS = ['*', ]

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'

# When selecting networking option 2
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': True,
    'enable_ipv6': True,
    'enable_distributed_router': True,
    'enable_ha_router': False,
    'enable_fip_topology_check': True,
    'enable_lb' : False,
    'enable_firewall' : False,
    'enable_vpn' : True,
}

# When selecting networking option 1, uncomment and use this block instead
#OPENSTACK_NEUTRON_NETWORK = {
#    'enable_router': False,
#    'enable_quotas': False,
#    'enable_ipv6': True,
#    'enable_distributed_router': False,
#    'enable_ha_router': False,
#    'enable_fip_topology_check': False,
#}


TIME_ZONE = "Asia/Seoul"


# Comment out the existing SESSION_ENGINE entry and use the cache backend
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

 

Edit the openstack-dashboard.conf file

[root@controller ~]# vi /etc/httpd/conf.d/openstack-dashboard.conf
WSGIDaemonProcess dashboard
WSGIProcessGroup dashboard
WSGISocketPrefix run/wsgi
WSGIApplicationGroup %{GLOBAL}

Add WSGIApplicationGroup %{GLOBAL}, then restart httpd.
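
Before restarting, a hedged sanity check of the Apache configuration can catch syntax mistakes in the file:

[root@controller ~]# httpd -t
Syntax OK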

 

Restart the httpd and memcached services

[root@controller ~]# systemctl restart httpd memcached

 

If using firewalld

[root@controller ~]# firewall-cmd --permanent --add-port=80/tcp
success
[root@controller ~]# firewall-cmd --reload
success
[root@controller ~]#

 

If using iptables

[root@controller ~]# iptables -A IN_public_allow -p tcp -m tcp --dport 80 -m conntrack --ctstate NEW -j ACCEPT
[root@controller ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
[root@controller ~]# iptables -L

 

When accessing http://controller/dashboard, the following messages appeared in /var/log/httpd/error_log:

[Thu Nov 30 13:45:57.003586 2017] [core:error] [pid 2015] [client 40.0.0.1:55710] Script timed out before returning headers: django.wsgi
[Thu Nov 30 13:45:58.178883 2017] [core:error] [pid 2012] [client 40.0.0.1:55712] Script timed out before returning headers: django.wsgi
[Thu Nov 30 13:47:35.981195 2017] [core:error] [pid 2013] [client 40.0.0.1:55714] Script timed out before returning headers: django.wsgi

 

 

 

openstack-status

[root@controller ~]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-conductor:               active
openstack-nova-console:                 inactive  (disabled on boot)
openstack-nova-consoleauth:             active
openstack-nova-xvpvncproxy:             inactive  (disabled on boot)
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     inactive  (disabled on boot)
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-linuxbridge-agent:              active
== Support services ==
mariadb:                                active
dbus:                                   active
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| 1697cb80fbed4140bbde5cf1f2fcafc1 | admin     |
| 1a77bd31ec4744bf82fab8fcff5fa561 | nova      |
| b316501ef28c4b28a44c64e65315ef83 | glance    |
| d979c3d5c3e44d6bac0225361907eff3 | placement |
| dd785cc29b9a4db8bdded1cb8ff056cc | demo      |
| f23bc457da1b44bb86712308af8e52b9 | neutron   |
+----------------------------------+-----------+
== Glance images ==
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| 8a830f40-8837-4efb-a8e4-550c85bfa9c8 | cirros |
+--------------------------------------+--------+
== Nova managed services ==
+--------------------------------------+------------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+
| Id                                   | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason | Forced down |
+--------------------------------------+------------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+
| b4331f58-93ac-4310-9af5-c7ed14ac7931 | nova-conductor   | controller | internal | enabled | up    | 2017-12-01T07:20:49.000000 | -               | False       |
| 98a118aa-28ce-45bd-b468-f5a4862be5e3 | nova-consoleauth | controller | internal | enabled | up    | 2017-12-01T07:20:51.000000 | -               | False       |
| 5801fdd3-864d-4d46-b7c5-fac6b32a6687 | nova-scheduler   | controller | internal | enabled | up    | 2017-12-01T07:20:50.000000 | -               | False       |
| 4e43a928-879c-46d5-a045-6aa8a705c363 | nova-compute     | compute    | nova     | enabled | up    | 2017-12-01T07:20:48.000000 | -               | False       |
+--------------------------------------+------------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+
== Nova networks ==
usage: nova [--version] [--debug] [--os-cache] [--timings]
            [--os-region-name <region-name>] [--service-type <service-type>]
            [--service-name <service-name>]
            [--os-endpoint-type <endpoint-type>]
            [--os-compute-api-version <compute-api-ver>]
            [--endpoint-override <bypass-url>] [--profile HMAC_KEY]
            [--insecure] [--os-cacert <ca-certificate>]
            [--os-cert <certificate>] [--os-key <key>] [--timeout <seconds>]
            [--os-auth-type <name>] [--os-auth-url OS_AUTH_URL]
            [--os-domain-id OS_DOMAIN_ID] [--os-domain-name OS_DOMAIN_NAME]
            [--os-project-id OS_PROJECT_ID]
            [--os-project-name OS_PROJECT_NAME]
            [--os-project-domain-id OS_PROJECT_DOMAIN_ID]
            [--os-project-domain-name OS_PROJECT_DOMAIN_NAME]
            [--os-trust-id OS_TRUST_ID]
            [--os-default-domain-id OS_DEFAULT_DOMAIN_ID]
            [--os-default-domain-name OS_DEFAULT_DOMAIN_NAME]
            [--os-user-id OS_USER_ID] [--os-username OS_USERNAME]
            [--os-user-domain-id OS_USER_DOMAIN_ID]
            [--os-user-domain-name OS_USER_DOMAIN_NAME]
            [--os-password OS_PASSWORD]
            <subcommand> ...
error: argument <subcommand>: invalid choice: u'network-list'
Try 'nova help ' for more information.
== Nova instance flavors ==
+----+------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+------+-----------+------+-----------+------+-------+-------------+-----------+
+----+------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+----+------+-----------+--------+------------+-------------+----------+
| ID | Name | Tenant ID | Status | Task State | Power State | Networks |
+----+------+-----------+--------+------------+-------------+----------+
+----+------+-----------+--------+------------+-------------+----------+
[root@controller ~]#

 

SELinux work (needs verification)

[root@controller ~]# setsebool -P httpd_can_network_connect on

 

 

The Domain field does not appear on the login page.

// This appears to be caused by OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True; testing needed!

 

With iptables disabled, the Domain field is displayed normally.

 

Domain : default

User Name : admin

Password : ADMIN_PASS

 

After logging in

 

Block Storage service Cinder 

 

If there is a separate storage node, this work must be done on that node.

This test setup consists of a controller node and a compute node; apart from nova-compute, everything was tested on the controller node.

https://docs.openstack.org/ocata/ko_KR/install-guide-rdo/cinder.html

https://docs.openstack.org/cinder/pike/install/

 

Work on the controller node

 

Create the cinder database and user

 

[root@controller ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 26
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE cinder;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost'
       IDENTIFIED BY 'CINDER_DBPASS';

MariaDB [(none)]>  GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%'
       IDENTIFIED BY 'CINDER_DBPASS';

MariaDB [(none)]> flush privileges;

MariaDB [(none)]> quit;
Bye
[root@controller ~]#

 

Create the OpenStack cinder user (enter CINDER_PASS at the prompt)

[root@controller ~]# openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 64384bdac78b4188a0eebd245015cb72 |
| name                | cinder                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@controller ~]#

 

Add the admin role to the cinder user

[root@controller ~]# openstack role add --project service --user cinder admin

 

Create the cinderv2 and cinderv3 service entities

[root@controller ~]# openstack service create --name cinderv2 \
   --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 063e877108da473a9100a2720bd60aa7 |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+
[root@controller ~]# openstack service create --name cinderv3 \
   --description "OpenStack Block Storage" volumev3

+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 69309bb71f9641d69ba1f02619640b44 |
| name        | cinderv3                         |
| type        | volumev3                         |
+-------------+----------------------------------+
[root@controller ~]#

 

Create the Block Storage service API endpoints

[root@controller ~]# openstack endpoint create --region RegionOne \
   volumev2 public http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 549a0aaedc01449d8cd5c32ecc99417b         |
| interface    | public                                   |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 063e877108da473a9100a2720bd60aa7         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
   volumev2 internal http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 71d6d8fc61c248e98a658a04c52e4053         |
| interface    | internal                                 |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 063e877108da473a9100a2720bd60aa7         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
   volumev2 admin http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 5df1ebc0ff964941ba20ce9741bb54fa         |
| interface    | admin                                    |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 063e877108da473a9100a2720bd60aa7         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]#


 

[root@controller ~]# openstack endpoint create --region RegionOne \
   volumev3 public http://controller:8776/v3/%\(project_id\)s

+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | feb6fadb6f3c44328cb1e21b9a95cf8c         |
| interface    | public                                   |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 69309bb71f9641d69ba1f02619640b44         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
   volumev3 internal http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | f75311f0f96243a3b9dad05f624aabad         |
| interface    | internal                                 |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 69309bb71f9641d69ba1f02619640b44         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
   volumev3 admin http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 389f36c141ba4b08b7699e164610aba4         |
| interface    | admin                                    |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 69309bb71f9641d69ba1f02619640b44         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]#


 

Install openstack-cinder

[root@controller ~]# yum install -y openstack-cinder

 

Edit cinder.conf

[root@controller ~]# vi /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 40.0.0.101


[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder


[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS


[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

 

Populate the Block Storage database

[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder

This prints the message: Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT". Change the logdir option to log-dir? (to test later)

 

Edit nova.conf

[root@controller ~]# vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne

 

Restart the Compute API service

[root@controller ~]# systemctl restart openstack-nova-api

 

Enable and start the Block Storage services

[root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

 

For a cinder local volume, further checking is needed: only cinder-scheduler appears below, and cinder-volume will register once a storage backend is configured and the service is started.

[root@controller ~]# openstack volume service list
+------------------+------------+------+---------+-------+----------------------------+
| Binary           | Host       | Zone | Status  | State | Updated At                 |
+------------------+------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up    | 2017-11-30T09:01:45.000000 |
+------------------+------------+------+---------+-------+----------------------------+
[root@controller ~]#

 

Work after adding a separate disk for cinder volumes to the cinder node

Reference pages: https://docs.openstack.org/ocata/ko_KR/install-guide-rdo/cinder-storage-install.html

https://docs.openstack.org/cinder/pike/install/cinder-storage-install-rdo.html

 

Notes when configuring a separate storage node

If cinder is configured on the controller node, skip the following.

Work on the cinder node

A storage node requires additional setup: iSCSI is used and LVM must be configured.

 

Install LVM

[root@cinder ~]# yum install -y lvm2

 

Enable and start LVM

[root@cinder ~]# systemctl enable lvm2-lvmetad.service
[root@cinder ~]# systemctl start lvm2-lvmetad.service

 

Check the disk information on the storage (cinder) node

[root@cinder ~]# fdisk -l

Disk /dev/vda: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0009fed1

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048     4196351     2097152   83  Linux
/dev/vda2         4196352    12584959     4194304   82  Linux swap / Solaris
/dev/vda3        12584960   209715199    98565120   83  Linux

Disk /dev/vdb: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@cinder ~]# 

The added disk is /dev/vdb

 

Create the physical volume

[root@cinder ~]# pvcreate /dev/vdb
  Physical volume "/dev/vdb" successfully created.
[root@cinder ~]# 

 

Create the cinder-volumes volume group

[root@cinder ~]# vgcreate cinder-volumes /dev/vdb
  Volume group "cinder-volumes" successfully created
[root@cinder ~]# 
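
A quick, hedged check that the physical volume and volume group look as expected:

[root@cinder ~]# pvs
[root@cinder ~]# vgs cinder-volumes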

 

Edit the LVM filter in lvm.conf

[root@cinder ~]# vi /etc/lvm/lvm.conf
        # Cinder Settings
        filter = [ "a/vdb/", "r|.*/|" ]

If the operating system disk also uses LVM, that device must be accepted by the filter as well (see the sketch below); on this test machine, the OS disk is a plain volume.
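
For reference, a sketch of what the filter might look like if the OS disk (assumed here to be /dev/vda) were itself an LVM physical volume; the OS device must be accepted too, or the system can fail to boot:

        filter = [ "a/vda/", "a/vdb/", "r|.*/|" ]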

 

lvmdiskscan output before setting the filter

[root@cinder ~]# lvmdiskscan
  /dev/vda1 [       2.00 GiB]
  /dev/vda2 [       4.00 GiB]
  /dev/vda3 [     <94.00 GiB]
  /dev/vdb  [      50.00 GiB] LVM physical volume
  0 disks
  3 partitions
  1 LVM physical volume whole disk
  0 LVM physical volumes
[root@cinder ~]# 

 

After setting the filter

[root@cinder ~]# lvmdiskscan
  /dev/vdb [      50.00 GiB] LVM physical volume
  0 disks
  0 partitions
  1 LVM physical volume whole disk
  0 LVM physical volumes
[root@cinder ~]# 

 

Install the packages

[root@cinder ~]# yum install -y openstack-cinder targetcli python-keystone

 

Edit cinder.conf

[root@cinder ~]# vi /etc/cinder/cinder.conf
# my_ip is the management interface IP of this node; the controller node IP is entered here (cinder runs on the controller in this test)

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 40.0.0.101
enabled_backends = lvm
glance_api_servers = http://controller:9292


[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder


[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS


# The [lvm] section does not exist by default; add it
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm


[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

 

Enable and start the cinder volume services

[root@cinder ~]# systemctl enable openstack-cinder-volume.service target.service
[root@cinder ~]# systemctl start openstack-cinder-volume.service target.service
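
Once cinder-volume registers, it should appear alongside cinder-scheduler in the service list. A small, hedged end-to-end check from the controller, creating and listing a 1 GB test volume:

[root@controller ~]# . admin-openrc
[root@controller ~]# openstack volume service list
[root@controller ~]# openstack volume create --size 1 test-volume
[root@controller ~]# openstack volume list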

 

 

— Testing in progress

Change the provider network device from eth0 to eth1

Controller node work: set BOOTPROTO=none and comment out the network address settings

[root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
physical_interface_mappings = provider:eth1
[root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth1
ONBOOT=yes
#IPADDR=192.168.122.101
#NETMASK=255.255.255.0
#GATEWAY=192.168.122.1
[root@controller ~]#
[root@controller ~]# init 6 

 

Compute node work: set BOOTPROTO=none and comment out the network address settings

[root@compute ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
physical_interface_mappings = provider:eth1
[root@compute ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth1
ONBOOT=yes
#IPADDR=192.168.122.102
#NETMASK=255.255.255.0
#GATEWAY=192.168.122.1
[root@compute ~]# init 6

 

After the systems reboot, verify that neutron-linuxbridge-agent is running

controller node

[root@controller ~]# systemctl status neutron-linuxbridge-agent
● neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-12-01 08:50:02 KST; 5min ago
  Process: 795 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
 Main PID: 829 (neutron-linuxbr)
   CGroup: /system.slice/neutron-linuxbridge-agent.service
           ├─ 829 /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent --confi...
           ├─1378 sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
           └─1379 /usr/bin/python2 /usr/bin/neutron-rootwrap-daemon /etc/neut...

Dec 01 08:50:02 controller systemd[1]: Starting OpenStack Neutron Linux Bri.....
Dec 01 08:50:02 controller neutron-enable-bridge-firewall.sh[795]: net.bridge...
Dec 01 08:50:02 controller neutron-enable-bridge-firewall.sh[795]: net.bridge...
Dec 01 08:50:02 controller systemd[1]: Started OpenStack Neutron Linux Brid...t.
Dec 01 08:50:06 controller neutron-linuxbridge-agent[829]: Guru meditation no...
Dec 01 08:50:22 controller sudo[1378]:  neutron : TTY=unknown ; PWD=/ ; USE...nf
Hint: Some lines were ellipsized, use -l to show in full.
[root@controller ~]#

compute node

[root@compute ~]#  systemctl status neutron-linuxbridge-agent
● neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-12-01 08:49:35 KST; 6min ago
  Process: 785 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
 Main PID: 804 (neutron-linuxbr)
   CGroup: /system.slice/neutron-linuxbridge-agent.service
           ├─804 /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent --config...
           ├─963 sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
           └─964 /usr/bin/python2 /usr/bin/neutron-rootwrap-daemon /etc/neutr...

Dec 01 08:49:35 compute systemd[1]: Starting OpenStack Neutron Linux Bridge.....
Dec 01 08:49:35 compute neutron-enable-bridge-firewall.sh[785]: net.bridge.br...
Dec 01 08:49:35 compute neutron-enable-bridge-firewall.sh[785]: net.bridge.br...
Dec 01 08:49:35 compute systemd[1]: Started OpenStack Neutron Linux Bridge ...t.
Dec 01 08:49:36 compute neutron-linuxbridge-agent[804]: Guru meditation now r...
Dec 01 08:49:38 compute sudo[963]:  neutron : TTY=unknown ; PWD=/ ; USER=ro...nf
Hint: Some lines were ellipsized, use -l to show in full.
[root@compute ~]#
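
Both agents report active. As an additional check from the controller (again assuming admin credentials are sourced), the Linux bridge agents on both nodes should show as alive in the agent list:

[root@controller ~]# openstack network agent list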

 

Installing a VNC server on CentOS 7

TigerVNC is a high-performance, platform-neutral implementation of VNC (Virtual Network Computing), a client/server application that allows users to launch and interact with graphical applications on remote machines. TigerVNC provides the levels of performance necessary to run 3D and video applications, and it attempts to maintain a common look and feel and re-use components, where possible, across the various platforms that it supports. TigerVNC also provides extensions for advanced authentication methods and TLS encryption.

Reference pages: https://www.itzgeek.com/how-tos/linux/centos-how-tos/configure-vnc-server-on-centos-7-rhel-7.html

https://www.ipentec.com/document/linux-centos-7-vncserver-tigervncserver-install

tigervnc viewer : https://github.com/TigerVNC/tigervnc/releases

 

Install tigervnc-server

[root@centos74 ~]# yum install -y tigervnc-server

 

Copy the vncserver unit file

[root@centos74 ~]# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
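
The :1 in the copied file name is the systemd template instance, which maps to VNC display :1 (TCP port 5901). A second display could be prepared the same way, for example:

[root@centos74 ~]# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:2.service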

 

Edit the vncserver unit file

[root@centos74 ~]# vi /etc/systemd/system/vncserver@\:1.service
[Service]
Type=forking
User=<USER>

# Clean any existing files in /tmp/.X11-unix environment
ExecStartPre=-/usr/bin/vncserver -kill %i
ExecStart=/usr/bin/vncserver %i
PIDFile=/home/<USER>/.vnc/%H%i.pid
ExecStop=-/usr/bin/vncserver -kill %i

 

After editing:

[Service]
Type=forking

# Clean any existing files in /tmp/.X11-unix environment
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/usr/sbin/runuser -l root -c "/usr/bin/vncserver %i"
PIDFile=/root/.vnc/%H%i.pid
ExecStop=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'

 

After changing the vncserver configuration, reload systemd so the change takes effect.

[root@centos74 ~]# systemctl daemon-reload

 

Enable vncserver

[root@centos74 ~]# systemctl enable vncserver@:1.service
Created symlink from /etc/systemd/system/multi-user.target.wants/vncserver@:1.service to /etc/systemd/system/vncserver@:1.service.
[root@centos74 ~]#

 

Set the VNC password. (In the transcript below, the new desktop comes up on display :2 because display :1 is already in use.)

[root@centos74 ~]# vncserver

You will require a password to access your desktops.

Password:
Verify:
Would you like to enter a view-only password (y/n)? y
Password:
Verify:

New 'centos74:2 (root)' desktop is centos74:2

Creating default startup script /root/.vnc/xstartup
Creating default config /root/.vnc/config
Starting applications specified in /root/.vnc/xstartup
Log file is /root/.vnc/centos74:2.log

[root@centos74 ~]#

 

Configure the firewall.

[root@centos74 ~]#  firewall-cmd --permanent --zone=public --add-service vnc-server
success
[root@centos74 ~]# firewall-cmd --reload
success
[root@centos74 ~]#
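
The predefined vnc-server firewalld service opens a small range of VNC display ports. If only a single display needs to be reachable, the port could instead be opened directly; a sketch for display :1 (port 5901):

[root@centos74 ~]# firewall-cmd --permanent --zone=public --add-port=5901/tcp
[root@centos74 ~]# firewall-cmd --reload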

 

Verify that vncserver is running

[root@centos74 ~]# systemctl status vncserver@:1.service
● vncserver@:1.service - Remote desktop service (VNC)
   Loaded: loaded (/etc/systemd/system/vncserver@:1.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2017-11-29 09:04:27 KST; 30s ago
  Process: 1100 ExecStart=/usr/sbin/runuser -l root -c /usr/bin/vncserver %i (code=exited, status=0/SUCCESS)
  Process: 1082 ExecStartPre=/bin/sh -c /usr/bin/vncserver -kill %i > /dev/null 2>&1 || : (code=exited, status=0/SUCCESS)
 Main PID: 1231 (Xvnc)
   CGroup: /system.slice/system-vncserver.slice/vncserver@:1.service
           ‣ 1231 /usr/bin/Xvnc :1 -auth /root/.Xauthority -desktop centos74:1 (root) -fp catalogue:/etc/X11/fontpath...

Nov 29 09:04:24 centos74 systemd[1]: Starting Remote desktop service (VNC)...
Nov 29 09:04:27 centos74 systemd[1]: Started Remote desktop service (VNC).
[root@centos74 ~]#

 

 

The client can be downloaded from the TigerVNC site.

http://tigervnc.org/

 

For the VNC connection, the server uses port 5901; in the VNC client, you can simply enter display number 1 (i.e., host:1).

After entering the password, you can confirm that the VNC connection works.
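
On a Linux client, the same connection can also be made with the vncviewer command from the tigervnc package. SERVER_IP below is a placeholder for the VNC server's address:

$ vncviewer SERVER_IP:1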

 

If vnc-server does not run properly:

Delete the *.log and *.pid files in the /root/.vnc directory.

Then delete the files beginning with X in the /tmp/.X11-unix directory and restart the service.

Example:

[root@test-machine ~]# cd .vnc/
[root@test-machine .vnc]# rm -f test-machine\:6.*
[root@test-machine .vnc]# cd /tmp/
[root@test-machine tmp]# rm .X11-unix/ -rf
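
After the stale files are removed, restart the matching service instance; the display number should match the one that failed (the :1 unit configured earlier is shown here as an illustration):

[root@test-machine tmp]# systemctl restart vncserver@:1.service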

 

 

 

On CentOS, a desktop environment can be installed easily with yum groupinstall.

Previously, the runlevel was set in /etc/inittab, but on CentOS 7 the default target is set with systemctl set-default.
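
For reference, the old runlevels map onto systemd targets: runlevel 3 corresponds to multi-user.target and runlevel 5 to graphical.target. The compatibility symlinks show this mapping:

[root@centos74 ~]# ls -l /usr/lib/systemd/system/runlevel3.target /usr/lib/systemd/system/runlevel5.target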

 

 

Install the package

[root@centos74 ~]# yum groupinstall "GNOME Desktop" -y

 

Check the current default target, change it, and reboot the system

[root@centos74 ~]# systemctl get-default
multi-user.target
[root@centos74 ~]# systemctl set-default graphical.target
Removed symlink /etc/systemd/system/default.target.
Created symlink from /etc/systemd/system/default.target to /usr/lib/systemd/system/graphical.target.
[root@centos74 ~]#
[root@centos74 ~]# init 6

 

After installing GNOME Desktop on a minimal installation, you will see the initial setup screen shown below.

Press 1 to select License information.

 

Press 2 to select I accept the license agreement.

 

Press c to complete the setup.