I run a personal FTP server whose disks are tied together with LVM and used as a single storage pool.

Every 3~4 years or so it breaks, and each time I end up doing LVM recovery work.

This time I plan to build md1 + md2 with Linux software RAID and create the LVM on top of them.

For the test environment, four 1G disks were added to a CentOS 7 machine and arranged as sdb + sdc = md1 and sdd + sde = md2.

Note: when I tried simulating a failure with the dd command, the RAID volume itself got corrupted, so recovery cannot be tested that way. 🙂

 

 

  • Disk information (four 1G disks are attached: sdb / sdc / sdd / sde)
[root@centos7 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk
├─sda1   8:1    0    1G  0 part /boot
├─sda2   8:2    0    1G  0 part [SWAP]
└─sda3   8:3    0   18G  0 part /
sdb      8:16   0    1G  0 disk
sdc      8:32   0    1G  0 disk
sdd      8:48   0    1G  0 disk
sde      8:64   0    1G  0 disk
[root@centos7 ~]#

 

  • Install mdadm
[root@centos7 ~]# yum install -y mdadm

 

  • Check the RAID devices
[root@centos7 ~]# cat /proc/mdstat
Personalities :
unused devices: <none>
[root@centos7 ~]#

 

  • Partition the disks with fdisk (sdb / sdc / sdd / sde)
[root@centos7 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x051d4fae.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-2097151, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151):
Using default value 2097151
Partition 1 of type Linux and of size 1023 MiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): wq
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@centos7 ~]#

~ (the same fdisk steps for sdc, sdd and sde are omitted)

[root@centos7 ~]# partprobe
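
Instead of repeating the interactive fdisk dialog for sdc, sdd and sde, the partition table can also be cloned from sdb with sfdisk. This is just a sketch of an alternative; the interactive route shown above works just as well:

# copy sdb's partition table (one 'fd' Linux raid autodetect partition) to the remaining disks
sfdisk -d /dev/sdb | sfdisk /dev/sdc
sfdisk -d /dev/sdb | sfdisk /dev/sdd
sfdisk -d /dev/sdb | sfdisk /dev/sde
partprobe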

 

  • Build the RAID 1 array md1 from /dev/sdb1 and /dev/sdc1.
[root@centos7 ~]# mdadm --create /dev/md1 --level=1 --raid-device=2 /dev/sdb1 /dev/sdc1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Fail create md1 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@centos7 ~]#

 

  • Build the RAID 1 array md2 from /dev/sdd1 and /dev/sde1.
[root@centos7 ~]# mdadm --create /dev/md2 --level=1 --raid-device=2 /dev/sdd1 /dev/sde1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Fail create md2 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
[root@centos7 ~]#

 

  • Check the RAID status
[root@centos7 ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sde1[1] sdd1[0]
      1046528 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdc1[1] sdb1[0]
      1046528 blocks super 1.2 [2/2] [UU]

unused devices: <none>
[root@centos7 ~]#

 

  • Create /etc/mdadm.conf.
[root@centos7 ~]# mdadm --detail --scan > /etc/mdadm.conf
[root@centos7 ~]# cat /etc/mdadm.conf
ARRAY /dev/md1 metadata=1.2 name=centos7:1 UUID=33827f12:f44165a0:19e7a7c9:18f88f40
ARRAY /dev/md2 metadata=1.2 name=centos7:2 UUID=b17e111b:e22dcf62:51b929f6:819a5c2c
[root@centos7 ~]#

 

  • Install the lvm2 package.
[root@centos7 ~]# yum install lvm2 -y

 

  • Create the LVM volume and format it with the xfs filesystem.
[root@centos7 ~]# pvcreate /dev/md1 /dev/md2
  Physical volume "/dev/md1" successfully created.
  Physical volume "/dev/md2" successfully created.
[root@centos7 ~]# vgcreate vg00 /dev/md1 /dev/md2
  Volume group "vg00" successfully created
[root@centos7 ~]# lvcreate -l 100%free -n data01 vg00
  Logical volume "data01" created.
[root@centos7 ~]#
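
Before formatting, it does not hurt to sanity-check the LVM stack with the standard reporting commands (exact output will vary):

pvs     # /dev/md1 and /dev/md2 should both appear as PVs in vg00
vgs     # vg00 should be roughly 2G in total
lvs     # data01 should occupy all of the free space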

[root@centos7 ~]# mkfs.xfs /dev/vg00/data01
meta-data=/dev/vg00/data01       isize=512    agcount=4, agsize=130560 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=522240, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@centos7 ~]#

 

  • Add the mount to /etc/fstab, reboot the system, and verify that the mount comes back.
[root@centos7 ~]# mkdir /data
[root@centos7 ~]# vi /etc/fstab

/dev/vg00/data01                          /data                   xfs     defaults        0 0

[root@centos7 ~]# mount -a
[root@centos7 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda3                 18G  1.4G   17G   8% /
devtmpfs                 903M     0  903M   0% /dev
tmpfs                    912M     0  912M   0% /dev/shm
tmpfs                    912M  8.7M  903M   1% /run
tmpfs                    912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               1014M  135M  880M  14% /boot
tmpfs                    183M     0  183M   0% /run/user/0
/dev/mapper/vg00-data01  2.0G   33M  2.0G   2% /data
[root@centos7 ~]# init 6 
[root@centos7 ~]# df -h |grep -i data
/dev/mapper/vg00-data01  2.0G   33M  2.0G   2% /data
[root@centos7 ~]#

 

  • Check the RAID details with the mdadm command.
[root@centos7 ~]# mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Thu Jul 18 00:29:08 2019
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Thu Jul 18 00:34:49 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : centos7:1  (local to host centos7)
              UUID : 33827f12:f44165a0:19e7a7c9:18f88f40
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
[root@centos7 ~]# mdadm --detail /dev/md2
/dev/md2:
           Version : 1.2
     Creation Time : Thu Jul 18 00:30:30 2019
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Thu Jul 18 00:35:22 2019
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : centos7:2  (local to host centos7)
              UUID : b17e111b:e22dcf62:51b929f6:819a5c2c
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1
[root@centos7 ~]#

 

  • Use mdadm's -f option to simulate a failure of the sdc1 disk in /dev/md1.
[root@centos7 ~]# mdadm /dev/md1 -f /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md1
[root@centos7 ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sde1[1] sdd1[0]
      1046528 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdc1[1](F) sdb1[0]
      1046528 blocks super 1.2 [2/1] [U_] <---- changed from UU to U_

unused devices: <none>
[root@centos7 ~]#


The filesystem is still mounted normally, and creating empty files with touch also works fine.

[root@centos7 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda3                 18G  1.4G   17G   8% /
devtmpfs                 903M     0  903M   0% /dev
tmpfs                    912M     0  912M   0% /dev/shm
tmpfs                    912M  8.7M  904M   1% /run
tmpfs                    912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               1014M  135M  880M  14% /boot
/dev/mapper/vg00-data01  2.0G   33M  2.0G   2% /data
tmpfs                    183M     0  183M   0% /run/user/0
[root@centos7 ~]# touch /data/0
[root@centos7 ~]# touch /data/1
[root@centos7 ~]# touch /data/2

 

  • To remove a disk, run mdadm $mdX_name --remove $device_name.
[root@centos7 ~]# mdadm /dev/md1 --remove /dev/sdc1
mdadm: hot removed /dev/sdc1 from /dev/md1
[root@centos7 ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sde1[1] sdd1[0]
      1046528 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdb1[0]                <--- you can see that the sdc1 disk has been removed.
      1046528 blocks super 1.2 [2/1] [U_]

unused devices: <none>
[root@centos7 ~]#

 

  • Shut the system down and add a new disk.
[root@centos7 ~]# init 0

[root@centos7 ~]# lsblk
NAME              MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda                 8:0    0   20G  0 disk
├─sda1              8:1    0    1G  0 part  /boot
├─sda2              8:2    0    1G  0 part  [SWAP]
└─sda3              8:3    0   18G  0 part  /
sdb                 8:16   0    1G  0 disk
└─sdb1              8:17   0 1023M  0 part
  └─md1             9:1    0 1022M  0 raid1
    └─vg00-data01 253:0    0    2G  0 lvm   /data
sdc                 8:32   0    1G  0 disk
└─sdc1              8:33   0 1023M  0 part
sdd                 8:48   0    1G  0 disk
└─sdd1              8:49   0 1023M  0 part
  └─md2             9:2    0 1022M  0 raid1
    └─vg00-data01 253:0    0    2G  0 lvm   /data
sde                 8:64   0    1G  0 disk
└─sde1              8:65   0 1023M  0 part
  └─md2             9:2    0 1022M  0 raid1
    └─vg00-data01 253:0    0    2G  0 lvm   /data
sdf                 8:80   0    1G  0 disk
sr0                11:0    1 1024M  0 rom
[root@centos7 ~]# fdisk /dev/sdf
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xb67cc56a.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-2097151, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151):
Using default value 2097151
Partition 1 of type Linux and of size 1023 MiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): wq
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@centos7 ~]# partprobe

 

  • Add the new partition to the RAID array (the add command was not captured in the log; a sketch follows below)
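Assuming the replacement disk came up as /dev/sdf and was partitioned like the others (the mdstat output below shows sdf1), the add step would be:

mdadm /dev/md1 --add /dev/sdf1

The resync then starts automatically:
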
[root@centos7 ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sde1[1] sdd1[0]
      1046528 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdf1[2] sdb1[0]
      1046528 blocks super 1.2 [2/1] [U_]
      [===============>.....]  recovery = 76.4% (800384/1046528) finish=0.0min speed=200096K/sec

unused devices: <none>
[root@centos7 ~]#

 

  • Simulate a failure of the sdb1 disk in /dev/md1.
[root@centos7 ~]# mdadm /dev/md1 -f /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md1
[root@centos7 ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sde1[1] sdd1[0]
      1046528 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdf1[2] sdb1[0](F)
      1046528 blocks super 1.2 [2/1] [_U]

unused devices: <none>
[root@centos7 ~]#

 

  • Check the mount status and create a few empty files with touch.
[root@centos7 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda3                 18G  1.4G   17G   8% /
devtmpfs                 903M     0  903M   0% /dev
tmpfs                    912M     0  912M   0% /dev/shm
tmpfs                    912M  8.7M  904M   1% /run
tmpfs                    912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               1014M  135M  880M  14% /boot
/dev/mapper/vg00-data01  2.0G   33M  2.0G   2% /data      <--- still mounted normally.
tmpfs                    183M     0  183M   0% /run/user/0
[root@centos7 ~]# touch /data/20

[root@centos7 ~]# touch /data/30
[root@centos7 ~]# touch /data/40

 

  • Remove the sdb1 disk from /dev/md1.
[root@centos7 ~]# mdadm /dev/md1 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md1
[root@centos7 ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sde1[1] sdd1[0]
      1046528 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdf1[2]
      1046528 blocks super 1.2 [2/1] [_U]

unused devices: <none>
[root@centos7 ~]#

 

  • Shut the system down, add another disk, and recover /dev/md1.
[root@centos7 ~]# init 0
[root@centos7 ~]# lsblk
NAME              MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda                 8:0    0   20G  0 disk
├─sda1              8:1    0    1G  0 part  /boot
├─sda2              8:2    0    1G  0 part  [SWAP]
└─sda3              8:3    0   18G  0 part  /
sdb                 8:16   0    1G  0 disk
└─sdb1              8:17   0 1023M  0 part
sdc                 8:32   0    1G  0 disk
└─sdc1              8:33   0 1023M  0 part
sdd                 8:48   0    1G  0 disk
└─sdd1              8:49   0 1023M  0 part
  └─md2             9:2    0 1022M  0 raid1
    └─vg00-data01 253:0    0    2G  0 lvm   /data
sde                 8:64   0    1G  0 disk
└─sde1              8:65   0 1023M  0 part
  └─md2             9:2    0 1022M  0 raid1
    └─vg00-data01 253:0    0    2G  0 lvm   /data
sdf                 8:80   0    1G  0 disk
└─sdf1              8:81   0 1023M  0 part
  └─md1             9:1    0 1022M  0 raid1
    └─vg00-data01 253:0    0    2G  0 lvm   /data
sdg                 8:96   0    1G  0 disk       <--- the newly added disk
[root@centos7 ~]#

The fdisk step is omitted here; a rough sketch of the equivalent commands follows.
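
Assuming the new disk is /dev/sdg (marked in the lsblk output above), the omitted steps amount to cloning the partition layout from an existing member and adding the new partition to md1:

sfdisk -d /dev/sdf | sfdisk /dev/sdg
partprobe
mdadm /dev/md1 --add /dev/sdg1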


[root@centos7 ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdd1[0] sde1[1]
      1046528 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdg1[3] sdf1[2]
      1046528 blocks super 1.2 [2/1] [_U]
      [=======>.............]  recovery = 38.2% (401280/1046528) finish=0.0min speed=200640K/sec

unused devices: <none>
[root@centos7 ~]#

 

  • Test after the RAID recovery
[root@centos7 ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdd1[0] sde1[1]
      1046528 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdg1[3] sdf1[2]
      1046528 blocks super 1.2 [2/2] [UU]

unused devices: <none>
[root@centos7 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda3                 18G  1.4G   17G   8% /
devtmpfs                 903M     0  903M   0% /dev
tmpfs                    912M     0  912M   0% /dev/shm
tmpfs                    912M  8.7M  904M   1% /run
tmpfs                    912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               1014M  135M  880M  14% /boot
/dev/mapper/vg00-data01  2.0G   33M  2.0G   2% /data
tmpfs                    183M     0  183M   0% /run/user/0
[root@centos7 ~]# cp /tmp/* /data/
cp: omitting directory ‘/tmp/systemd-private-f539feae748d4cae9178d03dceab0233-vgauthd.service-gTNVZW’
cp: omitting directory ‘/tmp/systemd-private-f539feae748d4cae9178d03dceab0233-vmtoolsd.service-ISwWEL’
[root@centos7 ~]# ls -al /data/
total 4
drwxr-xr-x   2 root root 142 Jul 18 00:54 .
dr-xr-xr-x. 18 root root 256 Jul 18 00:33 ..
-rw-r--r--   1 root root   0 Jul 18 00:41 0
-rw-r--r--   1 root root   0 Jul 18 00:41 1
-rw-r--r--   1 root root   0 Jul 18 00:48 10
-rw-r--r--   1 root root   0 Jul 18 00:50 11
-rw-r--r--   1 root root   0 Jul 18 00:50 12
-rw-r--r--   1 root root   0 Jul 18 00:50 13
-rw-r--r--   1 root root   0 Jul 18 00:41 2
-rw-r--r--   1 root root   0 Jul 18 00:49 20
-rw-r--r--   1 root root   0 Jul 18 00:49 30
-rw-r--r--   1 root root   0 Jul 18 00:49 40
-rwx------   1 root root 836 Jul 18 00:54 ks-script-r7n_b9
-rw-------   1 root root   0 Jul 18 00:54 yum.log
[root@centos7 ~]#

 

GlusterFS is a scalable, distributed network file system.

This document sets up 2 nodes for an HA-Proxy test and is intended for simple testing only.

For more detail, see the site below.

Site: https://docs.gluster.org/en/latest/

 

 

1. Create the yum repository

[root@gluster01 ~]# yum install centos-release-gluster -y
[root@gluster02 ~]# yum install centos-release-gluster -y

 

2. Add the gluster nodes to the /etc/hosts file.

[root@gluster01 ~]# vi /etc/hosts

10.10.10.34 gluster01
10.10.10.35 gluster02

 

3. Install glusterfs-server.

[root@gluster01 ~]# yum install glusterfs-server -y
[root@gluster02 ~]# yum install glusterfs-server -y

 

4. Enable and start the gluster service.

[root@gluster01 ~]# systemctl enable glusterd ; systemctl start glusterd
[root@gluster02 ~]# systemctl enable glusterd ; systemctl start glusterd

 

5. Configure the GlusterFS replicated volume

[root@gluster01 ~]# gluster peer probe gluster02
peer probe: success.
[root@gluster02 ~]# gluster peer probe gluster01
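
Before creating the volume, you can confirm that the two nodes see each other by checking the peer status on either node (output omitted here, it will vary):

[root@gluster01 ~]# gluster peer status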

[root@gluster01 ~]# mkdir /gluster-data
[root@gluster02 ~]# mkdir /gluster-data


[root@gluster01 ~]# gluster volume create volume01 replica 2 transport tcp gluster01:/gluster-data gluster02:/gluster-data force
volume create: volume01: success: please start the volume to access data
[root@gluster01 ~]#

[root@gluster01 ~]# gluster volume start volume01
volume start: volume01: success
[root@gluster01 ~]#

[root@gluster01 ~]# gluster volume info

Volume Name: volume01
Type: Replicate
Volume ID: d078cd71-c767-4af5-b96c-e91d37451894
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster01:/gluster-data
Brick2: gluster02:/gluster-data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@gluster01 ~]#

 

6. Configure /etc/fstab

[root@gluster01 ~]# mkdir /www-data
[root@gluster02 ~]# mkdir /www-data


[root@gluster01 ~]# vi /etc/fstab
[root@gluster02 ~]# vi /etc/fstab
gluster01:/volume01 /www-data glusterfs defaults,_netdev,x-systemd.automount 0 0
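
A note on this fstab entry: both nodes fetch the volume definition from gluster01, so if gluster01 is down at mount time the mount fails even though gluster02 still holds the data. Depending on the glusterfs version, a backup volfile server can be listed as a mount option; a sketch (the option name may differ between releases):

gluster01:/volume01 /www-data glusterfs defaults,_netdev,backup-volfile-servers=gluster02,x-systemd.automount 0 0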

 

7. Share test

[root@gluster01 ~]# mount -a
[root@gluster02 ~]# mount -a

[root@gluster02 ~]# cd /www-data/
[root@gluster02 www-data]# touch 0


[root@gluster01 ~]# ls -al /www-data/
total 0
drwxr-xr-x 3 root root 33 Jun 28 13:07 .
dr-xr-xr-x. 19 root root 262 Jun 28 13:05 ..
-rw-r--r-- 1 root root 0 Jun 28 13:07 0
[root@gluster01 ~]#

 

Kimchi is an HTML5 based management tool for KVM. It is designed to make it as easy as possible to get started with KVM and create your first guest.

kimchi-project site: https://github.com/kimchi-project

KVM install: http://dev.crois.net/2017/11/26/system-centos7-kvm-install/

kvm bridge: http://dev.crois.net/2019/05/30/centos-bridge-%EC%84%A4%EC%A0%95/

  • Install the packages and start the daemon
[root@kvm-server01 ~]# wget https://github.com/kimchi-project/kimchi/releases/download/2.5.0/wok-2.5.0-0.el7.centos.noarch.rpm
[root@kvm-server01 ~]# wget https://github.com/kimchi-project/kimchi/releases/download/2.5.0/kimchi-2.5.0-0.el7.centos.noarch.rpm
[root@kvm-server01 ~]# yum install wok-2.5.0-0.el7.centos.noarch.rpm
[root@kvm-server01 ~]# yum install -y kimchi-2.5.0-0.el7.centos.noarch.rpm
[root@kvm-server01 ~]# sed -i 's/^#session_timeout = .*/session_timeout = 1440/g' /etc/wok/wok.conf
[root@kvm-server01 ~]# systemctl enable wokd
[root@kvm-server01 ~]# systemctl start wokd

 

  • Confirm that port 8001 is open
[root@kvm-server01 ~]# netstat -antp |grep 8001
tcp        0      0 0.0.0.0:8001            0.0.0.0:*               LISTEN      28498/nginx: master
[root@kvm-server01 ~]#
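
If firewalld is running on the KVM host (not shown in this log), port 8001 also has to be opened before the web UI is reachable from outside, for example:

[root@kvm-server01 ~]# firewall-cmd --permanent --add-port=8001/tcp
[root@kvm-server01 ~]# firewall-cmd --reload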

 

  • Connect to the web UI (https://kvm-server01:8001)
  • Log in as the root user.

 

  • KVM management
  • Go to Virtualization.

 

CentOS bridge configuration

Bridge setup reference for use with KVM

 

  • Check the sysctl setting
[root@kvm-server01 ~]# sysctl -a |grep -i net.ipv4.ip_forward
net.ipv4.ip_forward = 1

If it is not set, edit the sysctl.conf file and apply the change.

[root@kvm-server01 ~]# vi /etc/sysctl.conf
[root@kvm-server01 ~]# sysctl -p
net.ipv4.ip_forward = 1
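
The line to add (or uncomment) in /etc/sysctl.conf is simply:

net.ipv4.ip_forward = 1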

 

  • Check whether bridge-utils is installed, and install it if necessary
[root@kvm-server01 ~]# rpm -aq |grep -i bridge-utils
bridge-utils-1.5-9.el7.x86_64

If it is not installed, install it:
[root@kvm-server01 ~]# yum install bridge-utils -y

 

  • Change the configuration
[root@kvm-server01 ~]# cd /etc/sysconfig/network-scripts/
[root@kvm-server01 network-scripts]# cp ifcfg-em1 ifcfg-br0
[root@kvm-server01 network-scripts]# vi ifcfg-em1
# Generated by dracut initrd
DEVICE=em1
ONBOOT=yes
BRIDGE=br0

[root@kvm-server01 network-scripts]# vi ifcfg-br0
# Generated by dracut initrd
TYPE=Bridge
DEVICE=br0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.10.10.10
NETMASK=255.255.255.0
GATEWAY=10.10.10.254

 

  • Restart the network service and verify
[root@kvm-server01 ~]# systemctl restart network
[root@kvm-server01 ~]# ifconfig
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.10.10  netmask 255.255.255.0  broadcast 10.10.10.255
        inet6 fe80::d6ae:52ff:fee7:acdd  prefixlen 64  scopeid 0x20<link>
        ether d4:ae:52:e7:ac:dd  txqueuelen 1000  (Ethernet)
        RX packets 924  bytes 54105 (52.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 85  bytes 9647 (9.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

em1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether d4:ae:52:e7:ac:dd  txqueuelen 1000  (Ethernet)
        RX packets 1071  bytes 81630 (79.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 88  bytes 10225 (9.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:a3:ab:16  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@kvm-server01 ~]#
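
bridge-utils also gives a quick way to confirm that em1 is actually enslaved to br0 (the interfaces column should list em1):

[root@kvm-server01 ~]# brctl show br0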

 

 

[CentOS7] Kerberos Test

This document is a test write-up in progress; please treat it simply as a reference.

Some steps do not work on other versions, so you must install exactly the same version.

Tested on RHEL 7.0 / CentOS 7.0.

If you use a different version, the test may not work properly.

The setup requires the systems instructor.example.com and system1.example.com.

As a prerequisite, instructor.example.com is configured as the DNS server.

Note: if you specify the wildcard *.example.com in /etc/exports, reverse DNS must be configured.

 

  • Work on instructor.example.com
  • Install Kerberos
[root@instructor ~]# yum install -y krb5-server krb5-workstation pam_krb5

 

  • Edit the /etc/krb5.conf file
  • If you are using a domain other than example.com, replace example.com with your own domain
  • and set the kdc / admin_server entries to your Kerberos server.
  • The /var/kerberos/krb5kdc/kadm5.acl file also needs to be edited
[root@instructor ~]# vi /etc/krb5.conf
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 default_realm = EXAMPLE.COM
 default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 EXAMPLE.COM = {
  kdc = instructor.example.com
  admin_server = instructor.example.com
 }

[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM

 

  • The kadm5.acl file
# If you use a separate domain, set this to that domain (e.g. test.com) instead of example.com.
[root@instructor ~]# vi /var/kerberos/krb5kdc/kadm5.acl
*/admin@EXAMPLE.COM     *

 

  • Register the KDC database master key with the Kerberos database maintenance utility.
[root@instructor ~]# kdb5_util create -s -r EXAMPLE.COM
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'EXAMPLE.COM',
master key name 'K/M@EXAMPLE.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:
Re-enter KDC database master key to verify:
[root@instructor ~]#

 

  • kdb5_util hangs when run
[root@instructor ~]# kdb5_util create -s -r EXAMPLE.COM
Loading random data


~ (output omitted)
It does not finish and will not move past this point.

Install the package:
[root@instructor ~]# yum install rng-tools
[root@instructor ~]# rngd -r /dev/urandom


After running rngd, kdb5_util completes normally on the next run.

[root@instructor ~]# kdb5_util create -s -r EXAMPLE.COM
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'EXAMPLE.COM',
master key name 'K/M@EXAMPLE.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:
Re-enter KDC database master key to verify:
[root@instructor ~]#

 

  • Start and enable the krb5kdc / kadmin daemons
[root@instructor ~]# systemctl start krb5kdc kadmin
[root@instructor ~]# systemctl enable krb5kdc kadmin
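
This test setup does not mention a firewall; if firewalld happens to be active on instructor.example.com, the standard Kerberos ports (88 for the KDC, 464 for kpasswd, 749 for kadmin) would need to be opened, for example:

[root@instructor ~]# firewall-cmd --permanent --add-port=88/tcp --add-port=88/udp --add-port=464/tcp --add-port=464/udp --add-port=749/tcp
[root@instructor ~]# firewall-cmd --reload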

 

  • Create the test user and set a password (use a password different from the Kerberos one).
[root@instructor ~]# useradd test
[root@instructor ~]# passwd test

 

  • Set the root/admin password
  • Create a regular user for the ssh test.
  • Register the test user's password.
[root@instructor ~]# kadmin.local
Authenticating as principal root/admin@EXAMPLE.COM with password.
kadmin.local:  addprinc root/admin
WARNING: no policy specified for root/admin@EXAMPLE.COM; defaulting to no policy
Enter password for principal "root/admin@EXAMPLE.COM":
Re-enter password for principal "root/admin@EXAMPLE.COM":
Principal "root/admin@EXAMPLE.COM" created.
kadmin.local:  addprinc test
WARNING: no policy specified for test@EXAMPLE.COM; defaulting to no policy
Enter password for principal "test@EXAMPLE.COM":
Re-enter password for principal "test@EXAMPLE.COM":
Principal "test@EXAMPLE.COM" created.
kadmin.local:  quit
[root@instructor ~]#

 

  • Create the keytab file.
[root@instructor krb5kdc]# kadmin.local
Authenticating as principal root/admin@EXAMPLE.COM with password.
kadmin.local:  addprinc -randkey host/instructor.example.com
WARNING: no policy specified for host/instructor.example.com@EXAMPLE.COM; defaulting to no policy
Principal "host/instructor.example.com@EXAMPLE.COM" created.
kadmin.local:  ktadd host/instructor.example.com
Entry for principal host/instructor.example.com with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/instructor.example.com with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/instructor.example.com with kvno 2, encryption type des3-cbc-sha1 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/instructor.example.com with kvno 2, encryption type arcfour-hmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/instructor.example.com with kvno 2, encryption type camellia256-cts-cmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/instructor.example.com with kvno 2, encryption type camellia128-cts-cmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/instructor.example.com with kvno 2, encryption type des-hmac-sha1 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/instructor.example.com with kvno 2, encryption type des-cbc-md5 added to keytab FILE:/etc/krb5.keytab.
kadmin.local:  quit
[root@instructor krb5kdc]#
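
To double-check what ended up in the keytab, klist can read it directly:

[root@instructor ~]# klist -k /etc/krb5.keytab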

 

  • Run a local test.
  • After authenticating with Kerberos at login, you can ssh in without a password.
[root@instructor ~]# su - test
[test@instructor ~]$ kinit
Password for test@EXAMPLE.COM:
[test@instructor ~]$ klist
Ticket cache: KEYRING:persistent:1001:1001
Default principal: test@EXAMPLE.COM

Valid starting       Expires              Service principal
02/16/2019 13:15:31  02/17/2019 13:15:31  krbtgt/EXAMPLE.COM@EXAMPLE.COM
        renew until 02/16/2019 13:15:31
[test@instructor ~]$ ssh instructor.example.com
The authenticity of host 'instructor.example.com (192.168.0.100)' can't be established.
ECDSA key fingerprint is 5b:4e:c4:0b:af:2b:70:50:84:5e:d8:ca:99:23:99:1f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'instructor.example.com,192.168.0.100' (ECDSA) to the list of known hosts.
Last login: Sat Feb 16 13:15:27 2019
[test@instructor ~]$

 

  • The test below is run on a separate system.
  • Run the ssh client test
[root@rhel70-temp ~]# useradd test
[root@rhel70-temp ~]# passwd test

[root@rhel70-temp ~]# yum install -y krb5-workstation pam_krb5
[root@rhel70-temp ~]# authconfig  --enablekrb5 --update
[root@rhel70-temp ~]# scp root@instructor.example.com:/etc/krb5.conf /etc/krb5.conf

 

  • Switch to the test user and run kinit to authenticate against Kerberos.
  • klist should show a valid ticket.
  • Test the ssh connection.
[root@rhel70-temp ~]# su - test
[test@rhel70-temp ~]$ kinit
Password for test@EXAMPLE.COM:
[test@rhel70-temp ~]$ klist
Ticket cache: KEYRING:persistent:1001:1001
Default principal: test@EXAMPLE.COM

Valid starting       Expires              Service principal
02/16/2019 13:20:21  02/17/2019 13:20:21  krbtgt/EXAMPLE.COM@EXAMPLE.COM
        renew until 02/16/2019 13:20:21
[test@rhel70-temp ~]$ ssh instructor.example.com
The authenticity of host 'instructor.example.com (192.168.0.100)' can't be established.
ECDSA key fingerprint is 5b:4e:c4:0b:af:2b:70:50:84:5e:d8:ca:99:23:99:1f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'instructor.example.com,192.168.0.100' (ECDSA) to the list of known hosts.
Last login: Sat Feb 16 13:15:50 2019 from 192.168.0.100
[test@instructor ~]$

 

  • NFS Kerberos configuration

 

  • If you specify a wildcard in /etc/exports, reverse DNS must be configured.
  • These are the settings used when serving NFS from the instructor.example.com system.
  • Command
# kadmin

addprinc -randkey host/instructor.example.com
addprinc -randkey host/system1.example.com
addprinc -randkey nfs/instructor.example.com
addprinc -randkey nfs/system1.example.com
addprinc -randkey nfs/instructor

ktadd nfs/instructor.example.com
ktadd host/instructor.example.com
ktadd nfs/instructor

ktadd -k /root/system1.keytab host/system1.example.com
ktadd -k /root/system1.keytab nfs/system1.example.com

 

  • Command log for reference
  • Because some of the settings are duplicated, you will see 'already exists' messages.
[root@instructor ~]# kadmin
Authenticating as principal root/admin@EXAMPLE.COM with password.
Password for root/admin@EXAMPLE.COM:
kadmin:  addprinc -randkey host/instructor.example.com
WARNING: no policy specified for host/instructor.example.com@EXAMPLE.COM; defaulting to no policy
add_principal: Principal or policy already exists while creating "host/instructor.example.com@EXAMPLE.COM".
kadmin:  kadmin: addprinc -randkey host/system1.example.com
kadmin: Unknown request "kadmin:".  Type "?" for a request list.
kadmin:  [root@instructor ~]#
[root@instructor ~]#
[root@instructor ~]#
[root@instructor ~]# kadmin
Authenticating as principal root/admin@EXAMPLE.COM with password.
Password for root/admin@EXAMPLE.COM:
kadmin:  addprinc -randkey host/instructor.example.com
WARNING: no policy specified for host/instructor.example.com@EXAMPLE.COM; defaulting to no policy
add_principal: Principal or policy already exists while creating "host/instructor.example.com@EXAMPLE.COM".
kadmin:  addprinc -randkey host/system1.example.com
WARNING: no policy specified for host/system1.example.com@EXAMPLE.COM; defaulting to no policy
Principal "host/system1.example.com@EXAMPLE.COM" created.
kadmin:  addprinc -randkey nfs/instructor.example.com
WARNING: no policy specified for nfs/instructor.example.com@EXAMPLE.COM; defaulting to no policy
Principal "nfs/instructor.example.com@EXAMPLE.COM" created.
kadmin:  addprinc -randkey nfs/system1.example.com
WARNING: no policy specified for nfs/system1.example.com@EXAMPLE.COM; defaulting to no policy
Principal "nfs/system1.example.com@EXAMPLE.COM" created.
kadmin:  addprinc -randkey nfs/instructor
WARNING: no policy specified for nfs/instructor@EXAMPLE.COM; defaulting to no policy
Principal "nfs/instructor@EXAMPLE.COM" created.
kadmin:  ktadd nfs/instructor.example.com
Entry for principal nfs/instructor.example.com with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/instructor.example.com with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/instructor.example.com with kvno 2, encryption type des3-cbc-sha1 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/instructor.example.com with kvno 2, encryption type arcfour-hmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/instructor.example.com with kvno 2, encryption type camellia256-cts-cmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/instructor.example.com with kvno 2, encryption type camellia128-cts-cmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/instructor.example.com with kvno 2, encryption type des-hmac-sha1 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/instructor.example.com with kvno 2, encryption type des-cbc-md5 added to keytab FILE:/etc/krb5.keytab.
kadmin:  ktadd host/instructor.example.com
Entry for principal host/instructor.example.com with kvno 3, encryption type aes256-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/instructor.example.com with kvno 3, encryption type aes128-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/instructor.example.com with kvno 3, encryption type des3-cbc-sha1 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/instructor.example.com with kvno 3, encryption type arcfour-hmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/instructor.example.com with kvno 3, encryption type camellia256-cts-cmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/instructor.example.com with kvno 3, encryption type camellia128-cts-cmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/instructor.example.com with kvno 3, encryption type des-hmac-sha1 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/instructor.example.com with kvno 3, encryption type des-cbc-md5 added to keytab FILE:/etc/krb5.keytab.
kadmin:  ktadd nfs/instructor
Entry for principal nfs/instructor with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/instructor with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/instructor with kvno 2, encryption type des3-cbc-sha1 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/instructor with kvno 2, encryption type arcfour-hmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/instructor with kvno 2, encryption type camellia256-cts-cmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/instructor with kvno 2, encryption type camellia128-cts-cmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/instructor with kvno 2, encryption type des-hmac-sha1 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/instructor with kvno 2, encryption type des-cbc-md5 added to keytab FILE:/etc/krb5.keytab.
kadmin:  ktadd -k /root/client.keytab host/system1.example.com
Entry for principal host/system1.example.com with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/root/client.keytab.
Entry for principal host/system1.example.com with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/root/client.keytab.
Entry for principal host/system1.example.com with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/root/client.keytab.
Entry for principal host/system1.example.com with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/root/client.keytab.
Entry for principal host/system1.example.com with kvno 2, encryption type camellia256-cts-cmac added to keytab WRFILE:/root/client.keytab.
Entry for principal host/system1.example.com with kvno 2, encryption type camellia128-cts-cmac added to keytab WRFILE:/root/client.keytab.
Entry for principal host/system1.example.com with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/root/client.keytab.
Entry for principal host/system1.example.com with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/root/client.keytab.
kadmin:  ktadd -k /root/client.keytab nfs/system1.example.com
Entry for principal nfs/system1.example.com with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/root/client.keytab.
Entry for principal nfs/system1.example.com with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/root/client.keytab.
Entry for principal nfs/system1.example.com with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/root/client.keytab.
Entry for principal nfs/system1.example.com with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/root/client.keytab.
Entry for principal nfs/system1.example.com with kvno 2, encryption type camellia256-cts-cmac added to keytab WRFILE:/root/client.keytab.
Entry for principal nfs/system1.example.com with kvno 2, encryption type camellia128-cts-cmac added to keytab WRFILE:/root/client.keytab.
Entry for principal nfs/system1.example.com with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/root/client.keytab.
Entry for principal nfs/system1.example.com with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/root/client.keytab.
kadmin:

 

  • Verify the configuration
kadmin:  listprincs
K/M@EXAMPLE.COM
host/instructor.example.com@EXAMPLE.COM
host/system1.example.com@EXAMPLE.COM
kadmin/admin@EXAMPLE.COM
kadmin/changepw@EXAMPLE.COM
kadmin/instructor.example.com@EXAMPLE.COM
krbtgt/EXAMPLE.COM@EXAMPLE.COM
nfs/instructor.example.com@EXAMPLE.COM
nfs/instructor@EXAMPLE.COM
nfs/system1.example.com@EXAMPLE.COM
root/admin@EXAMPLE.COM
test@EXAMPLE.COM
kadmin:  quit
[root@instructor ~]#

 

  • Create the NFS shared directories
  • In this test, exporting to the *.example.com wildcard range did not work (see the reverse DNS note above).
[root@instructor ~]# mkdir /{public,protected}
[root@instructor ~]# vi /etc/exports
/public         192.168.0.*(rw,sync)
/protected      192.168.0.*(rw,sec=krb5p)
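
Once the NFS server is running (it is started a few steps below), the export list can be re-applied and checked after any change to /etc/exports without a full restart:

[root@instructor ~]# exportfs -r
[root@instructor ~]# exportfs -v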

 

  • After setting up reverse DNS, you can also configure it as shown below.
You should be able to resolve the hostname from the IP with nslookup.
[root@instructor ~]# nslookup
> system1.example.com
Server:         192.168.0.100
Address:        192.168.0.100#53

Name:   system1.example.com
Address: 192.168.0.20
> 192.168.0.20
Server:         192.168.0.100
Address:        192.168.0.100#53

20.0.168.192.in-addr.arpa       name = system1.example.com.
>

[root@instructor ~]# cat /etc/exports
/public         *.example.com(rw,sync)
/protected      *.example.com(rw,sec=krb5p)
[root@instructor ~]#

 

  • Change the /etc/sysconfig/nfs setting
[root@instructor ~]# vi /etc/sysconfig/nfs

RPCNFSDARGS="-V 4.2"

 

  • Start and enable nfs-server and nfs-secure-server
[root@instructor ~]# authconfig  --enablekrb5 --update
[root@instructor ~]# systemctl start nfs-secure-server nfs-server
[root@instructor ~]# systemctl enable nfs-secure-server nfs-server
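
With RPCNFSDARGS="-V 4.2" in place and the server started, you can check which NFS versions the kernel server advertises (4.2 should be listed with a plus sign); this proc file should be available on CentOS 7:

[root@instructor ~]# cat /proc/fs/nfsd/versions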

 

  • Configure the NFS client.
  • Install the package needed to use nfs-secure.
[root@system1 ~]# yum install -y nfs-utils

 

  • Copy the krb5.conf / krb5.keytab files
  • Start the nfs-secure daemon
[root@system1 ~]# scp root@instructor.example.com:/etc/krb5.conf /etc/krb5.conf
[root@system1 ~]# scp root@instructor.example.com:/root/system1.keytab /etc/krb5.keytab

[root@system1 ~]# systemctl enable nfs-secure
[root@system1 ~]# systemctl start nfs-secure

 

  • Run a mount test.
[root@system1 ~]# mount -t nfs -o sec=krb5p instructor.example.com:/protected /mnt
[root@system1 ~]# df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/sda3                          9.8G  818M  9.0G   9% /
devtmpfs                           989M     0  989M   0% /dev
tmpfs                              994M     0  994M   0% /dev/shm
tmpfs                              994M  8.5M  986M   1% /run
tmpfs                              994M     0  994M   0% /sys/fs/cgroup
/dev/sda1                          997M   97M  901M  10% /boot
instructor.example.com:/protected  8.1G  3.3G  4.8G  41% /mnt
[root@system1 ~]#

 

  • NFS wildcard test after setting up reverse DNS
  • The 'No such file or directory' error below is from a temporary mount test done before reverse DNS was configured.
[root@system1 ~]# mount -t nfs -o sec=krb5p,vers=4.2 instructor.example.com:/protected /mnt
mount.nfs: mounting instructor.example.com:/protected failed, reason given by server: No such file or directory
[root@system1 ~]# mount -t nfs -o sec=krb5p,vers=4.2 instructor.example.com:/protected /mnt
[root@system1 ~]# df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/sda3                          9.0G  821M  8.2G   9% /
devtmpfs                           989M     0  989M   0% /dev
tmpfs                              994M     0  994M   0% /dev/shm
tmpfs                              994M  8.5M  986M   1% /run
tmpfs                              994M     0  994M   0% /sys/fs/cgroup
/dev/sda1                          509M   90M  419M  18% /boot
instructor.example.com:/protected  9.0G  4.5G  4.6G  50% /mnt
[root@system1 ~]#

 

  • If the mount does not work, register the keytab.
  • Install krb5-workstation / pam_krb5.
[root@system1 ~]# yum install -y krb5-workstation pam_krb5
[root@system1 ~]# kinit -k -t /etc/krb5.keytab nfs/system1.example.com
[root@system1 ~]# klist
Ticket cache: KEYRING:persistent:0:0
Default principal: nfs/system1.example.com@EXAMPLE.COM

Valid starting       Expires              Service principal
02/16/2019 14:10:25  02/17/2019 14:10:25  krbtgt/EXAMPLE.COM@EXAMPLE.COM
        renew until 02/16/2019 14:10:25
[root@system1 ~]#

 

  • Mount test
[root@system1 ~]# mount -t nfs -o sec=krb5p instructor.example.com:/protected /mnt

[root@system1 ~]# df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/sda3                          9.8G  841M  9.0G   9% /
devtmpfs                           989M     0  989M   0% /dev
tmpfs                              994M     0  994M   0% /dev/shm
tmpfs                              994M  8.5M  986M   1% /run
tmpfs                              994M     0  994M   0% /sys/fs/cgroup
/dev/sda1                          997M   97M  901M  10% /boot
instructor.example.com:/protected  8.1G  3.3G  4.8G  41% /mnt
[root@system1 ~]#

 

  • Edit /etc/fstab.
  • Mount both the plain directory and the Kerberos-authenticated directory.
[root@system1 ~]# mkdir /mnt/{public,protected}

[root@system1 ~]# vi /etc/fstab

instructor.example.com:/protected         /mnt/protected          nfs     defaults,sec=krb5p,vers=4.2    0 0
instructor.example.com:/public            /mnt/public             nfs     defaults      0 0

 

  • Mount and verify the mounts
[root@system1 ~]# mount -a
[root@system1 ~]# df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/sda3                          9.0G  821M  8.2G   9% /
devtmpfs                           989M     0  989M   0% /dev
tmpfs                              994M     0  994M   0% /dev/shm
tmpfs                              994M  8.5M  986M   1% /run
tmpfs                              994M     0  994M   0% /sys/fs/cgroup
/dev/sda1                          509M   90M  419M  18% /boot
instructor.example.com:/protected  9.0G  4.5G  4.6G  50% /mnt/protected
instructor.example.com:/public     9.0G  4.5G  4.6G  50% /mnt/public
[root@system1 ~]#

 

  • Reboot the system and verify
[root@system1 ~]# init 6


# Verify after the system reboots
login as: root
root@192.168.0.20's password:
Last login: Wed Feb 27 00:21:13 2019 from 192.168.0.1
[root@system1 ~]# df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/sda3                          9.0G  820M  8.2G   9% /
devtmpfs                           989M     0  989M   0% /dev
tmpfs                              994M     0  994M   0% /dev/shm
tmpfs                              994M  8.6M  986M   1% /run
tmpfs                              994M     0  994M   0% /sys/fs/cgroup
/dev/sda1                          509M   90M  419M  18% /boot
instructor.example.com:/protected  9.0G  4.5G  4.6G  50% /mnt/protected
instructor.example.com:/public     9.0G  4.5G  4.6G  50% /mnt/public
[root@system1 ~]#

 

 

ReaR (Relax-and-Recover) can be used to back up the OS and then restore the system via PXE boot.

rear site : http://relax-and-recover.org/

rear reference page : https://github.com/rear/rear

Note on rear: it is installed only on the system being backed up; recovery is done via PXE boot.

This document only outlines the PXE boot side; I will cover PXE boot in detail later.

 

 

  • Install the rear package

  • Install the epel-release package, then install rear.
[root@centos66 ~]# yum install -y epel-release
[root@centos66 ~]# yum install -y rear

 

  • Configure rear's local.conf file.
  • It generates a PXE image; set the NFS address where the image will be written.
  • For the NFS address it is best to reuse your existing pxe-boot machine.
  • Choose the directories to exclude with BACKUP_PROG_EXCLUDE.
[root@centos66 ~]# vi /etc/rear/local.conf
# Default is to create Relax-and-Recover rescue media as ISO image
# set OUTPUT to change that
# set BACKUP to activate an automated (backup and) restore of your data
# Possible configuration values can be found in /usr/share/rear/conf/default.conf
#
# This file (local.conf) is intended for manual configuration. For configuration
# through packages and other automated means we recommend creating a new
# file named site.conf next to this file and to leave the local.conf as it is.
# Our packages will never ship with a site.conf.


OUTPUT=PXE
OUTPUT_URL="nfs://192.168.100.10/data/restore"
BACKUP=NETFS
BACKUP_URL="nfs://192.168.100.10/data/restore"
BACKUP_PROG_EXCLUDE=('/backup/*' '/proc/*' '/dev/*' '/lost+found/*' '/mnt/*' '/media/*' '/sys/*')
[root@centos66 ~]#
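
Before running the backup you can have rear print how it interprets this configuration (a read-only check; the dump workflow ships with the rear package):

[root@centos66 ~]# rear dump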

 

  • Generate the root ssh key that rear will use.
[root@centos66 ~]# cd /root/.ssh/
[root@centos66 .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
6c:4f:d0:52:18:53:b8:bf:9c:ad:f5:d4:78:4d:31:d7 root@centos66
The key's randomart image is:
+--[ RSA 2048]----+
|        o=o      |
|        o+      .|
|        o..    oE|
|       ..o      +|
|        S..     .|
|       . o.    +.|
|         ..+. o +|
|          +..o . |
|          ..  .  |
+-----------------+
[root@centos66 .ssh]#
[root@centos66 .ssh]# cat id_rsa.pub >> authorized_keys
[root@centos66 .ssh]# chmod 600 authorized_keys
[root@centos66 .ssh]# ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 05:3d:74:2b:f9:49:5b:d3:91:49:3f:ec:e0:d9:9e:d7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Last login: Wed Apr  3 17:10:36 2019 from 192.168.0.1
[root@centos66 ~]#

 

  • Run the rear backup.
[root@centos66 ~]# rear -v mkbackup
Relax-and-Recover 1.17.2 / Git
Using log file: /var/log/rear/rear-centos66.log
Creating disk layout
Creating root filesystem layout
Copying files and directories
Copying binaries and libraries
Copying kernel modules
Creating initramfs
Copied kernel+initrd (61M) to /var/lib/rear/output
Created pxelinux config 'rear-centos66' and symlinks for MAC adresses in /var/lib/rear/output
Copying resulting files to nfs location
Encrypting disabled
Creating tar archive '/tmp/rear.TTOIfb0jOk1DyHJ/outputfs/centos66/backup.tar.gz'
Archived 1400 MiB [avg 6797 KiB/sec]OK
Archived 1400 MiB in 212 seconds [avg 6765 KiB/sec]
[root@centos66 ~]#

 

  • Verify the backup
[root@rhel70 ~]# chmod -R 777 /data
[root@rhel70 ~]# cd /data/restore/
[root@rhel70 restore]# ll
total 0
drwxrwxrwx 2 nfsnobody nfsnobody 185 Apr  4 01:15 centos66

[root@rhel70 restore]# cd centos66/
[root@rhel70 centos66]# ll
total 1503904
-rwxrwxrwx 1 nfsnobody nfsnobody    8106798 Apr  4 01:15 backup.log
-rwxrwxrwx 1 nfsnobody nfsnobody 1468774648 Apr  4 01:15 backup.tar.gz
-rwxrwxrwx 1 nfsnobody nfsnobody   58774023 Apr  4 01:12 centos66.initrd.cgz
-rwxrwxrwx 1 nfsnobody nfsnobody    4152336 Apr  4 01:12 centos66.kernel
-rwxrwxrwx 1 nfsnobody nfsnobody        268 Apr  4 01:12 centos66.message
-rwxrwxrwx 1 nfsnobody nfsnobody        516 Apr  4 01:12 README
-rwxrwxrwx 1 nfsnobody nfsnobody        468 Apr  4 01:12 rear-centos66
-rwxrwxrwx 1 nfsnobody nfsnobody     162404 Apr  4 01:12 rear.log
-rwxrwxrwx 1 nfsnobody nfsnobody        268 Apr  4 01:12 VERSION
[root@rhel70 centos66]#

 

  • Configure PXE boot
  • The pxeboot server menu banner was generated with toilet. 🙂
  • Set menu entry 2 to CentOS 6.6 Restore.
[root@rhel70 ~]# cd /data/tftpboot/linux-install/
* RHEL & CentOS pxe-install
 ______  _______ _                 _       ____
|  _ \ \/ / ____| |__   ___   ___ | |_    / ___|  ___ _ ____   _____ _ __
| |_) \  /|  _| | '_ \ / _ \ / _ \| __|___\___ \ / _ \ '__\ \ / / _ \ '__|
|  __//  \| |___| |_) | (_) | (_) | ||_____|__) |  __/ |   \ V /  __/ |
|_|  /_/\_\_____|_.__/ \___/ \___/ \__|   |____/ \___|_|    \_/ \___|_|

Default number 1 RHEL 7.6 install

1. RHEL 7.6-64bit install
2. CentOS 6.6 Restore

 

  • The contents for the default file can be taken from the rear-$hostname file in the backup directory.
[root@rhel70 centos66]# cat rear-centos66
default hd
prompt 1
timeout 300

label hd
localboot -1
say ENTER - boot local hard disk
say --------------------------------------------------------------------------------
display /centos66.message
say ----------------------------------------------------------
say rear = disaster recover this system with Relax and Recover
label rear
        kernel /centos66.kernel
        append initrd=/centos66.initrd.cgz root=/dev/ram0 vga=normal rw selinux=0 console=ttyS0,9600 console=tty0
[root@rhel70 centos66]#

 

  • Edit the default file
[root@rhel70 linux-install]# cd pxelinux.cfg/
[root@rhel70 pxelinux.cfg]# pwd
/data/tftpboot/linux-install/pxelinux.cfg
[root@rhel70 pxelinux.cfg]# vi default
~ (snip)
LABEL 2
    kernel centos66/centos66.kernel
        append initrd=centos66/centos66.initrd.cgz root=/dev/ram0 vga=normal rw selinux=0 console=ttyS0,9600 console=tty0

 

  • Copy the files into the kernel image directory.
[root@rhel70 linux-install]# pwd
/data/tftpboot/linux-install
[root@rhel70 linux-install]# cp -arp /data/restore/centos66 .

 

  • Change the directory permissions and restart the PXE boot related services.
[root@rhel70 ~]# chmod -R 777 /data
[root@rhel70 ~]# systemctl restart nfs
[root@rhel70 ~]# systemctl restart xinetd
[root@rhel70 ~]# systemctl restart dhcpd

 

  • Run the PXE boot test.
  • Press 2 to restore CentOS 6.6.

 

  • Enter 1.

 

  • Type rear recover to reinstall from the backed-up OS.

 

  • rear recover running

 

  • Recovery complete
  • Reboot the system.

 

  • Log in after the work is complete.

 

CentOS 7 Version Test

 

  • Install the packages
[root@www1 ~]# yum install -y epel-release
[root@www1 ~]# yum clean all && yum list

[root@www1 ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@www1 ~]# yum install -y syslinux syslinux-extlinux rear

 

  • Set up ssh-keygen
[root@www1 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:tnLcXylU9p0fDuBdVwEUQrRFoSYnavyH/S6hVldcmiQ root@www1
The key's randomart image is:
+---[RSA 2048]----+
|          o+oB+.o|
|            E . o|
|         o * o++o|
|      . . * oo+++|
|       +S  ..o.oo|
|      .o.oo+ .o.o|
|      . +o+o+ o..|
|       o o.o.o   |
|        .   +o   |
+----[SHA256]-----+
[root@www1 ~]# cd .ssh/
[root@www1 .ssh]# cat id_rsa.pub >> authorized_keys
[root@www1 .ssh]# chmod 600 authorized_keys

[root@www1 .ssh]# ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
ECDSA key fingerprint is SHA256:fW1qDoTOUqdHZ9TlQsUdWQKdKSEhStuYd6Zd9jGABYQ.
ECDSA key fingerprint is MD5:3e:35:b5:89:a0:d6:57:82:e8:3d:48:aa:34:e5:82:de.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Last login: Fri Apr  5 18:30:46 2019 from 192.168.0.1
[root@www1 ~]# logout
Connection to localhost closed.
[root@www1 .ssh]#

 

  • Before running the backup, temporarily mount the pxe-boot system's NFS export on the system being backed up to make sure it works.
  • Run the rear backup
[root@www1 ~]# mount -t nfs 192.168.100.10:/data/restore /mnt
[root@www1 ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/sda3                      17G  6.7G   11G  40% /
devtmpfs                      895M     0  895M   0% /dev
tmpfs                         910M     0  910M   0% /dev/shm
tmpfs                         910M   10M  900M   2% /run
tmpfs                         910M     0  910M   0% /sys/fs/cgroup
/dev/sda1                    1014M  194M  821M  20% /boot
tmpfs                         182M     0  182M   0% /run/user/0
192.168.100.10:/data/restore   20G  7.3G   13G  37% /mnt
[root@www1 ~]# umount /mnt/


[root@www1 ~]# rear -v mkbackup
Relax-and-Recover 2.4 / Git
Using log file: /var/log/rear/rear-www1.log
Using backup archive '/tmp/rear.j7Kop3mGgWOjFRG/outputfs/www1/backup.tar.gz'
Creating disk layout
Using guessed bootloader 'GRUB' (found in first bytes on /dev/sda)
Creating root filesystem layout
Skipping 'virbr0': not bound to any physical interface.
Copying logfile /var/log/rear/rear-www1.log into initramfs as '/tmp/rear-www1-partial-2019-04-05T18:37:05+0900.log'
Copying files and directories
Copying binaries and libraries
Copying kernel modules
Copying all files in /lib*/firmware/
Creating recovery/rescue system initramfs/initrd initrd.cgz with gzip default compression
Created initrd.cgz with gzip default compression (199441250 bytes) in 22 seconds
Copied kernel+initrd 197M to /var/lib/rear/output
Created pxelinux config 'rear-www1' and symlinks for MAC adresses in /var/lib/rear/output
Copying resulting files to nfs location
Saving /var/log/rear/rear-www1.log as rear-www1.log to nfs location
Creating tar archive '/tmp/rear.j7Kop3mGgWOjFRG/outputfs/www1/backup.tar.gz'
Archived 2570 MiB [avg 7455 KiB/sec] OK
Archived 2570 MiB in 354 seconds [avg 7434 KiB/sec]
Exiting rear mkbackup (PID 10986) and its descendant processes
Running exit tasks
[root@www1 ~]#

 

  • Configure the pxe-boot system
  • Move to /data/restore/www1 and check the rear-www1 file.
[root@rhel70 restore]# cd
[root@rhel70 ~]# cd /data/restore/www1/
[root@rhel70 www1]# cat rear-www1
    default hd
prompt 1
timeout 300

label hd
localboot -1
say ENTER - boot local hard disk
say --------------------------------------------------------------------------------
    display /www1.message
    say ----------------------------------------------------------
    say rear = disaster recover this system with Relax-and-Recover
    label rear
        kernel /www1.kernel
        append initrd=/www1.initrd.cgz root=/dev/ram0 vga=normal rw selinux=0 console=ttyS0,9600 console=tty0
[root@rhel70 www1]#

 

  • Edit the pxe-boot configuration files.
  • Edit the boot.msg file and the pxelinux.cfg/default file
[root@rhel70 ~]# cd /data/tftpboot/linux-install/
[root@rhel70 linux-install]# vi boot.msg
Default number 1 RHEL 7.6 install

1. RHEL 7.6-64bit install
2. CentOS 6.6 Restore
3. CentOS www1 Restore



[root@rhel70 linux-install]# cd pxelinux.cfg/
[root@rhel70 pxelinux.cfg]# vi default

LABEL 3
    kernel www1/www1.kernel
        append initrd=www1/www1.initrd.cgz root=/dev/ram0 vga=normal rw selinux=0 console=ttyS0,9600 console=tty0

 

  • Configure PXE boot
  • Copy the backed-up system files
  • Fix the directory permissions
[root@rhel70 ~]# cd /data/tftpboot/linux-install/
[root@rhel70 linux-install]# cp -r /data/restore/www1 .
[root@rhel70 linux-install]# chmod -R 777 /data

 

  • Recovery test
[root@rhel70 ~]# systemctl restart dhcpd
[root@rhel70 ~]# systemctl restart nfs-server
[root@rhel70 ~]# systemctl restart xinetd

 

  • Select 3, CentOS www1 Restore.

 

  • Configure the network interface information.
  • This maps the backed-up machine's network interfaces to the new machine; it can also be done after the restore finishes.

 

  • Network interface configuration

 

  • Proceed with the restore.

 

  • No extra configuration is needed; the restore runs automatically. 🙂

 

  • Reboot with init 6 and check the system.

 

CentOS6 tar backup and Restore

Warning!!! Use this as a simple reference only; for OS backups, use a true image backup.

 

  • System being fully backed up
  • After switching to runlevel 1, the commands are run over ssh, so bring up eth0 and start the sshd daemon.
  • The backup directory is /backup; the restore system will later mount it over NFS.
[root@centos66 ~]# init 1
[root@centos66 ~]# ifup eth0
[root@centos66 ~]# /etc/init.d/sshd start
[root@centos66 ~]# cd /backup/
[root@centos66 backup]# tar cvpzf osbackup.tgz --exclude=/backup --exclude=/proc  --exclude=/dev --exclude=/lost+found --exclude=/mnt --exclude=/media --exclude=/sys  /
[root@centos66 ~]# init 6

 

  • System being restored
  • CentOS 6 was installed with the same partition layout, as a minimal install.
  • Switch to runlevel 1, bring up eth0, and start the sshd daemon.
  • Copy the fstab and grub.conf files aside first.
[root@test-machine /]# init 1
[root@test-machine /]# ifup eth0
[root@test-machine /]# /etc/init.d/sshd start
[root@test-machine /]# /etc/init.d/rpcbind start
[root@test-machine /]# mount -t nfs 192.168.0.210:/backup /mnt

[root@test-machine /]# cp /etc/fstab /root/fstab
[root@test-machine /]# cp /boot/grub/grub.conf .
[root@test-machine / ]# cd /mnt
[root@test-machine mnt]# tar xvpzf osbackup.tgz -C /
[root@test-machine /]# cp /root/fstab /etc/fstab
[root@test-machine /]# cp grub.conf /boot/grub/grub.conf
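Because the restored tree comes from another machine, it is prudent to confirm before the reboot that the devices and UUIDs referenced by the restored /etc/fstab and /boot/grub/grub.conf still match the local disks; a minimal check:

blkid                                        # UUIDs of the local filesystems
grep -v '^#' /etc/fstab                      # devices/UUIDs the restored fstab expects
grep -E 'root=|UUID=' /boot/grub/grub.conf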

 

 

This document was written assuming a full OS backup is taken on CentOS 7 and then restored onto a minimally installed system.

The backup also works in runlevel 3 or 5, but it can cause problems if the system is busy, so the work is done in runlevel 1.

Please treat this document as a simple reference; a true image backup is recommended for OS backup and recovery.

Changing the runlevel and running the OS backup

The --exclude options specify the directories that are left out of the tar archive.

Work on the system being fully backed up (a simple date-stamped wrapper-script sketch follows the commands below)

[root@www1 ~]# init 1
[root@www1 ~]# tar cvpzf osbackup.tgz --exclude=/backup --exclude=/proc  --exclude=/dev --exclude=/lost+found --exclude=/mnt --exclude=/media --exclude=/sys \
  --exclude=/tmp --exclude=/var/run/* --exclude=/var/tmp/*  /
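The same tar command can be wrapped in a small script so every run produces a date-stamped archive; a minimal sketch under the assumption that the archive is written to /backup (the script name and destination are examples, not part of the original procedure):

#!/bin/bash
# /root/os-backup.sh - hypothetical wrapper around the tar command above
set -e
DEST=/backup
FILE="$DEST/osbackup-$(date +%Y%m%d).tgz"
mkdir -p "$DEST"
tar cvpzf "$FILE" \
  --exclude=/backup --exclude=/proc --exclude=/dev --exclude=/lost+found \
  --exclude=/mnt --exclude=/media --exclude=/sys \
  --exclude=/tmp --exclude='/var/run/*' --exclude='/var/tmp/*' /
echo "backup written to $FILE"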

 

Work on the system receiving the restore

  • A VM was created with a minimal install and the same partition layout to run this test.
  • Caution! Back up /etc/fstab beforehand.
  • Caution! Running grub2-mkconfig is required (see the boot-loader note after the restore output below).
  • The login and NFS-export steps are omitted.

 

[root@centos7 ~]# ifup ens33
[root@centos7 ~]# systemctl start sshd
[root@centos7 ~]# cp /etc/fstab /root/

Install nfs-utils if it is not already installed.
Alternatively, you can fetch the osbackup.tgz file with scp.
[root@centos7 ~]# yum install nfs-utils -y

[root@centos7 ~]# mount -t nfs 192.168.0.208:/backup /mnt

Move to the directory containing the backup file and run the OS restore.
[root@centos7 os]# tar xvpzf osbackup.tgz -C /
[root@centos7 os]# cp /root/fstab /etc/fstab
[root@centos7 os]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-957.10.1.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-957.10.1.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-693.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-693.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-53c15c9a5ef0495580c75bb6ef2d9b56
Found initrd image: /boot/initramfs-0-rescue-53c15c9a5ef0495580c75bb6ef2d9b56.img
Found linux image: /boot/vmlinuz-0-rescue-0bc505cd861e44f2a4e91c3a592208ec
Found initrd image: /boot/initramfs-0-rescue-0bc505cd861e44f2a4e91c3a592208ec.img
done
[root@centos7 os]#
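Note that grub2-mkconfig only regenerates grub.cfg; if the target disk has never had a boot loader written to its MBR (for example a completely blank disk), GRUB2 itself also has to be installed. A hedged sketch, assuming a BIOS system whose boot disk is /dev/sda (adjust to your layout):

grub2-install /dev/sda    # only needed when no boot loader is present in the MBR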

Reboot the system.
[root@centos7 os]# init 6

 

After the system reboots, check /var/log/messages (a journal-based check follows the commands below).

[root@www1 log]# cat messages  |grep -i error
[root@www1 log]# cat messages  |grep -i fail
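On CentOS 7 the journal can be queried directly as well; a minimal sketch that shows only error-priority messages from the current boot plus any failed units:

journalctl -p err -b --no-pager
systemctl --failed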

 

If the system runs any other services, check those services as well.

 

 

[CentOS7] tomcat 8 Binary install

A guide to installing Tomcat 8 from the binary distribution.

The Tomcat binary archives can be downloaded from https://archive.apache.org/dist/tomcat/.

 

 

  • Install the packages that will be needed later for Apache httpd and Tomcat integration.
[root@CentOS7 ~]# yum install -y httpd httpd-devel java-1.7.0-openjdk-devel

 

  • Create the tomcat group and user, download the Tomcat 8 binary, and set the ownership and permissions (an ownership check follows the commands below).
[root@CentOS7 ~]# mkdir -p /opt/tomcat
[root@CentOS7 ~]# groupadd tomcat
[root@CentOS7 ~]# useradd -M -s /bin/nologin -g tomcat -d /opt/tomcat tomcat
[root@CentOS7 ~]# wget https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.38/bin/apache-tomcat-8.5.38.tar.gz
[root@CentOS7 ~]# tar xvf apache-tomcat-8.5.38.tar.gz -C /opt/tomcat --strip-components=1
[root@CentOS7 ~]# cd /opt/tomcat
[root@CentOS7 tomcat]# chgrp -R tomcat /opt/tomcat
[root@CentOS7 tomcat]# chmod -R g+r conf
[root@CentOS7 tomcat]# chmod g+x conf
[root@CentOS7 tomcat]# chown -R tomcat webapps/ work/ temp/ logs/
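A quick way to confirm that the ownership and group permissions took effect (paths and names as used above):

ls -ld /opt/tomcat/conf /opt/tomcat/webapps /opt/tomcat/logs /opt/tomcat/temp /opt/tomcat/work
id tomcat    # verify the tomcat user and its primary group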

 

  • Create the systemd unit file.
[root@CentOS7 tomcat]# vi /etc/systemd/system/tomcat.service
# Systemd unit file for tomcat
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target

[Service]
Type=forking

Environment=JAVA_HOME=/usr/lib/jvm/jre
Environment=CATALINA_PID=/opt/tomcat/temp/tomcat.pid
Environment=CATALINA_HOME=/opt/tomcat
Environment=CATALINA_BASE=/opt/tomcat
Environment='CATALINA_OPTS=-Xms512M -Xmx1024M -server -XX:+UseParallelGC'
Environment='JAVA_OPTS=-Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom'

ExecStart=/opt/tomcat/bin/startup.sh
ExecStop=/bin/kill -15 $MAINPID

User=tomcat
Group=tomcat
UMask=0007
RestartSec=10
Restart=always

[Install]
WantedBy=multi-user.target

 

  • Run daemon-reload and start the tomcat daemon.
[root@CentOS7 tomcat]# systemctl daemon-reload
[root@CentOS7 tomcat]# systemctl enable tomcat
[root@CentOS7 tomcat]# systemctl start tomcat

[root@CentOS7 tomcat]# systemctl status tomcat
● tomcat.service - Apache Tomcat Web Application Container
Loaded: loaded (/etc/systemd/system/tomcat.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2019-02-18 07:05:59 KST; 15s ago
Process: 1879 ExecStart=/opt/tomcat/bin/startup.sh (code=exited, status=0/SUCCESS)
Main PID: 1887 (catalina.sh)
CGroup: /system.slice/tomcat.service
├─1887 /bin/sh /opt/tomcat/bin/catalina.sh start
└─1888 java -Djava.util.logging.config.file=/opt/tomcat/conf/logging.properties -Djava.util.logging.ma...

Feb 18 07:05:59 CentOS7 systemd[1]: Starting Apache Tomcat Web Application Container...
Feb 18 07:05:59 CentOS7 systemd[1]: Started Apache Tomcat Web Application Container.
[root@CentOS7 tomcat]#

 

  • Check the web site (a quick curl check follows the firewall commands below).
  • http://192.168.0.10:8080/
[root@CentOS7 ~]# firewall-cmd --permanent --add-port=8080/tcp
[root@CentOS7 ~]# firewall-cmd --reload
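Once the port is open and Tomcat is running, the default page can be checked from the command line; a minimal sketch using the IP from this document:

curl -I http://192.168.0.10:8080/    # expect an HTTP/1.1 200 response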

 

 

  • Configure the tomcat-users.xml file.
  • Set the admin user and password (an access check follows the snippet below).
[root@CentOS7 tomcat]# vi /opt/tomcat/conf/tomcat-users.xml
# add the following line
<user username="admin" password="password" roles="manager-gui,admin-gui"/>
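After restarting Tomcat (the restart command is shown after the next step), the new credentials can be verified without a browser. Run this on the Tomcat host itself, because the manager application only allows localhost by default (the RemoteAddrValve change is covered next); a minimal sketch:

curl -s -o /dev/null -w '%{http_code}\n' -u admin:password http://localhost:8080/manager/html
# 200 = login OK, 401 = wrong credentials, 403 = blocked by the RemoteAddrValve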

 

  • Allow access only from a specific IP address or range.
[root@CentOS7 ~]# vi /opt/tomcat/webapps/manager/META-INF/context.xml
  <Valve className="org.apache.catalina.valves.RemoteAddrValve"
         allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1|192.168.0.1" />

[root@CentOS7 ~]# systemctl restart tomcat

 

  • Check access at http://192.168.0.10:8080/manager/html
  • Log in with admin / password.

 

[CentOS7] Tomcat 9 install

tomcat download : https://archive.apache.org/dist/tomcat/

 

  • Install java-1.8 before installing Tomcat.
  • Create the tomcat user.
[root@CentOS7 ~]# yum install -y java-1.8.0-openjdk-devel
[root@CentOS7 ~]# useradd -m -U -d /opt/tomcat -s /bin/false tomcat

 

  • Download apache-tomcat-9.0.14.
  • Extract it and move it under /opt/tomcat.
  • Create the tomcat.service file.
  • Run daemon-reload and start Tomcat (a version check follows the commands below).
[root@CentOS7 ~]# cd /tmp
[root@CentOS7 tmp]# wget https://archive.apache.org/dist/tomcat/tomcat-9/v9.0.14/bin/apache-tomcat-9.0.14.tar.gz
[root@CentOS7 tmp]# tar -xf apache-tomcat-9.0.14.tar.gz
[root@CentOS7 tmp]# mv apache-tomcat-9.0.14 /opt/tomcat/
[root@CentOS7 tmp]# ln -s /opt/tomcat/apache-tomcat-9.0.14 /opt/tomcat/latest
[root@CentOS7 tmp]# chown -R tomcat: /opt/tomcat
[root@CentOS7 tmp]# sh -c 'chmod +x /opt/tomcat/latest/bin/*.sh'
[root@CentOS7 tmp]# vi /etc/systemd/system/tomcat.service
[Unit]
Description=Tomcat 9 servlet container
After=network.target

[Service]
Type=forking

User=tomcat
Group=tomcat

Environment="JAVA_HOME=/usr/lib/jvm/jre"
Environment="JAVA_OPTS=-Djava.security.egd=file:///dev/urandom"

Environment="CATALINA_BASE=/opt/tomcat/latest"
Environment="CATALINA_HOME=/opt/tomcat/latest"
Environment="CATALINA_PID=/opt/tomcat/latest/temp/tomcat.pid"
Environment="CATALINA_OPTS=-Xms512M -Xmx1024M -server -XX:+UseParallelGC"

ExecStart=/opt/tomcat/latest/bin/startup.sh
ExecStop=/opt/tomcat/latest/bin/shutdown.sh

[Install]
WantedBy=multi-user.target

[root@CentOS7 tmp]# systemctl daemon-reload
[root@CentOS7 tmp]# systemctl enable tomcat
[root@CentOS7 tmp]# systemctl start tomcat
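A quick check that the service is up and which Tomcat/JVM versions it is running (version.sh ships with the binary distribution):

systemctl status tomcat --no-pager
/opt/tomcat/latest/bin/version.sh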

 

  • If a firewall is in use, open port 8080.
[root@CentOS7 ~]# firewall-cmd --zone=public --permanent --add-port=8080/tcp
[root@CentOS7 ~]# firewall-cmd --reload

 

  • Try connecting to Tomcat.

 

  • Configure the Tomcat web management interface.
  • The web UI is reachable on port 8080, but no user has been created yet, so the management interface cannot be accessed.
  • Tomcat users are defined in the tomcat-users.xml file.
  • Edit tomcat-users.xml.
[root@CentOS7 ~]# vi /opt/tomcat/latest/conf/tomcat-users.xml
<!--
  <role rolename="tomcat"/>
  <role rolename="role1"/>
  <user username="tomcat" password="<must-be-changed>" roles="tomcat"/>
  <user username="both" password="<must-be-changed>" roles="tomcat,role1"/>
  <user username="role1" password="<must-be-changed>" roles="role1"/>
-->
   <role rolename="admin-gui"/>
   <role rolename="manager-gui"/>
   <user username="admin" password="password1" roles="admin-gui,manager-gui"/>
</tomcat-users>

 

  • By default, the Tomcat web management interface is only accessible from localhost.
  • Because of the security risk, accessing the web interface from a remote IP (not generally recommended) requires modifying the context.xml file.
  • To make it reachable from anywhere, comment out the lines shown below.

 

Allowing access from any IP

[root@CentOS7 ~]# vi /opt/tomcat/latest/webapps/manager/META-INF/context.xml
<Context antiResourceLocking="false" privileged="true" >
<!--  <Valve className="org.apache.catalina.valves.RemoteAddrValve"
         allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />
  <Manager sessionAttributeValueClassNameFilter="java\.lang\.(?:Boolean|Integer|Long|Number|String)|org\.apache\.catalina\.filters\.CsrfPreventionFilter\$LruCache(?:\$1)?|java\.util\.(?:Linked)?HashMap"/>
-->
</Context>


[root@CentOS7 ~]# vi /opt/tomcat/latest/webapps/host-manager/META-INF/context.xml
<Context antiResourceLocking="false" privileged="true" >
<!--  <Valve className="org.apache.catalina.valves.RemoteAddrValve"
         allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />
  <Manager sessionAttributeValueClassNameFilter="java\.lang\.(?:Boolean|Integer|Long|Number|String)|org\.apache\.catalina\.filters\.CsrfPreventionFilter\$LruCache(?:\$1)?|java\.util\.(?:Linked)?HashMap"/>
-->
</Context>

 

  • Restart tomcat after the configuration change.
[root@CentOS7 ~]# systemctl restart tomcat

 

Allowing access only from the specific IP 192.168.0.1

[root@CentOS7 ~]# vi /opt/tomcat/latest/webapps/manager/META-INF/context.xml
<Context antiResourceLocking="false" privileged="true" >
  <Valve className="org.apache.catalina.valves.RemoteAddrValve"
         allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1|192.168.0.1" />
</Context>

[root@CentOS7 ~]# vi /opt/tomcat/latest/webapps/host-manager/META-INF/context.xml
<Context antiResourceLocking="false" privileged="true" >
  <Valve className="org.apache.catalina.valves.RemoteAddrValve"
         allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1|192.168.0.1" />
</Context>

 

  • Restart tomcat after the configuration change.
[root@CentOS7 ~]# systemctl restart tomcat

 

 

  • From the manager interface you can deploy, undeploy, start, stop, and reload applications.