Sign up on the GitHub site.

https://github.com

After signing up, verify your email address.

// Document in progress! 2018-03-31

 

Click New repository at the top right to create a repository.

 

In the Owner field, bksanjuk is the GitHub ID, and the Repository name here is set to test.

(Set it to match your project name.)

By default, a new repository is created as Public. When using a cloud service such as AWS, make sure that passwords for the systems and databases you operate are never committed.

Once the repository name is set, click Create repository to finish.
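One way to honor the warning above in a public repository is to ignore credential-bearing files before the first commit. A minimal sketch; the file names below are hypothetical examples, not part of this walkthrough.

```shell
# Create a .gitignore so credential files never reach the public repo.
# (.env, config/database.yml and *.pem are example names; adjust to your project.)
cat > .gitignore <<'EOF'
.env
config/database.yml
*.pem
EOF
cat .gitignore
```

Files already committed are not removed by .gitignore; it only prevents new additions.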

 

 

Setup is complete.

 

Create a directory to commit to GitHub.

test@ubuntu-test:~$ mkdir test
test@ubuntu-test:~$ cd test/

 

…or create a new repository on the command line

test@ubuntu-test:~/test$ echo "# test" >> README.md
test@ubuntu-test:~/test$ git init
Initialized empty Git repository in /home/test/test/.git/
test@ubuntu-test:~/test$ git add README.md
test@ubuntu-test:~/test$ git config --global user.email "bksanjuk@test.com"
test@ubuntu-test:~/test$ git config --global user.name "bksanjuk"
test@ubuntu-test:~/test$ git commit -m "first commit"
[master (root-commit) fb6376b] first commit
 1 file changed, 1 insertion(+)
 create mode 100644 README.md
test@ubuntu-test:~/test$ git remote add origin https://github.com/bksanjuk/test.git
test@ubuntu-test:~/test$ git push -u origin master
Username for 'https://github.com': bksanjuk
Password for 'https://bksanjuk@github.com':
Counting objects: 3, done.
Writing objects: 100% (3/3), 213 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To https://github.com/bksanjuk/test.git
 * [new branch]      master -> master
Branch master set up to track remote branch master from origin.
test@ubuntu-test:~/test$

 

Verify on GitHub

FreeBSD Z File System (ZFS)

Document in progress — 2018.03.16

Official site: https://www.freebsd.org/doc/handbook/zfs.html

ZFS tuning: https://www.freebsd.org/doc/handbook/zfs-advanced.html

ZFS features and terminology: https://www.freebsd.org/doc/handbook/zfs-term.html#zfs-term-vdev

 

Test environment: KVM – FreeBSD 11, three VirtIO disks

ZFS tuning points will be tested separately.

This post focuses solely on the ZFS file system in a FreeBSD 11 environment.

Three disks were attached to the VM for ZFS testing.

 

ZFS configuration

Edit /boot/loader.conf to enable ZFS.

root@bsd11:~ # vi /boot/loader.conf
vfs.zfs.min_auto_ashift=12
zfs_load="YES"
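The ashift value above is a power-of-two exponent for the smallest sector size ZFS will use; a quick sketch of what 12 means:

```shell
# vfs.zfs.min_auto_ashift=12 forces sectors of at least 2^12 bytes,
# which matches modern 4K "Advanced Format" disks.
echo $((1 << 12))
# -> 4096
```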

 

Add zfs_enable to rc.conf so the ZFS service starts at boot.

root@bsd11:~ # sysrc zfs_enable=YES
zfs_enable: NO -> YES
root@bsd11:~ #

 

Reboot the system.

root@bsd11:~ # init 6

 

 

Check the disks

Because the disks were attached via VirtIO in KVM, they appear as /dev/vtbdX rather than /dev/adaX.

root@bsd11:~ # ls -al /dev/vtb*
crw-r-----  1 root  operator  0x3d Mar 13 20:30 /dev/vtbd0
crw-r-----  1 root  operator  0x48 Mar 13 20:30 /dev/vtbd1
crw-r-----  1 root  operator  0x49 Mar 13 20:30 /dev/vtbd2
root@bsd11:~ #

 

Adding a single-disk pool

root@bsd11:~ # zpool create test /dev/vtbd0
root@bsd11:~ # df -h
Filesystem      Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a     18G    7.0G    9.9G    41%    /
devfs           1.0K    1.0K      0B   100%    /dev
test             19G     23K     19G     0%    /test
root@bsd11:~ #

 

Check the mount

The test pool created with zpool create is mounted directly onto a directory.

On inspection, this simply creates the test pool and mounts it at the /test directory, without using any further ZFS features.

root@bsd11:/test # mount
/dev/ada0s1a on / (ufs, local, journaled soft-updates)
devfs on /dev (devfs, local, multilabel)
test on /test (zfs, local, nfsv4acls)

 

ZFS compressed file system example

root@bsd11:~ # zfs create test/compressed
root@bsd11:~ # zfs set compression=gzip test/compressed

 

Disabling compression on the file system

root@bsd11:~ # zfs set compression=off test/compressed

 

Check the directories

root@bsd11:~ # df -h
Filesystem         Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a        18G    7.0G    9.9G    41%    /
devfs              1.0K    1.0K      0B   100%    /dev
test                19G     23K     19G     0%    /test
test/compressed     19G     23K     19G     0%    /test/compressed
root@bsd11:~ #

 

Create a data file system that keeps two copies of every data block

root@bsd11:~ # zfs create test/data
root@bsd11:~ # zfs set copies=2 test/data

 

The available space is the same for every file system in the test pool.

root@bsd11:~ # df -h
Filesystem         Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a        18G    7.0G    9.9G    41%    /
devfs              1.0K    1.0K      0B   100%    /dev
test                19G     23K     19G     0%    /test
test/compressed     19G     23K     19G     0%    /test/compressed
test/data           19G     23K     19G     0%    /test/data
root@bsd11:~ #

 

Destroying the ZFS pool

root@bsd11:~ # zfs destroy test/compressed
root@bsd11:~ # zfs destroy test/data
root@bsd11:~ # zpool destroy test

 

 

RAID-Z

RAID-Z can be used to avoid data loss caused by disk failure; a RAID-Z configuration requires three or more disks.

Sun™ recommends that the number of devices used in a RAID-Z configuration be between three and nine. 
For environments requiring a single pool consisting of 10 disks or more, consider breaking it up into smaller RAID-Z groups. 
If only two disks are available and redundancy is a requirement, consider using a ZFS mirror. Refer to zpool(8) for more details.

In short: three to nine disks are recommended for a RAID-Z configuration.
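As a rule of thumb, RAID-Z1 yields the capacity of n−1 disks, since one disk's worth of space goes to parity. A minimal sketch of that arithmetic (the function name is hypothetical, and ZFS metadata overhead is ignored):

```shell
# Rough usable capacity of a RAID-Z1 vdev: (disks - 1) * disk size.
# Ignores ZFS metadata and allocation overhead.
raidz1_usable_gb() {
    disks=$1
    size_gb=$2
    echo $(( (disks - 1) * size_gb ))
}
raidz1_usable_gb 3 20   # three 20 GB disks -> 40
```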

 

Check the disks

root@bsd11:~ # ls -al /dev/vtb*
crw-r-----  1 root  operator  0x3d Mar 13 20:30 /dev/vtbd0
crw-r-----  1 root  operator  0x48 Mar 13 20:30 /dev/vtbd1
crw-r-----  1 root  operator  0x49 Mar 13 20:30 /dev/vtbd2
root@bsd11:~ #

 

RAID-Z configuration

Create the storage pool with zpool create.

root@bsd11:~ # zpool create storage raidz vtbd0 vtbd1 vtbd2

 

Creating the home file system

root@bsd11:~ # zfs create storage/home

 

Keep two copies and enable gzip compression

root@bsd11:~ # zfs set copies=2 storage/home
root@bsd11:~ # zfs set compression=gzip storage/home

 

Migrate the existing /home directory (user home directories).

root@bsd11:~ # cp -rp /home/* /storage/home
root@bsd11:~ # rm -rf /home /usr/home
root@bsd11:~ # ln -s /storage/home /home
root@bsd11:~ # ln -s /storage/home /usr/home

 

Test an SSH login as an existing user.

Login succeeds normally.

[root@test ~]# ssh test@bsd11
Last login: Sat Mar  3 21:49:33 2018
FreeBSD 11.1-RELEASE (GENERIC) #0 r321309: Fri Jul 21 02:08:28 UTC 2017

Welcome to FreeBSD!

Release Notes, Errata: https://www.FreeBSD.org/releases/
Security Advisories:   https://www.FreeBSD.org/security/
FreeBSD Handbook:      https://www.FreeBSD.org/handbook/
FreeBSD FAQ:           https://www.FreeBSD.org/faq/
Questions List: https://lists.FreeBSD.org/mailman/listinfo/freebsd-questions/
FreeBSD Forums:        https://forums.FreeBSD.org/

Documents installed with the system are in the /usr/local/share/doc/freebsd/
directory, or can be installed later with:  pkg install en-freebsd-doc
For other languages, replace "en" with a language code like de or fr.

Show the version of FreeBSD installed:  freebsd-version ; uname -a
Please include that output and any error messages when posting questions.
Introduction to manual pages:  man man
FreeBSD directory layout:      man hier

Edit /etc/motd to change this login announcement.
You can change the video mode on all consoles by adding something like
the following to /etc/rc.conf:

        allscreens="80x30"

You can use "vidcontrol -i mode | grep T" for a list of supported text
modes.
                -- Konstantinos Konstantinidis <kkonstan@duth.gr>
$ ls -al
total 22
drwxr-xr-x  3 test  wheel    11 Mar  3 23:42 .
drwxr-xr-x  3 root  wheel     3 Mar 13 21:04 ..
-rw-r--r--  1 test  wheel  1055 Mar  3 21:43 .cshrc
-rw-r--r--  1 test  wheel   254 Mar  3 21:43 .login
-rw-r--r--  1 test  wheel   163 Mar  3 21:43 .login_conf
-rw-------  1 test  wheel   379 Mar  3 21:43 .mail_aliases
-rw-r--r--  1 test  wheel   336 Mar  3 21:43 .mailrc
-rw-r--r--  1 test  wheel   802 Mar  3 21:43 .profile
-rw-------  1 test  wheel   281 Mar  3 21:43 .rhosts
-rw-r--r--  1 test  wheel   849 Mar  3 21:43 .shrc
drwxr-xr-x  2 test  wheel     3 Mar  3 23:42 public_html
$ whoami
test

 

Creating a ZFS snapshot

root@bsd11:~ # zfs snapshot storage/home@2018-03-13
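Date-stamping snapshot names, as above, makes them easy to locate at rollback time. A small sketch of generating such a name (the zfs call is commented out, since it only runs on the ZFS host):

```shell
# Build a date-stamped snapshot name like storage/home@2018-03-13.
snap="storage/home@$(date +%Y-%m-%d)"
echo "$snap"
# zfs snapshot "$snap"   # run on the FreeBSD host
```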

 

Testing ZFS snapshot rollback

Delete public_html for the test.

root@bsd11:~ # ls -al /home/test/ |grep -i public_html
drwxr-xr-x  2 test  wheel     3 Mar  3 23:42 public_html
root@bsd11:~ #
root@bsd11:~ # rm -rf /home/test/public_html/
root@bsd11:~ # ls -al /home/test | grep -i publ

 

Test the ZFS rollback.

List the snapshots with zfs list -t snapshot, then run the rollback.

root@bsd11:~ # zfs list -t snapshot
NAME                      USED  AVAIL  REFER  MOUNTPOINT
storage/home@2018-03-13  32.0K      -  57.3K  -
root@bsd11:~ #
root@bsd11:~ # zfs rollback storage/home@2018-03-13
root@bsd11:~ # ls -al /home/test/ |grep -i public_html
drwxr-xr-x  2 test  wheel     3 Mar  3 23:42 public_html
root@bsd11:~ #

 

Removing a ZFS snapshot

Remove the snapshot and verify that it is gone.

root@bsd11:~ # zfs destroy storage/home@2018-03-13
root@bsd11:~ # zfs list -t snapshot
no datasets available
root@bsd11:~ #

 

 

ZFS recovery

 

Check all RAID-Z pools.

root@bsd11:~ # zpool status -x
all pools are healthy
root@bsd11:~ #

All pools are online and healthy.

 

Check the disks

root@bsd11:~ # zpool status
  pool: storage
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

 

If a pool is unhealthy, the output looks like this:

  pool: storage
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Online the device using 'zpool online' or replace the device with
	'zpool replace'.
 scrub: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	storage     DEGRADED     0     0     0
	  raidz1    DEGRADED     0     0     0
	    da0     ONLINE       0     0     0
	    da1     OFFLINE      0     0     0
	    da2     ONLINE       0     0     0

errors: No known data errors

 

 

Change the state of the vtbd0 disk to offline.

Even if the disk has failed in hardware, a similar message is shown; shut the system down, swap the disk, and replace it into the storage volume.

root@bsd11:~ # zpool offline storage vtbd0
root@bsd11:~ # zpool status
  pool: storage
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        storage                  DEGRADED     0     0     0
          raidz1-0               DEGRADED     0     0     0
            4767353646844092173  OFFLINE      0     0     0  was /dev/vtbd0
            vtbd1                ONLINE       0     0     0
            vtbd2                ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

The replace command has the following form:

zpool replace $zfs_pool_name $disk_name

root@bsd11:~ # zpool replace storage vtbd0
invalid vdev specification
use '-f' to override the following errors:
/dev/vtbd0 is part of active pool 'storage'
root@bsd11:~ #

 

 

If you keep using the disk that is still attached, just bring it back with zpool online.

Use zpool replace storage only when attaching a separate replacement disk.

root@bsd11:~ # zpool replace storage vtbd0
invalid vdev specification
use '-f' to override the following errors:
/dev/vtbd0 is part of active pool 'storage'
root@bsd11:~ # zpool online storage vtbd0
root@bsd11:~ # zpool status
  pool: storage
 state: ONLINE
  scan: resilvered 17.5K in 0h0m with 0 errors on Tue Mar 13 22:16:57 2018
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Testing with a replacement disk

root@bsd11:~ # zpool status
  pool: storage
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 17.5K in 0h0m with 0 errors on Tue Mar 13 22:16:57 2018
config:

        NAME                     STATE     READ WRITE CKSUM
        storage                  DEGRADED     0     0     0
          raidz1-0               DEGRADED     0     0     0
            4767353646844092173  OFFLINE      0     0     0  was /dev/vtbd0
            vtbd1                ONLINE       0     0     0
            vtbd2                ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

After shutting down the VM, a 10G volume was added.

root@bsd11:~ # ls -al /dev/vtbd*
crw-r-----  1 root  operator  0x3d Mar 13 22:20 /dev/vtbd0
crw-r-----  1 root  operator  0x48 Mar 13 22:20 /dev/vtbd1
crw-r-----  1 root  operator  0x49 Mar 13 22:20 /dev/vtbd2
crw-r-----  1 root  operator  0x4a Mar 13 22:20 /dev/vtbd3
root@bsd11:~ #

 

zpool replace

Use the form zpool replace $pool_name $old_disk $new_disk.

root@bsd11:~ # zpool replace storage vtbd0 vtbd3
root@bsd11:~ # zpool status
  pool: storage
 state: ONLINE
  scan: resilvered 178K in 0h0m with 0 errors on Tue Mar 13 22:29:36 2018
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            vtbd3   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Data Verification

Checksums can be disabled, but it is not recommended! Checksums take very little storage space and provide data integrity. 
Many ZFS features will not work properly with checksums disabled. 
There is no noticeable performance gain from disabling these checksums.

Summary:

ZFS uses checksums for data integrity; the feature can be disabled.

Since the performance gain from disabling is negligible, it is best to leave checksums on.

 

 

Checksum verification is called scrubbing, and the following command verifies the integrity of a ZFS pool.

root@bsd11:~ # zpool scrub storage

 

 

Before the scrub

scan: resilvered 178K in 0h0m with 0 errors on Tue Mar 13 22:29:36 2018

root@bsd11:~ # zpool status
  pool: storage
 state: ONLINE
  scan: resilvered 178K in 0h0m with 0 errors on Tue Mar 13 22:29:36 2018
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            vtbd3   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

 

After the scrub

scan: scrub repaired 0 in 0h0m with 0 errors on Tue Mar 13 22:35:57 2018

root@bsd11:~ # zpool status storage
  pool: storage
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Tue Mar 13 22:35:57 2018
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            vtbd3   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

Scrub duration depends on the amount of data stored; verifying a large amount of data takes a long time. Only one scrub can run on a pool at a time.
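The before/after difference shown above lives in the `scan:` line of `zpool status`. A small sketch of extracting it from captured output (the sample text comes from the transcript above):

```shell
# Extract the "scan:" line from saved `zpool status` output.
status='  pool: storage
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Tue Mar 13 22:35:57 2018'
printf '%s\n' "$status" | sed -n 's/^ *scan: //p'
# -> scrub repaired 0 in 0h0m with 0 errors on Tue Mar 13 22:35:57 2018
```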

 

 

Managing zpool

https://www.freebsd.org/doc/handbook/zfs-zpool.html

ZFS administration is divided between two utilities: the zpool utility controls the operation of the pool,
handling disk addition, removal, replacement, and management.

The zfs utility manages the creation and destruction of datasets, both file systems and volumes.

 

 

Creating and destroying pools

Create a mirror pool.

root@bsd11:~ # zpool create testpool mirror /dev/vtbd0 /dev/vtbd1
invalid vdev specification
use '-f' to override the following errors:
/dev/vtbd0 is part of potentially active pool 'storage'
root@bsd11:~ #

Creation fails because the disks were previously used in the storage pool.

 

Clear the old ZFS labels with zpool labelclear.

root@bsd11:~ # zpool labelclear -f /dev/vtbd0
root@bsd11:~ # zpool labelclear -f /dev/vtbd1
root@bsd11:~ # zpool labelclear -f /dev/vtbd2
root@bsd11:~ # zpool labelclear -f /dev/vtbd3

 

Create testpool as a mirror with zpool create.

root@bsd11:~ # zpool create testpool mirror /dev/vtbd0 /dev/vtbd1
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

To create multiple vdevs, use the following form.

For ZFS terminology, see the features and terminology link at the top of this post.

root@bsd11:~ # zpool create testpool mirror /dev/vtbd0 /dev/vtbd1 mirror /dev/vtbd2 /dev/vtbd3
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0
            vtbd3   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Creating a RAID-Z2 pool

root@bsd11:~ # zpool create testpool raidz2 /dev/vtbd0p1 /dev/vtbd0p2 /dev/vtbd0p3 /dev/vtbd0p4 /dev/vtbd0p5 /dev/vtbd0p6
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        testpool     ONLINE       0     0     0
          raidz2-0   ONLINE       0     0     0
            vtbd0p1  ONLINE       0     0     0
            vtbd0p2  ONLINE       0     0     0
            vtbd0p3  ONLINE       0     0     0
            vtbd0p4  ONLINE       0     0     0
            vtbd0p5  ONLINE       0     0     0
            vtbd0p6  ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

 

Adding and removing devices

Use zpool attach to add a disk to an existing vdev, or zpool add to add a new vdev to the pool.

A single disk has no redundancy, so when corruption is detected it cannot be repaired.

zpool attach can add a disk to a vdev to form a mirror, improving both redundancy and read performance.

root@bsd11:~ # zpool create testpool /dev/vtbd0p1
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          vtbd0p1   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Build the mirror with the zpool attach command.

root@bsd11:~ # zpool attach testpool vtbd0p1 vtbd0p2
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: resilvered 78.5K in 0h0m with 0 errors on Thu Mar 15 21:22:36 2018
config:

        NAME         STATE     READ WRITE CKSUM
        testpool     ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            vtbd0p1  ONLINE       0     0     0
            vtbd0p2  ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

For this test the VM was installed on ZFS, with two identical 40G disks attached.

OS disk: 40G, test disk: 40G

Check the size of ada0p3 on the existing disk.

root@bsd11:~ # gpart list
Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 83886039
first: 40
entries: 152
scheme: GPT
Providers:
1. Name: ada0p1
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   rawuuid: edce08d3-2127-11e8-a62a-8fd7aec5b81f
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: gptboot0
   length: 524288
   offset: 20480
   type: freebsd-boot
   index: 1
   end: 1063
   start: 40
2. Name: ada0p2
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 1048576
   Mode: r1w1e0
   rawuuid: edd96866-2127-11e8-a62a-8fd7aec5b81f
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: swap0
   length: 2147483648
   offset: 1048576
   type: freebsd-swap
   index: 2
   end: 4196351
   start: 2048
3. Name: ada0p3
   Mediasize: 40800092160 (38G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2148532224
   Mode: r1w1e1
   rawuuid: ede26582-2127-11e8-a62a-8fd7aec5b81f
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: zfs0
   length: 40800092160
   offset: 2148532224
   type: freebsd-zfs
   index: 3
   end: 83884031
   start: 4196352
Consumers:
1. Name: ada0
   Mediasize: 42949672960 (40G)
   Sectorsize: 512
   Mode: r2w2e3

 

Partition the new disk.

root@bsd11:~ # gpart create -s GPT ada1
ada1 created
root@bsd11:~ # gpart add -t freebsd-zfs -s 512K ada1
ada1p1 added
root@bsd11:~ # gpart add -t freebsd-zfs -s 2G ada1
ada1p2 added
root@bsd11:~ # gpart add -t freebsd-zfs  ada1
ada1p3 added
root@bsd11:~ #
root@bsd11:~ # gpart show
=>      40  83886000  ada0  GPT  (40G)
        40      1024     1  freebsd-boot  (512K)
      1064       984        - free -  (492K)
      2048   4194304     2  freebsd-swap  (2.0G)
   4196352  79687680     3  freebsd-zfs  (38G)
  83884032      2008        - free -  (1.0M)

=>      40  83886000  ada1  GPT  (40G)
        40      1024     1  freebsd-zfs  (512K)
      1064   4194304     2  freebsd-zfs  (2.0G)
   4195368  79690672     3  freebsd-zfs  (38G)

root@bsd11:~ #

 

Add ada1p3 to the zroot vdev with zpool attach.

root@bsd11:~ # zpool status
  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          ada0p3    ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ # zpool attach zroot ada0p3 ada1p3
Make sure to wait until resilver is done before rebooting.

If you boot from pool 'zroot', you may need to update
boot code on newly attached disk 'ada1p3'.

Assuming you use GPT partitioning and 'da0' is your new boot disk
you may use the following command:

        gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0

root@bsd11:~ #

 

Make the new disk bootable with gpart.

root@bsd11:~ # gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
partcode written to ada1p1
bootcode written to ada1
root@bsd11:~ # zpool status
  pool: zroot
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Mar 15 22:35:38 2018
        4.70G scanned out of 6.65G at 50.2M/s, 0h0m to go
        4.70G resilvered, 70.75% done
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0p3  ONLINE       0     0     0
            ada1p3  ONLINE       0     0     0  (resilvering)

errors: No known data errors
root@bsd11:~ #

 

Resilvering mirrors the contents of ada0p3 onto ada1p3 and takes a while.

(The time varies with capacity.)

State after completion:

root@bsd11:~ # zpool status
  pool: zroot
 state: ONLINE
  scan: resilvered 6.65G in 0h2m with 0 errors on Thu Mar 15 22:37:50 2018
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0p3  ONLINE       0     0     0
            ada1p3  ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Create testpool from the vtbd0 and vtbd1 disks.

root@bsd11:~ # zpool create testpool mirror /dev/vtbd0 /dev/vtbd1
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

In this state the vtbdX disks cannot be removed.

Add more disks.

root@bsd11:~ # zpool add testpool mirror vtbd2 vtbd3
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0
            vtbd3   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Remove the vtbd2 disk for the test.

A disk can be removed only when sufficient redundancy remains.
The single disk left from a mirror group then operates as a stripe.

root@bsd11:~ # zpool detach testpool vtbd2
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0
          vtbd3     ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Pool status check

 

Disk replacement

zpool replace copies all data from the old disk to the new disk.
When the operation completes, the old disk is detached from the vdev.

root@bsd11:~ # zpool create testpool mirror /dev/vtbd0 /dev/vtbd1
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Replace the vtbd1 disk with vtbd2.

root@bsd11:~ # zpool replace testpool vtbd1 vtbd2
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: resilvered 80K in 0h0m with 0 errors on Thu Mar 15 23:07:04 2018
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Scrubbing the pool

It is good practice to scrub each ZFS pool regularly, at least once a month.
Running a scrub while the disks are under heavy use degrades performance.
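One way to schedule the monthly scrub is a crontab entry. The fragment below is an illustrative example, not from the original post; the schedule and pool name are assumptions (FreeBSD's periodic(8) scripts also offer a similar scheduled-scrub knob).

```shell
# Example /etc/crontab entry: scrub testpool at 03:00 on the 1st of each month.
0 3 1 * * root /sbin/zpool scrub testpool
```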

root@bsd11:~ # zpool scrub testpool
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Thu Mar 15 23:10:20 2018
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Self-Healing Test

The checksums stored with each data block allow the file system to repair itself automatically.
Damaged data is restored from the matching data recorded on another disk in the storage pool.

 

root@bsd11:/usr/local/etc # zpool status
  pool: testpool
 state: ONLINE
  scan: scrub repaired 5K in 0h0m with 0 errors on Thu Mar 15 23:17:43 2018
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0

errors: No known data errors
root@bsd11:/usr/local/etc #

 

For the self-healing test, copy some files over and generate a checksum.

root@bsd11:~ # cd /usr/local/etc/
root@bsd11:/usr/local/etc # cp * /testpool/
cp: apache24 is a directory (not copied).
cp: bash_completion.d is a directory (not copied).
cp: man.d is a directory (not copied).
cp: newsyslog.conf.d is a directory (not copied).
cp: periodic is a directory (not copied).
cp: php is a directory (not copied).
cp: php-fpm.d is a directory (not copied).
cp: rc.d is a directory (not copied).
cp: ssl is a directory (not copied).
root@bsd11:/usr/local/etc # cd
root@bsd11:~ # sha1 /testpool > checksum.txt
root@bsd11:~ # cat checksum.txt
SHA1 (/testpool) = 34d4723284883bf65b788e6674c7e475dc4102e9
root@bsd11:~ #

 

Overwrite the vtbd0 disk with dd.

root@bsd11:~ # zpool export testpool
root@bsd11:~ # dd if=/dev/random of=/dev/vtbd0 bs=1m count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 2.680833 secs (78227611 bytes/sec)
root@bsd11:~ #
root@bsd11:~ # zpool import testpool

 

Check the CKSUM column for testpool.

root@bsd11:~ # zpool status testpool
  pool: testpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 5K in 0h0m with 0 errors on Thu Mar 15 23:17:43 2018
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     1

errors: No known data errors
root@bsd11:~ #

 

ZFS detects the redundancy available on the mirror disk vtbd0, which was unaffected by the errors.
Comparing the new checksum with the original shows whether the data is identical.

root@bsd11:~ # sha1 /testpool >> checksum.txt
root@bsd11:~ # cat checksum.txt
SHA1 (/testpool) = 34d4723284883bf65b788e6674c7e475dc4102e9
SHA1 (/testpool) = 34d4723284883bf65b788e6674c7e475dc4102e9
root@bsd11:~ #

The checksums from before and after intentionally damaging the pool data are identical.
ZFS automatically detects and corrects errors when checksums differ.
This is possible only when the pool has sufficient redundancy; a pool made of a single device has no self-healing capability.
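The before/after comparison above can be scripted. A minimal, portable sketch using cksum on a throwaway file (the file path is an example; on the FreeBSD host you would use sha1 against the pool's mountpoint as in the transcript):

```shell
# Compare a file's checksum before and after an operation.
printf 'hello\n' > /tmp/selfheal-demo.txt
before=$(cksum /tmp/selfheal-demo.txt | awk '{print $1}')
# ... intentionally damage the pool and let ZFS self-heal ...
after=$(cksum /tmp/selfheal-demo.txt | awk '{print $1}')
[ "$before" = "$after" ] && echo "data intact"
```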

 

The scrub operation reads the data from vtbd0 and rewrites the data with bad checksums on vtbd1.

root@bsd11:~ # zpool scrub testpool
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 253K in 0h0m with 0 errors on Thu Mar 15 23:30:15 2018
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0    29

errors: No known data errors
root@bsd11:~ #

 

Once the scrub completes, run zpool clear to clear the errors.

root@bsd11:~ # zpool clear testpool
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: scrub repaired 253K in 0h0m with 0 errors on Thu Mar 15 23:30:15 2018
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Importing and exporting pools

root@bsd11:~ # zpool export testpool
root@bsd11:~ # zpool import
   pool: testpool
     id: 4252914017303616931
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        testpool    ONLINE
          mirror-0  ONLINE
            vtbd0   ONLINE
            vtbd1   ONLINE
root@bsd11:~ # df -h
Filesystem      Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a     18G    7.8G    9.1G    46%    /
devfs           1.0K    1.0K      0B   100%    /dev
root@bsd11:~ #

 

The import -o option can specify an alternate root path.

When a path is given, the pool is mounted at $path/$pool_name.

root@bsd11:~ # zpool import -o altroot=/mnt testpool
root@bsd11:~ # df -h
Filesystem      Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a     18G    7.8G    9.1G    46%    /
devfs           1.0K    1.0K      0B   100%    /dev
testpool        9.6G    265K    9.6G     0%    /mnt/testpool
root@bsd11:~ #

 

Storage pool upgrade

Upgrading FreeBSD also upgrades the ZFS version, enabling support for new features. Pools can be upgraded, but a downgrade is not possible.

# Whether pools created earlier also need to be upgraded is something to confirm. 🙂

 

root@bsd11:~ # zpool upgrade

 

Upgrading a previously used pool

root@bsd11:~ # zpool upgrade testpool

 

zpool history

root@bsd11:~ # zpool history
History for 'testpool':
2018-03-15.23:06:26 zpool create testpool mirror /dev/vtbd0 /dev/vtbd1
2018-03-15.23:07:09 zpool replace testpool vtbd1 vtbd2
2018-03-15.23:10:25 zpool scrub testpool
2018-03-15.23:13:10 zpool replace testpool vtbd2 vtbd1
2018-03-15.23:14:25 zpool export testpool
2018-03-15.23:16:48 zpool import testpool
2018-03-15.23:17:48 zpool scrub testpool
2018-03-15.23:18:08 zpool clear testpool
2018-03-15.23:24:30 zpool export testpool
2018-03-15.23:25:49 zpool import testpool
2018-03-15.23:30:20 zpool scrub testpool
2018-03-15.23:32:46 zpool clear testpool
2018-03-15.23:37:30 zpool export testpool
2018-03-15.23:39:04 zpool import -o altroot=/mnt testpool

root@bsd11:~ #
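Because zpool history emits plain text, it can be post-processed with standard tools. A small sketch — the sample lines are copied from the output above for illustration; on a live system you would pipe zpool history testpool directly:

```shell
# Filter only the scrub invocations out of saved `zpool history` output.
history_log='2018-03-15.23:06:26 zpool create testpool mirror /dev/vtbd0 /dev/vtbd1
2018-03-15.23:10:25 zpool scrub testpool
2018-03-15.23:17:48 zpool scrub testpool
2018-03-15.23:18:08 zpool clear testpool
2018-03-15.23:30:20 zpool scrub testpool'

# Keep only the scrub lines.
printf '%s\n' "$history_log" | grep 'zpool scrub'
```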

 

The -i option also shows internal ZFS events.

root@bsd11:~ # zpool history -i
History for 'testpool':
2018-03-15.23:06:26 [txg:5] create pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64
2018-03-15.23:06:26 zpool create testpool mirror /dev/vtbd0 /dev/vtbd1
2018-03-15.23:07:04 [txg:16] scan setup func=2 mintxg=3 maxtxg=16
2018-03-15.23:07:04 [txg:17] scan done errors=0
2018-03-15.23:07:04 [txg:18] vdev attach replace vdev=/dev/vtbd2 for vdev=/dev/vtbd1
2018-03-15.23:07:09 [txg:19] detach vdev=/dev/vtbd1
2018-03-15.23:07:09 zpool replace testpool vtbd1 vtbd2
2018-03-15.23:10:20 [txg:57] scan setup func=1 mintxg=0 maxtxg=57
2018-03-15.23:10:20 [txg:58] scan done errors=0
2018-03-15.23:10:25 zpool scrub testpool
2018-03-15.23:13:05 [txg:94] scan setup func=2 mintxg=3 maxtxg=94
2018-03-15.23:13:05 [txg:95] scan done errors=0
2018-03-15.23:13:05 [txg:96] vdev attach replace vdev=/dev/vtbd1 for vdev=/dev/vtbd2
2018-03-15.23:13:10 [txg:97] detach vdev=/dev/vtbd2
2018-03-15.23:13:10 zpool replace testpool vtbd2 vtbd1
2018-03-15.23:14:25 zpool export testpool
2018-03-15.23:16:43 [txg:117] open pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64
2018-03-15.23:16:43 [txg:119] import pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64
2018-03-15.23:16:48 zpool import testpool
2018-03-15.23:17:43 [txg:132] scan setup func=1 mintxg=0 maxtxg=132
2018-03-15.23:17:43 [txg:133] scan done errors=0
2018-03-15.23:17:48 zpool scrub testpool
2018-03-15.23:18:08 zpool clear testpool
2018-03-15.23:24:30 zpool export testpool
2018-03-15.23:25:44 [txg:219] open pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64
2018-03-15.23:25:44 [txg:221] import pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64
2018-03-15.23:25:49 zpool import testpool
2018-03-15.23:30:15 [txg:276] scan setup func=1 mintxg=0 maxtxg=276
2018-03-15.23:30:15 [txg:277] scan done errors=0
2018-03-15.23:30:20 zpool scrub testpool
2018-03-15.23:32:46 zpool clear testpool
2018-03-15.23:37:30 zpool export testpool
2018-03-15.23:38:58 [txg:369] open pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64
2018-03-15.23:38:58 [txg:371] import pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64
2018-03-15.23:39:04 zpool import -o altroot=/mnt testpool

root@bsd11:~ #

 

The -l option also shows the user name and hostname.

root@bsd11:~ # zpool history -l
History for 'testpool':
2018-03-15.23:06:26 zpool create testpool mirror /dev/vtbd0 /dev/vtbd1 [user 0 (root) on bsd11]
2018-03-15.23:07:09 zpool replace testpool vtbd1 vtbd2 [user 0 (root) on bsd11]
2018-03-15.23:10:25 zpool scrub testpool [user 0 (root) on bsd11]
2018-03-15.23:13:10 zpool replace testpool vtbd2 vtbd1 [user 0 (root) on bsd11]
2018-03-15.23:14:25 zpool export testpool [user 0 (root) on bsd11]
2018-03-15.23:16:48 zpool import testpool [user 0 (root) on bsd11]
2018-03-15.23:17:48 zpool scrub testpool [user 0 (root) on bsd11]
2018-03-15.23:18:08 zpool clear testpool [user 0 (root) on bsd11]
2018-03-15.23:24:30 zpool export testpool [user 0 (root) on bsd11]
2018-03-15.23:25:49 zpool import testpool [user 0 (root) on bsd11]
2018-03-15.23:30:20 zpool scrub testpool [user 0 (root) on bsd11]
2018-03-15.23:32:46 zpool clear testpool [user 0 (root) on bsd11]
2018-03-15.23:37:30 zpool export testpool [user 0 (root) on bsd11]
2018-03-15.23:39:04 zpool import -o altroot=/mnt testpool [user 0 (root) on bsd11]

root@bsd11:~ #

 

zpool monitoring

root@bsd11:~ # zpool iostat
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
testpool     372K  9.94G      0      0     96    447
root@bsd11:~ #

 

zpool iostat -v is the verbose option.

It breaks the statistics down to per-disk reads and writes.

root@bsd11:~ # zpool iostat -v
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
testpool     372K  9.94G      0      0     82    382
  mirror     372K  9.94G      0      0     82    382
    vtbd0       -      -      0      0  2.02K  1.65K
    vtbd1       -      -      0      0  1.38K  1.65K
----------  -----  -----  -----  -----  -----  -----

root@bsd11:~ #

 

 

ZFS administration

https://www.freebsd.org/doc/handbook/zfs-zfs.html

The zfs utility creates, destroys, and manages ZFS datasets within a pool.

 

Creating and destroying datasets

root@bsd11:~ # zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
zroot                 6.65G  29.9G    88K  /zroot
zroot/ROOT             627M  29.9G    88K  none
zroot/ROOT/default     627M  29.9G   627M  /
zroot/jails           4.76G  29.9G   112K  /usr/jails
zroot/jails/basejail  1003M  29.9G   979M  /usr/jails/basejail
zroot/jails/database  1.95G  29.9G  1.95G  /usr/jails/database
zroot/jails/httpd     1.83G  29.9G  1.83G  /usr/jails/httpd
zroot/jails/newjail   4.66M  29.9G  4.66M  /usr/jails/newjail
zroot/jails/www       4.75M  29.9G  4.75M  /usr/jails/www
zroot/tmp               88K  29.9G    88K  /tmp
zroot/usr             1.27G  29.9G    88K  /usr
zroot/usr/home          88K  29.9G    88K  /usr/home
zroot/usr/ports        665M  29.9G   665M  /usr/ports
zroot/usr/src          633M  29.9G   633M  /usr/src
zroot/var              604K  29.9G    88K  /var
zroot/var/audit         88K  29.9G    88K  /var/audit
zroot/var/crash         88K  29.9G    88K  /var/crash
zroot/var/log          164K  29.9G   164K  /var/log
zroot/var/mail          88K  29.9G    88K  /var/mail
zroot/var/tmp           88K  29.9G    88K  /var/tmp
root@bsd11:~ #

 

Create a new dataset with LZ4 compression enabled.

root@bsd11:~ # zfs create -o compress=lz4 zroot/zroottest
root@bsd11:~ # zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
zroot                 6.65G  29.9G    88K  /zroot
zroot/ROOT             627M  29.9G    88K  none
zroot/ROOT/default     627M  29.9G   627M  /
zroot/jails           4.76G  29.9G   112K  /usr/jails
zroot/jails/basejail  1003M  29.9G   979M  /usr/jails/basejail
zroot/jails/database  1.95G  29.9G  1.95G  /usr/jails/database
zroot/jails/httpd     1.83G  29.9G  1.83G  /usr/jails/httpd
zroot/jails/newjail   4.66M  29.9G  4.66M  /usr/jails/newjail
zroot/jails/www       4.75M  29.9G  4.75M  /usr/jails/www
zroot/tmp               88K  29.9G    88K  /tmp
zroot/usr             1.27G  29.9G    88K  /usr
zroot/usr/home          88K  29.9G    88K  /usr/home
zroot/usr/ports        665M  29.9G   665M  /usr/ports
zroot/usr/src          633M  29.9G   633M  /usr/src
zroot/var              604K  29.9G    88K  /var
zroot/var/audit         88K  29.9G    88K  /var/audit
zroot/var/crash         88K  29.9G    88K  /var/crash
zroot/var/log          164K  29.9G   164K  /var/log
zroot/var/mail          88K  29.9G    88K  /var/mail
zroot/var/tmp           88K  29.9G    88K  /var/tmp
zroot/zroottest         88K  29.9G    88K  /zroot/zroottest
root@bsd11:~ #

 

Destroy the dataset created above.

root@bsd11:~ # zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
zroot                 6.65G  29.9G    88K  /zroot
zroot/ROOT             627M  29.9G    88K  none
zroot/ROOT/default     627M  29.9G   627M  /
zroot/jails           4.76G  29.9G   112K  /usr/jails
zroot/jails/basejail  1003M  29.9G   979M  /usr/jails/basejail
zroot/jails/database  1.95G  29.9G  1.95G  /usr/jails/database
zroot/jails/httpd     1.83G  29.9G  1.83G  /usr/jails/httpd
zroot/jails/newjail   4.66M  29.9G  4.66M  /usr/jails/newjail
zroot/jails/www       4.75M  29.9G  4.75M  /usr/jails/www
zroot/tmp               88K  29.9G    88K  /tmp
zroot/usr             1.27G  29.9G    88K  /usr
zroot/usr/home          88K  29.9G    88K  /usr/home
zroot/usr/ports        665M  29.9G   665M  /usr/ports
zroot/usr/src          633M  29.9G   633M  /usr/src
zroot/var              604K  29.9G    88K  /var
zroot/var/audit         88K  29.9G    88K  /var/audit
zroot/var/crash         88K  29.9G    88K  /var/crash
zroot/var/log          164K  29.9G   164K  /var/log
zroot/var/mail          88K  29.9G    88K  /var/mail
zroot/var/tmp           88K  29.9G    88K  /var/tmp
zroot/zroottest         88K  29.9G    88K  /zroot/zroottest
root@bsd11:~ # zfs destroy zroot/zroottest
root@bsd11:~ # zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
zroot                 6.65G  29.9G    88K  /zroot
zroot/ROOT             627M  29.9G    88K  none
zroot/ROOT/default     627M  29.9G   627M  /
zroot/jails           4.76G  29.9G   112K  /usr/jails
zroot/jails/basejail  1003M  29.9G   979M  /usr/jails/basejail
zroot/jails/database  1.95G  29.9G  1.95G  /usr/jails/database
zroot/jails/httpd     1.83G  29.9G  1.83G  /usr/jails/httpd
zroot/jails/newjail   4.66M  29.9G  4.66M  /usr/jails/newjail
zroot/jails/www       4.75M  29.9G  4.75M  /usr/jails/www
zroot/tmp               88K  29.9G    88K  /tmp
zroot/usr             1.27G  29.9G    88K  /usr
zroot/usr/home          88K  29.9G    88K  /usr/home
zroot/usr/ports        665M  29.9G   665M  /usr/ports
zroot/usr/src          633M  29.9G   633M  /usr/src
zroot/var              604K  29.9G    88K  /var
zroot/var/audit         88K  29.9G    88K  /var/audit
zroot/var/crash         88K  29.9G    88K  /var/crash
zroot/var/log          164K  29.9G   164K  /var/log
zroot/var/mail          88K  29.9G    88K  /var/mail
zroot/var/tmp           88K  29.9G    88K  /var/tmp
root@bsd11:~ #

In recent ZFS versions, zfs destroy operates asynchronously.
It can take a few minutes for the freed space to show up in the pool.
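Progress of an asynchronous destroy can be followed through the pool's freeing property, which reports the space still to be reclaimed. A minimal polling sketch, assuming the testpool name from the examples above (the zpool invocation appears only as a comment):

```shell
# Poll a command that prints a number until it reports 0.
# On a live system $1 would be something like:
#   zpool get -H -p -o value freeing testpool
wait_until_zero() {
  while [ "$($1)" -ne 0 ]; do
    sleep 1
  done
}
```

Called as wait_until_zero 'zpool get -H -p -o value freeing testpool', it returns once the pool reports nothing left to free.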

 

Creating and destroying volumes

A ZFS volume can be formatted with any file system, or hold raw data with no
file system at all. To the user, a volume looks like an ordinary disk.

root@bsd11:~ # zfs create -V 250m -o compression=on zroot/fat32
root@bsd11:~ # zfs list zroot
NAME    USED  AVAIL  REFER  MOUNTPOINT
zroot  6.90G  29.7G    88K  /zroot
root@bsd11:~ # zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
zroot                 6.90G  29.7G    88K  /zroot
zroot/ROOT             627M  29.7G    88K  none
zroot/ROOT/default     627M  29.7G   627M  /
zroot/fat32            260M  29.9G    56K  -
zroot/jails           4.76G  29.7G   112K  /usr/jails
zroot/jails/basejail  1003M  29.7G   979M  /usr/jails/basejail
zroot/jails/database  1.95G  29.7G  1.95G  /usr/jails/database
zroot/jails/httpd     1.83G  29.7G  1.83G  /usr/jails/httpd
zroot/jails/newjail   4.66M  29.7G  4.66M  /usr/jails/newjail
zroot/jails/www       4.75M  29.7G  4.75M  /usr/jails/www
zroot/tmp               88K  29.7G    88K  /tmp
zroot/usr             1.27G  29.7G    88K  /usr
zroot/usr/home          88K  29.7G    88K  /usr/home
zroot/usr/ports        665M  29.7G   665M  /usr/ports
zroot/usr/src          633M  29.7G   633M  /usr/src
zroot/var              604K  29.7G    88K  /var
zroot/var/audit         88K  29.7G    88K  /var/audit
zroot/var/crash         88K  29.7G    88K  /var/crash
zroot/var/log          164K  29.7G   164K  /var/log
zroot/var/mail          88K  29.7G    88K  /var/mail
zroot/var/tmp           88K  29.7G    88K  /var/tmp
root@bsd11:~ #

 

Format the fat32 volume on zroot as an MS-DOS file system and mount it on /mnt.

Specifying -F 32 was rejected as an invalid argument, so the test was run without the -F option.

root@bsd11:~ # newfs_msdos  /dev/zvol/zroot/fat32
newfs_msdos: cannot get number of sectors per track: Operation not supported
newfs_msdos: cannot get number of heads: Operation not supported
newfs_msdos: trim 62 sectors to adjust to a multiple of 63
/dev/zvol/zroot/fat32: 511648 sectors in 31978 FAT16 clusters (8192 bytes/cluster)
BytesPerSec=512 SecPerClust=16 ResSectors=1 FATs=2 RootDirEnts=512 Media=0xf0 FATsecs=125 SecPerTrack=63 Heads=16 HiddenSecs=0 HugeSectors=511938
root@bsd11:~ # mount -t msdosfs /dev/zvol/zroot/fat32 /mnt
mount_msdosfs: /dev/zvol/zroot/fat32: Invalid argument
root@bsd11:~ # mount -t msdosfs /dev/zvol/zroot/fat32 /mnt
root@bsd11:~ # df -h |grep -i mnt
/dev/zvol/zroot/fat32    250M     16K    250M     0%    /mnt
root@bsd11:~ # mount |grep -i mnt
/dev/zvol/zroot/fat32 on /mnt (msdosfs, local)
root@bsd11:~ #
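The figures printed by newfs_msdos above can be sanity-checked with shell arithmetic: with SecPerClust=16 the 511648 usable sectors divide into the reported 31978 FAT16 clusters, and with BytesPerSec=512 a cluster is the reported 8192 bytes.

```shell
# Cross-check the newfs_msdos output above.
sectors=511648        # usable sectors reported by newfs_msdos
sec_per_clust=16      # SecPerClust
bytes_per_sec=512     # BytesPerSec

echo $((sectors / sec_per_clust))        # clusters -> 31978
echo $((sec_per_clust * bytes_per_sec))  # bytes per cluster -> 8192
```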

 

Renaming datasets

A dataset is renamed with zfs rename, which can also move it under a different parent dataset; moving a dataset changes the property values it inherits from its new parent.
Renaming unmounts the dataset and remounts it at the new location;
the -u option prevents this remount.

root@bsd11:~ # zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
zroot                 6.90G  29.7G    88K  /zroot
zroot/ROOT             627M  29.7G    88K  none
zroot/ROOT/default     627M  29.7G   627M  /
zroot/fat32            260M  29.9G    68K  -
zroot/jails           4.76G  29.7G   112K  /usr/jails
zroot/jails/basejail  1003M  29.7G   979M  /usr/jails/basejail
zroot/jails/database  1.95G  29.7G  1.95G  /usr/jails/database
zroot/jails/httpd     1.83G  29.7G  1.83G  /usr/jails/httpd
zroot/jails/newjail   4.66M  29.7G  4.66M  /usr/jails/newjail
zroot/jails/www       4.75M  29.7G  4.75M  /usr/jails/www
zroot/tmp               88K  29.7G    88K  /tmp
zroot/usr             1.27G  29.7G    88K  /usr
zroot/usr/home          88K  29.7G    88K  /usr/home
zroot/usr/ports        665M  29.7G   665M  /usr/ports
zroot/usr/src          633M  29.7G   633M  /usr/src
zroot/var              604K  29.7G    88K  /var
zroot/var/audit         88K  29.7G    88K  /var/audit
zroot/var/crash         88K  29.7G    88K  /var/crash
zroot/var/log          164K  29.7G   164K  /var/log
zroot/var/mail          88K  29.7G    88K  /var/mail
zroot/var/tmp           88K  29.7G    88K  /var/tmp
root@bsd11:~ # zfs rename zroot/fat32 zroot/fat16
root@bsd11:~ # zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
zroot                 6.90G  29.7G    88K  /zroot
zroot/ROOT             627M  29.7G    88K  none
zroot/ROOT/default     627M  29.7G   627M  /
zroot/fat16            260M  29.9G    68K  -
zroot/jails           4.76G  29.7G   112K  /usr/jails
zroot/jails/basejail  1003M  29.7G   979M  /usr/jails/basejail
zroot/jails/database  1.95G  29.7G  1.95G  /usr/jails/database
zroot/jails/httpd     1.83G  29.7G  1.83G  /usr/jails/httpd
zroot/jails/newjail   4.66M  29.7G  4.66M  /usr/jails/newjail
zroot/jails/www       4.75M  29.7G  4.75M  /usr/jails/www
zroot/tmp               88K  29.7G    88K  /tmp
zroot/usr             1.27G  29.7G    88K  /usr
zroot/usr/home          88K  29.7G    88K  /usr/home
zroot/usr/ports        665M  29.7G   665M  /usr/ports
zroot/usr/src          633M  29.7G   633M  /usr/src
zroot/var              604K  29.7G    88K  /var
zroot/var/audit         88K  29.7G    88K  /var/audit
zroot/var/crash         88K  29.7G    88K  /var/crash
zroot/var/log          164K  29.7G   164K  /var/log
zroot/var/mail          88K  29.7G    88K  /var/mail
zroot/var/tmp           88K  29.7G    88K  /var/tmp
root@bsd11:~ #

 

A snapshot cannot be renamed this way; by its nature it cannot be moved into a
different parent dataset.
To rename a recursive snapshot, specify -r; this also renames the identically named snapshots on all child datasets.

root@bsd11:~ # zfs rename -r zroot/var/test@2018-03-15 @2018-03-16
root@bsd11:~ # zfs list -t snapshot

 

Setting dataset properties

root@bsd11:~ # zfs set custom:costcenter=1234 zroot
root@bsd11:~ # zfs get custom:costcenter zroot
NAME   PROPERTY           VALUE              SOURCE
zroot  custom:costcenter  1234               local
root@bsd11:~ #

 

To remove a custom property, use zfs inherit with the -r option.

root@bsd11:~ # zfs get custom:costcenter zroot
NAME   PROPERTY           VALUE              SOURCE
zroot  custom:costcenter  1234               local
root@bsd11:~ # zfs inherit -r custom:costcenter zroot
root@bsd11:~ # zfs get custom:costcenter
NAME                                    PROPERTY           VALUE              SOURCE
zroot                                   custom:costcenter  -                  -
zroot/ROOT                              custom:costcenter  -                  -
zroot/ROOT/default                      custom:costcenter  -                  -
zroot/fat16                             custom:costcenter  -                  -
zroot/jails                             custom:costcenter  -                  -
zroot/jails/basejail                    custom:costcenter  -                  -
zroot/jails/basejail@20180306_19:50:50  custom:costcenter  -                  -
zroot/jails/basejail@20180306_20:02:39  custom:costcenter  -                  -
zroot/jails/database                    custom:costcenter  -                  -
zroot/jails/httpd                       custom:costcenter  -                  -
zroot/jails/newjail                     custom:costcenter  -                  -
zroot/jails/www                         custom:costcenter  -                  -
zroot/test                              custom:costcenter  -                  -
zroot/tmp                               custom:costcenter  -                  -
zroot/usr                               custom:costcenter  -                  -
zroot/usr/home                          custom:costcenter  -                  -
zroot/usr/ports                         custom:costcenter  -                  -
zroot/usr/src                           custom:costcenter  -                  -
zroot/var                               custom:costcenter  -                  -
zroot/var/audit                         custom:costcenter  -                  -
zroot/var/crash                         custom:costcenter  -                  -
zroot/var/log                           custom:costcenter  -                  -
zroot/var/mail                          custom:costcenter  -                  -
zroot/var/tmp                           custom:costcenter  -                  -
root@bsd11:~ # zfs get custom:costcenter zroot
NAME   PROPERTY           VALUE              SOURCE
zroot  custom:costcenter  -                  -
root@bsd11:~ #

 

Share properties

These are the NFS / SMB share options.
They define how a ZFS dataset can be shared on the network.
Both default to off.

root@bsd11:~ # zfs get sharenfs zroot/usr/home
NAME            PROPERTY  VALUE     SOURCE
zroot/usr/home  sharenfs  off       default
root@bsd11:~ # zfs get sharesmb zroot/usr/home
NAME            PROPERTY  VALUE     SOURCE
zroot/usr/home  sharesmb  off       default
root@bsd11:~ #

 

Configure NFS sharing for /usr/home

root@bsd11:~ # zfs set sharenfs=on zroot/usr/home
root@bsd11:~ # zfs get sharenfs zroot/usr/home
NAME            PROPERTY  VALUE     SOURCE
zroot/usr/home  sharenfs  on        local
root@bsd11:~ #

 

Example of zfs set sharenfs:

NFS export options can be set as follows.

root@bsd11:~ # zfs set sharenfs="-alldirs,=maproot=root,-network=192.168.0.0/24" zroot/usr/home
root@bsd11:~ # zfs get sharenfs zroot/usr/home
NAME            PROPERTY  VALUE                                           SOURCE
zroot/usr/home  sharenfs  -alldirs,=maproot=root,-network=192.168.0.0/24  local
root@bsd11:~ #

 

Snapshot management

— in progress

 

 

FreeBSD VirtIO

virtio reference: https://wiki.libvirt.org/page/Virtio

So-called “full virtualization” is a nice feature because it allows you to run any operating system virtualized. However, it’s slow because the hypervisor has to emulate actual physical devices such as RTL8139 network cards. This emulation is both complicated and inefficient.

Virtio is a virtualization standard for network and disk device drivers where just the guest’s device driver “knows” it is running in a virtual environment, and cooperates with the hypervisor. This enables guests to get high performance network and disk operations, and gives most of the performance benefits of paravirtualization.

Note that virtio is different, but architecturally similar to, Xen paravirtualized device drivers (such as the ones that you can install in a Windows guest to make it go faster under Xen). Also similar is VMWare’s Guest Tools.

This page describes how to configure libvirt to use virtio with KVM guests.

When using KVM you will often need the VirtIO drivers. In the past, virtio-kmod had to be installed from ports and entries such as virtio_load added to /boot/loader.conf, but on FreeBSD 11 a disk can simply be attached and used directly.

 

VirtIO Driver load

This step only confirms that the driver is loaded; it is not otherwise required.

 

root@bsd11:~ # kldload virtio
kldload: can't load virtio: module already loaded or in kernel
root@bsd11:~ #

It reports that the module is already loaded in the kernel.

 

I should also try a make world later. 🙂

root@bsd11:~ # cat /usr/src/sys/amd64/conf/GENERIC | grep vir
device          virtio                  # Generic VirtIO bus (required)
device          virtio_pci              # VirtIO PCI device
device          virtio_blk              # VirtIO Block device
device          virtio_scsi             # VirtIO SCSI device
device          virtio_balloon          # VirtIO Memory Balloon device
root@bsd11:~ #

 

 

Checking VirtIO devices

Disk devices appear as vtbdX.

root@bsd11:~ # ls -al /dev/vtb*
crw-r-----  1 root  operator  0x48 Mar 11 21:49 /dev/vtbd0
root@bsd11:~ #

 

VirtIO Disk Test

root@bsd11:~ # gpart create -s GPT vtbd0
vtbd0 created
root@bsd11:~ # gpart add -t freebsd-ufs vtbd0
vtbd0p1 added
root@bsd11:~ # newfs -U /dev/vtbd0p1
/dev/vtbd0p1: 10240.0MB (20971440 sectors) block size 32768, fragment size 4096
        using 17 cylinder groups of 626.09MB, 20035 blks, 80256 inodes.
        with soft updates
super-block backups (for fsck_ffs -b #) at:
 192, 1282432, 2564672, 3846912, 5129152, 6411392, 7693632, 8975872, 10258112,
 11540352, 12822592, 14104832, 15387072, 16669312, 17951552, 19233792, 20516032
root@bsd11:~ # mkdir /data
root@bsd11:~ # mount /dev/vtbd0p1 /data/
root@bsd11:~ # df -h | grep -i data
/dev/vtbd0p1            9.7G    8.0K    8.9G     0%    /data
root@bsd11:~ #

 

FreeBSD Adding Disk

Official site: https://www.freebsd.org/doc/handbook/disks-adding.html

For more detail, refer to the FreeBSD Handbook.

sysinstall, commonly used in FreeBSD 7–8, is no longer available in FreeBSD 11.

root@bsd11:~ # sysinstall
sysinstall: Command not found.
root@bsd11:~ #

 

FreeBSD 11 uses bsdinstall instead.

root@BSD11:~ # bsdinstall

(I have not verified whether disks can be added through bsdinstall.)

 

Check the disks with the gpart command.

Existing disks can be listed with gpart show.

root@bsd11:~ # ls -al /dev/ad*
crw-r-----  1 root  operator  0x5b Mar 11 20:57 /dev/ada0
crw-r-----  1 root  operator  0x5c Mar 11 20:57 /dev/ada0p1
crw-r-----  1 root  operator  0x5d Mar 11 20:57 /dev/ada0p2
crw-r-----  1 root  operator  0x5e Mar 11 20:57 /dev/ada0p3
crw-r-----  1 root  operator  0x5f Mar 11 20:57 /dev/ada1

Check the existing disks:
root@bsd11:~ # gpart show
=>      40  83886000  ada0  GPT  (40G)
        40      1024     1  freebsd-boot  (512K)
      1064       984        - free -  (492K)
      2048   4194304     2  freebsd-swap  (2.0G)
   4196352  79687680     3  freebsd-zfs  (38G)
  83884032      2008        - free -  (1.0M)

root@bsd11:~ #

 

Create a GPT partition table on the ada1 device.

Check ada1's capacity with gpart.

root@bsd11:~ # gpart create -s GPT ada1
ada1 created
root@bsd11:~ #
root@bsd11:~ # gpart show ada1
=>      40  20971440  ada1  GPT  (10G)
        40  20971440        - free -  (10G)

root@bsd11:~ #

 

Allocate the whole ada1 device as a freebsd-ufs partition.

To allocate a specific size, use e.g. gpart add -t freebsd-ufs -s 1G ada1.

root@bsd11:~ # gpart add -t freebsd-ufs  ada1
ada1p1 added
root@bsd11:~ # gpart show ada1
=>      40  20971440  ada1  GPT  (10G)
        40  20971440     1  freebsd-ufs  (10G)

root@bsd11:~ #

 

Formatting the file system

root@bsd11:~ # newfs -U /dev/ada1p1
/dev/ada1p1: 10240.0MB (20971440 sectors) block size 32768, fragment size 4096
        using 17 cylinder groups of 626.09MB, 20035 blks, 80256 inodes.
        with soft updates
super-block backups (for fsck_ffs -b #) at:
 192, 1282432, 2564672, 3846912, 5129152, 6411392, 7693632, 8975872, 10258112, 11540352, 12822592, 14104832, 15387072, 16669312, 17951552, 19233792, 20516032
root@bsd11:~ #

 

Creating the mount point and registering it in /etc/fstab

root@bsd11:~ # mkdir /data
root@bsd11:~ # vi /etc/fstab
# Device                Mountpoint      FStype  Options         Dump    Pass#
/dev/ada0p2             none    swap    sw              0       0
/dev/ada1p1             /data   ufs     rw              0       0

 

Verifying the mount

root@bsd11:~ # mount -a
root@bsd11:~ # df -h |grep data
/dev/ada1p1             9.7G    8.0K    8.9G     0%    /data
root@bsd11:~ #

 

Deleting the partition

Remove the /data line from /etc/fstab and unmount the disk.

Remove the /data line that was added:
root@bsd11:~ # vi /etc/fstab
# Device                Mountpoint      FStype  Options         Dump    Pass#
/dev/ada0p2             none    swap    sw              0       0
/dev/ada1p1             /data   ufs     rw              0       0

root@bsd11:~ # umount /data
root@bsd11:~ # df -h |grep data
root@bsd11:~ #

 

Deleting the /data partition and the GPT table with gpart

Check the disk layout with gpart:
root@bsd11:~ # gpart show
=>      40  83886000  ada0  GPT  (40G)
        40      1024     1  freebsd-boot  (512K)
      1064       984        - free -  (492K)
      2048   4194304     2  freebsd-swap  (2.0G)
   4196352  79687680     3  freebsd-zfs  (38G)
  83884032      2008        - free -  (1.0M)

=>      40  20971440  ada1  GPT  (10G)
        40  20971440     1  freebsd-ufs  (10G)


Delete slice 1 on the ada1 device:
root@bsd11:~ # gpart delete -i1 ada1
ada1p1 deleted

The ada1 device still has a GPT table:
root@bsd11:~ # gpart show ada1
=>      40  20971440  ada1  GPT  (10G)
        40  20971440        - free -  (10G)

To delete the GPT table completely, use the gpart destroy command:
root@bsd11:~ # gpart destroy -F ada1
ada1 destroyed
root@bsd11:~ #

 

Example: creating slices to use as ZFS volumes

root@bsd11:~ # gpart show
=>      63  41942977  ada0  MBR  (20G)
        63         1        - free -  (512B)
        64  41942975     1  freebsd  [active]  (20G)
  41943039         1        - free -  (512B)

=>       0  41942975  ada0s1  BSD  (20G)
         0  39845888       1  freebsd-ufs  (19G)
  39845888   2097086       2  freebsd-swap  (1.0G)
  41942974         1          - free -  (512B)

=>      40  20971440  vtbd0  GPT  (10G)
        40   2097152      1  freebsd-zfs  (1.0G)
   2097192  18874288         - free -  (9.0G)

root@bsd11:~ #

root@bsd11:~ # gpart add -t freebsd-zfs -s 1G vtbd0
vtbd0p2 added
root@bsd11:~ # gpart add -t freebsd-zfs -s 1G vtbd0
vtbd0p3 added
root@bsd11:~ # gpart add -t freebsd-zfs -s 1G vtbd0
vtbd0p4 added
root@bsd11:~ # gpart add -t freebsd-zfs -s 1G vtbd0
vtbd0p5 added
root@bsd11:~ # gpart add -t freebsd-zfs -s 1G vtbd0
vtbd0p6 added
root@bsd11:~ #



 

Check the created slices

root@bsd11:~ # gpart show
=>      63  41942977  ada0  MBR  (20G)
        63         1        - free -  (512B)
        64  41942975     1  freebsd  [active]  (20G)
  41943039         1        - free -  (512B)

=>       0  41942975  ada0s1  BSD  (20G)
         0  39845888       1  freebsd-ufs  (19G)
  39845888   2097086       2  freebsd-swap  (1.0G)
  41942974         1          - free -  (512B)

=>      40  20971440  vtbd0  GPT  (10G)
        40   2097152      1  freebsd-zfs  (1.0G)
   2097192   2097152      2  freebsd-zfs  (1.0G)
   4194344   2097152      3  freebsd-zfs  (1.0G)
   6291496   2097152      4  freebsd-zfs  (1.0G)
   8388648   2097152      5  freebsd-zfs  (1.0G)
  10485800   2097152      6  freebsd-zfs  (1.0G)
  12582952   8388528         - free -  (4.0G)

root@bsd11:~ #
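In the gpart show output above, each 1.0G slice spans 2097152 sectors; with 512-byte sectors that is exactly 1 GiB, which is easy to verify:

```shell
# Verify that a 2097152-sector slice equals 1 GiB at 512 bytes/sector.
sectors=2097152
bytes=$((sectors * 512))

echo $((bytes / 1024 / 1024 / 1024))  # -> 1
```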

 

 

Deleting a slice

root@bsd11:~ # gpart delete -i 1 vtbd0
vtbd0p1 deleted

 

FreeBSD httpd-vhosts.conf configuration

Reference: https://httpd.apache.org/docs/2.4/ko/sections.html

This sets up Apache 2.4 virtual hosts on FreeBSD for wiki.test.com and blog.test.com.

 

Adding the blog.test.com VirtualHost

Create the directory to be used for blog.test.com and set its ownership:

root@bsd11:~ # mkdir /www
root@bsd11:~ # chown www:www /www/

 

Modify the apache24 configuration

root@bsd11:~ # vi /usr/local/etc/apache24/httpd.conf
…(snip)
# Virtual hosts
Include etc/apache24/extra/httpd-vhosts.conf

LoadModule vhost_alias_module libexec/apache24/mod_vhost_alias.so


root@bsd11:~ # cd /usr/local/etc/apache24/extra/
root@bsd11:/usr/local/etc/apache24/extra # vi httpd-vhosts.conf

<VirtualHost *:80>
    ServerAdmin admin@test.com
    DocumentRoot "/usr/local/www/dokuwiki"
    ServerName wiki.test.com
    ErrorLog "/var/log/wiki.test.com-error_log"
    CustomLog "/var/log/wiki.test.com-access_log" common
</VirtualHost>
<VirtualHost *:80>
    ServerAdmin admin@test.com
    DocumentRoot "/www"
    ServerName blog.test.com
    ErrorLog "/var/log/blog.test.com-error_log"
    CustomLog "/var/log/blog.test.com-access_log" common
     <Directory "/www">
        AllowOverride None
        Order Allow,deny
        Allow from all
     </Directory>
</VirtualHost>

 

The <Directory> block for /www must be added to httpd.conf or httpd-vhosts.conf:

<Directory "/www">
   AllowOverride None
   Order Allow,deny
   Allow from all
</Directory>

 

After the configuration is done, restart apache24.

root@bsd11:~ # service apache24 restart
Performing sanity check on apache24 configuration:
Syntax OK
Stopping apache24.
Waiting for PIDS: 1125.
Performing sanity check on apache24 configuration:
Syntax OK
Starting apache24.
root@bsd11:~ #

 

Check in a web browser

 

 

FreeBSD ezjail ports install

Official page: https://www.freebsd.org/doc/handbook/jails-ezjail.html

Reference: https://www.cyberciti.biz/faq/howto-setup-freebsd-jail-with-ezjail/

https://www.davd.eu/posts-freebsd-jails-with-a-single-public-ip-address/

For details on FreeBSD jails, refer to the FreeBSD documentation.

 

This document covers a simple jail setup that can be tried on FreeBSD 11. For the ZFS pool, a VM that was installed with ZFS at FreeBSD install time was used.

You can also run the test on a separately built ZFS setup. ezjail can be installed with pkg install -y ezjail as well. 🙂

 

Jail network configuration

Create the lo1 device that the jails will use.

Edit /etc/rc.conf to configure the lo1 interface.

Assign the virtual IPs 10.0.0.1 – 10.0.0.9 for the jails to use.

Edit rc.conf:
root@bsd11:~ # vi /etc/rc.conf

#ifconfig_vtnet0="inet 192.168.0.40 netmask 255.255.255.0"
ifconfig_vtnet0_name="em0"
ifconfig_em0="inet 192.168.0.40 netmask 255.255.255.0"
defaultrouter="192.168.0.1"
cloned_interfaces="lo1"
ipv4_addrs_lo1="10.0.0.1-9/29"


Create the lo1 device:
root@bsd11:~ # service netif cloneup
Created clone interfaces: lo1.
root@bsd11:~ # ifconfig
vtnet0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=6c07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 52:54:00:40:19:eb
        hwaddr 52:54:00:40:19:eb
        inet 192.168.0.40 netmask 0xffffff00 broadcast 192.168.0.255
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet 10Gbase-T <full-duplex>
        status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1 netmask 0xff000000
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        groups: lo
lo1: flags=8008<LOOPBACK,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        groups: lo
root@bsd11:~ #

 

lo1 interface with the configured addresses

root@bsd11:~ # service netif cloneup
Created clone interfaces: lo1.

root@bsd11:~ # ifconfig
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=6c07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 52:54:00:2c:0c:a0
        hwaddr 52:54:00:2c:0c:a0
        inet 192.168.0.40 netmask 0xffffff00 broadcast 192.168.0.255
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet 10Gbase-T <full-duplex>
        status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1 netmask 0xff000000
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        groups: lo
lo1: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        inet 10.0.0.1 netmask 0xfffffff8
        inet 10.0.0.2 netmask 0xffffffff
        inet 10.0.0.3 netmask 0xffffffff
        inet 10.0.0.4 netmask 0xffffffff
        inet 10.0.0.5 netmask 0xffffffff
        inet 10.0.0.6 netmask 0xffffffff
        inet 10.0.0.7 netmask 0xffffffff
        inet 10.0.0.8 netmask 0xffffffff
        inet 10.0.0.9 netmask 0xffffffff
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        groups: lo
root@bsd11:~ #
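The cloned lo1 interface and its aliases come from /etc/rc.conf. The post does not show that part of the file, so the fragment below is a reconstruction from the ifconfig output above, following the standard rc.conf cloned-interface and alias conventions:

```
cloned_interfaces="lo1"
ifconfig_lo1="inet 10.0.0.1 netmask 255.255.255.248"
ifconfig_lo1_alias0="inet 10.0.0.2 netmask 255.255.255.255"
ifconfig_lo1_alias1="inet 10.0.0.3 netmask 255.255.255.255"
# ...alias2 through alias7 continue the same pattern up to 10.0.0.9
```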

 

Configuring the pf firewall

IP_PUB is the public IP address assigned to em0.

For the web service test, ports 443 and 80 are redirected to 10.0.0.1.

root@bsd11:~ # vi /etc/pf.conf
# Public IP address
IP_PUB="192.168.0.40"

# Packet normalization
scrub in all

# Allow outbound connections from within the jails
nat on em0 from lo1:network to any -> (em0)

# webserver jail at 10.0.0.1
rdr on em0 proto tcp from any to $IP_PUB port 443 -> 10.0.0.1
# just an example in case you want to redirect to another port within your jail
rdr on em0 proto tcp from any to $IP_PUB port 80 -> 10.0.0.1

root@bsd11:~ #

 

Starting the pf firewall

root@bsd11:~ # sysrc pf_enable=YES
pf_enable: NO -> YES
root@bsd11:~ # service pf start
Enabling pf.
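Before relying on the ruleset, it is worth validating and inspecting it with pfctl; the flags below are standard pfctl options:

```
root@bsd11:~ # pfctl -nf /etc/pf.conf    # parse /etc/pf.conf without loading it (syntax check)
root@bsd11:~ # pfctl -sn                 # show the loaded NAT/rdr rules
root@bsd11:~ # pfctl -sr                 # show the loaded filter rules
```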

 

Installing ezjail

root@bsd11:~ # whereis ezjail
ezjail: /usr/ports/sysutils/ezjail
root@bsd11:~ # cd /usr/ports/sysutils/ezjail/ && make install clean
root@bsd11:/usr/ports/sysutils/ezjail # rehash
root@bsd11:/usr/ports/sysutils/ezjail #

 

Copy resolv.conf into the newjail template so that new jails can resolve DNS names.

root@bsd11:~ # cp /etc/resolv.conf /usr/jails/newjail/etc/

 

Enable and start ezjail.

root@bsd11:/usr/ports/sysutils/ezjail # sysrc ezjail_enable=YES
ezjail_enable:  -> YES
root@bsd11:/usr/ports/sysutils/ezjail # service ezjail start

 

Creating the base jail template

root@bsd11:~ # ezjail-admin install
base.txz                                      100% of   99 MB 2970 kBps 00m34s
lib32.txz                                     100% of   17 MB 2761 kBps 00m07s
Looking up update.FreeBSD.org mirrors... 3 mirrors found.
Fetching metadata signature for 11.1-RELEASE from update5.freebsd.org... done.
Fetching metadata index... done.
Inspecting system...

 

After running ezjail-admin install, the following directories have been created.

root@bsd11:~ # ls -al /usr/jails
total 20
drwxr-xr-x   5 root  wheel  512 Mar  5 21:51 .
drwxr-xr-x  17 root  wheel  512 Mar  5 21:50 ..
drwxr-xr-x   9 root  wheel  512 Mar  5 21:51 basejail
drwxr-xr-x   3 root  wheel  512 Mar  5 21:51 flavours
drwxr-xr-x  13 root  wheel  512 Mar  5 21:51 newjail
root@bsd11:~ # ls -al /usr/jails/flavours/
total 12
drwxr-xr-x  3 root  wheel  512 Mar  5 21:51 .
drwxr-xr-x  5 root  wheel  512 Mar  5 21:51 ..
drwxr-xr-x  4 root  wheel  512 Mar  4 15:27 example
root@bsd11:~ # ls -al /usr/jails/basejail/
total 36
drwxr-xr-x   9 root  wheel   512 Mar  5 21:51 .
drwxr-xr-x   5 root  wheel   512 Mar  5 21:51 ..
drwxr-xr-x   2 root  wheel  1024 Mar  5 21:51 bin
drwxr-xr-x   9 root  wheel  1024 Mar  5 21:51 boot
drwxr-xr-x   4 root  wheel  1536 Mar  5 21:51 lib
drwxr-xr-x   3 root  wheel   512 Mar  5 21:51 libexec
drwxr-xr-x   2 root  wheel  2560 Mar  5 21:51 rescue
drwxr-xr-x   2 root  wheel  2560 Mar  5 21:51 sbin
drwxr-xr-x  11 root  wheel   512 Mar  5 21:51 usr
root@bsd11:~ # man /usr/jails
No manual entry for /usr/jails
root@bsd11:~ # ls -al /usr/local/etc/rc.d/ezjail
-rwxr-xr-x  1 root  wheel  8128 Mar  4 15:27 /usr/local/etc/rc.d/ezjail
root@bsd11:~ # ls -al /usr/local/etc/ezjail.conf
-rw-r--r--  1 root  wheel  2637 Mar  4 15:27 /usr/local/etc/ezjail.conf
root@bsd11:~ # ls -al /usr/local/etc/ezjail
total 8
drwxr-xr-x   2 root  wheel   512 Mar  4 15:27 .
drwxr-xr-x  12 root  wheel  1024 Mar  4 15:27 ..
root@bsd11:~ #

 

Install the ports tree for use inside the jails.

root@bsd11:~ # ezjail-admin install -p

 

Create an httpd jail for testing.

root@bsd11:~ # ezjail-admin create httpd 10.0.0.1
root@bsd11:~ # ezjail-admin start httpd
root@bsd11:~ # ezjail-admin list
STA JID  IP              Hostname                       Root Directory
--- ---- --------------- ------------------------------ ------------------------
DR  1    10.0.0.1        httpd                          /usr/jails/httpd
root@bsd11:~ #

 

You can also check with the jls command.

root@bsd11:~ # jls
   JID  IP Address      Hostname                      Path
     1  10.0.0.1        httpd                         /usr/jails/httpd
root@bsd11:~ #
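Besides create, start, and list, ezjail-admin also manages the rest of the jail life cycle. For example, with the httpd jail from above (note that -w also wipes the jail's directory from disk, so use it with care):

```
root@bsd11:~ # ezjail-admin stop httpd        # stop the jail
root@bsd11:~ # ezjail-admin restart httpd     # restart the jail
root@bsd11:~ # ezjail-admin delete -wf httpd  # stop, delete, and wipe the jail
```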

 

 

Checking the mounts after creating the httpd jail

The jail's filesystems are mounted under /usr/jails/httpd.

root@bsd11:~ # df -h
Filesystem             Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a            18G     11G    6.1G    64%    /
devfs                  1.0K    1.0K      0B   100%    /dev
/usr/jails/basejail     18G     11G    6.1G    64%    /usr/jails/httpd/basejail
devfs                  1.0K    1.0K      0B   100%    /usr/jails/httpd/dev
fdescfs                1.0K    1.0K      0B   100%    /usr/jails/httpd/dev/fd
procfs                 4.0K    4.0K      0B   100%    /usr/jails/httpd/proc
root@bsd11:~ #

 

 

Connect to the httpd jail's console.

root@bsd11:~ # ezjail-admin console httpd
FreeBSD 11.1-RELEASE (GENERIC) #0 r321309: Fri Jul 21 02:08:28 UTC 2017

Welcome to FreeBSD!

Release Notes, Errata: https://www.FreeBSD.org/releases/
Security Advisories:   https://www.FreeBSD.org/security/
FreeBSD Handbook:      https://www.FreeBSD.org/handbook/
FreeBSD FAQ:           https://www.FreeBSD.org/faq/
Questions List: https://lists.FreeBSD.org/mailman/listinfo/freebsd-questions/
FreeBSD Forums:        https://forums.FreeBSD.org/

Documents installed with the system are in the /usr/local/share/doc/freebsd/
directory, or can be installed later with:  pkg install en-freebsd-doc
For other languages, replace "en" with a language code like de or fr.

Show the version of FreeBSD installed:  freebsd-version ; uname -a
Please include that output and any error messages when posting questions.
Introduction to manual pages:  man man
FreeBSD directory layout:      man hier

Edit /etc/motd to change this login announcement.
root@httpd:~ #

 

 

Install apache24 inside the httpd jail.

Instead of building from ports, you can also install with the pkg command. 🙂

root@httpd:~ # make -C /usr/ports/www/apache24 config-recursive install
(output omitted)
root@httpd:~ # sysrc apache24_enable=YES
apache24_enable:  -> YES
root@httpd:~ # cat /etc/rc.conf
apache24_enable="YES"


root@httpd:~ # vi /usr/local/etc/apache24/httpd.conf
ServerName www.example.com:80


root@httpd:~ # service apache24 start

root@httpd:~ # sockstat  -4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
www      httpd      63005 3  tcp4   10.0.0.1:80           *:*
www      httpd      63004 3  tcp4   10.0.0.1:80           *:*
www      httpd      63003 3  tcp4   10.0.0.1:80           *:*
www      httpd      63002 3  tcp4   10.0.0.1:80           *:*
www      httpd      63001 3  tcp4   10.0.0.1:80           *:*
root     httpd      63000 3  tcp4   10.0.0.1:80           *:*
root     sendmail   3798  3  tcp4   10.0.0.1:25           *:*
root     syslogd    3718  6  udp4   10.0.0.1:514          *:*
root@httpd:~ #

 

 

Verifying access

Connecting to the public IP 192.168.0.40 configured on the VM's em0 lands in the httpd jail.

Creating the zroot/jails ZFS dataset

For the initial jail setup, create the ZFS dataset first and then proceed.

ZFS and file systems in general will be covered in a separate post.

 

Installing ezjail and registering it in rc.conf

root@bsd11:~ # pkg install -y ezjail
root@bsd11:~ # sysrc ezjail_enable=YES
ezjail_enable:  -> YES

 

To have ezjail manage jails on ZFS, edit ezjail.conf as follows.

root@bsd11:~ # vi /usr/local/etc/ezjail.conf
# to collect them in this directory
 ezjail_jaildir=/usr/jails

(lines omitted)
# ZFS options

# Setting this to YES will start to manage the basejail and newjail in ZFS
 ezjail_use_zfs="YES"

# Setting this to YES will manage ALL new jails in their own zfs
 ezjail_use_zfs_for_jails="YES"

# The name of the ZFS ezjail should create jails on, it will be mounted at the ezjail_jaildir
 ezjail_jailzfs="zroot/jails"

 

Checking zfs list

root@bsd11:~ # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot               1.66G  34.9G    88K  /zroot
zroot/ROOT           405M  34.9G    88K  none
zroot/ROOT/default   405M  34.9G   405M  /
zroot/tmp             88K  34.9G    88K  /tmp
zroot/usr           1.27G  34.9G    88K  /usr
zroot/usr/home        88K  34.9G    88K  /usr/home
zroot/usr/ports      665M  34.9G   665M  /usr/ports
zroot/usr/src        633M  34.9G   633M  /usr/src
zroot/var            584K  34.9G    88K  /var
zroot/var/audit       88K  34.9G    88K  /var/audit
zroot/var/crash       88K  34.9G    88K  /var/crash
zroot/var/log        136K  34.9G   136K  /var/log
zroot/var/mail        88K  34.9G    88K  /var/mail
zroot/var/tmp         96K  34.9G    96K  /var/tmp
root@bsd11:~ #

 

Creating the zroot/jails dataset

root@bsd11:~ # zfs create -p zroot/jails
root@bsd11:~ # zfs set mountpoint=/usr/jails zroot/jails
root@bsd11:~ # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot               1.66G  34.9G    88K  /zroot
zroot/ROOT           405M  34.9G    88K  none
zroot/ROOT/default   405M  34.9G   405M  /
zroot/jails           88K  34.9G    88K  /usr/jails
zroot/tmp             88K  34.9G    88K  /tmp
zroot/usr           1.27G  34.9G    88K  /usr
zroot/usr/home        88K  34.9G    88K  /usr/home
zroot/usr/ports      665M  34.9G   665M  /usr/ports
zroot/usr/src        633M  34.9G   633M  /usr/src
zroot/var            576K  34.9G    88K  /var
zroot/var/audit       88K  34.9G    88K  /var/audit
zroot/var/crash       88K  34.9G    88K  /var/crash
zroot/var/log        136K  34.9G   136K  /var/log
zroot/var/mail        88K  34.9G    88K  /var/mail
zroot/var/tmp         88K  34.9G    88K  /var/tmp
root@bsd11:~ #


Before:
zroot/jails           88K  34.9G    88K  /zroot/jails

After:
zroot/jails           88K  34.9G    88K  /usr/jails

 

Run ezjail-admin install to create the directories the jails need.

root@bsd11:~ # ezjail-admin install
base.txz                                        7% of   99 MB 2270 kBps 00m47s
lib32.txz                                     100% of   17 MB 1805 kBps 00m10s

 

Checking the directories

root@bsd11:~ # df -h |grep -i jails
zroot/jails              35G    104K     35G     0%    /usr/jails
zroot/jails/basejail     35G    296M     35G     1%    /usr/jails/basejail
zroot/jails/newjail      35G    4.7M     35G     0%    /usr/jails/newjail

When using ZFS, change ro -> rw as shown below, or installing from ports inside the jail will not work.

root@bsd11:~ # vi /etc/fstab.httpd
/usr/jails/basejail /usr/jails/httpd/basejail nullfs rw 0 0

Everything else is the same as above. 🙂

 

Building an apache24+php71 jail and a mariadb101 jail

httpd jail: apache24+php71 / ip-address 10.0.0.1

database jail: mariadb101 / ip-address 10.0.0.2

 

FreeBSD APM installation reference:

[apm] apache24-php71-mariadb102 installation

 

Modify pf.conf to redirect port 3306 to 10.0.0.2.

root@bsd11:~ # vi /etc/pf.conf
# Public IP address
IP_PUB="192.168.0.40"

# Packet normalization
scrub in all

# Allow outbound connections from within the jails
nat on em0 from lo1:network to any -> (em0)

# webserver jail at 10.0.0.1
rdr on em0 proto tcp from any to $IP_PUB port 443 -> 10.0.0.1
# just an example in case you want to redirect to another port within your jail
rdr on em0 proto tcp from any to $IP_PUB port 80 -> 10.0.0.1

#mariadb jail at 10.0.0.2
rdr on em0 proto tcp from any to $IP_PUB port 3306 -> 10.0.0.2

 

Create and start the httpd jail for apache24 and php71

root@bsd11:~ # ezjail-admin create httpd 10.0.0.1
root@bsd11:~ # cp /etc/resolv.conf /usr/jails/httpd/etc/
root@bsd11:~ # ezjail-admin start httpd

 

Create and start the database jail for mariadb101

root@bsd11:~ # ezjail-admin create database 10.0.0.2
root@bsd11:~ # cp /etc/resolv.conf /usr/jails/database/etc/
root@bsd11:~ # ezjail-admin start database

 

Change the jail filesystems to rw.

root@bsd11:~ # vi /etc/fstab.httpd
/usr/jails/basejail /usr/jails/httpd/basejail nullfs rw 0 0

root@bsd11:~ # vi /etc/fstab.database
/usr/jails/basejail /usr/jails/database/basejail nullfs rw 0 0

 

 

Checking the jail list and connecting to the httpd jail

Use the ezjail-admin console command to enter a jail.

root@bsd11:~ # ezjail-admin list
STA JID  IP              Hostname                       Root Directory
--- ---- --------------- ------------------------------ ------------------------
ZS  N/A  10.0.0.1        httpd                          /usr/jails/httpd
ZS  N/A  10.0.0.2        database                       /usr/jails/database
root@bsd11:~ # ezjail-admin console httpd
FreeBSD 11.1-RELEASE (GENERIC) #0 r321309: Fri Jul 21 02:08:28 UTC 2017

Welcome to FreeBSD!

Release Notes, Errata: https://www.FreeBSD.org/releases/
Security Advisories:   https://www.FreeBSD.org/security/
FreeBSD Handbook:      https://www.FreeBSD.org/handbook/
FreeBSD FAQ:           https://www.FreeBSD.org/faq/
Questions List: https://lists.FreeBSD.org/mailman/listinfo/freebsd-questions/
FreeBSD Forums:        https://forums.FreeBSD.org/

Documents installed with the system are in the /usr/local/share/doc/freebsd/
directory, or can be installed later with:  pkg install en-freebsd-doc
For other languages, replace "en" with a language code like de or fr.

Show the version of FreeBSD installed:  freebsd-version ; uname -a
Please include that output and any error messages when posting questions.
Introduction to manual pages:  man man
FreeBSD directory layout:      man hier

Edit /etc/motd to change this login announcement.
root@httpd:~ #

 

Installing apache24

root@httpd:~ # make -C /usr/ports/www/apache24 config-recursive install
To run apache www server from startup, add apache24_enable="yes"
in your /etc/rc.conf. Extra options can be found in startup script.

Your hostname must be resolvable using at least 1 mechanism in
/etc/nsswitch.conf typically DNS or /etc/hosts or apache might
have issues starting depending on the modules you are using.

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

- apache24 default build changed from static MPM to modular MPM
- more modules are now enabled per default in the port
- icons and error pages moved from WWWDIR to DATADIR

   If build with modular MPM and no MPM is activated in
   httpd.conf, then mpm_prefork will be activated as default
   MPM in etc/apache24/modules.d to keep compatibility with
   existing php/perl/python modules!

Please compare the existing httpd.conf with httpd.conf.sample
and merge missing modules/instructions into httpd.conf!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

===> SECURITY REPORT:
      This port has installed the following files which may act as network
      servers and may therefore pose a remote security risk to the system.
/usr/local/libexec/apache24/mod_cgid.so

      This port has installed the following startup scripts which may cause
      these network services to be started at boot time.
/usr/local/etc/rc.d/apache24
/usr/local/etc/rc.d/htcacheclean

      If there are vulnerabilities in these programs there may be a security
      risk to the system. FreeBSD makes no guarantee about the security of
      ports included in the Ports Collection. Please type 'make deinstall'
      to deinstall the port if this is a concern.

      For more information, and contact details about the security
      status of this software, see the following webpage:
http://httpd.apache.org/
root@httpd:~ #

 

Installing php71

Since we are inside a jail there is no ZFS to enable, so make config may not matter much here, but for php71 run make config and accept the defaults with OK anyway.

It does not seem to make a difference. 🙂

root@httpd:~ # cd /usr/ports/lang/php71/
root@httpd:/usr/ports/lang/php71 # make config


root@httpd:/usr/ports/lang/php71-extensions # cd
root@httpd:~ # make -C /usr/ports/lang/php71-extensions config-recursive install

In the install options, select CURL, FTP, GD, MYSQLi, OPENSSL, SOCKETS, PDF, SNMP, and ZIP, then proceed with the installation.

 

Installing mod_php71

Building from ports fails with an error, so install it with the pkg command instead.

root@httpd:~ # pkg install -y mod_php71

Post-install message:

Message from mod_php71-7.1.14:

***************************************************************

Make sure index.php is part of your DirectoryIndex.

You should add the following to your Apache configuration file:

<FilesMatch "\.php$">
    SetHandler application/x-httpd-php
</FilesMatch>
<FilesMatch "\.phps$">
    SetHandler application/x-httpd-php-source
</FilesMatch>

*********************************************************************

If you are building PHP-based ports in poudriere(8) with ZTS enabled,
add WITH_MPM=event to /etc/make.conf to prevent build failures.

*********************************************************************

 

 

Installing mariadb101

Connect to the database jail.

root@bsd11:~ # ezjail-admin console database

 

Install mariadb101.

root@database:~ # make -C /usr/ports/databases/mariadb101-server/ config-recursive install

 

Configuring the httpd jail

Add apache24_enable to rc.conf

root@bsd11:~ # ezjail-admin console httpd
root@httpd:~ # sysrc apache24_enable=YES
apache24_enable:  -> YES
root@httpd:~ #

 

apache24 configuration

root@httpd:~ # cd /usr/local/etc/apache24/
root@httpd:/usr/local/etc/apache24 # cp httpd.conf httpd.conf.org
root@httpd:/usr/local/etc/apache24 # vi httpd.conf
<IfModule dir_module>
    DirectoryIndex index.html index.php
</IfModule>


ServerName 10.0.0.1:80

    #
    AddType application/x-compress .Z
    AddType application/x-gzip .gz .tgz
    AddType application/x-httpd-php .php .inc .html
    AddType application/x-httpd-php-source .phps

 

 

Copying the php.ini file

root@httpd:~ # cd /usr/local/etc/
root@httpd:/usr/local/etc # cp php.ini-production php.ini

 

Creating the php.conf file

root@httpd:~ # vi /usr/local/etc/apache24/extra/php.conf
<IfModule dir_module>
    DirectoryIndex index.php index.html
    <FilesMatch "\.php$">
        SetHandler application/x-httpd-php
    </FilesMatch>
    <FilesMatch "\.phps$">
        SetHandler application/x-httpd-php-source
    </FilesMatch>
</IfModule>
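Apache only reads the new extra/php.conf if httpd.conf includes it; assuming the stock FreeBSD apache24 layout, an Include line such as the following would be added:

```
root@httpd:~ # vi /usr/local/etc/apache24/httpd.conf
Include etc/apache24/extra/php.conf
```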

 

Restarting apache24

root@httpd:/usr/local/etc/apache24 # service apache24 restart
Performing sanity check on apache24 configuration:
Syntax OK
Stopping apache24.
Waiting for PIDS: 21662.
Performing sanity check on apache24 configuration:
Syntax OK
Starting apache24.
root@httpd:/usr/local/etc/apache24 #

 

 

Configuring the database jail

Connect to the database jail and configure mariadb101.

After starting mariadb, set the db password.

root@bsd11:~ # ezjail-admin console database
root@database:~ # sysrc mysql_enable=YES
mysql_enable:  -> YES

Starting the mariadb101 daemon and setting the password

root@database:~ # service mysql-server start
To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
To do so, start the server, then issue the following commands:

'/usr/local/bin/mysqladmin' -u root password 'new-password'
'/usr/local/bin/mysqladmin' -u root -h database password 'new-password'

Alternatively you can run:
'/usr/local/bin/mysql_secure_installation'

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the MariaDB Knowledgebase at http://mariadb.com/kb or the
MySQL manual for more instructions.

You can start the MariaDB daemon with:
cd '/usr/local' ; /usr/local/bin/mysqld_safe --datadir='/var/db/mysql'

You can test the MariaDB daemon with mysql-test-run.pl
cd '/usr/local/mysql-test' ; perl mysql-test-run.pl

Please report any problems at http://mariadb.org/jira

The latest information about MariaDB is available at http://mariadb.org/.
You can find additional information about the MySQL part at:
http://dev.mysql.com
Consider joining MariaDB's strong and vibrant community:
Get Involved
Starting mysql.
root@database:~ # /usr/local/bin/mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
root@database:~ #

 

Copying my.cnf and changing the character set

Setting bind-address to 0.0.0.0 allows connections from outside.

You should add separate pf rules so the database is reachable only internally. // That configuration is not covered in this post.

root@database:~ # cp /usr/local/share/mysql/my-large.cnf /usr/local/etc/my.cnf
root@database:~ # vi /usr/local/etc/my.cnf

[client]
#password       = your_password
port            = 3306
socket          = /tmp/mysql.sock
default-character-set = utf8


# The MariaDB server
[mysqld]
bind-address=0.0.0.0
character-set-server=utf8
skip-character-set-client-handshake
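The new my.cnf settings only take effect after the server restarts, a step the transcript skips; on FreeBSD that would be:

```
root@database:~ # service mysql-server restart
```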

 

Restarting mariadb and checking status

root@database:~ # mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 10.1.31-MariaDB FreeBSD Ports

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> status;
--------------
mysql  Ver 15.1 Distrib 10.1.31-MariaDB, for FreeBSD11.1 (amd64) using readline 5.1

Connection id:          3
Current database:
Current user:           root@localhost
SSL:                    Not in use
Current pager:          more
Using outfile:          ''
Using delimiter:        ;
Server:                 MariaDB
Server version:         10.1.31-MariaDB FreeBSD Ports
Protocol version:       10
Connection:             Localhost via UNIX socket
Server characterset:    utf8
Db     characterset:    utf8
Client characterset:    utf8
Conn.  characterset:    utf8
UNIX socket:            /tmp/mysql.sock
Uptime:                 11 sec

Threads: 1  Questions: 4  Slow queries: 0  Opens: 17  Flush tables: 1  Open tables: 11  Queries per second avg: 0.363
--------------

MariaDB [(none)]>

The database configuration is complete.

 

For testing, let's install WordPress. 🙂

WordPress can be downloaded from https://ko.wordpress.org/download/.

Copy the WordPress archive from the host into the httpd jail's root directory.

root@bsd11:~ # cp wordpress-4.9.4-ko_KR.zip /usr/jails/httpd/root/

 

Creating the test.php file

root@httpd:~ # cd /usr/local/www/apache24/data
root@httpd:/usr/local/www/apache24/data # vi test.php
<?php phpinfo(); ?>

 

Checking phpinfo

Everything is ready to install WordPress. 🙂

 

Creating the database in the database jail

The user is wp, the database is wp, and the password is password.

Grant privileges to '%' instead of localhost so the account can connect remotely.

root@database:~ # mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.1.31-MariaDB FreeBSD Ports

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create database wp;
Query OK, 1 row affected (0.01 sec)

MariaDB [(none)]> use mysql;
Database changed
MariaDB [mysql]> GRANT ALL ON wp.* TO 'wp'@'%' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.00 sec)

MariaDB [mysql]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

MariaDB [mysql]> quit;
Bye
root@database:~ #

 

Test a remote login to the database from outside the jail.

root@bsd11:~ # pkg install mariadb101-client
root@bsd11:~ # mysql -h10.0.0.2 -uwp -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.1.31-MariaDB FreeBSD Ports

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

 

Extract the WordPress archive in the httpd jail.

root@httpd:~ # cp wordpress-4.9.4-ko_KR.zip /usr/local/www/apache24/data/
root@httpd:/usr/local/www/apache24/data # tar xvf wordpress-4.9.4-ko_KR.zip
root@httpd:/usr/local/www/apache24/data # chown -R www:www wordpress

 

Web browser check

 

Click Let's go!.

Enter the database jail IP (10.0.0.2) as the database host.
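For reference, the values entered here end up in wp-config.php. With the account created earlier (database wp, user wp, password password, host 10.0.0.2), the generated fragment looks roughly like this (illustrative; the installer writes it for you):

```
define('DB_NAME', 'wp');
define('DB_USER', 'wp');
define('DB_PASSWORD', 'password');
define('DB_HOST', '10.0.0.2');
```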

 

Click Run the installation to proceed.

 

Fill in the basic WordPress information and click Install WordPress.

 

The WordPress installation is complete.

 

Verifying login

 

 

 

 

 

FreeBSD PF firewall (Packet Filter)

FreeBSD documentation: https://www.freebsd.org/doc/handbook/firewalls-pf.html

Reference page: https://www.cyberciti.biz/faq/how-to-set-up-a-firewall-with-pf-on-freebsd-to-protect-a-web-server/

 

To test the firewall, pure-ftpd must be installed and the sshd_config port changed.

See the link below for installing pure-ftpd.

[ftp-server] FreeBSD pure-ftpd installation

 

Editing /etc/rc.conf

root@bsd11:~ # vi /etc/rc.conf

#PF setting
pf_enable="YES"
pflog_enable="YES"
pf_rules="/etc/pf.conf"
pflog_logfile="/var/log/pflog"


 

Creating /etc/pf.conf

root@bsd11:~ # vi /etc/pf.conf
ext_if="em0"

set limit { states 80000, frags 5000 }

set block-policy drop

set skip on vnet1

set skip on lo0

scrub in all

antispoof for $ext_if

block in all

block out all

table <bruteforce> persist

table <sshbruteforce> persist

table <ftp> persist

block in quick log proto tcp from <bruteforce> to port 80

block in quick log proto tcp from <sshbruteforce> to port 2424

block in quick log proto tcp from <ftp> to port 21

pass in log proto tcp from any to port 21 keep state

pass in log proto tcp from any to port 30000:50000 keep state

pass in log proto tcp from any to port 2424 keep state

pass in on $ext_if proto tcp from any to $ext_if port 2424 \
            flags S/SA keep state \
            (max-src-conn-rate 10/30, overload <sshbruteforce> flush global)

pass in on $ext_if proto tcp from any to $ext_if port 80 \
        flags S/SA synproxy state

pass in on $ext_if proto tcp from any to $ext_if port 80 \
        flags S/SA keep state \
        (max-src-conn 100, max-src-conn-rate 300/10, \
        overload <bruteforce> flush global)
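The overload tables fill up as offenders hit the rate limits above; blocked addresses can be inspected and cleared at runtime with pfctl's table commands:

```
root@bsd11:~ # pfctl -t sshbruteforce -T show     # list addresses currently in the table
root@bsd11:~ # pfctl -t sshbruteforce -T flush    # remove all addresses from the table
```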

 

Changing sshd_config

root@bsd11:~ # vi /etc/ssh/sshd_config
#Port 22
Port 2424

 

 

Changing pure-ftpd.conf

Uncomment PassivePortRange to enable it and set the port range to use.

root@bsd11:~ # vi /usr/local/etc/pure-ftpd.conf

PassivePortRange             30000 50000

 

Rebooting the system

root@bsd11:~ # init 6

 

pf commands

pf config check 

root@bsd11:~ # service pf check
Checking pf rules.

 

Checking pf status

root@bsd11:~ # service pf status
Status: Enabled for 0 days 00:01:01           Debug: Urgent

State Table                          Total             Rate
  current entries                        0
  searches                             227            3.7/s
  inserts                                0            0.0/s
  removals                               0            0.0/s
Counters
  match                                227            3.7/s
  bad-offset                             0            0.0/s
  fragment                               0            0.0/s
  short                                  0            0.0/s
  normalize                              0            0.0/s
  memory                                 0            0.0/s
  bad-timestamp                          0            0.0/s
  congestion                             0            0.0/s
  ip-option                              0            0.0/s
  proto-cksum                            0            0.0/s
  state-mismatch                         0            0.0/s
  state-insert                           0            0.0/s
  state-limit                            0            0.0/s
  src-limit                              0            0.0/s
  synproxy                               0            0.0/s
  map-failed                             0            0.0/s
root@bsd11:~ #

 

 

 

Explanation of the pf.conf settings

— to be written later

 

 

FreeBSD: renaming a NIC device

On FreeBSD running under KVM, the NIC name is vtnet0.

This is fine in ordinary environments, but when configuring PF and the like it is easy to confuse with other virtual devices. 🙂

Here is a quick look at renaming vtnet0 -> em0.

 

Before

root@bsd11:~ # ifconfig
vtnet0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=6c07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 52:54:00:c1:cb:84
        hwaddr 52:54:00:c1:cb:84
        inet 192.168.0.40 netmask 0xffffff00 broadcast 192.168.0.255
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet 10Gbase-T <full-duplex>
        status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1 netmask 0xff000000
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        groups: lo
root@bsd11:~ #

 

Editing /etc/rc.conf and rebooting

root@bsd11:~ # cat /etc/rc.conf
hostname="bsd11"
keymap="us.iso.kbd"
ifconfig_vtnet0_name="em0"
ifconfig_em0="inet 192.168.0.40 netmask 255.255.255.0"
root@bsd11:~ # init 6

ifconfig_vtnet0_name="em0"  // renames the vtnet0 device to em0
ifconfig_em0="inet 192.168.0.40 netmask 255.255.255.0"  // configures the former vtnet0 address on em0

 

After

root@bsd11:~ # ifconfig
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=6c07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 52:54:00:c1:cb:84
        hwaddr 52:54:00:c1:cb:84
        inet 192.168.0.40 netmask 0xffffff00 broadcast 192.168.0.255
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet 10Gbase-T <full-duplex>
        status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1 netmask 0xff000000
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        groups: lo
root@bsd11:~ #

 

FreeBSD DokuWiki port install

About DokuWiki:

DokuWiki is a simple-to-use, highly versatile open-source wiki software that requires no database. Users love it for its clean, readable syntax. Administrators favor it because maintenance, backup, and integration are easy. With built-in access control and authentication connectors, it is especially well suited to enterprise environments. A vibrant community has contributed a large number of plugins that extend it far beyond a traditional wiki.

DokuWiki official site: https://www.dokuwiki.org/ko:dokuwiki

Prerequisites: before installing DokuWiki, a web server (apache or nginx) and PHP must already be installed.

No separate database is required.

 

Installing DokuWiki

root@bsd11:~ # whereis dokuwiki
dokuwiki: /usr/ports/www/dokuwiki
root@bsd11:~ # cd /usr/ports/www/dokuwiki/ && make config-recursive install

 

Post-install message

======================================================================
                          INSTALLATION NOTES

The wiki program have been installed to /usr/local/www/dokuwiki.

Please configure your web server to allow running PHP scripts there.

Please create dedicated data directory outside the installation directory
and make it owned by the process running these PHP scripts.  It is important
to make sure that your PHP intepreter does not allow running PHP scripts
there.

For first install, you may have to manually copy the contents from
/usr/local/www/dokuwiki/data into the newly created data directory and change
the owner of /usr/local/www/dokuwiki/conf to the web server.

Please go to http://www.your.host/dokuwiki/install.php to finish the
installation.  For FULL configuration instructions, see
http://wiki.splitbrain.org/wiki:config

After installation please change the permissions of
/usr/local/www/dokuwiki/conf back to root:wheel.

======================================================================

===>  Cleaning for php71-7.1.14
===>  Cleaning for php71-gd-7.1.14
===>  Cleaning for libXpm-3.5.12
===>  Cleaning for xextproto-7.3.0
===>  Cleaning for xorg-macros-1.19.1
===>  Cleaning for xproto-7.0.31
===>  Cleaning for libX11-1.6.5,1
===>  Cleaning for bigreqsproto-1.1.2
===>  Cleaning for xcmiscproto-1.2.2
===>  Cleaning for xtrans-1.3.5
===>  Cleaning for kbproto-1.0.7
===>  Cleaning for inputproto-2.3.2
===>  Cleaning for xf86bigfontproto-1.2.0
===>  Cleaning for libXau-1.0.8_3
===>  Cleaning for libXdmcp-1.1.2
===>  Cleaning for libxcb-1.12_2
===>  Cleaning for check-0.12.0
===>  Cleaning for xcb-proto-1.12
===>  Cleaning for python27-2.7.14_1
===>  Cleaning for libffi-3.2.1_2
===>  Cleaning for libpthread-stubs-0.4
===>  Cleaning for libxslt-1.1.29_1
===>  Cleaning for libgcrypt-1.8.2
===>  Cleaning for libgpg-error-1.27
===>  Cleaning for libiconv-1.14_11
===>  Cleaning for libXext-1.3.3_1,1
===>  Cleaning for libXt-1.1.5,1
===>  Cleaning for libSM-1.2.2_3,1
===>  Cleaning for libICE-1.0.9_1,1
===>  Cleaning for freetype2-2.8_1
===>  Cleaning for png-1.6.34
===>  Cleaning for jpeg-turbo-1.5.3
===>  Cleaning for nasm-2.13.03,1
===>  Cleaning for php71-mbstring-7.1.14
===>  Cleaning for oniguruma-6.7.1
===>  Cleaning for php71-openssl-7.1.14
===>  Cleaning for php71-session-7.1.14
===>  Cleaning for php71-xml-7.1.14
===>  Cleaning for php71-zlib-7.1.14
===>  Cleaning for dokuwiki-20170219e
root@bsd11:/usr/ports/www/dokuwiki #

 

If you are not rebooting after the install, run rehash.

root@bsd11:/usr/ports/www/dokuwiki # rehash
root@bsd11:/usr/ports/www/dokuwiki #
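Note that `rehash` is a csh/tcsh builtin (root's default shell on FreeBSD). If you happen to be working in a POSIX sh or bash session instead, the equivalent is to clear the shell's command hash table:

```shell
# POSIX sh / bash equivalent of csh's rehash builtin:
hash -r
```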

 

apache24 configuration

httpd.conf settings // comment out the existing <Directory /> block and replace it as shown below.

Uncomment the module and vhost Include lines further down to enable them. (Note: Order/Deny/Allow are Apache 2.2-style directives; on Apache 2.4 they work only when mod_access_compat is loaded. The native 2.4 forms are Require all denied / Require all granted.)

root@bsd11:/usr/local/etc/apache24 # vi httpd.conf
#<Directory />
#    AllowOverride none
#    Require all denied
#</Directory>
<Directory />
    AllowOverride none
    Order deny,allow
    Deny from all
</Directory>

Alias /wiki /usr/local/www/dokuwiki
<Directory "/usr/local/www/dokuwiki">
    AllowOverride None
    Order Allow,deny
    Allow from all
</Directory>


LoadModule vhost_alias_module libexec/apache24/mod_vhost_alias.so

LoadModule rewrite_module libexec/apache24/mod_rewrite.so

# Virtual hosts
Include etc/apache24/extra/httpd-vhosts.conf
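The port's install notes also ask you to configure the web server to run PHP scripts in the DokuWiki directory. How to do this depends on your PHP setup; a minimal sketch, assuming mod_php (e.g. the www/mod_php71 port) is installed and loaded, would also go in httpd.conf:

```apache
# Sketch: assumes mod_php (e.g. www/mod_php71) is loaded.
# With php-fpm you would instead proxy .php requests via mod_proxy_fcgi.
DirectoryIndex index.php index.html
<FilesMatch "\.php$">
    SetHandler application/x-httpd-php
</FilesMatch>
```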


 

Configure httpd-vhosts.conf.

wiki.test.com was set up in advance on the DNS server, temporarily, for this test.

root@bsd11:~ # cd /usr/local/etc/apache24/extra
root@bsd11:/usr/local/etc/apache24/extra # vi httpd-vhosts.conf
<VirtualHost *:80>
    ServerAdmin admin@test.com
    DocumentRoot "/usr/local/www/dokuwiki"
    ServerName wiki.test.com
    ErrorLog "/var/log/wiki.test.com-error_log"
    CustomLog "/var/log/wiki.test.com-access_log" common
</VirtualHost>

 

Setting permissions on the dokuwiki directory

root@bsd11:/usr/local/etc/apache24/extra # cd /usr/local/www/
root@bsd11:/usr/local/www # chown -R www:www dokuwiki/

 

Restart apache24.

root@bsd11:/usr/local/etc/apache24/extra # service apache24 restart
Performing sanity check on apache24 configuration:
Syntax OK
Stopping apache24.
Waiting for PIDS: 719.
Performing sanity check on apache24 configuration:
Syntax OK
Starting apache24.
root@bsd11:/usr/local/etc/apache24/extra #

 

 

http://vhost.domain/install.php or http://domain/dokuwiki/install.php

If you select Ko at "Choose your language:", the pages are displayed in Korean.

For testing, fill in some rough information and click Save.

 

Click your new DokuWiki.

 

Click Login.

 

Log in.

 

DokuWiki has been installed successfully. 🙂

 

For security settings, see the link below.

http://www.dokuwiki.org/security 
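One point from that page worth calling out: DokuWiki's internal directories should not be reachable over the web. A minimal sketch in native Apache 2.4 syntax (paths follow the standard DokuWiki layout):

```apache
# Deny web access to DokuWiki's internal directories (Apache 2.4 syntax).
<DirectoryMatch "^/usr/local/www/dokuwiki/(data|conf|bin|inc)/">
    Require all denied
</DirectoryMatch>
```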

Adding an nginx web container using the jwilder/nginx-proxy image

 

[docker] docker-compose Nginx-proxy multi-wordpress site

In the previous post, multiple WordPress containers were set up with docker-compose and Nginx-proxy.

This time we set up a plain nginx web site instead of WordPress.

 

Github

test@docker-test:~$ git clone https://github.com/visualwork/Docker-test.git

The files are under /Docker-test/test06. 🙂

 

 

As docker ps shows, the nginx-proxy container accepts connections on port 80 and routes them to the other containers' virtual hosts.

test@docker-test:~/web-service$ docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                NAMES
5d75a1fe1bc0        wordpress:latest      "docker-entrypoint.s…"   5 minutes ago       Up 5 minutes        80/tcp               wp_blog2
c5a3310e88d8        mariadb               "docker-entrypoint.s…"   5 minutes ago       Up 5 minutes        3306/tcp             wp_blog_db2
9e7e6c795f52        wordpress:latest      "docker-entrypoint.s…"   8 minutes ago       Up 8 minutes        80/tcp               wp_blog
242f3c32f658        mariadb               "docker-entrypoint.s…"   8 minutes ago       Up 8 minutes        3306/tcp             wp_blog_db
fc8761f89097        jwilder/nginx-proxy   "/app/docker-entrypo…"   11 minutes ago      Up 11 minutes       0.0.0.0:80->80/tcp   nginx-proxy
test@docker-test:~/web-service$

 

Example: blog1 docker-compose.yml file

  wordpress:
     image: wordpress:latest
     expose:
       - 80
     restart: always
     volumes:
       - ./web-data:/var/www/html
     environment:
       VIRTUAL_HOST: blog1.test.com

VIRTUAL_HOST is set to blog1.test.com. nginx-proxy reads the VIRTUAL_HOST value under environment: and registers the virtual host automatically.

 

Create the directory that nginx will use, then write a temporary docker-compose.yml.

This docker-compose file is temporary, only to test that the container starts.

test@docker-test:~/web-service$ mkdir www
test@docker-test:~/web-service$ cd www
test@docker-test:~/web-service/www$ vi docker-compose.yml
version: '2'
services:
  nginx-www:
    image: nginx:latest
    restart: always
    environment:
      - VIRTUAL_HOST=www.test.com
    container_name: nginx-www

networks:
  default:
    external:
      name: nginx-proxy

 

Start the nginx-www container.

test@docker-test:~/web-service/www$ docker-compose up -d --build
Pulling nginx-www (nginx:latest)...
latest: Pulling from library/nginx
8176e34d5d92: Already exists
5b19c1bdd74b: Pull complete
4e9f6296fa34: Pull complete
Digest: sha256:4771d09578c7c6a65299e110b3ee1c0a2592f5ea2618d23e4ffe7a4cab1ce5de
Status: Downloaded newer image for nginx:latest
Creating nginx-www ... done
test@docker-test:~/web-service/www$

 

Check the nginx-www container

test@docker-test:~/web-service/www$ docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                NAMES
7fcc8b64bf87        nginx:latest          "nginx -g 'daemon of…"   43 seconds ago      Up 42 seconds       80/tcp               nginx-www
5d75a1fe1bc0        wordpress:latest      "docker-entrypoint.s…"   12 minutes ago      Up 12 minutes       80/tcp               wp_blog2
c5a3310e88d8        mariadb               "docker-entrypoint.s…"   12 minutes ago      Up 12 minutes       3306/tcp             wp_blog_db2
9e7e6c795f52        wordpress:latest      "docker-entrypoint.s…"   15 minutes ago      Up 15 minutes       80/tcp               wp_blog
242f3c32f658        mariadb               "docker-entrypoint.s…"   15 minutes ago      Up 15 minutes       3306/tcp             wp_blog_db
fc8761f89097        jwilder/nginx-proxy   "/app/docker-entrypo…"   18 minutes ago      Up 18 minutes       0.0.0.0:80->80/tcp   nginx-proxy
test@docker-test:~/web-service/www$

 

Connect to the nginx-proxy container.

Check the IP information for www.test.com.

test@docker-test:~/web-service/www$ docker exec -it fc8761f89097 /bin/bash
root@fc8761f89097:/app# cat /etc/nginx/conf.d/default.conf
... (output omitted)
# www.test.com
upstream www.test.com {
                                ## Can be connect with "nginx-proxy" network
                        # nginx-www
                        server 172.18.0.7:80;
}
server {
        server_name www.test.com;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        location / {
                proxy_pass http://www.test.com;
        }
}
root@fc8761f89097:/app# 

 

Use the docker inspect command to check the IP assigned to nginx-www.

test@docker-test:~/web-service/www$ docker inspect nginx-www |grep 172
                    "Gateway": "172.18.0.1",
                    "IPAddress": "172.18.0.7",
test@docker-test:~/web-service/www$

Everything is configured correctly. 🙂

Stop and remove the container with docker-compose down.

test@docker-test:~/web-service/www$ docker-compose down
Stopping nginx-www ... done
Removing nginx-www ... done
Network nginx-proxy is external, skipping
test@docker-test:~/web-service/www$

 

Create directories

test@docker-test:~/web-service/www$ mkdir www-data
test@docker-test:~/web-service/www$ mkdir -p mariadb/db-data
test@docker-test:~/web-service/www$ mkdir -p nginx/conf
test@docker-test:~/web-service/www$ mkdir -p php/conf

 

Create the docker-compose file

test@docker-test:~/web-service/www$ vi docker-compose.yml
version: '2'
services:
  mysql:
    image: mariadb:10.3.1
    volumes:
      - ./mariadb/db-data/:/var/lib/mysql
      - ./mariadb/my.cnf:/etc/mysql/mariadb.conf.d/custom.cnf
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: docker
      MYSQL_PASSWORD: docker
      MYSQL_DATABASE: docker
    container_name: mariadb-www

  nginx-www:
    image: nginx:latest
    restart: always
    volumes:
      - ./nginx/conf:/etc/nginx/conf.d
      - ./www-data:/www

    environment:
      - VIRTUAL_HOST=www.test.com
    container_name: nginx-www

  php:
    build: php
    expose:
      - 9000
    restart: always
    volumes:
      - ./php/conf/php.ini:/usr/local/etc/php/conf.d/custom.ini
      - ./www-data:/www
    container_name: php-www


networks:
  default:
    external:
      name: nginx-proxy
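Optionally, since nginx-www forwards .php requests to the php service and the site uses the mysql service, start order can be hinted with depends_on (supported by compose file format 2). This is a sketch to merge into the services above; note that depends_on only orders container startup, it does not wait for a service to become ready:

```yaml
# Sketch: start-order hints to merge into the services above.
  php:
    depends_on:
      - mysql
  nginx-www:
    depends_on:
      - php
```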

 

Create the nginx/conf/default.conf file

test@docker-test:~/web-service/www$ vi nginx/conf/default.conf
server {
    listen       80 default_server;
    server_name  localhost _;
    index        index.php index.html index.htm;
    root         /www;

    location / {
        try_files   $uri $uri/ /index.php?$query_string;
        autoindex on;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

 

Create the php/Dockerfile

test@docker-test:~/web-service/www$ vi php/Dockerfile
FROM php:7.1-fpm

RUN apt-get update

# Some basic extensions
RUN docker-php-ext-install -j$(nproc) json mbstring opcache pdo pdo_mysql mysqli

# Curl
RUN apt-get install -y libcurl4-openssl-dev
RUN docker-php-ext-install -j$(nproc) curl

# GD
RUN apt-get install -y libpng-dev libjpeg-dev
RUN docker-php-ext-install -j$(nproc) gd

# Intl
RUN apt-get install -y libicu-dev
RUN docker-php-ext-install -j$(nproc) intl

 

Create the php/conf/php.ini file

test@docker-test:~/web-service/www$ vi php/conf/php.ini
display_errors = On
display_startup_errors = On
default_charset = "UTF-8"
html_errors = On
date.timezone = Asia/Seoul
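The Dockerfile above installs opcache (docker-php-ext-install also enables it); if you want its tuning to be explicit, this php.ini can be extended. The values below are illustrative defaults, not tuned recommendations:

```ini
; Optional opcache settings; values are illustrative.
opcache.enable = 1
opcache.memory_consumption = 128
opcache.max_accelerated_files = 4000
```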

 

Create the mariadb/my.cnf file

test@docker-test:~/web-service/www$ vi mariadb/my.cnf
# MariaDB database server configuration file.
#
# You can copy this file to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "~/.my.cnf" to set user-specific options.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html

# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# escpecially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
port            = 3306
socket          = /var/run/mysqld/mysqld.sock
default-character-set = utf8mb4


# Here is entries for some specific programs
# The following values assume you have at least 32M ram

# This was formally known as [safe_mysqld]. Both versions are currently parsed.
[mysqld_safe]
socket          = /var/run/mysqld/mysqld.sock
nice            = 0

[mysqld]
#
# * Basic Settings
#
#user           = mysql
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
port            = 3306
basedir         = /usr
datadir         = /var/lib/mysql
tmpdir          = /tmp
lc_messages_dir = /usr/share/mysql
lc_messages     = en_US
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
#bind-address           = 127.0.0.1
#
# * Fine Tuning
#
max_connections         = 100
connect_timeout         = 5
wait_timeout            = 600
max_allowed_packet      = 16M
thread_cache_size       = 128
sort_buffer_size        = 4M
bulk_insert_buffer_size = 16M
tmp_table_size          = 32M
max_heap_table_size     = 32M
#
# * MyISAM
#
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched. On error, make copy and try a repair.
myisam_recover_options = BACKUP
key_buffer_size         = 128M
#open-files-limit       = 2000
table_open_cache        = 400
myisam_sort_buffer_size = 512M
concurrent_insert       = 2
read_buffer_size        = 2M
read_rnd_buffer_size    = 1M
#
# * Query Cache Configuration
#
# Cache only tiny result sets, so we can fit more in the query cache.
query_cache_limit               = 128K
query_cache_size                = 64M
# for more write intensive setups, set to DEMAND or OFF
#query_cache_type               = DEMAND
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file        = /var/log/mysql/mysql.log
#general_log             = 1
#
# Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf.
#
# we do want to know about network errors and such
#log_warnings           = 2
#
# Enable the slow query log to see queries with especially long duration
#slow_query_log[={0|1}]
slow_query_log_file     = /var/log/mysql/mariadb-slow.log
long_query_time = 10
#log_slow_rate_limit    = 1000
#log_slow_verbosity     = query_plan

#log-queries-not-using-indexes
#log_slow_admin_statements
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
#       other settings you may need to change.
#server-id              = 1
#report_host            = master1
#auto_increment_increment = 2
#auto_increment_offset  = 1
#log_bin                        = /var/log/mysql/mariadb-bin
#log_bin_index          = /var/log/mysql/mariadb-bin.index
# not fab for performance, but safer
#sync_binlog            = 1
expire_logs_days        = 10
max_binlog_size         = 100M
# slaves
#relay_log              = /var/log/mysql/relay-bin
#relay_log_index        = /var/log/mysql/relay-bin.index
#relay_log_info_file    = /var/log/mysql/relay-bin.info
#log_slave_updates
#read_only
#
# If applications support it, this stricter sql_mode prevents some
# mistakes like inserting invalid dates etc.
#sql_mode               = NO_ENGINE_SUBSTITUTION,TRADITIONAL
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
default_storage_engine  = InnoDB
# you can't just change log file size, requires special procedure
#innodb_log_file_size   = 50M
innodb_buffer_pool_size = 256M
innodb_log_buffer_size  = 8M
innodb_file_per_table   = 1
innodb_open_files       = 400
innodb_io_capacity      = 400
innodb_flush_method     = O_DIRECT
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem

character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci



#
# * Galera-related settings
#
[galera]
# Mandatory settings
#wsrep_on=ON
#wsrep_provider=
#wsrep_cluster_address=
#binlog_format=row
#default_storage_engine=InnoDB
#innodb_autoinc_lock_mode=2
#
# Allow server to accept connections on all interfaces.
#
#bind-address=0.0.0.0
#
# Optional setting
#wsrep_slave_threads=1
#innodb_flush_log_at_trx_commit=0

[mysqldump]
quick
quote-names
max_allowed_packet      = 16M

[mysql]
#no-auto-rehash # faster start of mysql but no tab completion

[isamchk]
key_buffer              = 16M

#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/

 

Run docker-compose

test@docker-test:~/web-service/www$ docker-compose up -d --build

 

Check the containers

test@docker-test:~/web-service/www$ docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                    NAMES
130f3eb196e3        www_php               "docker-php-entrypoi…"   10 seconds ago      Up 9 seconds        9000/tcp                 php-www
563f8bbccd33        mariadb:10.3.1        "docker-entrypoint.s…"   10 seconds ago      Up 9 seconds        0.0.0.0:3306->3306/tcp   mariadb-www
677fc32c5a4b        nginx:latest          "nginx -g 'daemon of…"   10 seconds ago      Up 9 seconds        80/tcp                   nginx-www
5d75a1fe1bc0        wordpress:latest      "docker-entrypoint.s…"   About an hour ago   Up About an hour    80/tcp                   wp_blog2
c5a3310e88d8        mariadb               "docker-entrypoint.s…"   About an hour ago   Up About an hour    3306/tcp                 wp_blog_db2
9e7e6c795f52        wordpress:latest      "docker-entrypoint.s…"   About an hour ago   Up About an hour    80/tcp                   wp_blog
242f3c32f658        mariadb               "docker-entrypoint.s…"   About an hour ago   Up About an hour    3306/tcp                 wp_blog_db
fc8761f89097        jwilder/nginx-proxy   "/app/docker-entrypo…"   About an hour ago   Up About an hour    0.0.0.0:80->80/tcp       nginx-proxy
test@docker-test:~/web-service/www$

 

Create a test.php file

test@docker-test:~/web-service/www$ vi www-data/test.php
<?php phpinfo(); ?>

 

Check in a web browser

 

mariadb test

For testing, I installed Gnuboard5.

Note: the DB Host must be changed from localhost to the name of the DB container.

(Docker networking will be covered in a later post.)

 

 

Log in after installation

 

blog1.test.com / blog2.test.com / www.test.com

 

Good work. 🙂