FreeBSD locale configuration

site: https://www.freebsd.org/doc/handbook/using-localization.html

 

1. Changing the setting globally by editing /etc/login.conf (applies to all users)

Edit /etc/login.conf:

root@bsd11:~ # vi /etc/login.conf

me:\
        :charset=ko_KR.UTF-8:\
        :lang=ko_KR.UTF-8:
root@bsd11:~ # cap_mkdb /etc/login.conf

 

Reconnect and verify:

root@bsd11:~ # locale
LANG=ko_KR.UTF-8
LC_CTYPE="ko_KR.UTF-8"
LC_COLLATE="ko_KR.UTF-8"
LC_TIME="ko_KR.UTF-8"
LC_NUMERIC="ko_KR.UTF-8"
LC_MONETARY="ko_KR.UTF-8"
LC_MESSAGES="ko_KR.UTF-8"
LC_ALL=
root@bsd11:~ #

 

The same setting applies to regular users as well.

There is no need to edit a separate .login_conf.

$ cat .login_conf
# $FreeBSD: releng/11.1/share/skel/dot.login_conf 77995 2001-06-10 17:08:53Z ache $
#
# see login.conf(5)
#
#me:\
#       :charset=iso-8859-1:\
#       :lang=de_DE.ISO8859-1:
$ locale
LANG=ko_KR.UTF-8
LC_CTYPE="ko_KR.UTF-8"
LC_COLLATE="ko_KR.UTF-8"
LC_TIME="ko_KR.UTF-8"
LC_NUMERIC="ko_KR.UTF-8"
LC_MONETARY="ko_KR.UTF-8"
LC_MESSAGES="ko_KR.UTF-8"
LC_ALL=
$

 

 

 

2. To configure per user, edit .login_conf in the user's home directory.

Use this when a user needs a different character set:

$ vi .login_conf
#
#me:\
# :charset=iso-8859-1:\
# :lang=de_DE.ISO8859-1:


me:\
        :charset=ko_KR.UTF-8:\
        :lang=ko_KR.UTF-8:

$ locale
LANG=
LC_CTYPE="C"
LC_COLLATE="C"
LC_TIME="C"
LC_NUMERIC="C"
LC_MONETARY="C"
LC_MESSAGES="C"
LC_ALL=
$

Reconnect over ssh and verify:

$ locale
LANG=ko_KR.UTF-8
LC_CTYPE="ko_KR.UTF-8"
LC_COLLATE="ko_KR.UTF-8"
LC_TIME="ko_KR.UTF-8"
LC_NUMERIC="ko_KR.UTF-8"
LC_MONETARY="ko_KR.UTF-8"
LC_MESSAGES="ko_KR.UTF-8"
LC_ALL=
$

 

The /usr/share/skel part below can be skipped.

For a global setting, editing /etc/login.conf is simpler. 🙂

To change the default for all newly created users, edit dot.login_conf in the /usr/share/skel directory:

root@bsd11-Client:~ # vi /usr/share/skel/dot.login_conf
# $FreeBSD: releng/11.1/share/skel/dot.login_conf 77995 2001-06-10 17:08:53Z ache $
#
# see login.conf(5)
#
me:\
        :charset=ko_KR.UTF-8:\
        :lang=ko_KR.UTF-8:

 

As a test, create a test2 user. 🙂

root@bsd11-Client:~ # pw user add test2 -m
root@bsd11-Client:~ # passwd  test2
Changing local password for test2
New Password:
Retype New Password:

 

Switch to test2 and check the locale:

$ locale
LANG=ko_KR.UTF-8
LC_CTYPE="ko_KR.UTF-8"
LC_COLLATE="ko_KR.UTF-8"
LC_TIME="ko_KR.UTF-8"
LC_NUMERIC="ko_KR.UTF-8"
LC_MONETARY="ko_KR.UTF-8"
LC_MESSAGES="ko_KR.UTF-8"
LC_ALL=
$

 

3. The method that used to be common 🙂

These days only the global /etc/login.conf setting is used.

For the root user:

root@bsd11-Client:~ # vi .cshrc
setenv LANG ko_KR.UTF-8
setenv LC_ALL ko_KR.UTF-8

root@bsd11-Client:~ # locale
LANG=ko_KR.UTF-8
LC_CTYPE="ko_KR.UTF-8"
LC_COLLATE="ko_KR.UTF-8"
LC_TIME="ko_KR.UTF-8"
LC_NUMERIC="ko_KR.UTF-8"
LC_MONETARY="ko_KR.UTF-8"
LC_MESSAGES="ko_KR.UTF-8"
LC_ALL=ko_KR.UTF-8
root@bsd11-Client:~ #

Verify the applied locale:

root@bsd11-Client:~ # locale
LANG=ko_KR.UTF-8
LC_CTYPE="ko_KR.UTF-8"
LC_COLLATE="ko_KR.UTF-8"
LC_TIME="ko_KR.UTF-8"
LC_NUMERIC="ko_KR.UTF-8"
LC_MONETARY="ko_KR.UTF-8"
LC_MESSAGES="ko_KR.UTF-8"
LC_ALL=
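The precedence between LANG, the per-category LC_* variables, and LC_ALL can be sketched in portable shell. The resolve_category function below is a hypothetical illustration of the lookup order libc uses, not a real API:

```shell
# How the value for one locale category (LC_TIME here) is resolved:
# LC_ALL overrides everything, then the category variable, then LANG,
# and finally the "C" default. resolve_category is an illustrative helper.
resolve_category() {
    if [ -n "${LC_ALL:-}" ]; then echo "$LC_ALL"
    elif [ -n "${LC_TIME:-}" ]; then echo "$LC_TIME"
    elif [ -n "${LANG:-}" ]; then echo "$LANG"
    else echo "C"
    fi
}

LANG=ko_KR.UTF-8 LC_ALL= LC_TIME=
resolve_category    # prints ko_KR.UTF-8 (falls back to LANG)
```

This matches the locale output above: LC_ALL is empty, so every LC_* category falls back to the LANG value.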

 

 

FreeBSD weechat IRC client install

While setting up an IRC server, I found myself installing weechat again after a long time.

For simple IRC use in a terminal, weechat is a good choice.

When using a GNOME or Xfce desktop, I recommend xirc. 🙂

For Linux IRC clients, https://www.tecmint.com/best-irc-clients-for-linux/ lists the most widely used ones.

 

Installing weechat from ports

The ports build failed, so I ended up installing with pkg.

root@bsd11-Client:~ # whereis weechat
weechat: /usr/ports/irc/weechat

root@bsd11-Client:~ # cd /usr/ports/irc/weechat && make install clean

===> Compilation failed unexpectedly.
Try to set MAKE_JOBS_UNSAFE=yes and rebuild before reporting the failure to
the maintainer.
*** Error code 1

Stop.
make[6]: stopped in /usr/ports/emulators/tpm-emulator
*** Error code 1

Stop.
make[5]: stopped in /usr/ports/security/trousers
*** Error code 1

Stop.
make[4]: stopped in /usr/ports/security/trousers
*** Error code 1

Stop.
make[3]: stopped in /usr/ports/security/gnutls
*** Error code 1

Stop.
make[2]: stopped in /usr/ports/security/gnutls
*** Error code 1

Stop.
make[1]: stopped in /usr/ports/irc/weechat
*** Error code 1

Stop.
make: stopped in /usr/ports/irc/weechat
root@bsd11-Client:/usr/ports/irc/weechat # make MAKE_JOBS_UNSAFE=yes install clean
~ (output omitted)

[0/1] /usr/local/bin/cmake -H/usr/ports/emulators/tpm-emulator/work/tpm_emulator-0.7.4 -B/usr/ports/emulators/tpm-emulator/work/.build
-- Configuring done
-- Generating done
-- Build files have been written to: /usr/ports/emulators/tpm-emulator/work/.build
ninja: error: manifest 'build.ninja' still dirty after 100 tries

*** Error code 1

Stop.
make[6]: stopped in /usr/ports/emulators/tpm-emulator
*** Error code 1

Stop.
make[5]: stopped in /usr/ports/security/trousers
*** Error code 1

Stop.

 

 

Install weechat with the pkg command.

After some searching, installing the binary package seemed like the way to go, so I used pkg.

For a simple app with no heavy dependencies, rather than a server daemon, pkg saves both time and sanity. 🙂

root@bsd11-Client:~ # pkg install weechat
To run tcsd automatically, add the following line to /etc/rc.conf:

tcsd_enable="YES"

You might want to edit /usr/local/etc/tcsd.conf to reflect your setup.

If you want to use tcsd with software TPM emulator, use the following
configuration in /etc/rc.conf:

tcsd_enable="YES"
tcsd_mode="emulator"
tpmd_enable="YES"

To use TPM, add your_account to '_tss' group like following:

# pw groupmod _tss -m your_account
Message from lua52-5.2.4:

===> NOTICE:

The lua52 port currently does not have a maintainer. As a result, it is
more likely to have unresolved issues, not be up-to-date, or even be removed in
the future. To volunteer to maintain this port, please create an issue at:

https://bugs.freebsd.org/bugzilla

More information about port maintainership is available at:

https://www.freebsd.org/doc/en/articles/contributing/ports-contributing.html#maintain-port
root@bsd11-Client:~ #

Enable the daemons with sysrc:
root@bsd11-Client:~ # sysrc tcsd_enable="YES"
tcsd_enable: -> YES
root@bsd11-Client:~ # sysrc tcsd_mode="emulator"
tcsd_mode: -> emulator
root@bsd11-Client:~ # sysrc tpmd_enable="YES"
tpmd_enable: -> YES
root@bsd11-Client:~ # cat /etc/rc.conf
hostname="bsd11-Client"
keymap="us.iso.kbd"
ifconfig_em0="DHCP"
sshd_enable="YES"
powerd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
tcsd_enable="YES"
tcsd_mode="emulator"
tpmd_enable="YES"
root@bsd11-Client:~ #
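sysrc updates rc.conf in place instead of appending duplicate lines by hand. A rough portable sketch of that behavior, where set_rc is a hypothetical stand-in for the real sysrc(8):

```shell
# Minimal sketch of what sysrc does: set key="value" in an
# rc.conf-style file, replacing an existing assignment or appending
# a new line. set_rc is an illustrative helper, not the real sysrc(8).
set_rc() {
    key=$1 val=$2 f=$3
    awk -v k="$key" -v v="$val" '
        BEGIN { done = 0 }
        index($0, k "=") == 1 { print k "=\"" v "\""; done = 1; next }
        { print }
        END { if (!done) print k "=\"" v "\"" }
    ' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
}

rc=$(mktemp)
printf '%s\n' 'hostname="bsd11-Client"' 'sshd_enable="NO"' > "$rc"
set_rc sshd_enable YES "$rc"     # replaces the existing line
set_rc tcsd_enable YES "$rc"     # appends a new line
cat "$rc"
```

The key point is that running it twice for the same key never produces two assignments, which is why sysrc is preferred over echo >> /etc/rc.conf.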

 

Run weechat as a regular user.

$ whoami
test1
$ weechat-curses

You will see a screen like the one below.

Enter /server add testirc 192.168.0.100/6667 to create the config for the server you want to connect to.

A [server] section is added to the irc.conf file in the .weechat directory.

/exit

Enter /exit to leave weechat.

Once weechat has been run, a .weechat directory is created in the user's home directory.

 

Configure the irc.conf file in the .weechat directory.

For a simple connection test, only the addresses, autoconnect, nick, and username settings were changed.

$ vi irc.conf

[server_default]
addresses = "192.168.0.100/6667"
anti_flood_prio_high = 2
anti_flood_prio_low = 2
autoconnect = on
autojoin = ""
autoreconnect = on
autoreconnect_delay = 10
autorejoin = off
autorejoin_delay = 30
away_check = 0
away_check_max_nicks = 25
capabilities = ""
command = ""
command_delay = 0
connection_timeout = 60
ipv6 = off
local_hostname = ""
msg_kick = ""
msg_part = "WeeChat ${info:version}"
msg_quit = "WeeChat ${info:version}"
nicks = "test1,test11,test12,test13,test14"
nicks_alternate = on
notify = ""
password = ""
proxy = ""
realname = ""
sasl_fail = continue
sasl_key = ""
sasl_mechanism = plain
sasl_password = ""
sasl_timeout = 15
sasl_username = ""
split_msg_max_length = 512
ssl = off
ssl_cert = ""
ssl_dhkey_size = 2048
ssl_fingerprint = ""
ssl_priorities = "NORMAL:-VERS-SSL3.0"
ssl_verify = on
usermode = ""
username = "test1"


[server]
testirc.addresses = "192.168.0.100/6667"
testirc.proxy = ""
testirc.ipv6
testirc.ssl
testirc.ssl_cert = ""
testirc.ssl_priorities
testirc.ssl_dhkey_size
testirc.ssl_fingerprint = ""
testirc.ssl_verify
testirc.password
testirc.capabilities
testirc.sasl_mechanism
testirc.sasl_username
testirc.sasl_password
testirc.sasl_key
testirc.sasl_timeout
testirc.sasl_fail
testirc.autoconnect
testirc.autoreconnect
testirc.autoreconnect_delay
testirc.nicks = "test"
testirc.nicks_alternate
testirc.username = "test"
testirc.realname
testirc.local_hostname
testirc.usermode
testirc.command
testirc.command_delay
testirc.autojoin
testirc.autorejoin
testirc.autorejoin_delay
testirc.connection_timeout
testirc.anti_flood_prio_high
testirc.anti_flood_prio_low
testirc.away_check
testirc.away_check_max_nicks
testirc.msg_kick
testirc.msg_part
testirc.msg_quit
testirc.notify
testirc.split_msg_max_length

No comments were added for the individual settings.

I will document the settings in a future update.

 

Start weechat and connect to the IRC server.

$ weechat-curses

/join #testirc

Join the channel.

The weechat installation is complete. 🙂

 

The charset can be configured in charset.conf in the .weechat directory.

It can be changed as follows:

$ cd .weechat/
$ vi charset.conf
[default]
decode = "iso-8859-1"
encode = "UTF8"

Even if you change decode to UTF8, it reverts back to iso-8859-1.

Changing the locale

On FreeBSD the locale defaults to C.

To change a user's locale, edit the .login_conf file in the home directory:

$ vi .login_conf

# see login.conf(5)
#
#me:\
# :charset=iso-8859-1:\
# :lang=de_DE.ISO8859-1:
me:\
        :charset=ko_KR.UTF-8:\
        :lang=ko_KR.UTF-8:

Reconnect and verify:
$ locale
LANG=ko_KR.UTF-8
LC_CTYPE="ko_KR.UTF-8"
LC_COLLATE="ko_KR.UTF-8"
LC_TIME="ko_KR.UTF-8"
LC_NUMERIC="ko_KR.UTF-8"
LC_MONETARY="ko_KR.UTF-8"
LC_MESSAGES="ko_KR.UTF-8"
LC_ALL=
$

 

Testing Korean over an IRC connection

I looked for a way to change the charset decode setting to UTF8,

but found the following message instead. 🙂

 

Within weechat you can also change it with /set charset.default.decode "UTF8", but changing decode produces the message below:

23:21:39 weechat =!= | charset: UTF-8 is not allowed in charset decoding options (it is internal and default charset: decode of UTF-8 is OK even if you specify another
 | charset to decode)
23:21:39 weechat =!= | Error: failed to set option "charset.default.decode"

 

When checking with /set charset,

selecting charset.default.decode string "iso-8859-1" shows the following description: 🙂

charset.default.decode: global decoding charset: charset used to decode incoming messages when they are not UTF-8 valid

 

FreeBSD major update 11.1 -> 11.2

This uses freebsd-update rather than the make world method.

I will cover make world in a future post. 🙂

root@bsd11:~ # freebsd-update upgrade -r 11.2-RELEASE
Looking up update.FreeBSD.org mirrors... none found.
Fetching metadata signature for 11.1-RELEASE from update.FreeBSD.org... done.
Fetching metadata index... done.
Fetching 1 metadata files... done.
Inspecting system... done.

The following components of FreeBSD seem to be installed:
kernel/generic src/src world/base world/doc world/lib32

The following components of FreeBSD do not seem to be installed:
kernel/generic-dbg world/base-dbg world/lib32-dbg

Does this look reasonable (y/n)? y

Fetching metadata signature for 11.2-RELEASE from update.FreeBSD.org... done.

~ (output omitted)

47510....47520....47530....47540....47550....47560....47570....47580....47590....47600....47610....47620....47630....47640....47650....47660....
47670....47680....47690....47700....47710....47720....47730....47740....47750....47760....47770....47780....47790....47800....47810....47820....
47830....47840....47850....47860....47870....47880....47890....47900....47910....47920....47930....47940....47950....47960....47970....47980....
47990....48000....48010....48020....48030....48040....48050....48060....48070....48080....48090....48100....48110....48120....48130....48140....
48150....48160....48170....48180....48190....48200....48210....48220....48230....48240....48250....48260....48270....48280....48290....48300....
48310....48320....48330....48340....48350....48360....48370....48380....48390....48400....48410....48420....48430....48440....48450....48460....
48470....48480....48490....48500....48510.... done.
Applying patches...
Fetching 3598 files...

~ (output omitted)

The following files will be updated as part of updating to 11.2-RELEASE-p2:
/.cshrc
/.profile
/COPYRIGHT
/bin/[
/bin/cat
/bin/chflags
/bin/chio
/bin/chmod
/bin/cp
/bin/csh
/bin/date
/bin/dd
/bin/df
/bin/domainname
/bin/echo
/bin/ed
/bin/expr
/bin/freebsd-version
/bin/getfacl
/bin/hostname
/bin/kenv
To install the downloaded upgrades, run "/usr/sbin/freebsd-update install".


root@bsd11:~ # /usr/sbin/freebsd-update install
Kernel updates have been installed. Please reboot and run
"/usr/sbin/freebsd-update install" again to finish installing updates.
root@bsd11:~ #
root@bsd11:~ # init 6

 

Run freebsd-update again after rebooting:

root@bsd11:~ # freebsd-update install
root@bsd11:~ # init 6

root@bsd11:~ # uname -a
FreeBSD bsd11 11.2-RELEASE-p2 FreeBSD 11.2-RELEASE-p2 #0: Tue Aug 14 21:45:40 UTC 2018 root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64

In /etc/rc.conf, change the keymap from the old us.iso.kbd to us.

 

keymap="us"

root@bsd11:~ # cat /etc/rc.conf
hostname="bsd11"
#keymap="us.iso.kbd"
keymap="us"
ifconfig_em0="DHCP"
sshd_enable="YES"
ntpd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
root@bsd11:~ #

 

 

 

When I first installed mysql, I used the euckr charset.

Some time later I switched to utf8, and these days I use utf8mb4.

In short, utf8mb4 is a charset created to cover a wider range of characters than MySQL's utf8 charset.

With the rise of smartphones, emoticons are used heavily. I use them in my posts too. 🙂 <–

UTF-8 encoding requires up to 4 bytes per character.

MySQL's utf8 encoding only stores up to 3 bytes per character, while the utf8mb4 charset supports the full 4 bytes.

From now on, configure the utf8mb4 charset when installing mysql / mariadb!
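The 3-byte vs 4-byte difference is easy to check in the shell by counting the bytes of the UTF-8 encodings directly:

```shell
# A Hangul syllable lies in the BMP and encodes to 3 bytes in UTF-8,
# so MySQL's 3-byte "utf8" can store it; an emoji above U+FFFF needs
# 4 bytes, which only utf8mb4 can hold.
printf '한' | wc -c    # 3 bytes
printf '😀' | wc -c    # 4 bytes
```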

https://stackoverflow.com/questions/30074492/what-is-the-difference-between-utf8mb4-and-utf8-charsets-in-mysql

Sign up on the github site.

https://github.com

After signing up, verify your email address.

// Draft document! 2018-03-31

 

Click New repository at the top right to create a repository.

 

In the Owner field, bksanjuk is the github id; the Repository name was set to test.

(Set it to match your project name.)

By default a new repository is created as Public. When using AWS or other cloud services, make sure system / DB passwords are never committed.

Once the Repository name is set, click Create repository to finish.

 

 

The setup is complete.

 

Create a directory to commit to github:

test@ubuntu-test:~$ mkdir test
test@ubuntu-test:~$ cd test/

 

…or create a new repository on the command line

test@ubuntu-test:~/test$ echo "# test" >> README.md
test@ubuntu-test:~/test$ git init
Initialized empty Git repository in /home/test/test/.git/
test@ubuntu-test:~/test$ git add README.md
test@ubuntu-test:~/test$ git config --global user.email "bksanjuk@test.com"
test@ubuntu-test:~/test$ git config --global user.name "bksanjuk"
test@ubuntu-test:~/test$ git commit -m "first commit"
[master (root-commit) fb6376b] first commit
 1 file changed, 1 insertion(+)
 create mode 100644 README.md
test@ubuntu-test:~/test$ git remote add origin https://github.com/bksanjuk/test.git
test@ubuntu-test:~/test$ git push -u origin master
Username for 'https://github.com': bksanjuk
Password for 'https://bksanjuk@github.com':
Counting objects: 3, done.
Writing objects: 100% (3/3), 213 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To https://github.com/bksanjuk/test.git
 * [new branch]      master -> master
Branch master set up to track remote branch master from origin.
test@ubuntu-test:~/test$
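The local part of the transcript above can be replayed in a throwaway directory. The push to origin is skipped, and the email and name are placeholders:

```shell
# Reproduce the local init/add/commit workflow in a temp directory.
# user.email and user.name are placeholder values; the push to a
# remote is omitted since it needs credentials.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "you@example.com"
git config user.name "Example User"
echo "# test" > README.md
git add README.md
git commit -qm "first commit"
git log --oneline
```

Note that the transcript uses git config --global, which writes to ~/.gitconfig; the sketch uses per-repository config so it leaves the host configuration untouched.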

 

Verify on Github

FreeBSD Z file system ZFS

Draft document — 2018.03.16

official site: https://www.freebsd.org/doc/handbook/zfs.html

ZFS tuning : https://www.freebsd.org/doc/handbook/zfs-advanced.html

ZFS features and terminology: https://www.freebsd.org/doc/handbook/zfs-term.html#zfs-term-vdev

 

Test environment: kvm – FreeBSD 11 / 3 VirtIO disks

ZFS tuning points will be tested separately.

This post focuses only on the ZFS file system on FreeBSD 11.

For the ZFS tests, 3 disks were attached to the vm.

 

ZFS configuration

Edit /boot/loader.conf to enable ZFS:

root@bsd11:~ # vi /boot/loader.conf
vfs.zfs.min_auto_ashift=12
zfs_load="YES"

 

Add zfs_enable to rc.conf so the ZFS service starts:

root@bsd11:~ # sysrc zfs_enable=YES
zfs_enable: NO -> YES
root@bsd11:~ #

 

Reboot the system:

root@bsd11:~ # init 6

 

 

Checking the disks

Since the disks were attached via VirtIO in kvm, they show up as /dev/vtbdX rather than /dev/adaX:

root@bsd11:~ # ls -al /dev/vtb*
crw-r-----  1 root  operator  0x3d Mar 13 20:30 /dev/vtbd0
crw-r-----  1 root  operator  0x48 Mar 13 20:30 /dev/vtbd1
crw-r-----  1 root  operator  0x49 Mar 13 20:30 /dev/vtbd2
root@bsd11:~ #

 

Adding a single-disk pool

root@bsd11:~ # zpool create test /dev/vtbd0
root@bsd11:~ # df -h
Filesystem      Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a     18G    7.0G    9.9G    41%    /
devfs           1.0K    1.0K      0B   100%    /dev
test             19G     23K     19G     0%    /test
root@bsd11:~ #

 

Checking the mount

zpool create mounts the new test pool directly onto a directory. ;;

It appears this simply creates test and mounts it at the /test directory, without using any zfs features yet:

root@bsd11:/test # mount
/dev/ada0s1a on / (ufs, local, journaled soft-updates)
devfs on /dev (devfs, local, multilabel)
test on /test (zfs, local, nfsv4acls)

 

ZFS compressed file system example:

root@bsd11:~ # zfs create test/compressed
root@bsd11:~ # zfs set compression=gzip test/compressed

 

Disabling ZFS compression:

root@bsd11:~ # zfs set compression=off test/compressed

 

Checking the directories:

root@bsd11:~ # df -h
Filesystem         Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a        18G    7.0G    9.9G    41%    /
devfs              1.0K    1.0K      0B   100%    /dev
test                19G     23K     19G     0%    /test
test/compressed     19G     23K     19G     0%    /test/compressed
root@bsd11:~ #

 

Create a data file system and keep two copies of each data block:

root@bsd11:~ # zfs create test/data
root@bsd11:~ # zfs set copies=2 test/data

 

Each file system in the test pool shows the same available space:

root@bsd11:~ # df -h
Filesystem         Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a        18G    7.0G    9.9G    41%    /
devfs              1.0K    1.0K      0B   100%    /dev
test                19G     23K     19G     0%    /test
test/compressed     19G     23K     19G     0%    /test/compressed
test/data           19G     23K     19G     0%    /test/data
root@bsd11:~ #

 

Deleting the ZFS pool:

root@bsd11:~ # zfs destroy test/compressed
root@bsd11:~ # zfs destroy test/data
root@bsd11:~ # zpool destroy test

 

 

Z-RAID

This can be used to guard against data loss from disk failure; a Z-RAID configuration requires at least 3 disks.

Sun™ recommends that the number of devices used in a RAID-Z configuration be between three and nine. 
For environments requiring a single pool consisting of 10 disks or more, consider breaking it up into smaller RAID-Z groups. 
If only two disks are available and redundancy is a requirement, consider using a ZFS mirror. Refer to zpool(8) for more details.

A disk count of 3 to 9 is recommended for a RAID-Z configuration.

 

Checking the disks

root@bsd11:~ # ls -al /dev/vtb*
crw-r-----  1 root  operator  0x3d Mar 13 20:30 /dev/vtbd0
crw-r-----  1 root  operator  0x48 Mar 13 20:30 /dev/vtbd1
crw-r-----  1 root  operator  0x49 Mar 13 20:30 /dev/vtbd2
root@bsd11:~ #

 

Z-RAID configuration

Create the storage pool with zpool create:

root@bsd11:~ # zpool create storage raidz vtbd0 vtbd1 vtbd2

 

Create the home file system:

root@bsd11:~ # zfs create storage/home

 

Keep 2 copies and enable gzip compression:

root@bsd11:~ # zfs set copies=2 storage/home
root@bsd11:~ # zfs set compression=gzip storage/home

 

Migrate the existing /home directory (the user directories):

root@bsd11:~ # cp -rp /home/* /storage/home
root@bsd11:~ # rm -rf /home /usr/home
root@bsd11:~ # ln -s /storage/home /home
root@bsd11:~ # ln -s /storage/home /usr/home
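The migrate-then-symlink pattern above can be exercised on any filesystem. The paths below are throwaway temp directories standing in for /home and /storage/home:

```shell
# Copy a home tree to a new location, then symlink the old path to it
# so existing absolute paths keep resolving. Temp dirs stand in for
# the real /home and /storage/home.
set -e
root=$(mktemp -d)
mkdir -p "$root/home/test" "$root/storage/home"
echo hello > "$root/home/test/.cshrc"
cp -Rp "$root/home/." "$root/storage/home/"   # copy preserving modes/times
rm -rf "$root/home"
ln -s "$root/storage/home" "$root/home"
cat "$root/home/test/.cshrc"                  # resolves through the symlink
```

The copy must complete before the old tree is removed; on a live system you would also make sure no user is logged in during the migration.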

 

Test an ssh login as an existing user.

Login works normally:

[root@test ~]# ssh test@bsd11
Last login: Sat Mar  3 21:49:33 2018
FreeBSD 11.1-RELEASE (GENERIC) #0 r321309: Fri Jul 21 02:08:28 UTC 2017

Welcome to FreeBSD!

Release Notes, Errata: https://www.FreeBSD.org/releases/
Security Advisories:   https://www.FreeBSD.org/security/
FreeBSD Handbook:      https://www.FreeBSD.org/handbook/
FreeBSD FAQ:           https://www.FreeBSD.org/faq/
Questions List: https://lists.FreeBSD.org/mailman/listinfo/freebsd-questions/
FreeBSD Forums:        https://forums.FreeBSD.org/

Documents installed with the system are in the /usr/local/share/doc/freebsd/
directory, or can be installed later with:  pkg install en-freebsd-doc
For other languages, replace "en" with a language code like de or fr.

Show the version of FreeBSD installed:  freebsd-version ; uname -a
Please include that output and any error messages when posting questions.
Introduction to manual pages:  man man
FreeBSD directory layout:      man hier

Edit /etc/motd to change this login announcement.
You can change the video mode on all consoles by adding something like
the following to /etc/rc.conf:

        allscreens="80x30"

You can use "vidcontrol -i mode | grep T" for a list of supported text
modes.
                -- Konstantinos Konstantinidis <kkonstan@duth.gr>
$ ls -al
total 22
drwxr-xr-x  3 test  wheel    11 Mar  3 23:42 .
drwxr-xr-x  3 root  wheel     3 Mar 13 21:04 ..
-rw-r--r--  1 test  wheel  1055 Mar  3 21:43 .cshrc
-rw-r--r--  1 test  wheel   254 Mar  3 21:43 .login
-rw-r--r--  1 test  wheel   163 Mar  3 21:43 .login_conf
-rw-------  1 test  wheel   379 Mar  3 21:43 .mail_aliases
-rw-r--r--  1 test  wheel   336 Mar  3 21:43 .mailrc
-rw-r--r--  1 test  wheel   802 Mar  3 21:43 .profile
-rw-------  1 test  wheel   281 Mar  3 21:43 .rhosts
-rw-r--r--  1 test  wheel   849 Mar  3 21:43 .shrc
drwxr-xr-x  2 test  wheel     3 Mar  3 23:42 public_html
$ whoami
test

 

Creating a ZFS snapshot:

root@bsd11:~ # zfs snapshot storage/home@2018-03-13

 

ZFS snapshot rollback test

Delete public_html for the test:

root@bsd11:~ # ls -al /home/test/ |grep -i public_html
drwxr-xr-x  2 test  wheel     3 Mar  3 23:42 public_html
root@bsd11:~ #
root@bsd11:~ # rm -rf /home/test/public_html/
root@bsd11:~ # ls -al /home/test | grep -i publ

 

Test the ZFS rollback.

Use zfs list -t snapshot to check the snapshot, then roll back:

root@bsd11:~ # zfs list -t snapshot
NAME                      USED  AVAIL  REFER  MOUNTPOINT
storage/home@2018-03-13  32.0K      -  57.3K  -
root@bsd11:~ #
root@bsd11:~ # zfs rollback storage/home@2018-03-13
root@bsd11:~ # ls -al /home/test/ |grep -i public_html
drwxr-xr-x  2 test  wheel     3 Mar  3 23:42 public_html
root@bsd11:~ #

 

Removing the ZFS snapshot

Remove the snapshot and verify:

root@bsd11:~ # zfs destroy storage/home@2018-03-13
root@bsd11:~ # zfs list -t snapshot
no datasets available
root@bsd11:~ #

 

 

ZFS recovery

 

Check all Z-RAID pools:

root@bsd11:~ # zpool status -x
all pools are healthy
root@bsd11:~ #

All pools are online and healthy.

 

Checking the disks:

root@bsd11:~ # zpool status
  pool: storage
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

 

If a pool is unhealthy, the output looks like the following:

  pool: storage
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Online the device using 'zpool online' or replace the device with
	'zpool replace'.
 scrub: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	storage     DEGRADED     0     0     0
	  raidz1    DEGRADED     0     0     0
	    da0     ONLINE       0     0     0
	    da1     OFFLINE      0     0     0
	    da2     ONLINE       0     0     0

errors: No known data errors

 

 

Set the vtbd0 disk offline.

Even in the case of a hardware failure, a similar message is shown; shut down the system, swap the disk, and replace it into the storage volume:

root@bsd11:~ # zpool status
  pool: storage
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        storage                  DEGRADED     0     0     0
          raidz1-0               DEGRADED     0     0     0
            4767353646844092173  OFFLINE      0     0     0  was /dev/vtbd0
            vtbd1                ONLINE       0     0     0
            vtbd2                ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

The replace command has the following form:

zpool replace $zfs_pool_name $disk_name

root@bsd11:~ # zpool replace storage vtbd0
invalid vdev specification
use '-f' to override the following errors:
/dev/vtbd0 is part of active pool 'storage'
root@bsd11:~ #

 

 

If you reuse the disk that is still attached, bring it back with zpool online.

Use zpool replace storage only when attaching a separate replacement disk:

root@bsd11:~ # zpool replace storage vtbd0
invalid vdev specification
use '-f' to override the following errors:
/dev/vtbd0 is part of active pool 'storage'
root@bsd11:~ # zpool online storage vtbd0
root@bsd11:~ # zpool status
  pool: storage
 state: ONLINE
  scan: resilvered 17.5K in 0h0m with 0 errors on Tue Mar 13 22:16:57 2018
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Testing with a replaced disk

root@bsd11:~ # zpool status
  pool: storage
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 17.5K in 0h0m with 0 errors on Tue Mar 13 22:16:57 2018
config:

        NAME                     STATE     READ WRITE CKSUM
        storage                  DEGRADED     0     0     0
          raidz1-0               DEGRADED     0     0     0
            4767353646844092173  OFFLINE      0     0     0  was /dev/vtbd0
            vtbd1                ONLINE       0     0     0
            vtbd2                ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

After shutting down the vm, a 10G volume was added:

root@bsd11:~ # ls -al /dev/vtbd*
crw-r-----  1 root  operator  0x3d Mar 13 22:20 /dev/vtbd0
crw-r-----  1 root  operator  0x48 Mar 13 22:20 /dev/vtbd1
crw-r-----  1 root  operator  0x49 Mar 13 22:20 /dev/vtbd2
crw-r-----  1 root  operator  0x4a Mar 13 22:20 /dev/vtbd3
root@bsd11:~ #

 

Zpool Replace

Use the form zpool replace $pool_name $old_disk $new_disk:

root@bsd11:~ # zpool replace storage vtbd0 vtbd3
root@bsd11:~ # zpool status
  pool: storage
 state: ONLINE
  scan: resilvered 178K in 0h0m with 0 errors on Tue Mar 13 22:29:36 2018
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            vtbd3   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Data Verification

Checksums can be disabled, but it is not recommended! Checksums take very little storage space and provide data integrity. 
Many ZFS features will not work properly with checksums disabled. 
There is no noticeable performance gain from disabling these checksums.

Summary:

zfs uses checksums for data integrity, and the feature can be disabled.

Since the performance difference is negligible, it is best not to disable it.
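The idea behind checksummed data (and what zpool scrub verifies) can be illustrated with cksum(1) on an ordinary file:

```shell
# Record a checksum, corrupt the file, and detect the mismatch;
# zfs applies the same principle per block with stored checksums.
f=$(mktemp)
printf 'important data\n' > "$f"
before=$(cksum < "$f")
printf 'x' >> "$f"              # simulate silent corruption
after=$(cksum < "$f")
if [ "$before" != "$after" ]; then
    echo "corruption detected"
fi
```

Unlike this sketch, zfs stores the checksum separately from the data block, so it can also repair the block from a redundant copy when one exists.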

 

 

Checksum verification is known as scrubbing; the integrity of a zfs pool can be verified with the following command:

root@bsd11:~ # zpool scrub storage

 

 

Before the scrub:

scan: resilvered 178K in 0h0m with 0 errors on Tue Mar 13 22:29:36 2018

root@bsd11:~ # zpool status
  pool: storage
 state: ONLINE
  scan: resilvered 178K in 0h0m with 0 errors on Tue Mar 13 22:29:36 2018
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            vtbd3   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

 

After the scrub:

scan: scrub repaired 0 in 0h0m with 0 errors on Tue Mar 13 22:35:57 2018

root@bsd11:~ # zpool status storage
  pool: storage
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Tue Mar 13 22:35:57 2018
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            vtbd3   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

Scrub duration depends on the amount of stored data; verifying a large amount of data will take a long time. Only one scrub can run per pool at a time.

 

 

Zpool management

https://www.freebsd.org/doc/handbook/zfs-zpool.html

ZFS administration is split between two utilities: the zpool utility controls the operation of the pool,
handling the addition, removal, replacement, and management of disks.

The zfs utility manages the creation and deletion of datasets, both file systems and volumes.

 

 

Creating and destroying pools

Create a mirror pool:

root@bsd11:~ # zpool create testpool mirror /dev/vtbd0 /dev/vtbd1
invalid vdev specification
use '-f' to override the following errors:
/dev/vtbd0 is part of potentially active pool 'storage'
root@bsd11:~ #

Creation fails because the disks were previously used in the storage pool.

 

Clear the leftover ZFS labels with zpool labelclear:

root@bsd11:~ # zpool labelclear -f /dev/vtbd0
root@bsd11:~ # zpool labelclear -f /dev/vtbd1
root@bsd11:~ # zpool labelclear -f /dev/vtbd2
root@bsd11:~ # zpool labelclear -f /dev/vtbd3

 

Create testpool as a mirror with zpool create:

root@bsd11:~ # zpool create testpool mirror /dev/vtbd0 /dev/vtbd1
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

To create multiple vdevs, use the following form.

For ZFS terminology, see the features and terminology link at the top of this post.

root@bsd11:~ # zpool create testpool mirror /dev/vtbd0 /dev/vtbd1 mirror /dev/vtbd2 /dev/vtbd3
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0
            vtbd3   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Creating a RAID-Z2 pool

root@bsd11:~ # zpool create testpool raidz2 /dev/vtbd0p1 /dev/vtbd0p2 /dev/vtbd0p3 /dev/vtbd0p4 /dev/vtbd0p5 /dev/vtbd0p6
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        testpool     ONLINE       0     0     0
          raidz2-0   ONLINE       0     0     0
            vtbd0p1  ONLINE       0     0     0
            vtbd0p2  ONLINE       0     0     0
            vtbd0p3  ONLINE       0     0     0
            vtbd0p4  ONLINE       0     0     0
            vtbd0p5  ONLINE       0     0     0
            vtbd0p6  ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

 

Adding and removing devices

Use zpool attach to add a disk to an existing vdev, or zpool add to add a vdev to the pool.

A single disk has no redundancy, so corruption cannot be repaired when it is detected.

zpool attach can add a disk to a vdev to create a mirror, improving both redundancy and read performance.

root@bsd11:~ # zpool create testpool /dev/vtbd0p1
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          vtbd0p1   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Build a mirror with the zpool attach command:

root@bsd11:~ # zpool attach testpool vtbd0p1 vtbd0p2
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: resilvered 78.5K in 0h0m with 0 errors on Thu Mar 15 21:22:36 2018
config:

        NAME         STATE     READ WRITE CKSUM
        testpool     ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            vtbd0p1  ONLINE       0     0     0
            vtbd0p2  ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Test 를 위하여 VM 을 ZFS 로 설치하였으며, 동일한 40G Disk 2개를 붙였습니다.

OS 용 40G + Test 용 40G

기존 Disk 의 ada0p3 파티션 구성을 확인합니다.

root@bsd11:~ # gpart list
Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 83886039
first: 40
entries: 152
scheme: GPT
Providers:
1. Name: ada0p1
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   rawuuid: edce08d3-2127-11e8-a62a-8fd7aec5b81f
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: gptboot0
   length: 524288
   offset: 20480
   type: freebsd-boot
   index: 1
   end: 1063
   start: 40
2. Name: ada0p2
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 1048576
   Mode: r1w1e0
   rawuuid: edd96866-2127-11e8-a62a-8fd7aec5b81f
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: swap0
   length: 2147483648
   offset: 1048576
   type: freebsd-swap
   index: 2
   end: 4196351
   start: 2048
3. Name: ada0p3
   Mediasize: 40800092160 (38G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2148532224
   Mode: r1w1e1
   rawuuid: ede26582-2127-11e8-a62a-8fd7aec5b81f
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: zfs0
   length: 40800092160
   offset: 2148532224
   type: freebsd-zfs
   index: 3
   end: 83884031
   start: 4196352
Consumers:
1. Name: ada0
   Mediasize: 42949672960 (40G)
   Sectorsize: 512
   Mode: r2w2e3

 

신규 디스크를 기존 디스크와 비슷한 구성으로 파티셔닝 합니다. (아래에서는 세 파티션을 모두 freebsd-zfs 타입으로 만들었지만, 부팅용 첫 번째 파티션은 ada0p1 처럼 freebsd-boot 타입으로 만드는 것이 맞습니다.)

root@bsd11:~ # gpart create -s GPT ada1
ada1 created
root@bsd11:~ # gpart add -t freebsd-zfs -s 512K ada1
ada1p1 added
root@bsd11:~ # gpart add -t freebsd-zfs -s 2G ada1
ada1p2 added
root@bsd11:~ # gpart add -t freebsd-zfs  ada1
ada1p3 added
root@bsd11:~ #
root@bsd11:~ # gpart show
=>      40  83886000  ada0  GPT  (40G)
        40      1024     1  freebsd-boot  (512K)
      1064       984        - free -  (492K)
      2048   4194304     2  freebsd-swap  (2.0G)
   4196352  79687680     3  freebsd-zfs  (38G)
  83884032      2008        - free -  (1.0M)

=>      40  83886000  ada1  GPT  (40G)
        40      1024     1  freebsd-zfs  (512K)
      1064   4194304     2  freebsd-zfs  (2.0G)
   4195368  79690672     3  freebsd-zfs  (38G)

root@bsd11:~ #

 
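참고로 파티션을 하나씩 만드는 대신, gpart backup / gpart restore 로 기존 디스크의 GPT 레이아웃(파티션 타입 포함)을 그대로 복제할 수도 있습니다. 두 디스크 크기가 같다고 가정한 스케치입니다.

```shell
# ada0 의 파티션 테이블을 ada1 로 복제 (동일 크기 디스크 가정)
gpart backup ada0 | gpart restore -F ada1   # -F: ada1 의 기존 파티션 테이블을 덮어씀
```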

zpool attach 로 zroot vdev 에 ada1p3 를 추가합니다.

root@bsd11:~ # zpool status
  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          ada0p3    ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ # zpool attach zroot ada0p3 ada1p3
Make sure to wait until resilver is done before rebooting.

If you boot from pool 'zroot', you may need to update
boot code on newly attached disk 'ada1p3'.

Assuming you use GPT partitioning and 'da0' is your new boot disk
you may use the following command:

        gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0

root@bsd11:~ #

 

gpart bootcode 명령어로 신규 디스크를 부팅 가능하게 설정합니다.

root@bsd11:~ # gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
partcode written to ada1p1
bootcode written to ada1
root@bsd11:~ # zpool status
  pool: zroot
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Mar 15 22:35:38 2018
        4.70G scanned out of 6.65G at 50.2M/s, 0h0m to go
        4.70G resilvered, 70.75% done
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0p3  ONLINE       0     0     0
            ada1p3  ONLINE       0     0     0  (resilvering)

errors: No known data errors
root@bsd11:~ #

 

resilvering 은 ada0p3 의 내용을 ada1p3 로 복사하여 미러를 구성하는 과정이며, 시간이 다소 걸립니다.

(사용 중인 용량에 따라 달라집니다.)

완료 후 상태

root@bsd11:~ # zpool status
  pool: zroot
 state: ONLINE
  scan: resilvered 6.65G in 0h2m with 0 errors on Thu Mar 15 22:37:50 2018
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0p3  ONLINE       0     0     0
            ada1p3  ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

testpool 을 생성합니다. vtbd0 와 vtbd1 Disk 를 미러로 구성하였습니다.

root@bsd11:~ # zpool create testpool mirror /dev/vtbd0 /dev/vtbd1
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

지금 상태에서는 풀을 구성하는 vdev(mirror-0) 자체를 풀에서 제거할 수 없습니다.

Disk 를 추가합니다.

root@bsd11:~ # zpool add testpool mirror vtbd2 vtbd3
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0
            vtbd3   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Test 를 위하여 vtbd2 Disk 를 제거합니다.

충분한 여분의 중복이 있는 경우에만 Disk 를 분리(detach)할 수 있습니다.
미러 그룹에서 디스크가 하나만 남으면 해당 vdev 는 스트라이프(단일 디스크)로 동작합니다.

root@bsd11:~ # zpool detach testpool vtbd2
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0
          vtbd3     ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

POOL 상태 체크

 
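풀 상태를 주기적으로 점검할 때는 zpool status -x 가 간단합니다. 문제가 없으면 "all pools are healthy" 한 줄만 출력하므로 스크립트에서 다루기 좋습니다. 아래는 그 출력을 검사하는 쉘 스케치로, ZFS 가 없는 환경에서도 실행되도록 명령 출력을 변수로 가정했습니다.

```shell
#!/bin/sh
# zpool status -x 출력으로 풀 상태를 점검하는 스케치.
# ZFS 가 없는 환경을 위해 출력 문자열을 변수로 가정함 — 실제로는 $(zpool status -x) 를 사용.
status_output="all pools are healthy"
case "$status_output" in
  "all pools are healthy") echo "OK" ;;
  *) echo "NEEDS ATTENTION: $status_output" ;;
esac
```

cron 등에 넣어 "OK" 가 아닐 때만 알림을 보내는 식으로 활용할 수 있습니다.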

Disk 교체

zpool replace 는 이전 디스크의 모든 데이터를 새 디스크로 복사합니다.
작업이 완료되면 이전 디스크는 vdev 에서 분리됩니다.

root@bsd11:~ # zpool create testpool mirror /dev/vtbd0 /dev/vtbd1
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

vtbd1 Disk 를 vtbd2 로 교체합니다.

root@bsd11:~ # zpool replace testpool vtbd1 vtbd2
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: resilvered 80K in 0h0m with 0 errors on Thu Mar 15 23:07:04 2018
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Scrubbing pool

ZFS pool 은 정기적으로, 적어도 매월 한 번 이상 scrub 작업을 하는 것이 좋습니다.
scrub 은 I/O 부하가 커서 디스크를 많이 사용하는 시간에 실행하면 성능이 저하되므로, 한가한 시간대에 실행합니다.
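정기 scrub 은 FreeBSD 의 periodic(8) 로 자동화할 수 있습니다. /etc/periodic.conf 설정 예시이며, threshold 값(마지막 scrub 후 경과 일수)은 예시로 넣은 값입니다.

```shell
# /etc/periodic.conf — 800.scrub-zfs 주기 스크립트 활성화
daily_scrub_zfs_enable="YES"
daily_scrub_zfs_default_threshold="35"   # 마지막 scrub 후 35일이 지나면 다시 실행
```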

root@bsd11:~ # zpool scrub testpool
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Thu Mar 15 23:10:20 2018
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Self-Healing Test

ZFS 는 데이터 블록과 함께 체크섬을 저장하므로 파일 시스템이 손상을 스스로 감지하고 복구할 수 있습니다.
읽은 데이터가 체크섬과 일치하지 않으면, 풀의 다른 Disk 에 기록된 정상 복제본을 이용해 자동으로 복구합니다.

 

root@bsd11:/usr/local/etc # zpool status
  pool: testpool
 state: ONLINE
  scan: scrub repaired 5K in 0h0m with 0 errors on Thu Mar 15 23:17:43 2018
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0

errors: No known data errors
root@bsd11:/usr/local/etc #

 

Self-healing test 를 위하여 임의의 파일을 복사한 후 checksum 을 생성합니다.

root@bsd11:~ # cd /usr/local/etc/
root@bsd11:/usr/local/etc # cp * /testpool/
cp: apache24 is a directory (not copied).
cp: bash_completion.d is a directory (not copied).
cp: man.d is a directory (not copied).
cp: newsyslog.conf.d is a directory (not copied).
cp: periodic is a directory (not copied).
cp: php is a directory (not copied).
cp: php-fpm.d is a directory (not copied).
cp: rc.d is a directory (not copied).
cp: ssl is a directory (not copied).
root@bsd11:/usr/local/etc # cd
root@bsd11:~ # sha1 /testpool > checksum.txt
root@bsd11:~ # cat checksum.txt
SHA1 (/testpool) = 34d4723284883bf65b788e6674c7e475dc4102e9
root@bsd11:~ #

 

pool 을 export 한 뒤 vtbd0 Disk 를 dd 로 덮어써 인위적으로 손상시키고, 다시 import 합니다.

root@bsd11:~ # zpool export testpool
root@bsd11:~ # dd if=/dev/random of=/dev/vtbd0 bs=1m count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 2.680833 secs (78227611 bytes/sec)
root@bsd11:~ #
root@bsd11:~ # zpool import testpool

 

testpool 의 CKSUM 오류 카운트를 확인합니다.

root@bsd11:~ # zpool status testpool
  pool: testpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 5K in 0h0m with 0 errors on Thu Mar 15 23:17:43 2018
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     1

errors: No known data errors
root@bsd11:~ #

 

ZFS 는 오류의 영향을 받지 않은 미러 디스크 vtbd0 의 중복 데이터를 사용해 손상을 복구합니다.
손상 전에 만든 체크섬과 비교하여 데이터가 동일한지 확인합니다.

root@bsd11:~ # sha1 /testpool >> checksum.txt
root@bsd11:~ # cat checksum.txt
SHA1 (/testpool) = 34d4723284883bf65b788e6674c7e475dc4102e9
SHA1 (/testpool) = 34d4723284883bf65b788e6674c7e475dc4102e9
root@bsd11:~ #

풀 데이터를 의도적으로 손상시키기 전/후의 체크섬이 동일합니다.
ZFS 는 체크섬이 다르면 자동으로 오류를 감지하고 수정합니다.
단, 풀에 충분한 중복이 있는 경우에만 가능하며 단일 장치로 구성된 풀에는 자체 치유 기능이 없습니다.
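ZFS 가 체크섬으로 손상을 감지하는 원리를, ZFS 없이 일반 파일과 cksum(1) 으로 흉내 낸 스케치입니다 (파일 내용과 이름은 임의의 예시입니다).

```shell
#!/bin/sh
# 기록 시점의 체크섬을 보관해 두었다가, 읽을 때 다시 계산해 비교하는 원리.
# ZFS 는 이 비교를 블록 단위로 자동 수행하고, 불일치 시 미러의 정상 복제본으로 복구한다.
tmp=$(mktemp)
printf 'important data\n' > "$tmp"
good=$(cksum < "$tmp")                # 기록 시점에 저장해 둔 체크섬
printf 'CORRUPTED!!!!!!\n' > "$tmp"   # 디스크 손상을 흉내 (본문의 dd 역할)
now=$(cksum < "$tmp")                 # 읽기 시점에 다시 계산한 체크섬
if [ "$now" != "$good" ]; then
    echo "corruption detected"        # ZFS 라면 여기서 자동 복구 후 CKSUM 카운트 증가
fi
rm -f "$tmp"
```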

 

스크러빙 작업은 vtbd0 에서 정상 데이터를 읽어, vtbd1 의 체크섬이 잘못된 블록을 다시 기록합니다.

root@bsd11:~ # zpool scrub testpool
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 253K in 0h0m with 0 errors on Thu Mar 15 23:30:15 2018
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0    29

errors: No known data errors
root@bsd11:~ #

 

스크럽 작업이 완료되면 zpool clear 를 실행하여 오류 카운트를 지울 수 있습니다.

root@bsd11:~ # zpool clear testpool
root@bsd11:~ # zpool status
  pool: testpool
 state: ONLINE
  scan: scrub repaired 253K in 0h0m with 0 errors on Thu Mar 15 23:30:15 2018
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            vtbd0   ONLINE       0     0     0
            vtbd1   ONLINE       0     0     0

errors: No known data errors
root@bsd11:~ #

 

Pool export / import

root@bsd11:~ # zpool export testpool
root@bsd11:~ # zpool import
   pool: testpool
     id: 4252914017303616931
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        testpool    ONLINE
          mirror-0  ONLINE
            vtbd0   ONLINE
            vtbd1   ONLINE
root@bsd11:~ # df -h
Filesystem      Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a     18G    7.8G    9.1G    46%    /
devfs           1.0K    1.0K      0B   100%    /dev
root@bsd11:~ #

 

zpool import 의 -o altroot 옵션으로 마운트 경로를 지정할 수 있습니다.

경로를 지정하면 $altroot/$pool_name 위치에 마운트 됩니다.

root@bsd11:~ # zpool import -o altroot=/mnt testpool
root@bsd11:~ # df -h
Filesystem      Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a     18G    7.8G    9.1G    46%    /
devfs           1.0K    1.0K      0B   100%    /dev
testpool        9.6G    265K    9.6G     0%    /mnt/testpool
root@bsd11:~ #

 

Storage pool upgrade

FreeBSD 를 업그레이드하면 ZFS Version 도 업그레이드되어 새로운 기능을 지원할 수 있습니다. zpool upgrade 로 업그레이드할 수 있지만 다운그레이드는 불가합니다. (지원되는 기능 목록은 zpool upgrade -v 로 확인할 수 있습니다.)

# 이전에 사용하던 pool 도 upgrade 해야 하는지 여부는 확인이 필요할 것으로 보입니다. 🙂

 

root@bsd11:~ # zpool upgrade

 

이전에 사용하던 pool upgrade

root@bsd11:~ # zpool upgrade testpool

 

zpool history

root@bsd11:~ # zpool history
History for 'testpool':
2018-03-15.23:06:26 zpool create testpool mirror /dev/vtbd0 /dev/vtbd1
2018-03-15.23:07:09 zpool replace testpool vtbd1 vtbd2
2018-03-15.23:10:25 zpool scrub testpool
2018-03-15.23:13:10 zpool replace testpool vtbd2 vtbd1
2018-03-15.23:14:25 zpool export testpool
2018-03-15.23:16:48 zpool import testpool
2018-03-15.23:17:48 zpool scrub testpool
2018-03-15.23:18:08 zpool clear testpool
2018-03-15.23:24:30 zpool export testpool
2018-03-15.23:25:49 zpool import testpool
2018-03-15.23:30:20 zpool scrub testpool
2018-03-15.23:32:46 zpool clear testpool
2018-03-15.23:37:30 zpool export testpool
2018-03-15.23:39:04 zpool import -o altroot=/mnt testpool

root@bsd11:~ #

 

-i 옵션: 내부 ZFS 이벤트까지 표시합니다.

root@bsd11:~ # zpool history -i
History for 'testpool':
2018-03-15.23:06:26 [txg:5] create pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64
2018-03-15.23:06:26 zpool create testpool mirror /dev/vtbd0 /dev/vtbd1
2018-03-15.23:07:04 [txg:16] scan setup func=2 mintxg=3 maxtxg=16
2018-03-15.23:07:04 [txg:17] scan done errors=0
2018-03-15.23:07:04 [txg:18] vdev attach replace vdev=/dev/vtbd2 for vdev=/dev/vtbd1
2018-03-15.23:07:09 [txg:19] detach vdev=/dev/vtbd1
2018-03-15.23:07:09 zpool replace testpool vtbd1 vtbd2
2018-03-15.23:10:20 [txg:57] scan setup func=1 mintxg=0 maxtxg=57
2018-03-15.23:10:20 [txg:58] scan done errors=0
2018-03-15.23:10:25 zpool scrub testpool
2018-03-15.23:13:05 [txg:94] scan setup func=2 mintxg=3 maxtxg=94
2018-03-15.23:13:05 [txg:95] scan done errors=0
2018-03-15.23:13:05 [txg:96] vdev attach replace vdev=/dev/vtbd1 for vdev=/dev/vtbd2
2018-03-15.23:13:10 [txg:97] detach vdev=/dev/vtbd2
2018-03-15.23:13:10 zpool replace testpool vtbd2 vtbd1
2018-03-15.23:14:25 zpool export testpool
2018-03-15.23:16:43 [txg:117] open pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64
2018-03-15.23:16:43 [txg:119] import pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64
2018-03-15.23:16:48 zpool import testpool
2018-03-15.23:17:43 [txg:132] scan setup func=1 mintxg=0 maxtxg=132
2018-03-15.23:17:43 [txg:133] scan done errors=0
2018-03-15.23:17:48 zpool scrub testpool
2018-03-15.23:18:08 zpool clear testpool
2018-03-15.23:24:30 zpool export testpool
2018-03-15.23:25:44 [txg:219] open pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64
2018-03-15.23:25:44 [txg:221] import pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64
2018-03-15.23:25:49 zpool import testpool
2018-03-15.23:30:15 [txg:276] scan setup func=1 mintxg=0 maxtxg=276
2018-03-15.23:30:15 [txg:277] scan done errors=0
2018-03-15.23:30:20 zpool scrub testpool
2018-03-15.23:32:46 zpool clear testpool
2018-03-15.23:37:30 zpool export testpool
2018-03-15.23:38:58 [txg:369] open pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64
2018-03-15.23:38:58 [txg:371] import pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64
2018-03-15.23:39:04 zpool import -o altroot=/mnt testpool

root@bsd11:~ #

 

-l 옵션: 명령을 실행한 사용자 이름과 hostname 을 표시합니다.

root@bsd11:~ # zpool history -l
History for 'testpool':
2018-03-15.23:06:26 zpool create testpool mirror /dev/vtbd0 /dev/vtbd1 [user 0 (root) on bsd11]
2018-03-15.23:07:09 zpool replace testpool vtbd1 vtbd2 [user 0 (root) on bsd11]
2018-03-15.23:10:25 zpool scrub testpool [user 0 (root) on bsd11]
2018-03-15.23:13:10 zpool replace testpool vtbd2 vtbd1 [user 0 (root) on bsd11]
2018-03-15.23:14:25 zpool export testpool [user 0 (root) on bsd11]
2018-03-15.23:16:48 zpool import testpool [user 0 (root) on bsd11]
2018-03-15.23:17:48 zpool scrub testpool [user 0 (root) on bsd11]
2018-03-15.23:18:08 zpool clear testpool [user 0 (root) on bsd11]
2018-03-15.23:24:30 zpool export testpool [user 0 (root) on bsd11]
2018-03-15.23:25:49 zpool import testpool [user 0 (root) on bsd11]
2018-03-15.23:30:20 zpool scrub testpool [user 0 (root) on bsd11]
2018-03-15.23:32:46 zpool clear testpool [user 0 (root) on bsd11]
2018-03-15.23:37:30 zpool export testpool [user 0 (root) on bsd11]
2018-03-15.23:39:04 zpool import -o altroot=/mnt testpool [user 0 (root) on bsd11]

root@bsd11:~ #

 

zpool 모니터링

root@bsd11:~ # zpool iostat
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
testpool     372K  9.94G      0      0     96    447
root@bsd11:~ #

 

zpool iostat -v 는 Verbose 옵션입니다.

vdev 와 Disk 별 read/write 까지 표시되며, 간격(초)을 인자로 주면 (예: zpool iostat -v 5) 지속적으로 갱신하여 출력합니다.

root@bsd11:~ # zpool iostat -v
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
testpool     372K  9.94G      0      0     82    382
  mirror     372K  9.94G      0      0     82    382
    vtbd0       -      -      0      0  2.02K  1.65K
    vtbd1       -      -      0      0  1.38K  1.65K
----------  -----  -----  -----  -----  -----  -----

root@bsd11:~ #

 

 

ZFS 관리

https://www.freebsd.org/doc/handbook/zfs-zfs.html

zfs 유틸리티는 pool 안의 ZFS 데이터 세트를 생성, 삭제 및 관리합니다.

 

데이터 세트 생성 및 제거

root@bsd11:~ # zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
zroot                 6.65G  29.9G    88K  /zroot
zroot/ROOT             627M  29.9G    88K  none
zroot/ROOT/default     627M  29.9G   627M  /
zroot/jails           4.76G  29.9G   112K  /usr/jails
zroot/jails/basejail  1003M  29.9G   979M  /usr/jails/basejail
zroot/jails/database  1.95G  29.9G  1.95G  /usr/jails/database
zroot/jails/httpd     1.83G  29.9G  1.83G  /usr/jails/httpd
zroot/jails/newjail   4.66M  29.9G  4.66M  /usr/jails/newjail
zroot/jails/www       4.75M  29.9G  4.75M  /usr/jails/www
zroot/tmp               88K  29.9G    88K  /tmp
zroot/usr             1.27G  29.9G    88K  /usr
zroot/usr/home          88K  29.9G    88K  /usr/home
zroot/usr/ports        665M  29.9G   665M  /usr/ports
zroot/usr/src          633M  29.9G   633M  /usr/src
zroot/var              604K  29.9G    88K  /var
zroot/var/audit         88K  29.9G    88K  /var/audit
zroot/var/crash         88K  29.9G    88K  /var/crash
zroot/var/log          164K  29.9G   164K  /var/log
zroot/var/mail          88K  29.9G    88K  /var/mail
zroot/var/tmp           88K  29.9G    88K  /var/tmp
root@bsd11:~ #

 

신규 데이터 세트를 생성하면서 LZ4 압축을 활성화합니다. (압축 효율은 zfs get compressratio 로 확인할 수 있습니다.)

root@bsd11:~ # zfs create -o compress=lz4 zroot/zroottest
root@bsd11:~ # zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
zroot                 6.65G  29.9G    88K  /zroot
zroot/ROOT             627M  29.9G    88K  none
zroot/ROOT/default     627M  29.9G   627M  /
zroot/jails           4.76G  29.9G   112K  /usr/jails
zroot/jails/basejail  1003M  29.9G   979M  /usr/jails/basejail
zroot/jails/database  1.95G  29.9G  1.95G  /usr/jails/database
zroot/jails/httpd     1.83G  29.9G  1.83G  /usr/jails/httpd
zroot/jails/newjail   4.66M  29.9G  4.66M  /usr/jails/newjail
zroot/jails/www       4.75M  29.9G  4.75M  /usr/jails/www
zroot/tmp               88K  29.9G    88K  /tmp
zroot/usr             1.27G  29.9G    88K  /usr
zroot/usr/home          88K  29.9G    88K  /usr/home
zroot/usr/ports        665M  29.9G   665M  /usr/ports
zroot/usr/src          633M  29.9G   633M  /usr/src
zroot/var              604K  29.9G    88K  /var
zroot/var/audit         88K  29.9G    88K  /var/audit
zroot/var/crash         88K  29.9G    88K  /var/crash
zroot/var/log          164K  29.9G   164K  /var/log
zroot/var/mail          88K  29.9G    88K  /var/mail
zroot/var/tmp           88K  29.9G    88K  /var/tmp
zroot/zroottest         88K  29.9G    88K  /zroot/zroottest
root@bsd11:~ #

 

이전에 생성한 데이터 세트를 삭제합니다.

root@bsd11:~ # zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
zroot                 6.65G  29.9G    88K  /zroot
zroot/ROOT             627M  29.9G    88K  none
zroot/ROOT/default     627M  29.9G   627M  /
zroot/jails           4.76G  29.9G   112K  /usr/jails
zroot/jails/basejail  1003M  29.9G   979M  /usr/jails/basejail
zroot/jails/database  1.95G  29.9G  1.95G  /usr/jails/database
zroot/jails/httpd     1.83G  29.9G  1.83G  /usr/jails/httpd
zroot/jails/newjail   4.66M  29.9G  4.66M  /usr/jails/newjail
zroot/jails/www       4.75M  29.9G  4.75M  /usr/jails/www
zroot/tmp               88K  29.9G    88K  /tmp
zroot/usr             1.27G  29.9G    88K  /usr
zroot/usr/home          88K  29.9G    88K  /usr/home
zroot/usr/ports        665M  29.9G   665M  /usr/ports
zroot/usr/src          633M  29.9G   633M  /usr/src
zroot/var              604K  29.9G    88K  /var
zroot/var/audit         88K  29.9G    88K  /var/audit
zroot/var/crash         88K  29.9G    88K  /var/crash
zroot/var/log          164K  29.9G   164K  /var/log
zroot/var/mail          88K  29.9G    88K  /var/mail
zroot/var/tmp           88K  29.9G    88K  /var/tmp
zroot/zroottest         88K  29.9G    88K  /zroot/zroottest
root@bsd11:~ # zfs destroy zroot/zroottest
root@bsd11:~ # zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
zroot                 6.65G  29.9G    88K  /zroot
zroot/ROOT             627M  29.9G    88K  none
zroot/ROOT/default     627M  29.9G   627M  /
zroot/jails           4.76G  29.9G   112K  /usr/jails
zroot/jails/basejail  1003M  29.9G   979M  /usr/jails/basejail
zroot/jails/database  1.95G  29.9G  1.95G  /usr/jails/database
zroot/jails/httpd     1.83G  29.9G  1.83G  /usr/jails/httpd
zroot/jails/newjail   4.66M  29.9G  4.66M  /usr/jails/newjail
zroot/jails/www       4.75M  29.9G  4.75M  /usr/jails/www
zroot/tmp               88K  29.9G    88K  /tmp
zroot/usr             1.27G  29.9G    88K  /usr
zroot/usr/home          88K  29.9G    88K  /usr/home
zroot/usr/ports        665M  29.9G   665M  /usr/ports
zroot/usr/src          633M  29.9G   633M  /usr/src
zroot/var              604K  29.9G    88K  /var
zroot/var/audit         88K  29.9G    88K  /var/audit
zroot/var/crash         88K  29.9G    88K  /var/crash
zroot/var/log          164K  29.9G   164K  /var/log
zroot/var/mail          88K  29.9G    88K  /var/mail
zroot/var/tmp           88K  29.9G    88K  /var/tmp
root@bsd11:~ #

최신 버전의 ZFS 에서는 zfs destroy 가 비동기식으로 동작합니다.
삭제된 여유 공간이 풀에 반영되기까지 몇 분 정도 걸릴 수 있습니다.

 

볼륨 생성 및 삭제

ZFS 볼륨(zvol)은 어떤 파일 시스템으로도 포맷할 수 있으며, 파일 시스템 없이 원시 데이터를 저장할 수도 있습니다. 사용자에게 볼륨은 일반 디스크처럼 보입니다.

root@bsd11:~ # zfs create -V 250m -o compression=on zroot/fat32
root@bsd11:~ # zfs list zroot
NAME    USED  AVAIL  REFER  MOUNTPOINT
zroot  6.90G  29.7G    88K  /zroot
root@bsd11:~ # zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
zroot                 6.90G  29.7G    88K  /zroot
zroot/ROOT             627M  29.7G    88K  none
zroot/ROOT/default     627M  29.7G   627M  /
zroot/fat32            260M  29.9G    56K  -
zroot/jails           4.76G  29.7G   112K  /usr/jails
zroot/jails/basejail  1003M  29.7G   979M  /usr/jails/basejail
zroot/jails/database  1.95G  29.7G  1.95G  /usr/jails/database
zroot/jails/httpd     1.83G  29.7G  1.83G  /usr/jails/httpd
zroot/jails/newjail   4.66M  29.7G  4.66M  /usr/jails/newjail
zroot/jails/www       4.75M  29.7G  4.75M  /usr/jails/www
zroot/tmp               88K  29.7G    88K  /tmp
zroot/usr             1.27G  29.7G    88K  /usr
zroot/usr/home          88K  29.7G    88K  /usr/home
zroot/usr/ports        665M  29.7G   665M  /usr/ports
zroot/usr/src          633M  29.7G   633M  /usr/src
zroot/var              604K  29.7G    88K  /var
zroot/var/audit         88K  29.7G    88K  /var/audit
zroot/var/crash         88K  29.7G    88K  /var/crash
zroot/var/log          164K  29.7G   164K  /var/log
zroot/var/mail          88K  29.7G    88K  /var/mail
zroot/var/tmp           88K  29.7G    88K  /var/tmp
root@bsd11:~ #

 

zroot/fat32 볼륨을 msdos 타입으로 포맷한 후 /mnt 에 마운트합니다.

newfs_msdos 에 -F32 지정 시 잘못된 인수로 인식하여, -F 옵션 없이 (FAT16 으로) 테스트하였습니다.

root@bsd11:~ # newfs_msdos  /dev/zvol/zroot/fat32
newfs_msdos: cannot get number of sectors per track: Operation not supported
newfs_msdos: cannot get number of heads: Operation not supported
newfs_msdos: trim 62 sectors to adjust to a multiple of 63
/dev/zvol/zroot/fat32: 511648 sectors in 31978 FAT16 clusters (8192 bytes/cluster)
BytesPerSec=512 SecPerClust=16 ResSectors=1 FATs=2 RootDirEnts=512 Media=0xf0 FATsecs=125 SecPerTrack=63 Heads=16 HiddenSecs=0 HugeSectors=511938
root@bsd11:~ # mount -t msdosfs /dev/zvol/zroot/fat32 /mnt
mount_msdosfs: /dev/zvol/zroot/fat32: Invalid argument
root@bsd11:~ # mount -t msdosfs /dev/zvol/zroot/fat32 /mnt
root@bsd11:~ # df -h |grep -i mnt
/dev/zvol/zroot/fat32    250M     16K    250M     0%    /mnt
root@bsd11:~ # mount |grep -i mnt
/dev/zvol/zroot/fat32 on /mnt (msdosfs, local)
root@bsd11:~ #

 

데이터 세트 이름 변경

데이터 세트의 이름은 zfs rename 으로 변경할 수 있으며, 상위 데이터 세트를 바꿔 다른 위치로 이동할 수도 있습니다. 상위 데이터 세트가 바뀌면 그로부터 상속받는 속성값도 함께 바뀝니다.
이름을 변경하면 마운트가 해제된 다음 새 위치에서 다시 마운트 되는데, -u 옵션을 사용하면 이 remount 를 방지할 수 있습니다.

root@bsd11:~ # zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
zroot                 6.90G  29.7G    88K  /zroot
zroot/ROOT             627M  29.7G    88K  none
zroot/ROOT/default     627M  29.7G   627M  /
zroot/fat32            260M  29.9G    68K  -
zroot/jails           4.76G  29.7G   112K  /usr/jails
zroot/jails/basejail  1003M  29.7G   979M  /usr/jails/basejail
zroot/jails/database  1.95G  29.7G  1.95G  /usr/jails/database
zroot/jails/httpd     1.83G  29.7G  1.83G  /usr/jails/httpd
zroot/jails/newjail   4.66M  29.7G  4.66M  /usr/jails/newjail
zroot/jails/www       4.75M  29.7G  4.75M  /usr/jails/www
zroot/tmp               88K  29.7G    88K  /tmp
zroot/usr             1.27G  29.7G    88K  /usr
zroot/usr/home          88K  29.7G    88K  /usr/home
zroot/usr/ports        665M  29.7G   665M  /usr/ports
zroot/usr/src          633M  29.7G   633M  /usr/src
zroot/var              604K  29.7G    88K  /var
zroot/var/audit         88K  29.7G    88K  /var/audit
zroot/var/crash         88K  29.7G    88K  /var/crash
zroot/var/log          164K  29.7G   164K  /var/log
zroot/var/mail          88K  29.7G    88K  /var/mail
zroot/var/tmp           88K  29.7G    88K  /var/tmp
root@bsd11:~ # zfs rename zroot/fat32 zroot/fat16
root@bsd11:~ # zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
zroot                 6.90G  29.7G    88K  /zroot
zroot/ROOT             627M  29.7G    88K  none
zroot/ROOT/default     627M  29.7G   627M  /
zroot/fat16            260M  29.9G    68K  -
zroot/jails           4.76G  29.7G   112K  /usr/jails
zroot/jails/basejail  1003M  29.7G   979M  /usr/jails/basejail
zroot/jails/database  1.95G  29.7G  1.95G  /usr/jails/database
zroot/jails/httpd     1.83G  29.7G  1.83G  /usr/jails/httpd
zroot/jails/newjail   4.66M  29.7G  4.66M  /usr/jails/newjail
zroot/jails/www       4.75M  29.7G  4.75M  /usr/jails/www
zroot/tmp               88K  29.7G    88K  /tmp
zroot/usr             1.27G  29.7G    88K  /usr
zroot/usr/home          88K  29.7G    88K  /usr/home
zroot/usr/ports        665M  29.7G   665M  /usr/ports
zroot/usr/src          633M  29.7G   633M  /usr/src
zroot/var              604K  29.7G    88K  /var
zroot/var/audit         88K  29.7G    88K  /var/audit
zroot/var/crash         88K  29.7G    88K  /var/crash
zroot/var/log          164K  29.7G   164K  /var/log
zroot/var/mail          88K  29.7G    88K  /var/mail
zroot/var/tmp           88K  29.7G    88K  /var/tmp
root@bsd11:~ #

 

스냅샷은 위와 같은 방법으로 다른 상위 데이터 세트로 옮길 수 없으며, 스냅샷의 특성상 같은 데이터 세트 안에서만 이름을 바꿀 수 있습니다.
재귀적으로 스냅샷 이름을 변경하려면 -r 옵션을 지정합니다. 하위 데이터 세트에 있는 같은 이름의 모든 스냅샷이 함께 변경됩니다.

root@bsd11:~ # zfs rename zroot/var/test@2018-03-15 new_test@2018-03-16
root@bsd11:~ # zfs list -t snapshot

 

데이터 세트 속성 설정

root@bsd11:~ # zfs set custom:costcenter=1234 zroot
root@bsd11:~ # zfs get custom:costcenter zroot
NAME   PROPERTY           VALUE              SOURCE
zroot  custom:costcenter  1234               local
root@bsd11:~ #

 

사용자 정의 속성을 제거하려면 zfs inherit 에 -r 옵션을 사용합니다.

(주의: 아래 transcript 에는 속성 이름이 costcenter 가 아니라 costconter 로 오타가 나 있습니다. 속성 이름이 정확히 일치해야 제거되므로, 실제로는 앞에서 설정한 custom:costcenter 속성이 그대로 남아 있습니다.)

root@bsd11:~ # zfs get custom:costcenter zroot
NAME   PROPERTY           VALUE              SOURCE
zroot  custom:costcenter  1234               local
root@bsd11:~ # zfs inherit -r custom:costconter zroot
root@bsd11:~ # zfs get custom:costconter
NAME                                    PROPERTY           VALUE              SOURCE
zroot                                   custom:costconter  -                  -
zroot/ROOT                              custom:costconter  -                  -
zroot/ROOT/default                      custom:costconter  -                  -
zroot/fat16                             custom:costconter  -                  -
zroot/jails                             custom:costconter  -                  -
zroot/jails/basejail                    custom:costconter  -                  -
zroot/jails/basejail@20180306_19:50:50  custom:costconter  -                  -
zroot/jails/basejail@20180306_20:02:39  custom:costconter  -                  -
zroot/jails/database                    custom:costconter  -                  -
zroot/jails/httpd                       custom:costconter  -                  -
zroot/jails/newjail                     custom:costconter  -                  -
zroot/jails/www                         custom:costconter  -                  -
zroot/test                              custom:costconter  -                  -
zroot/tmp                               custom:costconter  -                  -
zroot/usr                               custom:costconter  -                  -
zroot/usr/home                          custom:costconter  -                  -
zroot/usr/ports                         custom:costconter  -                  -
zroot/usr/src                           custom:costconter  -                  -
zroot/var                               custom:costconter  -                  -
zroot/var/audit                         custom:costconter  -                  -
zroot/var/crash                         custom:costconter  -                  -
zroot/var/log                           custom:costconter  -                  -
zroot/var/mail                          custom:costconter  -                  -
zroot/var/tmp                           custom:costconter  -                  -
root@bsd11:~ # zfs get custom:costconter zroot
NAME   PROPERTY           VALUE              SOURCE
zroot  custom:costconter  -                  -
root@bsd11:~ #

 

공유 속성 및 설정

NFS / SMB 공유 옵션입니다.
sharenfs / sharesmb 속성으로 ZFS 데이터 세트를 네트워크에서 공유하는 방법을 정의할 수 있습니다.
기본값은 둘 다 off 입니다.

root@bsd11:~ # zfs get sharenfs zroot/usr/home
NAME            PROPERTY  VALUE     SOURCE
zroot/usr/home  sharenfs  off       default
root@bsd11:~ # zfs get sharesmb zroot/usr/home
NAME            PROPERTY  VALUE     SOURCE
zroot/usr/home  sharesmb  off       default
root@bsd11:~ #

 

/usr/home nfs 공유 설정

root@bsd11:~ # zfs set sharenfs=on zroot/usr/home
root@bsd11:~ # zfs get sharenfs zroot/usr/home
NAME            PROPERTY  VALUE     SOURCE
zroot/usr/home  sharenfs  on        local
root@bsd11:~ #

 

Example: zfs set sharenfs

NFS export options can be set as follows.

root@bsd11:~ # zfs set sharenfs="-alldirs,-maproot=root,-network=192.168.0.0/24" zroot/usr/home
root@bsd11:~ # zfs get sharenfs zroot/usr/home
NAME            PROPERTY  VALUE                                           SOURCE
zroot/usr/home  sharenfs  -alldirs,-maproot=root,-network=192.168.0.0/24  local
root@bsd11:~ #
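Setting sharenfs only publishes the export (FreeBSD writes it to /etc/zfs/exports for mountd); the NFS server itself still has to be running. A sketch of the usual prerequisites — the client-side mount point /mnt is an assumption:

```shell
root@bsd11:~ # sysrc nfs_server_enable=YES
root@bsd11:~ # service nfsd start
root@bsd11:~ # showmount -e localhost

# from a client on the 192.168.0.0/24 network:
# mount -t nfs 192.168.0.40:/usr/home /mnt
```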

 

Snapshot management

- Work in progress

 

 

FreeBSD VirtIO

virtio reference: https://wiki.libvirt.org/page/Virtio

So-called “full virtualization” is a nice feature because it allows you to run any operating system virtualized. However, it’s slow because the hypervisor has to emulate actual physical devices such as RTL8139 network cards. This emulation is both complicated and inefficient.

Virtio is a virtualization standard for network and disk device drivers where just the guest’s device driver “knows” it is running in a virtual environment, and cooperates with the hypervisor. This enables guests to get high performance network and disk operations, and gives most of the performance benefits of paravirtualization.

Note that virtio is different, but architecturally similar to, Xen paravirtualized device drivers (such as the ones that you can install in a Windows guest to make it go faster under Xen). Also similar is VMWare’s Guest Tools.

This page describes how to configure libvirt to use virtio with KVM guests.

When using KVM, you will occasionally need the VirtIO drivers.

In the past this meant installing virtio-kmod from ports and adding virtio_load and related lines to /boot/loader.conf,

 

but on FreeBSD 11 you can simply attach the disk and use it right away.

 

VirtIO driver load

This step only confirms that the driver is loaded; you do not strictly need to check it.

 

root@bsd11:~ # kldload virtio
kldload: can't load virtio: module already loaded or in kernel
root@bsd11:~ #

It reports that the module is already loaded in the kernel.
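On older releases where VirtIO is not compiled into the kernel, the loader.conf approach mentioned above looks roughly like this (module names as shipped with virtio-kmod; unnecessary on FreeBSD 11 GENERIC):

```
# /boot/loader.conf -- only needed when VirtIO is not built into the kernel
virtio_load="YES"
virtio_pci_load="YES"
virtio_blk_load="YES"
if_vtnet_load="YES"
virtio_balloon_load="YES"
```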

 

I should try a make world later, too. 🙂

root@bsd11:~ # cat /usr/src/sys/amd64/conf/GENERIC | grep vir
device          virtio                  # Generic VirtIO bus (required)
device          virtio_pci              # VirtIO PCI device
device          virtio_blk              # VirtIO Block device
device          virtio_scsi             # VirtIO SCSI device
device          virtio_balloon          # VirtIO Memory Balloon device
root@bsd11:~ #

 

 

Checking the VirtIO device

Disk device names are added in the form vtbdX.

root@bsd11:~ # ls -al /dev/vtb*
crw-r-----  1 root  operator  0x48 Mar 11 21:49 /dev/vtbd0
root@bsd11:~ #

 

VirtIO Disk Test

root@bsd11:~ # gpart create -s GPT vtbd0
vtbd0 created
root@bsd11:~ # gpart add -t freebsd-ufs vtbd0
vtbd0p1 added
root@bsd11:~ # newfs -U /dev/vtbd0p1
/dev/vtbd0p1: 10240.0MB (20971440 sectors) block size 32768, fragment size 4096
        using 17 cylinder groups of 626.09MB, 20035 blks, 80256 inodes.
        with soft updates
super-block backups (for fsck_ffs -b #) at:
 192, 1282432, 2564672, 3846912, 5129152, 6411392, 7693632, 8975872, 10258112,
 11540352, 12822592, 14104832, 15387072, 16669312, 17951552, 19233792, 20516032
root@bsd11:~ # mkdir /data
root@bsd11:~ # mount /dev/vtbd0p1 /data/
root@bsd11:~ # df -h | grep -i data
/dev/vtbd0p1            9.7G    8.0K    8.9G     0%    /data
root@bsd11:~ #
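To keep the /data mount across reboots, an /etc/fstab entry in the same style as the ada1 example later in this post would work (a sketch):

```
# /etc/fstab
/dev/vtbd0p1            /data   ufs     rw              0       0
```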

 

FreeBSD Adding Disk

official site: https://www.freebsd.org/doc/handbook/disks-adding.html

See the FreeBSD Handbook for the full details.

sysinstall, which was commonly used on FreeBSD 7–8, is no longer available on FreeBSD 11.

root@bsd11:~ # sysinstall
sysinstall: Command not found.
root@bsd11:~ #

 

FreeBSD 11 uses bsdinstall instead.

root@BSD11:~ # bsdinstall

(I have not verified whether disks can be added through bsdinstall.)

 

Check the disks with the gpart command.

Existing disks can be inspected with gpart show.

root@bsd11:~ # ls -al /dev/ad*
crw-r-----  1 root  operator  0x5b Mar 11 20:57 /dev/ada0
crw-r-----  1 root  operator  0x5c Mar 11 20:57 /dev/ada0p1
crw-r-----  1 root  operator  0x5d Mar 11 20:57 /dev/ada0p2
crw-r-----  1 root  operator  0x5e Mar 11 20:57 /dev/ada0p3
crw-r-----  1 root  operator  0x5f Mar 11 20:57 /dev/ada1

Check the existing disks:
root@bsd11:~ # gpart show
=>      40  83886000  ada0  GPT  (40G)
        40      1024     1  freebsd-boot  (512K)
      1064       984        - free -  (492K)
      2048   4194304     2  freebsd-swap  (2.0G)
   4196352  79687680     3  freebsd-zfs  (38G)
  83884032      2008        - free -  (1.0M)

root@bsd11:~ #

 

Create a GPT partition table on the ada1 device.

Check the capacity of ada1 with the gpart command.

root@bsd11:~ # gpart create -s GPT ada1
ada1 created
root@bsd11:~ #
root@bsd11:~ # gpart show ada1
=>      40  20971440  ada1  GPT  (10G)
        40  20971440        - free -  (10G)

root@bsd11:~ #

 

Allocate the full capacity of ada1 as a freebsd-ufs partition.

To allocate a specific size instead, use gpart add -t freebsd-ufs -s 1G ada1.

root@bsd11:~ # gpart add -t freebsd-ufs  ada1
ada1p1 added
root@bsd11:~ # gpart show ada1
=>      40  20971440  ada1  GPT  (10G)
        40  20971440     1  freebsd-ufs  (10G)

root@bsd11:~ #
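On disks with 4K physical sectors it can be worth asking gpart to align partitions; the -a flag does this (a sketch, not part of the original run):

```
root@bsd11:~ # gpart add -t freebsd-ufs -a 1m ada1
```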

 

Format the file system

root@bsd11:~ # newfs -U /dev/ada1p1
/dev/ada1p1: 10240.0MB (20971440 sectors) block size 32768, fragment size 4096
        using 17 cylinder groups of 626.09MB, 20035 blks, 80256 inodes.
        with soft updates
super-block backups (for fsck_ffs -b #) at:
 192, 1282432, 2564672, 3846912, 5129152, 6411392, 7693632, 8975872, 10258112, 11540352, 12822592, 14104832, 15387072, 16669312, 17951552, 19233792, 20516032
root@bsd11:~ #

 

Create the mount point and add an /etc/fstab entry

root@bsd11:~ # mkdir /data
root@bsd11:~ # vi /etc/fstab
# Device                Mountpoint      FStype  Options         Dump    Pass#
/dev/ada0p2             none    swap    sw              0       0
/dev/ada1p1             /data   ufs     rw              0       0

 

Verify the mount

root@bsd11:~ # mount -a
root@bsd11:~ # df -h |grep data
/dev/ada1p1             9.7G    8.0K    8.9G     0%    /data
root@bsd11:~ #

 

Deleting the partition

Remove the /data line from /etc/fstab and unmount the disk.

Delete the /data line that was added:
root@bsd11:~ # vi /etc/fstab
# Device                Mountpoint      FStype  Options         Dump    Pass#
/dev/ada0p2             none    swap    sw              0       0
/dev/ada1p1             /data   ufs     rw              0       0

root@bsd11:~ # umount /data
root@bsd11:~ # df -h |grep data
root@bsd11:~ #

 

Deleting the /data slice and the GPT table with gpart

Check the disk information with the gpart command:
root@bsd11:~ # gpart show
=>      40  83886000  ada0  GPT  (40G)
        40      1024     1  freebsd-boot  (512K)
      1064       984        - free -  (492K)
      2048   4194304     2  freebsd-swap  (2.0G)
   4196352  79687680     3  freebsd-zfs  (38G)
  83884032      2008        - free -  (1.0M)

=>      40  20971440  ada1  GPT  (10G)
        40  20971440     1  freebsd-ufs  (10G)


Delete slice 1 on the ada1 device:
root@bsd11:~ # gpart delete -i1 ada1
ada1p1 deleted

The ada1 device still has a GPT table:
root@bsd11:~ # gpart show ada1
=>      40  20971440  ada1  GPT  (10G)
        40  20971440        - free -  (10G)

To completely remove the GPT table, use the gpart destroy command:
root@bsd11:~ # gpart destroy -F ada1
ada1 destroyed
root@bsd11:~ #

 

Example: creating slices to use for ZFS

root@bsd11:~ # gpart show
=>      63  41942977  ada0  MBR  (20G)
        63         1        - free -  (512B)
        64  41942975     1  freebsd  [active]  (20G)
  41943039         1        - free -  (512B)

=>       0  41942975  ada0s1  BSD  (20G)
         0  39845888       1  freebsd-ufs  (19G)
  39845888   2097086       2  freebsd-swap  (1.0G)
  41942974         1          - free -  (512B)

=>      40  20971440  vtbd0  GPT  (10G)
        40   2097152      1  freebsd-zfs  (1.0G)
   2097192  18874288         - free -  (9.0G)

root@bsd11:~ #

root@bsd11:~ # gpart add -t freebsd-zfs -s 1G vtbd0
vtbd0p2 added
root@bsd11:~ # gpart add -t freebsd-zfs -s 1G vtbd0
vtbd0p3 added
root@bsd11:~ # gpart add -t freebsd-zfs -s 1G vtbd0
vtbd0p4 added
root@bsd11:~ # gpart add -t freebsd-zfs -s 1G vtbd0
vtbd0p5 added
root@bsd11:~ # gpart add -t freebsd-zfs -s 1G vtbd0
vtbd0p6 added
root@bsd11:~ #



 

Check the created slices

root@bsd11:~ # gpart show
=>      63  41942977  ada0  MBR  (20G)
        63         1        - free -  (512B)
        64  41942975     1  freebsd  [active]  (20G)
  41943039         1        - free -  (512B)

=>       0  41942975  ada0s1  BSD  (20G)
         0  39845888       1  freebsd-ufs  (19G)
  39845888   2097086       2  freebsd-swap  (1.0G)
  41942974         1          - free -  (512B)

=>      40  20971440  vtbd0  GPT  (10G)
        40   2097152      1  freebsd-zfs  (1.0G)
   2097192   2097152      2  freebsd-zfs  (1.0G)
   4194344   2097152      3  freebsd-zfs  (1.0G)
   6291496   2097152      4  freebsd-zfs  (1.0G)
   8388648   2097152      5  freebsd-zfs  (1.0G)
  10485800   2097152      6  freebsd-zfs  (1.0G)
  12582952   8388528         - free -  (4.0G)

root@bsd11:~ #
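The six 1 GB freebsd-zfs slices above could then be assembled into a pool, for example as a raidz vdev; the pool name testpool here is a hypothetical placeholder:

```
root@bsd11:~ # zpool create testpool raidz vtbd0p1 vtbd0p2 vtbd0p3 vtbd0p4 vtbd0p5 vtbd0p6
root@bsd11:~ # zpool status testpool
```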

 

 

Deleting a slice:

root@bsd11:~ # gpart delete -i 1 vtbd0
vtbd0p1 deleted

 

FreeBSD httpd-vhosts.conf configuration

Reference: https://httpd.apache.org/docs/2.4/ko/sections.html

This post configures wiki.test.com and blog.test.com as FreeBSD apache24 virtual hosts.

 

Adding the blog.test.com VirtualHost

Create the directory blog.test.com will use and set its ownership:

root@bsd11:~ # mkdir /www
root@bsd11:~ # chown www:www /www/

 

Modify the apache24 configuration

root@bsd11:~ # vi /usr/local/etc/apache24/httpd.conf
...(snip)
# Virtual hosts
Include etc/apache24/extra/httpd-vhosts.conf

LoadModule vhost_alias_module libexec/apache24/mod_vhost_alias.so


root@bsd11:~ # cd /usr/local/etc/apache24/extra/
root@bsd11:/usr/local/etc/apache24/extra # vi httpd-vhosts.conf

<VirtualHost *:80>
    ServerAdmin admin@test.com
    DocumentRoot "/usr/local/www/dokuwiki"
    ServerName wiki.test.com
    ErrorLog "/var/log/wiki.test.com-error_log"
    CustomLog "/var/log/wiki.test.com-access_log" common
</VirtualHost>
<VirtualHost *:80>
    ServerAdmin admin@test.com
    DocumentRoot "/www"
    ServerName blog.test.com
    ErrorLog "/var/log/blog.test.com-error_log"
    CustomLog "/var/log/blog.test.com-access_log" common
     <Directory "/www">
        AllowOverride None
        Require all granted
     </Directory>
</VirtualHost>

 

A Directory section for the new DocumentRoot has to be added to httpd.conf or httpd-vhosts.conf. (The old Order/Allow directives are deprecated on apache 2.4; Require is the current access-control syntax.)

<Directory "/www">
   AllowOverride None
   Require all granted
</Directory>

 

After finishing the configuration, restart apache24.

root@bsd11:~ # service apache24 restart
Performing sanity check on apache24 configuration:
Syntax OK
Stopping apache24.
Waiting for PIDS: 1125.
Performing sanity check on apache24 configuration:
Syntax OK
Starting apache24.
root@bsd11:~ #

 

Verify in a web browser
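If DNS (or /etc/hosts) entries for the test.com names are not set up yet, curl can exercise the name-based vhosts directly by forcing the Host header — a sketch, assuming the server IP 192.168.0.40 used elsewhere in this post:

```shell
$ curl -H "Host: blog.test.com" http://192.168.0.40/
$ curl -H "Host: wiki.test.com" http://192.168.0.40/
```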

 

 

Freebsd ezjail ports install

Official page: https://www.freebsd.org/doc/handbook/jails-ezjail.html

Reference: https://www.cyberciti.biz/faq/howto-setup-freebsd-jail-with-ezjail/

https://www.davd.eu/posts-freebsd-jails-with-a-single-public-ip-address/

For the full details on FreeBSD jails, please refer to the FreeBSD documentation.

 

This document describes a simple jail setup you can try on FreeBSD 11. For the ZFS examples, a VM that was installed onto ZFS during the FreeBSD install was used.

You can also run the test on a separately built ZFS setup. ezjail can be installed with pkg install -y ezjail as well. 🙂

 

Jail network configuration

Create the lo1 device that the jails will use.

Edit /etc/rc.conf to configure the lo1 interface.

Virtual IPs 10.0.0.1 through 10.0.0.9 are assigned for the jails.

Edit rc.conf:
root@bsd11:~ # vi /etc/rc.conf

#ifconfig_vtnet0="inet 192.168.0.40 netmask 255.255.255.0"
ifconfig_vtnet0_name="em0"
ifconfig_em0="inet 192.168.0.40 netmask 255.255.255.0"
defaultrouter="192.168.0.1"
cloned_interfaces="lo1"
ipv4_addrs_lo1="10.0.0.1-9/29"


Create the lo1 device:
root@bsd11:~ # service netif cloneup
Created clone interfaces: lo1.
root@bsd11:~ # ifconfig
vtnet0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=6c07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 52:54:00:40:19:eb
        hwaddr 52:54:00:40:19:eb
        inet 192.168.0.40 netmask 0xffffff00 broadcast 192.168.0.255
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet 10Gbase-T <full-duplex>
        status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1 netmask 0xff000000
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        groups: lo
lo1: flags=8008<LOOPBACK,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        groups: lo
root@bsd11:~ #

 

Create the lo1 interface

root@bsd11:~ # service netif cloneup
Created clone interfaces: lo1.

root@bsd11:~ # ifconfig
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=6c07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 52:54:00:2c:0c:a0
        hwaddr 52:54:00:2c:0c:a0
        inet 192.168.0.40 netmask 0xffffff00 broadcast 192.168.0.255
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet 10Gbase-T <full-duplex>
        status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1 netmask 0xff000000
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        groups: lo
lo1: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        inet 10.0.0.1 netmask 0xfffffff8
        inet 10.0.0.2 netmask 0xffffffff
        inet 10.0.0.3 netmask 0xffffffff
        inet 10.0.0.4 netmask 0xffffffff
        inet 10.0.0.5 netmask 0xffffffff
        inet 10.0.0.6 netmask 0xffffffff
        inet 10.0.0.7 netmask 0xffffffff
        inet 10.0.0.8 netmask 0xffffffff
        inet 10.0.0.9 netmask 0xffffffff
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        groups: lo
root@bsd11:~ #
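The netmasks in the output above follow directly from the prefix lengths: the first alias carries the /29 (0xfffffff8) and the remaining aliases get host masks (/32, 0xffffffff). The conversion can be checked with plain shell arithmetic:

```shell
# Convert a CIDR prefix length to the hex netmask shown by ifconfig.
prefix_to_mask() {
    printf '0x%x\n' $(( (0xffffffff << (32 - $1)) & 0xffffffff ))
}

prefix_to_mask 29   # 0xfffffff8 -- mask on the first lo1 alias
prefix_to_mask 32   # 0xffffffff -- mask on the remaining aliases
```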

 

pf firewall configuration

IP_PUB is the IP address on em0.

To test the web service, ports 443 and 80 are redirected to 10.0.0.1.

root@bsd11:~ # vi /etc/pf.conf
# Public IP address
IP_PUB="192.168.0.40"

# Packet normalization
scrub in all

# Allow outbound connections from within the jails
nat on em0 from lo1:network to any -> (em0)

# webserver jail at 10.0.0.1
rdr on em0 proto tcp from any to $IP_PUB port 443 -> 10.0.0.1
# just an example in case you want to redirect to another port within your jail
rdr on em0 proto tcp from any to $IP_PUB port 80 -> 10.0.0.1

root@bsd11:~ #
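Before (re)loading, pf.conf can be syntax-checked, and the resulting translation rules inspected, with pfctl:

```shell
root@bsd11:~ # pfctl -nf /etc/pf.conf    # parse only, load nothing
root@bsd11:~ # pfctl -f /etc/pf.conf     # load the ruleset
root@bsd11:~ # pfctl -s nat              # show the nat/rdr rules
```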

 

Start the pf firewall

root@bsd11:~ # sysrc pf_enable=YES
pf_enable: NO -> YES
root@bsd11:~ # service pf start
Enabling pf.

 

Install ezjail

root@bsd11:~ # whereis ezjail
ezjail: /usr/ports/sysutils/ezjail
root@bsd11:~ # cd /usr/ports/sysutils/ezjail/ && make install clean
root@bsd11:/usr/ports/sysutils/ezjail # rehash
root@bsd11:/usr/ports/sysutils/ezjail #

 

Copy the resolv.conf file into the jail template:

root@bsd11:~ # cp /etc/resolv.conf /usr/jails/newjail/etc/

 

Enable and start ezjail:

root@bsd11:/usr/ports/sysutils/ezjail # sysrc ezjail_enable=YES
ezjail_enable:  -> YES
root@bsd11:/usr/ports/sysutils/ezjail # service ezjail start

 

Create the base jail template

root@bsd11:~ # ezjail-admin install
base.txz                                      100% of   99 MB 2970 kBps 00m34s
lib32.txz                                     100% of   17 MB 2761 kBps 00m07s
Looking up update.FreeBSD.org mirrors... 3 mirrors found.
Fetching metadata signature for 11.1-RELEASE from update5.freebsd.org... done.
Fetching metadata index... done.
Inspecting system...

 

After running ezjail-admin install, you can see that the following directories were created:

root@bsd11:~ # ls -al /usr/jails
total 20
drwxr-xr-x   5 root  wheel  512 Mar  5 21:51 .
drwxr-xr-x  17 root  wheel  512 Mar  5 21:50 ..
drwxr-xr-x   9 root  wheel  512 Mar  5 21:51 basejail
drwxr-xr-x   3 root  wheel  512 Mar  5 21:51 flavours
drwxr-xr-x  13 root  wheel  512 Mar  5 21:51 newjail
root@bsd11:~ # ls -al /usr/jails/flavours/
total 12
drwxr-xr-x  3 root  wheel  512 Mar  5 21:51 .
drwxr-xr-x  5 root  wheel  512 Mar  5 21:51 ..
drwxr-xr-x  4 root  wheel  512 Mar  4 15:27 example
root@bsd11:~ # ls -al /usr/jails/basejail/
total 36
drwxr-xr-x   9 root  wheel   512 Mar  5 21:51 .
drwxr-xr-x   5 root  wheel   512 Mar  5 21:51 ..
drwxr-xr-x   2 root  wheel  1024 Mar  5 21:51 bin
drwxr-xr-x   9 root  wheel  1024 Mar  5 21:51 boot
drwxr-xr-x   4 root  wheel  1536 Mar  5 21:51 lib
drwxr-xr-x   3 root  wheel   512 Mar  5 21:51 libexec
drwxr-xr-x   2 root  wheel  2560 Mar  5 21:51 rescue
drwxr-xr-x   2 root  wheel  2560 Mar  5 21:51 sbin
drwxr-xr-x  11 root  wheel   512 Mar  5 21:51 usr
root@bsd11:~ # man /usr/jails
No manual entry for /usr/jails
root@bsd11:~ # ls -al /usr/local/etc/rc.d/ezjail
-rwxr-xr-x  1 root  wheel  8128 Mar  4 15:27 /usr/local/etc/rc.d/ezjail
root@bsd11:~ # ls -al /usr/local/etc/ezjail.conf
-rw-r--r--  1 root  wheel  2637 Mar  4 15:27 /usr/local/etc/ezjail.conf
root@bsd11:~ # ls -al /usr/local/etc/ezjail
total 8
drwxr-xr-x   2 root  wheel   512 Mar  4 15:27 .
drwxr-xr-x  12 root  wheel  1024 Mar  4 15:27 ..
root@bsd11:~ #

 

Fetch the ports tree for use inside the jails:

root@bsd11:~ # ezjail-admin install -p

 

Create an httpd jail for testing:

root@bsd11:~ # ezjail-admin create httpd 10.0.0.1
root@bsd11:~ # ezjail-admin start httpd
root@bsd11:~ # ezjail-admin list
STA JID  IP              Hostname                       Root Directory
--- ---- --------------- ------------------------------ ------------------------
DR  1    10.0.0.1        httpd                          /usr/jails/httpd
root@bsd11:~ #

 

You can also check with the jls command:

root@bsd11:~ # jls
   JID  IP Address      Hostname                      Path
     1  10.0.0.1        httpd                         /usr/jails/httpd
root@bsd11:~ #

 

 

Partition check after creating the httpd jail

The virtual partitions for /usr/jails/httpd have been created:

root@bsd11:~ # df -h
Filesystem             Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a            18G     11G    6.1G    64%    /
devfs                  1.0K    1.0K      0B   100%    /dev
/usr/jails/basejail     18G     11G    6.1G    64%    /usr/jails/httpd/basejail
devfs                  1.0K    1.0K      0B   100%    /usr/jails/httpd/dev
fdescfs                1.0K    1.0K      0B   100%    /usr/jails/httpd/dev/fd
procfs                 4.0K    4.0K      0B   100%    /usr/jails/httpd/proc
root@bsd11:~ #

 

 

Connect to the httpd jail console:

root@bsd11:~ # ezjail-admin console httpd
FreeBSD 11.1-RELEASE (GENERIC) #0 r321309: Fri Jul 21 02:08:28 UTC 2017

Welcome to FreeBSD!

Release Notes, Errata: https://www.FreeBSD.org/releases/
Security Advisories:   https://www.FreeBSD.org/security/
FreeBSD Handbook:      https://www.FreeBSD.org/handbook/
FreeBSD FAQ:           https://www.FreeBSD.org/faq/
Questions List: https://lists.FreeBSD.org/mailman/listinfo/freebsd-questions/
FreeBSD Forums:        https://forums.FreeBSD.org/

Documents installed with the system are in the /usr/local/share/doc/freebsd/
directory, or can be installed later with:  pkg install en-freebsd-doc
For other languages, replace "en" with a language code like de or fr.

Show the version of FreeBSD installed:  freebsd-version ; uname -a
Please include that output and any error messages when posting questions.
Introduction to manual pages:  man man
FreeBSD directory layout:      man hier

Edit /etc/motd to change this login announcement.
root@httpd:~ #

 

 

Install apache24 inside the httpd jail.

Installing with the pkg command instead of ports is also possible. 🙂

root@httpd:~ # make -C /usr/ports/www/apache24 config-recursive install
...(snip)
root@httpd:~ # sysrc apache24_enable=YES
apache24_enable:  -> YES
root@httpd:~ # cat /etc/rc.conf
apache24_enable="YES"


root@httpd:~ # vi /usr/local/etc/apache24/httpd.conf
ServerName www.example.com:80


root@httpd:~ # service apache24 start

root@httpd:~ # sockstat  -4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
www      httpd      63005 3  tcp4   10.0.0.1:80           *:*
www      httpd      63004 3  tcp4   10.0.0.1:80           *:*
www      httpd      63003 3  tcp4   10.0.0.1:80           *:*
www      httpd      63002 3  tcp4   10.0.0.1:80           *:*
www      httpd      63001 3  tcp4   10.0.0.1:80           *:*
root     httpd      63000 3  tcp4   10.0.0.1:80           *:*
root     sendmail   3798  3  tcp4   10.0.0.1:25           *:*
root     syslogd    3718  6  udp4   10.0.0.1:514          *:*
root@httpd:~ #

 

 

Verify the connection

Connecting to the public IP 192.168.0.40 configured on the VM's em0 takes you to the httpd jail.
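The rdr rule can be checked from any machine on the LAN without a browser; a sketch (assumes apache24 is already answering on 10.0.0.1 inside the jail):

```shell
$ curl -s http://192.168.0.40/
```

pf rewrites the destination to 10.0.0.1:80, so the response comes from the jail's apache24.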

Creating the zroot/jails ZFS dataset

When first setting up the jails, create the ZFS dataset before doing any other work.

ZFS and file systems will be covered in a separate post.

 

Install ezjail and register it in rc.conf

root@bsd11:~ # pkg install -y ezjail
root@bsd11:~ # sysrc ezjail_enable=YES
ezjail_enable:  -> YES

 

To have ezjail use the ZFS pool, edit ezjail.conf as follows:

root@bsd11:~ # vi /usr/local/etc/ezjail.conf
# to collect them in this directory
 ezjail_jaildir=/usr/jails

...(snip)
# ZFS options

# Setting this to YES will start to manage the basejail and newjail in ZFS
 ezjail_use_zfs="YES"

# Setting this to YES will manage ALL new jails in their own zfs
 ezjail_use_zfs_for_jails="YES"

# The name of the ZFS ezjail should create jails on, it will be mounted at the ezjail_jaildir
 ezjail_jailzfs="zroot/jails"

 

Check zfs list

root@bsd11:~ # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot               1.66G  34.9G    88K  /zroot
zroot/ROOT           405M  34.9G    88K  none
zroot/ROOT/default   405M  34.9G   405M  /
zroot/tmp             88K  34.9G    88K  /tmp
zroot/usr           1.27G  34.9G    88K  /usr
zroot/usr/home        88K  34.9G    88K  /usr/home
zroot/usr/ports      665M  34.9G   665M  /usr/ports
zroot/usr/src        633M  34.9G   633M  /usr/src
zroot/var            584K  34.9G    88K  /var
zroot/var/audit       88K  34.9G    88K  /var/audit
zroot/var/crash       88K  34.9G    88K  /var/crash
zroot/var/log        136K  34.9G   136K  /var/log
zroot/var/mail        88K  34.9G    88K  /var/mail
zroot/var/tmp         96K  34.9G    96K  /var/tmp
root@bsd11:~ #

 

Create the zroot/jails dataset

root@bsd11:~ # zfs create -p zroot/jails
root@bsd11:~ # zfs set mountpoint=/usr/jails zroot/jails
root@bsd11:~ # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot               1.66G  34.9G    88K  /zroot
zroot/ROOT           405M  34.9G    88K  none
zroot/ROOT/default   405M  34.9G   405M  /
zroot/jails           88K  34.9G    88K  /usr/jails
zroot/tmp             88K  34.9G    88K  /tmp
zroot/usr           1.27G  34.9G    88K  /usr
zroot/usr/home        88K  34.9G    88K  /usr/home
zroot/usr/ports      665M  34.9G   665M  /usr/ports
zroot/usr/src        633M  34.9G   633M  /usr/src
zroot/var            576K  34.9G    88K  /var
zroot/var/audit       88K  34.9G    88K  /var/audit
zroot/var/crash       88K  34.9G    88K  /var/crash
zroot/var/log        136K  34.9G   136K  /var/log
zroot/var/mail        88K  34.9G    88K  /var/mail
zroot/var/tmp         88K  34.9G    88K  /var/tmp
root@bsd11:~ #


Before:
zroot/jails           88K  34.9G    88K  /zroot/jails

After:
zroot/jails           88K  34.9G    88K  /usr/jails

 

Run ezjail-admin install to create the directories the jails need:

root@bsd11:~ # ezjail-admin install
base.txz                                        7% of   99 MB 2270 kBps 00m47s
lib32.txz                                     100% of   17 MB 1805 kBps 00m10s

 

Check the directories:

root@bsd11:~ # df -h |grep -i jails
zroot/jails              35G    104K     35G     0%    /usr/jails
zroot/jails/basejail     35G    296M     35G     1%    /usr/jails/basejail
zroot/jails/newjail      35G    4.7M     35G     0%    /usr/jails/newjail

When using ZFS, the basejail nullfs mount must be changed from ro to rw as below, or ports cannot be installed inside the jail:

root@bsd11:~ # vi /etc/fstab.httpd
/usr/jails/basejail /usr/jails/httpd/basejail nullfs rw 0 0

Everything else is the same as above. 🙂
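Changes to /etc/fstab.httpd take effect when the jail's file systems are remounted; restarting the jail with ezjail-admin is the simplest way (a sketch):

```shell
root@bsd11:~ # ezjail-admin restart httpd
```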

 

Building an apache24+php71 jail and a mariadb101 jail

httpd jail : apache24+php71 / ip-address 10.0.0.1

database jail : mariadb101 / ip-address 10.0.0.2

 

FreeBSD APM installation reference:

[apm] Installing apache24-php71-mariadb102

 

Modify pf.conf to redirect port 3306 to 10.0.0.2:

root@bsd11:~ # vi /etc/pf.conf
# Public IP address
IP_PUB="192.168.0.40"

# Packet normalization
scrub in all

# Allow outbound connections from within the jails
nat on em0 from lo1:network to any -> (em0)

# webserver jail at 10.0.0.1
rdr on em0 proto tcp from any to $IP_PUB port 443 -> 10.0.0.1
# just an example in case you want to redirect to another port within your jail
rdr on em0 proto tcp from any to $IP_PUB port 80 -> 10.0.0.1

#mariadb jail at 10.0.0.2
rdr on em0 proto tcp from any to $IP_PUB port 3306 -> 10.0.0.2

 

Create and start the httpd jail that will run apache24 and php7:

root@bsd11:~ # ezjail-admin create httpd 10.0.0.1
root@bsd11:~ # cp /etc/resolv.conf /usr/jails/httpd/etc/
root@bsd11:~ # ezjail-admin start httpd

 

Create and start the database jail that will run mariadb101:

root@bsd11:~ # ezjail-admin create database 10.0.0.2
root@bsd11:~ # cp /etc/resolv.conf /usr/jails/database/etc/
root@bsd11:~ # ezjail-admin start database

 

Change the file systems to rw:

root@bsd11:~ # vi /etc/fstab.httpd
/usr/jails/basejail /usr/jails/httpd/basejail nullfs rw 0 0

root@bsd11:~ # vi /etc/fstab.database
/usr/jails/basejail /usr/jails/database/basejail nullfs rw 0 0

 

 

Check the jail list and connect to the httpd jail

Use the ezjail-admin console command to enter a jail.

root@bsd11:~ # ezjail-admin list
STA JID  IP              Hostname                       Root Directory
--- ---- --------------- ------------------------------ ------------------------
ZS  N/A  10.0.0.1        httpd                          /usr/jails/httpd
ZS  N/A  10.0.0.2        database                       /usr/jails/database
root@bsd11:~ # ezjail-admin console httpd
FreeBSD 11.1-RELEASE (GENERIC) #0 r321309: Fri Jul 21 02:08:28 UTC 2017

Welcome to FreeBSD!

Release Notes, Errata: https://www.FreeBSD.org/releases/
Security Advisories:   https://www.FreeBSD.org/security/
FreeBSD Handbook:      https://www.FreeBSD.org/handbook/
FreeBSD FAQ:           https://www.FreeBSD.org/faq/
Questions List: https://lists.FreeBSD.org/mailman/listinfo/freebsd-questions/
FreeBSD Forums:        https://forums.FreeBSD.org/

Documents installed with the system are in the /usr/local/share/doc/freebsd/
directory, or can be installed later with:  pkg install en-freebsd-doc
For other languages, replace "en" with a language code like de or fr.

Show the version of FreeBSD installed:  freebsd-version ; uname -a
Please include that output and any error messages when posting questions.
Introduction to manual pages:  man man
FreeBSD directory layout:      man hier

Edit /etc/motd to change this login announcement.
root@httpd:~ #

 

Install apache24

root@httpd:~ # make -C /usr/ports/www/apache24 config-recursive install
To run apache www server from startup, add apache24_enable="yes"
in your /etc/rc.conf. Extra options can be found in startup script.

Your hostname must be resolvable using at least 1 mechanism in
/etc/nsswitch.conf typically DNS or /etc/hosts or apache might
have issues starting depending on the modules you are using.

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

- apache24 default build changed from static MPM to modular MPM
- more modules are now enabled per default in the port
- icons and error pages moved from WWWDIR to DATADIR

   If build with modular MPM and no MPM is activated in
   httpd.conf, then mpm_prefork will be activated as default
   MPM in etc/apache24/modules.d to keep compatibility with
   existing php/perl/python modules!

Please compare the existing httpd.conf with httpd.conf.sample
and merge missing modules/instructions into httpd.conf!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

===> SECURITY REPORT:
      This port has installed the following files which may act as network
      servers and may therefore pose a remote security risk to the system.
/usr/local/libexec/apache24/mod_cgid.so

      This port has installed the following startup scripts which may cause
      these network services to be started at boot time.
/usr/local/etc/rc.d/apache24
/usr/local/etc/rc.d/htcacheclean

      If there are vulnerabilities in these programs there may be a security
      risk to the system. FreeBSD makes no guarantee about the security of
      ports included in the Ports Collection. Please type 'make deinstall'
      to deinstall the port if this is a concern.

      For more information, and contact details about the security
      status of this software, see the following webpage:
http://httpd.apache.org/
root@httpd:~ #

 

Install php71

Since this is inside a jail there is no need to enable zfs either, so make config may not mean much here, but for php71 run make config and select OK anyway.

It does not seem to matter either way. 🙂

root@httpd:~ # cd /usr/ports/lang/php71/
root@httpd:/usr/ports/lang/php71 # make config


root@httpd:/usr/ports/lang/php71-extensions # cd
root@httpd:~ # make -C /usr/ports/lang/php71-extensions config-recursive install

In the install options, select CURL, FTP, GD, MYSQLi, OPENSSL, SOCKETS, PDF, SNMP, and ZIP, then proceed with the installation.

 

Install mod_php71

root@httpd:~ # pkg install -y mod_php71

Installing from ports fails with an error, so install it with the pkg command instead.

Post-install message:

Message from mod_php71-7.1.14:

***************************************************************

Make sure index.php is part of your DirectoryIndex.

You should add the following to your Apache configuration file:

<FilesMatch "\.php$">
    SetHandler application/x-httpd-php
</FilesMatch>
<FilesMatch "\.phps$">
    SetHandler application/x-httpd-php-source
</FilesMatch>

*********************************************************************

If you are building PHP-based ports in poudriere(8) with ZTS enabled,
add WITH_MPM=event to /etc/make.conf to prevent build failures.

*********************************************************************

 

 

Install mariadb101

Connect to the database jail:

root@bsd11:~ # ezjail-admin console database

 

Install mariadb101:

root@database:~ # make -C /usr/ports/databases/mariadb101-server/ config-recursive install

 

httpd jail configuration

Add apache24_enable to rc.conf:

root@bsd11:~ # ezjail-admin console httpd
root@httpd:~ # sysrc apache24_enable=YES
apache24_enable:  -> YES
root@httpd:~ #

 

apache24 setting

root@httpd:~ # cd /usr/local/etc/apache24/
root@httpd:/usr/local/etc/apache24 # cp httpd.conf httpd.conf.org
root@httpd:/usr/local/etc/apache24 # vi httpd.conf
<IfModule dir_module>
    DirectoryIndex index.html index.php
</IfModule>


ServerName 10.0.0.1:80

    #
    AddType application/x-compress .Z
    AddType application/x-gzip .gz .tgz
    AddType application/x-httpd-php .php .inc .html
    AddType application/x-httpd-php-source .phps

 

 

Copy the php.ini file

root@httpd:~ # cd /usr/local/etc/
root@httpd:/usr/local/etc # cp php.ini-production php.ini

 

Create the php.conf file

root@httpd:~ # vi /usr/local/etc/apache24/extra/php.conf
<IfModule dir_module>
    DirectoryIndex index.php index.html
    <FilesMatch "\.php$">
        SetHandler application/x-httpd-php
    </FilesMatch>
    <FilesMatch "\.phps$">
        SetHandler application/x-httpd-php-source
    </FilesMatch>
</IfModule>
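apache24 only reads extra/php.conf if httpd.conf includes it. Following the same relative Include style used earlier for httpd-vhosts.conf, one line makes the fragment take effect (its exact placement among the other Include lines is an assumption):

```
root@httpd:~ # vi /usr/local/etc/apache24/httpd.conf
Include etc/apache24/extra/php.conf
```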

 

Start apache24

root@httpd:/usr/local/etc/apache24 # service apache24 restart
Performing sanity check on apache24 configuration:
Syntax OK
Stopping apache24.
Waiting for PIDS: 21662.
Performing sanity check on apache24 configuration:
Syntax OK
Starting apache24.
root@httpd:/usr/local/etc/apache24 #

 

 

database jail configuration

Connect to the database jail and configure mariadb101.

Start mariadb and then set the database root password.

root@bsd11:~ # ezjail-admin console database
root@database:~ # sysrc mysql_enable=YES
mysql_enable:  -> YES

Start the mariadb101 daemon and set the password:

root@database:~ # service mysql-server start
To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
To do so, start the server, then issue the following commands:

'/usr/local/bin/mysqladmin' -u root password 'new-password'
'/usr/local/bin/mysqladmin' -u root -h database password 'new-password'

Alternatively you can run:
'/usr/local/bin/mysql_secure_installation'

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the MariaDB Knowledgebase at http://mariadb.com/kb or the
MySQL manual for more instructions.

You can start the MariaDB daemon with:
cd '/usr/local' ; /usr/local/bin/mysqld_safe --datadir='/var/db/mysql'

You can test the MariaDB daemon with mysql-test-run.pl
cd '/usr/local/mysql-test' ; perl mysql-test-run.pl

Please report any problems at http://mariadb.org/jira

The latest information about MariaDB is available at http://mariadb.org/.
You can find additional information about the MySQL part at:
http://dev.mysql.com
Consider joining MariaDB's strong and vibrant community:
Get Involved
Starting mysql.
root@database:~ # /usr/local/bin/mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
root@database:~ #
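Under the hood, the interactive script above amounts to roughly the following SQL. This is a sketch for MariaDB 10.1; 'new-password' is a placeholder to replace:

```sql
-- Rough SQL equivalent of the mysql_secure_installation prompts
UPDATE mysql.user SET Password = PASSWORD('new-password') WHERE User = 'root';
DELETE FROM mysql.user WHERE User = '';
DELETE FROM mysql.user
 WHERE User = 'root' AND Host NOT IN ('localhost', '127.0.0.1', '::1');
DROP DATABASE IF EXISTS test;
FLUSH PRIVILEGES;
```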

 

Copy my.cnf and change the character set

Setting bind-address to 0.0.0.0 allows connections from outside the jail.

Restrict access with pf so that only internal hosts can reach the database. // That pf configuration is not covered in this write-up.

root@database:~ # cp /usr/local/share/mysql/my-large.cnf /usr/local/etc/my.cnf
root@database:~ # vi /usr/local/etc/my.cnf

[client]
#password       = your_password
port            = 3306
socket          = /tmp/mysql.sock
default-character-set = utf8


# The MariaDB server
[mysqld]
bind-address=0.0.0.0
character-set-server=utf8
skip-character-set-client-handshake
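A quick way to sanity-check the edit is to confirm that both the [client] and [mysqld] sections pin utf8. A minimal sketch, run here against a temp copy rather than the live /usr/local/etc/my.cnf:

```shell
# Sketch: count the utf8 character-set lines in a my.cnf-style file.
cnf="$(mktemp)"
cat > "$cnf" <<'EOF'
[client]
default-character-set = utf8

[mysqld]
bind-address=0.0.0.0
character-set-server=utf8
skip-character-set-client-handshake
EOF
matches="$(grep -c 'utf8' "$cnf")"   # expect 2: one per section
echo "utf8 settings found: $matches"
rm -f "$cnf"
```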

 

Restart MariaDB, then check the character set with status

root@database:~ # service mysql-server restart
root@database:~ # mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 10.1.31-MariaDB FreeBSD Ports

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> status;
--------------
mysql  Ver 15.1 Distrib 10.1.31-MariaDB, for FreeBSD11.1 (amd64) using readline 5.1

Connection id:          3
Current database:
Current user:           root@localhost
SSL:                    Not in use
Current pager:          more
Using outfile:          ''
Using delimiter:        ;
Server:                 MariaDB
Server version:         10.1.31-MariaDB FreeBSD Ports
Protocol version:       10
Connection:             Localhost via UNIX socket
Server characterset:    utf8
Db     characterset:    utf8
Client characterset:    utf8
Conn.  characterset:    utf8
UNIX socket:            /tmp/mysql.sock
Uptime:                 11 sec

Threads: 1  Questions: 4  Slow queries: 0  Opens: 17  Flush tables: 1  Open tables: 11  Queries per second avg: 0.363
--------------

MariaDB [(none)]>
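Beyond status, the server-side settings can be inspected directly with standard MariaDB statements:

```sql
-- All character-set related server variables
SHOW VARIABLES LIKE 'character_set_%';
-- Matching collation settings
SHOW VARIABLES LIKE 'collation%';
```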

Database configuration is complete.

 

To test the stack, install WordPress. 🙂

WordPress can be downloaded from https://ko.wordpress.org/download/.

From the host, copy the wordpress archive into the httpd jail's root directory.

root@bsd11:~ # cp wordpress-4.9.4-ko_KR.zip /usr/jails/httpd/root/

 

Create a test.php file

root@httpd:~ # cd /usr/local/www/apache24/data
root@httpd:/usr/local/www/apache24/data # vi test.php
<?php
phpinfo();
?>

Check the phpinfo page in a browser

Ready to install WordPress. 🙂

 

database jail / creating the db

User name: wp / database: wp / password: password.

Grant access from '%' rather than localhost so that remote connections are allowed.

root@database:~ # mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.1.31-MariaDB FreeBSD Ports

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create database wp;
Query OK, 1 row affected (0.01 sec)

MariaDB [(none)]> use mysql;
Database changed
MariaDB [mysql]> GRANT ALL ON wp.* TO 'wp'@'%' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.00 sec)

MariaDB [mysql]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

MariaDB [mysql]> quit;
Bye
root@database:~ #
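To double-check the account, these standard statements (run as root) show what was created:

```sql
-- Confirm the wp account exists for any host
SELECT User, Host FROM mysql.user WHERE User = 'wp';
-- Confirm its privileges on the wp database
SHOW GRANTS FOR 'wp'@'%';
```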

 

Test a remote login to the db from outside the jail.

root@bsd11:~ # pkg install mariadb101-client
root@bsd11:~ # mysql -h10.0.0.2 -uwp -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.1.31-MariaDB FreeBSD Ports

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

 

Extract the wordpress archive in the httpd jail.

root@httpd:~ # cp wordpress-4.9.4-ko_KR.zip /usr/local/www/apache24/data/
root@httpd:/usr/local/www/apache24/data # tar xvf wordpress-4.9.4-ko_KR.zip
root@httpd:/usr/local/www/apache24/data # chown -R www:www wordpress
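During setup, WordPress writes these values into wp-config.php; the relevant fragment looks roughly like the following, using the credentials and the database jail IP from this walkthrough (adjust for your environment):

```php
// Sketch of the relevant wp-config.php settings (values are from
// this walkthrough, not defaults).
define('DB_NAME', 'wp');
define('DB_USER', 'wp');
define('DB_PASSWORD', 'password');
define('DB_HOST', '10.0.0.2');
define('DB_CHARSET', 'utf8');
```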

 

Open the site in a web browser.

Click "Let's go!".

Enter the database jail IP as the database host.

Click "Run the installation" to proceed.

Fill in the basic WordPress information and click "Install WordPress".

WordPress installation is complete.

Verify that you can log in.