Rancher Installation

Official site docs : https://rancher.com/docs/rancher/v2.x/en/ 

Reference: https://rancher.com/docs/rancher/v2.x/en/installation/single-node/single-node-install-external-lb/

I run my blog and docs sites with docker-compose.

For low-traffic microservices this works fine, but as the number of services grows it starts to feel limiting,

so I am now testing Rancher, which I only discovered recently.

This post covers only a standard installation and the basic features; multi-host Docker setups will be covered in a separate post.

 

1. Remove and reinstall Docker

If you are running a Docker version that Rancher does not support, remove it first.

[root@CentOS7 ~]# yum remove docker-ce-*
[root@CentOS7 ~]# curl https://releases.rancher.com/install-docker/18.06.sh | sh
[root@CentOS7 ~]# docker version
Client:
 Version:           18.06.3-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        d7080c1
 Built:             Wed Feb 20 02:26:51 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.3-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       d7080c1
  Built:            Wed Feb 20 02:28:17 2019
  OS/Arch:          linux/amd64
  Experimental:     false
[root@CentOS7 ~]#

 

 

 

2. Install Rancher

Caution!!! You must use a Docker version supported by Rancher.

[root@CentOS7 ~]# docker run -d --restart=unless-stopped -p 8080:8080 rancher/server

Installation complete
[root@CentOS7 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                              NAMES
fda18a057b90        rancher/server      "/usr/bin/entry /usr…"   11 seconds ago      Up 9 seconds        3306/tcp, 0.0.0.0:8080->8080/tcp   admiring_meitner
[root@CentOS7 ~]#

 

 

3. Open host:8080 in a browser.

 

4. Click INFRASTRUCTURE.

 

5. Click Add Host.

 

6. Click Save.

For this test we use a single node.

 

7. Enter the IP information.

 

8. Copy the generated script.

 

9. Run the script in a terminal.

[root@CentOS7 ~]# sudo docker run -e CATTLE_AGENT_IP="192.168.0.10"  --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher \
rancher/agent:v1.2.11 http://192.168.0.10:8080/v1/scripts/E0A2CBD52872D58CC86C:1546214400000:EIL0DLMlcfxqiOLg3bxlr9chelc

 

 

10. Verify the Rancher agent

[root@CentOS7 ~]# docker ps
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS              PORTS                              NAMES
fc16c6b8a09c        rancher/agent:v1.2.11   "/run.sh run"            4 seconds ago       Up 3 seconds                                           rancher-agent
fda18a057b90        rancher/server          "/usr/bin/entry /usr…"   12 minutes ago      Up 12 minutes       3306/tcp, 0.0.0.0:8080->8080/tcp   admiring_meitner
[root@CentOS7 ~]#

 

 

11. Check the Rancher status

Once the Rancher agent is installed, you can check its status under INFRASTRUCTURE.

If it does not come up properly, check the Docker version; a Docker reinstall may be required.

The Docker versions supported by Rancher are listed at https://rancher.com/docs/rancher/v1.6/en/hosts/#supported-docker-versions

 

12. Verify Rancher

With a single-node setup, the host status can be checked under INFRASTRUCTURE.

 

13. Create an nginx container with Rancher

Create an nginx container through Rancher. As preparation, create the working directories and the nginx config file.

[root@CentOS7 ~]# mkdir -p /Workspace/nginx/conf
[root@CentOS7 ~]# mkdir /Workspace/wiki
[root@CentOS7 ~]# vi /Workspace/nginx/conf/default.conf
server {
    listen       80 default_server;
    server_name  localhost _;
    index        index.html index.htm;
    root         /code;

    location / {
        autoindex on;
    }
}
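
Before wiring this config into Rancher, you can sanity-check its syntax with a throwaway container (a minimal sketch; the paths assume the directories created above):

[root@CentOS7 ~]# docker run --rm -v /Workspace/nginx/conf/default.conf:/etc/nginx/conf.d/default.conf:ro nginx nginx -t

If the file is valid, nginx reports "syntax is ok" and "test is successful" and the container exits.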

 

 

14. Log in to Rancher and create a Service.

Click Default -> Default to enter the Stack menu.

Click Add Service to create the nginx service.

 

 

15. Add Service

Set the Name, Select Image, and Port Map fields.

 

16. Configure the Volumes tab

Map the directories created earlier.

 

17. Check the nginx service status

Click Create and Rancher creates the container.

  • The nginx container is being created.

 

  • The nginx container has been created.

 

  • Check the web site

Accessing the host by IP shows the contents of /Workspace/wiki, the nginx document root.

This much configuration is enough for hosting simple docs.
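
As a quick check from a terminal (a minimal sketch; 192.168.0.10 is the host IP used elsewhere in this post, and it assumes the port map in step 15 published port 80):

[root@CentOS7 ~]# curl -I http://192.168.0.10/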

NFS Persistent Volume

Create NFS Persistent Volumes. The services used for this test are mysql and wordpress.

For a test it is fine to configure everything exactly as below, but if you run this on-premises in production you should separate the storage network, and the NFS traffic should run over a 10G network.

Reference: https://docs.okd.io/latest/install_config/persistent_storage/persistent_storage_nfs.html

 

 

  • nfs-server (storage) setup

Install nfs-utils on a separate VM and export the /data directory.

[root@k8s-storage ~]# yum install -y nfs-utils
[root@k8s-storage ~]# mkdir -p /data/{mysql,html}
[root@k8s-storage ~]#  chmod -R 755 /data
[root@k8s-storage ~]#  chown -R nfsnobody:nfsnobody /data/

[root@k8s-storage ~]# vi /etc/exports
/data/html      *(rw,sync,no_root_squash)
/data/mysql     *(rw,sync,no_root_squash)
[root@k8s-storage ~]#

[root@k8s-storage ~]#  systemctl enable nfs-server ; systemctl start nfs-server
[root@k8s-storage ~]#  systemctl status nfs-server
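
To confirm the exports are actually being served, the standard nfs-utils tools can be used on the storage VM (a minimal sketch):

[root@k8s-storage ~]# exportfs -v
[root@k8s-storage ~]# showmount -e localhost

Both commands should list /data/html and /data/mysql.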

 

 

  • Install the nfs-utils package

Install the nfs-utils package on every node (a quick connectivity check follows below).

Install on k8s-master
[root@k8s-master ~]# yum install -y nfs-utils

Install on k8s-node01
[root@k8s-node01 ~]# yum install -y nfs-utils

Install on k8s-node02
[root@k8s-node02 ~]# yum install -y nfs-utils
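
From any of the nodes, you can verify that the NFS exports are reachable before creating the Persistent Volumes (a minimal sketch; 10.10.10.18 is the NFS server address used in the PV manifests below):

[root@k8s-node01 ~]# showmount -e 10.10.10.18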

 

 

  • Create a wordpress working directory
[root@k8s-master ~]# mkdir wordpress
[root@k8s-master ~]# cd wordpress

 

 

 

  • Create the pv-wordpress.yaml file

Create the Persistent Volumes that wordpress and mysql will use.

[root@k8s-master wordpress]# vi pv-wordpress.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: wordpress-volume
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.10.10.18
    # Exported path of your NFS server
    path: "/data/html"
 
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-volume
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.10.10.18
    # Exported path of your NFS server
    path: "/data/mysql"

 

 

 

  • Create pv-wordpress

Create the Persistent Volumes from the pv-wordpress.yaml file.

[root@k8s-master wordpress]# kubectl create -f pv-wordpress.yaml
persistentvolume/wordpress-volume created
persistentvolume/mysql-volume created
[root@k8s-master wordpress]#



[root@k8s-master wordpress]# kubectl get pv
NAME               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mysql-volume       10Gi       RWX            Retain           Available                                   15s
wordpress-volume   10Gi       RWX            Retain           Available                                   15s
[root@k8s-master wordpress]#

 

 

 

  • Create the pvc-wordpress.yaml file

Create the Persistent Volume Claims.

[root@k8s-master wordpress]# vi pvc-wordpress.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: wordpress-volumeclaim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-volumeclaim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

 

 

  • Create pvc-wordpress

Create the Persistent Volume Claims from the pvc-wordpress.yaml file.

[root@k8s-master wordpress]# kubectl create -f pvc-wordpress.yaml
persistentvolumeclaim/wordpress-volumeclaim created
persistentvolumeclaim/mysql-volumeclaim created
[root@k8s-master wordpress]#


[root@k8s-master wordpress]# kubectl get pvc
NAME                    STATUS   VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-volumeclaim       Bound    mysql-volume       10Gi       RWX                           6s
wordpress-volumeclaim   Bound    wordpress-volume   10Gi       RWX                           6s
[root@k8s-master wordpress]#

 

 

  • Create the mysql-password secret
  • To delete the secret: kubectl delete secret mysql-password
[root@k8s-master wordpress]# kubectl create secret generic mysql-password --from-literal=password=mysqlpassword
secret/mysql-password created
[root@k8s-master wordpress]# kubectl describe secret mysql-password
Name:         mysql-password
Namespace:    default
Labels:       <none>
Annotations:  <none>
 
Type:  Opaque
 
Data
====
password:  13 bytes
[root@k8s-master wordpress]#
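
If you need to confirm the stored value later, the secret can be decoded with jsonpath (a minimal sketch):

[root@k8s-master wordpress]# kubectl get secret mysql-password -o jsonpath='{.data.password}' | base64 --decode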

 

 

 

  • Create the mysql pod manifest
[root@k8s-master wordpress]# vi mysql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-password
                  key: password
            - name: MYSQL_DATABASE
              value: wordpress         # DB name WordPress will use
            - name: MYSQL_USER
              value: wordpress         # DB user WordPress will use
            - name: MYSQL_ROOT_HOST
              value: '%'
            - name: MYSQL_PASSWORD
              value: wordpress         # WordPress database password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-volumeclaim

 

 

  • Create the mysql pod (a quick connection check follows below)
[root@k8s-master wordpress]# kubectl create -f mysql.yaml
deployment.apps/mysql created
[root@k8s-master wordpress]# kubectl get pods -o wide
NAME                     READY   STATUS              RESTARTS   AGE   IP       NODE         NOMINATED NODE   READINESS GATES
mysql-5d4c989597-w9s2s   0/1     ContainerCreating   0          6s    <none>   k8s-node01   <none>           <none>
 
It takes about 1-2 minutes.
[root@k8s-master wordpress]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP          NODE         NOMINATED NODE   READINESS GATES
mysql-5d4c989597-w9s2s   1/1     Running   0          34s   20.20.2.2   k8s-node01   <none>           <none>
[root@k8s-master wordpress]#
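
Once the pod is Running, you can optionally confirm that the wordpress user and database were created (a minimal sketch; the pod is looked up by its app=mysql label):

[root@k8s-master wordpress]# kubectl exec -it $(kubectl get pod -l app=mysql -o jsonpath='{.items[0].metadata.name}') -- mysql -uwordpress -pwordpress -e 'SHOW DATABASES;'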

 

 

  • Create the mysql service
[root@k8s-master wordpress]# vi mysql-service.yaml
apiVersion: v1
 
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: ClusterIP
  ports:
    - port: 3306
  selector:
    app: mysql
 
[root@k8s-master wordpress]# kubectl create -f mysql-service.yaml
service/mysql created
[root@k8s-master wordpress]#
 
 
[root@k8s-master wordpress]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   SELECTOR
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    82d   <none>
mysql        ClusterIP   10.102.149.39   <none>        3306/TCP   7s    app=mysql
[root@k8s-master wordpress]#

 

 

  • Create the wordpress pod
[root@k8s-master wordpress]# vi wordpress.yaml
apiVersion: apps/v1
 
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - image: wordpress
          name: wordpress
          env:
          - name: WORDPRESS_DB_HOST
            value: mysql:3306
          - name: WORDPRESS_DB_NAME
            value: wordpress
          - name: WORDPRESS_DB_USER
            value: wordpress
          - name: WORDPRESS_DB_PASSWORD
            value: wordpress
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wordpress-volumeclaim
 
 
[root@k8s-master wordpress]# kubectl create -f wordpress.yaml
deployment.apps/wordpress created
[root@k8s-master wordpress]# kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
mysql-6845698854-46pmx       1/1     Running   0          2m12s
wordpress-74747f4dbf-fbbnh   1/1     Running   0          12s
 
[root@k8s-master wordpress]#

 

 

  • Create the wordpress service
[root@k8s-master wordpress]# vi wordpress-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wordpress
  name: wordpress
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: wordpress
 
[root@k8s-master wordpress]# kubectl create -f wordpress-service.yaml
service/wordpress created
[root@k8s-master wordpress]#
 
 
[root@k8s-master wordpress]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        95d   <none>
mysql        ClusterIP   10.99.245.34    <none>        3306/TCP       65s   app=mysql
wordpress    NodePort    10.104.25.182   <none>        80:31868/TCP   4s    app=wordpress
[root@k8s-master wordpress]#
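
The wordpress service is exposed as a NodePort (80:31868 in the output above), so it can be reached on that port of any node (a minimal sketch; the node name comes from this test environment):

[root@k8s-master wordpress]# curl -I http://k8s-node01:31868/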

 

 

 

 

 

Deploying wordpress and mysql with Persistent Volumes

Deploy wordpress and mysql onto Persistent Volumes in Kubernetes.

Reference: https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk?hl=ko

https://kubernetes.io/ko/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#mysql과-wordpress에-필요한-리소스-구성-추가하기

 

 

1. Check the Kubernetes services

[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   calico-node-fcbq8                       2/2     Running   2          81d
kube-system   calico-node-kzqlv                       2/2     Running   2          81d
kube-system   calico-node-r8ggc                       2/2     Running   2          81d
kube-system   coredns-fb8b8dccf-4zq8l                 1/1     Running   1          81d
kube-system   coredns-fb8b8dccf-fg9l7                 1/1     Running   1          81d
kube-system   etcd-k8s-master                         1/1     Running   1          81d
kube-system   kube-apiserver-k8s-master               1/1     Running   1          81d
kube-system   kube-controller-manager-k8s-master      1/1     Running   1          81d
kube-system   kube-proxy-ph9np                        1/1     Running   1          81d
kube-system   kube-proxy-x28cx                        1/1     Running   1          81d
kube-system   kube-proxy-z252g                        1/1     Running   1          81d
kube-system   kube-scheduler-k8s-master               1/1     Running   1          81d
kube-system   kubernetes-dashboard-5f7b999d65-qmjfx   1/1     Running   1          81d
[root@k8s-master ~]#

 

 

2. Create the Persistent Volumes

Define the Persistent Volumes that wordpress and mysql will use.

[root@k8s-master wordpress]# vi web01.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: web01
  labels:
    type: local
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/web01"



[root@k8s-master wordpress]# vi db01.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: db01
  labels:
    type: local
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/db01"

[root@k8s-master wordpress]# kubectl create -f web01.yaml
persistentvolume/web01 created
[root@k8s-master wordpress]# kubectl create -f db01.yaml
persistentvolume/db01 created
[root@k8s-master wordpress]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
db01    5Gi        RWO            Retain           Available                                   3s
web01   5Gi        RWO            Retain           Available                                   8s
[root@k8s-master wordpress]#
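
Because these PVs use hostPath, the data lives in /data/web01 and /data/db01 on whichever node the pod is scheduled to. To keep the data in a predictable place it can help to pre-create the directories on each worker node (a minimal sketch, not part of the original steps):

[root@k8s-node01 ~]# mkdir -p /data/web01 /data/db01
[root@k8s-node02 ~]# mkdir -p /data/web01 /data/db01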

 

 

3. Create the Persistent Volume Claims

[root@k8s-master wordpress]# vi wordpress-vol.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: wordpress-volumeclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

[root@k8s-master wordpress]# vi mysql-vol.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-volumeclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

[root@k8s-master wordpress]# kubectl create -f wordpress-vol.yaml
persistentvolumeclaim/wordpress-volumeclaim created
[root@k8s-master wordpress]# kubectl create -f mysql-vol.yaml
persistentvolumeclaim/mysql-volumeclaim created
[root@k8s-master wordpress]# kubectl get pvc
NAME                    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-volumeclaim       Bound    db01     5Gi        RWO                           4s
wordpress-volumeclaim   Bound    web01    5Gi        RWO                           11s
[root@k8s-master wordpress]#

 

 

 

 

4. Create the mysql root password

To delete the secret: kubectl delete secret mysql-password

[root@k8s-master wordpress]# kubectl create secret generic mysql-password --from-literal=password=mysqlpassword
secret/mysql-password created
[root@k8s-master wordpress]# kubectl describe secret mysql-password
Name:         mysql-password
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:  13 bytes
[root@k8s-master wordpress]#

 

 

5. Create the mysql pod

Specify the database, database user, and password that wordpress will use.

[root@k8s-master wordpress]# vi mysql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-password
                  key: password
            - name: MYSQL_DATABASE
              value: wordpress         # DB name WordPress will use
            - name: MYSQL_USER
              value: wordpress         # DB user WordPress will use
            - name: MYSQL_ROOT_HOST
              value: '%'
            - name: MYSQL_PASSWORD
              value: wordpress         # WordPress database password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-volumeclaim


[root@k8s-master wordpress]# kubectl create -f mysql.yaml
deployment.apps/mysql created
[root@k8s-master wordpress]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
mysql-6845698854-46pmx   1/1     Running   0          4s    20.20.2.10   k8s-node02   <none>           <none>
[root@k8s-master wordpress]#

 

 

 

6. Create the mysql service

[root@k8s-master wordpress]# vi mysql-service.yaml
apiVersion: v1

kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: ClusterIP
  ports:
    - port: 3306
  selector:
    app: mysql
[root@k8s-master wordpress]# kubectl create -f mysql-service.yaml
service/mysql created
[root@k8s-master wordpress]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    10d
mysql        ClusterIP   10.109.232.234   <none>        3306/TCP   3s
[root@k8s-master wordpress]#

 

 

7. Create the wordpress pod

[root@k8s-master wordpress]# vi wordpress.yaml
apiVersion: apps/v1

kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - image: wordpress
          name: wordpress
          env:
          - name: WORDPRESS_DB_HOST
            value: mysql:3306
          - name: WORDPRESS_DB_NAME
            value: wordpress
          - name: WORDPRESS_DB_USER
            value: wordpress
          - name: WORDPRESS_DB_PASSWORD
            value: wordpress
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wordpress-volumeclaim


[root@k8s-master wordpress]# kubectl create -f wordpress.yaml
deployment.apps/wordpress created
[root@k8s-master wordpress]# kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
mysql-6845698854-46pmx       1/1     Running   0          2m12s
wordpress-74747f4dbf-fbbnh   1/1     Running   0          12s

[root@k8s-master wordpress]#

 

 

8. Create the wordpress service

[root@k8s-master wordpress]# vi wordpress-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wordpress
  name: wordpress
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: wordpress
[root@k8s-master wordpress]# kubectl create -f wordpress-service.yaml
service/wordpress created
[root@k8s-master wordpress]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE     SELECTOR
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        10d     <none>
mysql        ClusterIP   10.111.162.117   <none>        3306/TCP       5m56s   app=mysql
wordpress    NodePort    10.107.48.226    <none>        80:30950/TCP   5m24s   app=wordpress
[root@k8s-master wordpress]#

 

 

 

You can verify by connecting to k8s-master:30950.

k8s-master is 192.168.0.10.
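
For a quick check from the command line (a minimal sketch; 30950 is the NodePort shown in the kubectl get svc output above):

[root@k8s-master ~]# curl -I http://k8s-master:30950/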

 

 

Installing minikube on CentOS 7

Reference pages: https://kubernetes.io/ko/docs/tasks/tools/install-minikube/

https://computingforgeeks.com/how-to-run-minikube-on-kvm/

This post summarizes how to install minikube in a KVM environment. On VirtualBox or VMware, a 3-node Kubernetes setup installs a bit more easily. 🙂

 

 

1. Check virtualization support

[root@kvm-server01 ~]# grep -E --color 'vmx|svm' /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cp
l vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt aes lahf_lm ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid
dtherm ida arat spec_ctrl intel_stibp flush_l1d
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cp
l vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt aes lahf_lm ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid
dtherm ida arat spec_ctrl intel_stibp flush_l1d
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cp
l vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt aes lahf_lm ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid
dtherm ida arat spec_ctrl intel_stibp flush_l1d
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cp
l vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt aes lahf_lm ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid
dtherm ida arat spec_ctrl intel_stibp flush_l1d
... (output truncated)

 

 

2. Download minikube

[root@kvm-server01 ~]# wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
[root@kvm-server01 ~]# chmod +x minikube-linux-amd64
[root@kvm-server01 ~]# mv minikube-linux-amd64 /usr/local/bin/minikube
[root@kvm-server01 ~]# minikube version
minikube version: v1.3.1
commit: ca60a424ce69a4d79f502650199ca2b52f29e631
[root@kvm-server01 ~]#

 

 

3. Install kubectl

[root@kvm-server01 ~]# curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 40.9M 100 40.9M 0 0 29.0M 0 0:00:01 0:00:01 --:--:-- 29.0M
[root@kvm-server01 ~]# chmod +x kubectl
[root@kvm-server01 ~]# mv kubectl /usr/local/bin/
[root@kvm-server01 ~]# kubectl version -o json
{
  "clientVersion": {
    "major": "1",
    "minor": "15",
    "gitVersion": "v1.15.3",
    "gitCommit": "2d3c76f9091b6bec110a5e63777c332469e0cba2",
    "gitTreeState": "clean",
    "buildDate": "2019-08-19T11:13:54Z",
    "goVersion": "go1.12.9",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@kvm-server01 ~]#

 

 

4. Install Docker Machine KVM Driver

[root@kvm-server01 ~]# curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 13.8M 100 13.8M 0 0 13.4M 0 0:00:01 0:00:01 --:--:-- 13.5M
[root@kvm-server01 ~]#
[root@kvm-server01 ~]# chmod +x docker-machine-driver-kvm2
[root@kvm-server01 ~]# mv docker-machine-driver-kvm2 /usr/local/bin/

Starting minikube

[root@kvm-server01 ~]# minikube start --vm-driver kvm2
* minikube v1.3.1 on Centos 7.6.1810
! Please don't run minikube as root or with 'sudo' privileges. It isn't necessary with kvm2 driver.
* Downloading VM boot image ...
minikube-v1.3.0.iso.sha256: 65 B / 65 B [--------------------] 100.00% ? p/s 0s
minikube-v1.3.0.iso: 131.07 MiB / 131.07 MiB [-------] 100.00% 48.99 MiB p/s 3s
* Creating kvm2 VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
* Preparing Kubernetes v1.15.2 on Docker 18.09.8 ...
* Downloading kubeadm v1.15.2
* Downloading kubelet v1.15.2
* Pulling images ...
* Launching Kubernetes ...
* Waiting for: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"
[root@kvm-server01 ~]#


[root@kvm-server01 ~]# sudo virsh list |grep mini
320 minikube running
[root@kvm-server01 ~]#

[root@kvm-server01 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.39.228:8443
KubeDNS is running at https://192.168.39.228:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@kvm-server01 ~]#


[root@kvm-server01 ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/.minikube/ca.crt
    server: https://192.168.39.228:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /root/.minikube/client.crt
    client-key: /root/.minikube/client.key
[root@kvm-server01 ~]#


[root@kvm-server01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   15m   v1.15.2
[root@kvm-server01 ~]# minikube ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ cat /etc/os-release
NAME=Buildroot
VERSION=2018.05.3
ID=buildroot
VERSION_ID=2018.05.3
PRETTY_NAME="Buildroot 2018.05.3"
$

 

 

kubernetes install

Caution!!! This document is a work in progress and its contents have not been fully cleaned up.

Please treat it as a rough reference only.

Reference sites:

https://www.howtoforge.com/tutorial/centos-kubernetes-docker-cluster/

https://kubernetes.io/docs/setup/cri/

https://juejin.im/post/5caea3ffe51d456e79545c32

https://cloud.tencent.com/developer/article/1409419

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

 

 

  • Steps to run on all nodes
  • Configure the hosts file
[root@k8s-all-node ~]# vi /etc/hosts
10.10.10.27     k8s-master
10.10.10.28     k8s-node01
10.10.10.29     k8s-node02

 

  • SELINUX Disable
[root@k8s-all-node ~]# vi /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
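
Editing the config file only takes effect after a reboot; to switch SELinux to permissive for the current session as well, the following is commonly used (a minimal sketch, not in the original steps):

[root@k8s-all-node ~]# setenforce 0
[root@k8s-all-node ~]# getenforce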

 

  • firewalld disable
[root@k8s-all-node ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
[root@k8s-all-node ~]#
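
The output above shows an empty ruleset. If firewalld is still running, it can be stopped and disabled like this (a minimal sketch; alternatively, open the required Kubernetes ports instead of disabling the firewall):

[root@k8s-all-node ~]# systemctl stop firewalld ; systemctl disable firewalld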

 

  • Configure sysctl
[root@k8s-all-node ~]# cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF


[root@k8s-all-node ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...
[root@k8s-all-node ~]#

 

  • swap off
[root@k8s-all-node ~]# swapoff -a

[root@k8s-all-node ~]# vi /etc/fstab
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=d7bb5d3b-5b37-47e0-8c26-fe40f7311597 /                       xfs     defaults        0 0
UUID=43ec35ea-2e35-46f1-864c-b13603a8acac /boot                   xfs     defaults        0 0
#UUID=2de336ec-4a33-36r1-8w2s-asdf2342ccgg swap                   swap     defaults        0 0
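
After swapoff and commenting out the swap entry in /etc/fstab, swap usage should show 0 (a minimal sketch):

[root@k8s-all-node ~]# free -m | grep -i swap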

 

[root@k8s-all-node ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-all-node ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@k8s-all-node ~]# yum list docker-ce --showduplicates | sort -r
[root@k8s-all-node ~]# yum install -y docker-ce-18.06.3.ce

[root@k8s-all-node ~]# mkdir /etc/docker

# cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF


[root@k8s-all-node ~]# mkdir -p /etc/systemd/system/docker.service.d
[root@k8s-all-node ~]# systemctl daemon-reload
[root@k8s-all-node ~]# systemctl restart docker

 

  • kubernetes install & system rebooting
[root@k8s-all-node ~]# vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

[root@k8s-all-node ~]# yum install -y kubelet kubeadm kubectl
[root@k8s-all-node ~]# init 6

[root@k8s-all-node ~]# systemctl start docker ; systemctl enable docker
[root@k8s-all-node ~]# systemctl start kubelet ; systemctl enable kubelet

 

  • k8s-master only
  • coredns pods will only change to Running after the network add-on is installed.
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=10.10.10.27 --pod-network-cidr=20.20.0.0/16
... (output truncated)

[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.10.27:6443 --token syojz8.svxybs8x0f3iy28a \
    --discovery-token-ca-cert-hash sha256:b28c6474e92e2bc87e8f7b470119e506df36ae6ae08a8f50dd070f5d714a28e1
[root@k8s-master ~]#

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   2m22s   v1.14.1
[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-c9hvh              0/1     Pending   0          78s
kube-system   coredns-fb8b8dccf-hmt6w              0/1     Pending   0          78s
kube-system   etcd-k8s-master                      1/1     Running   0          41s
kube-system   kube-apiserver-k8s-master            1/1     Running   0          42s
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          31s
kube-system   kube-proxy-92c9h                     1/1     Running   0          78s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          16s
[root@k8s-master ~]#

 

[root@k8s-master ~]# kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
[root@k8s-master ~]# kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   calico-node-r72sb                    2/2     Running   0          38s
kube-system   coredns-fb8b8dccf-c9hvh              0/1     Running   0          4m15s
kube-system   coredns-fb8b8dccf-hmt6w              0/1     Running   0          4m15s
kube-system   etcd-k8s-master                      1/1     Running   0          3m38s
kube-system   kube-apiserver-k8s-master            1/1     Running   0          3m39s
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          3m28s
kube-system   kube-proxy-92c9h                     1/1     Running   0          4m15s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          3m13s
[root@k8s-master ~]#

 

  • Check on k8s-master
[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 10.10.10.27:6443 --token eq8odd.rxcfznxvepos1pg8     --discovery-token-ca-cert-hash sha256:aa3949ebeec315e5d303a18fc049c33a89a9110d8bdec0a93f3c065dcb78c689 
[root@k8s-master ~]#

 

  • Run on k8s-node01 / k8s-node02
[root@k8s-node01 ~]# kubeadm join 10.10.10.27:6443 --token \
 eq8odd.rxcfznxvepos1pg8     --discovery-token-ca-cert-hash sha256:aa3949ebeec315e5d303a18fc049c33a89a9110d8bdec0a93f3c065dcb78c689

[root@k8s-node02 ~]# kubeadm join 10.10.10.27:6443 --token \
eq8odd.rxcfznxvepos1pg8     --discovery-token-ca-cert-hash sha256:aa3949ebeec315e5d303a18fc049c33a89a9110d8bdec0a93f3c065dcb78c689

 

  • Verify on k8s-master
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   8m43s   v1.14.3
k8s-node01   Ready    <none>   73s     v1.14.3
k8s-node02   Ready    <none>   62s     v1.14.3
[root@k8s-master ~]#

 

  • Generate certificates for the dashboard installation.
[root@k8s-master ~]# mkdir /root/certs
[root@k8s-master ~]# cd /root/certs
[root@k8s-master certs]# openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
[root@k8s-master certs]# openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
[root@k8s-master certs]# openssl req -new -key dashboard.key -out dashboard.csr
[root@k8s-master certs]# openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt

 

  • Install the dashboard.
[root@k8s-master ~]# kubectl create secret generic kubernetes-dashboard-certs --from-file=/root/certs -n kube-system
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

 

  • Change the dashboard service settings (an equivalent one-line patch is shown after the YAML below).
[root@k8s-master ~]# kubectl edit service kubernetes-dashboard -n kube-system
#   type: ClusterIP    <--  change this to NodePort

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  creationTimestamp: "2019-06-12T07:41:01Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "2224"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: 6cb7d772-8ce5-11e9-ad2b-525400fce674
spec:
  clusterIP: 10.108.72.190
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
#  type: ClusterIP    <--  before the change
  type: NodePort      <--  after the change
status:
  loadBalancer: {}
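
If you prefer not to use the interactive editor, the same change can be applied with a one-line patch (a minimal sketch of an equivalent command):

[root@k8s-master ~]# kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'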

 

  • Check the dashboard status and connection information
  • It is mapped as 443:30906/TCP.
[root@k8s-master ~]#  kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
calico-node-8mvl8                       2/2     Running   0          15m
calico-node-br9sw                       2/2     Running   0          15m
calico-node-r72sb                       2/2     Running   0          18m
coredns-fb8b8dccf-c9hvh                 1/1     Running   0          22m
coredns-fb8b8dccf-hmt6w                 1/1     Running   0          22m
etcd-k8s-master                         1/1     Running   0          21m
kube-apiserver-k8s-master               1/1     Running   0          21m
kube-controller-manager-k8s-master      1/1     Running   0          21m
kube-proxy-6t9vw                        1/1     Running   0          15m
kube-proxy-8vw5v                        1/1     Running   0          15m
kube-proxy-92c9h                        1/1     Running   0          22m
kube-scheduler-k8s-master               1/1     Running   0          21m
kubernetes-dashboard-5f7b999d65-t88x2   1/1     Running   0          3m56s
[root@k8s-master ~]# 

[root@k8s-master ~]# kubectl get service -n kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
calico-typha           ClusterIP   10.101.41.222   <none>        5473/TCP                 20m
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   23m
kubernetes-dashboard   NodePort    10.108.72.190   <none>        443:30906/TCP            5m5s
[root@k8s-master ~]#

 

  • Create a dashboard service account
[root@k8s-master ~]# kubectl create serviceaccount cluster-admin-dashboard-sa
[root@k8s-master ~]# kubectl create clusterrolebinding cluster-admin-dashboard-sa --clusterrole=cluster-admin --serviceaccount=default:cluster-admin-dashboard-sa

 

  • Generate the dashboard token
[root@k8s-master ~]# kubectl get secret $(kubectl get serviceaccount cluster-admin-dashboard-sa -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNsdXN0ZXItYWRtaW4tZGFzaGJvYXJkLXNhLXRva2VuLWNzZ3A4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXItYWRtaW4tZGFzaGJvYXJkLXNhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNGFkYjU3Y2QtOGNlNi0xMWU5LWFkMmItNTI1NDAwZmNlNjc0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6Y2x1c3Rlci1hZG1pbi1kYXNoYm9hcmQtc2EifQ.E_T09ftzrV_68Ie0nuthJ1yjFeNByeok87x3F653dB9Pt0a7n6hWGOZsiCUaU0mevm56kl2QUgzV5J-waNvr5Fv4IZ5NMmId_XfIGWlsul2P6y4wag96DuG65K1T2DwoGix4GO8a1p7HISOQ0knxr0OVMOjXRLcOXUov3h3Mv87T-O1gjVIUHAMvB70aZK1ScBaULegqzQbHwjpRc7FFOKUQB4HANJ6gw1asMF4yw0M_dF3GK16GaCxxKEW6rQWGrdN_TNB2nIXKgKqfqHS_35o02yYd2_cU3TDZ14xGl7F2zSVJxzB99ftyC6pwquPF3y3qhXeUFNU0tyCyxKUrWQ
[root@k8s-master ~]#

 

  • Connect to the dashboard ( https://10.10.10.27:30906/#!/login )
  • Click "Proceed anyway" to bypass the self-signed certificate warning.

 

  • Enter the token.

 

  • Dashboard screen

 

  • k8s testing (work in progress)
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
[root@k8s-master ~]# kubectl describe deployment nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Fri, 03 May 2019 00:28:11 +0900
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  <none>
NewReplicaSet:   nginx-65f88748fd (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  18s   deployment-controller  Scaled up replica set nginx-65f88748fd to 1
[root@k8s-master ~]#


[root@k8s-master ~]# kubectl create service nodeport nginx --tcp=80:80


[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        6m33s
nginx        NodePort    10.102.109.228   <none>        80:30187/TCP   21s
[root@k8s-master ~]#

 

  • nginx 확인
[root@k8s-master ~]#  curl k8s-node01:30187
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@k8s-master ~]#

 

  • pods scale
Initially there is one pod.

[root@k8s-master ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-65f88748fd-8lqrb   1/1     Running   0          5m12s
[root@k8s-master ~]#

Scale the pods out to 5.
[root@k8s-master ~]# kubectl scale --replicas=5 deployment/nginx
deployment.extensions/nginx scaled


[root@k8s-master ~]# kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
nginx-65f88748fd-6v7n5   1/1     Running             0          13s
nginx-65f88748fd-86svl   0/1     ContainerCreating   0          13s
nginx-65f88748fd-8lqrb   1/1     Running             0          12m
nginx-65f88748fd-pq8p8   0/1     ContainerCreating   0          13s
nginx-65f88748fd-w4tq8   0/1     ContainerCreating   0          13s
[root@k8s-master ~]#

 

  • Delete the pods
[root@k8s-master ~]# kubectl delete deployment/nginx

Confirm the deletion
[root@k8s-master ~]# kubectl get pods -o wide
No resources found.

 

 

Docker image backup and restore

docker command : docker save & docker load

  • Check the names of the Docker images to back up.
[root@centos-docker ftp-service]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                      NAMES
b27a0fcf0969        ssh-server          "/usr/sbin/sshd -D"      2 minutes ago       Up 2 minutes        0.0.0.0:22222->22/tcp                                      ssh-server
1a1af135eebc        pure-ftpd           "/bin/sh -c '/usr/sb…"   2 minutes ago       Up 2 minutes        0.0.0.0:21->21/tcp, 0.0.0.0:20000-20099->20000-20099/tcp   ftpd
[root@centos-docker ftp-service]#

 

  • Back up the images with the docker save command (an optional compressed variant is shown below).
[root@centos-docker ftp-service]# docker save -o ssh-server.tar ssh-server
[root@centos-docker ftp-service]# docker save -o pure-ftpd.tar pure-ftpd
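
The tar files can get large; if disk space matters they can be compressed on the fly (a minimal sketch of an optional variation):

[root@centos-docker ftp-service]# docker save ssh-server | gzip > ssh-server.tar.gz

A gzipped archive can be restored later with "gunzip -c ssh-server.tar.gz | docker load", or passed directly to docker load, which accepts gzip-compressed input.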

 

  • Back up the docker-compose directory.
[root@centos-docker ftp-service]# cd ..
[root@centos-docker ~]# ls
anaconda-ks.cfg  ftp-service
[root@centos-docker ~]# tar cvf ftp-service.tar ftp-service/
ftp-service/
ftp-service/ssh-server/
ftp-service/ssh-server/Dockerfile
ftp-service/docker-compose.yml
ftp-service/pure-ftpd/
ftp-service/pure-ftpd/Dockerfile
ftp-service/pure-ftpd/pureftpd.passwd
ftp-service/ssh-server.tar
ftp-service/pure-ftpd.tar
[root@centos-docker ~]#

 

  • Restore the Docker images (copy the ftp-service.tar file to the target system using scp or ftp).
[root@centos74-docker02 ~]# scp root@192.168.1.23:/root/ftp-service.tar .
[root@centos74-docker02 ~]# tar xvf ftp-service.tar
[root@centos74-docker02 ~]# cd ftp-service/

 

  • Load the Docker images with the docker load command.
[root@centos74-docker02 ftp-service]# docker load -i ssh-server.tar
aa54c2bc1229: Loading layer [==================================================>]  121.6MB/121.6MB
7dd604ffa87f: Loading layer [==================================================>]  15.87kB/15.87kB
2f0d1e8214b2: Loading layer [==================================================>]  11.78kB/11.78kB
297fd071ca2f: Loading layer [==================================================>]  3.072kB/3.072kB
4f78d015fcfa: Loading layer [==================================================>]  5.632kB/5.632kB
5b2e491f227c: Loading layer [==================================================>]  104.2MB/104.2MB
0e6f6b46199d: Loading layer [==================================================>]  3.072kB/3.072kB
Loaded image: ssh-server:latest
[root@centos74-docker02 ftp-service]# docker load -i pure-ftpd.tar
eef560b4ec4f: Loading layer [==================================================>]    197MB/197MB
11a0c2f551fd: Loading layer [==================================================>]  209.9kB/209.9kB
dda5ec330bd9: Loading layer [==================================================>]  7.168kB/7.168kB
5f96fa66dc12: Loading layer [==================================================>]  3.072kB/3.072kB
5e158c9ee888: Loading layer [==================================================>]  5.632kB/5.632kB
df1e500aba99: Loading layer [==================================================>]  210.5MB/210.5MB
f97cf7fc54cb: Loading layer [==================================================>]  13.94MB/13.94MB
d622b75b6850: Loading layer [==================================================>]  5.637MB/5.637MB
f1f64220d033: Loading layer [==================================================>]  10.75kB/10.75kB
93a7b057a761: Loading layer [==================================================>]  4.293MB/4.293MB
34e855b6a251: Loading layer [==================================================>]  582.7kB/582.7kB
44de25e21a3f: Loading layer [==================================================>]  379.4kB/379.4kB
62f4467edb00: Loading layer [==================================================>]  3.584kB/3.584kB
3b53068e8e0f: Loading layer [==================================================>]   5.12kB/5.12kB
6a75a4844f83: Loading layer [==================================================>]  3.072kB/3.072kB
Loaded image: pure-ftpd:latest
[root@centos74-docker02 ftp-service]#

 

  • Check the Docker images.
  • Run docker-compose.
[root@centos74-docker02 ftp-service]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ssh-server          latest              286889354875        27 minutes ago      217MB
pure-ftpd           latest              50a17f653588        28 minutes ago      415MB
[root@centos74-docker02 ftp-service]#
[root@centos74-docker02 ftp-service]# docker-compose up -d
Creating ssh-server ... done
Creating ftpd       ... done
[root@centos74-docker02 ftp-service]#

 

  • Verify the services.

 

I run pure-ftpd for personal use on Docker.

With only a few users this is not a problem, but for security reasons, and because of an issue where certain directories kept being modified,

I am going to change the ftp user permissions in a very simple way.

For the docker-compose setup, please refer to the post I published earlier.

The link is below. 🙂

[docker] docker-compose pure-ftpd ssh-server setup

 

  • Change the directory permissions
# The ftp data directory in use is /ftp-data.
Give /ftp-data 755 permissions.
# chmod -R 755 /ftp-data

 

  • Edit pure-ftpd/Dockerfile
[root@centos-docker ftp-service]# ll
total 4
-rw-rw-r-- 1 test01 test01 647 Mar  3  2017 docker-compose.yml
drwxrwxr-x 2 test01 test01  47 Apr 23 20:04 pure-ftpd
drwxrwxr-x 2 test01 test01  24 Apr 22 20:00 ssh-server
[root@centos-docker ftp-service]#

[root@centos-docker ftp-service]# cat pure-ftpd/Dockerfile
FROM ubuntu:14.04

MAINTAINER test@test.com

RUN sed -Ei 's/^# deb-src /deb-src /' /etc/apt/sources.list
RUN apt-get update && \
apt-get install pure-ftpd openssl libpam-dev libcap2-dev libldap2-dev libmysqlclient-dev libmysqlclient15-dev libpq-dev libssl-dev po-debconf dpkg-dev debhelper -y

RUN mkdir /tmp/pure-ftpd/ && \
        cd /tmp/pure-ftpd/ && \
        apt-get source pure-ftpd && \
        cd pure-ftpd-* && \
        sed -i '/^optflags=/ s/$/ --without-capabilities/g' ./debian/rules && \
        dpkg-buildpackage -b -uc
RUN dpkg -i /tmp/pure-ftpd/pure-ftpd-common*.deb
RUN apt-get -y install openbsd-inetd
RUN dpkg -i /tmp/pure-ftpd/pure-ftpd_*.deb
RUN apt-mark hold pure-ftpd pure-ftpd-common
RUN cd /etc/pure-ftpd && \
adduser ftpd-data && \
adduser ftpd-users && \          <-- added line

 

  • Run docker-compose
[root@centos-docker ftp-service]# docker-compose up -d --build

 

  • Create pure-ftpd users and passwords
[root@centos-docker ftp-service]# docker ps
CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                                                      NAMES
417c5aee93e3        ssh-server               "/usr/sbin/sshd -D"      33 minutes ago      Up 33 minutes       0.0.0.0:22222->22/tcp                                      ssh-server
73ec4613358b        private/pure-ftpd:14.04   "/bin/sh -c '/usr/sb…"   33 minutes ago      Up 33 minutes       0.0.0.0:21->21/tcp, 0.0.0.0:20000-20099->20000-20099/tcp   ftpd
[root@centos-docker ftp-service]# docker
[root@centos-docker ftp-service]# docker exec -it 73ec4613358b /bin/bash

FTP user setup:
the ftpd-users group is a read-only user group,
and the ftpd-data group is a write user group.
root@73ec4613358b:/# pure-pw useradd test01 -u ftpd-users -g ftpd-users -d /home/ftp
Password:
Enter it again:
root@73ec4613358b:/# pure-pw useradd data01 -u ftpd-data -g ftpd-data -d /home/ftp
Password:
Enter it again:
root@73ec4613358b:/# pure-pw mkdb
root@73ec4613358b:/# cat /etc/pure-ftpd/pureftpd.passwd
... (output truncated)
test01:$1$VWoNttg0$MeF4ibc.JQCdlS3BFp3rT.:1001:1001::/home/ftp/./::::::::::::
data01:$1$iP7Popf0$htP3edje8B2BptmyEe7mG0:1000:1000::/home/ftp/./::::::::::::

Add the created users to the pureftpd.passwd file.
[root@centos-docker ftp-service]# vi pure-ftpd/pureftpd.passwd


Connect via FTP and test the permissions.

 

 

E: You must put some ‘source’ URIs in your sources.list

Notes for writing an Ubuntu Dockerfile: Ubuntu 14.04 / Ubuntu 16.04 / Ubuntu 18.04 (not tested)

The tested Ubuntu versions are 14.04 and 16.04.

If the Dockerfile only runs apt-get, the build fails with the message "E: You must put some 'source' URIs in your sources.list"

and does not complete.

 

  • Original Dockerfile

FROM ubuntu:16.04
MAINTAINER ssh-test <test@test.com>

RUN apt-get update && apt-get install -y openssh-server \
  && mkdir /var/run/sshd \

 

  • Modified Dockerfile

FROM ubuntu:16.04
MAINTAINER ssh-test <test@test.com>

RUN sed -Ei 's/^# deb-src /deb-src /' /etc/apt/sources.list
RUN apt-get update && apt-get install -y openssh-server \
  && mkdir /var/run/sshd \

 

After this change, the build completes normally. 🙂

 

docker quick install

Caution!!! If you use a separate (non-root) account, run: usermod -aG docker $user-name

 

# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum install -y docker-ce
# sudo usermod -aG docker test
# systemctl enable docker ; systemctl start docker
# curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# chmod +x /usr/local/bin/docker-compose
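
A quick check that both tools are installed and on the PATH (a minimal sketch):

# docker version
# docker-compose --version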

 

 

Docker nginx + sphinx-doc install

 

 

Without a local Sphinx setup it was hard to preview the generated web pages.

This post shows a simple way to set up docker nginx + sphinx.

For installing sphinx-doc, see the page below.

Ubuntu sphinx-doc install

 

Reference pages: https://github.com/serra/sphinx-with-markdown

https://docs.readthedocs.io/en/latest/getting_started.html

https://recommonmark.readthedocs.io/en/latest/#autostructify

https://recommonmark.readthedocs.io/en/latest/auto_structify.html

https://github.com/rtfd/recommonmark

 

Install docker and docker-compose.

test@ubuntu-docs:~$ curl -s https://get.docker.com/ | sudo sh
test@ubuntu-docs:~$ sudo usermod -aG docker test
test@ubuntu-docs:~$ sudo curl -L https://github.com/docker/compose/releases/download/1.19.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
test@ubuntu-docs:~$ sudo chmod +x /usr/local/bin/docker-compose

Plain docker can be used instead of docker-compose.

 

Create the directories that docker-compose will use.

test@ubuntu-docs:~/Workspace$ mkdir web-docs
test@ubuntu-docs:~/Workspace$ cd web-docs/
test@ubuntu-docs:~/Workspace/web-docs$
test@ubuntu-docs:~/Workspace/web-docs$ mkdir docs

 

Create the docker-compose.yml file.

The ./nginx/conf directory is mapped to /etc/nginx/conf.d in the container.

The ./docs directory is mapped to the container's /code directory.

test@ubuntu-docs:~/Workspace/web-docs$ vi docker-compose.yml

version: '2'

services:
    nginx:
        image: nginx:1.10.2
        ports:
            - 80:80
        restart: always
        volumes:
            - ./nginx/conf:/etc/nginx/conf.d
            - ./docs:/code



test@ubuntu-docs:~/Workspace/web-docs$ mkdir -p nginx/conf/
test@ubuntu-docs:~/Workspace/web-docs$ vi nginx/conf/default.conf
server {
    listen       80 default_server;
    server_name  localhost _;
    index        index.html index.htm;
    root         /code;

    location / {
        autoindex on;
    }
}

 

Run docker-compose.

test@ubuntu-docs:~/Workspace/web-docs$ docker-compose up -d --build

 

Check that the container is running:

test@ubuntu-docs:~/Workspace/web-docs$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                         NAMES
e0e37ee2dbbe        nginx:1.10.2        "nginx -g 'daemon of…"   56 seconds ago      Up 54 seconds       0.0.0.0:80->80/tcp, 443/tcp   webdocs_nginx_1
test@ubuntu-docs:~/Workspace/web-docs$

 

Move to the directory where Sphinx will be used and run sphinx-quickstart.

If you do not specify a separate directory, the configuration files and so on are created in the directory where sphinx-quickstart was run.

test@ubuntu-docs:~$ cd Workspace/web-docs/docs/
test@ubuntu-docs:~/Workspace/web-docs/docs$ sphinx-quickstart

Welcome to the Sphinx 1.3.6 quickstart utility.

Please enter values for the following settings (just press Enter to
accept a default value, if one is given in brackets).

Enter the root path for documentation.
> Root path for the documentation [.]:

You have two options for placing the build directory for Sphinx output.
Either, you use a directory "_build" within the root path, or you separate
"source" and "build" directories within the root path.
> Separate source and build directories (y/n) [n]:

Inside the root directory, two more directories will be created; "_templates"
for custom HTML templates and "_static" for custom stylesheets and other static
files. You can enter another prefix (such as ".") to replace the underscore.
> Name prefix for templates and static dir [_]:

The project name will occur in several places in the built documentation.
> Project name: opensource docs
> Author name(s): user01

Sphinx has the notion of a "version" and a "release" for the
software. Each version can have multiple releases. For example, for
Python the version is something like 2.5 or 3.0, while the release is
something like 2.5.1 or 3.0a1.  If you don't need this dual structure,
just set both to the same value.
> Project version: 1.0
> Project release [1.0]:

If the documents are to be written in a language other than English,
you can select a language here by its language code. Sphinx will then
translate text that it generates into that language.

For a list of supported codes, see
http://sphinx-doc.org/config.html#confval-language.
> Project language [en]: ko

The file name suffix for source files. Commonly, this is either ".txt"
or ".rst".  Only files with this suffix are considered documents.
> Source file suffix [.rst]:

One document is special in that it is considered the top node of the
"contents tree", that is, it is the root of the hierarchical structure
of the documents. Normally, this is "index", but if your "index"
document is a custom template, you can also set this to another filename.
> Name of your master document (without suffix) [index]:

Sphinx can also add configuration for epub output:
> Do you want to use the epub builder (y/n) [n]:

Please indicate if you want to use one of the following Sphinx extensions:
> autodoc: automatically insert docstrings from modules (y/n) [n]:
> doctest: automatically test code snippets in doctest blocks (y/n) [n]:
> intersphinx: link between Sphinx documentation of different projects (y/n) [n]:
> todo: write "todo" entries that can be shown or hidden on build (y/n) [n]:
> coverage: checks for documentation coverage (y/n) [n]:
> pngmath: include math, rendered as PNG images (y/n) [n]:
> mathjax: include math, rendered in the browser by MathJax (y/n) [n]:
> ifconfig: conditional inclusion of content based on config values (y/n) [n]:
> viewcode: include links to the source code of documented Python objects (y/n) [n]:

A Makefile and a Windows command file can be generated for you so that you
only have to run e.g. `make html' instead of invoking sphinx-build
directly.
> Create Makefile? (y/n) [y]:
> Create Windows command file? (y/n) [y]: n

Creating file ./conf.py.
Creating file ./index.rst.
Creating file ./Makefile.

Finished: An initial directory structure has been created.

You should now populate your master file ./index.rst and create other documentation
source files. Use the Makefile to build the docs, like so:
   make builder
where "builder" is one of the supported builders, e.g. html, latex or linkcheck.

test@ubuntu-docs:~/Workspace/web-docs/docs$
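
After quickstart finishes, the docs directory should contain roughly the following (a sketch; exact contents depend on the Sphinx version and the answers given above):

test@ubuntu-docs:~/Workspace/web-docs/docs$ ls
Makefile  _build  _static  _templates  conf.py  index.rst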

 

Install the recommonmark Python package to enable Markdown.

If the python-pip package is not installed yet, install it first:
$ sudo apt install python-pip
When installing as a regular user, use the --user option:
$ pip install recommonmark --user
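
A quick way to confirm the package is importable by the Python that Sphinx uses (no output means success):

$ python -c "import recommonmark"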

 

Edit the conf.py file.

Comment out the existing source_suffix line.

Apart from the source_suffix line, everything below is added rather than modified.

conf.py can be set up in either of two ways; there may be some differences depending on which of the advanced Markdown (AutoStructify) features you use.

See the linked sites for details.

Configuration following https://recommonmark.readthedocs.io/en/latest/ :

test@ubuntu-docs:~/Workspace/web-docs/docs$ vi conf.py
#source_suffix = '.rst'   # <-- comment out the existing line
from recommonmark.parser import CommonMarkParser

source_parsers = {
    '.md': CommonMarkParser,
}

source_suffix = ['.rst', '.md']

# At the top of conf.py (with the other import statements)
import recommonmark
from recommonmark.transform import AutoStructify

# github_doc_root is referenced by url_resolver below; point it at your own repository
github_doc_root = 'https://github.com/rtfd/recommonmark/tree/master/doc/'

# At the bottom of conf.py
def setup(app):
    app.add_config_value('recommonmark_config', {
            'url_resolver': lambda url: github_doc_root + url,
            'auto_toc_tree_section': 'Contents',
            }, True)
    app.add_transform(AutoStructify)

 

When following https://recommonmark.readthedocs.io/en/latest/auto_structify.html, set up conf.py as shown below.

import sys
import os
import sphinx_rtd_theme

import recommonmark              # <-- added


#source_suffix = '.rst'          # <-- comment out the existing entry
from recommonmark.parser import CommonMarkParser
from recommonmark.transform import AutoStructify

source_parsers = {
    '.md': CommonMarkParser,
}

source_suffix = ['.rst', '.md']

github_doc_root = 'https://github.com/rtfd/recommonmark/tree/master/doc/'
def setup(app):
    app.add_config_value('recommonmark_config', {
            'url_resolver': lambda url: github_doc_root + url,
            'auto_toc_tree_section': 'Contents',
            }, True)
    app.add_transform(AutoStructify)
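
As a side note, on newer Sphinx/recommonmark combinations the source_parsers approach above is deprecated. A hedged sketch of the newer style, to be verified against the versions you actually have installed:

# conf.py on newer recommonmark releases (roughly 0.5.0 and later)
extensions = ['recommonmark']

# suffix-to-parser mapping replaces source_parsers
source_suffix = {
    '.rst': 'restructuredtext',
    '.md': 'markdown',
}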

 

Run make html.

test@ubuntu-docs:~/Workspace/web-docs/docs$ make html
sphinx-build -b html -d _build/doctrees   . _build/html
Running Sphinx v1.8.1
loading pickled environment... done
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 0 source files that are out of date
updating environment: 2 added, 0 changed, 2 removed
reading sources... [100%] test1
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [100%] test1
generating indices... genindex
writing additional pages... search
copying static files... done
copying extra files... done
dumping search index in English (code: en) ... done
dumping object inventory... done
build succeeded.

The HTML pages are in _build/html.

Build finished. The HTML pages are in _build/html.
test@ubuntu-docs:~/Workspace/web-docs/docs$

 

Install the theme (as before, add --user when installing as a regular user).

test@ubuntu-docs:~/Workspace/web-docs/docs$ pip install sphinx_rtd_theme

 

Apply the theme.

test@ubuntu-docs:~/Workspace/web-docs/docs$ vi conf.py
import sphinx_rtd_theme

#html_theme = 'alabaster'
html_theme = 'sphinx_rtd_theme'

html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]

 

Run make html again.

test@ubuntu-docs:~/Workspace/web-docs/docs$ make html
sphinx-build -b html -d _build/doctrees   . _build/html
Running Sphinx v1.3.6
loading translations [ko]... done
loading pickled environment... done
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 1 source files that are out of date
updating environment: [config changed] 1 added, 0 changed, 0 removed
reading sources... [100%] index
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [100%] index
generating indices... genindex
writing additional pages... search
copying static files... done
copying extra files... done
dumping search index in English (code: en) ... done
dumping object inventory... done
build succeeded.

Build finished. The HTML pages are in _build/html.
test@ubuntu-docs:~/Workspace/web-docs/docs$

 

The document root for the web page is /home/test/Workspace/web-docs/docs/_build/html.

So the docker-compose.yml file needs to be updated accordingly. 🙂

test@ubuntu-docs:~/Workspace/web-docs/docs$ ls -al _build/html/
total 40
drwxrwxr-x 4 test test 4096 Sep 30 21:02 .
drwxrwxr-x 4 test test 4096 Sep 30 21:02 ..
-rw-rw-r-- 1 test test  230 Sep 30 21:02 .buildinfo
-rw-rw-r-- 1 test test 2663 Sep 30 21:02 genindex.html
-rw-rw-r-- 1 test test 3939 Sep 30 21:02 index.html
-rw-rw-r-- 1 test test  228 Sep 30 21:02 objects.inv
-rw-rw-r-- 1 test test 3045 Sep 30 21:02 search.html
-rw-rw-r-- 1 test test  323 Sep 30 21:02 searchindex.js
drwxrwxr-x 2 test test 4096 Sep 30 21:02 _sources
drwxrwxr-x 2 test test 4096 Sep 30 21:02 _static
test@ubuntu-docs:~/Workspace/web-docs/docs$

 

Run docker-compose down and edit the docker-compose.yml file.

Update the volumes section.

test@ubuntu-docs:~/Workspace/web-docs$ docker-compose down
test@ubuntu-docs:~/Workspace/web-docs$ vi docker-compose.yml
...(snipped)
        volumes:
            - ./nginx/conf:/etc/nginx/conf.d
            - ./docs/_build/html:/code

 

Start docker-compose.

test@ubuntu-docs:~/Workspace/web-docs$ docker-compose up -d --build

Check the web page.
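
You can also check from the server itself on the command line; adjust the host or IP to your environment:

$ curl -I http://localhost/
$ curl http://localhost/index.html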

 

markdown test

test@ubuntu-docs:~/Workspace/web-docs/docs$ vi index.rst
.. web-docs documentation master file, created by
   sphinx-quickstart on Mon Oct  1 18:59:13 2018.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to web-docs's documentation!
====================================

Contents:

.. toctree::
   :maxdepth: 2

   test1.md
   test2.md



Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

 

Create the test1 / test2 pages.

test@ubuntu-docs:~/Workspace/web-docs/docs$ vi test1.md
# test1
test@ubuntu-docs:~/Workspace/web-docs/docs$ vi test2.md
# test2
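
The two pages above are intentionally minimal. If you want to confirm that more than just a heading survives the Markdown conversion, a slightly fuller (hypothetical) test page such as the following also works:

test@ubuntu-docs:~/Workspace/web-docs/docs$ vi test1.md
# test1

A short paragraph to confirm body text renders.

- a bullet item
- another bullet item

## a sub heading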

 

Run make html.

test@ubuntu-docs:~/Workspace/web-docs/docs$ make html
sphinx-build -b html -d _build/doctrees   . _build/html
Running Sphinx v1.8.1
loading translations [ko]... done
loading pickled environment... done
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 3 source files that are out of date
updating environment: 3 added, 0 changed, 0 removed
reading sources... [100%] test2
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [100%] test2
generating indices... genindex
writing additional pages... search
copying static files... done
copying extra files... done
dumping search index in English (code: en) ... done
dumping object inventory... done
build succeeded.

The HTML pages are in _build/html.

Build finished. The HTML pages are in _build/html.
test@ubuntu-docs:~/Workspace/web-docs/docs$

 

Check the web page again to confirm the test1 / test2 entries show up.
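
As before, a command-line check works as well; the page names follow from the toctree entries above:

$ curl -I http://localhost/test1.html
$ curl -I http://localhost/test2.html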