
[draft] How to set up an external etcd TLS cluster and build a Kubernetes cluster with kubeadm

변군Dev 2024. 8. 9. 13:11


Test environment

Hostname       IP Address       Roles           Notes
k8s-master1    192.168.0.131    control-plane   kubernetes, etcd
k8s-master2    192.168.0.132    control-plane   kubernetes, etcd
k8s-master3    192.168.0.111    control-plane   kubernetes, etcd
k8s-worker3    192.168.0.112    worker node     kubernetes
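
If the hostnames in the table are not resolvable through DNS, a common companion step is to add them to /etc/hosts on every node so that later commands and certificates can use the names consistently; a minimal sketch mirroring the table above:

cat <<EOF | sudo tee -a /etc/hosts
192.168.0.131 k8s-master1
192.168.0.132 k8s-master2
192.168.0.111 k8s-master3
192.168.0.112 k8s-worker3
EOF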

Install Kubernetes (run on every node)

sudo rm -f /etc/apt/keyrings/kubernetes-apt-keyring.gpg
KUBERNETES_VERSION="v1.27"
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/${KUBERNETES_VERSION}/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/${KUBERNETES_VERSION}/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
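
To confirm the packages were installed and pinned as expected, a quick check (the exact patch version will vary):

kubeadm version
kubectl version --client
kubelet --version
apt-mark showhold | grep -E 'kubeadm|kubectl|kubelet'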

Install and configure containerd

Install containerd

sudo rm -f /etc/apt/trusted.gpg.d/docker.gpg
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
sudo add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y containerd
sudo systemctl --now enable containerd

Create the containerd configuration file and enable SystemdCgroup

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/^\([[:blank:]]*\)SystemdCgroup = false/\1SystemdCgroup = true/' /etc/containerd/config.toml
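
To verify that the sed command actually switched the cgroup driver, a quick check:

grep -n 'SystemdCgroup' /etc/containerd/config.toml
# Should show: SystemdCgroup = true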

Install the CNI plugins

CNI_VERSION="v1.5.1"
CNI_TGZ=https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-amd64-${CNI_VERSION}.tgz
sudo mkdir -p /opt/cni/bin
curl -fsSL $CNI_TGZ | sudo tar -C /opt/cni/bin -xz
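
A quick listing confirms the plugin binaries were extracted (file names depend on the release, but bridge, host-local, loopback, and portmap should be present):

ls /opt/cni/bin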

Restart the containerd service

sudo systemctl restart containerd
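
This post does not show the usual kubeadm node prerequisites (kernel modules, sysctl settings, and disabling swap). A minimal sketch of those steps, to run on every node, would look like this:

# Load the kernel modules used by containerd and kube-proxy
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Required sysctl settings for Kubernetes networking
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system

# kubelet requires swap to be disabled
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab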

Set up an external etcd cluster with TLS

Install etcd (on k8s-master1, k8s-master2, and k8s-master3)

sudo apt-get update
sudo apt-get install -y etcd
sudo systemctl --now enable etcd
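
Ubuntu's etcd package can be fairly old, so it is worth confirming the installed version before enabling TLS (output varies by release):

etcd --version
ETCDCTL_API=3 etcdctl version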

Generate the etcd certificates

mkdir -p ~/kube_script/ssl
cd ~/kube_script/ssl

---

1. Create a certificate authority (CA)

- Generate the CA private key

openssl genpkey -algorithm RSA -out ca.key -pkeyopt rsa_keygen_bits:2048

- Generate the CA certificate

openssl req -x509 -new -nodes -key ca.key -subj "/CN=etcd-ca" -days 3650 -out ca.crt

2. Generate the server certificate and key

- Generate the server private key

openssl genpkey -algorithm RSA -out server.key -pkeyopt rsa_keygen_bits:2048

- Generate the server CSR (Certificate Signing Request)

openssl req -new -key server.key -subj "/CN=server" -out server.csr

3. Create an OpenSSL configuration file for the server certificate

cat <<EOF > server-openssl.cnf
[ req ]
distinguished_name = req_distinguished_name
req_extensions = req_ext
x509_extensions = v3_req
prompt = no

[ req_distinguished_name ]
C = KR
ST = Seoul
L = Jongno-gu
O = SangChul Blog
OU = Infrastructure Team
CN = etcd-server

[ req_ext ]
subjectAltName = @alt_names

[ v3_req ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = localhost
DNS.2 = k8s-master1
DNS.3 = k8s-master2
DNS.4 = k8s-master3
IP.1 = 127.0.0.1
IP.2 = 192.168.0.131
IP.3 = 192.168.0.132
IP.4 = 192.168.0.111
EOF

4. Sign the server certificate

openssl x509 \
-req \
-in server.csr \
-CA ca.crt \
-CAkey ca.key \
-CAcreateserial \
-out server.crt \
-days 3650 \
-extensions v3_req \
-extfile server-openssl.cnf
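
Before deleting the CSR and configuration file, you can confirm that the signed certificate carries the SAN entries defined above:

openssl x509 -in server.crt -noout -text | grep -A1 'Subject Alternative Name'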

5. Delete the CSR and configuration files

rm -f server.csr server-openssl.cnf

---

Download the SSL certificate generation script (an alternative to the manual steps above)

curl -fsSL https://raw.githubusercontent.com/anti1346/codes/main/kubernetes/generate-etcd-certs.sh -o generate-etcd-certs.sh

Open the script and adjust the variables for your environment

vim generate-etcd-certs.sh
# Set the environment variables
ETCD_NODE_1_HOSTNAME="k8s-master1"
ETCD_NODE_2_HOSTNAME="k8s-master2"
ETCD_NODE_3_HOSTNAME="k8s-master3"
ETCD_NODE_1_IP="192.168.0.131"
ETCD_NODE_2_IP="192.168.0.132"
ETCD_NODE_3_IP="192.168.0.111"
bash generate-etcd-certs.sh
tar czf ssl.tar.gz ssl
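
The etcd configuration below references ca.crt, server.crt/server.key, and peer.crt/peer.key, so this assumes the script generates all of them into the ssl directory; a quick sanity check before copying the archive out:

ls ssl/
# Expected (assumption about the script's output): ca.crt ca.key server.crt server.key peer.crt peer.key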

Distribute the certificate archive to /home/ubuntu on each master node (the first command simply keeps a copy in the local home directory on k8s-master1)

scp ssl.tar.gz ubuntu@127.0.0.1:~
scp ssl.tar.gz ubuntu@192.168.0.132:~
scp ssl.tar.gz ubuntu@192.168.0.111:~

Run the following steps on k8s-master1, k8s-master2, and k8s-master3.

Configure etcd on each node

Deploy the etcd certificates and set permissions

sudo mkdir -p /etc/etcd/ssl
sudo tar xfz /home/ubuntu/ssl.tar.gz -C /etc/etcd
sudo chmod -R 600 /etc/etcd/ssl/*.key
sudo chmod -R 644 /etc/etcd/ssl/*.crt
sudo chown -R etcd:etcd /etc/etcd

Create the etcd data directory and set permissions

sudo mkdir -p /var/lib/etcd
sudo touch /var/lib/etcd/.touch
sudo chmod -R 700 /var/lib/etcd
sudo chown -R etcd:etcd /var/lib/etcd

Configure the etcd cluster

  • k8s-master1
cat <<EOF | sudo tee /etc/default/etcd
ETCD_NAME="k8s-master1"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.131:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.131:2379,https://127.0.0.1:2379"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.131:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.131:2380"
ETCD_INITIAL_CLUSTER="k8s-master1=https://192.168.0.131:2380,k8s-master2=https://192.168.0.132:2380,k8s-master3=https://192.168.0.111:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.crt"
ETCD_CERT_FILE="/etc/etcd/ssl/server.crt"
ETCD_KEY_FILE="/etc/etcd/ssl/server.key"
ETCD_CLIENT_CERT_AUTH="true"

ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.crt"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/peer.crt"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/peer.key"
ETCD_PEER_CLIENT_CERT_AUTH="true"
EOF
  • k8s-master2
cat <<EOF | sudo tee /etc/default/etcd
ETCD_NAME="k8s-master2"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.132:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.132:2379,https://127.0.0.1:2379"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.132:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.132:2380"
ETCD_INITIAL_CLUSTER="k8s-master1=https://192.168.0.131:2380,k8s-master2=https://192.168.0.132:2380,k8s-master3=https://192.168.0.111:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.crt"
ETCD_CERT_FILE="/etc/etcd/ssl/server.crt"
ETCD_KEY_FILE="/etc/etcd/ssl/server.key"
ETCD_CLIENT_CERT_AUTH="true"

ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.crt"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/peer.crt"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/peer.key"
ETCD_PEER_CLIENT_CERT_AUTH="true"
EOF
  • k8s-master3
cat <<EOF | sudo tee /etc/default/etcd
ETCD_NAME="k8s-master3"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.111:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.111:2379,https://127.0.0.1:2379"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.111:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.111:2380"
ETCD_INITIAL_CLUSTER="k8s-master1=https://192.168.0.131:2380,k8s-master2=https://192.168.0.132:2380,k8s-master3=https://192.168.0.111:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.crt"
ETCD_CERT_FILE="/etc/etcd/ssl/server.crt"
ETCD_KEY_FILE="/etc/etcd/ssl/server.key"
ETCD_CLIENT_CERT_AUTH="true"

ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.crt"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/peer.crt"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/peer.key"
ETCD_PEER_CLIENT_CERT_AUTH="true"
EOF

Start the etcd service on each master node (all three members must be up before the cluster reports healthy)

sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd

sudo systemctl status etcd
sudo systemctl restart etcd

Check the etcd cluster status (run with sudo or as root, since the key files are readable only by the etcd user)

export ETCDCTL_API=3
etcdctl endpoint health \
--cacert=/etc/etcd/ssl/ca.crt \
--cert=/etc/etcd/ssl/peer.crt \
--key=/etc/etcd/ssl/peer.key \
--endpoints=https://$(hostname -I | awk '{print $1}'):2379
etcdctl member list \
-w table \
--cacert=/etc/etcd/ssl/ca.crt \
--cert=/etc/etcd/ssl/peer.crt \
--key=/etc/etcd/ssl/peer.key \
--endpoints=https://$(hostname -I | awk '{print $1}'):2379
etcdctl endpoint health --cluster \
--cacert=/etc/etcd/ssl/ca.crt \
--cert=/etc/etcd/ssl/peer.crt \
--key=/etc/etcd/ssl/peer.key \
--endpoints=https://$(hostname -I | awk '{print $1}'):2379
etcdctl endpoint status --cluster \
-w table \
--cacert=/etc/etcd/ssl/ca.crt \
--cert=/etc/etcd/ssl/peer.crt \
--key=/etc/etcd/ssl/peer.key \
--endpoints=https://$(hostname -I | awk '{print $1}'):2379
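
To avoid repeating the TLS flags on every invocation, etcdctl also reads the same options from its standard environment variables:

export ETCDCTL_API=3
export ETCDCTL_CACERT=/etc/etcd/ssl/ca.crt
export ETCDCTL_CERT=/etc/etcd/ssl/peer.crt
export ETCDCTL_KEY=/etc/etcd/ssl/peer.key
export ETCDCTL_ENDPOINTS=https://$(hostname -I | awk '{print $1}'):2379
etcdctl endpoint status --cluster -w table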

Restart the etcd service

sudo systemctl restart etcd

Configure the Kubernetes control plane

Copy the etcd client certificates (on k8s-master1)

sudo mkdir -p /etc/kubernetes/pki/etcd
sudo cp /etc/etcd/ssl/ca.crt /etc/kubernetes/pki/etcd/ca.pem
sudo cp /etc/etcd/ssl/peer.crt /etc/kubernetes/pki/etcd/etcd-client.pem
sudo cp /etc/etcd/ssl/peer.key /etc/kubernetes/pki/etcd/etcd-client-key.pem

Distribute the etcd client certificates to the other master nodes

cd /etc/kubernetes/pki
sudo tar czf etcd.tar.gz etcd
scp etcd.tar.gz ubuntu@192.168.0.132:~
scp etcd.tar.gz ubuntu@192.168.0.111:~
# On k8s-master2 and k8s-master3:
sudo mkdir -p /etc/kubernetes/pki
sudo tar xfz /home/ubuntu/etcd.tar.gz -C /etc/kubernetes/pki

Enable and start kubelet

sudo systemctl enable kubelet
sudo systemctl start kubelet

Create the kubeadm configuration file and initialize the cluster

Create the kubeadm configuration file used to initialize the Kubernetes control plane on k8s-master1 (192.168.0.131). This file points kubeadm at the external etcd cluster.

 

Create the kubeadm configuration file

cd ~/kube_script
vim kubeadmcfg.yaml
---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "192.168.0.131"
  bindPort: 6443
---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "192.168.0.131:6443"
networking:
  podSubnet: "10.244.0.0/16"
etcd:
  external:
    endpoints:
      - https://192.168.0.131:2379
      - https://192.168.0.132:2379
      - https://192.168.0.111:2379
    caFile: /etc/kubernetes/pki/etcd/ca.pem
    certFile: /etc/kubernetes/pki/etcd/etcd-client.pem
    keyFile: /etc/kubernetes/pki/etcd/etcd-client-key.pem
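
Before the real initialization, kubeadm can rehearse the run against this file; a dry run is a cheap way to catch typos in the configuration without modifying the node:

sudo kubeadm init --config kubeadmcfg.yaml --dry-run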

Initialize the Kubernetes control plane

sudo kubeadm init --config kubeadmcfg.yaml --upload-certs | tee $HOME/kubeadm_init_output.log
$ sudo kubeadm init --config kubeadmcfg.yaml --upload-certs | tee $HOME/kubeadm_init_output.log
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.0.131:6443 --token 09mgzn.585m8ntfy6lsw003 \
        --discovery-token-ca-cert-hash sha256:77b2b9c1465c2822ae4372dedec663a931867f9ec4499a18c2a0384c87f91cd8 \
        --control-plane --certificate-key 007170eaadc1437671e49027e766cb8d544a3c166b7b2ede8994230026121c1c

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.131:6443 --token 09mgzn.585m8ntfy6lsw003 \
        --discovery-token-ca-cert-hash sha256:77b2b9c1465c2822ae4372dedec663a931867f9ec4499a18c2a0384c87f91cd8

Configure kubectl

  • Set up the kubeconfig file so that a regular user can run kubectl against the cluster.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
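
A quick check that the kubeconfig works for the regular user:

kubectl cluster-info
kubectl get nodes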

Add the other control-plane nodes

  • Join the other control-plane nodes to the cluster. Because kubeadm init was run with --upload-certs, also pass the --certificate-key value printed in the init output.
sudo kubeadm join 192.168.0.131:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>
sudo kubeadm join 192.168.0.131:6443 --token p2rr6l.c6egii8es2v3e2sl \
        --discovery-token-ca-cert-hash sha256:537eb413807adc97393443ac8e6bfff1ea8e766ad327185331509d7b4edd46dc \
        --control-plane
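
If the token or certificate key from the kubeadm init output has expired (the uploaded certificates are removed after two hours), both can be regenerated on k8s-master1:

# Re-upload the control-plane certificates and print a new certificate key
sudo kubeadm init phase upload-certs --upload-certs
# Print a fresh worker join command (append --control-plane --certificate-key <key> for control-plane nodes)
sudo kubeadm token create --print-join-command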

Add the worker nodes

  • Join the worker nodes to the cluster.
sudo kubeadm join 192.168.0.131:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
sudo kubeadm join 192.168.0.131:6443 --token p2rr6l.c6egii8es2v3e2sl \
        --discovery-token-ca-cert-hash sha256:537eb413807adc97393443ac8e6bfff1ea8e766ad327185331509d7b4edd46dc

Install a network plugin

Install one of the following CNI plugins (the podSubnet 10.244.0.0/16 in kubeadmcfg.yaml matches Flannel's default pod network).

  • Install the Flannel network plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  • Install the Calico network plugin
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Check the cluster status and run a test

Check the node status

  • Verify that all nodes are in the Ready state.
kubectl get nodes
$ kubectl get nodes
NAME          STATUS   ROLES           AGE     VERSION
k8s-master1   Ready    control-plane   25m     v1.27.16
k8s-master2   Ready    control-plane   24m     v1.27.16
k8s-master3   Ready    control-plane   23m     v1.27.16
k8s-worker3   Ready    <none>          3m30s   v1.27.16

Check the pods in the kube-system namespace

  • Verify that the Kubernetes system components are running. The coredns pods stay in ContainerCreating until the network plugin is up.
kubectl get pods -n kube-system
$ kubectl get pods -n kube-system
NAME                                  READY   STATUS              RESTARTS      AGE
coredns-5d78c9869d-f76nz              0/1     ContainerCreating   0             16m
coredns-5d78c9869d-rjmzw              0/1     ContainerCreating   0             16m
kube-apiserver-k8s-master1            1/1     Running             2             16m
kube-apiserver-k8s-master2            1/1     Running             0             15m
kube-apiserver-k8s-master3            1/1     Running             0             14m
kube-controller-manager-k8s-master1   1/1     Running             4 (14m ago)   16m
kube-controller-manager-k8s-master2   1/1     Running             0             15m
kube-controller-manager-k8s-master3   1/1     Running             0             14m
kube-proxy-6wsfs                      1/1     Running             0             14m
kube-proxy-s7kk6                      1/1     Running             0             15m
kube-proxy-tjwdx                      1/1     Running             0             16m
kube-scheduler-k8s-master1            1/1     Running             5             16m
kube-scheduler-k8s-master2            1/1     Running             0             15m
kube-scheduler-k8s-master3            1/1     Running             0             14m

Deploy a test workload

  • Create a simple Nginx deployment and expose it to confirm the cluster is working.
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
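
To verify the deployment is reachable, look up the NodePort that was assigned and request it on any node (192.168.0.112 is the worker node from the table above):

kubectl get deployment,svc nginx
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -I http://192.168.0.112:${NODE_PORT}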

 

With these steps you have built a TLS-secured external etcd cluster, set up a Kubernetes control plane integrated with it using kubeadm, installed a network plugin, and verified that the cluster operates normally.

 
