How to Set Up a Kubernetes Cluster in a Highly Available (HA) Multi-Master Configuration

This guide uses a stacked etcd topology, in which each control plane node also runs as an etcd member.

 

Restart the containerd service

sudo systemctl restart containerd

Check the containerd service status

sudo systemctl status containerd --no-pager -l

Restart the kubelet service

sudo systemctl restart kubelet

Check the kubelet service status

sudo systemctl status kubelet --no-pager -l
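
If the services are not yet enabled to start on boot, they can also be enabled now; an optional extra step, assuming systemd-managed hosts:

# make sure both services come back after a reboot (optional)
sudo systemctl enable --now containerd kubelet
systemctl is-enabled containerd kubelet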

Initialize the first control plane node

Initialize the cluster on the first control plane node (k8s-master1).
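
The --control-plane-endpoint used below must point to an address that stays reachable even if one control plane node goes down, typically a load balancer or a virtual IP in front of port 6443 on all masters. A minimal HAProxy sketch, assuming a separate load balancer host that answers on 192.168.0.131 and hypothetical master addresses 192.168.0.132 to 192.168.0.134:

# on the load balancer host (Debian/Ubuntu assumed)
sudo apt-get install -y haproxy

# append a TCP pass-through to the API servers; replace the server IPs with the real master addresses
sudo tee -a /etc/haproxy/haproxy.cfg > /dev/null <<'EOF'

frontend kubernetes-api
    bind *:6443
    mode tcp
    option tcplog
    default_backend kubernetes-masters

backend kubernetes-masters
    mode tcp
    option tcp-check
    balance roundrobin
    server k8s-master1 192.168.0.132:6443 check
    server k8s-master2 192.168.0.133:6443 check
    server k8s-master3 192.168.0.134:6443 check
EOF

sudo systemctl restart haproxy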

sudo kubeadm init --control-plane-endpoint "<LOAD_BALANCER_DNS>:6443" --upload-certs
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint 192.168.0.131:6443 --upload-certs | tee $HOME/kubeadm_init_output.log
I0718 13:02:37.455874   12117 version.go:256] remote version is much newer: v1.30.3; falling back to: stable-1.27
[init] Using Kubernetes version: v1.27.16
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.0.131:6443 --token ekdc7c.6kuobewfq2rpkxy9 \
        --discovery-token-ca-cert-hash sha256:0d746efec2dc5eab3fadc71ef8263bdbae4b64578d6ceac02ebfc40997364c20 \
        --control-plane --certificate-key ffb8d5b70ae85720a022f8289d1cb4bdd44926c6dfc17eeda35809349cee095b

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.131:6443 --token ekdc7c.6kuobewfq2rpkxy9 \
        --discovery-token-ca-cert-hash sha256:0d746efec2dc5eab3fadc71ef8263bdbae4b64578d6ceac02ebfc40997364c20

Set up the kubeconfig file (on the first control plane node)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
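
A quick way to confirm that the kubeconfig works; the node may still show NotReady until the network plugin is installed in the next step:

kubectl cluster-info
kubectl get nodes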

Install the network plugin

Install a network plugin to set up pod networking in the Kubernetes cluster; Calico is used here.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
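
Before joining more nodes, it helps to wait until the calico-node daemonset has rolled out and the first node reports Ready, for example:

kubectl -n kube-system rollout status daemonset/calico-node
kubectl get nodes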

Set up the other control plane nodes

Join the remaining control plane nodes (k8s-master2, k8s-master3) to the cluster by running the following commands.

Use the kubeadm join command copied from the output of the first node.
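
If the token or the uploaded certificates have expired in the meantime (kubeadm deletes the uploaded certs after two hours, as noted above), they can be regenerated on the first control plane node; a short sketch:

# print a fresh join command (token + CA cert hash)
kubeadm token create --print-join-command

# re-upload the control plane certificates and print a new --certificate-key
sudo kubeadm init phase upload-certs --upload-certs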

sudo kubeadm join <LOAD_BALANCER_DNS>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<HASH> --control-plane --certificate-key <CERTIFICATE_KEY>
$ kubeadm join 192.168.0.131:6443 --token ekdc7c.6kuobewfq2rpkxy9 \
> --discovery-token-ca-cert-hash sha256:0d746efec2dc5eab3fadc71ef8263bdbae4b64578d6ceac02ebfc40997364c20 \
> --control-plane --certificate-key ffb8d5b70ae85720a022f8289d1cb4bdd44926c6dfc17eeda35809349cee095b
...
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Set up the kubeconfig file

  • Run the following commands on k8s-master2 and k8s-master3:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster status

  • Verify that all control plane nodes have joined the cluster properly.
kubectl get nodes
$ kubectl get nodes                                       
NAME          STATUS   ROLES           AGE     VERSION
k8s-master1   Ready    control-plane   36m     v1.27.16
k8s-master2   Ready    control-plane   6m48s   v1.27.16
k8s-master3   Ready    control-plane   3m56s   v1.27.16
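
It is also worth confirming that every master runs its own kube-apiserver, etcd, controller-manager, and scheduler pods:

kubectl get pods -n kube-system -o wide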

Set up the worker nodes

Join each worker node (k8s-worker1, k8s-worker2, k8s-worker3) to the cluster by running the following command.

$ kubeadm join 192.168.0.131:6443 --token ekdc7c.6kuobewfq2rpkxy9 \
> --discovery-token-ca-cert-hash sha256:0d746efec2dc5eab3fadc71ef8263bdbae4b64578d6ceac02ebfc40997364c20
...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the cluster status

  • Verify that all nodes, including the workers, have joined the cluster properly.
kubectl get nodes
$ kubectl get nodes
NAME          STATUS   ROLES           AGE    VERSION
k8s-master1   Ready    control-plane   150m   v1.27.16
k8s-master2   Ready    control-plane   120m   v1.27.16
k8s-master3   Ready    control-plane   117m   v1.27.16
k8s-worker1   Ready    <none>          87m    v1.27.16
k8s-worker2   Ready    <none>          87m    v1.27.16
k8s-worker3   Ready    <none>          87m    v1.27.16
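
The ROLES column shows <none> for the workers because kubeadm does not label them; if a worker role should appear there, an optional cosmetic label can be added, for example:

kubectl label node k8s-worker1 node-role.kubernetes.io/worker=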

 

This completes a highly available Kubernetes cluster built on a stacked etcd cluster.
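
To double-check that the stacked etcd cluster has one member per control plane node, the member list can be queried through one of the etcd static pods; a sketch, assuming the default kubeadm pod name and certificate paths:

kubectl -n kube-system exec etcd-k8s-master1 -- etcdctl \
        --endpoints=https://127.0.0.1:2379 \
        --cacert=/etc/kubernetes/pki/etcd/ca.crt \
        --cert=/etc/kubernetes/pki/etcd/server.crt \
        --key=/etc/kubernetes/pki/etcd/server.key \
        member list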

 
