Setting up Pacemaker on Ubuntu
Test environment
| Hostname | Server IP | Domain | OS | Notes |
|---|---|---|---|---|
| VIP | 192.168.0.60 | vip.cluster.local | | |
| control1 | 192.168.0.51 | control1.cluster.local | Ubuntu 22.04 LTS | |
| node3 | 192.168.0.63 | node3.cluster.local | Ubuntu 22.04 LTS | |
Install pacemaker
Run the following steps on both the control1 and node3 nodes.
1. Register the hosts (hosts)
- /etc/hosts
cat <<EOF >> /etc/hosts
# Cluster
192.168.0.60 vip.cluster.local vip
192.168.0.51 control1.cluster.local control1
192.168.0.63 node3.cluster.local node3
EOF
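To confirm the new entries resolve as expected, a quick check on each node might look like this (getent and ping are just one way to verify; adjust as needed):
getent hosts vip.cluster.local control1.cluster.local node3.cluster.local
ping -c 1 node3.cluster.local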
2. Install the pacemaker, corosync, and pcs packages
apt-get install -y pacemaker corosync pcs
pacemakerd --version
$ pacemakerd --version
Pacemaker 2.1.2
Written by Andrew Beekhof
corosync -v
$ corosync -v
Corosync Cluster Engine, version '3.1.6'
Copyright (c) 2006-2021 Red Hat, Inc.
Built-in features: dbus monitoring watchdog augeas systemd xmlconf vqsim nozzle snmp pie relro bindnow
Available crypto models: nss openssl
Available compression models: zlib lz4 lz4hc lzo2 lzma bzip2 zstd
pcs --version
$ pcs --version
0.10.11
Installing the packages creates the hacluster account:
$ cat /etc/passwd | grep hacluster
hacluster:x:115:120::/var/lib/pacemaker:/usr/sbin/nologin
Enable and start pcsd (pacemaker)
systemctl --now enable pcsd
Check the pcsd service
systemctl status pcsd
Set a password for the hacluster account
- hacluster password: hacluster
echo -e 'hacluster:hacluster' | chpasswd
(or)
passwd hacluster
$ passwd hacluster
Changing password for user hacluster.
New password:
BAD PASSWORD: The password contains the user name in some form
Retype new password:
passwd: all authentication tokens updated successfully.
3. Create the pacemaker cluster
systemctl restart pcsd
Before the cluster has been created, the status commands report errors:
pcs status
$ pcs status
Error: error running crm_mon, is pacemaker running?
crm_mon: Error: cluster is not available on this node
pcs cluster status
$ pcs cluster status
Error: cluster is not currently running on this node
4. Create and start the cluster from one node
pcs host auth control1.cluster.local node3.cluster.local -u hacluster
root@control1:~$ pcs host auth control1.cluster.local node3.cluster.local -u hacluster
Password:
node3.cluster.local: Authorized
control1.cluster.local: Authorized
pcs cluster setup hacluster control1.cluster.local node3.cluster.local --force
root@control1:~$ pcs cluster setup hacluster control1.cluster.local node3.cluster.local --force
No addresses specified for host 'control1.cluster.local', using 'control1.cluster.local'
No addresses specified for host 'node3.cluster.local', using 'node3.cluster.local'
Destroying cluster on hosts: 'control1.cluster.local', 'node3.cluster.local'...
control1.cluster.local: Successfully destroyed cluster
node3.cluster.local: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'control1.cluster.local', 'node3.cluster.local'
control1.cluster.local: successful removal of the file 'pcsd settings'
node3.cluster.local: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'control1.cluster.local', 'node3.cluster.local'
control1.cluster.local: successful distribution of the file 'corosync authkey'
control1.cluster.local: successful distribution of the file 'pacemaker authkey'
node3.cluster.local: successful distribution of the file 'corosync authkey'
node3.cluster.local: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'control1.cluster.local', 'node3.cluster.local'
control1.cluster.local: successful distribution of the file 'corosync.conf'
node3.cluster.local: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
pcs cluster auth -u hacluster -p hacluster
root@control1:~$ pcs cluster auth -u hacluster -p hacluster
control1.cluster.local: Already authorized
node3.cluster.local: Already authorized
Sending cluster config files to the nodes...
pcs cluster start --all
pcs cluster enable --all
control1:
$ pcs cluster start --all
node3.cluster.local: Starting Cluster...
control1.cluster.local: Starting Cluster...

$ pcs cluster enable --all
control1.cluster.local: Cluster Enabled
node3.cluster.local: Cluster Enabled

node3:
$ pcs cluster start --all
control1.cluster.local: Starting Cluster...
node3.cluster.local: Starting Cluster...

$ pcs cluster enable --all
control1.cluster.local: Cluster Enabled
node3.cluster.local: Cluster Enabled
Check the status on each host:

control1:
$ pcs status
Cluster name: hacluster

WARNINGS:
No stonith devices and stonith-enabled is not false

Cluster Summary:
  * Stack: corosync
  * Current DC: node3.cluster.local (version 2.1.2-ada5c3b36e2) - partition with quorum
  * Last updated: Tue Feb  7 10:25:30 2023
  * Last change:  Tue Feb  7 10:24:56 2023 by hacluster via crmd on node3.cluster.local
  * 2 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ control1.cluster.local node3.cluster.local ]

Full List of Resources:
  * No resources

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

$ pcs cluster status
Cluster Status:
 Cluster Summary:
   * Stack: corosync
   * Current DC: node3.cluster.local (version 2.1.2-ada5c3b36e2) - partition with quorum
   * Last updated: Tue Feb  7 10:25:33 2023
   * Last change:  Tue Feb  7 10:24:56 2023 by hacluster via crmd on node3.cluster.local
   * 2 nodes configured
   * 0 resource instances configured
 Node List:
   * Online: [ control1.cluster.local node3.cluster.local ]

PCSD Status:
  control1.cluster.local: Online
  node3.cluster.local: Online

node3:
$ pcs status
Cluster name: hacluster

WARNINGS:
No stonith devices and stonith-enabled is not false

Cluster Summary:
  * Stack: corosync
  * Current DC: node3.cluster.local (version 2.1.2-ada5c3b36e2) - partition with quorum
  * Last updated: Tue Feb  7 10:25:30 2023
  * Last change:  Tue Feb  7 10:24:56 2023 by hacluster via crmd on node3.cluster.local
  * 2 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ control1.cluster.local node3.cluster.local ]

Full List of Resources:
  * No resources

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

$ pcs cluster status
Cluster Status:
 Cluster Summary:
   * Stack: corosync
   * Current DC: node3.cluster.local (version 2.1.2-ada5c3b36e2) - partition with quorum
   * Last updated: Tue Feb  7 10:25:33 2023
   * Last change:  Tue Feb  7 10:24:56 2023 by hacluster via crmd on node3.cluster.local
   * 2 nodes configured
   * 0 resource instances configured
 Node List:
   * Online: [ control1.cluster.local node3.cluster.local ]

PCSD Status:
  node3.cluster.local: Online
  control1.cluster.local: Online
The settings distributed by pcs can be inspected directly:
cat /var/lib/pcsd/known-hosts
cat /etc/corosync/corosync.conf
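For reference, the corosync.conf that pcs generates for this two-node cluster looks roughly like the following; exact values (transport, crypto, logging) depend on the pcs and corosync versions, so treat this as an approximation:
totem {
    version: 2
    cluster_name: hacluster
    transport: knet
    crypto_cipher: aes256
    crypto_hash: sha256
}

nodelist {
    node {
        ring0_addr: control1.cluster.local
        name: control1.cluster.local
        nodeid: 1
    }

    node {
        ring0_addr: node3.cluster.local
        name: node3.cluster.local
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/corosync/corosync.log
    to_syslog: yes
    timestamp: on
}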
Configure cluster options
pcs cluster status
$ pcs cluster status
Cluster Status:
Cluster Summary:
* Stack: corosync
* Current DC: node3.cluster.local (version 2.1.2-ada5c3b36e2) - partition with quorum
* Last updated: Tue Feb 7 10:27:14 2023
* Last change: Tue Feb 7 10:24:56 2023 by hacluster via crmd on node3.cluster.local
* 2 nodes configured
* 0 resource instances configured
Node List:
* Online: [ control1.cluster.local node3.cluster.local ]
PCSD Status:
control1.cluster.local: Online
node3.cluster.local: Online
$ crm_simulate -sL
[ control1.cluster.local node3.cluster.local ]
No resources
$ pcs constraint config
Location Constraints:
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:
Change cluster policies
- Disable STONITH (STONITH = Shoot The Other Node In The Head); acceptable for a test environment, but production clusters should have fencing configured
pcs property set stonith-enabled=false
- Set no-quorum-policy to ignore so the cluster keeps managing resources even when quorum is lost
pcs property set no-quorum-policy=ignore
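To confirm both properties were applied, pcs can list the current cluster properties; the output below is an approximation of what pcs 0.10 prints (newer pcs releases use pcs property config instead of show):
$ pcs property show
Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: hacluster
 dc-version: 2.1.2-ada5c3b36e2
 have-watchdog: false
 no-quorum-policy: ignore
 stonith-enabled: false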
Resource standards, providers, and agents
$ pcs resource standards
lsb
ocf
service
systemd
$ pcs resource providers
heartbeat
pacemaker
$ pcs resource agents ocf:heartbeat
IPaddr2
iscsi
iSCSILogicalUnit
iSCSITarget
LVM-activate
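As a quick illustration of one of these agents, a virtual IP resource for the test environment's VIP (192.168.0.60) could be created roughly as follows; the resource name vip and the monitor interval are arbitrary example values:
pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.168.0.60 cidr_netmask=24 op monitor interval=30s
pcs status resources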
Deleting the cluster
pcs cluster stop --all
Delete the known-hosts file
rm -f /var/lib/pcsd/known-hosts
Delete the corosync.conf file
rm -f /etc/corosync/corosync.conf
Delete the authkey file
rm -f /etc/pacemaker/authkey
Destroy the cluster
pcs cluster destroy
Reinstall pacemaker, corosync, and pcs
apt-get reinstall -y pacemaker corosync pcs
Reference URL
- Pacemaker 1.1 (Configuration Explained): Pacemaker-1.1-Pacemaker_Explained-en-US.pdf