Installing Bitnami Elasticsearch and Kibana with Helm

Configure kernel parameters and resource limits

sudo vim /etc/sysctl.conf
vm.max_map_count=262144
fs.file-max=65536
sudo sysctl -p
$ sudo sysctl -p
vm.max_map_count = 262144
fs.file-max = 65536
sudo vim /etc/security/limits.conf
*       -	nofile  65535
*       -	nproc   65535
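
To check that the values are actually in effect, the kernel parameters and the per-user limits can be inspected as shown below. Note that changes to /etc/security/limits.conf only apply to new login sessions.

sysctl vm.max_map_count fs.file-max
# open files and max user processes for the current shell session
ulimit -n
ulimit -u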

Add the Bitnami Helm chart repository

helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo list | egrep bitnami
bitnami https://charts.bitnami.com/bitnami

Update the Helm chart repositories

helm repo update
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈

Search for the Bitnami Elasticsearch Helm chart

helm search repo -l bitnami/elasticsearch | head
$ helm search repo -l bitnami/elasticsearch | head
NAME                    CHART VERSION   APP VERSION     DESCRIPTION                                       
bitnami/elasticsearch   19.17.3         8.12.0          Elasticsearch is a distributed search and analy...
bitnami/elasticsearch   19.17.2         8.12.0          Elasticsearch is a distributed search and analy...
bitnami/elasticsearch   19.17.0         8.12.0          Elasticsearch is a distributed search and analy...
bitnami/elasticsearch   19.16.3         8.12.0          Elasticsearch is a distributed search and analy...
bitnami/elasticsearch   19.16.2         8.11.4          Elasticsearch is a distributed search and analy...
bitnami/elasticsearch   19.16.1         8.11.4          Elasticsearch is a distributed search and analy...
bitnami/elasticsearch   19.16.0         8.11.4          Elasticsearch is a distributed search and analy...
bitnami/elasticsearch   19.15.0         8.11.4          Elasticsearch is a distributed search and analy...
bitnami/elasticsearch   19.14.1         8.11.4          Elasticsearch is a distributed search and analy...

Download the values file to customize the defaults of the Elasticsearch Helm chart

curl -fsSL https://raw.githubusercontent.com/bitnami/charts/main/bitnami/elasticsearch/values.yaml -o values.yaml
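
For example, the replica count for each node role can be adjusted in the downloaded file before installing. The keys below follow the layout of the Bitnami Elasticsearch chart at the time of writing; check the downloaded values.yaml for the exact names in your chart version.

# values.yaml (excerpt)
master:
  replicaCount: 2
data:
  replicaCount: 2
coordinating:
  replicaCount: 2
ingest:
  enabled: true
  replicaCount: 2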

Create the elk-namespace namespace

kubectl create namespace elk-namespace
$ kubectl create namespace elk-namespace
namespace/elk-namespace created

Check that the namespace was created

kubectl get namespace | egrep elk-namespace
$ kubectl get namespace | egrep elk-namespace
elk-namespace          Active   38s

Deploy Elasticsearch

helm install my-elasticsearch \
-f values.yaml \
--set sysctlImage.enabled=true \
--version 19.17.3 \
-n elk-namespace \
bitnami/elasticsearch
$ helm install my-elasticsearch \
> -f values.yaml \
> --set sysctlImage.enabled=true \
> --version 19.17.3 \
> -n elk-namespace \
> bitnami/elasticsearch
NAME: my-elasticsearch
LAST DEPLOYED: Mon Feb  5 12:08:36 2024
NAMESPACE: elk-namespace
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: elasticsearch
CHART VERSION: 19.17.3
APP VERSION: 8.12.0

-------------------------------------------------------------------------------
 WARNING

    Elasticsearch requires some changes in the kernel of the host machine to
    work as expected. If those values are not set in the underlying operating
    system, the ES containers fail to boot with ERROR messages.

    More information about these requirements can be found in the links below:

      https://www.elastic.co/guide/en/elasticsearch/reference/current/file-descriptors.html
      https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html

    This chart uses a privileged initContainer to change those settings in the Kernel
    by running: sysctl -w vm.max_map_count=262144 && sysctl -w fs.file-max=65536

** Please be patient while the chart is being deployed **

  Elasticsearch can be accessed within the cluster on port 9200 at my-elasticsearch.elk-namespace.svc.cluster.local

  To access from outside the cluster execute the following commands:

    kubectl port-forward --namespace elk-namespace svc/my-elasticsearch 9200:9200 &
    curl http://127.0.0.1:9200/
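
The release itself can also be checked with Helm:

helm status my-elasticsearch -n elk-namespace
helm list -n elk-namespace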

List the pods

kubectl get pods -n elk-namespace
$ kubectl get pods -n elk-namespace
NAME                              READY   STATUS    RESTARTS   AGE
my-elasticsearch-coordinating-0   1/1     Running   0          110s
my-elasticsearch-coordinating-1   1/1     Running   0          110s
my-elasticsearch-data-0           1/1     Running   0          110s
my-elasticsearch-data-1           1/1     Running   0          110s
my-elasticsearch-ingest-0         1/1     Running   0          110s
my-elasticsearch-ingest-1         1/1     Running   0          110s
my-elasticsearch-master-0         1/1     Running   0          110s
my-elasticsearch-master-1         1/1     Running   0          110s
kubectl get pods -l app.kubernetes.io/instance=my-elasticsearch -n elk-namespace
$ kubectl get pods -l app.kubernetes.io/instance=my-elasticsearch -n elk-namespace
NAME                              READY   STATUS    RESTARTS   AGE
my-elasticsearch-coordinating-0   1/1     Running   0          2m5s
my-elasticsearch-coordinating-1   1/1     Running   0          2m5s
my-elasticsearch-data-0           1/1     Running   0          2m5s
my-elasticsearch-data-1           1/1     Running   0          2m5s
my-elasticsearch-ingest-0         1/1     Running   0          2m5s
my-elasticsearch-ingest-1         1/1     Running   0          2m5s
my-elasticsearch-master-0         1/1     Running   0          2m5s
my-elasticsearch-master-1         1/1     Running   0          2m5s
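
Instead of polling the pod list, kubectl rollout status can be used to wait until each StatefulSet has finished rolling out, for example:

kubectl rollout status statefulset/my-elasticsearch-master -n elk-namespace
kubectl rollout status statefulset/my-elasticsearch-data -n elk-namespace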

List the services

kubectl get services -n elk-namespace
$ kubectl get services -n elk-namespace
NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
my-elasticsearch                   ClusterIP   10.105.121.208   <none>        9200/TCP,9300/TCP   2m45s
my-elasticsearch-coordinating-hl   ClusterIP   None             <none>        9200/TCP,9300/TCP   2m45s
my-elasticsearch-data-hl           ClusterIP   None             <none>        9200/TCP,9300/TCP   2m45s
my-elasticsearch-ingest-hl         ClusterIP   None             <none>        9200/TCP,9300/TCP   2m45s
my-elasticsearch-master-hl         ClusterIP   None             <none>        9200/TCP,9300/TCP   2m45s

Basic kubectl port-forward syntax

kubectl port-forward TYPE/NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]
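
For example, a different local port can be mapped to the service port, or the local port can be omitted so that kubectl picks a free one:

kubectl port-forward --namespace elk-namespace svc/my-elasticsearch 19200:9200
kubectl port-forward --namespace elk-namespace svc/my-elasticsearch :9200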

Port-forward so that a service inside the Kubernetes cluster can be accessed directly from the local machine

kubectl port-forward --namespace elk-namespace svc/my-elasticsearch 9200:9200
$ kubectl port-forward service/my-elasticsearch-coordinating-hl 9200:9200 -n elk-namespace
Forwarding from 127.0.0.1:9200 -> 9200
Forwarding from [::1]:9200 -> 9200

Run the port-forward in the background

kubectl port-forward --address 0.0.0.0 svc/my-elasticsearch 9200:9200 --namespace elk-namespace &
$ kubectl port-forward --address 0.0.0.0 svc/my-elasticsearch 9200:9200 --namespace elk-namespace &
[1] 1632445
Forwarding from 0.0.0.0:9200 -> 9200

Call Elasticsearch using curl

curl http://127.0.0.1:9200
$ curl http://localhost:9200
{
  "name" : "my-elasticsearch-coordinating-0",
  "cluster_name" : "elastic",
  "cluster_uuid" : "diZhNfhKQ2CftW0_4Wc3SA",
  "version" : {
    "number" : "8.12.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "1665f706fd9354802c02146c1e6b5c0fbcddfbc9",
    "build_date" : "2024-01-11T10:05:27.953830042Z",
    "build_snapshot" : false,
    "lucene_version" : "9.9.1",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
curl http://localhost:9200/_cat/nodes
$ curl http://localhost:9200/_cat/nodes
10.244.0.49 29 39 0 2.03 5.78 4.24 d - my-elasticsearch-data-1
10.244.0.48 62 39 0 2.03 5.78 4.24 - - my-elasticsearch-coordinating-1
10.244.0.47 64 39 0 2.03 5.78 4.24 i - my-elasticsearch-ingest-0
10.244.0.50 69 39 0 2.03 5.78 4.24 m - my-elasticsearch-master-0
10.244.0.45 67 39 0 2.03 5.78 4.24 - - my-elasticsearch-coordinating-0
10.244.0.52 30 39 0 2.03 5.78 4.24 d - my-elasticsearch-data-0
10.244.0.51 68 39 0 2.03 5.78 4.24 m * my-elasticsearch-master-1
10.244.0.46 64 39 0 2.03 5.78 4.24 i - my-elasticsearch-ingest-1
curl http://localhost:9200/_cluster/health?pretty
$ curl http://localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "elastic",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 8,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
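
As a quick functional test, a document can be indexed and searched through the forwarded port. The index name test-index below is arbitrary:

curl -X POST "http://localhost:9200/test-index/_doc?pretty" \
  -H 'Content-Type: application/json' \
  -d '{"message": "hello elasticsearch"}'
curl "http://localhost:9200/test-index/_search?q=message:hello&pretty"
curl "http://localhost:9200/_cat/indices?v"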

Deploy Kibana

helm install my-kibana \
--set "elasticsearch.hosts[0]=my-elasticsearch.elk-namespace.svc.cluster.local" \
--set elasticsearch.port=9200 \
-n elk-namespace \
oci://registry-1.docker.io/bitnamicharts/kibana
$ helm install my-kibana \
> --set "elasticsearch.hosts[0]=my-elasticsearch.elk-namespace.svc.cluster.local" \
> --set elasticsearch.port=9200 \
> -n elk-namespace \
> oci://registry-1.docker.io/bitnamicharts/kibana
Pulled: registry-1.docker.io/bitnamicharts/kibana:10.8.3
Digest: sha256:7268dd2aed7e947f8e21644416b5e31236754c4864fc3bfd05f44390cbab5158
NAME: my-kibana
LAST DEPLOYED: Mon Feb  5 12:23:14 2024
NAMESPACE: elk-namespace
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kibana
CHART VERSION: 10.8.3
APP VERSION: 8.12.0

** Please be patient while the chart is being deployed **

1. Get the application URL by running these commands:
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward svc/my-kibana 8080:5601
kubectl port-forward --address 0.0.0.0 svc/my-kibana 8080:5601 -n elk-namespace &
$ kubectl port-forward --address 0.0.0.0 svc/my-kibana 8080:5601 -n elk-namespace &
[2] 1643695
Forwarding from 0.0.0.0:8080 -> 5601
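
Once the port-forward is running, Kibana's status API can be used to confirm the instance is up before opening http://localhost:8080 in a browser:

curl -s http://localhost:8080/api/status | head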

Deploy Logstash

helm install my-logstash \
--set elasticsearch.host=my-elasticsearch.elk-namespace.svc.cluster.local \
--set elasticsearch.port=9200 \
-n elk-namespace \
oci://registry-1.docker.io/bitnamicharts/logstash
$ helm install my-logstash \
> --set elasticsearch.host=my-elasticsearch.elk-namespace.svc.cluster.local \
> --set elasticsearch.port=9200 \
> -n elk-namespace \
> oci://registry-1.docker.io/bitnamicharts/logstash
Pulled: registry-1.docker.io/bitnamicharts/logstash:5.9.3
Digest: sha256:c77b3a07e692c68a42a7d90d814e2fa588e9799670a7340a03e1cd2b1c61ec3b
NAME: my-logstash
LAST DEPLOYED: Mon Feb  5 12:28:07 2024
NAMESPACE: elk-namespace
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: logstash
CHART VERSION: 5.9.3
APP VERSION: 8.12.0

** Please be patient while the chart is being deployed **

Logstash can be accessed through following DNS names from within your cluster:

    Logstash: my-logstash.elk-namespace.svc.cluster.local

To access Logstash from outside the cluster execute the following commands:

    export SERVICE_PORT=$(kubectl get --namespace elk-namespace -o jsonpath="{.spec.ports[0].port}" services my-logstash)
    kubectl --namespace elk-namespace port-forward svc/my-logstash ${SERVICE_PORT}:${SERVICE_PORT} &
    echo "http://127.0.0.1:${SERVICE_PORT}"
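
The Bitnami Logstash chart ships with a default pipeline that listens for HTTP input on port 8080, which matches the service port shown further below. Assuming that default pipeline has not been overridden, a test event can be posted through a port-forward. Local port 8081 is used here because local 8080 is already taken by the Kibana port-forward above:

kubectl port-forward --namespace elk-namespace svc/my-logstash 8081:8080 &
curl -X POST http://localhost:8081 \
  -H 'Content-Type: application/json' \
  -d '{"message": "hello logstash"}'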

 

List all resources in the elk-namespace namespace

kubectl get all --namespace elk-namespace
$ kubectl get all --namespace elk-namespace
NAME                                  READY   STATUS    RESTARTS   AGE
pod/my-elasticsearch-coordinating-0   1/1     Running   0          20m
pod/my-elasticsearch-coordinating-1   1/1     Running   0          20m
pod/my-elasticsearch-data-0           1/1     Running   0          20m
pod/my-elasticsearch-data-1           1/1     Running   0          20m
pod/my-elasticsearch-ingest-0         1/1     Running   0          20m
pod/my-elasticsearch-ingest-1         1/1     Running   0          20m
pod/my-elasticsearch-master-0         1/1     Running   0          20m
pod/my-elasticsearch-master-1         1/1     Running   0          20m
pod/my-kibana-768cd5467-82r7l         1/1     Running   0          6m5s
pod/my-logstash-0                     1/1     Running   0          73s

NAME                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/my-elasticsearch                   ClusterIP   10.105.121.208   <none>        9200/TCP,9300/TCP   20m
service/my-elasticsearch-coordinating-hl   ClusterIP   None             <none>        9200/TCP,9300/TCP   20m
service/my-elasticsearch-data-hl           ClusterIP   None             <none>        9200/TCP,9300/TCP   20m
service/my-elasticsearch-ingest-hl         ClusterIP   None             <none>        9200/TCP,9300/TCP   20m
service/my-elasticsearch-master-hl         ClusterIP   None             <none>        9200/TCP,9300/TCP   20m
service/my-kibana                          ClusterIP   10.105.206.166   <none>        5601/TCP            6m6s
service/my-logstash                        ClusterIP   10.103.131.225   <none>        8080/TCP            73s
service/my-logstash-headless               ClusterIP   None             <none>        8080/TCP            73s

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-kibana   1/1     1            1           6m5s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/my-kibana-768cd5467   1         1         1       6m5s

NAME                                             READY   AGE
statefulset.apps/my-elasticsearch-coordinating   2/2     20m
statefulset.apps/my-elasticsearch-data           2/2     20m
statefulset.apps/my-elasticsearch-ingest         2/2     20m
statefulset.apps/my-elasticsearch-master         2/2     20m
statefulset.apps/my-logstash                     1/1     73s

View and delete resources and the namespace

Remove Kibana, Elasticsearch, and Logstash

helm uninstall my-kibana --namespace elk-namespace
$ helm uninstall my-kibana --namespace elk-namespace
release "my-kibana" uninstalled
helm uninstall my-elasticsearch --namespace elk-namespace
$ helm uninstall my-elasticsearch --namespace elk-namespace
release "my-elasticsearch" uninstalled
helm uninstall my-logstash --namespace elk-namespace
$ helm uninstall my-logstash --namespace elk-namespace
release "my-logstash" uninstalled
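
To confirm that the releases are gone:

helm list -n elk-namespace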

Delete the PVCs associated with the my-elasticsearch release

kubectl get pvc --namespace elk-namespace
$ kubectl get pvc --namespace elk-namespace
NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-my-elasticsearch-data-0     Bound    pvc-0058952d-fc41-45a0-a83e-f3ac100641d6   8Gi        RWO            standard       33m
data-my-elasticsearch-data-1     Bound    pvc-b060e50a-95dc-4e56-9b80-2f440e8c795e   8Gi        RWO            standard       33m
data-my-elasticsearch-master-0   Bound    pvc-b1097ac7-5a31-4f67-93c7-2783a7d3ce58   8Gi        RWO            standard       33m
data-my-elasticsearch-master-1   Bound    pvc-66f6f8e7-4815-4b9d-bb9f-49069fff2143   8Gi        RWO            standard       33m
kubectl delete pvc -l app.kubernetes.io/instance=my-elasticsearch -n elk-namespace
$ kubectl delete pvc -l app.kubernetes.io/instance=my-elasticsearch -n elk-namespace
persistentvolumeclaim "data-my-elasticsearch-data-0" deleted
persistentvolumeclaim "data-my-elasticsearch-data-1" deleted
persistentvolumeclaim "data-my-elasticsearch-master-0" deleted
persistentvolumeclaim "data-my-elasticsearch-master-1" deleted
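
A follow-up check confirms that no PVCs remain; the underlying PersistentVolumes are then reclaimed according to the StorageClass reclaim policy:

kubectl get pvc --namespace elk-namespace
kubectl get pv | egrep elk-namespace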

 

Verify that all resources have been removed

kubectl get all --namespace elk-namespace
$ kubectl get all --namespace elk-namespace
No resources found in elk-namespace namespace.

 

Check whether the namespace still exists

kubectl get namespaces | egrep elk-namespace
$ kubectl get namespaces | egrep elk-namespace
elk-namespace          Active   38m

Delete the elk-namespace namespace

kubectl delete namespace elk-namespace
$ kubectl delete namespace elk-namespace
namespace "elk-namespace" deleted

Check the port-forwarding processes

ps -ef | grep -v grep | grep port-forward

Stop the port-forwarding processes

kill -9 <PID>
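
Alternatively, every kubectl port-forward process can be terminated in one step with pkill; pgrep -af shows what would be matched first:

pgrep -af "kubectl port-forward"
pkill -f "kubectl port-forward"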

 

References

- Bitnami site: Bitnami stacks

- GitHub: Bitnami Elasticsearch

 
