Introduction to ECK
Elastic Cloud on Kubernetes (ECK) uses a Kubernetes operator to automate the deployment, management, and orchestration of Elasticsearch, Kibana, and APM Server on a Kubernetes cluster.
ECK is built on the Kubernetes Operator pattern and is installed into your Kubernetes cluster. Its scope goes well beyond simplifying the initial deployment of Elasticsearch and Kibana on Kubernetes: ECK focuses on streamlining all the day-2 operations, such as:
- Managing and monitoring multiple clusters
- Upgrading to new stack versions with ease
- Scaling cluster capacity up and down
- Changing cluster configuration
- Dynamically scaling local storage (including Elastic Local Volume, a local storage driver)
- Scheduling backups
All Elasticsearch clusters launched on ECK are secured by default: encryption is enabled and the cluster is protected by a strong default password from the moment it is created.
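The operations listed above are all driven declaratively: you edit the custom resource and re-apply it, and the operator reconciles the cluster. As a taste of that model, scaling a cluster is just a change to the `count` field (a sketch; the resource name and node set are illustrative, matching the example deployed later in this post):

```yaml
# Sketch: scale an ECK-managed cluster by editing the spec and re-applying it.
# The operator adds or removes Elasticsearch nodes and migrates data safely.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: monitor        # illustrative name
  namespace: beats
spec:
  version: 7.6.2
  nodeSets:
  - name: mdi
    count: 5           # was 3; re-apply and the operator scales the cluster
```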
Official site: https://www.elastic.co/cn/elastic-cloud-kubernetes
Project repository: https://github.com/elastic/cloud-on-k8s
Deploying ECK
References:
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html
https://github.com/elastic/cloud-on-k8s/tree/master/config/recipes/beats
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-volume-claim-templates.html
https://github.com/elastic/cloud-on-k8s/tree/master/config/samples
Environment:
Prepare three nodes; here the master node is configured to allow scheduling pods:
```
[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   7d    v1.18.2
node01     Ready    <none>   7d    v1.18.2
node02     Ready    <none>   7d    v1.18.2
```
ECK version deployed: v1.1.0
Preparing NFS storage
For testing purposes, an NFS server is deployed temporarily on the master01 node to provide the storage backing the PVCs.
```
docker run -d \
  --name nfs-server \
  --privileged \
  --restart always \
  -p 2049:2049 \
  -v /nfs-share:/nfs-share \
  -e SHARED_DIRECTORY=/nfs-share \
  itsthenetwork/nfs-server-alpine:latest
```
Deploy nfs-client-provisioner to provision NFS storage dynamically:
```
helm repo add apphub https://apphub.aliyuncs.com

helm install nfs-client-provisioner \
  --set nfs.server=192.168.93.11 \
  --set nfs.path=/ \
  apphub/nfs-client-provisioner
```
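Before deploying Elasticsearch, it is worth confirming that dynamic provisioning actually works. A throwaway PVC like the following (the claim name is purely illustrative) should move to Bound on its own once the provisioner creates a backing PV:

```yaml
# Sketch: smoke-test PVC for the nfs-client StorageClass.
# Apply it, check it becomes Bound, then delete it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-smoke-test        # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
```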
Install the NFS client on all nodes and enable the rpcbind service:
```
yum install -y nfs-utils
systemctl enable --now rpcbind
```
Installing the ECK operator
```
kubectl apply -f https://download.elastic.co/downloads/eck/1.1.0/all-in-one.yaml
```
Check the created pod:
```
[root@master01 ~]# kubectl -n elastic-system get pods
NAME                 READY   STATUS    RESTARTS   AGE
elastic-operator-0   1/1     Running   1          17m
```
Check the created CRDs. Three were created: apmservers, elasticsearches, and kibanas.
```
[root@master01 ~]# kubectl get crd | grep elastic
apmservers.apm.k8s.elastic.co                  2020-04-27T16:23:08Z
elasticsearches.elasticsearch.k8s.elastic.co   2020-04-27T16:23:08Z
kibanas.kibana.k8s.elastic.co                  2020-04-27T16:23:08Z
```
Deploying Elasticsearch and Kibana
Download the sample YAML files from the GitHub repository:
```
curl -L -o cloud-on-k8s-1.1.0.tar.gz https://github.com/elastic/cloud-on-k8s/archive/1.1.0.tar.gz
tar -zxf cloud-on-k8s-1.1.0.tar.gz
cd cloud-on-k8s-1.1.0/config/recipes/beats/
```
Create the namespace:
```
kubectl apply -f 0_ns.yaml
```
Deploy Elasticsearch and Kibana, setting storageClassName to nfs-client and changing the service type to NodePort:
```
$ cat 1_monitor.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: monitor
  namespace: beats
spec:
  version: 7.6.2
  nodeSets:
  - name: mdi
    count: 3
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: nfs-client
  http:
    service:
      spec:
        type: NodePort
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: monitor
  namespace: beats
spec:
  version: 7.6.2
  count: 1
  elasticsearchRef:
    name: "monitor"
  http:
    service:
      spec:
        type: NodePort
```
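The manifest sets node.store.allow_mmap: false so that Elasticsearch does not require raising vm.max_map_count on the hosts. If you prefer to keep mmap enabled for better performance, a common alternative is a privileged init container that sets the sysctl (a sketch; this fragment would go under the nodeSet's podTemplate):

```yaml
# Sketch: keep mmap enabled by raising vm.max_map_count per node
# instead of setting node.store.allow_mmap: false.
podTemplate:
  spec:
    initContainers:
    - name: sysctl
      securityContext:
        privileged: true
      command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
```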
Apply the YAML file to deploy Elasticsearch and Kibana:
```
kubectl apply -f 1_monitor.yaml
```
If the images cannot be pulled, you can substitute the Docker Hub images manually:
```
docker pull elastic/elasticsearch:7.6.2
docker pull elastic/kibana:7.6.2
docker tag elastic/elasticsearch:7.6.2 docker.elastic.co/elasticsearch/elasticsearch:7.6.2
docker tag elastic/kibana:7.6.2 docker.elastic.co/kibana/kibana:7.6.2
```
Check the created Elasticsearch and Kibana resources, including health, version, and node count:
```
[root@master01 ~]# kubectl -n beats get elasticsearch
NAME      HEALTH   NODES   VERSION   PHASE   AGE
monitor   green    3       7.6.2     Ready   77m

[root@master01 ~]# kubectl -n beats get kibana
NAME      HEALTH   NODES   VERSION   AGE
monitor   green    1       7.6.2     137m
```
Check the created pods:
```
[root@master01 ~]# kubectl -n beats get pods
NAME                          READY   STATUS    RESTARTS   AGE
monitor-es-mdi-0              1/1     Running   0          109s
monitor-es-mdi-1              1/1     Running   0          9m
monitor-es-mdi-2              1/1     Running   0          3m26s
monitor-kb-54cbdf6b8c-jklqm   1/1     Running   0          9m
```
Check the created PVs and PVCs:
```
[root@master01 ~]# kubectl -n beats get pvc
NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
elasticsearch-data-monitor-es-mdi-0   Bound    pvc-882be3e2-b752-474b-abea-7827b492d83d   50Gi       RWO            nfs-client     3m33s
elasticsearch-data-monitor-es-mdi-1   Bound    pvc-8e6ed97e-7524-47f5-b02c-1ff0d2af33af   50Gi       RWO            nfs-client     3m33s
elasticsearch-data-monitor-es-mdi-2   Bound    pvc-31b5f80d-8fbd-4762-ab69-650eb6619a2e   50Gi       RWO            nfs-client     3m33s

[root@master01 ~]# kubectl -n beats get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS   REASON   AGE
pvc-31b5f80d-8fbd-4762-ab69-650eb6619a2e   50Gi       RWO            Delete           Bound    beats/elasticsearch-data-monitor-es-mdi-2   nfs-client              3m35s
pvc-882be3e2-b752-474b-abea-7827b492d83d   50Gi       RWO            Delete           Bound    beats/elasticsearch-data-monitor-es-mdi-0   nfs-client              3m35s
pvc-8e6ed97e-7524-47f5-b02c-1ff0d2af33af   50Gi       RWO            Delete           Bound    beats/elasticsearch-data-monitor-es-mdi-1   nfs-client              3m35s
```
Check the created services. The Elasticsearch and Kibana services were changed to NodePort at deployment time, so they can be reached from outside the cluster:
```
[root@master01 ~]# kubectl -n beats get svc
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
monitor-es-http   NodePort    10.96.82.186    <none>        9200:31575/TCP   9m36s
monitor-es-mdi    ClusterIP   None            <none>        <none>           9m34s
monitor-kb-http   NodePort    10.97.213.119   <none>        5601:30878/TCP   9m35s
```
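Kubernetes picked the NodePorts above (31575 and 30878) at random from its NodePort range. If you want stable ports across redeployments, the service spec inside the manifest can pin them (a sketch for the Elasticsearch side; the port value is illustrative and must be free within the cluster's NodePort range):

```yaml
# Sketch: pin the Elasticsearch NodePort instead of letting Kubernetes choose one.
http:
  service:
    spec:
      type: NodePort
      ports:
      - name: https
        port: 9200
        targetPort: 9200
        nodePort: 31575   # illustrative fixed port
```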
Elasticsearch has authentication enabled by default; retrieve the password of the elastic user:
```
PASSWORD=$(kubectl -n beats get secret monitor-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
echo $PASSWORD
```
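The secret stores the password base64-encoded under the key named after the user (elastic), which is why the jsonpath output is piped through base64 --decode. The round trip can be illustrated standalone (the password value below is just a placeholder, not a real ECK password):

```shell
# Standalone illustration of the decode step above (placeholder value).
ENCODED=$(printf '%s' 'example-password' | base64)    # what Kubernetes stores in .data.elastic
PASSWORD=$(printf '%s' "$ENCODED" | base64 --decode)  # what the jsonpath + decode pipeline yields
echo "$PASSWORD"                                      # prints: example-password
```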
Access Elasticsearch from a browser:
```
https://192.168.93.11:31575/
```
Or access the Elasticsearch endpoint from inside the Kubernetes cluster:
```
[root@master01 ~]# kubectl run -it --rm centos --image=centos -- sh
sh-4.4#
sh-4.4# PASSWORD=gf4mgr5fsbstwx76b8zl8m2g
sh-4.4# curl -u "elastic:$PASSWORD" -k "https://monitor-es-http.beats:9200"
{
  "name" : "monitor-es-mdi-2",
  "cluster_name" : "monitor",
  "cluster_uuid" : "mrDgyhp7QWa7iVuY8Hx6gA",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
```
Access Kibana in a browser. The username and password are the same as for Elasticsearch. Choose "Explore on my own"; you will see that no index has been created yet.
```
https://192.168.93.11:30878/
```
Deploying Filebeat
Switch to the Docker Hub image and change the version to 7.6.2:
```
sed -i 's#docker.elastic.co/beats/filebeat:7.6.0#elastic/filebeat:7.6.2#g' 2_filebeat-kubernetes.yaml

kubectl apply -f 2_filebeat-kubernetes.yaml
```
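The sed rewrite can be sanity-checked on a scratch file before touching the real manifest (the file path below is purely for illustration):

```shell
# Sanity-check the image rewrite on a scratch copy (illustrative path).
cat > /tmp/filebeat-image-demo.yaml <<'EOF'
        image: docker.elastic.co/beats/filebeat:7.6.0
EOF
# Same substitution as applied to 2_filebeat-kubernetes.yaml above.
sed -i 's#docker.elastic.co/beats/filebeat:7.6.0#elastic/filebeat:7.6.2#g' /tmp/filebeat-image-demo.yaml
grep 'image:' /tmp/filebeat-image-demo.yaml   # → image: elastic/filebeat:7.6.2
```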
Check the created pods:
```
[root@master01 beats]# kubectl -n beats get pods -l k8s-app=filebeat
NAME             READY   STATUS    RESTARTS   AGE
filebeat-dctrz   1/1     Running   0          9m32s
filebeat-rgldp   1/1     Running   0          9m32s
filebeat-srqf4   1/1     Running   0          9m32s
```
If the images cannot be pulled, pull them manually:
```
docker pull elastic/filebeat:7.6.2
docker tag elastic/filebeat:7.6.2 docker.elastic.co/beats/filebeat:7.6.2

docker pull elastic/metricbeat:7.6.2
docker tag elastic/metricbeat:7.6.2 docker.elastic.co/beats/metricbeat:7.6.2
```
Open Kibana again; the filebeat index can now be discovered. Enter the index pattern, select @timestamp as the time field, and create the index pattern.
View the collected logs.
Deploying Metricbeat
```
sed -i 's#docker.elastic.co/beats/metricbeat:7.6.0#elastic/metricbeat:7.6.2#g' 3_metricbeat-kubernetes.yaml

kubectl apply -f 3_metricbeat-kubernetes.yaml
```
Check the created pods:
```
[root@master01 beats]# kubectl -n beats get pods -l k8s-app=metricbeat
NAME                          READY   STATUS    RESTARTS   AGE
metricbeat-6956d987bb-c96nq   1/1     Running   0          76s
metricbeat-6h42f              1/1     Running   0          76s
metricbeat-dzkxq              1/1     Running   0          76s
metricbeat-lffds              1/1     Running   0          76s
```
Open Kibana again and you will see that a metricbeat index has been added.