• Install the EFK add-on
    • Configure es-controller.yaml
    • Configure es-service.yaml
    • Configure fluentd-es-ds.yaml
    • Configure kibana-controller.yaml
    • Label the Nodes
    • Apply the definition files
    • Check the results
    • Access kibana

    Install the EFK add-on

    We collect the logs of every node by running fluentd as a DaemonSet, one Pod per node. Fluentd mounts the docker log directory /var/lib/docker/containers and the /var/log directory into the Pod. New per-pod directories are then created under /var/log/pods on the node so that the log output of different containers can be told apart; each of those directories contains a log file that is a symlink to the container's log output under /var/lib/docker/containers.
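The mounts described above correspond to a pod spec fragment along these lines. This is a sketch assuming the conventional volume names from the upstream add-on, not a copy of the actual fluentd-es-ds.yaml:

```yaml
# fragment of the fluentd DaemonSet pod spec (sketch; volume names assumed,
# check the real fluentd-es-ds.yaml in cluster/addons/fluentd-elasticsearch)
spec:
  containers:
  - name: fluentd-es
    image: sz-pg-oam-docker-hub-001.tendcloud.com/library/fluentd-elasticsearch:1.22
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: varlibdockercontainers
      mountPath: /var/lib/docker/containers
      readOnly: true
  volumes:
  - name: varlog
    hostPath:
      path: /var/log
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers
```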

    Official manifests directory: cluster/addons/fluentd-elasticsearch

    $ ls *.yaml
    es-controller.yaml es-service.yaml fluentd-es-ds.yaml kibana-controller.yaml kibana-service.yaml efk-rbac.yaml

    Like the other add-ons, EFK needs an efk-rbac.yaml file, which configures a serviceaccount named efk.
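A minimal guess at what efk-rbac.yaml contains is sketched below; the binding to cluster-admin is an assumption for illustration, so check the real file under ../manifests/EFK:

```yaml
# sketch of efk-rbac.yaml: a serviceaccount plus a cluster role binding
# (the roleRef target is assumed; verify against the actual file)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: efk
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: efk
subjects:
- kind: ServiceAccount
  name: efk
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```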

    The modified yaml files are available at: ../manifests/EFK

    Configure es-controller.yaml

    $ diff es-controller.yaml.orig es-controller.yaml
    24c24
    < - image: gcr.io/google_containers/elasticsearch:v2.4.1-2
    ---
    > - image: sz-pg-oam-docker-hub-001.tendcloud.com/library/elasticsearch:v2.4.1-2

    Configure es-service.yaml

    No changes required.

    Configure fluentd-es-ds.yaml

    $ diff fluentd-es-ds.yaml.orig fluentd-es-ds.yaml
    26c26
    < image: gcr.io/google_containers/fluentd-elasticsearch:1.22
    ---
    > image: sz-pg-oam-docker-hub-001.tendcloud.com/library/fluentd-elasticsearch:1.22

    Configure kibana-controller.yaml

    $ diff kibana-controller.yaml.orig kibana-controller.yaml
    22c22
    < image: gcr.io/google_containers/kibana:v4.6.1-1
    ---
    > image: sz-pg-oam-docker-hub-001.tendcloud.com/library/kibana:v4.6.1-1
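The three image edits above are mechanical: each one just swaps the gcr.io registry for the private one used throughout this chapter. A single sed pass can do them all; the snippet below demonstrates the rewrite on a sample file (in practice run the same sed over *.yaml in the add-on directory):

```shell
# demonstrate the registry rewrite on a sample manifest line;
# the private registry name is the one used in this chapter
echo 'image: gcr.io/google_containers/kibana:v4.6.1-1' > /tmp/kibana-demo.yaml
sed -i 's#gcr.io/google_containers#sz-pg-oam-docker-hub-001.tendcloud.com/library#g' /tmp/kibana-demo.yaml
cat /tmp/kibana-demo.yaml
```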

    Label the Nodes

    The DaemonSet fluentd-es-v1.22 is defined with the nodeSelector beta.kubernetes.io/fluentd-ds-ready=true, so that label must be set on every Node where fluentd is expected to run:

    $ kubectl get nodes
    NAME           STATUS    AGE       VERSION
    172.20.0.113   Ready     1d        v1.6.0
    $ kubectl label nodes 172.20.0.113 beta.kubernetes.io/fluentd-ds-ready=true
    node "172.20.0.113" labeled

    Apply the same label to the other two nodes.
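For several nodes, the labeling can be done in a loop. The node names below are assumptions (the text does not list the other two nodes); substitute the names from your own `kubectl get nodes` output:

```shell
# apply the fluentd scheduling label to the remaining nodes
# (172.20.0.114 and 172.20.0.115 are hypothetical node names)
for node in 172.20.0.114 172.20.0.115; do
  kubectl label nodes "$node" beta.kubernetes.io/fluentd-ds-ready=true
done
```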

    Apply the definition files

    $ kubectl create -f .
    serviceaccount "efk" created
    clusterrolebinding "efk" created
    replicationcontroller "elasticsearch-logging-v1" created
    service "elasticsearch-logging" created
    daemonset "fluentd-es-v1.22" created
    deployment "kibana-logging" created
    service "kibana-logging" created

    Check the results

    $ kubectl get deployment -n kube-system|grep kibana
    kibana-logging   1         1         1            1           2m
    $ kubectl get pods -n kube-system|grep -E 'elasticsearch|fluentd|kibana'
    elasticsearch-logging-v1-mlstp    1/1       Running   0          1m
    elasticsearch-logging-v1-nfbbf    1/1       Running   0          1m
    fluentd-es-v1.22-31sm0            1/1       Running   0          1m
    fluentd-es-v1.22-bpgqs            1/1       Running   0          1m
    fluentd-es-v1.22-qmn7h            1/1       Running   0          1m
    kibana-logging-1432287342-0gdng   1/1       Running   0          1m
    $ kubectl get service -n kube-system|grep -E 'elasticsearch|kibana'
    elasticsearch-logging   10.254.77.62   <none>   9200/TCP   2m
    kibana-logging          10.254.8.113   <none>   5601/TCP   2m

    The first time the kibana Pod starts, it takes a fairly long time (10-20 minutes) to optimize and cache the status page bundles. You can follow the Pod's logs to watch the progress:

    $ kubectl logs kibana-logging-1432287342-0gdng -n kube-system -f
    ELASTICSEARCH_URL=http://elasticsearch-logging:9200
    server.basePath: /api/v1/proxy/namespaces/kube-system/services/kibana-logging
    {"type":"log","@timestamp":"2017-04-12T13:08:06Z","tags":["info","optimize"],"pid":7,"message":"Optimizing and caching bundles for kibana and statusPage. This may take a few minutes"}
    {"type":"log","@timestamp":"2017-04-12T13:18:17Z","tags":["info","optimize"],"pid":7,"message":"Optimization of bundles for kibana and statusPage complete in 610.40 seconds"}
    {"type":"log","@timestamp":"2017-04-12T13:18:17Z","tags":["status","plugin:kibana@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-04-12T13:18:18Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":7,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:kbn_vislib_vis_types@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:markdown_vis@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:metric_vis@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:spyModes@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:statusPage@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:table_vis@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
    {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["listening","info"],"pid":7,"message":"Server running at http://0.0.0.0:5601"}
    {"type":"log","@timestamp":"2017-04-12T13:18:24Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":7,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
    {"type":"log","@timestamp":"2017-04-12T13:18:29Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}

    Access kibana

    1. Via kube-apiserver:

      Get the kibana-logging service URL

      $ kubectl cluster-info
      Kubernetes master is running at https://172.20.0.113:6443
      Elasticsearch is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
      Heapster is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/heapster
      Kibana is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/kibana-logging
      KubeDNS is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns
      kubernetes-dashboard is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
      monitoring-grafana is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
      monitoring-influxdb is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb

      Open this URL in a browser: https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana

    2. Via kubectl proxy:

      Start the proxy

      $ kubectl proxy --address='172.20.0.113' --port=8086 --accept-hosts='^*$'
      Starting to serve on 172.20.0.113:8086

      Open this URL in a browser: http://172.20.0.113:8086/api/v1/proxy/namespaces/kube-system/services/kibana-logging

    On the Settings -> Indices page, create an index (roughly the equivalent of a database in mysql): check Index contains time-based events, keep the default logstash-* pattern, and click Create.

    Possible issues

    If at this point the Create button is grayed out and the Time-field name dropdown has no options: fluentd reads the logs under /var/log/containers/, which are symlinks to /var/lib/docker/containers/${CONTAINER_ID}/${CONTAINER_ID}-json.log. Check your docker configuration: --log-driver must be set to json-file; the default may be journald. See docker logging.
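For reference, with --log-driver=json-file each line a container writes is wrapped in a small JSON record, which is what fluentd's parser consumes. The record below is a made-up sample, and the sed extraction is just for illustration (fluentd uses a real JSON parser):

```shell
# a json-file log record as docker writes it (sample content is made up)
line='{"log":"GET /healthz 200\n","stream":"stdout","time":"2017-04-12T13:18:19Z"}'
# pull out the "log" field with sed, purely to show what fluentd parses
printf '%s\n' "$line" | sed 's/.*"log":"\(.*\)\\n".*/\1/'
```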

    (figure: es-setting)

    Once the index is created, you can see the logs aggregated by ElasticSearch logging under Discover.

    (figure: es-home)