Kubernetes: A Concise Tutorial

Kubernetes - Monitoring

Monitoring is one of the key components of managing large clusters. A number of tools are available for this purpose.

Monitoring with Prometheus

Prometheus is a monitoring and alerting system. It was built at SoundCloud and open sourced in 2012. It handles multi-dimensional data very well.
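Each Prometheus time series is identified by a metric name plus a set of key/value labels, and queries can slice along any of those label dimensions. A small illustration (the metric and label names below are hypothetical):

# A hypothetical counter with "method" and "handler" label dimensions:
http_requests_total{method="POST", handler="/api/comments"}

# PromQL: per-handler request rate over the last 5 minutes
sum(rate(http_requests_total[5m])) by (handler)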

Prometheus has multiple components that participate in monitoring (a minimal scrape configuration follows the list) −

  1. Prometheus − It is the core component that scrapes and stores data.

  2. Prometheus Node Exporter − Gets the host-level metrics and exposes them to Prometheus.

  3. Ranch-eye − An HAProxy that exposes cAdvisor stats to Prometheus.

  4. Grafana − Visualization of data.

  5. InfluxDB − Time series database specifically used to store data from Rancher.

  6. Prom-ranch-exporter − A simple Node.js application which helps in querying the Rancher server for the status of a stack of services.

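As a minimal sketch of how Prometheus finds its targets, a scrape configuration for Node Exporter might look like the following (the file layout and target address are illustrative assumptions, not part of the original tutorial):

# prometheus.yml − illustrative scrape configuration
global:
   scrape_interval: 15s               # how often to scrape targets

scrape_configs:
- job_name: 'node-exporter'
  static_configs:
  - targets: ['localhost:9100']       # Node Exporter's default port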

Sematext Docker Agent

Sematext Docker Agent is a modern Docker-aware metrics, events, and log collection agent. It runs as a tiny container on every Docker host and collects logs, metrics, and events for all cluster nodes and containers. It discovers all containers (one pod might contain multiple containers), including containers for Kubernetes core services, if the core services are deployed in Docker containers. After its deployment, all logs and metrics are immediately available out of the box.

Deploying Agents to Nodes

Kubernetes provides DaemonSets, which ensure that a copy of a pod is added to every node in the cluster.

Configuring Sematext Docker Agent

The agent is configured via environment variables.

  1. Get a free account at apps.sematext.com, if you don’t have one already.

  2. Create an SPM App of type “Docker” to obtain the SPM App Token. The SPM App will hold your Kubernetes performance metrics and events.

  3. Create a Logsene App to obtain the Logsene App Token. The Logsene App will hold your Kubernetes logs.

  4. Edit values of LOGSENE_TOKEN and SPM_TOKEN in the DaemonSet definition as shown below.

Creating the DaemonSet Object

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
   name: sematext-agent
spec:
   template:
      metadata:
         labels:
            app: sematext-agent
      spec:
         dnsPolicy: "ClusterFirst"
         restartPolicy: "Always"
         containers:
         - name: sematext-agent
           image: sematext/sematext-agent-docker:latest
           imagePullPolicy: "Always"
           env:
           - name: SPM_TOKEN
             value: "REPLACE THIS WITH YOUR SPM TOKEN"
           - name: LOGSENE_TOKEN
             value: "REPLACE THIS WITH YOUR LOGSENE TOKEN"
           - name: KUBERNETES
             value: "1"
           volumeMounts:
           - mountPath: /var/run/docker.sock
             name: docker-sock
           - mountPath: /etc/localtime
             name: localtime
         volumes:
         - name: docker-sock
           hostPath:
              path: /var/run/docker.sock
         - name: localtime
           hostPath:
              path: /etc/localtime

Running the Sematext Docker Agent with kubectl

$ kubectl create -f sematext-agent-daemonset.yml
daemonset "sematext-agent-daemonset" created

Kubernetes Logs

Kubernetes containers’ logs are not much different from Docker container logs. However, Kubernetes users need to view logs for the deployed pods. Hence, it is very useful to have Kubernetes-specific information available for log search (a sample kubectl command follows the list), such as −

  1. Kubernetes namespace

  2. Kubernetes pod name

  3. Kubernetes container name

  4. Docker image name

  5. Kubernetes UID
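For comparison, the same identifiers are used when fetching logs directly with kubectl; the namespace, pod, and container names below are placeholders:

$ kubectl logs --namespace=<kubernetes-namespace> <pod-name> -c <container-name>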

Using ELK Stack and LogSpout

The ELK stack includes Elasticsearch, Logstash, and Kibana. To collect the logs and forward them to the logging platform, we will use LogSpout (though there are other options, such as FluentD).

The following code shows how to set up an ELK cluster on Kubernetes and create a service for Elasticsearch −

apiVersion: v1
kind: Service
metadata:
   name: elasticsearch
   namespace: elk
   labels:
      component: elasticsearch
spec:
   type: LoadBalancer
   selector:
      component: elasticsearch
   ports:
   - name: http
     port: 9200
     protocol: TCP
   - name: transport
     port: 9300
     protocol: TCP
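Since the Service is declared in the elk namespace, that namespace has to exist before the manifest is applied. Assuming the manifest above is saved as es-service.yml (the file name is illustrative):

$ kubectl create namespace elk
$ kubectl create -f es-service.yml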

Creating Replication Controller

apiVersion: v1
kind: ReplicationController
metadata:
   name: es
   namespace: elk
   labels:
      component: elasticsearch
spec:
   replicas: 1
   template:
      metadata:
         labels:
            component: elasticsearch
      spec:
         serviceAccount: elasticsearch
         containers:
         - name: es
           securityContext:
              capabilities:
                 add:
                 - IPC_LOCK
           image: quay.io/pires/docker-elasticsearch-kubernetes:1.7.1-4
           env:
           - name: KUBERNETES_CA_CERTIFICATE_FILE
             value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
           - name: NAMESPACE
             valueFrom:
                fieldRef:
                   fieldPath: metadata.namespace
           - name: CLUSTER_NAME
             value: "myesdb"
           - name: DISCOVERY_SERVICE
             value: "elasticsearch"
           - name: NODE_MASTER
             value: "true"
           - name: NODE_DATA
             value: "true"
           - name: HTTP_ENABLE
             value: "true"
           ports:
           - containerPort: 9200
             name: http
             protocol: TCP
           - containerPort: 9300
           volumeMounts:
           - mountPath: /data
             name: storage
         volumes:
         - name: storage
           emptyDir: {}
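Note that the pod spec references a serviceAccount named elasticsearch, so it must exist before the controller is created. Assuming the manifest above is saved as es-rc.yml (the file name is illustrative):

$ kubectl create serviceaccount elasticsearch --namespace=elk
$ kubectl create -f es-rc.yml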

Kibana URL

For Kibana, we provide the Elasticsearch URL as an environment variable.

- name: KIBANA_ES_URL
  value: "http://elasticsearch.elk.svc.cluster.local:9200"
- name: KUBERNETES_TRUST_CERT
  value: "true"

The Kibana UI will be reachable at container port 5601 and the corresponding host/NodePort combination. When you begin, there will not be any data in Kibana (which is expected, as you have not pushed any data).
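If no Service exposes Kibana yet, kubectl port-forward is a quick way to reach the UI from your workstation; the pod name below is a placeholder:

$ kubectl port-forward --namespace=elk <kibana-pod-name> 5601:5601

Then open http://localhost:5601 in a browser.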