Kubernetes Tutorial
Kubernetes - Overview
Kubernetes is an open source container management tool hosted by the Cloud Native Computing Foundation (CNCF). It is also known as an enhanced version of Borg, which was developed at Google to manage both long-running processes and batch jobs, which were earlier handled by separate systems.
Kubernetes comes with the capability of automating deployment, scaling of applications, and operating application containers across clusters. It is capable of creating container-centric infrastructure.
Features of Kubernetes
Following are some of the important features of Kubernetes.
-
Continuous development, integration and deployment
-
Containerized infrastructure
-
Application-centric management
-
Auto-scalable infrastructure
-
Environment consistency across development, testing and production
-
Loosely coupled infrastructure, where each component can act as a separate unit
-
Higher density of resource utilization
-
Predictable infrastructure which is going to be created
One of the key features of Kubernetes is that it can run applications on clusters of physical and virtual machine infrastructure. It also has the capability to run applications on the cloud. It helps in moving from host-centric infrastructure to container-centric infrastructure.
Kubernetes - Architecture
In this chapter, we will discuss the basic architecture of Kubernetes.
Kubernetes - Cluster Architecture
As seen in the following diagram, Kubernetes follows a client-server architecture, wherein the master is installed on one machine and the nodes on separate Linux machines.

The key components of the master and node are defined in the following section.
Kubernetes - Master Machine Components
Following are the components of the Kubernetes master machine.
etcd
It stores the configuration information which can be used by each of the nodes in the cluster. It is a high-availability, distributed key-value store that can be spread across multiple nodes. It is accessible only by the Kubernetes API server, as it may contain sensitive information.
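As a quick illustration, a key-value pair can be written and read back with the etcdctl client that ships with etcd (etcd v2 syntax); the key name below is only an example and this assumes etcdctl is on the PATH.
$ etcdctl set /test/message "hello"
$ etcdctl get /test/message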
API Server
The Kubernetes API server provides all the operations on the cluster using the API. The API server implements an interface, which means different tools and libraries can readily communicate with it. Kubeconfig is a package along with the server-side tools that can be used for communication. It exposes the Kubernetes API.
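As a small sketch of how tools talk to the API server, the API can be reached through a local proxy once kubectl is configured against the cluster; the port and resource path below are only examples.
$ kubectl proxy --port=8001 &
$ curl http://localhost:8001/api/v1/namespaces/default/pods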
Controller Manager
This component is responsible for most of the controllers that regulate the state of the cluster and perform tasks. In general, it can be considered as a daemon which runs in a non-terminating loop and is responsible for collecting information and sending it to the API server. It works toward getting the shared state of the cluster and then makes changes to bring the current state of the server to the desired state. The key controllers are the replication controller, endpoint controller, namespace controller, and service account controller. The controller manager runs different kinds of controllers to handle nodes, endpoints, etc.
Scheduler
This is one of the key components of the Kubernetes master. It is a service in the master responsible for distributing the workload. It is responsible for tracking the utilization of the workload on cluster nodes and then placing the workload on nodes whose resources are available and which accept the workload. In other words, this is the mechanism responsible for allocating pods to available nodes. The scheduler is responsible for workload utilization and for allocating pods to new nodes.
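A quick way to see where the scheduler has placed pods is to list pods with the wide output format, which includes the node column; this assumes a running cluster with kubectl configured.
$ kubectl get pods -o wide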
Kubernetes - Node Components
Following are the key components of the node server which are necessary to communicate with the Kubernetes master.
Docker
The first requirement of each node is Docker, which helps in running the encapsulated application containers in a relatively isolated but lightweight operating environment.
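On a node, the containers that the kubelet has started through Docker can be listed with the standard Docker tooling; a minimal check, assuming Docker is installed on the node.
$ sudo docker ps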
Kubelet Service
This is a small service in each node responsible for relaying information to and from the control plane service. It interacts with the etcd store to read configuration details and write values. It communicates with the master component to receive commands and work. The kubelet process then assumes responsibility for maintaining the state of work and the node server. It manages network rules, port forwarding, etc.
Kubernetes Proxy Service
This is a proxy service which runs on each node and helps in making services available to the external host. It helps in forwarding requests to the correct containers and is capable of performing primitive load balancing. It makes sure that the networking environment is predictable and accessible, and at the same time it is isolated as well. It manages pods on the node, volumes, secrets, health checks of new containers, etc.
Kubernetes - Setup
It is important to set up the Virtual Datacenter (vDC) before setting up Kubernetes. This can be considered as a set of machines which can communicate with each other via the network. For a hands-on approach, you can set up a vDC on PROFITBRICKS if you do not have a physical or cloud infrastructure set up.
Once the IaaS setup on any cloud is complete, you need to configure the Master and the Node.
Note − The setup is shown for Ubuntu machines. The same can be set up on other Linux machines as well.
Prerequisites
Installing Docker − Docker is required on all the instances of Kubernetes. Following are the steps to install Docker.
Step 1 − Log on to the machine with the root user account.
Step 2 − Update the package information. Make sure that the apt package is working.
Step 3 − Run the following commands.
$ sudo apt-get update
$ sudo apt-get install apt-transport-https ca-certificates
Step 4 − Add the new GPG key.
$ sudo apt-key adv \
--keyserver hkp://ha.pool.sks-keyservers.net:80 \
--recv-keys 58118E89F3A912897C070ADBF76221572C52609D
$ echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee
/etc/apt/sources.list.d/docker.list
Step 5 - 更新 API 软件包映像。
Step 5 − Update the API package image.
$ sudo apt-get update
Once all the above tasks are complete, you can start with the actual installation of the Docker engine. However, before this you need to verify that the kernel version you are using is correct.
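The kernel version can be checked with the following command.
$ uname -r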
Install Docker Engine
Run the following commands to install the Docker engine.
Step 1 − Log on to the machine.
Step 2 − Update the package index.
$ sudo apt-get update
Step 3 − Install the Docker engine using the following command.
$ sudo apt-get install docker-engine
Step 4 − Start the Docker daemon.
$ sudo service docker start
Step 5 − To verify that Docker is installed, use the following command.
$ sudo docker run hello-world
Install etcd 2.0
This needs to be installed on the Kubernetes master machine. In order to install it, run the following commands.
$ curl -L https://github.com/coreos/etcd/releases/download/v2.0.0/etcd-v2.0.0-linux-amd64.tar.gz \
   -o etcd-v2.0.0-linux-amd64.tar.gz ->1
$ tar xzvf etcd-v2.0.0-linux-amd64.tar.gz ------>2
$ cd etcd-v2.0.0-linux-amd64 ------------>3
$ mkdir /opt/bin ------------->4
$ cp etcd* /opt/bin ----------->5
In the above set of commands −
-
First, we download etcd and save it with the specified name.
-
Then, we un-tar the tar package.
-
We make a dir named bin inside /opt.
-
We copy the extracted files to the target location, /opt/bin.
Now we are ready to build Kubernetes. We need to install Kubernetes on all the machines in the cluster.
$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git
$ cd kubernetes
$ make release
The above command will create a _output dir in the root of the kubernetes folder. Next, we can extract the directory into any directory of our choice, such as /opt/bin.
Next comes the networking part, wherein we need to actually start with the setup of the Kubernetes master and node. In order to do this, we will make an entry in the hosts file, which can be done on the node machine.
$ echo "<IP address of master machine> kube-master
<IP address of node machine> kube-minion" >> /etc/hosts
After this, the /etc/hosts file on the node maps the names kube-master and kube-minion to their respective IP addresses.
Now, we will start with the actual configuration on the Kubernetes master.
First, we will start copying all the configuration files to their correct location.
$ cp <Current dir. location>/kube-apiserver /opt/bin/
$ cp <Current dir. location>/kube-controller-manager /opt/bin/
$ cp <Current dir. location>/kube-scheduler /opt/bin/
$ cp <Current dir. location>/kubecfg /opt/bin/
$ cp <Current dir. location>/kubectl /opt/bin/
$ cp <Current dir. location>/kubernetes /opt/bin/
The above commands will copy all the configuration files to the required location. Now we will come back to the same directory where we have built the Kubernetes folder.
$ cp kubernetes/cluster/ubuntu/init_conf/kube-apiserver.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/init_conf/kube-controller-manager.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/init_conf/kube-scheduler.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-apiserver /etc/init.d/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-controller-manager /etc/init.d/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-scheduler /etc/init.d/
$ cp kubernetes/cluster/ubuntu/default_scripts/kubelet /etc/default/
$ cp kubernetes/cluster/ubuntu/default_scripts/kube-proxy /etc/default/
The next step is to update the copied configuration files under the /etc directory.
Configure etcd on the master using the following command.
$ ETCD_OPTS = "-listen-client-urls = http://kube-master:4001"
Configure kube-apiserver
For this, on the master, we need to edit the /etc/default/kube-apiserver file which we copied earlier.
$ KUBE_APISERVER_OPTS = "--address = 0.0.0.0 \
--port = 8080 \
--etcd_servers = <The path that is configured in ETCD_OPTS> \
--portal_net = 11.1.1.0/24 \
--allow_privileged = false \
--kubelet_port = < Port you want to configure> \
--v = 0"
Configure the kube Controller Manager
We need to add the following content in /etc/default/kube-controller-manager.
$ KUBE_CONTROLLER_MANAGER_OPTS = "--address = 0.0.0.0 \
--master = 127.0.0.1:8080 \
--machines = kube-minion \ -----> # this is the kubernetes node
--v = 0"
Next, configure the kube scheduler in the corresponding file.
$ KUBE_SCHEDULER_OPTS = "--address = 0.0.0.0 \
--master = 127.0.0.1:8080 \
--v = 0"
Once all the above tasks are complete, we are good to go ahead by bringing up the Kubernetes master. In order to do this, we will restart Docker.
$ service docker restart
Kubernetes Node Configuration
The Kubernetes node will run two services, the kubelet and the kube-proxy. Before moving ahead, we need to copy the binaries we downloaded to the required folders where we want to configure the Kubernetes node.
Use the same method of copying the files that we did for the Kubernetes master. As the node will only run the kubelet and the kube-proxy, we will configure them.
$ cp <Path of the extracted file>/kubelet /opt/bin/
$ cp <Path of the extracted file>/kube-proxy /opt/bin/
$ cp <Path of the extracted file>/kubecfg /opt/bin/
$ cp <Path of the extracted file>/kubectl /opt/bin/
$ cp <Path of the extracted file>/kubernetes /opt/bin/
Now, we will copy the content to the appropriate directories.
$ cp kubernetes/cluster/ubuntu/init_conf/kubelet.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/init_conf/kube-proxy.conf /etc/init/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kubelet /etc/init.d/
$ cp kubernetes/cluster/ubuntu/initd_scripts/kube-proxy /etc/init.d/
$ cp kubernetes/cluster/ubuntu/default_scripts/kubelet /etc/default/
$ cp kubernetes/cluster/ubuntu/default_scripts/kube-proxy /etc/default/
We will configure the kubelet and kube-proxy conf files.
We will configure the /etc/init/kubelet.conf file.
$ KUBELET_OPTS = "--address = 0.0.0.0 \
--port = 10250 \
--hostname_override = kube-minion \
--etcd_servers = http://kube-master:4001 \
--enable_server = true \
--v = 0"
For kube-proxy, we will configure it using the following command.
$ KUBE_PROXY_OPTS = "--etcd_servers = http://kube-master:4001 \
--v = 0"
The above configuration goes into the /etc/init/kube-proxy.conf file.
Finally, we will restart the Docker service.
$ service docker restart
Now we are done with the configuration. You can check by running the following command.
$ /opt/bin/kubectl get minions
Kubernetes - Images
Kubernetes (Docker) images are the key building blocks of containerized infrastructure. As of now, Kubernetes only supports Docker images. Each container in a pod has its Docker image running inside it.
When we are configuring a pod, the image property in the configuration file has the same syntax as the Docker command does. The configuration file has a field to define the image name, which we are planning to pull from the registry.
Following is the common configuration structure which will pull an image from the Docker registry and deploy it into a Kubernetes container.
apiVersion: v1
kind: Pod
metadata:
  name: Tesing_for_Image_pull -----------> 1
spec:
  containers:
  - name: neo4j-server ------------------------> 2
    image: <Name of the Docker image> ----------> 3
    imagePullPolicy: Always ------------->4
    command: ["echo", "SUCCESS"] ------------------->
In the above code, we have defined −
-
name: Tesing_for_Image_pull − This name is given to identify and check what is the name of the container that would get created after pulling the images from Docker registry.
-
name: neo4j-server − This is the name given to the container that we are trying to create. Like we have given neo4j-server.
-
image: <Name of the Docker image> − This is the name of the image which we are trying to pull from the Docker or internal registry of images. We need to define a complete registry path along with the image name that we are trying to pull.
-
imagePullPolicy: Always → This image pull policy defines that whenever we run this file to create the container, it will pull the image again.
-
command: [“echo”, “SUCCESS”] − With this, when we create the container and if everything goes fine, it will display a message when we will access the container.
In order to pull the image and create a container, we will run the following command.
$ kubectl create -f Tesing_for_Image_pull
Once we fetch the log, we will get the output as successful.
$ kubectl logs Tesing_for_Image_pull
The above command will produce a success output, or it will produce a failure output.
Note − It is recommended that you try all the commands yourself.
Kubernetes - Jobs
The main function of a job is to create one or more pods and track the success of the pods. Jobs ensure that the specified number of pods complete successfully. When a specified number of successful runs of pods is complete, the job is considered complete.
Creating a Job
Use the following yaml to create a job −
apiVersion: v1
kind: Job ------------------------> 1
metadata:
  name: py
spec:
  template:
    metadata:
      name: py -------> 2
    spec:
      containers:
      - name: py ------------------------> 3
        image: python ----------> 4
        command: ["python", "SUCCESS"]
      restartPolicy: Never --------> 5
In the above code, we have defined −
-
kind: Job → We have defined the kind as Job which will tell kubectl that the yaml file being used is to create a job type pod.
-
Name:py → This is the name of the template that we are using and the spec defines the template.
-
name: py → we have given a name as py under container spec which helps to identify the Pod which is going to be created out of it.
-
Image: python → the image which we are going to pull to create the container which will run inside the pod.
-
restartPolicy: Never → This condition of image restart is given as Never, which means that if the container is killed or fails, it will not restart itself.
We will create the job using the following command, with the yaml saved as py.yaml.
$ kubectl create -f py.yaml
The above command will create a job. If you want to check the status of the job, use the following command.
$ kubectl describe jobs/py
Scheduled Job
Scheduled jobs in Kubernetes use Cronetes, which takes a Kubernetes job and launches it in the Kubernetes cluster.
-
Scheduling a job will run a pod at a specified point of time.
-
A periodic job is created for it which invokes itself automatically.
Note − The feature of a scheduled job is supported by version 1.4, and the batch/v2alpha1 API is turned on by passing --runtime-config=batch/v2alpha1 while bringing up the API server.
We will use the same yaml which we used to create the job and make it a scheduled job.
apiVersion: v1
kind: Job
metadata:
  name: py
spec:
  schedule: h/30 * * * * ? -------------------> 1
  template:
    metadata:
      name: py
    spec:
      containers:
      - name: py
        image: python
        args:
        - /bin/sh -------> 2
        - -c
        - ps -eaf ------------> 3
      restartPolicy: OnFailure
In the above code, we have defined −
-
schedule: h/30 * * * * ? → To schedule the job to run in every 30 minutes.
-
/bin/sh: This will enter in the container with /bin/sh
-
ps -eaf → Will run the ps -eaf command on the machine and list all the running processes inside the container.
This scheduled job concept is useful when we are trying to build and run a set of tasks at a specified point of time and then complete the process.
Kubernetes - Labels & Selectors
Labels
Labels are key-value pairs which are attached to pods, replication controllers and services. They are used as identifying attributes for objects such as pods and replication controllers. They can be added to an object at creation time and can be added or modified at run time.
Selectors
Labels do not provide uniqueness. In general, we can say that many objects can carry the same labels. Label selectors are the core grouping primitive in Kubernetes. They are used by the users to select a set of objects.
The Kubernetes API currently supports two types of selectors −
-
Equality-based selectors
-
Set-based selectors
Equality-based Selectors
They allow filtering by key and value. Matching objects should satisfy all the specified labels.
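For example, pods can be filtered with an equality-based selector on the kubectl command line; the label keys and values below are only illustrative.
$ kubectl get pods -l environment=production,tier=frontend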
Set-based Selectors
Set-based selectors allow filtering of keys according to a set of values.
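A corresponding set-based query looks like this, again with example label values.
$ kubectl get pods -l 'environment in (production, qa),tier notin (frontend)'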
apiVersion: v1
kind: Service
metadata:
  name: sp-neo4j-standalone
spec:
  ports:
  - port: 7474
    name: neo4j
  type: NodePort
  selector:
    app: salesplatform ---------> 1
    component: neo4j -----------> 2
In the above code, we are using the label selector app: salesplatform and the component selector component: neo4j.
Once we run the file using the kubectl command, it will create a service with the name sp-neo4j-standalone which will communicate on port 7474. The type is NodePort with the new label selectors app: salesplatform and component: neo4j.
Kubernetes - Namespace
A namespace provides an additional qualification to a resource name. This is helpful when multiple teams are using the same cluster and there is a potential for name collision. It can act as a virtual wall between multiple clusters.
Functionality of Namespace
Following are some of the important functionalities of a namespace in Kubernetes −
-
Namespaces help pod-to-pod communication using the same namespace.
-
Namespaces are virtual clusters that can sit on top of the same physical cluster.
-
They provide logical separation between the teams and their environments.
Create a Namespace
The following yaml is used to create a namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: elk
Control the Namespace
The following commands are used to control the namespace.
$ kubectl create -f namespace.yml ---------> 1
$ kubectl get namespace -----------------> 2
$ kubectl get namespace <Namespace name> ------->3
$ kubectl describe namespace <Namespace name> ---->4
$ kubectl delete namespace <Namespace name>
In the above code,
-
We are using the command to create a namespace.
-
This will list all the available namespace.
-
This will get a particular namespace whose name is specified in the command.
-
This will describe the complete details about the namespace.
-
This will delete a particular namespace present in the cluster.
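As an additional illustration, most kubectl commands can also be scoped to a particular namespace with the --namespace flag; the namespace name used here is only an example.
$ kubectl get pods --namespace=elk
$ kubectl get services --namespace=elk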
Using Namespace in Service - Example
Following is an example of a sample file for using a namespace in a service.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: elk
  labels:
    component: elasticsearch
spec:
  type: LoadBalancer
  selector:
    component: elasticsearch
  ports:
  - name: http
    port: 9200
    protocol: TCP
  - name: transport
    port: 9300
    protocol: TCP
In the above code, we are using the namespace named elk under the service metadata.
Kubernetes - Node
A node is a working machine in a Kubernetes cluster, which is also known as a minion. Nodes are working units which can be physical machines, VMs, or cloud instances.
Each node has all the required configuration to run a pod on it, such as the proxy service and the kubelet service, along with Docker, which is used to run the Docker containers in the pods created on the node.
They are not created by Kubernetes; they are created externally, either by the cloud service provider or by the Kubernetes cluster manager, on physical or virtual machines.
The key component of Kubernetes for handling multiple nodes is the controller manager, which runs multiple kinds of controllers to manage nodes. To manage nodes, Kubernetes creates an object of kind Node, which will validate that the object which is created is a valid node.
Node Configuration
apiVersion: v1
kind: Node
metadata:
  name: <ip address of the node>
  labels:
    name: <label name>
The actual object is created in JSON format and looks as follows −
{
   "kind": "Node",
   "apiVersion": "v1",
   "metadata": {
      "name": "10.01.1.10",
      "labels": {
         "name": "cluster 1 node"
      }
   }
}
Node Controller
They are the collection of services which run on the Kubernetes master and continuously monitor the nodes in the cluster on the basis of metadata.name. If all the required services are running, then the node is validated and a newly created pod will be assigned to that node by the controller. If it is not valid, then the master will not assign any pod to it and will wait until it becomes valid.
The Kubernetes master registers the node automatically if the --register-node flag is true.
--register-node = true
However, if the cluster administrator wants to manage it manually, it can be done by turning off the flag −
--register-node = false
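Once nodes are registered, they can be inspected from the master; a minimal sketch, where the node name is whatever appears in the output of the first command.
$ kubectl get nodes
$ kubectl describe node <node name>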
Kubernetes - Service
A service can be defined as a logical set of pods. It can be defined as an abstraction on top of the pod which provides a single IP address and DNS name by which the pods can be accessed. With a Service, it is very easy to manage load balancing configuration. It helps pods to scale very easily.
A service is a REST object in Kubernetes whose definition can be posted to the Kubernetes apiServer on the Kubernetes master to create a new instance.
Service without Selector
apiVersion: v1
kind: Service
metadata:
  name: Tutorial_point_service
spec:
  ports:
  - port: 8080
    targetPort: 31999
The above configuration will create a service with the name Tutorial_point_service.
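To create the service from a file and confirm that it exists, commands like the following can be used; the file name is only an example.
$ kubectl create -f service.yml
$ kubectl get services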
Service Config File with Selector
apiVersion: v1
kind: Service
metadata:
  name: Tutorial_point_service
spec:
  selector:
    application: "My Application" -------------------> (Selector)
  ports:
  - port: 8080
    targetPort: 31999
When a service does not have a selector, as in the first example, traffic is not routed automatically; in that case we need to create an endpoint manually.
apiVersion: v1
kind: Endpoints
metadata:
  name: Tutorial_point_service
subsets:
- addresses:
  - ip: "192.168.168.40"
  ports:
  - port: 8080
In the above code, we have created an endpoint which will route the traffic to the endpoint defined as “192.168.168.40:8080”.
Multi-Port Service Creation
apiVersion: v1
kind: Service
metadata:
  name: Tutorial_point_service
spec:
  selector:
    application: "My Application" -------------------> (Selector)
  clusterIP: 10.3.0.12
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 31999
  - name: https
    protocol: TCP
    port: 443
    targetPort: 31998
Types of Services
ClusterIP − This helps in restricting the service within the cluster. It exposes the service within the defined Kubernetes cluster.
spec:
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: 31999
    name: ClusterIPService
NodePort − It will expose the service on a static port on the deployed node. A ClusterIP service, to which the NodePort service will route, is automatically created. The service can be accessed from outside the cluster using NodeIP:nodePort.
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 31999
    name: NodeportService
  clusterIP: 10.20.30.40
Load Balancer − It uses the cloud provider's load balancer. NodePort and ClusterIP services are created automatically, to which the external load balancer will route.
Following is a full service yaml file with the service type as NodePort. Try to create one yourself.
apiVersion: v1
kind: Service
metadata:
  name: appname
  labels:
    k8s-app: appname
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 31999
    name: omninginx
  selector:
    k8s-app: appname
    component: nginx
    env: env_name
Kubernetes - Pod
A pod is a collection of containers and their storage inside a node of a Kubernetes cluster. It is possible to create a pod with multiple containers inside it, for example, keeping a database container and a data container in the same pod.
Types of Pod
There are two types of pods −
-
Single container pod
-
Multi container pod
Single Container Pod
They can simply be created with the kubectl run command, where you have a defined image on the Docker registry which will be pulled while creating the pod.
$ kubectl run <name of pod> --image=<name of the image from registry>
Example − We will create a pod with a tomcat image which is available on the Docker hub.
$ kubectl run tomcat --image = tomcat:8.0
This can also be done by creating a yaml file and then running the kubectl create command.
apiVersion: v1
kind: Pod
metadata:
  name: Tomcat
spec:
  containers:
  - name: Tomcat
    image: tomcat:8.0
    ports:
    - containerPort: 7500
    imagePullPolicy: Always
Once the above yaml file is created, we will save the file with the name tomcat.yml and run the create command to run the document.
$ kubectl create -f tomcat.yml
It will create a pod with the name tomcat. We can use the describe command along with kubectl to describe the pod.
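For example, the pod created above can be inspected as follows.
$ kubectl describe pod tomcat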
Multi Container Pod
Multi-container pods are created using a yaml file with the definitions of the containers.
apiVersion: v1
kind: Pod
metadata:
  name: Tomcat
spec:
  containers:
  - name: Tomcat
    image: tomcat:8.0
    ports:
    - containerPort: 7500
    imagePullPolicy: Always
  - name: Database
    image: mongoDB
    ports:
    - containerPort: 7501
    imagePullPolicy: Always
In the above code, we have created one pod with two containers inside it, one for tomcat and the other for MongoDB.
Kubernetes - Replication Controller
The Replication Controller is one of the key features of Kubernetes, which is responsible for managing the pod lifecycle. It is responsible for making sure that the specified number of pod replicas is running at any point of time. It is used when one wants to make sure that the specified number of pods, or at least one pod, is running. It has the capability to bring up or bring down the specified number of pods.
It is a best practice to use the replication controller to manage the pod lifecycle rather than creating a pod again and again.
apiVersion: v1
kind: ReplicationController --------------------------> 1
metadata:
  name: Tomcat-ReplicationController --------------------------> 2
spec:
  replicas: 3 ------------------------> 3
  template:
    metadata:
      name: Tomcat-ReplicationController
      labels:
        app: App
        component: neo4j
    spec:
      containers:
      - name: Tomcat -----------------------> 4
        image: tomcat:8.0
        ports:
        - containerPort: 7474 ------------------------> 5
Setup Details
-
Kind: ReplicationController → In the above code, we have defined the kind as replication controller which tells the kubectl that the yaml file is going to be used for creating the replication controller.
-
name: Tomcat-ReplicationController → This helps in identifying the name with which the replication controller will be created. If we run kubectl get rc Tomcat-ReplicationController, it will show the replication controller details.
-
replicas: 3 → This helps the replication controller to understand that it needs to maintain three replicas of a pod at any point of time in the pod lifecycle.
-
name: Tomcat → In the spec section, we have defined the name as tomcat which will tell the replication controller that the container present inside the pods is tomcat.
-
containerPort: 7474 → It helps in making sure that, on all the nodes of the cluster where the pod is running, the container inside the pod will be exposed on the same port, 7474.

Here, the Kubernetes service is working as a load balancer for the three tomcat replicas.
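A minimal sketch of creating and scaling this replication controller, assuming the above yaml has been saved as tomcat-rc.yml (a hypothetical file name).
$ kubectl create -f tomcat-rc.yml
$ kubectl get rc
$ kubectl scale rc Tomcat-ReplicationController --replicas=4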
Kubernetes - Replica Sets
A Replica Set ensures how many replicas of a pod should be running. It can be considered as a replacement for the replication controller. The key difference between a replica set and a replication controller is that the replication controller only supports equality-based selectors whereas the replica set supports set-based selectors.
apiVersion: extensions/v1beta1 --------------------->1
kind: ReplicaSet --------------------------> 2
metadata:
  name: Tomcat-ReplicaSet
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: Backend ------------------> 3
    matchExpressions:
    - { key: tier, operator: In, values: [Backend] } --------------> 4
  template:
    metadata:
      labels:
        app: Tomcat-ReplicaSet
        tier: Backend
    spec:
      containers:
      - name: Tomcat
        image: tomcat:8.0
        ports:
        - containerPort: 7474
Setup Details
-
apiVersion: extensions/v1beta1 → In the above code, the API version is the advanced beta version of Kubernetes which supports the concept of replica set.
-
kind: ReplicaSet → We have defined the kind as the replica set which helps kubectl to understand that the file is used to create a replica set.
-
tier: Backend → We have defined the label tier as backend which creates a matching selector.
-
{key: tier, operator: In, values: [Backend]} → This helps matchExpressions to understand the matching condition we have defined, which complements the matchLabels selector.
Run the above file using kubectl and create the backend replica set with the provided definition in the yaml file.
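A minimal sketch, assuming the yaml above is saved as tomcat-replicaset.yml (a hypothetical file name).
$ kubectl create -f tomcat-replicaset.yml
$ kubectl get rs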

Kubernetes - Deployments
Deployments are an upgraded and higher-level version of the replication controller. They manage the deployment of replica sets, which are also an upgraded version of the replication controller. They have the capability to update the replica set and are also capable of rolling back to the previous version.
They provide many updated features of matchLabels and selectors. We have got a new controller in the Kubernetes master called the deployment controller which makes this happen. It has the capability to change the deployment midway.
Changing the Deployment
Updating − The user can update the ongoing deployment before it is completed. In this, the existing deployment will be settled and a new deployment will be created.
Deleting − The user can pause/cancel the deployment by deleting it before it is completed. Recreating the same deployment will resume it.
Rollback − We can roll back the deployment or a deployment in progress. The user can create or update the deployment by using DeploymentSpec.PodTemplateSpec = oldRC.PodTemplateSpec.
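On the command line, rollout status and rollbacks are handled with the kubectl rollout subcommands; a minimal sketch, where the deployment name is a placeholder.
$ kubectl rollout status deployment/<deployment name>
$ kubectl rollout undo deployment/<deployment name>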
Deployment Strategies
Deployment strategies help in defining how the new RC should replace the existing RC.
Recreate − This feature will kill all the existing RCs and then bring up the new ones. This results in a quick deployment, however it will result in downtime when the old pods are down and the new pods have not yet come up.
Rolling Update − This feature gradually brings down the old RC and brings up the new one. This results in a slower deployment, however there is no downtime. At all times, a few old pods and a few new pods are available during this process.
The configuration file of a Deployment looks like this.
apiVersion: extensions/v1beta1 --------------------->1
kind: Deployment --------------------------> 2
metadata:
  name: Tomcat-ReplicaSet
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: Tomcat-ReplicaSet
        tier: Backend
    spec:
      containers:
      - name: Tomcat
        image: tomcat:8.0
        ports:
        - containerPort: 7474
In the above code, the only thing which is different from the replica set is that we have defined the kind as Deployment.
Create Deployment
$ kubectl create -f Deployment.yaml --record
deployment "Deployment" created Successfully.
Kubernetes - Volumes
In Kubernetes, a volume can be thought of as a directory which is accessible to the containers in a pod. We have different types of volumes in Kubernetes, and the type defines how the volume is created and what it contains.
The concept of volumes was present with Docker, however the only issue was that a volume was very much limited to a particular pod. As soon as the life of the pod ended, the volume was also lost.
On the other hand, the volumes that are created through Kubernetes are not limited to any one container. They support any or all of the containers deployed inside a pod of Kubernetes. A key advantage of Kubernetes volumes is that a pod can use several different kinds of storage at the same time.
Types of Kubernetes Volume
Here is a list of some popular Kubernetes volumes −
-
emptyDir − It is a type of volume that is created when a Pod is first assigned to a Node. It remains active as long as the Pod is running on that node. The volume is initially empty and the containers in the pod can read and write the files in the emptyDir volume. Once the Pod is removed from the node, the data in the emptyDir is erased. (A sample pod definition using an emptyDir volume is shown after this list.)
-
hostPath − This type of volume mounts a file or directory from the host node’s filesystem into your pod.
-
gcePersistentDisk − This type of volume mounts a Google Compute Engine (GCE) Persistent Disk into your Pod. The data in a gcePersistentDisk remains intact when the Pod is removed from the node.
-
awsElasticBlockStore − This type of volume mounts an Amazon Web Services (AWS) Elastic Block Store into your Pod. Just like gcePersistentDisk, the data in an awsElasticBlockStore remains intact when the Pod is removed from the node.
-
nfs − An nfs volume allows an existing NFS (Network File System) to be mounted into your pod. The data in an nfs volume is not erased when the Pod is removed from the node. The volume is only unmounted.
-
iscsi − An iscsi volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your pod.
-
flocker − It is an open-source clustered container data volume manager. It is used for managing data volumes. A flocker volume allows a Flocker dataset to be mounted into a pod. If the dataset does not exist in Flocker, then you first need to create it by using the Flocker API.
-
glusterfs − Glusterfs is an open-source networked filesystem. A glusterfs volume allows a glusterfs volume to be mounted into your pod.
-
rbd − RBD stands for Rados Block Device. An rbd volume allows a Rados Block Device volume to be mounted into your pod. Data remains preserved after the Pod is removed from the node.
-
cephfs − A cephfs volume allows an existing CephFS volume to be mounted into your pod. Data remains intact after the Pod is removed from the node.
-
gitRepo − A gitRepo volume mounts an empty directory and clones a git repository into it for your pod to use.
-
secret − A secret volume is used to pass sensitive information, such as passwords, to pods.
-
persistentVolumeClaim − A persistentVolumeClaim volume is used to mount a PersistentVolume into a pod. PersistentVolumes are a way for users to “claim” durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment.
-
downwardAPI − A downwardAPI volume is used to make downward API data available to applications. It mounts a directory and writes the requested data in plain text files.
-
azureDiskVolume − An AzureDiskVolume is used to mount a Microsoft Azure Data Disk into a Pod.
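As referenced in the emptyDir item above, here is a minimal sketch of a pod that mounts an emptyDir volume; the pod, container, image and volume names are only examples.
apiVersion: v1
kind: Pod
metadata:
  name: empty-dir-pod
spec:
  containers:
  - name: test-container
    image: nginx
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}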
Persistent Volume and Persistent Volume Claim
Persistent Volume (PV) − It is a piece of network storage that has been provisioned by the administrator. It is a resource in the cluster which is independent of any individual pod that uses the PV.
Persistent Volume Claim (PVC) − The storage requested by Kubernetes for its pods is known as a PVC. The user does not need to know the underlying provisioning. The claims must be created in the same namespace where the pod is created.
Creating Persistent Volume
kind: PersistentVolume ---------> 1
apiVersion: v1
metadata:
  name: pv0001 ------------------> 2
  labels:
    type: local
spec:
  capacity: -----------------------> 3
    storage: 10Gi ----------------------> 4
  accessModes:
  - ReadWriteOnce -------------------> 5
  hostPath:
    path: "/tmp/data01" --------------------------> 6
In the above code, we have defined −
-
kind: PersistentVolume → We have defined the kind as PersistentVolume which tells kubernetes that the yaml file being used is to create the Persistent Volume.
-
name: pv0001 → Name of PersistentVolume that we are creating.
-
capacity: → This spec will define the capacity of PV that we are trying to create.
-
storage: 10Gi → This tells the underlying infrastructure that we are trying to claim 10Gi space on the defined path.
-
ReadWriteOnce → This tells the access rights of the volume that we are creating.
-
path: "/tmp/data01" → This definition tells the machine that we are trying to create volume under this path on the underlying infrastructure.
Checking PV
$ kubectl get pv
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
pv0001 10Gi RWO Available 14s
Creating Persistent Volume Claim
kind: PersistentVolumeClaim --------------> 1
apiVersion: v1
metadata:
  name: myclaim-1 --------------------> 2
spec:
  accessModes:
  - ReadWriteOnce ------------------------> 3
  resources:
    requests:
      storage: 3Gi ---------------------> 4
In the above code, we have defined −
-
kind: PersistentVolumeClaim → It instructs the underlying infrastructure that we are trying to claim a specified amount of space.
-
name: myclaim-1 → Name of the claim that we are trying to create.
-
ReadWriteOnce → This specifies the mode of the claim that we are trying to create.
-
storage: 3Gi → This will tell kubernetes about the amount of space we are trying to claim.
Getting Details About PVC
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
myclaim-1 Bound pv0001 10Gi RWO 7s
Using PV and PVC with POD
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    name: frontendhttp
spec:
  containers:
  - name: myfrontend
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts: ----------------------------> 1
    - mountPath: "/usr/share/tomcat/html"
      name: mypd
  volumes: -----------------------> 2
  - name: mypd
    persistentVolumeClaim: ------------------------->3
      claimName: myclaim-1
In the above code, we have defined −
-
volumeMounts: → This is the path in the container on which the mounting will take place.
-
volumes: → This defines the volume that we are going to claim.
-
persistentVolumeClaim: → Under this, we define the volume name which we are going to use in the defined pod.
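The three objects above can be created in order with kubectl; the file names here are only examples.
$ kubectl create -f pv.yaml
$ kubectl create -f pvc.yaml
$ kubectl create -f pod.yaml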
Kubernetes - Secrets
Secrets can be defined as Kubernetes objects used to store sensitive data, such as user names and passwords, in an encoded form.
There are multiple ways of creating secrets in Kubernetes.
-
Creating from txt files.
-
Creating from yaml file.
Creating From Text File
In order to create secrets from text files, such as a user name and password, we first need to store them in txt files and use the following command.
$ kubectl create secret generic tomcat-passwd --from-file=./username.txt --from-file=./password.txt
Creating From Yaml File
apiVersion: v1
kind: Secret
metadata:
  name: tomcat-pass
type: Opaque
data:
  password: <User Password>
  username: <User Name>
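Note that the values placed under data must be base64-encoded strings; a minimal sketch of producing such a value on the shell, using an example password.
$ echo -n 'admin123' | base64
YWRtaW4xMjM=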
Using Secrets
Once we have created the secrets, they can be consumed in a pod or the replication controller as −
-
Environment Variable
-
Volume
As Environment Variable
In order to use a secret as an environment variable, we will use env under the spec section of the pod yaml file.
env:
- name: SECRET_USERNAME
  valueFrom:
    secretKeyRef:
      name: tomcat-pass
      key: username
As Volume
spec:
  volumes:
  - name: "secretstest"
    secret:
      secretName: tomcat-pass
  containers:
  - image: tomcat:7.0
    name: awebserver
    volumeMounts:
    - mountPath: "/tmp/mysec"
      name: "secretstest"
Secret Configuration As Environment Variable
apiVersion: v1
kind: ReplicationController
metadata:
  name: appname
spec:
  replicas: replica_count
  template:
    metadata:
      name: appname
    spec:
      nodeSelector:
        resource-group:
      containers:
      - name: appname
        image:
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        env: -----------------------------> 1
        - name: ENV
          valueFrom:
            secretKeyRef:
              name: appname
              key: tomcat-secrets
In the above code, under the env definition, we are using secrets as environment variables in the replication controller.
Kubernetes - Network Policy
A Network Policy defines how the pods in the same namespace will communicate with each other and with the network endpoint. It requires extensions/v1beta1/networkpolicies to be enabled in the runtime configuration of the API server. Its resources use labels to select pods and define rules to allow traffic to a specific pod, in addition to what is defined in the namespace.
First, we need to configure the Namespace Isolation Policy. Basically, this kind of networking policy is required on the load balancers.
kind: Namespace
apiVersion: v1
metadata:
  annotations:
    net.beta.kubernetes.io/network-policy: |
      {
        "ingress":
        {
          "isolation": "DefaultDeny"
        }
      }
$ kubectl annotate ns <namespace> "net.beta.kubernetes.io/network-policy={\"ingress\": {\"isolation\": \"DefaultDeny\"}}"
Once the namespace is created, we need to create the Network Policy.
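The tutorial does not include a policy manifest at this point, so as an illustrative sketch, a NetworkPolicy in the extensions/v1beta1 API that allows traffic to pods labelled role: db only from pods labelled role: frontend could look like this; all names, labels and the port are only examples.
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-frontend
  namespace: <namespace>
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379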
Kubernetes - API
The Kubernetes API serves as the foundation for the declarative configuration schema of the system. The kubectl command-line tool can be used to create, update, delete, and get API objects. The Kubernetes API acts as a communicator among the different components of Kubernetes.
Adding API to Kubernetes
Adding a new API to Kubernetes will add new features to Kubernetes, which will increase its functionality. However, alongside this it will also increase the cost and maintainability of the system. In order to create a balance between cost and complexity, a few rules are defined for it.
The API which is getting added should be useful to more than 50% of the users, and there should be no other way to implement the functionality in Kubernetes. Exceptional circumstances are discussed in the community meeting of Kubernetes, and then the API is added.
API Changes
In order to increase the capability of Kubernetes, changes are continuously introduced to the system. It is done by the Kubernetes team to add functionality to Kubernetes without removing or impacting the existing functionality of the system.
To demonstrate the general process, here is a (hypothetical) example −
-
A user POSTs a Pod object to /api/v7beta1/…
-
The JSON is unmarshalled into a v7beta1.Pod structure
-
Default values are applied to the v7beta1.Pod
-
The v7beta1.Pod is converted to an api.Pod structure
-
The api.Pod is validated, and any errors are returned to the user
-
The api.Pod is converted to a v6.Pod (because v6 is the latest stable version)
-
The v6.Pod is marshalled into JSON and written to etcd
Now that the Pod object is stored, a user can GET that object in any supported API version. For example −
-
A user GETs the Pod from /api/v5/…
-
The JSON is read from etcd and unmarshalled into a v6.Pod structure
-
Default values are applied to the v6.Pod
-
The v6.Pod is converted to an api.Pod structure
-
The api.Pod is converted to a v5.Pod structure
-
The v5.Pod is marshalled into JSON and sent to the user
The implication of this process is that API changes must be done carefully and in a backward-compatible way.
API Versioning
To make it easier to support multiple structures, Kubernetes supports multiple API versions, each at a different API path, such as /api/v1 or /apis/extensions/v1beta1.
Versioning standards in Kubernetes are defined at multiple levels.
Alpha Level
-
This version contains alpha (e.g. v1alpha1)
-
This version may be buggy; enabling it may expose bugs
-
Support for the feature can be dropped at any point of time.
-
Recommended to be used in short term testing only as the support may not be present all the time.
Beta Level
-
The version name contains beta (e.g. v2beta3)
-
The code is fully tested and the enabled version is supposed to be stable.
-
The support of the feature will not be dropped; there may be some small changes.
-
Recommended for only non-business-critical uses because of the potential for incompatible changes in subsequent releases.
Kubernetes - Kubectl
Kubectl is the command line utility to interact with the Kubernetes API. It is an interface which is used to communicate with and manage pods in a Kubernetes cluster.
One needs to set up kubectl locally in order to interact with the Kubernetes cluster.
Setting Kubectl
Download the executable to the local workstation using the curl command.
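For example, on Linux the binary for a specific release can be fetched from the official release bucket and placed on the PATH; the version shown here is only an example.
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.5.2/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl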
Configuring Kubectl
Following are the steps to perform the configuration operation.
$ kubectl config set-cluster default-cluster --server = https://${MASTER_HOST} \
   --certificate-authority = ${CA_CERT}
$ kubectl config set-credentials default-admin --certificate-authority = ${CA_CERT} \
   --client-key = ${ADMIN_KEY} --client-certificate = ${ADMIN_CERT}
$ kubectl config set-context default-system --cluster = default-cluster --user = default-admin
$ kubectl config use-context default-system
-
Replace ${MASTER_HOST} with the master node address or name used in the previous steps.
-
Replace ${CA_CERT} with the absolute path to the ca.pem created in the previous steps.
-
Replace ${ADMIN_KEY} with the absolute path to the admin-key.pem created in the previous steps.
-
Replace ${ADMIN_CERT} with the absolute path to the admin.pem created in the previous steps.
Kubernetes - Kubectl Commands
Kubectl controls the Kubernetes cluster. It is one of the key components of Kubernetes, and it runs on a workstation on any machine once the setup is done. It has the capability to manage the nodes in the cluster.
Kubectl commands are used to interact with and manage Kubernetes objects and the cluster. In this chapter, we will discuss a few commands used in Kubernetes via kubectl.
kubectl annotate − It updates the annotation on a resource.
$ kubectl annotate [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... \
   KEY_N = VAL_N [--resource-version = version]
For example,
kubectl annotate pods tomcat description = 'my frontend'
kubectl api-versions − It prints the supported versions of the API on the cluster.
$ kubectl api-versions
kubectl apply − It has the capability to configure a resource by file or stdin.
$ kubectl apply -f <filename>
kubectl attach − This attaches things to the running container.
$ kubectl attach <pod> -c <container>
$ kubectl attach 123456-7890 -c tomcat-container
kubectl autoscale − This is used to auto-scale pods which are defined, such as a Deployment, replica set, or Replication Controller.
$ kubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min = MINPODS] \
   --max = MAXPODS [--cpu-percent = CPU] [flags]
$ kubectl autoscale deployment foo --min = 2 --max = 10
kubectl cluster-info − It displays the cluster info.
$ kubectl cluster-info
kubectl cluster-info dump − It dumps relevant information regarding the cluster for debugging and diagnosis.
$ kubectl cluster-info dump
$ kubectl cluster-info dump --output-directory = /path/to/cluster-state
kubectl config - 修改 kubeconfig 文件。
kubectl config − Modifies the kubeconfig file.
$ kubectl config <SUBCOMMAD>
$ kubectl config –-kubeconfig <String of File name>
kubectl config current-context - 显示当前内容。
kubectl config current-context − It displays the current context.
$ kubectl config current-context
#deploys the current context
kubectl config delete-cluster - 从 kubeconfig 中删除指定集群。
kubectl config delete-cluster − Deletes the specified cluster from kubeconfig.
$ kubectl config delete-cluster <Cluster Name>
kubectl config delete-context - 从 kubeconfig 中删除指定上下文。
kubectl config delete-context − Deletes a specified context from kubeconfig.
$ kubectl config delete-context <Context Name>
kubectl config get-clusters - 显示 kubeconfig 中定义的集群。
kubectl config get-clusters − Displays cluster defined in the kubeconfig.
$ kubectl config get-clusters
$ kubectl config get-clusters <Cluster Name>
kubectl config get-contexts - 描述一个或多个上下文。
kubectl config get-contexts − Describes one or many contexts.
$ kubectl config get-contexts <Context Name>
kubectl config set-cluster - 设置 Kubernetes 中的集群条目。
kubectl config set-cluster − Sets the cluster entry in Kubernetes.
$ kubectl config set-cluster NAME [--server=server] [--certificate-authority=path/to/certificate/authority] [--insecure-skip-tls-verify=true]
kubectl config set-context - 在 kubernetes 入口点中设置上下文条目。
kubectl config set-context − Sets a context entry in kubernetes entrypoint.
$ kubectl config set-context NAME [--cluster=cluster_nickname] [--user=user_nickname] [--namespace=namespace]
$ kubectl config set-context prod --user=vipin-mishra
kubectl config set-credentials - 在 kubeconfig 中设置用户条目。
kubectl config set-credentials − Sets a user entry in kubeconfig.
$ kubectl config set-credentials cluster-admin --username=vipin --password=uXFGweU9l35qcif
kubectl config set - 设置 kubeconfig 文件中的一个单独值。
kubectl config set − Sets an individual value in kubeconfig file.
$ kubectl config set PROPERTY_NAME PROPERTY_VALUE
kubectl config unset - 取消设置 kubectl 中特定的组件。
kubectl config unset − It unsets a specific component in kubectl.
$ kubectl config unset PROPERTY_NAME
kubectl config use-context - 设置 kubectl 文件中的当前上下文。
kubectl config use-context − Sets the current context in kubectl file.
$ kubectl config use-context <Context Name>
kubectl config view
kubectl config view − It displays the merged kubeconfig settings, or a specified kubeconfig file.
$ kubectl config view
$ kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
kubectl cp − 从容器中复制文件和目录并将其复制到容器中。
kubectl cp − Copy files and directories to and from containers.
$ kubectl cp <Files from source> <Files to Destination>
$ kubectl cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container>
kubectl create − 通过文件名或标准输入创建资源。为此,JSON 或 YAML 格式均可接受。
kubectl create − To create a resource by file name or stdin. JSON or YAML formats are accepted.
$ kubectl create -f <File Name>
$ cat <file name> | kubectl create -f -
同样,我们可以按照使用 create 命令和 kubectl 列出的内容创建多个内容。
In the same way, we can create multiple things as listed using the create command along with kubectl; a few illustrative examples follow the list below.
-
deployment
-
namespace
-
quota
-
secret docker-registry
-
secret
-
secret generic
-
secret tls
-
serviceaccount
-
service clusterip
-
service loadbalancer
-
service nodeport
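For instance, some of the object types listed above can be created directly from the command line; the names and the literal value here are hypothetical.
$ kubectl create namespace dev-namespace
$ kubectl create serviceaccount jenkins-agent
$ kubectl create secret generic db-pass --from-literal=password=mypassword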
kubectl delete − 通过文件名称、标准输入、资源和名称删除资源。
kubectl delete − Deletes resources by file name, stdin, resource and names.
$ kubectl delete ([-f FILENAME] | TYPE [(NAME | -l label | --all)])
kubectl describe − 描述 kubernetes 中的任何特定资源。显示资源或一组资源的详细信息。
kubectl describe − Describes any particular resource in kubernetes. Shows details of resource or a group of resources.
$ kubectl describe <type> <type name>
$ kubectl describe pod tomcat
kubectl drain − 用于对节点进行维护而将其耗尽。为维护准备节点。这会将该节点标记为不可用,这样就不应该将其分配给将创建的新容器。
kubectl drain − This is used to drain a node for maintenance purpose. It prepares the node for maintenance. This will mark the node as unavailable so that it should not be assigned with a new container which will be created.
$ kubectl drain tomcat --force
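Once maintenance is finished, the node can be returned to the schedulable pool with kubectl uncordon; the node name below is the same illustrative one used above.
$ kubectl uncordon tomcat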
kubectl edit − 用于编辑服务器上的资源。这允许直接编辑可以通过命令行工具获取的资源。
kubectl edit − It is used to edit the resources on the server. This allows one to directly edit a resource which one can retrieve via the command line tool.
$ kubectl edit <Resource/Name | File Name>
Ex.
$ kubectl edit rc/tomcat
kubectl exec − 有助于在容器中执行命令。
kubectl exec − This helps to execute a command in the container.
$ kubectl exec POD <-c CONTAINER > -- COMMAND < args...>
$ kubectl exec tomcat-123-5-456 date
kubectl expose − 用于将 Kubernetes 对象(例如 pod、副本控制器和服务)公开为新的 Kubernetes 服务。这具有通过正在运行的容器或 yaml 文件公开它的功能。
kubectl expose − This is used to expose the Kubernetes objects such as pod, replication controller, and service as a new Kubernetes service. This has the capability to expose it via a running container or from a yaml file.
$ kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type]
$ kubectl expose rc tomcat --port=80 --target-port=30000
$ kubectl expose -f tomcat.yaml --port=80 --target-port=
kubectl get − 此命令能够获取有关 Kubernetes 资源的群集数据。
kubectl get − This command is capable of fetching data on the cluster about the Kubernetes resources.
$ kubectl get [(-o|--output=)json|yaml|wide|custom-columns=...|custom-columns-file=...|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=...] (TYPE [NAME | -l label] | TYPE/NAME ...) [flags]
例如,
For example,
$ kubectl get pod <pod name>
$ kubectl get service <Service name>
kubectl logs − 用于获取 pod 中容器的日志。打印日志可以是定义 pod 中的容器名称。如果 POD 只有一个容器,则无需定义其名称。
kubectl logs − They are used to get the logs of the container in a pod. Printing the logs can be defining the container name in the pod. If the POD has only one container there is no need to define its name.
$ kubectl logs [-f] [-p] POD [-c CONTAINER]
Example
$ kubectl logs tomcat
$ kubectl logs -p tomcat -c tomcat-container
kubectl port-forward − 用于将一个或多个本地端口转发到 pod。
kubectl port-forward − They are used to forward one or more local port to pods.
$ kubectl port-forward POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]
$ kubectl port-forward tomcat 3000 4000
$ kubectl port-forward tomcat 3000:5000
kubectl replace − 能够按文件名称或 stdin 替换资源。
kubectl replace − Capable of replacing a resource by file name or stdin.
$ kubectl replace -f FILENAME
$ kubectl replace -f tomcat.yml
$ cat tomcat.yml | kubectl replace -f -
kubectl rolling-update − 对副本控制器执行滚动更新。通过一次更新一个 POD,使用一个新的副本控制器替换指定的副本控制器。
kubectl rolling-update − Performs a rolling update on a replication controller. Replaces the specified replication controller with a new replication controller by updating a POD at a time.
$ kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE | -f NEW_CONTROLLER_SPEC)
$ kubectl rolling-update frontend-v1 -f frontend-v2.yaml
kubectl rollout − 能够管理部署的推出。
kubectl rollout − It is capable of managing the rollout of deployment.
$ kubectl rollout <Sub Command>
$ kubectl rollout undo deployment/tomcat
除了上述内容之外,我们还可以使用推出执行多项任务,例如:
Apart from the above, we can perform multiple tasks using the rollout such as −
-
rollout history
-
rollout pause
-
rollout resume
-
rollout status
-
rollout undo
kubectl run − Run 命令能够在 Kubernetes 群集上运行映像。
kubectl run − Run command has the capability to run an image on the Kubernetes cluster.
$ kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=bool] [--overrides=inline-json] [--command] -- [COMMAND] [args...]
$ kubectl run tomcat --image=tomcat:7.0
$ kubectl run tomcat --image=tomcat:7.0 --port=5000
kubectl scale − 它会扩展 Kubernetes 部署、副本集、副本控制器或作业的大小。
kubectl scale − It will scale the size of Kubernetes Deployments, ReplicaSet, Replication Controller, or job.
$ kubectl scale [--resource-version=version] [--current-replicas=count] --replicas=COUNT (-f FILENAME | TYPE NAME)
$ kubectl scale --replicas=3 rs/tomcat
$ kubectl scale --replicas=3 -f tomcat.yaml
kubectl set image − 更新 pod 模板的映像。
kubectl set image − It updates the image of a pod template.
$ kubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... CONTAINER_NAME_N=CONTAINER_IMAGE_N
$ kubectl set image deployment/tomcat busybox=busybox nginx=nginx:1.9.1
$ kubectl set image deployments,rc tomcat=tomcat:6.0 --all
kubectl set resources − 用于设置资源的内容。它使用 pod 模板更新对象上的资源/限制。
kubectl set resources − It is used to set the content of the resource. It updates resource/limits on object with pod template.
$ kubectl set resources (-f FILENAME | TYPE NAME) ([--limits=LIMITS & --requests=REQUESTS])
$ kubectl set resources deployment tomcat -c=tomcat --limits=cpu=200m,memory=512Mi
kubectl top node − 显示 CPU/内存/存储使用情况。top 命令允许你查看节点的资源消耗。
kubectl top node − It displays CPU/Memory/Storage usage. The top command allows you to see the resource consumption for nodes.
$ kubectl top node [node Name]
这个相同的命令还可以与 pod 一起使用。
The same command can be used with a pod as well.
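For instance, the pod variant takes an optional pod name in the same way.
$ kubectl top pod [pod Name]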
Kubernetes - Creating an App
要为 Kubernetes 部署创建应用程序,我们需要先在 Docker 上创建应用程序。这可以通过两种方式完成 -
In order to create an application for Kubernetes deployment, we need to first create the application on the Docker. This can be done in two ways −
-
By downloading
-
From Docker file
By Downloading
可以从 Docker 集线器下载现有映像,并可以存储在本地 Docker 注册表上。
The existing image can be downloaded from Docker hub and can be stored on the local Docker registry.
要做到这一点,请运行 Docker pull 命令。
In order to do that, run the Docker pull command.
$ docker pull --help
Usage: docker pull [OPTIONS] NAME[:TAG|@DIGEST]
Pull an image or a repository from the registry
-a, --all-tags = false Download all tagged images in the repository
--help = false Print usage
以下是上述代码的输出。
Following will be the output of the above code.

上述屏幕截图显示了一组存储在我们的本地 Docker 注册表中的映像。
The above screenshot shows a set of images which are stored in our local Docker registry.
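For example, the stock Ubuntu image used later in this chapter can be pulled from Docker Hub and then listed locally; the tag shown is only illustrative.
$ docker pull ubuntu:14.04
$ docker images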
如果我们想要从包含要测试应用程序的映像构建容器,我们可以使用 Docker run 命令执行此操作。
If we want to build a container from the image which consists of an application to test, we can do it using the Docker run command.
$ docker run -i -t ubuntu /bin/bash
From Docker File
要从 Docker 文件创建应用程序,我们需要先创建 Docker 文件。
In order to create an application from the Docker file, we need to first create a Docker file.
以下是 Jenkins Docker 文件的示例。
Following is an example of Jenkins Docker file.
FROM ubuntu:14.04
MAINTAINER vipinkumarmishra@virtusapolaris.com
ENV REFRESHED_AT 2017-01-15
RUN apt-get update -qq && apt-get install -qqy curl
RUN curl https://get.docker.io/gpg | apt-key add -
RUN echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list
RUN apt-get update -qq && apt-get install -qqy iptables ca-certificates lxc openjdk-6-jdk git-core lxc-docker
ENV JENKINS_HOME /opt/jenkins/data
ENV JENKINS_MIRROR http://mirrors.jenkins-ci.org
RUN mkdir -p $JENKINS_HOME/plugins
RUN curl -sf -o /opt/jenkins/jenkins.war -L $JENKINS_MIRROR/war-stable/latest/jenkins.war
RUN for plugin in chucknorris greenballs scm-api git-client git ws-cleanup ; \
    do curl -sf -o $JENKINS_HOME/plugins/${plugin}.hpi \
       -L $JENKINS_MIRROR/plugins/${plugin}/latest/${plugin}.hpi ; done
ADD ./dockerjenkins.sh /usr/local/bin/dockerjenkins.sh
RUN chmod +x /usr/local/bin/dockerjenkins.sh
VOLUME /var/lib/docker
EXPOSE 8080
ENTRYPOINT [ "/usr/local/bin/dockerjenkins.sh" ]
创建上述文件后,请使用 Dockerfile 作为文件名保存它,并将 cd 作为文件路径。然后,运行以下命令。
Once the above file is created, save it with the name of Dockerfile and cd to the file path. Then, run the following command.

$ sudo docker build -t jamtur01/jenkins .
构建映像后,我们可以测试映像是否运行正常,是否可以转换为容器。
Once the image is built, we can test if the image is working fine and can be converted to a container.
$ docker run -i -t jamtur01/jenkins /bin/bash
Kubernetes - App Deployment
部署是将镜像转换为容器,然后将这些镜像分配给 Kubernetes 集群中容器的方法。这还有助于设置应用程序集群,其中包括部署服务、容器、副本控制器和副本集。可以以一种方式设置集群,使得部署在容器上的应用程序可以互相通信。
Deployment is a method of converting images to containers and then allocating those images to pods in the Kubernetes cluster. This also helps in setting up the application cluster which includes deployment of service, pod, replication controller and replica set. The cluster can be set up in such a way that the applications deployed on the pod can communicate with each other.
在此设置中,我们可以在一个应用程序的顶部设置负载平衡器,将流量转移到一组容器,然后它们与后端容器通信。容器之间的通信通过 Kubernetes 中内置的服务对象发生。
In this setup, we can have a load balancer setting on top of one application diverting traffic to a set of pods and later they communicate to backend pods. The communication between pods happen via the service object built in Kubernetes.

Nginx Load Balancer Yaml File
apiVersion: v1
kind: Service
metadata:
  name: oppv-dev-nginx
  labels:
    k8s-app: omni-ppv-api
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 31999
    name: omninginx
  selector:
    k8s-app: appname
    component: nginx
    env: dev
Nginx Replication Controller Yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: appname
spec:
  replicas: replica_count
  template:
    metadata:
      name: appname
      labels:
        k8s-app: appname
        component: nginx
        env: env_name
    spec:
      nodeSelector:
        resource-group: oppv
      containers:
      - name: appname
        image: IMAGE_TEMPLATE
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "request_mem"
            cpu: "request_cpu"
          limits:
            memory: "limit_mem"
            cpu: "limit_cpu"
        env:
        - name: BACKEND_HOST
          value: oppv-env_name-node:3000
Frontend Service Yaml File
apiVersion: v1
kind: Service
metadata:
  name: appname
  labels:
    k8s-app: appname
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    k8s-app: appname
    component: nodejs
    env: dev
Frontend Replication Controller Yaml File
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      name: frontend
      labels:
        k8s-app: Frontend
        component: nodejs
        env: Dev
    spec:
      nodeSelector:
        resource-group: oppv
      containers:
      - name: appname
        image: IMAGE_TEMPLATE
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: "request_mem"
            cpu: "request_cpu"
          limits:
            memory: "limit_mem"
            cpu: "limit_cpu"
        env:
        - name: ENV
          valueFrom:
            configMapKeyRef:
              name: appname
              key: config-env
Backend Service Yaml File
apiVersion: v1
kind: Service
metadata:
  name: backend
  labels:
    k8s-app: backend
spec:
  type: NodePort
  ports:
  - name: http
    port: 9010
    protocol: TCP
    targetPort: 9000
  selector:
    k8s-app: appname
    component: play
    env: dev
Backend Replication Controller Yaml File
apiVersion: v1
kind: ReplicationController
metadata:
  name: backend
spec:
  replicas: 3
  template:
    metadata:
      name: backend
      labels:
        k8s-app: backend
        component: play
        env: dev
    spec:
      nodeSelector:
        resource-group: oppv
      containers:
      - name: appname
        image: IMAGE_TEMPLATE
        imagePullPolicy: Always
        ports:
        - containerPort: 9000
        command: [ "./docker-entrypoint.sh" ]
        resources:
          requests:
            memory: "request_mem"
            cpu: "request_cpu"
          limits:
            memory: "limit_mem"
            cpu: "limit_cpu"
        volumeMounts:
        - name: config-volume
          mountPath: /app/vipin/play/conf
      volumes:
      - name: config-volume
        configMap:
          name: appname
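Once these definitions are saved to files, they can be created in the cluster with kubectl and then inspected; the file names below are only illustrative.
$ kubectl create -f nginx-service.yaml
$ kubectl create -f nginx-rc.yaml
$ kubectl create -f frontend-service.yaml
$ kubectl create -f frontend-rc.yaml
$ kubectl create -f backend-service.yaml
$ kubectl create -f backend-rc.yaml
$ kubectl get pods,rc,services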
Kubernetes - Autoscaling
Autoscaling 是 Kubernetes 集群中的主要功能之一。它是一个集群能够根据服务响应需求增加节点数量,并且在需求减少时减少节点数量的功能。此自动缩放功能目前在 Google Cloud Engine (GCE) 和 Google Container Engine (GKE) 中受支持,并且很快就会以 AWS 开始。
Autoscaling is one of the key features of a Kubernetes cluster. It is a feature in which the cluster is capable of increasing the number of nodes as the demand for service response increases and decreasing the number of nodes as the requirement decreases. This feature of auto scaling is currently supported in Google Compute Engine (GCE) and Google Container Engine (GKE), and support for AWS is expected soon.
要在 GCE 中设置可扩展基础设施,首先需要启用具有 Google Cloud Monitoring、Google Cloud Logging 和 Stackdriver 功能的活动 GCE 项目。
In order to set up scalable infrastructure in GCE, we need to first have an active GCE project with the features of Google Cloud Monitoring, Google Cloud Logging, and Stackdriver enabled.
首先,我们将设置包含少量正在运行节点的群集。完成后,我们需要设置以下环境变量。
First, we will set up the cluster with a few nodes running in it. Once done, we need to set up the following environment variables.
Environment Variable
export NUM_NODES=2
export KUBE_AUTOSCALER_MIN_NODES=2
export KUBE_AUTOSCALER_MAX_NODES=5
export KUBE_ENABLE_CLUSTER_AUTOSCALER=true
完成后,我们将通过运行 kube-up.sh 来启动群集。这将创建一个群集以及群集自动调整。
Once done, we will start the cluster by running kube-up.sh. This will create a cluster together with the cluster autoscaler add-on.
./cluster/kube-up.sh
在创建群集时,我们可以使用以下 kubectl 命令检查我们的群集。
On creation of the cluster, we can check our cluster using the following kubectl command.
$ kubectl get nodes
NAME STATUS AGE
kubernetes-master Ready,SchedulingDisabled 10m
kubernetes-minion-group-de5q Ready 10m
kubernetes-minion-group-yhdx Ready 8m
现在,我们可以在群集上部署一个应用程序,然后启用水平 Pod 自动伸缩。这可以通过使用以下命令来完成。
Now, we can deploy an application on the cluster and then enable the horizontal pod autoscaler. This can be done using the following command.
$ kubectl autoscale deployment <Application Name> --cpu-percent=50 --min=1 --max=10
上面的命令显示,随着应用程序负载的增加,我们将至少维护一个且最多维护 10 个 POD 副本。
The above command shows that we will maintain at least one and maximum 10 replica of the POD as the load on the application increases.
我们可以通过运行 $kubclt get hpa 命令检查自动伸缩器的状态。我们将使用以下命令增加 Pod 的负载。
We can check the status of the autoscaler by running the $ kubectl get hpa command. We will increase the load on the pods using the following command.
$ kubectl run -i --tty load-generator --image=busybox /bin/sh
$ while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
我们可以通过运行 $ kubectl get hpa 命令检查 hpa 。
We can check the hpa by running $ kubectl get hpa command.
$ kubectl get hpa
NAME         REFERENCE                     TARGET   CURRENT   MINPODS   MAXPODS   AGE
php-apache   Deployment/php-apache/scale   50%      310%      1         20        2m
$ kubectl get deployment php-apache
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
php-apache 7 7 7 3 4m
我们可以使用以下命令检查正在运行的 Pod 的数量。
We can check the number of pods running using the following command.
jsz@jsz-desk2:~/k8s-src$ kubectl get pods
php-apache-2046965998-3ewo6 0/1 Pending 0 1m
php-apache-2046965998-8m03k 1/1 Running 0 1m
php-apache-2046965998-ddpgp 1/1 Running 0 5m
php-apache-2046965998-lrik6 1/1 Running 0 1m
php-apache-2046965998-nj465 0/1 Pending 0 1m
php-apache-2046965998-tmwg1 1/1 Running 0 1m
php-apache-2046965998-xkbw1 0/1 Pending 0 1m
最后,我们可以获取节点状态。
And finally, we can get the node status.
$ kubectl get nodes
NAME STATUS AGE
kubernetes-master Ready,SchedulingDisabled 9m
kubernetes-minion-group-6z5i Ready 43s
kubernetes-minion-group-de5q Ready 9m
kubernetes-minion-group-yhdx Ready 9m
Kubernetes - Dashboard Setup
设置 Kubernetes 仪表盘包括几个步骤,需要一组工具作为先决条件来设置它。
Setting up Kubernetes dashboard involves several steps with a set of tools required as the prerequisites to set it up.
-
Docker (1.3+)
-
go (1.5+)
-
nodejs (4.2.2+)
-
npm (1.3+)
-
java (7+)
-
gulp (3.9+)
-
Kubernetes (1.1.2+)
Setting Up the Dashboard
$ sudo apt-get update && sudo apt-get upgrade
Installing Python
$ sudo apt-get install python
$ sudo apt-get install python3
Installing GCC
$ sudo apt-get install gcc-4.8 g++-4.8
Installing make
$ sudo apt-get install make
Installing Java
$ sudo apt-get install openjdk-7-jdk
Installing Node.js
$ wget https://nodejs.org/dist/v4.2.2/node-v4.2.2.tar.gz
$ tar -xzf node-v4.2.2.tar.gz
$ cd node-v4.2.2
$ ./configure
$ make
$ sudo make install
Installing gulp
$ npm install -g gulp
$ npm install gulp
Verifying Versions
Java Version
$ java -version
java version "1.7.0_91"
OpenJDK Runtime Environment (IcedTea 2.6.3) (7u91-2.6.3-1~deb8u1+rpi1)
OpenJDK Zero VM (build 24.91-b01, mixed mode)
$ node -v
v4.2.2
$ npm -v
2.14.7
$ gulp -v
[09:51:28] CLI version 3.9.0
$ sudo gcc --version
gcc (Raspbian 4.8.4-1) 4.8.4
Copyright (C) 2013 Free Software Foundation, Inc. This is free software;
see the source for copying conditions. There is NO warranty; not even for
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Installing GO
$ git clone https://go.googlesource.com/go
$ cd go
$ git checkout go1.4.3
$ cd src
Building GO
$ ./all.bash
$ vi /root/.bashrc
In the .bashrc
export GOROOT=$HOME/go
export PATH=$PATH:$GOROOT/bin
$ go version
go version go1.4.3 linux/arm
Installing Kubernetes Dashboard
$ git clone https://github.com/kubernetes/dashboard.git
$ cd dashboard
$ npm install -g bower
Running the Dashboard
$ gulp serve
[11:19:12] Requiring external module babel-core/register
[11:20:50] Using gulpfile ~/dashboard/gulpfile.babel.js
[11:20:50] Starting 'package-backend-source'...
[11:20:50] Starting 'kill-backend'...
[11:20:50] Finished 'kill-backend' after 1.39 ms
[11:20:50] Starting 'scripts'...
[11:20:53] Starting 'styles'...
[11:21:41] Finished 'scripts' after 50 s
[11:21:42] Finished 'package-backend-source' after 52 s
[11:21:42] Starting 'backend'...
[11:21:43] Finished 'styles' after 49 s
[11:21:43] Starting 'index'...
[11:21:44] Finished 'index' after 1.43 s
[11:21:44] Starting 'watch'...
[11:21:45] Finished 'watch' after 1.41 s
[11:23:27] Finished 'backend' after 1.73 min
[11:23:27] Starting 'spawn-backend'...
[11:23:27] Finished 'spawn-backend' after 88 ms
[11:23:27] Starting 'serve'...
2016/02/01 11:23:27 Starting HTTP server on port 9091
2016/02/01 11:23:27 Creating API client for
2016/02/01 11:23:27 Creating Heapster REST client for http://localhost:8082
[11:23:27] Finished 'serve' after 312 ms
[BS] [BrowserSync SPA] Running...
[BS] Access URLs:
--------------------------------------
Local: http://localhost:9090/
External: http://192.168.1.21:9090/
--------------------------------------
UI: http://localhost:3001
UI External: http://192.168.1.21:3001
--------------------------------------
[BS] Serving files from: /root/dashboard/.tmp/serve
[BS] Serving files from: /root/dashboard/src/app/frontend
[BS] Serving files from: /root/dashboard/src/app
Kubernetes - Monitoring
监控是管理大型群集的关键组件之一。对此,我们有许多工具。
Monitoring is one of the key components for managing large clusters. For this, we have a number of tools.
Monitoring with Prometheus
它是一个监控和警报系统。它在 SoundCloud 构建并于 2012 年开源。它很好地处理多维数据。
It is a monitoring and alerting system. It was built at SoundCloud and was open sourced in 2012. It handles the multi-dimensional data very well.
Prometheus 具有多个参与监控的组件:
Prometheus has multiple components to participate in monitoring −
-
Prometheus − It is the core component that scrapes and stores data.
-
Prometheus node exporter − Gets the host-level metrics and exposes them to Prometheus.
-
Ranch-eye − An HAProxy that exposes cAdvisor stats to Prometheus.
-
Grafana − Visualization of data.
-
InfluxDB − Time series database specifically used to store data from Rancher.
-
Prom-ranch-exporter − It is a simple node.js application, which helps in querying the Rancher server for the status of stacks and services.

Sematext Docker Agent
这是一个现代化的 Docker 感知指标、事件和日志收集代理。它作为每个 Docker 主机上的一个微小容器运行,并收集所有集群节点和容器的日志、指标和事件。它发现所有容器(一个 Pod 中可能包含多个容器),包括 Kubernetes 核心服务的容器,如果核心服务部署在 Docker 容器中。部署后,所有日志和指标都将立即开箱即用。
It is a modern Docker-aware metrics, events, and log collection agent. It runs as a tiny container on every Docker host and collects logs, metrics, and events for all cluster node and containers. It discovers all containers (one pod might contain multiple containers) including containers for Kubernetes core services, if the core services are deployed in Docker containers. After its deployment, all logs and metrics are immediately available out of the box.
Deploying Agents to Nodes
Kubernetes 提供了 DeamonSets,可确保将 Pod 添加到集群中。
Kubernetes provides DaemonSets, which ensure that a copy of the agent pod is added to every node in the cluster.
Configuring SemaText Docker Agent
它是通过环境变量进行配置。
It is configured via environment variables.
-
Get a free account at apps.sematext.com, if you don’t have one already.
-
Create an SPM App of type “Docker” to obtain the SPM App Token. The SPM App will hold your Kubernetes performance metrics and events.
-
Create a Logsene App to obtain the Logsene App Token. Logsene App will hold your Kubernetes logs.
-
Edit values of LOGSENE_TOKEN and SPM_TOKEN in the DaemonSet definition as shown below.
Create DaemonSet Object
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: sematext-agent
spec:
  template:
    metadata:
      labels:
        app: sematext-agent
    spec:
      selector: {}
      dnsPolicy: "ClusterFirst"
      restartPolicy: "Always"
      containers:
      - name: sematext-agent
        image: sematext/sematext-agent-docker:latest
        imagePullPolicy: "Always"
        env:
        - name: SPM_TOKEN
          value: "REPLACE THIS WITH YOUR SPM TOKEN"
        - name: LOGSENE_TOKEN
          value: "REPLACE THIS WITH YOUR LOGSENE TOKEN"
        - name: KUBERNETES
          value: "1"
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-sock
        - mountPath: /etc/localtime
          name: localtime
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock
      - name: localtime
        hostPath:
          path: /etc/localtime
Running the Sematext Agent Docker with kubectl
$ kubectl create -f sematext-agent-daemonset.yml
daemonset "sematext-agent" created
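The agent can then be verified on every node, for example by listing the DaemonSet and its pods via the app label defined in the manifest above.
$ kubectl get daemonset sematext-agent
$ kubectl get pods -l app=sematext-agent -o wide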
Kubernetes Log
Kubernetes 容器的日志与 Docker 容器日志没有太大区别。但是,Kubernetes 用户需要查看已部署 Pod 的日志。因此,让 Kubernetes 特定的信息可用于日志搜索非常有用,例如 -
Kubernetes containers’ logs are not much different from Docker container logs. However, Kubernetes users need to view logs for the deployed pods. Hence, it is very useful to have Kubernetes-specific information available for log search, such as −
-
Kubernetes namespace
-
Kubernetes pod name
-
Kubernetes container name
-
Docker image name
-
Kubernetes UID
Using ELK Stack and LogSpout
ELK 堆栈包括 Elasticsearch、Logstash 和 Kibana。为了收集日志并将其转发到日志记录平台,我们将使用 LogSpout(尽管有 FluentD 等其他选项)。
ELK stack includes Elasticsearch, Logstash, and Kibana. To collect and forward the logs to the logging platform, we will use LogSpout (though there are other options such as FluentD).
以下代码演示如何在 Kubernetes 上设置 ELK 集群并为 Elasticsearch 创建服务 -
The following code shows how to set up the ELK cluster on Kubernetes and create a service for Elasticsearch −
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: elk
  labels:
    component: elasticsearch
spec:
  type: LoadBalancer
  selector:
    component: elasticsearch
  ports:
  - name: http
    port: 9200
    protocol: TCP
  - name: transport
    port: 9300
    protocol: TCP
Creating Replication Controller
apiVersion: v1
kind: ReplicationController
metadata:
  name: es
  namespace: elk
  labels:
    component: elasticsearch
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
    spec:
      serviceAccount: elasticsearch
      containers:
      - name: es
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        image: quay.io/pires/docker-elasticsearch-kubernetes:1.7.1-4
        env:
        - name: KUBERNETES_CA_CERTIFICATE_FILE
          value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "CLUSTER_NAME"
          value: "myesdb"
        - name: "DISCOVERY_SERVICE"
          value: "elasticsearch"
        - name: NODE_MASTER
          value: "true"
        - name: NODE_DATA
          value: "true"
        - name: HTTP_ENABLE
          value: "true"
        ports:
        - containerPort: 9200
          name: http
          protocol: TCP
        - containerPort: 9300
        volumeMounts:
        - mountPath: /data
          name: storage
      volumes:
      - name: storage
        emptyDir: {}
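To actually ship container logs into this stack, a log collector such as LogSpout can be run as a DaemonSet on every node. The following is only a minimal sketch: it assumes a Logstash syslog input reachable at logstash.elk.svc.cluster.local:5000, which is a hypothetical service not defined in this tutorial.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: logspout
  namespace: elk
spec:
  template:
    metadata:
      labels:
        app: logspout
    spec:
      containers:
      - name: logspout
        image: gliderlabs/logspout:latest
        # Route all collected container logs to the (assumed) Logstash syslog endpoint
        args: ["syslog://logstash.elk.svc.cluster.local:5000"]
        volumeMounts:
        # LogSpout reads container logs through the Docker socket on each node
        - mountPath: /var/run/docker.sock
          name: docker-sock
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock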
Kibana URL
对于 Kibana,我们将 Elasticsearch URL 作为一个环境变量提供。
For Kibana, we provide the Elasticsearch URL as an environment variable.
- name: KIBANA_ES_URL
  value: "http://elasticsearch.elk.svc.cluster.local:9200"
- name: KUBERNETES_TRUST_CERT
  value: "true"
Kibana UI 可在容器端口 5601 和相应的主机/节点端口组合处访问。当您开始时,Kibana 中不会有任何数据(这符合预期,因为您没有推送任何数据)。
Kibana UI will be reachable at container port 5601 and corresponding host/Node Port combination. When you begin, there won’t be any data in Kibana (which is expected as you have not pushed any data).
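To reach the Kibana UI from outside the cluster, a NodePort Service similar to the ones above can be placed in front of the Kibana pods. This is only a sketch: it assumes the Kibana pods carry a component: kibana label, which is not defined in this tutorial.
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: elk
  labels:
    component: kibana
spec:
  type: NodePort
  ports:
  - name: http
    port: 5601
    protocol: TCP
    targetPort: 5601
  selector:
    component: kibana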