OpenShift - Quick Guide

OpenShift - Overview

OpenShift is a cloud development Platform as a Service (PaaS) hosted by Red Hat. It’s an open source cloud-based user-friendly platform used to create, test, and run applications, and finally deploy them on cloud.

OpenShift is capable of managing applications written in different languages, such as Node.js, Ruby, Python, Perl, and Java. One of the key features of OpenShift is that it is extensible, which helps users support applications written in other languages.

OpenShift comes with various concepts of virtualization as its abstraction layer. The underlying concept behind OpenShift is based on virtualization.

Virtualization

In general, virtualization can be defined as the creation of a virtual system rather than a physical or actual version of anything, be it a system, storage, or an operating system. The main goal of virtualization is to make the IT infrastructure more scalable and reliable. The concept of virtualization has been in existence for decades, and with the evolution of the IT industry today, it can be applied to a wide range of layers, from system-level and hardware-level to server-level virtualization.

How It Works

It can be described as a technology in which any application or operating system is abstracted from its actual physical layer. One key use of virtualization technology is server virtualization, which uses a software layer called a hypervisor to abstract the operating system from the underlying hardware. The performance of an operating system running on virtualization is as good as when it runs on physical hardware. However, the concept of virtualization is popular because most of the systems and applications running do not require the use of the underlying hardware.

Physical vs Virtual Architecture

[Image: Physical vs Virtual Architecture]

Types of Virtualization

  1. Application Virtualization − In this method, the application is abstracted from the underlying operating system. This method is very useful, as the application can run in isolation without being dependent on the operating system underneath.

  2. Desktop Virtualization − This method is used to reduce the workstation load, as one can access the desktop remotely using a thin client at the desk. In this method, the desktops mostly run in a datacenter. A classic example is the Virtual Desktop Infrastructure (VDI) used in most organizations.

  3. Data Virtualization − It is a method of abstracting data away from the traditional methods of data and data management.

  4. Server Virtualization − In this method, server-related resources are virtualized, which includes the physical server, process, and operating system. The software which enables this abstraction is often referred to as the hypervisor.

  5. Storage Virtualization − It is the process of pooling multiple storage devices into what appears as a single storage device, managed from a single central console.

  6. Network Virtualization − It is the method in which all available network resources are combined by splitting up the available bandwidth and channels, each of which is independent of the others.

OpenShift

OpenShift is a cloud-enabled application Platform as a Service (PaaS). It’s an open source technology which helps organizations move their traditional application infrastructure and platform from physical and virtual mediums to the cloud.

OpenShift supports a very large variety of applications, which can be easily developed and deployed on the OpenShift cloud platform. OpenShift basically supports three kinds of platforms for developers and users.

Infrastructure as a Service (IaaS)

In this format, the service provider provides hardware-level virtual machines with some pre-defined virtual hardware configurations. There are multiple competitors in this space, such as AWS, Google Cloud, Rackspace, and many more.

The main drawback of IaaS, after the long procedure of setup and investment, is that one is still responsible for installing and maintaining the operating system and server packages, managing the infrastructure network, and taking care of basic system administration.

Software as a Service (SaaS)

With SaaS, one has the least worry about the underlying infrastructure. It is as simple as plug and play, wherein the user just has to sign up for the services and start using them. The main drawback with this setup is that one can only perform the minimal amount of customization allowed by the service provider. One of the most common examples of SaaS is Gmail, where the user just needs to log in and start using it. The user can also make some minor modifications to his account. However, it is not very useful from the developer’s point of view.

Platform as a Service (PaaS)

It can be considered as a middle layer between SaaS and IaaS. The primary target of PaaS evaluation is developers, for whom a development environment can be spun up with a few commands. These environments are designed in such a way that they can satisfy all the development needs, right from having a web application server with a database. To do this, you just require a single command and the service provider does the setup for you.
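
As a minimal sketch of this single-command experience on OpenShift (assuming the oc client is installed and already logged in; the repository is the standard OpenShift sample application), one command is enough to get a build, a deployment, and a service:

$ oc new-app https://github.com/openshift/ruby-hello-world.git

Behind that one command, the platform clones the code, builds an image, and deploys it, which is exactly the convenience PaaS aims for.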

Why Use OpenShift?

OpenShift provides a common platform for enterprise units to host their applications on the cloud without worrying about the underlying operating system. This makes it very easy to use, develop, and deploy applications on the cloud. One of the key features is that it provides managed hardware and network resources for all kinds of development and testing. With OpenShift, PaaS developers have the freedom to design their required environment with specifications.

OpenShift provides different kinds of service level agreements when it comes to service plans.

Free − This plan is limited to three gears with 1GB space for each.

Bronze − This plan includes 3 gears and expands up to 16 gears with 1GB space per gear.

Silver − This is the 16-gear plan of Bronze; however, it has a storage capacity of 6GB with no additional cost.

Other than the above features, OpenShift also offers an on-premises version known as OpenShift Enterprise. In OpenShift, developers have the leverage to design scalable and non-scalable applications, and these designs are implemented using HAProxy servers.

Features

There are multiple features supported by OpenShift. A few of them are −

  1. Multiple Language Support

  2. Multiple Database Support

  3. Extensible Cartridge System

  4. Source Code Version Management

  5. One-Click Deployment

  6. Multi Environment Support

  7. Standardized Developers’ workflow

  8. Dependency and Build Management

  9. Automatic Application Scaling

  10. Responsive Web Console

  11. Rich Command-line Toolset

  12. Remote SSH Login to Applications

  13. Rest API Support

  14. Self-service On Demand Application Stack

  15. Built-in Database Services

  16. Continuous Integration and Release Management

  17. IDE Integration

  18. Remote Debugging of Applications

OpenShift - Types

OpenShift came into existence from its base named OpenShift V2, which was mainly based on the concept of gears and cartridges, where each component had its specifications, from machine creation till application deployment, right from building to deploying the application.

Cartridges − They were the focal point of building a new application, starting from the type of application the environment requires to run it, along with all the dependencies satisfied in this section.

Gear − It can be defined as a bare-metal machine or server with certain specifications regarding the resources, memory, and CPU. They were considered a fundamental unit for running an application.

Application − These simply refer to the application or any integration application that will get deployed and run on the OpenShift environment.

As we go deeper into the section, we will discuss the different formats and offerings of OpenShift. In the earlier days, OpenShift had three major versions.

OpenShift Origin − This was the community edition or open source version of OpenShift. It was also known as the upstream project for the other two versions.

OpenShift Online − It is a public PaaS as a service hosted on AWS.

OpenShift Enterprise − This is the hardened version of OpenShift with ISV and vendor licenses.

OpenShift Online

OpenShift Online is an offering of the OpenShift community using which one can quickly build, deploy, and scale containerized applications on the public cloud. It is Red Hat’s public cloud application development and hosting platform, which enables automated provisioning, management, and scaling of applications, helping the developer focus on writing application logic.

Setting Up Account on Red Hat OpenShift Online

Step 1 − Open a browser and visit the site https://manage.openshift.com/

[Image: Red Hat account setup − Step 1]

Step 2 − If you have a Red Hat account, log in to your OpenShift account using the Red Hat login ID and password at the following URL: https://developers.redhat.com

[Image: Red Hat account setup − Step 2]

Step 3 − If you do not have a Red Hat account, sign up for the OpenShift Online service using the following link.

[Image: Red Hat account setup − Step 3, sign-up page]

After login, you will see the following page.

[Image: Red Hat account setup − Step 3, post-login page]

Once you have all the things in place, Red Hat will show some basic account details as shown in the following screenshot.

[Image: Red Hat account setup − Step 3, account details]

Finally, when you are logged in, you will see the following page.

[Image: Red Hat account setup − logged-in view]

OpenShift Container Platform

OpenShift Container Platform is an enterprise platform which helps multiple teams, such as the development and IT operations teams, build and deploy containerized infrastructure. All the containers built in OpenShift use a very reliable Docker containerization technology, which can be deployed on any data center or publicly hosted cloud platform.

OpenShift Container Platform was formerly known as OpenShift Enterprise. It is a Red Hat on-premises private Platform as a Service, built on the core concept of application containers powered by Docker, where orchestration and administration are managed by Kubernetes.

In other words, OpenShift brings Docker and Kubernetes together at the enterprise level. It is container platform software for enterprise units to deploy and manage applications on an infrastructure of their own choice, for example, hosting OpenShift instances on AWS instances.

OpenShift Container Platform is available in two package levels.

OpenShift Container Local − This is for those developers who wish to deploy and test applications on the local machine. This package is mainly used by development teams for developing and testing applications.

OpenShift Container Lab − This is designed for extended evaluation of an application, starting from development till deployment to the pre-production environment.

[Image: OpenShift Container Platform]

OpenShift Dedicated

This is another offering added to the portfolio of OpenShift, wherein a customer can choose to host a containerized platform on any public cloud of their choice. This gives the end user a true sense of a multi-cloud offering, where they can use OpenShift on any cloud that satisfies their needs.

This is one of the newest offerings of Red Hat, where the end user can use OpenShift to build, test, deploy, and run their application on OpenShift hosted on the cloud.

Features of OpenShift Dedicated

OpenShift Dedicated offers a customized solution application platform on the public cloud, and it is inherited from OpenShift 3 technology.

  1. Extensible and Open − This is built on the open concept of Docker and deployed on cloud, because of which it can expand itself as and when required.

  2. Portability − As it is built using Docker, the applications running on Docker can easily be shipped from one place to the other, wherever Docker is supported.

  3. Orchestration − With OpenShift 3, container orchestration and cluster management are supported using Kubernetes, which came into offering with OpenShift version 3.

  4. Automation − This version of OpenShift is enabled with the features of source code management, build automation, and deployment automation, which makes it very popular in the market as a Platform as a Service provider.

Competitors of OpenShift

Google App Engine − This is Google’s free platform for developing and hosting web applications. Google’s App Engine offers a fast development and deployment platform.

Microsoft Azure − Azure cloud is hosted by Microsoft on their data centers.

Amazon Elastic Cloud Compute − These are built-in services provided by Amazon, which help in developing and hosting scalable web applications on the cloud.

Cloud Foundry − This is an open source PaaS platform for Java, Ruby, Python, and Node.js applications.

CloudStack − Apache’s CloudStack is a project developed by Citrix and is designed to become a direct competitor of OpenShift and OpenStack.

OpenStack − Another cloud technology provided by Red Hat for cloud computing.

Kubernetes − It is a direct orchestration and cluster management technology built to manage Docker containers.

OpenShift - Architecture

OpenShift is a layered system wherein each layer is tightly bound with the other layers using Kubernetes and Docker clusters. The architecture of OpenShift is designed in such a way that it can support and manage Docker containers, which are hosted on top of all the layers using Kubernetes. Unlike the earlier OpenShift V2, the new OpenShift V3 supports containerized infrastructure. In this model, Docker helps in the creation of lightweight Linux-based containers, and Kubernetes supports the task of orchestrating and managing containers on multiple hosts.

[Image: OpenShift Container Platform architecture]

Components of OpenShift

One of the key components of OpenShift architecture is the containerized infrastructure managed by Kubernetes. Kubernetes is responsible for the deployment and management of the infrastructure. In any Kubernetes cluster, we can have more than one master and multiple nodes, which ensures there is no single point of failure in the setup.

[Image: Key components of OpenShift architecture]

Kubernetes Master Machine Components

Etcd − It stores the configuration information, which can be used by each of the nodes in the cluster. It is a high-availability key-value store that can be distributed among multiple nodes. It should only be accessible by the Kubernetes API server, as it may contain sensitive information.

API Server − Kubernetes is an API server which provides all the operations on the cluster using the API. The API server implements an interface, which means different tools and libraries can readily communicate with it. A kubeconfig is a package, along with the server-side tools, that can be used for communication. It exposes the Kubernetes API.

Controller Manager − This component is responsible for most of the collectors that regulate the state of the cluster and perform tasks. It can be considered as a daemon which runs in a non-terminating loop and is responsible for collecting information and sending it to the API server. It works towards getting the shared state of the cluster and then makes changes to bring the current state of the server to the desired state. The key controllers are the replication controller, endpoint controller, namespace controller, and service account controller. The controller manager runs different kinds of controllers to handle nodes, endpoints, etc.

Scheduler − It is a key component of the Kubernetes master. It is the service in the master responsible for distributing the workload. It tracks the utilization of the working load on the cluster nodes and then places the workload where resources are available to accept it. In other words, this is the mechanism responsible for allocating pods to available nodes. The scheduler is responsible for workload utilization and for allocating a pod to a new node.
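
The effect of the scheduler can be observed with the oc client once a cluster is running; a small sketch, assuming an existing pod name is substituted for the placeholder:

$ oc get nodes                 # list the nodes the scheduler can place work on
$ oc describe pod <pod-name>   # the Node field shows where the pod was placed
$ oc get events                # scheduling decisions show up as Scheduled events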

Kubernetes Node Components

Following are the key components of the Node server, which are necessary to communicate with the Kubernetes master.

Docker − The first requirement of each node is Docker which helps in running the encapsulated application containers in a relatively isolated but lightweight operating environment.

Kubelet Service − This is a small service in each node, which is responsible for relaying information to and from the control plane service. It interacts with the etcd store to read configuration details and write values. It communicates with the master component to receive commands and work. The kubelet process then assumes responsibility for maintaining the state of work and the node server. It manages network rules, port forwarding, etc.

Kubernetes Proxy Service − This is a proxy service which runs on each node and helps in making services available to the external host. It helps in forwarding requests to the correct containers. The Kubernetes Proxy Service is capable of carrying out primitive load balancing. It makes sure that the networking environment is predictable and accessible, but at the same time it is isolated as well. It manages pods on the node, volumes, secrets, the health checks of new containers, etc.

Integrated OpenShift Container Registry

The OpenShift container registry is an inbuilt storage unit of Red Hat, which is used for storing Docker images. With the latest integrated version of OpenShift, it has come up with a user interface to view images in OpenShift internal storage. These registries are capable of holding images with specified tags, which are later used to build containers out of them.
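
As a hedged sketch of how an image reaches the integrated registry (the registry address below is a placeholder borrowed from the sample output later in this chapter; the real one can be found with oc get svc in the default project, and myapp is a placeholder image name):

$ oc get svc docker-registry -n default
$ docker login -u $(oc whoami) -p $(oc whoami -t) 172.30.56.218:5000
$ docker tag myapp:latest 172.30.56.218:5000/test/myapp:latest
$ docker push 172.30.56.218:5000/test/myapp:latest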

Frequently Used Terms

Image − Kubernetes (Docker) images are the key building blocks of Containerized Infrastructure. As of now, Kubernetes only supports Docker images. Each container in a pod has its Docker image running inside it. When configuring a pod, the image property in the configuration file has the same syntax as the Docker command.

Project − They can be defined as the renamed version of the domain which was present in the earlier version of OpenShift V2.

Container − They are the ones which are created after the image is deployed on a Kubernetes cluster node.

Node − A node is a working machine in the Kubernetes cluster, which is also known as a minion for the master. They are working units which can be a physical machine, a VM, or a cloud instance.

Pod − A pod is a collection of containers and their storage inside a node of a Kubernetes cluster. It is possible to create a pod with multiple containers inside it, for example, keeping the database container and the web server container inside the pod.
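
Each of these objects can be listed with the oc client after login; a quick sketch:

$ oc get projects   # Projects visible to the current user
$ oc get nodes      # Nodes in the cluster (requires cluster-level access)
$ oc get pods       # Pods in the current project
$ oc get is         # Image streams ("is" is the short name)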

OpenShift - Environment Setup

In this chapter, we will learn about the environment setup of OpenShift.

System Requirement

In order to set up enterprise OpenShift, one needs to have an active Red Hat account. As OpenShift works on a Kubernetes master and node architecture, we need to set them up on separate machines, wherein one machine acts as the master and the other works as the node. In order to set up both, there are minimum system requirements.

Master Machine Configuration

Following are the minimum system requirements for master machine configuration.

  1. A base machine hosted either on physical, virtual, or on any of the cloud environment.

  2. At least Linux 7 with the required packages on that instance.

  3. 2 CPU cores.

  4. At least 8 GB RAM.

  5. 30 GB of internal hard disk memory.

Node Machine Configuration

  1. Physical or virtual base image as given for the master machine.

  2. At least Linux 7 on the machine.

  3. Docker version 1.6 or above installed.

  4. 1 CPU core.

  5. 8 GB RAM.

  6. 15 GB hard disk for hosting images and 15 GB for storing images.

Step by Step Guide to OpenShift Setup

In the following description, we are going to set up the OpenShift lab environment, which can later be extended to a bigger cluster. As OpenShift requires a master and node setup, we would need at least two machines hosted on either cloud, physical, or virtual machines.

Step 1 − First install Linux on both the machines, where Linux 7 should be the minimum version. This can be done using the following commands if one has an active Red Hat subscription.

# subscription-manager repos --disable="*"
# subscription-manager repos --enable="rhel-7-server-rpms"
# subscription-manager repos --enable="rhel-7-server-extras-rpms"
# subscription-manager repos --enable="rhel-7-server-optional-rpms"
# subscription-manager repos --enable="rhel-7-server-ose-3.0-rpms"
# yum install wget git net-tools bind-utils iptables-services bridge-utils
# yum install python-virtualenv
# yum install gcc
# yum install httpd-tools
# yum install docker
# yum update

Once we have all the above base packages installed in both of the machines, the next step would be to set up Docker on the respective machines.

Step 2 − Configure Docker so that it allows insecure communication on the local network only. For this, edit the Docker file inside /etc/sysconfig. If the file is not present, then you need to create it manually.

# vi /etc/sysconfig/docker
OPTIONS="--selinux-enabled --insecure-registry 192.168.122.0/24"

After configuring Docker on the master machine, we need to set up password-less communication between both the machines. For this, we will use public and private key authentication.

Step 3 − Generate keys on the master machine and then copy the id_rsa.pub key to the authorized key file of the node machine, which can be done using the following command.

# ssh-keygen
# ssh-copy-id -i .ssh/id_rsa.pub root@ose3-node.test.com
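
Before moving on, it is worth confirming that the password-less login actually works; the hostname below is the node name used above:

# ssh root@ose3-node.test.com hostname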

Once you have all of the above setup in place, the next step is to set up OpenShift version 3 on the master machine.

Step 4 − From the master machine, run the following curl command.

# sh <(curl -s https://install.openshift.com/ose)

The above command will put the setup in place for OSV3. The next step would be to configure OpenShift V3 on the machine.

If you cannot download from the Internet directly, it can be downloaded from https://install.openshift.com/portable/oo-install-ose.tgz as a tar package, from which the installer can be run on the local master machine.

Once we have the setup ready, we need to start with the actual configuration of OSV3 on the machines. This setup is specific to a test environment; for actual production, LDAP and other identity providers should be in place.

Step 5 − On the master machine, configure the following code located under /etc/openshift/master/master-config.yaml

# vi /etc/openshift/master/master-config.yaml
identityProviders:
- name: my_htpasswd_provider
  challenge: true
  login: true
  provider:
    apiVersion: v1
    kind: HTPasswdPasswordIdentityProvider
    file: /root/users.htpasswd
routingConfig:
  subdomain: testing.com

Next, create a standard user for default administration.

# htpasswd -c /root/users.htpasswd admin
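
Further users can be appended to the same file (note that -c creates the file and should only be used for the first user), and the login can then be verified with the oc client; the user name below is a placeholder:

# htpasswd /root/users.htpasswd developer
# oc login -u admin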

Step 6 − As OpenShift uses Docker registry for configuring images, we need to configure Docker registry. This is used for creating and storing the Docker images after build.

Create a directory on the OpenShift node machine using the following command.

# mkdir /images

Next, log in to the master machine using the default admin credentials, which get created while setting up the registry.

# oc login
Username: system:admin

Switch to the default created project.

# oc project default

Step 7 − Create a Docker Registry.

#echo '{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"registry"}}' | oc create -f -

Edit the user privileges.

#oc edit scc privileged
users:
- system:serviceaccount:openshift-infra:build-controller
- system:serviceaccount:default:registry

Create and edit the image registry.

# oadm registry --service-account=registry \
   --config=/etc/openshift/master/admin.kubeconfig \
   --credentials=/etc/openshift/master/openshift-registry.kubeconfig \
   --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' \
   --mount-host=/images

Step 8 − Create a default routing.

By default, OpenShift uses OpenVswitch as the software network. Use the following command to create a default routing, which is used for load balancing and proxy routing. The router is similar to the Docker registry and likewise runs as a container.

# echo '{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"router"}}' | oc create -f -

Next, edit the privileges of the user.

#oc edit scc privileged
users:
   - system:serviceaccount:openshift-infra:build-controller
   - system:serviceaccount:default:registry
   - system:serviceaccount:default:router

# oadm router router-1 --replicas=1 \
   --credentials='/etc/openshift/master/openshift-router.kubeconfig' \
   --images='registry.access.redhat.com/openshift3/ose-${component}:${version}'

Step 9 − Configure the DNS.

In order to handle URL requests, OpenShift needs a working DNS environment. This DNS configuration is required to create a wildcard DNS record that points to the router.

# yum install bind-utils bind
# systemctl start named
# systemctl enable named
# vi /etc/named.conf
options {
   listen-on port 53 { 10.123.55.111; };
   forwarders {
      10.38.55.13;
   };
};

zone "lab.com" IN {
   type master;
   file "/var/named/dynamic/test.com.zone";
   allow-update { none; };
};

Step 10 − The final step would be to set up a GitLab server on the OpenShift V3 master machine, which is optional. This can be done easily using the following sequence of commands.

# yum install curl openssh-server
# systemctl enable sshd
# systemctl start sshd
# firewall-cmd --permanent --add-service=http
# systemctl reload firewalld
# curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-
# yum install gitlab-ce
# gitlab-ctl reconfigure

Once the above setup is complete, you can verify it by testing and deploying applications, which we will learn more about in the subsequent chapters.
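
As a minimal verification sketch (assuming the master is up and the admin user created earlier), the following logs in, creates a project, and deploys the standard OpenShift sample application:

$ oc login -u admin
$ oc new-project verify-setup
$ oc new-app https://github.com/openshift/ruby-hello-world.git
$ oc status
$ oc get pods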

OpenShift - Basic Concept

Before beginning with the actual setup and deployment of applications, we need to understand some basic terms and concepts used in OpenShift V3.

Containers and Images

Images

These are the basic building blocks of OpenShift, which are formed out of Docker images. In each pod on OpenShift, the cluster has its own images running inside it. When we configure a pod, we have an image field in the configuration file whose value will get pulled from the registry. This configuration file will pull the image and deploy it on the cluster node.

apiVersion: v1
kind: Pod
metadata:
   name: Tesing_for_Image_pull ----------> Name of the pod
spec:
   containers:
   - name: neo4j-server -----------------> Name of the container
     image: <Name of the Docker image> --> Image to be pulled
     imagePullPolicy: Always ------------> Image pull policy
     command: ["echo", "SUCCESS"] -------> Message after image pull

In order to pull and create an image out of it, run the following command. oc is the client used to communicate with the OpenShift environment after login.

$ oc create -f Tesing_for_Image_pull
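
The progress of the image pull can then be watched until the pod reaches the Running state:

$ oc get pods -w    # -w watches for status changes; press Ctrl+C to stop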

Container

This gets created when the Docker image is deployed on the OpenShift cluster. While defining any configuration, we define the container section in the configuration file. One container can have multiple images running inside it, and all the containers running on a cluster node are managed by OpenShift Kubernetes.

spec:
   containers:
   - name: py --------------------> Name of the container
     image: python ---------------> Image to be deployed in the container
     command: ["python", "SUCCESS"]
     restartPolicy: Never --------> Restart policy of the container

Following are the specifications for defining a container having multiple images running inside it.

apiVersion: v1
kind: Pod
metadata:
   name: Tomcat
spec:
   containers:
   - name: Tomcat
     image: tomcat:8.0
     ports:
     - containerPort: 7500
     imagePullPolicy: Always
   - name: Database
     image: mongoDB
     ports:
     - containerPort: 7501
     imagePullPolicy: Always

In the above configuration, we have defined a multi-container pod with two images of Tomcat and MongoDB inside it.

Pods and Services

Pods

A pod can be defined as a collection of containers and their storage inside a node of an OpenShift (Kubernetes) cluster. In general, we have two types of pods, from a single-container pod to a multi-container pod.

Single Container Pod − These can be easily created with an OC command or by a basic configuration yml file.

$ oc run <name of pod> --image=<name of the image from registry>

Create it with a simple yaml file as follows.

apiVersion: v1
kind: Pod
metadata:
   name: apache
spec:
   containers:
   - name: apache
     image: apache:8.0
     ports:
     - containerPort: 7500
     imagePullPolicy: Always

Once the above file is created, a pod can be generated with the following command.

$ oc create -f apache.yml

Multi-Container Pod − Multi-container pods are those in which more than one container runs inside the pod. They are created using yaml files as follows.

apiVersion: v1
kind: Pod
metadata:
   name: Tomcat
spec:
   containers:
   - name: Tomcat
     image: tomcat:8.0
     ports:
     - containerPort: 7500
     imagePullPolicy: Always
   - name: Database
     image: mongoDB
     ports:
     - containerPort: 7501
     imagePullPolicy: Always

After creating these files, we can simply use the same method as above to create the pod.

Service − As we have a set of containers running inside a pod, in the same way we have a service that can be defined as a logical set of pods. It’s an abstracted layer on top of the pod, which provides a single IP and DNS name through which the pods can be accessed. A service helps in managing the load balancing configuration and scaling the pods very easily. In OpenShift, a service is a REST object whose definition can be posted to the apiService on the OpenShift master to create a new instance.

apiVersion: v1
kind: Service
metadata:
   name: Tutorial_point_service
spec:
   ports:
   - port: 8080
     targetPort: 31999
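
Assuming the above definition is saved as, say, service.yml (a placeholder file name), the service is created and verified like any other resource:

$ oc create -f service.yml
$ oc get services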

Builds and Streams

Builds

In OpenShift, build is a process of transforming images into containers. It is the processing which converts source code into an image. The build process works on a pre-defined strategy of building the source code into an image.

The build process supports multiple strategies and sources.

Build Strategies

  1. Source to Image − This is basically a tool which helps in building reproducible images. These images are always in a ready stage to run using the Docker run command.

  2. Docker Build − This is the process in which the images are built using a Dockerfile by running a simple Docker build command.

  3. Custom Build − These are the builds which are used for creating base Docker images.

Build Sources

Git − This source is used when the git repository is used for building images. The Dockerfile is optional. The configuration from the source code looks like the following.

source:
   type: "Git"
   git:
      uri: "https://github.com/vipin/testing.git"
      ref: "master"
   contextDir: "app/dir"
   dockerfile: "FROM openshift/ruby-22-centos7\nUSER example"

Dockerfile − The Dockerfile is used as an input in the configuration file.

source:
   type: "Dockerfile"
   dockerfile: "FROM ubuntu:latest
   RUN yum install -y httpd"

Image Streams − Image streams are created after pulling the images. The advantage of an image stream is that it looks for updates to new versions of an image. This is used to compare any number of Docker-formatted container images identified by tags.

Image streams can automatically perform an action when a new image is created. All builds and deployments can watch for an image action and perform an action accordingly. Following is how we define an image stream.

apiVersion: v1
kind: ImageStream
metadata:
   annotations:
      openshift.io/generated-by: OpenShiftNewApp
   generation: 1
   labels:
      app: ruby-sample-build
   selfLink: /oapi/v1/namespaces/test/imagestreams/origin-ruby-sample
   uid: ee2b9405-c68c-11e5-8a99-525400f25e34
spec: {}
status:
   dockerImageRepository: 172.30.56.218:5000/test/origin-ruby-sample
   tags:
   - items:
     - created: 2016-01-29T13:40:11Z
       dockerImageReference: 172.30.56.218:5000/test/origin-apache-sample
       generation: 1
       image: vklnld908.int.clsa.com/vipin/test
     tag: latest
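
Image streams can then be listed and inspected with the oc client, and additional tags can be pointed at an existing image; the names below follow the sample above:

$ oc get is
$ oc describe is origin-ruby-sample
$ oc tag origin-ruby-sample:latest origin-ruby-sample:stable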

Routes and Templates

Routes

In OpenShift, routing is a method of exposing a service to the external world by creating and configuring an externally reachable hostname. Routes and endpoints are used to expose the service to the external world, from where the user can use the name connectivity (DNS) to access the defined application.

In OpenShift, routes are created using routers, which are deployed by the OpenShift admin on the cluster. Routers are used to bind the HTTP (80) and HTTPS (443) ports to external applications.

Following are the different kinds of protocols supported by routes −

  1. HTTP

  2. HTTPS

  3. TLS and WebSocket

When configuring a service, selectors are used to configure the service and find the endpoints using that service. Following is an example of how we create a service and the routing for that service by using an appropriate protocol.

{
   "kind": "Service",
   "apiVersion": "v1",
   "metadata": {"name": "Openshift-Rservice"},
   "spec": {
      "selector": {"name":"RService-openshift"},
      "ports": [
         {
            "protocol": "TCP",
            "port": 8888,
            "targetPort": 8080
         }
      ]
   }
}

Next, run the following command and the service is created.

$ oc create -f ~/training/content/Openshift-Rservice.json

This is how the service looks after creation.

$ oc describe service Openshift-Rservice

Name:              Openshift-Rservice
Labels:            <none>
Selector:          name = RService-openshift
Type:              ClusterIP
IP:                172.30.42.80
Port:              <unnamed> 8080/TCP
Endpoints:         <none>
Session Affinity:  None
No events.

Create a route for the service using the following code.

{
   "kind": "Route",
   "apiVersion": "v1",
   "metadata": {"name": "Openshift-service-route"},
   "spec": {
      "host": "hello-openshift.cloudapps.example.com",
      "to": {
         "kind": "Service",
         "name": "OpenShift-route-service"
      },
      "tls": {"termination": "edge"}
   }
}

When the OC command is used to create a route, a new instance of the route resource is created.
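
For example, assuming the definition above is saved as route.json (a placeholder file name), the route is created and listed as follows:

$ oc create -f route.json
$ oc get routes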

Templates

Templates are defined as standard objects in OpenShift which can be used multiple times. They are parameterized with a list of placeholders, which are used to create multiple objects. A template can be used to create anything the user has authorization to create, from a pod to networking. A list of objects can be created if the template is uploaded from the CLI or the GUI interface to the project directory.

apiVersion: v1
kind: Template
metadata:
   name: <Name of template>
   annotations:
      description: <Description of template>
      iconClass: "icon-redis"
      tags: <Tags of image>
objects:
- apiVersion: v1
  kind: Pod
  metadata:
     name: <Object Specification>
  spec:
     containers:
     - image: <Image Name>
       name: master
       ports:
       - containerPort: <Container port number>
         protocol: <Protocol>
  labels:
     redis: <Communication Type>
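
Once a template is uploaded to a project, it can be instantiated with oc process, which substitutes the parameters and emits the object list; a short sketch with a placeholder template name:

$ oc create -f my-template.yaml                  # upload the template to the current project
$ oc process <template-name> | oc create -f -    # instantiate the template
$ oc new-app --template=<template-name>          # or instantiate it via new-app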

Authentication and Authorization

Authentication

In OpenShift, while configuring the master and client structure, the master comes up with an inbuilt OAuth server. The OAuth server is used for generating tokens, which are used for authentication to the API. Since OAuth comes as a default setup for the master, the Allow All identity provider is used by default. Different identity providers are available, which can be configured at /etc/openshift/master/master-config.yaml.

There are different types of identity providers present in OAuth.

  1. Allow All

  2. Deny All

  3. HTPasswd

  4. LDAP

  5. Basic Authentication

Allow All

apiVersion: v1
kind: Pod
metadata:
   name: redis-master
spec:
   containers:
   - image: dockerfile/redis
     name: master
     ports:
     - containerPort: 6379
       protocol: TCP
oauthConfig:
   identityProviders:
   - name: my_allow_provider
     challenge: true
     login: true
     provider:
        apiVersion: v1
        kind: AllowAllPasswordIdentityProvider

Deny All

apiVersion: v1
kind: Pod
metadata:
   name: redis-master
spec:
   containers:
   - image: dockerfile/redis
     name: master
     ports:
     - containerPort: 6379
       protocol: TCP
oauthConfig:
   identityProviders:
   - name: my_allow_provider
     challenge: true
     login: true
     provider:
        apiVersion: v1
        kind: DenyAllPasswordIdentityProvider

HTPasswd

In order to use HTPasswd, we need to first set up httpd-tools on the master machine and then configure it in the same way as we did for the others.

identityProviders:
- name: my_htpasswd_provider
  challenge: true
  login: true
  provider:
    apiVersion: v1
    kind: HTPasswdPasswordIdentityProvider

Authorization

Authorization is a feature of the OpenShift master which is used for validating a user. It checks the user who is trying to perform an action to see whether the user is authorized to perform that action on a given project. This helps the administrator control access to the projects.

Authorization policies are controlled using −

  1. Rules

  2. Roles

  3. Bindings

Evaluation of authorization is done using −

  1. Identity

  2. Action

  3. Bindings

Using Policies −

  1. Cluster policy

  2. Local policy
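
These policies are typically managed with the oadm policy commands; a hedged sketch with placeholder user and project names:

# oadm policy add-role-to-user admin user1 -n testproject    # project-level role binding
# oadm policy add-cluster-role-to-user cluster-admin user1   # cluster-level role binding
# oc get rolebindings -n testproject                         # inspect the resulting bindings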

OpenShift - Getting Started

OpenShift provides two mediums to create and deploy applications: a GUI and a CLI. In this chapter, we will be using the CLI to create a new application. We will use the oc client to communicate with the OpenShift environment.

Creating a New Application

In OpenShift, there are three methods of creating a new application.

  1. From a source code

  2. From an image

  3. From a template

From a Source Code

When we try to create an application from source code, OpenShift looks for a Dockerfile that should be present inside the repo, which defines the application build flow. We will use oc new-app to create the application.

The first thing to keep in mind while using a repo is that it should point to an origin in the repo, from where OpenShift will pull the code and build it.

If the repo is cloned on the Docker machine where the OC client is installed and the user is inside the same directory, then the application can be created using the following command.

$ oc new-app .   <Here "." denotes the current working directory>

Following is an example of trying to build from a remote repo for a specific branch.

$ oc new-app https://github.com/openshift/Testing-deployment.git#test1

Here, test1 is the branch from where we are trying to create a new application in OpenShift.

When specifying a Dockerfile in the repository, we need to define the build strategy as shown below.

$ oc new-app OpenShift/OpenShift-test~https://github.com/openshift/Testingdeployment.git

From an Image

While building an application using images, the images may be present on the local Docker server, in an in-house hosted Docker repository, or on Docker Hub. The only thing a user needs to make sure of is that he has access to pull images from the hub without any issue.

OpenShift has the capability to determine the source used, whether it is a Docker image or an image stream. However, if the user wishes, he can explicitly define whether it is an image stream or a Docker image.

$ oc new-app --docker-image tomcat

Using an image stream −

$ oc new-app tomcat:v1

From a Template

Templates can be used for the creation of a new application. It can be an already existing template or a newly created one.

The following yaml file is basically a template that can be used for deployment.

apiVersion: v1
kind: Template
metadata:
   name: <Name of template>
   annotations:
      description: <Description of template>
      iconClass: "icon-redis"
      tags: <Tags of image>
objects:
- apiVersion: v1
  kind: Pod
  metadata:
     name: <Object Specification>
  spec:
     containers:
     - image: <Image Name>
       name: master
       ports:
       - containerPort: <Container port number>
         protocol: <Protocol>
  labels:
     redis: <Communication Type>

Develop and Deploy a Web Application

Developing a New Application in OpenShift

In order to create a new application in OpenShift, we have to write new application code and build it using OpenShift OC build commands. As discussed, we have multiple ways of creating a new image. Here, we will be using a template to build the application. This template will build a new application when run with the oc new-app command.

The following template will create two front-end applications and one database. Along with that, it will create two new services, and those applications will get deployed to the OpenShift cluster. While building and deploying an application, we initially need to create a namespace in OpenShift and deploy the application under that namespace.

Create a new namespace

$ oc new-project openshift-test --display-name="OpenShift 3 Sample" \
   --description="This is an example project to demonstrate OpenShift v3"

Template

{
   "kind": "Template",
   "apiVersion": "v1",
   "metadata": {
      "name": "openshift-helloworld-sample",
      "creationTimestamp": null,
         "annotations": {
         "description": "This example shows how to create a simple openshift
         application in openshift origin v3",
         "iconClass": "icon-openshift",
         "tags": "instant-app,openshift,mysql"
      }
   }
},

Object Definitions

Secret definition in a template

"objects": [
{
   "kind": "Secret",
   "apiVersion": "v1",
   "metadata": {"name": "dbsecret"},
   "stringData" : {
      "mysql-user" : "${MYSQL_USER}",
      "mysql-password" : "${MYSQL_PASSWORD}"
   }
},

Service definition in a template

{
   "kind": "Service",
   "apiVersion": "v1",
   "metadata": {
      "name": "frontend",
      "creationTimestamp": null
   },
   "spec": {
      "ports": [
         {
            "name": "web",
            "protocol": "TCP",
            "port": 5432,
            "targetPort": 8080,
            "nodePort": 0
         }
      ],
      "selector": {"name": "frontend"},
      "type": "ClusterIP",
      "sessionAffinity": "None"
   },
   "status": {
      "loadBalancer": {}
   }
},

Route definition in a template

{
   "kind": "Route",
   "apiVersion": "v1",
   "metadata": {
      "name": "route-edge",
      "creationTimestamp": null,
      "annotations": {
         "template.openshift.io/expose-uri": "http://{.spec.host}{.spec.path}"
      }
   },
   "spec": {
      "host": "www.example.com",
      "to": {
         "kind": "Service",
         "name": "frontend"
      },
      "tls": {
         "termination": "edge"
      }
   },
   "status": {}
},
{
   "kind": "ImageStream",
   "apiVersion": "v1",
   "metadata": {
      "name": "origin-openshift-sample",
      "creationTimestamp": null
   },
   "spec": {},
   "status": {
      "dockerImageRepository": ""
   }
},
{
   "kind": "ImageStream",
   "apiVersion": "v1",
   "metadata": {
      "name": "openshift-22-ubuntu7",
      "creationTimestamp": null
   },
   "spec": {
      "dockerImageRepository": "ubuntu/openshift-22-ubuntu7"
   },
   "status": {
      "dockerImageRepository": ""
   }
},

Build config definition in a template

{
   "kind": "BuildConfig",
   "apiVersion": "v1",
   "metadata": {
      "name": "openshift-sample-build",
      "creationTimestamp": null,
      "labels": {name": "openshift-sample-build"}
   },
   "spec": {
      "triggers": [
         { "type": "GitHub",
            "github": {
            "secret": "secret101" }
         },
         {
            "type": "Generic",
            "generic": {
               "secret": "secret101",
               "allowEnv": true }
         },
         {
            "type": "ImageChange",
            "imageChange": {}
         },
         { "type": "ConfigChange”}
      ],
      "source": {
         "type": "Git",
         "git": {
            "uri": https://github.com/openshift/openshift-hello-world.git }
      },
      "strategy": {
         "type": "Docker",
         "dockerStrategy": {
            "from": {
               "kind": "ImageStreamTag",
               "name": "openshift-22-ubuntu7:latest”
            },
            "env": [
               {
                  "name": "EXAMPLE",
                  "value": "sample-app"
               }
            ]
         }
      },
      "output": {
         "to": {
            "kind": "ImageStreamTag",
            "name": "origin-openshift-sample:latest"
         }
      },
      "postCommit": {
         "args": ["bundle", "exec", "rake", "test"]
      },
      "status": {
         "lastVersion": 0
      }
   }
},

Deployment config in a template

"status": {
   "lastVersion": 0
}
{
   "kind": "DeploymentConfig",
   "apiVersion": "v1",
   "metadata": {
      "name": "frontend",
      "creationTimestamp": null
   }
},
"spec": {
   "strategy": {
      "type": "Rolling",
      "rollingParams": {
         "updatePeriodSeconds": 1,
         "intervalSeconds": 1,
         "timeoutSeconds": 120,
         "pre": {
            "failurePolicy": "Abort",
            "execNewPod": {
               "command": [
                  "/bin/true"
               ],
               "env": [
                  {
                     "name": "CUSTOM_VAR1",
                     "value": "custom_value1"
                  }
               ]
            }
         }
      }
   }
}
"triggers": [
   {
      "type": "ImageChange",
      "imageChangeParams": {
         "automatic": true,
         "containerNames": [
            "openshift-helloworld"
         ],
         "from": {
            "kind": "ImageStreamTag",
            "name": "origin-openshift-sample:latest"
         }
      }
   },
   {
      "type": "ConfigChange"
   }
],
"replicas": 2,
"selector": {
   "name": "frontend"
},
"template": {
   "metadata": {
      "creationTimestamp": null,
      "labels": {
         "name": "frontend"
      }
   },
   "spec": {
      "containers": [
         {
            "name": "openshift-helloworld",
            "image": "origin-openshift-sample",
            "ports": [
               {
                  "containerPort": 8080,
                  "protocol": "TCP”
               }
            ],
            "env": [
               {
                  "name": "MYSQL_USER",
                  "valueFrom": {
                     "secretKeyRef" : {
                        "name" : "dbsecret",
                        "key" : "mysql-user"
                     }
                  }
               },
               {
                  "name": "MYSQL_PASSWORD",
                  "valueFrom": {
                     "secretKeyRef" : {
                        "name" : "dbsecret",
                        "key" : "mysql-password"
                     }
                  }
               },
               {
                  "name": "MYSQL_DATABASE",
                  "value": "${MYSQL_DATABASE}"
               }
            ],
            "resources": {},
            "terminationMessagePath": "/dev/termination-log",
            "imagePullPolicy": "IfNotPresent",
            "securityContext": {
               "capabilities": {},
               "privileged": false
            }
         }
      ],
      "restartPolicy": "Always",
      "dnsPolicy": "ClusterFirst"
   },
   "status": {}
},

Service definition in a template

{
   "kind": "Service",
   "apiVersion": "v1",
   "metadata": {
      "name": "database",
      "creationTimestamp": null
   },
   "spec": {
      "ports": [
         {
            "name": "db",
            "protocol": "TCP",
            "port": 5434,
            "targetPort": 3306,
            "nodePort": 0
         }
      ],
      "selector": {
         "name": "database"
      },
      "type": "ClusterIP",
      "sessionAffinity": "None"
   },
   "status": {
      "loadBalancer": {}
   }
},

Deployment config definition in a template

{
   "kind": "DeploymentConfig",
   "apiVersion": "v1",
   "metadata": {
      "name": "database",
      "creationTimestamp": null
   },
   "spec": {
      "strategy": {
         "type": "Recreate",
         "resources": {}
      },
      "triggers": [
         {
            "type": "ConfigChange"
         }
      ],
      "replicas": 1,
      "selector": {"name": "database"},
      "template": {
         "metadata": {
            "creationTimestamp": null,
            "labels": {
               "name": "database"
            }
         },
         "spec": {
            "containers": [
               {
                  "name": "openshift-helloworld-database",
                  "image": "ubuntu/mysql-57-ubuntu7:latest",
                  "ports": [
                     {
                        "containerPort": 3306,
                        "protocol": "TCP"
                     }
                  ],
                  "env": [
                     {
                        "name": "MYSQL_USER",
                        "valueFrom": {
                           "secretKeyRef" : {
                              "name" : "dbsecret",
                              "key" : "mysql-user"
                           }
                        }
                     },
                     {
                        "name": "MYSQL_PASSWORD",
                        "valueFrom": {
                           "secretKeyRef" : {
                              "name" : "dbsecret",
                              "key" : "mysql-password"
                           }
                        }
                     },
                     {
                        "name": "MYSQL_DATABASE",
                        "value": "${MYSQL_DATABASE}"
                     }
                  ],
                  "resources": {},
                  "volumeMounts": [
                     {
                        "name": "openshift-helloworld-data",
                        "mountPath": "/var/lib/mysql/data"
                     }
                  ],
                  "terminationMessagePath": "/dev/termination-log",
                  "imagePullPolicy": "Always",
                  "securityContext": {
                     "capabilities": {},
                     "privileged": false
                  }
               }
            ],
            "volumes": [
               {
                  "name": "openshift-helloworld-data",
                  "emptyDir": {"medium": ""}
               }
            ],
            "restartPolicy": "Always",
            "dnsPolicy": "ClusterFirst"
         }
      }
   },
   "status": {}
},
"parameters": [
   {
      "name": "MYSQL_USER",
      "description": "database username",
      "generate": "expression",
      "from": "user[A-Z0-9]{3}",
      "required": true
   },
   {
      "name": "MYSQL_PASSWORD",
      "description": "database password",
      "generate": "expression",
      "from": "[a-zA-Z0-9]{8}",
      "required": true
   },
   {
      "name": "MYSQL_DATABASE",
      "description": "database name",
      "value": "root",
      "required": true
   }
],
"labels": {
   "template": "application-template-dockerbuild"
}
}

The above template definitions need to be assembled into a single template file before use. Copy all the content into one file and, once done, save it in JSON or YAML format.
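As a sketch of an alternative flow, the assembled file can also be registered as a Template object and then instantiated with oc process (the file name application-template.yaml below is chosen purely for illustration; the template name matches the sample output further down) −

$ oc create -f application-template.yaml
$ oc process openshift-helloworld-sample | oc create -f -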

We need to run the following command to create the application.

$ oc new-app application-template-stibuild.json
--> Deploying template openshift-helloworld-sample for "application-template-stibuild.json"

   openshift-helloworld-sample
   ---------
   This example shows how to create a simple ruby application in openshift origin v3
   * With parameters:
      * MYSQL_USER = userPJJ # generated
      * MYSQL_PASSWORD = cJHNK3se # generated
      * MYSQL_DATABASE = root

--> Creating resources with label app = ruby-helloworld-sample ...
   service "frontend" created
   route "route-edge" created
   imagestream "origin-ruby-sample" created
   imagestream "ruby-22-centos7" created
   buildconfig "ruby-sample-build" created
   deploymentconfig "frontend" created
   service "database" created
   deploymentconfig "database" created

--> Success
   Build scheduled, use 'oc logs -f bc/ruby-sample-build' to track its progress.
   Run 'oc status' to view your app.

If we wish to monitor the build, it can be done using −

$ oc get builds

NAME                        TYPE      FROM          STATUS     STARTED         DURATION
openshift-sample-build-1    Source   Git@bd94cbb    Running    7 seconds ago   7s

We can check the deployed applications on OpenShift using −

$ oc get pods
NAME                            READY   STATUS      RESTARTS   AGE
database-1-le4wx                1/1     Running     0          1m
frontend-1-e572n                1/1     Running     0          27s
frontend-1-votq4                1/1     Running     0          31s
openshift-sample-build-1-build  0/1     Completed   0          1m

We can check if the application services are created as per the service definition using −

$ oc get services
NAME        CLUSTER-IP      EXTERNAL-IP     PORT(S)      SELECTOR          AGE
database    172.30.80.39    <none>         5434/TCP     name=database      1m
frontend    172.30.17.4     <none>         5432/TCP     name=frontend      1m

OpenShift - Build Automation

In OpenShift, we have multiple methods of automating the build pipeline. To do that, we need to create a BuildConfig resource to describe the build flow. The flow in a BuildConfig can be compared with a job definition in Jenkins. While creating the build flow, we have to choose the build strategy.

BuildConfig File

In OpenShift, BuildConfig is a REST object used to connect to the API and then create a new instance.

kind: "BuildConfig"
apiVersion: "v1"
metadata:
   name: "<Name of build config file>"
spec:
   runPolicy: "Serial"
   triggers:
   -
      type: "GitHub"
      github:
         secret: "<Secrete file name>"
   - type: "Generic"
   generic:
      secret: "secret101"
   -
   type: "ImageChange"
   source:
      type: "<Source of code>"
      git:
   uri: "https://github.com/openshift/openshift-hello-world"
   dockerfile: "FROM openshift/openshift-22-centos7\nUSER example"
   strategy:
      type: "Source"

sourceStrategy:
   from:
      kind: "ImageStreamTag"
      name: "openshift-20-centos7:latest"
   output:
      to:
         kind: "ImageStreamTag"
         name: "origin-openshift-sample:latest"
   postCommit:
      script: "bundle exec rake test"

In OpenShift, there are four types of build strategies.

  1. Source-to-image strategy

  2. Docker strategy

  3. Custom strategy

  4. Pipeline strategy

Source-to-image Strategy

This strategy allows creating container images starting from the source code. In this flow, the actual code gets downloaded first in the container and then gets compiled inside it. The compiled code is deployed inside the same container, and the image is built from that code.

strategy:
   type: "Source"
   sourceStrategy:
      from:
         kind: "ImageStreamTag"
         name: "builder-image:latest"
      forcePull: true
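As an illustrative sketch, an S2I build can also be created directly from the command line with oc new-build, pairing a builder image stream with a source repository (the image and repository below are the ones used in the earlier examples) −

$ oc new-build openshift-22-ubuntu7:latest~https://github.com/openshift/openshift-hello-world.git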

There are multiple strategy policies available −

  1. Forcepull

  2. Incremental Builds

  3. External Builds

Docker Strategy

In this flow, OpenShift uses the Dockerfile to build the image and then uploads the created image to the Docker registry.

strategy:
   type: Docker
   dockerStrategy:
      from:
         kind: "ImageStreamTag"
         name: "ubuntu:latest"

Several Dockerfile options can be configured, such as the Dockerfile path, no cache, and force pull −

  1. From Image

  2. Dockerfile path

  3. No cache

  4. Force pull

Custom Strategy

This is one of the different kinds of build strategies, wherein there is no compulsion that the output of the build is going to be an image. It can be compared to a freestyle job in Jenkins. With this, we can create Jar, rpm, and other packages.

strategy:
   type: "Custom"
   customStrategy:
      from:
         kind: "DockerImage"
         name: "openshift/sti-image-builder"

It consists of multiple build strategy options −

  1. Expose Docker socket

  2. Secrets

  3. Force pull

Pipeline Strategy

Pipeline strategy is used to create custom build pipelines. This is basically used to implement a workflow in the pipeline. The build flow is defined using the Groovy DSL. OpenShift will create a pipeline job in Jenkins and execute it. This pipeline flow can also be used in Jenkins. In this strategy, we use a Jenkinsfile and append it to the BuildConfig definition.

strategy:
   type: "JenkinsPipeline"
   jenkinsPipelineStrategy:
      jenkinsfile: "node('agent') {\nstage 'build'\nopenshiftBuild(buildConfig: 'OpenShift-build', showBuildLogs: 'true')\nstage 'deploy'\nopenshiftDeploy(deploymentConfig: 'backend')\n}"

Using build pipeline

kind: "BuildConfig"
apiVersion: "v1"
metadata:
   name: "test-pipeline"
spec:
   source:
      type: "Git"
      git:
         uri: "https://github.com/openshift/openshift-hello-world"
   strategy:
      type: "JenkinsPipeline"
      jenkinsPipelineStrategy:
         jenkinsfilePath: <file path repository>
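Assuming the above definition is saved as test-pipeline.yaml (an illustrative file name), the pipeline can be created and triggered like any other build −

$ oc create -f test-pipeline.yaml
$ oc start-build test-pipeline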

OpenShift - CLI

OpenShift CLI is used for managing OpenShift applications from the command line. OpenShift CLI has the capability to manage the end-to-end application life cycle. In general, we use oc, the OpenShift client, to communicate with OpenShift.

OpenShift CLI Setup

In order to set up the oc client on different operating systems, we need to go through a different sequence of steps for each.

OC Client for Windows

Step 1 − Download the oc cli from the following link https://github.com/openshift/origin/releases/tag/v3.6.0-alpha.2

Step 2 − Unzip the package on a target path on the machine.

Step 3 − Edit the path environment variable of the system.

C:\Users\xxxxxxxx\xxxxxxxx>echo %PATH%

C:\oraclexe\app\oracle\product\10.2.0\server\bin;C:\Program Files (x86)\Intel\iCLS Client\;
C:\Program Files\Intel\iCLS Client\;C:\Program Files (x86)\AMD APP\bin\x86_64;
C:\Program Files (x86)\AMD APP\bin\x86;C:\Windows\system32;C:\Windows;
C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;
C:\Program Files (x86)\Windows Live\Shared;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;
C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;
C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;
C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;

Step 4 − Validate the OC setup on Windows.

C:\openshift-origin-client-tools-v3.6.0-alpha.2-3c221d5-windows>oc version
oc v3.6.0-alpha.2+3c221d5
kubernetes v1.6.1+5115d708d7
features: Basic-Auth

OC Client for Mac OS X

We can download the Mac OS setup binaries from the same location as for Windows, unzip them at a location, and set the path of the executable under the PATH environment variable.

Alternatively

We can use Homebrew and set it up using the following command.

$ brew install openshift-cli

OC Client for Linux

On the same page, we have the tar file that can be used for the Linux installation. Later, a path variable can be set pointing to that particular executable location.

Unpack the tar file using the following command.

$ tar -xf <path to the OC setup tar file>

Run the following command to check the authentication.

C:\openshift-origin-client-tools-v3.6.0-alpha.2-3c221d5-windows>oc login
Server [https://localhost:8443]:

CLI Configuration Files

The oc CLI configuration file is used for managing multiple OpenShift server connections and authentication mechanisms. This configuration file is also used for storing and managing multiple profiles and for switching between them. A normal configuration file looks like the following.

$ oc config view
apiVersion: v1
clusters:
- cluster:
     server: https://vklnld908.int.example.com
  name: openshift
contexts:
- context:
     cluster: openshift
     namespace: testproject
     user: alice
  name: alice
current-context: alice
kind: Config
preferences: {}
users:
- name: vipin
  user:
     token: ZCJKML2365jhdfafsdj797GkjgjGKJKJGjkg232

Setting Up CLI Client

For setting user credentials

$ oc config set-credentials <user_nickname>
[--client-certificate=<path/to/certfile>] [--client-key=<path/to/keyfile>]
[--token=<bearer_token>] [--username=<basic_user>] [--password=<basic_password>]

For setting cluster

$ oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>]
[--certificate-authority=<path/to/certificate/authority>]
[--api-version=<apiversion>] [--insecure-skip-tls-verify=true]

Example

$ oc config set-credentials vipin --token=ZCJKML2365jhdfafsdj797GkjgjGKJKJGjkg232

For setting context

$ oc config set-context <context_nickname> [--cluster=<cluster_nickname>]
[--user=<user_nickname>] [--namespace=<namespace>]
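Putting these together, a minimal sketch of configuring and activating a new profile could look like the following (the nicknames are illustrative; the server and token values reuse the earlier examples) −

$ oc config set-cluster prod --server=https://vklnld908.int.example.com
$ oc config set-credentials vipin --token=ZCJKML2365jhdfafsdj797GkjgjGKJKJGjkg232
$ oc config set-context prod-context --cluster=prod --user=vipin --namespace=testproject
$ oc config use-context prod-context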

CLI Profiles

In a single CLI configuration file, we can have multiple profiles wherein each profile has a different OpenShift server configuration, which later can be used for switching between different CLI profiles.

apiVersion: v1
clusters: --→ 1
- cluster:
     insecure-skip-tls-verify: true
     server: https://vklnld908.int.example.com:8443
  name: vklnld908.int.example.com:8443
- cluster:
     insecure-skip-tls-verify: true
     server: https://vklnld1446.int.example.com:8443
  name: vklnld1446.int.example.com:8443
contexts: --→ 2
- context:
     cluster: vklnld908.int.example.com:8443
     namespace: openshift-project
     user: vipin/vklnld908.int.example.com:8443
  name: openshift-project/vklnld908.int.example.com:8443/vipin
- context:
     cluster: vklnld908.int.example.com:8443
     namespace: testing-project
     user: alim/vklnld908.int.example.com:8443
  name: testproject-project/openshift1/alim
current-context: testing-project/vklnld908.int.example.com:8443/vipin --→ 3
kind: Config
preferences: {}

users: --→ 4
- name: vipin/vklnld908.int.example.com:8443
  user:
     token: ZCJKML2365jhdfafsdj797GkjgjGKJKJGjkg232

In the above configuration, we can see it is divided into four main sections, starting from clusters, which defines two instances of OpenShift master machines. The second section defines two contexts, one for the user vipin and one for alim. The current-context defines which context is currently in use; it can be changed to another context or profile by changing the definition here. Finally, the user definition and its authentication token are defined, which in our case is vipin.

If we want to check the current profile in use, it can be done using −

$ oc status
In project testing Project (testing-project)

$ oc project
Using project "testing-project" from context named "testing-project/vklnld908.int.example.com:8443/vipin" on server "https://vklnld908.int.example.com:8443".

If we want to switch to another CLI profile, it can be done from the command line using the following command.

$ oc project openshift-project
Now using project "openshift-project" on server "https://vklnld908.int.example.com:8443".

Using the above command, we can switch between profiles. At any point of time, if we wish to view the configuration, we can use $ oc config view command.

OpenShift - CLI Operations

OpenShift CLI is capable of performing all basic and advanced configuration, management, addition, and deployment of applications.

We can perform different kinds of operations using OC commands. This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand.

Basic Commands

Following table lists the basic OC commands.

  1. Types − An introduction to concepts and types

  2. Login − Log in to a server

  3. new-project − Request a new project

  4. new-app − Create a new application

  5. Status − Show an overview of the current project

  6. Project − Switch to another project

  7. Projects − Display existing projects

  8. Explain − Documentation of resources

  9. Cluster − Start and stop OpenShift cluster

Login

Log in to your server and save the login for subsequent use. First-time users of the client should run this command to connect to a server, establish an authenticated session, and save a connection to the configuration file. The default configuration will be saved to your home directory under ".kube/config".

The information required to log in, such as username and password, a session token, or the server details, can be provided through flags. If not provided, the command will prompt for user input as needed.

Usage

oc login [URL] [options]

Example

# Log in interactively
oc login

# Log in to the given server with the given certificate authority file
oc login localhost:8443 --certificate-authority=/path/to/cert.crt

# Log in to the given server with the given credentials (will not prompt interactively)
oc login localhost:8443 --username=myuser --password=mypass

Options −

-p, --password = "" − Password, will prompt if not provided

-u, --username = "" − Username, will prompt if not provided

--certificate-authority = "" − Path to a cert file for the certificate authority

--insecure-skip-tls-verify = false − If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure

--token = "" − Bearer token for authentication to the API server

To get the complete details regarding any command, use the oc <Command Name> --help command.

Build and Deploy Commands

Following table lists the build and deploy commands.

  1. Rollout − Manage a Kubernetes deployment or OpenShift deployment

  2. Deploy − View, start, cancel, or retry a deployment

  3. Rollback − Revert part of an application back to the previous state

  4. new-build − Create a new build configuration

  5. start-build − Start a new build

  6. cancel-build − Cancel running, pending, or new builds

  7. import-image − Import images from a Docker registry

  8. Tag − Tag the existing images into image streams

Application Management Commands

Following table lists the application management commands.

  1. Get − Display one or many resources

  2. Describe − Show details of a specific resource or a group of resources

  3. Edit − Edit a resource on the server

  4. Set − Commands that help set specific features on objects

  5. Label − Update the labels on a resource

  6. Annotate − Update the annotations on a resource

  7. Expose − Expose a replicated application as a service or route

  8. Delete − Delete one or more resources

  9. Scale − Change the number of pods in a deployment

  10. Autoscale − Autoscale a deployment config, deployment, replication controller, or replica set

  11. Secrets − Manage secrets

  12. Serviceaccounts − Manage service accounts in your project

Troubleshooting and Debugging Commands

Following table lists the troubleshooting and debugging commands.

  1. logs − Print the logs for a resource

  2. Rsh − Start a shell session in a pod

  3. Rsync − Copy files between the local filesystem and a pod

  4. port-forward − Forward one or more local ports to a pod

  5. Debug − Launch a new instance of a pod for debugging

  6. Exec − Execute a command in a container

  7. Proxy − Run a proxy to the Kubernetes API server

  8. Attach − Attach to a running container

  9. Run − Run a particular image on the cluster

  10. Cp − Copy files and directories to and from containers

Advanced Commands

Following table lists the advanced commands.

  1. adm − Tools for managing a cluster

  2. create − Create a resource by filename or stdin

  3. replace − Replace a resource by filename or stdin

  4. apply − Apply a configuration to a resource by filename or stdin

  5. patch − Update field(s) of a resource using strategic merge patch

  6. process − Process a template into a list of resources

  7. export − Export resources so they can be used elsewhere

  8. extract − Extract secrets or config maps to disk

  9. idle − Idle scalable resources

  10. observe − Observe changes to the resources and react to them (experimental)

  11. policy − Manage authorization policy

  12. auth − Inspect authorization

  13. convert − Convert config files between different API versions

  14. import − Commands that import applications

Setting Commands

Following table lists the setting commands.

  1. Logout − End the current server session

  2. Config − Change the configuration files for the client

  3. Whoami − Return information about the current session

  4. Completion − Output shell completion code for the specified shell (bash or zsh)

OpenShift - Clusters

OpenShift offers two installation methods for setting up an OpenShift cluster.

  1. Quick installation method

  2. Advanced configuration method

Setting Up Cluster

Quick Installation Method

This method is used for running a quick, unattended cluster setup configuration. In order to use this method, we need to first install the installer. This can be done by running the following command.

Interactive method

$ atomic-openshift-installer install

This is useful when one wishes to run an interactive setup.

Unattended installation method

This method is used when one wishes to set up an unattended way of installation, wherein the user can define a configuration yaml file and place it under ~/.config/openshift/ with the name installer.cfg.yml. Then, the following command can be run with the -u flag to start the installation.

$ atomic-openshift-installer -u install

By default, it uses the config file located under ~/.config/openshift/. Ansible, on the other hand, is used as the backend of the installation.

version: v2
variant: openshift-enterprise
variant_version: 3.1
ansible_log_path: /tmp/ansible.log

deployment:
   ansible_ssh_user: root
   hosts:
   - ip: 172.10.10.1
     hostname: vklnld908.int.example.com
     public_ip: 24.222.0.1
     public_hostname: master.example.com
     roles:
        - master
        - node
     containerized: true
     connect_to: 24.222.0.1
   - ip: 172.10.10.2
     hostname: vklnld1446.int.example.com
     public_ip: 24.222.0.2
     public_hostname: node1.example.com
     roles:
        - node
     connect_to: 10.0.0.2
   - ip: 172.10.10.3
     hostname: vklnld1447.int.example.com
     public_ip: 10.22.2.3
     public_hostname: node2.example.com
     roles:
        - node
     connect_to: 10.0.0.3

roles:
   master:
      <variable_name1>: "<value1>"
      <variable_name2>: "<value2>"
   node:
      <variable_name1>: "<value1>"

Here, we have role-specific variables, which can be defined if one wishes to set up specific variables.

Once done, we can verify the installation using the following command.

$ oc get nodes
NAME                    STATUS    AGE
master.example.com      Ready     10d
node1.example.com       Ready     10d
node2.example.com       Ready     10d

Advanced Installation

Advanced installation is completely based on Ansible configuration, wherein the complete host configuration and the variable definitions regarding the master and node configuration are present. This contains all the details regarding the configuration.

Once we have the setup and the playbook is ready, we can simply run the following command to set up the cluster.

$ ansible-playbook -i inventory/hosts ~/openshift-ansible/playbooks/byo/config.yml

Adding Hosts to a Cluster

We can add a host to the cluster using −

  1. Quick installer tool

  2. Advanced configuration method

Quick installation tool works in both interactive and non-interactive mode. Use the following command.

$ atomic-openshift-installer -u -c </path/to/file> scaleup

The same configuration file format can be used for adding both masters as well as nodes.

Advanced Configuration Method

In this method, we update the host file of Ansible and then add a new node or server details in this file. Configuration file looks like the following.

[OSEv3:children]
masters
nodes
new_nodes
new_master

In the same Ansible hosts file, add variable details regarding the new node as shown below.

[new_nodes]
vklnld1448.int.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"

Finally, using the updated host file, run the new configuration and invoke the configuration file to get the setup done using the following command.

$ ansible-playbook -i /inventory/hosts /usr/share/ansible/openshift-ansible/playbooks/test/openshift-node/scaleup.yml

Managing Cluster Logs

OpenShift cluster logs are nothing but the logs that get generated from the master and the node machines of the cluster. These can be any kind of log, starting from server logs, master logs, container logs, pod logs, etc. There are multiple technologies and applications present for container log management.

A few of the tools that can be implemented for log management are listed below.

  1. Fluentd

  2. ELK

  3. Kibana

  4. Nagios

  5. Splunk

ELK stack − This stack is useful while trying to collect the logs from all the nodes and present them in a systematic format. The ELK stack is mainly divided into three major categories.

ElasticSearch − Mainly responsible for collecting information from all the containers and putting it into a central location.

Fluentd − Used for feeding the collected logs to the Elasticsearch engine.

Kibana − A graphical interface used for presenting the collected data as useful information.

One key point to note is, when this system is deployed on the cluster it starts collecting logs from all the nodes.

Log Diagnostics

OpenShift has an inbuilt oc adm diagnostics command that can be used for analyzing multiple error situations. This tool can be used from the master as a cluster administrator. This utility is very helpful in troubleshooting and diagnosing known problems. It runs on the master client and nodes.

If run without any arguments or flags, it will look for the configuration files of the client, server, and node machines, and use them for diagnostics. One can run the diagnostics individually by passing the following arguments −

  1. AggregatedLogging

  2. AnalyzeLogs

  3. ClusterRegistry

  4. ClusterRoleBindings

  5. ClusterRoles

  6. ClusterRouter

  7. ConfigContexts

  8. DiagnosticPod

  9. MasterConfigCheck

  10. MasterNode

  11. MetricsApiProxy

  12. NetworkCheck

  13. NodeConfigCheck

  14. NodeDefinitions

  15. ServiceExternalIPs

  16. UnitStatus

One can simply run them with the following command.

$ oc adm diagnostics <DiagnosticName>
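For example, to run only a couple of specific checks −

$ oc adm diagnostics AnalyzeLogs NodeConfigCheck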

Upgrading a Cluster

Upgrading the cluster involves upgrading multiple things within the cluster and getting the cluster updated with new components and upgrades. This involves −

  1. Upgradation of master components

  2. Upgradation of node components

  3. Upgradation of policies

  4. Upgradation of routes

  5. Upgradation of image stream

In order to perform all these upgrades, we need to first get the quick installer or utils in place. For that, we need to update the following utilities −

  1. atomic-openshift-utils

  2. atomic-openshift-excluder

  3. atomic-openshift-docker-excluder

  4. etcd package

Before starting the upgrade, we need to backup etcd on the master machine, which can be done using the following commands.

$ ETCD_DATA_DIR=/var/lib/origin/openshift.local.etcd
$ etcdctl backup \
   --data-dir $ETCD_DATA_DIR \
   --backup-dir $ETCD_DATA_DIR.bak.<date>

Upgradation of Master Components

In the OpenShift master, we start the upgrade by updating the etcd file and then moving on to Docker. Finally, we run the automated executer to get the cluster into the required position. However, before starting the upgrade we need to first activate the atomic openshift packages on each of the masters. This can be done using the following commands.

Step 1 − Remove the atomic-openshift packages from the list of yum excludes.

$ atomic-openshift-excluder unexclude

Step 2 − Upgrade etcd on all the masters.

$ yum update etcd

Step 3 − Restart the service of etcd and check if it has started successfully.

$ systemctl restart etcd
$ journalctl -r -u etcd

Step 4 − Upgrade the Docker package.

$ yum update docker

Step 5 − Restart the Docker service and check if it is correctly up.

$ systemctl restart docker
$ journalctl -r -u docker

Step 6 − Once done, reboot the system with the following commands.

$ systemctl reboot
$ journalctl -r -u docker

Step 7 − Finally, run the atomic-executer to get the packages back to the list of yum excludes.

$ atomic-openshift-excluder exclude

There is no such compulsion for upgrading the policy, it only needs to be upgraded if recommended, which can be checked with the following command.

$ oadm policy reconcile-cluster-roles

In most of the cases, we don’t need to update the policy definition.

Upgradation of Node Components

Once the master update is complete, we can start upgrading the nodes. One thing to keep in mind is, the period of upgrade should be short in order to avoid any kind of issue in the cluster.

Step 1 − Remove all atomic OpenShift packages from all the nodes where you wish to perform the upgrade.

$ atomic-openshift-excluder unexclude

Step 2 − Next, disable node scheduling before upgrade.

$ oadm manage-node <node name> --schedulable=false

Step 3 − Drain all the pods from the current host onto other hosts.

$ oadm drain <node name> --force --delete-local-data --ignore-daemonsets

Step 4 − Upgrade the Docker setup on the host.

$ yum update docker

Step 5 − Restart the Docker service and then restart the atomic-openshift-node service.

$systemctl restart docker
$ systemctl restart atomic-openshift-node

Step 6 − Check if both of them started correctly.

$ journalctl -r -u atomic-openshift-node

Step 7 − After upgrade is complete, reboot the node machine.

$ systemctl reboot
$ journalctl -r -u docker

Step 8 − Re-enable scheduling on nodes.

$ oadm manage-node <node> --schedulable=true

Step 9 − Run the atomic-openshift-excluder to add the OpenShift packages back to the list of yum excludes on the node.

$ atomic-openshift-excluder exclude

Step 10 − Finally, check if all the nodes are available.

$ oc get nodes

NAME                 STATUS   AGE
master.example.com   Ready    12d
node1.example.com    Ready    12d
node2.example.com    Ready    12d

OpenShift - Application Scaling

Autoscaling is a feature in OpenShift where the applications deployed can scale up and down as and when required, as per certain specifications. In OpenShift applications, autoscaling is also known as pod autoscaling. There are two types of application scaling, as follows.

Vertical Scaling

Vertical scaling is all about adding more and more power to a single machine, which means adding more CPU and hard disk. This is an old method of OpenShift, which is now not supported by OpenShift releases.

Horizontal Scaling

This type of scaling is useful when there is a need to handle more requests by increasing the number of machines.

In OpenShift, there are two methods to enable the scaling feature.

  1. Using the deployment configuration file

  2. While running the image

Using Deployment Configuration File

In this method, the scaling feature is enabled via a deployment configuration yaml file. For this, the oc autoscale command is used with the minimum and maximum number of replicas that need to run at any given point of time in the cluster. We need an object definition for the creation of the autoscaler. Following is an example of a pod autoscaler definition file.

apiVersion: extensions/v1beta1
kind: HorizontalPodAutoscaler
metadata:
   name: database
spec:
   scaleRef:
      kind: DeploymentConfig
      name: database
      apiVersion: v1
      subresource: scale
   minReplicas: 1
   maxReplicas: 10
   cpuUtilization:
      targetPercentage: 80

Once we have the file in place, we need to save it in yaml format and run the following command for deployment.

$ oc create -f <file name>.yaml

While Running the Image

One can also autoscale without the yaml file, by using the following oc autoscale command in oc command line.

$ oc autoscale dc/database --min 1 --max 5 --cpu-percent=75
deploymentconfig "database" autoscaled

This command will also generate a similar kind of file that can later be used for reference.
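The state of the autoscaler can then be inspected like any other resource, for example −

$ oc get hpa database
$ oc describe hpa database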

Deployment Strategies in OpenShift

A deployment strategy in OpenShift defines a flow of deployment with different available methods. In OpenShift, the following are the important types of deployment strategies.

  1. Rolling strategy

  2. Recreate strategy

  3. Custom strategy

Following is an example of deployment configuration file, which is used mainly for deployment on OpenShift nodes.

kind: "DeploymentConfig"
apiVersion: "v1"
metadata:
   name: "database"
spec:
   template:
      metadata:
         labels:
            name: "Database1"
spec:
   containers:
      - name: "vipinopenshifttest"
         image: "openshift/mongoDB"
         ports:
            - containerPort: 8080
               protocol: "TCP"
replicas: 5
selector:
   name: "database"
triggers:
- type: "ConfigChange"
- type: "ImageChange"
   imageChangeParams:
      automatic: true
      containerNames:
         - "vipinopenshifttest"
      from:
         kind: "ImageStreamTag"
         name: "mongoDB:latest"
   strategy:
      type: "Rolling"

In the above DeploymentConfig file, the strategy is Rolling.

We can use the following OC command for deployment.

$ oc deploy <deployment_config> --latest

Rolling Strategy

Rolling strategy is used for rolling updates or deployment. This process also supports life-cycle hooks, which are used for injecting code into any deployment process.

strategy:
   type: Rolling
   rollingParams:
      timeoutSeconds: <time in seconds>
      maxSurge: "<definition in %>"
      maxUnavailable: "<Definition in %>"
      pre: {}
      post: {}

Recreate Strategy

This deployment strategy has some of the basic features of the rolling deployment strategy, and it also supports life-cycle hooks.

strategy:
   type: Recreate
   recreateParams:
      pre: {}
      mid: {}
      post: {}

Custom Strategy

This is very helpful when one wishes to provide one's own deployment process or flow. All the customizations can be done as per the requirement.

strategy:
   type: Custom
   customParams:
      image: organization/mongoDB
      command: [ "ls -l", "$HOME" ]
      environment:
      - name: VipinOpenshiftteat
        value: Dev1

OpenShift - Administration

In this chapter, we will cover topics such as how to manage a node, configure a service account, etc.

Master and Node Configuration

In OpenShift, we need to use the openshift start command to boot up a new server. While launching a new master, we need to use master along with the start command, whereas while starting a new node we need to use node along with the start command. To do this, we need to create configuration files for the master as well as for the nodes. We can create a basic configuration file for the master and the node using the following commands.

For master configuration file

$ openshift start master --write-config=/openshift.local.config/master

For node configuration file

$ oadm create-node-config --node-dir=/openshift.local.config/node-<node_hostname> --node=<node_hostname> --hostnames=<hostname>,<ip_address>

Once we run these commands, we will get the base configuration files that can be used as the starting point for configuration. Later, we can use the same files to boot the new servers.

apiLevels:
- v1beta3
- v1
apiVersion: v1
assetConfig:
   logoutURL: ""
   masterPublicURL: https://172.10.12.1:7449
   publicURL: https://172.10.2.2:7449/console/
   servingInfo:
      bindAddress: 0.0.0.0:7449
      certFile: master.server.crt
      clientCA: ""
      keyFile: master.server.key
      maxRequestsInFlight: 0
      requestTimeoutSeconds: 0
controllers: '*'
corsAllowedOrigins:
- 172.10.2.2:7449
- 127.0.0.1
- localhost
dnsConfig:
   bindAddress: 0.0.0.0:53
etcdClientInfo:
   ca: ca.crt
   certFile: master.etcd-client.crt
   keyFile: master.etcd-client.key
   urls:
   - https://10.0.2.15:4001
etcdConfig:
   address: 10.0.2.15:4001
   peerAddress: 10.0.2.15:7001
   peerServingInfo:
      bindAddress: 0.0.0.0:7001
      certFile: etcd.server.crt
      clientCA: ca.crt
      keyFile: etcd.server.key
   servingInfo:
      bindAddress: 0.0.0.0:4001
      certFile: etcd.server.crt
      clientCA: ca.crt
      keyFile: etcd.server.key
   storageDirectory: /root/openshift.local.etcd
etcdStorageConfig:
   kubernetesStoragePrefix: kubernetes.io
   kubernetesStorageVersion: v1
   openShiftStoragePrefix: openshift.io
   openShiftStorageVersion: v1
imageConfig:
   format: openshift/origin-${component}:${version}
   latest: false
kind: MasterConfig
kubeletClientInfo:
   ca: ca.crt
   certFile: master.kubelet-client.crt
   keyFile: master.kubelet-client.key
   port: 10250
kubernetesMasterConfig:
   apiLevels:
   - v1beta3
   - v1
   apiServerArguments: null
   controllerArguments: null
   masterCount: 1
   masterIP: 10.0.2.15
   podEvictionTimeout: 5m
   schedulerConfigFile: ""
   servicesNodePortRange: 30000-32767
   servicesSubnet: 172.30.0.0/16
   staticNodeNames: []
masterClients:
   externalKubernetesKubeConfig: ""
   openshiftLoopbackKubeConfig: openshift-master.kubeconfig
masterPublicURL: https://172.10.2.2:7449
networkConfig:
   clusterNetworkCIDR: 10.1.0.0/16
   hostSubnetLength: 8
   networkPluginName: ""
   serviceNetworkCIDR: 172.30.0.0/16
oauthConfig:
   assetPublicURL: https://172.10.2.2:7449/console/
   grantConfig:
      method: auto
   identityProviders:
   - challenge: true
     login: true
     name: anypassword
     provider:
        apiVersion: v1
        kind: AllowAllPasswordIdentityProvider
   masterPublicURL: https://172.10.2.2:7449/
   masterURL: https://172.10.2.2:7449/
   sessionConfig:
      sessionMaxAgeSeconds: 300
      sessionName: ssn
      sessionSecretsFile: ""
   tokenConfig:
      accessTokenMaxAgeSeconds: 86400
      authorizeTokenMaxAgeSeconds: 300
policyConfig:
   bootstrapPolicyFile: policy.json
   openshiftInfrastructureNamespace: openshift-infra
   openshiftSharedResourcesNamespace: openshift
projectConfig:
   defaultNodeSelector: ""
   projectRequestMessage: ""
   projectRequestTemplate: ""
   securityAllocator:
      mcsAllocatorRange: s0:/2
      mcsLabelsPerProject: 5
      uidAllocatorRange: 1000000000-1999999999/10000
routingConfig:
   subdomain: router.default.svc.cluster.local
serviceAccountConfig:
   managedNames:
   - default
   - builder
   - deployer
   masterCA: ca.crt
   privateKeyFile: serviceaccounts.private.key
   publicKeyFiles:
   - serviceaccounts.public.key
servingInfo:
   bindAddress: 0.0.0.0:8443
   certFile: master.server.crt
   clientCA: ca.crt
   keyFile: master.server.key
   maxRequestsInFlight: 0
   requestTimeoutSeconds: 3600

Node configuration files

allowDisabledDocker: true
apiVersion: v1
dnsDomain: cluster.local
dnsIP: 172.10.2.2
dockerConfig:
   execHandlerName: native
imageConfig:
   format: openshift/origin-${component}:${version}
   latest: false
kind: NodeConfig
masterKubeConfig: node.kubeconfig
networkConfig:
   mtu: 1450
   networkPluginName: ""
nodeIP: ""
nodeName: node1.example.com

podManifestConfig:
   path: "/path/to/pod-manifest-file"
   fileCheckIntervalSeconds: 30
servingInfo:
   bindAddress: 0.0.0.0:10250
   certFile: server.crt
   clientCA: node-client-ca.crt
   keyFile: server.key
volumeDirectory: /root/openshift.local.volumes

This is how the node configuration file looks. Once we have these configuration files in place, we can run the following command to create the master and node servers.

$ openshift start --master-config=/openshift.local.config/master/master-config.yaml --node-config=/openshift.local.config/node-<node_hostname>/node-config.yaml

Managing Nodes

In OpenShift, we have OC command line utility which is mostly used for carrying out all the operations in OpenShift. We can use the following commands to manage the nodes.

For listing a node

$ oc get nodes
NAME                             LABELS
node1.example.com     kubernetes.io/hostname = vklnld1446.int.example.com
node2.example.com     kubernetes.io/hostname = vklnld1447.int.example.com

Describing details about a node

$ oc describe node <node name>

Deleting a node

$ oc delete node <node name>

Listing pods on a node

$ oadm manage-node <node1> <node2> --list-pods [--pod-selector=<pod_selector>] [-o json|yaml]

Evacuating pods on a node

$ oadm manage-node <node1> <node2> --evacuate --dry-run [--pod-selector=<pod_selector>]

Configuring Authentication

In OpenShift master, there is a built-in OAuth server, which can be used for managing authentication. All OpenShift users get the token from this server, which helps them communicate to OpenShift API.

There are different kinds of authentication levels in OpenShift, which can be configured along with the master configuration file.

  1. Allow all

  2. Deny all

  3. HTPasswd

  4. LDAP

  5. Basic authentication

  6. Request header

While defining the master configuration, we can define the identification policy where we can define the type of policy that we wish to use.

Allow All

This will allow access with any username and password.

oauthConfig:
   ...
   identityProviders:
   - name: Allow_Authontication
      challenge: true
      login: true
      provider:
         apiVersion: v1
         kind: AllowAllPasswordIdentityProvider

Deny All

This will deny access to all usernames and passwords.

oauthConfig:
   ...
   identityProviders:
   - name: deny_Authontication
      challenge: true
      login: true
      provider:
         apiVersion: v1
         kind: DenyAllPasswordIdentityProvider

HTPasswd

HTPasswd is used to validate the username and password against an encrypted password file.

The following is the command for generating the encrypted password file.

$ htpasswd </path/to/users.htpasswd> <user_name>

Using the encrypted file.

oauthConfig:
   ...
   identityProviders:
   - name: htpasswd_authontication
      challenge: true
      login: true
      provider:
         apiVersion: v1
         kind: HTPasswdPasswordIdentityProvider
         file: /path/to/users.htpasswd

LDAP Identity Provider

This is used for LDAP authentication wherein LDAP server plays a key role in authentication.

oauthConfig:
   ...
   identityProviders:
   - name: "ldap_authontication"
      challenge: true
      login: true
      provider:
         apiVersion: v1
         kind: LDAPPasswordIdentityProvider
         attributes:
            id:
            - dn
            email:
            - mail
            name:
            - cn
            preferredUsername:
            - uid
         bindDN: ""
         bindPassword: ""
         ca: my-ldap-ca-bundle.crt
         insecure: false
         url: "ldap://ldap.example.com/ou=users,dc=acme,dc=com?uid"

Basic Authentication

This is used when the validation of the username and password is done against a server-to-server authentication. The authentication is protected over the base URL, and the response is presented in JSON format.

oauthConfig:
   ...
   identityProviders:
   - name: my_remote_basic_auth_provider
      challenge: true
      login: true
      provider:
         apiVersion: v1
         kind: BasicAuthPasswordIdentityProvider
         url: https://www.vklnld908.int.example.com/remote-idp
         ca: /path/to/ca.file
         certFile: /path/to/client.crt
         keyFile: /path/to/client.key

Configuring a Service Account

Service accounts provide a flexible way of accessing the OpenShift API without exposing a regular user's username and password for authentication.

Enabling a Service Account

Service account uses a key pair of public and private key for authentication. Authentication to API is done using a private key and validating it against a public key.

serviceAccountConfig:
   ...
   masterCA: ca.crt
   privateKeyFile: serviceaccounts.private.key
   publicKeyFiles:
   - serviceaccounts.public.key
   - ...
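The key pair referenced above can be generated with openssl; the following is a minimal sketch, with file names matching the configuration above −

$ openssl genrsa -out serviceaccounts.private.key 2048
$ openssl rsa -in serviceaccounts.private.key -pubout -out serviceaccounts.public.key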

Creating a Service Account

Use the following command to create a service account

$ oc create serviceaccount <name of service account>
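For example, assuming a service account named robot in the current project, we can inspect it and grant it a role −

$ oc create serviceaccount robot
$ oc describe serviceaccount robot
$ oadm policy add-role-to-user view system:serviceaccount:<project_name>:robot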

Working with HTTP Proxy

In most production environments, direct access to the Internet is restricted. Machines are either not exposed to the Internet or are exposed via an HTTP or HTTPS proxy. In an OpenShift environment, this proxy machine definition is set as an environment variable.

This can be done by adding a proxy definition in the master and node files located under /etc/sysconfig. This is similar to what we do for any other application.

Master Machine

/etc/sysconfig/openshift-master

HTTP_PROXY=http://USERNAME:PASSWORD@172.10.10.1:8080/
HTTPS_PROXY=https://USERNAME:PASSWORD@172.10.10.1:8080/
NO_PROXY=master.vklnld908.int.example.com

Node Machine

/etc/sysconfig/openshift-node

HTTP_PROXY=http://USERNAME:PASSWORD@172.10.10.1:8080/
HTTPS_PROXY=https://USERNAME:PASSWORD@172.10.10.1:8080/
NO_PROXY=master.vklnld908.int.example.com

Once done, we need to restart the master and node machines.

For Docker Pull

/etc/sysconfig/docker

HTTP_PROXY=http://USERNAME:PASSWORD@172.10.10.1:8080/
HTTPS_PROXY=https://USERNAME:PASSWORD@172.10.10.1:8080/
NO_PROXY=master.vklnld1446.int.example.com

In order to make a pod run in a proxy environment, it can be done using −

containers:
- env:
   - name: "HTTP_PROXY"
     value: "http://USER:PASSWORD@10.0.1.1:8080"

The oc env command can be used to update the existing environment variables.
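For example, here is a sketch reusing the proxy values above to update the environment of a deployment configuration in place −

$ oc env dc/frontend HTTP_PROXY=http://USERNAME:PASSWORD@172.10.10.1:8080/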

OpenShift Storage with NFS

In OpenShift, the concepts of persistent volume and persistent volume claim form persistent storage. This is one of the key concepts in which a persistent volume is created first and later that same volume is claimed. For this, we need to have enough capacity and disk space on the underlying hardware.

apiVersion: v1
kind: PersistentVolume
metadata:
   name: storage-unit1
spec:
   capacity:
      storage: 10Gi
   accessModes:
   - ReadWriteOnce
   nfs:
      path: /opt
      server: 10.12.2.2
   persistentVolumeReclaimPolicy: Recycle

Next, create the persistent volume using the oc create command.

$ oc create -f storage-unit1.yaml

persistentvolume " storage-unit1 " created

Claiming the created volume.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
   name: storage-claim1
spec:
   accessModes:
      - ReadWriteOnce
   resources:
      requests:
         storage: 5Gi

Create the claim.

$ oc create -f storage-claim1.yaml
persistentvolumeclaim "storage-claim1" created

User and Role Management

User and role administration is used to manage users and their access and controls on different projects.

Creating a User

Predefined templates can be used to create new users in OpenShift.

kind: "Template"
apiVersion: "v1"
parameters:
   - name: vipin
   required: true
objects:
   - kind: "User"
   apiVersion: "v1"
   metadata:
   name: "${email}"

- kind: "Identity"
   apiVersion: "v1"
   metadata:
      name: "vipin:${email}"
   providerName: "SAML"
   providerUserName: "${email}"
- kind: "UserIdentityMapping"
apiVersion: "v1"
identity:
   name: "vipin:${email}"
user:
   name: "${email}"

Use oc create -f <file name> to create users.

$ oc create -f vipin.yaml

Use the following command to delete a user in OpenShift.

$ oc delete user <user name>

Limiting User Access

ResourceQuotas and LimitRanges are used for limiting user access levels. They are used for limiting the pods and containers on the cluster.

apiVersion: v1
kind: ResourceQuota
metadata:
   name: resources-utilization
spec:
   hard:
      pods: "10"

Creating the quota using the above configuration

$ oc create -f resource-quota.yaml -n Openshift-sample

Describing the resource quota

$ oc describe quota resource-quota -n Openshift-sample
Name:       resource-quota
Namespace:  Openshift-sample
Resource    Used  Hard
--------    ----  ----
pods        3     10

定义容器限制可用于限制已部署容器将使用的资源。它们用于定义某些对象的最高和最低限制。

Defining container limits can be used for limiting the resources that are going to be used by deployed containers. They are used to define the maximum and minimum limits for certain objects.
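
The following is a minimal LimitRange sketch; the CPU and memory values are illustrative assumptions, not prescribed defaults.

apiVersion: v1
kind: LimitRange
metadata:
   name: limits
spec:
   limits:
   - type: Container
     max:
        cpu: "1"
        memory: 1Gi
     min:
        cpu: 100m
        memory: 256Mi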

User project limitations

这基本上用于限制用户在任何时间点可以拥有的项目数量。这通过将用户级别定义为青铜、白银和黄金类别来实现。

This basically limits the number of projects a user can have at any point of time. It is done by defining the user levels in the categories of bronze, silver, and gold.

我们需要首先定义一个对象,该对象保存青铜、白银和黄金类别可以拥有的项目数量。这需要在 master-config.yaml 文件中完成。

We need to first define an object which holds the value of how many projects a bronze, silver, and gold category can have. This needs to be done in the master-config.yaml file.

admissionConfig:
   pluginConfig:
      ProjectRequestLimit:
         configuration:
            apiVersion: v1
            kind: ProjectRequestLimitConfig
            limits:
            - selector:
                 level: platinum
            - selector:
                 level: gold
              maxProjects: 15
            - selector:
                 level: silver
              maxProjects: 10
            - selector:
                 level: bronze
              maxProjects: 5

重启主服务器。

Restart the master server.

将用户分配到特定级别。

Assigning a user to a particular level.

$ oc label user vipin level=gold

在需要时将用户移出标签。

Moving the user out of the label, if required.

$ oc label user <user_name> level-
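
The label assignment can be verified with the --show-labels flag.

$ oc get user vipin --show-labels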

向用户添加角色。

Adding roles to a user.

$ oadm policy add-role-to-user <role> <user_name>

从用户处移除角色。

Removing the role from a user.

$ oadm policy remove-role-from-user <role> <user_name>

向用户添加集群角色。

Adding a cluster role to a user.

$ oadm policy add-cluster-role-to-user <role> <user_name>

从用户处移除集群角色。

Removing a cluster role from a user.

$ oadm policy remove-cluster-role-from-user <role> <user_name>

向组添加角色。

Adding a role to a group.

$ oadm policy add-role-to-group <role> <groupname>

从组处移除角色。

Removing a role from a group.

$ oadm policy remove-role-from-group <role> <groupname>

向组添加集群角色。

Adding a cluster role to a group.

$ oadm policy add-cluster-role-to-group <role> <groupname>

从组处移除集群角色。

Removing a cluster role from a group.

$ oadm policy remove-cluster-role-from-group <role> <groupname>

User for cluster administration

这是用户从创建到删除集群都能管理整个集群的最强大的角色之一。

This is one of the most powerful roles, where the user has the capability to manage a complete cluster, from its creation through its deletion.

$ oadm policy add-role-to-user admin <user_name> -n <project_name>

User with ultimate power

$ oadm policy add-cluster-role-to-user cluster-admin <user_name>

OpenShift - Docker and Kubernetes

OpenShift 建立在 Docker 和 Kubernetes 之上。所有容器都建立在 Docker 集群之上,Docker 集群本质上是基于 Linux 机器、使用 Kubernetes 编排特性的 Kubernetes 服务。

OpenShift is built on top of Docker and Kubernetes. All the containers are built on top of a Docker cluster, which is basically a Kubernetes service on top of Linux machines, using the Kubernetes orchestration feature.

在此过程中,我们构建控制所有节点并将容器部署到所有节点的 Kubernetes master。Kubernetes 的主要功能是使用不同种类的配置文件来控制 OpenShift 集群和部署流程。与在 Kubernetes 中使用 kubectl 一样,我们使用 OC 命令行实用工具在集群节点上构建和部署容器。

In this process, we build a Kubernetes master which controls all the nodes and deploys the containers to all the nodes. The main function of Kubernetes is to control the OpenShift cluster and deployment flow using different kinds of configuration files. Just as we use kubectl in Kubernetes, we use the oc command-line utility to build and deploy containers on cluster nodes.

以下是用于在集群中创建不同种类对象的不同种类配置文件。

Following are the different kinds of config files used for creation of different kind of objects in the cluster.

  1. Images

  2. POD

  3. Service

  4. Replication Controller

  5. Replica set

  6. Deployment

Images

Kubernetes (Docker) 镜像是容器化基础设施的关键构建模块。截至目前,Kubernetes 仅支持 Docker 镜像。Pod 中的每个容器内部都运行着其 Docker 镜像。

Kubernetes (Docker) images are the key building blocks of containerized infrastructure. As of now, Kubernetes only supports Docker images. Each container in a pod has its Docker image running inside it.

apiVersion: v1
kind: Pod
metadata:
   name: testing-for-image-pull -----------> 1
spec:
   containers:
   - name: neo4j-server ------------------------> 2
     image: <Name of the Docker image> ----------> 3
     imagePullPolicy: Always -------------> 4
     command: ["echo", "SUCCESS"] -------------------> 5

POD

Pod 是 Kubernetes 集群节点内的一组容器及其存储。可以创建内部包含多个容器的 Pod。以下是在同一个 Pod 中保留数据库容器和 Web 界面容器的示例。

A pod is a collection of containers and their storage inside a node of a Kubernetes cluster. It is possible to create a pod with multiple containers inside it. Following is an example of keeping a database container and a web interface container in the same pod.

apiVersion: v1
kind: Pod
metadata:
   name: tomcat
spec:
   containers:
   - name: tomcat
     image: tomcat:8.0
     ports:
     - containerPort: 7500
     imagePullPolicy: Always

Service

Service 可以定义为一组逻辑 Pod。可以将其定义为 Pod 之上的抽象,该抽象提供一个单一的 IP 地址和 DNS 名称,Pod 可以通过它进行访问。使用 Service,可以非常轻松地管理负载均衡配置,并帮助 Pod 非常容易地进行扩展。

A service can be defined as a logical set of pods. It can be defined as an abstraction on top of the pods that provides a single IP address and DNS name by which the pods can be accessed. With a service, it is very easy to manage the load-balancing configuration. It helps pods scale very easily.

apiVersion: v1
kind: Service
metadata:
   name: tutorial-point-service
spec:
   ports:
   - port: 8080
     targetPort: 31999

Replication Controller

Replication Controller 是 Kubernetes 的一项关键功能,它负责管理 pod 生命周期。它负责确保在任何时间点都运行指定数量的 Pod 副本。

Replication Controller is one of the key features of Kubernetes, which is responsible for managing the pod lifecycle. It is responsible for making sure that specified numbers of pod replicas are running at any point of time.

apiVersion: v1
kind: ReplicationController
metadata:
   name: tomcat-replicationcontroller
spec:
   replicas: 3
   template:
      metadata:
         name: tomcat-replicationcontroller
         labels:
            app: App
            component: neo4j
      spec:
         containers:
         - name: tomcat
           image: tomcat:8.0
           ports:
           - containerPort: 7474

Replica Set

副本集确保应运行多少 Pod 副本。它可以被视为 Replication Controller 的替代品。

The replica set ensures how many replicas of a pod should be running. It can be considered as a replacement of the replication controller.

apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
   name: tomcat-replicaset
spec:
   replicas: 3
   selector:
      matchLabels:
         tier: Backend
      matchExpressions:
      - { key: tier, operator: In, values: [Backend] }
   template:
      metadata:
         labels:
            app: App
            component: neo4j
            tier: Backend
      spec:
         containers:
         - name: tomcat
           image: tomcat:8.0
           ports:
           - containerPort: 7474

Deployment

Deployment 是 Replication Controller 的升级和更高版本。它们管理副本集的部署,副本集也是 Replication Controller 的升级版本。它们有能力更新副本集,并且也有能力回滚到之前的版本。

Deployments are an upgraded, higher-level version of the replication controller. They manage the deployment of replica sets, which are themselves an upgraded version of the replication controller. They have the capability to update the replica set and are also capable of rolling back to the previous version.

apiVersion: extensions/v1beta1 ---------------------> 1
kind: Deployment --------------------------> 2
metadata:
   name: tomcat-deployment
spec:
   replicas: 3
   template:
      metadata:
         labels:
            app: tomcat
            tier: Backend
      spec:
         containers:
         - name: tomcat
           image: tomcat:8.0
           ports:
           - containerPort: 7474
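
As deployments support rollbacks, a rollback could be triggered as sketched below, assuming the deployment name used in the example above.

$ kubectl rollout undo deployment/tomcat-deployment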

所有配置文件可以用来创建各自的 Kubernetes 对象。

All config files can be used to create their respective Kubernetes objects.

$ kubectl create -f <file name>.yaml

可以使用以下命令来了解 Kubernetes 对象的详细信息和说明。

The following commands can be used to know the details and description of the Kubernetes objects.

For POD

$ kubectl get pod <pod name>
$ kubectl delete pod <pod name>
$ kubectl describe pod <pod name>

For Replication Controller

$ kubectl get rc <rc name>
$ kubectl delete rc <rc name>
$ kubectl describe rc <rc name>

For Service

$ kubectl get svc <svc name>
$ kubectl delete svc <svc name>
$ kubectl describe svc <svc name>

有关如何使用 Docker 和 Kubernetes 的更多详情,请参阅我们的 Kubernetes 教程。

For more details on how to work with Docker and Kubernetes, please visit our Kubernetes tutorial.

OpenShift - Security

OpenShift 安全性主要是由处理安全约束的两个组件构成。

OpenShift security is mainly a combination of two components that handle security constraints.

  1. Security Context Constraints (SCC)

  2. Service Account

Security Context Constraints (SCC)

它基本用于 Pod 限制,这意味着它定义了 Pod 的限制,比如它可以执行哪些操作以及它可以在集群中访问哪些内容。

It is basically used for pod restriction, which means it defines the limitations for a pod, such as what actions it can perform and what it can access in the cluster.

OpenShift 提供了一套预定义的 SCC,可以由管理员使用、修改和扩展。

OpenShift provides a set of predefined SCC that can be used, modified, and extended by the administrator.

$ oc get scc
NAME              PRIV   CAPS  HOSTDIR  SELINUX    RUNASUSER         FSGROUP   SUPGROUP  PRIORITY
anyuid            false   []   false    MustRunAs  RunAsAny          RunAsAny  RunAsAny  10
hostaccess        false   []   true     MustRunAs  MustRunAsRange    RunAsAny  RunAsAny  <none>
hostmount-anyuid  false   []   true     MustRunAs  RunAsAny          RunAsAny  RunAsAny  <none>
nonroot           false   []   false    MustRunAs  MustRunAsNonRoot  RunAsAny  RunAsAny  <none>
privileged        true    []   true     RunAsAny   RunAsAny          RunAsAny  RunAsAny  <none>
restricted        false   []   false    MustRunAs  MustRunAsRange    RunAsAny  RunAsAny  <none>

如果希望使用任何预定义的 SCC,可以通过简单地将用户或组添加到 SCC 组中来实现。

If one wishes to use any predefined SCC, that can be done by simply adding the user or the group to the SCC.

$ oadm policy add-scc-to-user <scc_name> <user_name>
$ oadm policy add-scc-to-group <scc_name> <group_name>
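
The users and groups attached to an SCC can be verified by describing it.

$ oc describe scc <scc_name>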

Service Account

服务账户基本用于控制对 OpenShift 主 API 的访问,当有来自任何主设备或节点设备的命令或请求时就会调用该 API。

Service accounts are basically used to control access to the OpenShift master API, which gets called when a command or a request is fired from any of the master or node machines.

当任何应用程序或进程需要受限 SCC 未授予的功能时,你必须创建一个特定的服务账户并将该账户添加到相应的 SCC。但是,如果没有预定义的 SCC 满足你的需求,那么最好创建一个针对你的需求的新 SCC,而不是修改最接近的那个。最后,将它设置到部署配置中。

Any time an application or a process requires a capability that is not granted by the restricted SCC, you will have to create a specific service account and add the account to the respective SCC. However, if no predefined SCC suits your requirement, then it is better to create a new SCC specific to your requirement rather than modifying the one that is the best fit. In the end, set it in the deployment configuration.

$ oc create serviceaccount cadmin
$ oc adm policy add-scc-to-user <scc_name> -z cadmin
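
Finally, the service account is set in the deployment configuration; the following is a minimal sketch of the relevant part of a deployment configuration.

spec:
   template:
      spec:
         serviceAccountName: cadmin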

Container Security

在 OpenShift 中,容器的安全性基于以下概念:容器平台有多安全,以及容器在什么地方运行。当我们讨论容器安全性以及需要关注的问题时,会有多个问题浮出水面。

In OpenShift, the security of containers is based on the concept of how secure the container platform is and where the containers are running. There are multiple things that come into the picture when we talk about container security and what needs to be taken care of.

Image Provenance − 有一个安全标签系统,可以准确且无可辩驳地识别生产环境中运行的容器来自何处。

Image Provenance − A secure labeling system is in place that identifies exactly and incontrovertibly where the containers running in the production environment came from.

Security Scanning − 镜像扫描器自动检查所有镜像,以查找已知的漏洞。

Security Scanning − An image scanner automatically checks all the images for known vulnerabilities.

Auditing − 定期审核生产环境,以确保所有容器都基于最新的镜像,并且主机和容器都得到安全配置。

Auditing − The production environment is regularly audited to ensure all containers are based on up-to-date images, and both hosts and containers are securely configured.

Isolation and Least Privilege − 容器在符合有效运行所需的最小资源和权限的情况下运行。它们无法对主机或其他容器造成过度的干扰。

Isolation and Least Privilege − Containers run with the minimum resources and privileges needed to function effectively. They are not able to unduly interfere with the host or other containers.

Runtime Threat Detection − 在运行时检测针对容器化应用程序的主动威胁并自动对其做出响应的功能。

Runtime Threat Detection − A capability that detects active threats against containerized applications at runtime and automatically responds to them.

Access Controls − 使用 AppArmor 或 SELinux 这样的 Linux 安全模块来实施访问控制。

Access Controls − Linux security modules, such as AppArmor or SELinux, are used to enforce access controls.

有一些主要方法可以实现容器安全。

There are a few key methods by which container security is achieved.

  1. Controlling access via OAuth

  2. Via self-service web console

  3. By Certificates of platform

Controlling Access via OAuth

在此方法中,对 API 访问的身份验证通过从 OAuth 服务器获取安全令牌来实现,OAuth 服务器内置于 OpenShift master 机器中。作为管理员,您可以修改 OAuth 服务器的配置。

In this method, authentication for API access is achieved by getting a secured token via OAuth servers, which come inbuilt in the OpenShift master machine. As an administrator, you have the capability to modify the OAuth server configuration.

有关 OAuth 服务器配置的更多详细信息,请参阅本教程的第 5 章。

For more details on OAuth server configuration, refer to Chapter 5 of this tutorial.

Via Self-Service Web Console

此 Web 控制台安全功能内置在 OpenShift Web 控制台中。此控制台确保所有合作的团队在没有经过身份验证的情况下均无法访问其他环境。OpenShift 中的多租户 master 具有以下安全功能:

This web console security feature is inbuilt in the OpenShift web console. This console ensures that all the teams working together do not have access to other environments without authentication. The multi-tenant master in OpenShift has the following security features −

  1. TLS layer is enabled

  2. Uses x.509 certificate for authentication

  3. Secures the etcd configuration on the master machine

By Certificates of Platform

在此方法中,每个主机的证书在安装期间通过 Ansible 进行配置。由于它通过 REST API 使用 HTTPS 通信协议,因此我们需要到不同组件和对象的 TLS 安全连接。这些是预定义的证书,但是,甚至可以在 master 集群上安装自定义证书以供访问。在 master 的初始设置期间,可以使用 openshift_master_overwrite_named_certificates 参数覆盖现有证书来配置自定义证书。

In this method, certificates for each host are configured during installation via Ansible. As it uses the HTTPS communication protocol via the REST API, we need TLS-secured connections to different components and objects. These are predefined certificates; however, one can even have a custom certificate installed on the master cluster for access. During the initial setup of the master, custom certificates can be configured by overriding the existing certificates using the openshift_master_overwrite_named_certificates parameter.

Example

openshift_master_named_certificates = [{"certfile": "/path/on/host/to/master.crt",
"keyfile": "/path/on/host/to/master.key",
"cafile": "/path/on/host/to/mastercert.crt"}]

有关如何生成自定义证书的更多详细信息,请访问以下链接:

For more detail on how to generate custom certificates, visit the following link −

Network Security

在 OpenShift 中,软件定义网络 (SDN) 用于通信。集群中的每个 Pod 都使用网络命名空间,其中每个 Pod 都获得自己的 IP 地址和一系列用于接收网络流量的端口。通过这种方法可以隔离 Pod,使其无法与其他项目中的 Pod 通信。

In OpenShift, Software Defined Networking (SDN) is used for communication. A network namespace is used for each pod in the cluster, wherein each pod gets its own IP address and a range of ports to receive network traffic on. This method isolates pods so that they cannot communicate with pods in other projects.

Isolating a Project

群集管理员可以使用 CLI 中的以下 oadm command 来完成此操作。

This can be done by the cluster admin using the following oadm command from CLI.

$ oadm pod-network isolate-projects <project name 1> <project name 2>
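
The isolation can be reversed by joining the project networks, and a project network can also be made global so that it can reach all pods.

$ oadm pod-network join-projects --to=<project name 1> <project name 2>
$ oadm pod-network make-projects-global <project name>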

这意味着上面定义的项目无法与群集中的其他项目进行通信。

This means that the projects defined above cannot communicate with other projects in the cluster.

Volume Security

卷安全显然意味着保护 OpenShift 群集中的项目的 PV 和 PVC。主要有四个部分来控制 OpenShift 中对卷的访问。

Volume security clearly means securing the PV and PVC of projects in OpenShift cluster. There are mainly four sections to control access to volumes in OpenShift.

  1. Supplemental Groups

  2. fsGroup

  3. runAsUser

  4. seLinuxOptions

Supplemental Groups − 补充组是常规的 Linux 组。当进程在系统中运行时,它会通过用户 ID 和组 ID 运行。这些组用于控制对共享存储的访问。

Supplemental Groups − Supplemental groups are regular Linux groups. When a process runs in the system, it runs with a user ID and group ID. These groups are used for controlling access to shared storage.

使用以下命令检查 NFS 挂载。

Check the NFS mount using the following command.

# showmount -e <nfs-server-ip-or-hostname>
Export list for f21-nfs.vm:
/opt/nfs *

使用以下命令检查挂载服务器上的 NFS 详细信息。

Check NFS details on the mount server using the following command.

# cat /etc/exports
/opt/nfs *(rw,sync,no_root_squash)
...
# ls -lZ /opt/nfs -d
drwxrws---. nfsnobody 2325 unconfined_u:object_r:usr_t:s0 /opt/nfs
# id nfsnobody
uid=65534(nfsnobody) gid=454265(nfsnobody) groups=454265(nfsnobody)

/opt/nfs/ 导出可由 UID 454265 和组 2325 访问。

The /opt/nfs/ export is accessible by UID 454265 and the group 2325.

apiVersion: v1
kind: Pod
...
spec:
   containers:
   - name: ...
     volumeMounts:
     - name: nfs
       mountPath: /usr/share/...
   securityContext:
      supplementalGroups: [2325]
   volumes:
   - name: nfs
     nfs:
        server: <nfs_server_ip_or_host>
        path: /opt/nfs

fsGroup

fsGroup 表示用于添加容器辅助组的文件系统组。辅助组 ID 用于共享存储,而 fsGroup 用于块存储。

fsGroup stands for the file system group, which is used for adding container supplemental groups. Supplemental group IDs are used for shared storage, while fsGroup is used for block storage.

kind: Pod
spec:
   containers:
   - name: ...
   securityContext:
      fsGroup: 2325

runAsUser

runAsUser 使用用户 ID 进行通信。用于在 Pod 定义中定义容器映像。如果需要,可以在所有容器中使用单个 ID 用户。

runAsUser uses the user ID for communication. It is used while defining the container image in the pod definition. A single user ID can be used in all containers, if required.

在运行容器时,定义的 ID 与导出上的所有者 ID 相匹配。如果指定的 ID 在外部(Pod 级别)定义,那么它将对 Pod 中的所有容器全局有效。如果它针对特定容器定义,那么它将仅对该容器有效。

While running the container, the defined ID is matched with the owner ID on the export. If the specified ID is defined outside (at the pod level), then it becomes global to all the containers in the pod. If it is defined for a specific container, then it becomes specific to that single container.

spec:
   containers:
   - name: ...
     securityContext:
        runAsUser: 454265