OpenShift: A Concise Tutorial
OpenShift - Administration
In this chapter, we will cover topics such as managing nodes, configuring service accounts, and so on.
Master and Node Configuration
In OpenShift, we use the openshift start command to bring up a new server. When launching a new master we run openshift start master, whereas a new node is started with openshift start node. To do this, we first need to create configuration files for the master as well as for the nodes. We can create a basic configuration file for the master and the node using the following commands.
For master configuration file
$ openshift start master --write-config=/openshift.local.config/master
For node configuration file
$ oadm create-node-config --node-dir=/openshift.local.config/node-<node_hostname> --node=<node_hostname> --hostnames=<hostname>,<ip_address>
Once we run these commands, we get the base configuration files that can be used as a starting point for configuration. Later, we can use the same files to boot new servers.
apiLevels:
- v1beta3
- v1
apiVersion: v1
assetConfig:
  logoutURL: ""
  masterPublicURL: https://172.10.2.2:7449
  publicURL: https://172.10.2.2:7449/console/
  servingInfo:
    bindAddress: 0.0.0.0:7449
    certFile: master.server.crt
    clientCA: ""
    keyFile: master.server.key
    maxRequestsInFlight: 0
    requestTimeoutSeconds: 0
controllers: '*'
corsAllowedOrigins:
- 172.10.2.2:7449
- 127.0.0.1
- localhost
dnsConfig:
  bindAddress: 0.0.0.0:53
etcdClientInfo:
  ca: ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
  - https://10.0.2.15:4001
etcdConfig:
  address: 10.0.2.15:4001
  peerAddress: 10.0.2.15:7001
  peerServingInfo:
    bindAddress: 0.0.0.0:7001
    certFile: etcd.server.crt
    clientCA: ca.crt
    keyFile: etcd.server.key
  servingInfo:
    bindAddress: 0.0.0.0:4001
    certFile: etcd.server.crt
    clientCA: ca.crt
    keyFile: etcd.server.key
  storageDirectory: /root/openshift.local.etcd
etcdStorageConfig:
  kubernetesStoragePrefix: kubernetes.io
  kubernetesStorageVersion: v1
  openShiftStoragePrefix: openshift.io
  openShiftStorageVersion: v1
imageConfig:
  format: openshift/origin-${component}:${version}
  latest: false
kind: MasterConfig
kubeletClientInfo:
  ca: ca.crt
  certFile: master.kubelet-client.crt
  keyFile: master.kubelet-client.key
  port: 10250
kubernetesMasterConfig:
  apiLevels:
  - v1beta3
  - v1
  apiServerArguments: null
  controllerArguments: null
  masterCount: 1
  masterIP: 10.0.2.15
  podEvictionTimeout: 5m
  schedulerConfigFile: ""
  servicesNodePortRange: 30000-32767
  servicesSubnet: 172.30.0.0/16
  staticNodeNames: []
masterClients:
  externalKubernetesKubeConfig: ""
  openshiftLoopbackKubeConfig: openshift-master.kubeconfig
masterPublicURL: https://172.10.2.2:7449
networkConfig:
  clusterNetworkCIDR: 10.1.0.0/16
  hostSubnetLength: 8
  networkPluginName: ""
  serviceNetworkCIDR: 172.30.0.0/16
oauthConfig:
  assetPublicURL: https://172.10.2.2:7449/console/
  grantConfig:
    method: auto
  identityProviders:
  - challenge: true
    login: true
    name: anypassword
    provider:
      apiVersion: v1
      kind: AllowAllPasswordIdentityProvider
  masterPublicURL: https://172.10.2.2:7449/
  masterURL: https://172.10.2.2:7449/
  sessionConfig:
    sessionMaxAgeSeconds: 300
    sessionName: ssn
    sessionSecretsFile: ""
  tokenConfig:
    accessTokenMaxAgeSeconds: 86400
    authorizeTokenMaxAgeSeconds: 300
policyConfig:
  bootstrapPolicyFile: policy.json
  openshiftInfrastructureNamespace: openshift-infra
  openshiftSharedResourcesNamespace: openshift
projectConfig:
  defaultNodeSelector: ""
  projectRequestMessage: ""
  projectRequestTemplate: ""
  securityAllocator:
    mcsAllocatorRange: s0:/2
    mcsLabelsPerProject: 5
    uidAllocatorRange: 1000000000-1999999999/10000
routingConfig:
  subdomain: router.default.svc.cluster.local
serviceAccountConfig:
  managedNames:
  - default
  - builder
  - deployer
  masterCA: ca.crt
  privateKeyFile: serviceaccounts.private.key
  publicKeyFiles:
  - serviceaccounts.public.key
servingInfo:
  bindAddress: 0.0.0.0:8443
  certFile: master.server.crt
  clientCA: ca.crt
  keyFile: master.server.key
  maxRequestsInFlight: 0
  requestTimeoutSeconds: 3600
Node Configuration File
allowDisabledDocker: true
apiVersion: v1
dnsDomain: cluster.local
dnsIP: 172.10.2.2
dockerConfig:
  execHandlerName: native
imageConfig:
  format: openshift/origin-${component}:${version}
  latest: false
kind: NodeConfig
masterKubeConfig: node.kubeconfig
networkConfig:
  mtu: 1450
  networkPluginName: ""
nodeIP: ""
nodeName: node1.example.com
podManifestConfig:
  path: "/path/to/pod-manifest-file"
  fileCheckIntervalSeconds: 30
servingInfo:
  bindAddress: 0.0.0.0:10250
  certFile: server.crt
  clientCA: node-client-ca.crt
  keyFile: server.key
volumeDirectory: /root/openshift.local.volumes
This is how the node configuration file looks. Once these configuration files are in place, we can run the following command to start the master and node servers.
$ openshift start --master-config=/openshift.local.config/master/master-config.yaml --node-config=/openshift.local.config/node-<node_hostname>/node-config.yaml
Managing Nodes
In OpenShift, we have the oc command-line utility, which is used for carrying out most operations in OpenShift. We can use the following commands to manage the nodes.
For listing a node
$ oc get nodes
NAME                LABELS
node1.example.com   kubernetes.io/hostname=vklnld1446.int.example.com
node2.example.com   kubernetes.io/hostname=vklnld1447.int.example.com
Configuring Authentication
The OpenShift master has a built-in OAuth server, which is used for managing authentication. All OpenShift users obtain tokens from this server, which they use to communicate with the OpenShift API.
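As a rough illustration, the token issued by this OAuth server is attached to API requests as a Bearer header. The following Python sketch builds such a request; the master URL, API path, and token are placeholder values, not a real endpoint:

```python
import urllib.request

def build_api_request(base_url: str, path: str, token: str) -> urllib.request.Request:
    """Build an OpenShift API request authenticated with an OAuth bearer token."""
    req = urllib.request.Request(base_url.rstrip("/") + path)
    req.add_header("Authorization", "Bearer " + token)
    req.add_header("Accept", "application/json")
    return req

# Hypothetical master URL and token, for illustration only.
req = build_api_request("https://172.10.2.2:7449", "/oapi/v1/projects", "example-token")
print(req.get_header("Authorization"))
```

On a live cluster, the current user's token can be printed with oc whoami -t after logging in.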
There are different kinds of authentication levels in OpenShift, which can be configured in the master configuration file.
- Allow all
- Deny all
- HTPasswd
- LDAP
- Basic authentication
- Request header
While defining the master configuration, we can define the identity policy, i.e. the type of provider we wish to use.
Allow All
This allows any username and password to authenticate.
oauthConfig:
  ...
  identityProviders:
  - name: allow_authentication
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: AllowAllPasswordIdentityProvider
Deny All
This will deny access to all usernames and passwords.
oauthConfig:
  ...
  identityProviders:
  - name: deny_authentication
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: DenyAllPasswordIdentityProvider
HTPasswd
HTPasswd validates the username and password against a file of encrypted passwords.
To generate the encrypted file, use the following command (the -c flag creates the file on first use).
$ htpasswd -c </path/to/users.htpasswd> <user_name>
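The htpasswd utility supports several hash formats. As an illustration of what ends up in the file, the following Python sketch generates an entry in the legacy SHA-1 format that htpasswd produces with its -s flag; this is for understanding only, not a replacement for the real tool (SHA-1 is weak, and real deployments should use htpasswd's bcrypt or MD5 formats):

```python
import base64
import hashlib

def htpasswd_sha_entry(username: str, password: str) -> str:
    """Build a users.htpasswd line in the legacy {SHA} format (htpasswd -s)."""
    digest = base64.b64encode(hashlib.sha1(password.encode("utf-8")).digest()).decode("ascii")
    return f"{username}:{{SHA}}{digest}"

print(htpasswd_sha_entry("vipin", "secret"))
```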
Using the encrypted file.
oauthConfig:
  ...
  identityProviders:
  - name: htpasswd_authentication
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /path/to/users.htpasswd
LDAP Identity Provider
This is used for LDAP authentication, wherein the LDAP server plays the key role in authenticating users.
oauthConfig:
  ...
  identityProviders:
  - name: "ldap_authentication"
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: LDAPPasswordIdentityProvider
      attributes:
        id:
        - dn
        email:
        - mail
        name:
        - cn
        preferredUsername:
        - uid
      bindDN: ""
      bindPassword: ""
      ca: my-ldap-ca-bundle.crt
      insecure: false
      url: "ldap://ldap.example.com/ou=users,dc=acme,dc=com?uid"
Basic Authentication
This is used when the username and password are validated against a remote server using server-to-server basic authentication. Credentials are sent to a protected URL, and the identity is returned in JSON format.
oauthConfig:
  ...
  identityProviders:
  - name: my_remote_basic_auth_provider
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: BasicAuthPasswordIdentityProvider
      url: https://www.vklnld908.int.example.com/remote-idp
      ca: /path/to/ca.file
      certFile: /path/to/client.crt
      keyFile: /path/to/client.key
Configuring a Service Account
Service accounts provide a flexible way of accessing the OpenShift API without exposing a regular user's credentials.
Enabling a Service Account
A service account uses a key pair consisting of a public and a private key for authentication. Authentication to the API is done using the private key and validating it against the public key.
serviceAccountConfig:
  ...
  masterCA: ca.crt
  privateKeyFile: serviceaccounts.private.key
  publicKeyFiles:
  - serviceaccounts.public.key
  - ...
Working with HTTP Proxy
In most production environments, direct access to the Internet is restricted. Machines are either not exposed to the Internet at all, or are exposed via an HTTP or HTTPS proxy. In an OpenShift environment, this proxy definition is set as an environment variable.
This can be done by adding a proxy definition to the master and node files located under /etc/sysconfig, just as we would for any other application.
Master Machine
/etc/sysconfig/openshift-master
HTTP_PROXY=http://USERNAME:PASSWORD@172.10.10.1:8080/
HTTPS_PROXY=https://USERNAME:PASSWORD@172.10.10.1:8080/
NO_PROXY=master.vklnld908.int.example.com
Node Machine
/etc/sysconfig/openshift-node
HTTP_PROXY=http://USERNAME:PASSWORD@172.10.10.1:8080/
HTTPS_PROXY=https://USERNAME:PASSWORD@172.10.10.1:8080/
NO_PROXY=master.vklnld908.int.example.com
Once done, we need to restart the master and node machines.
For Docker Pull
/etc/sysconfig/docker
HTTP_PROXY=http://USERNAME:PASSWORD@172.10.10.1:8080/
HTTPS_PROXY=https://USERNAME:PASSWORD@172.10.10.1:8080/
NO_PROXY=master.vklnld1446.int.example.com
To make a pod run in a proxy environment, the proxy can be set on the container −
containers:
- env:
  - name: "HTTP_PROXY"
    value: "http://USER:PASSWORD@10.0.1.1:8080"
The oc env command can be used to update an existing environment variable.
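Client libraries generally honour these variables automatically. The following Python sketch shows how a process picks up HTTP_PROXY from its environment; the proxy address and NO_PROXY host are the placeholder values used above:

```python
import os
import urllib.request

# Set the proxy variables as they would appear in /etc/sysconfig (placeholder values).
os.environ["http_proxy"] = "http://USERNAME:PASSWORD@172.10.10.1:8080/"
os.environ["no_proxy"] = "master.vklnld908.int.example.com"

# urllib reads the standard proxy environment variables.
proxies = urllib.request.getproxies()
print(proxies.get("http"))
```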
OpenShift Storage with NFS
In OpenShift, the concepts of persistent volumes and persistent volume claims form persistent storage. This is one of the key concepts: first a persistent volume is created, and later that volume is claimed. For this, we need enough capacity and disk space on the underlying hardware.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: storage-unit1
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /opt
    server: 10.12.2.2
  persistentVolumeReclaimPolicy: Recycle
Next, create the persistent volume using the oc create command.
$ oc create -f storage-unit1.yaml
persistentvolume "storage-unit1" created
Claiming the created volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-claim1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Create the claim.
$ oc create -f storage-claim1.yaml
persistentvolumeclaim "storage-claim1" created
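The claim is bound to a volume whose capacity and access modes can satisfy the request. The following Python sketch illustrates that matching rule, using the sizes from the example above; it is a simplification of the real binder, not OpenShift code:

```python
def parse_gi(quantity: str) -> int:
    """Parse a Kubernetes-style 'Gi' quantity into an integer number of GiB."""
    if not quantity.endswith("Gi"):
        raise ValueError(f"unsupported quantity: {quantity}")
    return int(quantity[:-2])

def claim_fits(pv_capacity: str, pv_modes: list, claim_request: str, claim_modes: list) -> bool:
    """A volume can satisfy a claim if it is big enough and supports the requested access modes."""
    return (parse_gi(pv_capacity) >= parse_gi(claim_request)
            and set(claim_modes) <= set(pv_modes))

# The 10Gi PV above can satisfy the 5Gi ReadWriteOnce claim.
print(claim_fits("10Gi", ["ReadWriteOnce"], "5Gi", ["ReadWriteOnce"]))  # True
```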
User and Role Management
User and role administration is used to manage users, their access, and their controls on different projects.
Creating a User
Predefined templates can be used to create new users in OpenShift.
kind: "Template"
apiVersion: "v1"
parameters:
- name: email
  required: true
objects:
- kind: "User"
  apiVersion: "v1"
  metadata:
    name: "${email}"
- kind: "Identity"
  apiVersion: "v1"
  metadata:
    name: "vipin:${email}"
  providerName: "SAML"
  providerUserName: "${email}"
- kind: "UserIdentityMapping"
  apiVersion: "v1"
  identity:
    name: "vipin:${email}"
  user:
    name: "${email}"
Use oc create -f <file name> to create users.
$ oc create -f vipin.yaml
Use the following command to delete a user in OpenShift.
$ oc delete user <user name>
Limiting User Access
ResourceQuotas and LimitRanges are used for limiting user access levels. They are used for limiting the pods and containers on the cluster.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-quota
spec:
  hard:
    pods: "10"
Creating the quota using the above configuration
$ oc create -f resource-quota.yaml -n openshift-sample
Describing the resource quota
$ oc describe quota resource-quota -n openshift-sample
Name:       resource-quota
Namespace:  openshift-sample
Resource    Used    Hard
--------    ----    ----
pods        3       10
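The Used/Hard columns drive admission: a new pod is admitted only while usage stays within the hard limit. A minimal sketch of that check (not the actual quota controller):

```python
def can_create_pod(used: int, hard: int) -> bool:
    """Admit one more pod only if it keeps usage within the hard quota."""
    return used + 1 <= hard

# With the quota above: 3 of 10 pods are used, so another pod is admitted.
print(can_create_pod(3, 10))   # True
print(can_create_pod(10, 10))  # False
```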
Container limits can be used to restrict the resources used by deployed containers. They define the maximum and minimum limits for certain objects.
User Project Limitations
This basically limits the number of projects a user can have at any point of time. It is done by defining user levels in categories such as bronze, silver, and gold.
We first need to define an object that holds the number of projects a bronze, silver, or gold user can have. This is done in the master-config.yaml file.
admissionConfig:
  pluginConfig:
    ProjectRequestLimit:
      configuration:
        apiVersion: v1
        kind: ProjectRequestLimitConfig
        limits:
        - selector:
            level: platinum
        - selector:
            level: gold
          maxProjects: 15
        - selector:
            level: silver
          maxProjects: 10
        - selector:
            level: bronze
          maxProjects: 5
Restart the master server.
Assigning a user to a particular level.
$ oc label user vipin level=gold
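Conceptually, the admission plugin looks up the first limit whose selector matches the user's labels; a selector without maxProjects (platinum above) means unlimited. A simplified Python sketch of that lookup, mirroring the configuration above:

```python
# Mirrors the limits from master-config.yaml above (None = unlimited).
LIMITS = [
    ({"level": "platinum"}, None),
    ({"level": "gold"}, 15),
    ({"level": "silver"}, 10),
    ({"level": "bronze"}, 5),
]

def max_projects(user_labels: dict):
    """Return the project cap for the first matching selector, or None for unlimited/no match."""
    for selector, cap in LIMITS:
        if all(user_labels.get(k) == v for k, v in selector.items()):
            return cap
    return None

# A user labelled level=gold gets a cap of 15 projects.
print(max_projects({"level": "gold"}))  # 15
```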
Moving the user out of the label, if required.
$ oc label user <user_name> level-
Adding a role to a user.
$ oadm policy add-role-to-user <role> <user_name>
Removing a role from a user.
$ oadm policy remove-role-from-user <role> <user_name>
Adding a cluster role to a user.
$ oadm policy add-cluster-role-to-user <role> <user_name>
Removing a cluster role from a user.
$ oadm policy remove-cluster-role-from-user <role> <user_name>
Adding a role to a group.
$ oadm policy add-role-to-group <role> <groupname>
Removing a role from a group.
$ oadm policy remove-role-from-group <role> <groupname>
Adding a cluster role to a group.
$ oadm policy add-cluster-role-to-group <role> <groupname>
Removing a cluster role from a group.
$ oadm policy remove-cluster-role-from-group <role> <groupname>