OpenShift Tutorial
OpenShift - Security
OpenShift security is mainly a combination of two components that handle security constraints.
- Security Context Constraints (SCC)
- Service Account
Security Context Constraints (SCC)
It is basically used for pod restriction, which means it defines the limitations of a pod − what actions it can perform and what it can access in the cluster.
OpenShift provides a set of predefined SCCs that can be used, modified, and extended by the administrator.
$ oc get scc
NAME PRIV CAPS HOSTDIR SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY
anyuid false [] false MustRunAs RunAsAny RunAsAny RunAsAny 10
hostaccess false [] true MustRunAs MustRunAsRange RunAsAny RunAsAny <none>
hostmount-anyuid false [] true MustRunAs RunAsAny RunAsAny RunAsAny <none>
nonroot false [] false MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <none>
privileged true [] true RunAsAny RunAsAny RunAsAny RunAsAny <none>
restricted false [] false MustRunAs MustRunAsRange RunAsAny RunAsAny <none>
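Any of these definitions can be inspected before being modified or extended; a minimal sketch, using the built-in restricted SCC as an example:

$ oc describe scc restricted
$ oc edit scc restricted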
If you wish to use any of the predefined SCCs, you can do so by simply adding the user or the group to the SCC.
$ oadm policy add-user-to-scc <scc_name> <user_name>
$ oadm policy add-group-to-scc <scc_name> <group_name>
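For example, to allow an existing user and group to run pods under the built-in anyuid SCC (the user and group names here are only placeholders):

$ oadm policy add-user-to-scc anyuid developer1
$ oadm policy add-group-to-scc anyuid dev-team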
Service Account
Service accounts are basically used to control access to the OpenShift master API, which gets called when a command or a request is fired from any of the master or node machines.
Any time an application or a process requires a capability that is not granted by the restricted SCC, you will have to create a specific service account and add the account to the respective SCC. However, if no existing SCC suits your requirement, it is better to create a new SCC specific to your requirement than to use the closest fit. Finally, set the service account in the deployment configuration, as shown below.
$ oc create serviceaccount Cadmin
$ oc adm policy add-scc-to-user <scc_name> -z Cadmin
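To make new pods run under this service account, reference it in the pod template of the deployment configuration. A minimal sketch, assuming a deployment configuration named myapp (the name is hypothetical):

spec:
  template:
    spec:
      serviceAccountName: Cadmin

On recent clients the same change can be applied with oc set serviceaccount dc/myapp Cadmin.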
Container Security
In OpenShift, container security is based on how secure the container platform is and where the containers are running. There are multiple things that come into the picture when we talk about container security and what needs to be taken care of.
Image Provenance − A secure labeling system is in place that identifies exactly and incontrovertibly where the containers running in the production environment came from.
Security Scanning − An image scanner automatically checks all the images for known vulnerabilities.
Auditing − The production environment is regularly audited to ensure all containers are based on up-to-date images, and both hosts and containers are securely configured.
Isolation and Least Privilege − Containers run with the minimum resources and privileges needed to function effectively. They are not able to unduly interfere with the host or other containers.
Runtime Threat Detection − A capability that detects active threats against containerized applications at runtime and automatically responds to them.
Access Controls − Linux security modules, such as AppArmor or SELinux, are used to enforce access controls.
There are a few key methods by which container security is achieved.
- Controlling access via OAuth
- Via self-service web console
- By certificates of the platform
Controlling Access via OAuth
In this method, authenticated access to the API is achieved by obtaining a secure token from the OAuth server, which comes built into the OpenShift master machine. As an administrator, you have the capability to modify the OAuth server configuration.
For more details on OAuth server configuration, refer to Chapter 5 of this tutorial.
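As an illustration of the token flow, a client first logs in to obtain an OAuth access token and then presents it as a bearer token on API requests. A minimal sketch, assuming a master reachable at https://master.example.com:8443 and a user named developer (both hypothetical; the API path shown is the OpenShift 3.x one):

$ oc login https://master.example.com:8443 -u developer
$ oc whoami -t        # prints the OAuth access token for the current session
$ curl -k -H "Authorization: Bearer <token>" https://master.example.com:8443/oapi/v1/projects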
Via Self-Service Web Console
This web console security feature is built into the OpenShift web console. The console ensures that all the teams working together cannot access other environments without authentication. The multi-tenant master in OpenShift has the following security features −
- TLS layer is enabled
- Uses x.509 certificates for authentication
- Secures the etcd configuration on the master machine
By Certificates of Platform
In this method, certificates for each host are configured during installation via Ansible. As it uses the HTTPS communication protocol via the REST API, we need TLS-secured connections to the different components and objects. These are predefined certificates; however, one can even have custom certificates installed on the master cluster for access. During the initial setup of the master, custom certificates can be configured by overriding the existing certificates using the openshift_master_overwrite_named_certificates parameter.
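When named certificates already exist on the master, that parameter can simply be enabled in the same Ansible inventory that carries the certificate paths; a minimal sketch:

openshift_master_overwrite_named_certificates = true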
Example
openshift_master_named_certificates = [{"certfile": "/path/on/host/to/master.crt",
"keyfile": "/path/on/host/to/master.key",
"cafile": "/path/on/host/to/mastercert.crt"}]
For more details on how to generate custom certificates, refer to the official OpenShift documentation.
Network Security
In OpenShift, Software Defined Networking (SDN) is used for communication. A network namespace is used for each pod in the cluster, wherein each pod gets its own IP address and a range of ports on which to receive network traffic. By this method, pods can be isolated so that they cannot communicate with pods in other projects.
Isolating a Project
This can be done by the cluster admin using the following oadm command from the CLI.
$ oadm pod-network isolate-projects <project name 1> <project name 2>
This means that the projects defined above cannot communicate with other projects in the cluster.
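The isolation can also be undone later. Assuming the multi-tenant SDN plugin is in use, the corresponding oadm pod-network subcommands rejoin projects or make a project reachable from every other project:

$ oadm pod-network join-projects --to=<project name 1> <project name 2>
$ oadm pod-network make-projects-global <project name 1>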
Volume Security
Volume security essentially means securing the PVs and PVCs of projects in an OpenShift cluster. There are mainly four sections that control access to volumes in OpenShift.
- Supplemental Groups
- fsGroup
- runAsUser
- seLinuxOptions
Supplemental Groups − Supplemental groups are regular Linux groups. When a process runs in the system, it runs with a user ID and group ID. These groups are used for controlling access to shared storage.
Check the NFS mount using the following command.
# showmount -e <nfs-server-ip-or-hostname>
Export list for f21-nfs.vm:
/opt/nfs *
Check NFS details on the mount server using the following command.
# cat /etc/exports
/opt/nfs *(rw,sync,no_root_squash)
...
# ls -lZ /opt/nfs -d
drwxrws---. nfsnobody 2325 unconfined_u:object_r:usr_t:s0 /opt/nfs
# id nfsnobody
uid=454265(nfsnobody) gid=454265(nfsnobody) groups=454265(nfsnobody)
The /opt/nfs/ export is therefore accessible to UID 454265 and to the group 2325. A pod can be granted access by declaring this group as a supplemental group in its definition, for example −
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - name: ...
    volumeMounts:
    - name: nfs
      mountPath: /usr/share/...
  securityContext:
    supplementalGroups: [2325]
  volumes:
  - name: nfs
    nfs:
      server: <nfs_server_ip_or_host>
      path: /opt/nfs
fsGroup
fsGroup stands for the file system group, which is used for adding container supplemental groups. Supplemental group IDs are used for shared storage, while fsGroup is used for block storage.
kind: Pod
spec:
  containers:
  - name: ...
  securityContext:
    fsGroup: 2325
runAsUser
runAsUser uses the user ID for communication. It is used when defining the container image in the pod definition. A single user ID can be used for all containers, if required.
While running the container, the defined ID is matched with the owner ID on the export. If the ID is specified at the pod level, it becomes global to all the containers in the pod; if it is specified on a particular container, it applies only to that single container.
spec:
  containers:
  - name: ...
    securityContext:
      runAsUser: 454265
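For contrast, a minimal sketch of the pod-level form, where the same user ID applies to every container in the pod:

spec:
  securityContext:
    runAsUser: 454265
  containers:
  - name: ...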