Linux Admin - Resource Management with cgroups
cgroups, or Control Groups, are a feature of the Linux kernel that allows an administrator to allocate or cap system resources for services and for groups of processes.
To list the active control groups on a running system, we can use the following ps command −
[root@localhost]# ps xawf -eo pid,user,cgroup,args
8362 root - \_ [kworker/1:2]
1 root - /usr/lib/systemd/systemd --switched-root --system --deserialize 21
507 root 7:cpuacct,cpu:/system.slice /usr/lib/systemd/systemd-journald
527 root 7:cpuacct,cpu:/system.slice /usr/sbin/lvmetad -f
540 root 7:cpuacct,cpu:/system.slice /usr/lib/systemd/systemd-udevd
715 root 7:cpuacct,cpu:/system.slice /sbin/auditd -n
731 root 7:cpuacct,cpu:/system.slice \_ /sbin/audispd
734 root 7:cpuacct,cpu:/system.slice \_ /usr/sbin/sedispatch
737 polkitd 7:cpuacct,cpu:/system.slice /usr/lib/polkit-1/polkitd --no-debug
738 rtkit 6:memory:/system.slice/rtki /usr/libexec/rtkit-daemon
740 dbus 7:cpuacct,cpu:/system.slice /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
Resource management, as of CentOS 7, has been redefined with the systemd init implementation. When thinking about resource management for services, the main thing to focus on is cgroups. cgroups have advanced with systemd in both functionality and simplicity.
The goal of cgroups in resource management is that no single service can take the system down as a whole, and that no single service process (perhaps a poorly written PHP script) can cripple server functionality by consuming too many resources.
cgroups allow resource control of units for the following resources −
- CPU − Limit CPU-intensive tasks that are not as critical as other, less intensive tasks
- Memory − Limit how much memory a service can consume
- Disks − Limit disk I/O
CPU Time
Tasks needing lower CPU priority can be given custom-configured CPU slices.
For example, let's take a look at the following two services.
Polite CPU Service 1
[root@localhost]# systemctl cat polite.service
# /etc/systemd/system/polite.service
[Unit]
Description = Polite service limits CPU Slice and Memory
After=remote-fs.target nss-lookup.target
[Service]
MemoryLimit = 1M
ExecStart = /usr/bin/sha1sum /dev/zero
ExecStop = /bin/kill -WINCH ${MAINPID}
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/polite.service.d/50-CPUShares.conf
[Service]
CPUShares = 1024
[root@localhost]#
Evil CPU Service 2
[root@localhost]# systemctl cat evil.service
# /etc/systemd/system/evil.service
[Unit]
Description = I Eat Your CPU
After=remote-fs.target nss-lookup.target
[Service]
ExecStart = /usr/bin/md5sum /dev/zero
ExecStop = /bin/kill -WINCH ${MAINPID}
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/evil.service.d/50-CPUShares.conf
[Service]
CPUShares = 1024
[root@localhost]#
Let's set the Polite service to use a lower CPU priority −
systemctl set-property polite.service CPUShares=20
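To compare the two services side by side, a cgroup monitor can be used; systemd-cgtop, which ships with systemd, produces output of the kind shown below. The commands here are a sketch that assumes both unit files from above are already installed:

# Start both CPU-hungry services (assumes polite.service and evil.service exist as shown above)
systemctl start polite.service evil.service

# Watch per-cgroup CPU and memory usage live; press 'q' to quit
systemd-cgtop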
/system.slice/polite.service   1   70.5   124.0K   -   -
/system.slice/evil.service     1   99.5   304.0K   -   -
As we can see, over a period of normal system idle time, both rogue processes are still using CPU cycles. However, the one set to have fewer time-slices is using less CPU time. With this in mind, we can see how giving a service fewer time-slices allows essential tasks better access to system resources.
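Conversely, shares can be raised above the default to favour a critical service. The one-liner below is purely illustrative; sshd.service stands in for any essential unit, and 2048 simply doubles the default weight of 1024:

# Give an essential service twice the default CPU weight (default CPUShares is 1024)
systemctl set-property sshd.service CPUShares=2048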
To set limits for each resource on a service, the set-property method uses the following syntax −
systemctl set-property name parameter=value
Resource | set-property Parameter
CPU Slices | CPUShares
Memory Limit | MemoryLimit
Soft Memory Limit | MemorySoftLimit
Block IO Weight | BlockIOWeight
Block Device Limit (specified in /volume/path) | BlockIODeviceWeight
Disk Read IO | BlockIOReadBandwidth
Disk Write IO | BlockIOWriteBandwidth
Most often, services will be limited by CPU use, memory limits, and read/write IO.
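As a rough sketch of how these fit together, the commands below throttle a hypothetical foo.service on all three fronts; the unit name, device path, and values are only placeholders, not values taken from this tutorial:

# Halve the CPU weight relative to the default of 1024
systemctl set-property foo.service CPUShares=512

# Cap the memory the service's cgroup may consume
systemctl set-property foo.service MemoryLimit=512M

# Limit read bandwidth on a specific block device (device path is illustrative)
systemctl set-property foo.service "BlockIOReadBandwidth=/dev/sda 5M"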
After changing each property, it is necessary to reload systemd and restart the service −
systemctl set-property foo.service CPUShares=250
systemctl daemon-reload
systemctl restart foo.service
Configure CGroups in CentOS Linux
To make custom cgroups in CentOS Linux, we first need to install the services and configure them.
Step 1 − Install libcgroup (if not already installed).
[root@localhost]# yum install libcgroup
Package libcgroup-0.41-11.el7.x86_64 already installed and latest version
Nothing to do
[root@localhost]#
As we can see, by default CentOS 7 has libcgroup installed when using the "everything" installer. Using a minimal installer will require us to install the libcgroup utilities along with any dependencies.
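On CentOS 7 the command-line utilities (cgcreate, cgexec, cgclassify, and the cgconfig service used below) live in a separate package, so a minimal install would typically need something like the following:

# Pull in both the library and the user-space tools on a minimal CentOS 7 system
yum install libcgroup libcgroup-tools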
Step 2 − Start and enable the cgconfig service.
[root@localhost]# systemctl enable cgconfig
Created symlink from /etc/systemd/system/sysinit.target.wants/cgconfig.service to /usr/lib/systemd/system/cgconfig.service.
[root@localhost]# systemctl start cgconfig
[root@localhost]# systemctl status cgconfig
● cgconfig.service - Control Group configuration service
   Loaded: loaded (/usr/lib/systemd/system/cgconfig.service; enabled; vendor preset: disabled)
   Active: active (exited) since Mon 2017-01-23 02:51:42 EST; 1min 21s ago
 Main PID: 4692 (code=exited, status=0/SUCCESS)
   Memory: 0B
   CGroup: /system.slice/cgconfig.service

Jan 23 02:51:42 localhost.localdomain systemd[1]: Starting Control Group configuration service...
Jan 23 02:51:42 localhost.localdomain systemd[1]: Started Control Group configuration service.
[root@localhost]#
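With cgconfig running, custom groups are described in /etc/cgconfig.conf using the syntax shipped with libcgroup. The snippet below is only a sketch; the group name "limited" and the limit values are placeholders, not values from this tutorial:

# /etc/cgconfig.conf - define a custom cgroup named "limited" (illustrative)
group limited {
    cpu {
        # Relative CPU weight, analogous to CPUShares
        cpu.shares = 200;
    }
    memory {
        # Hard memory cap of 256 MB for tasks placed in this group
        memory.limit_in_bytes = 268435456;
    }
}

After editing the file, restart the cgconfig service (systemctl restart cgconfig) so the new hierarchy is applied.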