Table of Contents
docker (4): Docker Security
Understanding Docker Security
Containers and Resources
Controlling Docker Container Processes (Resources)
docker (4): Docker Security
Why are resource isolation and limits even more important in the cloud era? By default, every process running on an operating system shares the same CPU and memory. If a program is badly designed, then in the worst case a process stuck in an infinite loop can exhaust the CPU, or a memory leak can eat most of the system's resources. For enterprise-grade products this is unacceptable, so process-level resource isolation is essential. Linux supports virtualization at the operating-system level through a technology called Linux Containers, the LXC you see referenced everywhere.
LXC rests on three pillars: cgroups, namespaces, and unionFS.
Cgroup: a cgroup (control group) is responsible for resource control. The idea is to place a group of processes into one control group and assign that group a fixed amount of resources, thereby limiting what the whole group of processes can consume.
Namespace: a namespace provides access isolation. It abstracts a class of resources and wraps them up for a single container; because every container gets its own abstraction and these abstractions are invisible to one another, access isolation is achieved.
unionFS: as the name suggests, a union filesystem mounts the contents of several directories (also called branches) onto a single directory, while the directories themselves stay in separate physical locations.
To understand unionFS, we first need to know bootfs and rootfs.
1. boot file system (bootfs): contains the operating system's boot loader and kernel. Users never modify this filesystem. Once boot completes and the Linux kernel is loaded into memory, bootfs is unmounted to free that memory. For the same kernel version, the bootfs of different Linux distributions is identical.
2. root file system (rootfs): contains the typical directory structure, including /dev, /proc, /bin, /etc, /lib, /usr, and /tmp.
This is in fact the technical foundation of Docker's layered images. If you browse Docker Hub you will find that most images are not built from scratch but on top of a base image, for example the debian base image.
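To make the union-mount idea concrete, here is a minimal sketch using overlayfs, the union filesystem most modern Docker installations use as their storage driver; the directory names are invented for the example:
mkdir -p lower upper work merged                 # lower = read-only "image" layer, upper = writable layer
echo "from the image layer" > lower/a.txt
mount -t overlay overlay -o lowerdir=lower,upperdir=upper,workdir=work merged
cat merged/a.txt                                 # the file from the lower layer is visible in the merged view
echo "written at run time" > merged/b.txt
ls upper/                                        # b.txt: new writes land only in the upper (writable) layer
This is the same copy-on-write behaviour a running container sees on top of its image layers.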
Understanding Docker Security
The security of Docker containers depends to a large extent on the Linux system itself. When assessing Docker's security, the main aspects to consider are:
the container isolation provided by the Linux kernel's namespace mechanism;
the resource control provided by Linux control groups (cgroups);
the operation-permission safety provided by the kernel's capability mechanism;
the attack resistance of the Docker program itself (especially the daemon);
the effect of other hardening mechanisms on container security.
Namespace isolation: when docker run starts a container, Docker creates a set of independent namespaces for it in the background; namespaces provide the most basic and most direct isolation. Compared with virtual machines, the isolation achieved through Linux namespaces is not as thorough: a container is only a special kind of process running on the host, so multiple containers still share the same host kernel. Moreover, many resources and objects in the Linux kernel cannot be namespaced, for example the system time.
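A quick way to see how thin this isolation is compared with a virtual machine (a small sketch; the image is only an example):
uname -r                                 # kernel version on the host
docker run --rm ubuntu uname -r          # the exact same version: the container runs on the host kernel
docker run --rm ubuntu date -u           # same clock as "date -u" on the host: the system time is not namespaced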
Containers and Resources
[root@docker docker]# docker run -it --name vm1 ubuntu
root@06a3c366c16b:/#
root@06a3c366c16b:/#
[root@docker docker]# docker ps
CONTAINER ID        IMAGE     COMMAND       CREATED          STATUS          PORTS     NAMES
06a3c366c16b        ubuntu    "/bin/bash"   14 seconds ago   Up 13 seconds             vm1
[root@docker docker]# docker inspect vm1 | grep -i pid
            "Pid": 1504,
            "PidMode": "",
            "PidsLimit": 0,
[root@docker docker]# ps aux | grep 1504
root       1504  0.0  0.1  18164  1960 pts/0    Ss+  10:43   0:00 /bin/bash
root       1589  0.0  0.1 112704  1024 pts/0    R+   10:44   0:00 grep --color=auto 1504
[root@docker docker]# cd /proc/1504/
[root@docker 1504]# ls
attr cwd map_files oom_adj schedstat task
autogroup environ maps oom_score sessionid timers
auxv exe mem oom_score_adj setgroups uid_map
cgroup fd mountinfo pagemap smaps wchan
clear_refs fdinfo mounts patch_state stack
cmdline gid_map mountstats personality stat
comm io net projid_map statm
coredump_filter limits ns root status
cpuset loginuid numa_maps sched syscall
[root@docker 1504]# cd ns/
"""
namespace 主要用作环境的隔离,主要有以下namespace:
UTS: 主机名与域名
IPC: 信号量、消息队列和共享内存
PID: 进程编号
Network:网络设备、网络栈、端口等等
Mount: 挂载点
User: 用户和用户组
"""
[root@docker ns]# ls
ipc mnt net pid user uts
Control group (cgroup) resource control security
When docker run starts a container, Docker creates an independent set of control-group policies for it in the background.
Linux cgroups provide many useful features that ensure each container shares the host's memory, CPU, disk I/O and other resources fairly.
They also ensure that resource pressure inside one container cannot spill over to the host or to other containers, which is essential for fending off denial-of-service (DoS) attacks.
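You can see which control groups a container's processes were placed in directly from inside the container; a minimal check, assuming the cgroup v1 layout used throughout this article:
docker run --rm ubuntu cat /proc/self/cgroup     # every controller line ends in /docker/<container-id>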
[root@docker ns]# mount -t cgroup
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
[root@docker ns]# cd /sys/fs/cgroup/
[root@docker cgroup]# ls
blkio cpu,cpuacct freezer net_cls perf_event
cpu cpuset hugetlb net_cls,net_prio pids
cpuacct devices memory net_prio systemd
[root@docker cgroup]# cd cpu/
[root@docker cpu]# ls
cgroup.clone_children cpuacct.usage cpu.rt_runtime_us release_agent
cgroup.event_control cpuacct.usage_percpu cpu.shares system.slice
cgroup.procs cpu.cfs_period_us cpu.stat tasks
cgroup.sane_behavior cpu.cfs_quota_us docker user.slice
cpuacct.stat cpu.rt_period_us notify_on_release
[root@docker cpu]# cd docker/
[root@docker docker]# ls
06a3c366c16be682286ca31da43026615db1b053d394fb0f8b08dc9c9126ae47
cgroup.clone_children
cgroup.event_control
cgroup.procs
cpuacct.stat
cpuacct.usage
cpuacct.usage_percpu
cpu.cfs_period_us
cpu.cfs_quota_us
cpu.rt_period_us
cpu.rt_runtime_us
cpu.shares
cpu.stat
notify_on_release
tasks
Kernel capability mechanism
The capability mechanism is a powerful feature of the Linux kernel that provides fine-grained access control.
In most cases a container does not need "real" root privileges; a small set of capabilities is enough.
By default, Docker takes a "whitelist" approach and drops every capability that is not strictly required.
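One way to inspect the capability set a container actually receives; a small sketch (the exact default bounding set depends on the Docker version):
docker run --rm ubuntu grep Cap /proc/self/status                   # CapPrm/CapEff/CapBnd: the granted capability masks
docker run --rm --cap-drop ALL ubuntu grep Cap /proc/self/status    # with everything dropped, CapEff/CapBnd fall to all zeros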
[root@docker docker]# docker attach vm1
root@06a3c366c16b:/#
root@06a3c366c16b:/#
root@06a3c366c16b:/# id
uid=0(root) gid=0(root) groups=0(root)
root@06a3c366c16b:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
root@06a3c366c16b:/# ip link set eth0 down
RTNETLINK answers: Operation not permitted
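If a container genuinely needs to change its network configuration, the missing capability can be granted explicitly; a sketch that starts a fresh container rather than modifying vm1:
docker run -it --rm --cap-add NET_ADMIN ubuntu
# inside this container, commands such as "ip link set eth0 down" now succeed
# instead of failing with "RTNETLINK answers: Operation not permitted"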
Docker daemon protection
The core of running Docker containers is the Docker daemon; make sure that only trusted users can access the Docker service.
Map the container's root user to a non-root user on the host, to mitigate security problems caused by privilege escalation between container and host.
Allow the Docker daemon to run without root privileges, and delegate the operations that do need privileges to safe, well-audited child processes that may only act within a narrowly defined scope.
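The root-to-non-root mapping mentioned above is Docker's user-namespace remapping; a minimal sketch of turning it on (the daemon must be restarted afterwards):
# /etc/docker/daemon.json
{
  "userns-remap": "default"
}
systemctl restart docker      # uid 0 inside containers now maps to the unprivileged "dockremap" user on the host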
Other security features
Enable GRSEC and PaX in the kernel. This adds extra compile-time and run-time security checks and uses address randomization to defeat malicious probing (enabling this feature requires no configuration on the Docker side).
Use container templates that come with enhanced security features.
Users can define stricter access-control mechanisms of their own to customize the security policy.
When mounting host filesystems into a container, configure them as read-only so that applications inside the container cannot tamper with the outside environment through the filesystem, in particular directories that hold system runtime state.
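Two ways to apply the read-only idea from the last point (the paths are only examples):
docker run -it -v /etc/localtime:/etc/localtime:ro ubuntu    # mount a single host file read-only (:ro)
docker run -it --read-only ubuntu                            # make the whole container root filesystem read-only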
Container resource control
What is a Linux cgroup?
The interface that Linux cgroups expose to users is a filesystem: it is organized as files and directories under /sys/fs/cgroup.
You can inspect it with: mount -t cgroup
Under /sys/fs/cgroup there are subdirectories such as cpuset, cpu and memory; these are also called subsystems.
Under each subsystem, a control group is created for every container (that is, a new directory is created).
The values written into the resource files inside that control group come from the options the user passes to docker run.
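A quick way to confirm that a docker run option really just ends up in one of these files; a sketch assuming the cgroup v1 paths shown above:
cid=$(docker run -d --cpu-quota 20000 ubuntu sleep 1000)
cat /sys/fs/cgroup/cpu/docker/$cid/cpu.cfs_quota_us    # prints 20000, exactly the value passed on the command line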
[root@docker cgroup]# ls
blkio(controls disk I/O)  cpu,cpuacct  freezer  net_cls           perf_event
cpu                       cpuset       hugetlb  net_cls,net_prio  pids
cpuacct                   devices      memory   net_prio          systemd
[root@docker cgroup]# pwd
/sys/fs/cgroup
[root@docker cgroup]# cd cpu/
[root@docker cpu]# ls
cgroup.clone_children cpuacct.usage cpu.rt_runtime_us release_agent
cgroup.event_control cpuacct.usage_percpu cpu.shares system.slice
cgroup.procs cpu.cfs_period_us cpu.stat tasks
cgroup.sane_behavior cpu.cfs_quota_us docker user.slice
cpuacct.stat cpu.rt_period_us notify_on_release
[root@docker cpu]# mkdir x1
[root@docker cpu]# cd x1/
[root@docker x1]# ls
cgroup.clone_children cpuacct.usage_percpu cpu.shares
cgroup.event_control cpu.cfs_period_us cpu.stat
cgroup.procs cpu.cfs_quota_us notify_on_release
cpuacct.stat cpu.rt_period_us tasks
cpuacct.usage cpu.rt_runtime_us
[root@docker x1]# cat cpu.cfs_period_us
100000        (microseconds)
[root@docker x1]# cat cpu.cfs_quota_us
-1
[root@docker x1]# echo 20000 > cpu.cfs_quota_us
[root@docker x1]# cat cpu.cfs_quota_us
20000
[root@docker x1]# dd if=/dev/zero of=/dev/null &
[1] 12572
12572 root      20   0  107992    608    516 R 99.9  0.1   0:13.84 dd
[root@docker x1]# ls
cgroup.clone_children cpuacct.usage_percpu cpu.shares
cgroup.event_control cpu.cfs_period_us cpu.stat
cgroup.procs cpu.cfs_quota_us notify_on_release
cpuacct.stat cpu.rt_period_us tasks
cpuacct.usage cpu.rt_runtime_us
[root@docker x1]# echo 12572 > tasks
12572 root      20   0  107992    608    516 R 20.0  0.1   1:40.57 dd
Controlling Docker container processes (resources)
[root@docker ~]# docker run --help | grep cpu
      --cpu-period int                 Limit CPU CFS (Completely Fair Scheduler) period
      --cpu-quota int                  Limit CPU CFS (Completely Fair Scheduler) quota
      --cpu-rt-period int              Limit CPU real-time period in microseconds
      --cpu-rt-runtime int             Limit CPU real-time runtime in microseconds
  -c, --cpu-shares int                 CPU shares (relative weight)
      --cpus decimal                   Number of CPUs
      --cpuset-cpus string             CPUs in which to allow execution
      --cpuset-mems string             MEMs in which to allow execution
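Note that the newer --cpus flag is only a shorthand for the period/quota pair demonstrated below; for example:
docker run -it --cpus 0.2 ubuntu      # equivalent to --cpu-period 100000 --cpu-quota 20000, i.e. at most 20% of one CPU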
[root@docker ~]# docker run -it --cpu-period 1000000 --cpu-quota 20000 ubuntu
root@d3b1336b2adb:/#
[root@docker docker]# cd d3b1336b2adb72c766227795eaa03dde575143abcbfda939c9b79a19b8839622/
[root@docker d3b1336b2adb72c766227795eaa03dde575143abcbfda939c9b79a19b8839622]# ls
cgroup.clone_children cpuacct.usage_percpu cpu.shares
cgroup.event_control cpu.cfs_period_us cpu.stat
cgroup.procs cpu.cfs_quota_us notify_on_release
cpuacct.stat cpu.rt_period_us tasks
cpuacct.usage cpu.rt_runtime_us
[root@docker d3b1336b2adb72c766227795eaa03dde575143abcbfda939c9b79a19b8839622]# cat cpu.cfs_period_us
1000000
[root@docker d3b1336b2adb72c766227795eaa03dde575143abcbfda939c9b79a19b8839622]# cat cpu.cfs_quota_us
20000
root@d3b1336b2adb:/# dd if=/dev/zero of=/dev/null &
12877 root      20   0    4364    360    280 R 20.0  0.0   0:03.65 dd
The memory available to a container consists of two parts: physical memory (used first) and the swap partition.
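These two parts map onto two docker run flags; how they interact (this is documented Docker behaviour, the sizes are only examples):
docker run -it --memory 256m ubuntu                      # RAM capped at 256M; swap defaults to another 256M (512M total)
docker run -it --memory 256m --memory-swap 256m ubuntu   # RAM + swap together capped at 256M, so swap is effectively off
docker run -it --memory 256m --memory-swap -1 ubuntu     # RAM capped at 256M, unlimited swap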
[root@docker ~]# cd /sys/fs/cgroup/
[root@docker cgroup]# ls
blkio cpu,cpuacct freezer net_cls perf_event
cpu cpuset hugetlb net_cls,net_prio pids
cpuacct devices memory net_prio systemd
[root@docker cgroup]# cd memory/
[root@docker memory]# pwd
/sys/fs/cgroup/memory
[root@docker memory]# ls
cgroup.clone_children memory.memsw.failcnt
cgroup.event_control memory.memsw.limit_in_bytes
cgroup.procs memory.memsw.max_usage_in_bytes
cgroup.sane_behavior memory.memsw.usage_in_bytes
docker memory.move_charge_at_immigrate
memory.failcnt memory.numa_stat
memory.force_empty memory.oom_control
memory.kmem.failcnt memory.pressure_level
memory.kmem.limit_in_bytes memory.soft_limit_in_bytes
memory.kmem.max_usage_in_bytes memory.stat
memory.kmem.slabinfo memory.swappiness
memory.kmem.tcp.failcnt memory.usage_in_bytes
memory.kmem.tcp.limit_in_bytes memory.use_hierarchy
memory.kmem.tcp.max_usage_in_bytes notify_on_release
memory.kmem.tcp.usage_in_bytes release_agent
memory.kmem.usage_in_bytes system.slice
memory.limit_in_bytes tasks
memory.max_usage_in_bytes user.slice
[root@docker memory]# mkdir x2
[root@docker memory]# cd x2/
[root@docker x2]# ls
cgroup.clone_children memory.memsw.failcnt
cgroup.event_control memory.memsw.limit_in_bytes
cgroup.procs memory.memsw.max_usage_in_bytes
memory.failcnt memory.memsw.usage_in_bytes
memory.force_empty memory.move_charge_at_immigrate
memory.kmem.failcnt memory.numa_stat
memory.kmem.limit_in_bytes memory.oom_control
memory.kmem.max_usage_in_bytes memory.pressure_level
memory.kmem.slabinfo memory.soft_limit_in_bytes
memory.kmem.tcp.failcnt memory.stat
memory.kmem.tcp.limit_in_bytes memory.swappiness
memory.kmem.tcp.max_usage_in_bytes memory.usage_in_bytes
memory.kmem.tcp.usage_in_bytes memory.use_hierarchy
memory.kmem.usage_in_bytes notify_on_release
memory.limit_in_bytes tasks
memory.max_usage_in_bytes
[root@docker x2]# cat memory.limit_in_bytes
9223372036854771712
"""
Byte conversion (256M in bytes):
[root@foundation0 ~]# bc
bc 1.06.95
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
256 * 1024 * 1024
268435456
"""
[root@docker x2]# cat memory.memsw.limit_in_bytes
9223372036854771712
[root@docker x2]# echo 268435456 > memory.limit_in_bytes
[root@docker x2]# cat memory.limit_in_bytes
268435456
[root@docker x2]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/rhel-root 17811456 5025844 12785612 29% /
devtmpfs 495544 0 495544 0% /dev
tmpfs 507780 0 507780 0% /dev/shm
tmpfs 507780 13228 494552 3% /run
tmpfs 507780 0 507780 0% /sys/fs/cgroup
/dev/sda1 1038336 132704 905632 13% /boot
tmpfs 101560 0 101560 0% /run/user/0
[root@docker x2]# free -m
              total        used        free      shared  buff/cache   available
Mem:            991         138         176          12         676         652
Swap:          2047           0        2047
[root@docker x2]# cd /dev/shm/
[root@docker shm]# dd if=/dev/zero of=bigfile bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.0328056 s, 3.2 GB/s
[root@docker shm]# free -m
              total        used        free      shared  buff/cache   available
Mem:            991         138          77         112         776         553
Swap:          2047           0        2047
[root@docker shm]# dd if=/dev/zero of=bigfile bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.101575 s, 2.1 GB/s
[root@docker shm]# free -m
              total        used        free      shared  buff/cache   available
Mem:            991         135          72         210         783         458
Swap:          2047           2        2045
[root@docker shm]# dd if=/dev/zero of=bigfile bs=1M count=300
300+0 records in
300+0 records out
314572800 bytes (315 MB) copied, 0.135642 s, 2.3 GB/s
[root@docker shm]# free -m
              total        used        free      shared  buff/cache   available
Mem:            991         134          83         309         773         363
Swap:          2047           3        2044
[root@docker shm]# id
uid=0(root) gid=0(root) groups=0(root)
[root@docker shm]# cd /sys/fs/cgroup/memory/
[root@docker memory]# cd x2/
[root@docker x2]# echo $$ > tasks
[root@docker x2]# cd ..
[root@docker memory]# pwd
/sys/fs/cgroup/memory
[root@docker memory]# cd /dev/shm/
[root@docker shm]# dd if=/dev/zero of=bigfile bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.0366051 s, 2.9 GB/s
[root@docker shm]# dd if=/dev/zero of=bigfile bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.0755738 s, 2.8 GB/s
[root@docker shm]# dd if=/dev/zero of=bigfile bs=1M count=300
300+0 records in
300+0 records out
314572800 bytes (315 MB) copied, 0.352022 s, 894 MB/s
[root@docker shm]# free -m
              total        used        free      shared  buff/cache   available
Mem:            991         191         126         209         674         405
Swap:          2047         103        1944
[root@docker shm]# dd if=/dev/zero of=bigfile bs=1M count=400
400+0 records in
400+0 records out
419430400 bytes (419 MB) copied, 0.33249 s, 1.3 GB/s
[root@docker shm]# free -m
              total        used        free      shared  buff/cache   available
Mem:            991         134         126         266         730         405
Swap:          2047         146        1901
[root@docker shm]# dd if=/dev/zero of=bigfile bs=1M count=450
450+0 records in
450+0 records out
471859200 bytes (472 MB) copied, 0.411024 s, 1.1 GB/s
[root@docker shm]# free -m
              total        used        free      shared  buff/cache   available
Mem:            991         134         125         267         731         405
[root@docker x2]# cat memory.limit_in_bytes
268435456
[root@docker x2]# echo 268435456 > memory.memsw.limit_in_bytes
-bash: echo: write error: Device or resource busy
[root@docker x2]# cd ..
[root@docker memory]# pwd
/sys/fs/cgroup/memory
[root@docker memory]# ls
cgroup.clone_children memory.memsw.limit_in_bytes
cgroup.event_control memory.memsw.max_usage_in_bytes
cgroup.procs memory.memsw.usage_in_bytes
cgroup.sane_behavior memory.move_charge_at_immigrate
docker memory.numa_stat
memory.failcnt memory.oom_control
memory.force_empty memory.pressure_level
memory.kmem.failcnt memory.soft_limit_in_bytes
memory.kmem.limit_in_bytes memory.stat
memory.kmem.max_usage_in_bytes memory.swappiness
memory.kmem.slabinfo memory.usage_in_bytes
memory.kmem.tcp.failcnt memory.use_hierarchy
memory.kmem.tcp.limit_in_bytes notify_on_release
memory.kmem.tcp.max_usage_in_bytes release_agent
memory.kmem.tcp.usage_in_bytes system.slice
memory.kmem.usage_in_bytes tasks
memory.limit_in_bytes user.slice
memory.max_usage_in_bytes x2
memory.memsw.failcnt
[root@docker memory]# cd /dev/shm/
[root@docker shm]# ls
bigfile
[root@docker shm]# rm -f bigfile
[root@docker shm]# cd /sys/fs/cgroup/memory/
[root@docker memory]# pwd
/sys/fs/cgroup/memory
[root@docker memory]# ls
cgroup.clone_children memory.memsw.limit_in_bytes
cgroup.event_control memory.memsw.max_usage_in_bytes
cgroup.procs memory.memsw.usage_in_bytes
cgroup.sane_behavior memory.move_charge_at_immigrate
docker memory.numa_stat
memory.failcnt memory.oom_control
memory.force_empty memory.pressure_level
memory.kmem.failcnt memory.soft_limit_in_bytes
memory.kmem.limit_in_bytes memory.stat
memory.kmem.max_usage_in_bytes memory.swappiness
memory.kmem.slabinfo memory.usage_in_bytes
memory.kmem.tcp.failcnt memory.use_hierarchy
memory.kmem.tcp.limit_in_bytes notify_on_release
memory.kmem.tcp.max_usage_in_bytes release_agent
memory.kmem.tcp.usage_in_bytes system.slice
memory.kmem.usage_in_bytes tasks
memory.limit_in_bytes user.slice
memory.max_usage_in_bytes x2
memory.memsw.failcnt
[root@docker memory]# cd x2/
[root@docker x2]# echo 268435456 > memory.memsw.limit_in_bytes
The settings above mean that physical memory and swap together may use no more than 256M.
[root@docker shm]# dd if=/dev/zero of=bigfile bs=1M count=300
Killed
[root@docker shm]# free -m
              total        used        free      shared  buff/cache   available
Mem:            991         133         126         267         731         405
Swap:          2047           0        2047
The Docker equivalent of these settings is:
docker run -it --memory 256M --memory-swap=256M ubuntu
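To confirm that these flags land in the same two cgroup files that were set by hand above, they can be read back for the new container; a sketch assuming the cgroup v1 paths:
cid=$(docker run -d --memory 256M --memory-swap 256M ubuntu sleep 1000)
cat /sys/fs/cgroup/memory/docker/$cid/memory.limit_in_bytes         # 268435456
cat /sys/fs/cgroup/memory/docker/$cid/memory.memsw.limit_in_bytes   # 268435456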
Block IO
blkio.throttle.read_bps_device
blkio.throttle.read_iops_device
blkio.throttle.write_bps_device
blkio.throttle.write_iops_device
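Each of these files takes lines of the form "major:minor value": bytes per second for the *_bps files and operations per second for the *_iops files. For example (the device numbers are only illustrative):
echo "252:0 100" > /sys/fs/cgroup/blkio/blkio.throttle.write_iops_device   # cap writes to device 252:0 at 100 IOPS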
[root@docker blkio]# docker run --help | grep device
      --blkio-weight-device list       Block IO weight (relative device weight) (default [])
      --device list                    Add a host device to the container
      --device-cgroup-rule list        Add a rule to the cgroup allowed devices list
      --device-read-bps list           Limit read rate (bytes per second) from a device (default [])
      --device-read-iops list          Limit read rate (IO per second) from a device (default [])
      --device-write-bps list          Limit write rate (bytes per second) to a device (default [])
      --device-write-iops list         Limit write rate (IO per second) to a device (default [])
[root@docker blkio]# ll /dev/vda
brw-rw---- 1 root disk 252, 0 Oct 24 15:41 /dev/vda        (252, 0 are the device major and minor numbers)
[root@docker blkio]# echo "252:0 1048576" > blkio.throttle.write_bps_device
[root@docker blkio]# cat blkio.throttle.write_bps_device
252:0 1048576
[root@docker ~]# dd if=/dev/zero of=bigfile bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.00381529 s, 2.7 GB/s
[root@docker blkio]# ll /dev/vda
brw-rw---- 1 root disk 252, 0 Oct 24 15:48 /dev/vda
[root@docker blkio]# dd if=/dev/zero of=bigfile bs=1M count=10 oflag=direct
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 10.0025 s, 1.0 MB/s
[root@docker blkio]# docker run -it --device-write-bps /dev/sda:1mb ubuntu
root@b9e61aad4a99:/# dd if=/dev/zero of=bigfile bs=1M count=10 oflag=direct
"""
In direct mode, the write request is wrapped into an I/O command and sent straight to the disk.
In non-direct mode, the data is written to the system cache, the I/O is then considered successful, and the operating system decides when the cached data is actually written to disk.
"""
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 10.0021 s, 1.0 MB/s
[root@docker ~]# cd /sys/fs/cgroup/
[root@docker cgroup]# ls
blkio cpu,cpuacct freezer net_cls perf_event
cpu cpuset hugetlb net_cls,net_prio pids
cpuacct devices memory net_prio systemd
[root@docker cgroup]# cd blkio/
[root@docker blkio]# ls
blkio.io_merged blkio.throttle.read_bps_device
blkio.io_merged_recursive blkio.throttle.read_iops_device
blkio.io_queued blkio.throttle.write_bps_device
blkio.io_queued_recursive blkio.throttle.write_iops_device
blkio.io_service_bytes blkio.time
blkio.io_service_bytes_recursive blkio.time_recursive
blkio.io_serviced blkio.weight
blkio.io_serviced_recursive blkio.weight_device
blkio.io_service_time cgroup.clone_children
blkio.io_service_time_recursive cgroup.event_control
blkio.io_wait_time cgroup.procs
blkio.io_wait_time_recursive cgroup.sane_behavior
blkio.leaf_weight docker
blkio.leaf_weight_device notify_on_release
blkio.reset_stats release_agent
blkio.sectors system.slice
blkio.sectors_recursive tasks
blkio.throttle.io_service_bytes user.slice
blkio.throttle.io_serviced x3
[root@docker blkio]# cd docker/
[root@docker docker]# ls
b9e61aad4a99aedcb42c9e827c82909a2ff6efac705055b6d5fbedc990eb9e07
blkio.io_merged
blkio.io_merged_recursive
blkio.io_queued
blkio.io_queued_recursive
blkio.io_service_bytes
blkio.io_service_bytes_recursive
blkio.io_serviced
blkio.io_serviced_recursive
blkio.io_service_time
blkio.io_service_time_recursive
blkio.io_wait_time
blkio.io_wait_time_recursive
blkio.leaf_weight
blkio.leaf_weight_device
blkio.reset_stats
blkio.sectors
blkio.sectors_recursive
blkio.throttle.io_service_bytes
blkio.throttle.io_serviced
blkio.throttle.read_bps_device
blkio.throttle.read_iops_device
blkio.throttle.write_bps_device
blkio.throttle.write_iops_device
blkio.time
blkio.time_recursive
blkio.weight
blkio.weight_device
cgroup.clone_children
cgroup.event_control
cgroup.procs
notify_on_release
tasks
[root@docker docker]# cd b9e61aad4a99aedcb42c9e827c82909a2ff6efac705055b6d5fbedc990eb9e07/
[root@docker b9e61aad4a99aedcb42c9e827c82909a2ff6efac705055b6d5fbedc990eb9e07]# ls
blkio.io_merged blkio.sectors_recursive
blkio.io_merged_recursive blkio.throttle.io_service_bytes
blkio.io_queued blkio.throttle.io_serviced
blkio.io_queued_recursive blkio.throttle.read_bps_device
blkio.io_service_bytes blkio.throttle.read_iops_device
blkio.io_service_bytes_recursive blkio.throttle.write_bps_device
blkio.io_serviced blkio.throttle.write_iops_device
blkio.io_serviced_recursive blkio.time
blkio.io_service_time blkio.time_recursive
blkio.io_service_time_recursive blkio.weight
blkio.io_wait_time blkio.weight_device
blkio.io_wait_time_recursive cgroup.clone_children
blkio.leaf_weight cgroup.event_control
blkio.leaf_weight_device cgroup.procs
blkio.reset_stats notify_on_release
blkio.sectors tasks
[root@docker b9e61aad4a99aedcb42c9e827c82909a2ff6efac705055b6d5fbedc990eb9e07]# cat blkio.throttle.write_bps_device
8:0 1048576
"""
[root@docker ~]# ll /dev/sda
brw-rw---- 1 root disk 8, 0 Oct 24 15:48 /dev/sda
"""
Restricting a user
Put every process of user dd into the x2 group of the memory controller by adding a rule to /etc/cgrules.conf:
[root@docker shm]# vim /etc/cgrules.conf
dd        memory        x2/
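The rule format comes from libcgroup's /etc/cgrules.conf: "<user>[:<process>] <controllers> <destination>". A slightly fuller sketch (the second rule is only an illustration):
# /etc/cgrules.conf
dd          memory        x2/     # every process owned by user dd goes into memory:x2
*:dd        memory        x2/     # (hypothetical) any user's processes named "dd" instead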
[root@docker ~]# systemctl start cgred
[root@docker ~]# systemctl status cgred
● cgred.service - CGroups Rules Engine Daemon
   Loaded: loaded (/usr/lib/systemd/system/cgred.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-10-24 16:17:27 CST; 5s ago
  Process: 13737 ExecStart=/usr/sbin/cgrulesengd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 13738 (cgrulesengd)
    Tasks: 1
   Memory: 3.1M
   CGroup: /system.slice/cgred.service
           └─13738 /usr/sbin/cgrulesengd -s -g cgred

Oct 24 16:17:27 docker systemd[1]: Starting CGroups Rules Engine Daemon...
Oct 24 16:17:27 docker systemd[1]: Started CGroups Rules Engine Daemon.
[root@docker ~]# su - dd
[dd@docker ~]$ id dd
uid=1000(dd) gid=1000(dd) groups=1000(dd)
[dd@docker ~]$ cd /dev/shm/
[dd@docker shm]$ ls
[dd@docker shm]$ dd if=/dev/zero of=dd bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.0361387 s, 2.9 GB/s
[dd@docker shm]$ dd if=/dev/zero of=dd bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.0688063 s, 3.0 GB/s
[dd@docker shm]$ dd if=/dev/zero of=dd bs=1M count=300
Killed
[root@docker x2]# docker run -it --memory 256M --memory-swap=256M ubuntu
root@b6b9af37fc2f:/# free -m
             total       used       free     shared    buffers     cached
Mem:           991        871        119        266          6        614
-/+ buffers/cache:        250        740
Swap:         2047          1       2046
root@b6b9af37fc2f:/# exit
[root@docker x2]# free -m
              total        used        free      shared  buff/cache   available
Mem:            991         140         143         266         707         400
Swap:          2047           1        2046
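Although free inside the container still reports the host's totals, the limit is enforced and can be read back with docker stats; a small check (the container name is only an example):
docker run -d --name memtest --memory 256M --memory-swap=256M ubuntu sleep 1000
docker stats --no-stream memtest      # the MEM USAGE / LIMIT column shows the 256MiB limit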