Docker Security
"""
Why do resource isolation and limits matter even more in the cloud era? By default, all processes on an operating system share its CPU and memory. With poorly designed software, in the extreme case a process stuck in an infinite loop can exhaust the CPU, or a memory leak can consume most of system memory; that is unacceptable for enterprise products, so per-process resource isolation is essential.
Linux supports virtualization at the operating-system level through Linux containers, which is what the ubiquitous abbreviation LXC stands for.
LXC rests on three features: cgroups, namespaces, and UnionFS.
Cgroup
A cgroup (control group) provides resource control. A set of processes is placed into one control group, and the resources assigned to that group bound what the whole set may consume.
Namespace
A namespace provides access isolation. It abstracts a class of resources and packages the abstraction for one container; since each container holds its own abstraction, invisible to the others, access is isolated.
"""
Understanding Docker security
The security of Docker containers depends largely on Linux itself. When evaluating Docker security, consider mainly:
the container isolation provided by the Linux kernel namespace mechanism
the resource control provided by the Linux control group (cgroup) mechanism
the operation permissions governed by the Linux kernel capability mechanism
the attack resistance of the Docker program itself (especially the daemon)
the effect of other hardening mechanisms on container security
Namespace isolation
When docker run starts a container, Docker creates a set of independent namespaces for it in the background; namespaces provide the most basic and most direct isolation.
Compared with virtual machines, isolation via Linux namespaces is not as thorough.
A container is just a special kind of process running on the host, so multiple containers still share the same host kernel.
Many resources and objects in the Linux kernel cannot be namespaced; the system time, for example.
Do not stop the container during the following demo, or its namespaces and resources will be released.
[root@docker docker]# docker run -it --name vm1 ubuntu
root@06a3c366c16b:/#
root@06a3c366c16b:/#
[root@docker docker]# docker ps
CONTAINER ID   IMAGE    COMMAND       CREATED          STATUS          PORTS   NAMES
06a3c366c16b   ubuntu   "/bin/bash"   14 seconds ago   Up 13 seconds           vm1
[root@docker docker]# docker inspect vm1 | grep Pid
"Pid": 1504,
"PidMode": "",
"PidsLimit": 0,
[root@docker docker]# ps aux |grep 1504
root 1504 0.0 0.1 18164 1960 pts/0 Ss+ 10:43 0:00 /bin/bash
root 1589 0.0 0.1 112704 1024 pts/0 R+ 10:44 0:00 grep --color=auto 1504
[root@docker docker]# cd /proc/1504
[root@docker 1504]# ls
attr cwd map_files oom_adj schedstat task
autogroup environ maps oom_score sessionid timers
auxv exe mem oom_score_adj setgroups uid_map
cgroup fd mountinfo pagemap smaps wchan
clear_refs fdinfo mounts patch_state stack
cmdline gid_map mountstats personality stat
comm io net projid_map statm
coredump_filter limits ns root status
cpuset loginuid numa_maps sched syscall
[root@docker 1504]# cd ns/
namespaces
"""
Namespaces provide environment isolation. The main ones are:
UTS: hostname and domain name
IPC: semaphores, message queues, and shared memory
PID: process IDs
Network: network devices, network stack, ports, etc.
Mount: mount points
User: users and groups
"""
[root@docker ns]# ls
ipc mnt net pid user uts
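Each entry under ns/ is a symlink whose target encodes the namespace type and an inode number; two processes are in the same namespace exactly when those inodes match. A quick check (a sketch; assumes a Linux /proc and compares a shell with its own child, which always share a mount namespace):

```shell
# /proc/$$ is the current shell; /proc/self inside $( ) is the readlink
# child process. Both resolve to the same mount-namespace inode.
a=$(readlink /proc/$$/ns/mnt)
b=$(readlink /proc/self/ns/mnt)
[ "$a" = "$b" ] && echo "same mount namespace: $a"
```

Comparing the host's PID 1 with a containerized process the same way would show two different inodes.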
Resource control via control groups
When docker run starts a container, Docker creates an independent set of control-group policies for it in the background.
Linux cgroups provide many useful features, ensuring containers share the host's memory, CPU, disk I/O, and other resources fairly.
They ensure resource pressure inside a container does not affect the host or other containers, which is essential for preventing denial-of-service (DoS) situations.
[root@docker ns]# mount -t cgroup
[root@docker ns]# cd /sys/fs/cgroup/
[root@docker cgroup]# ls
[root@docker cgroup]# cd cpu
[root@docker cpu]# ls
[root@docker cpu]# cd docker/
[root@docker docker]# ls
06a3c366c16be682286ca31da43026615db1b053d394fb0f8b08dc9c9126ae47
cgroup.clone_children
cgroup.event_control
cgroup.procs
cpuacct.stat
cpuacct.usage
cpuacct.usage_percpu
cpu.cfs_period_us
cpu.cfs_quota_us
cpu.rt_period_us
cpu.rt_runtime_us
cpu.shares
cpu.stat
notify_on_release
tasks
Kernel capabilities
The capability mechanism is a powerful Linux kernel feature that provides fine-grained access control.
In most cases a container does not need "real" root privileges; a small set of capabilities suffices.
By default, Docker uses an allow-list: every capability outside the required set is dropped.
[root@docker docker]# docker container attach vm1
root@06a3c366c16b:/#
root@06a3c366c16b:/#
root@06a3c366c16b:/# id   # the "root" you see here does not carry full root privileges
uid=0(root) gid=0(root) groups=0(root)
root@06a3c366c16b:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
root@06a3c366c16b:/# ip link set down eth0   # real root could do this, so inside the container you are not real root
RTNETLINK answers: Operation not permitted
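The failure above is visible in the kernel's bookkeeping: /proc/<pid>/status exposes each process's capability sets as bitmasks, and in a default container the effective set (CapEff) lacks the bit for CAP_NET_ADMIN. Inspecting from any shell (a sketch, no Docker needed):

```shell
# CapInh/CapPrm/CapEff are hex bitmasks of the capabilities a process holds.
# A full root shell on an older kernel shows e.g. CapEff: 0000003fffffffff;
# a default Docker container shows far fewer bits set.
grep '^Cap' /proc/self/status
```

To restore the dropped ability, the container would be started with `docker run --cap-add NET_ADMIN ...`; conversely `--cap-drop ALL` removes even the defaults.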
Protecting the Docker daemon
The core of running Docker containers is the Docker daemon; make sure only trusted users can access the Docker service.
Map the container's root user to a non-root user on the host, mitigating privilege-escalation problems between container and host.
Allow the Docker daemon to run without root privileges, delegating operations that do need privileges to safe, well-scoped child processes that may act only within a specific range.
Other security features
Enable GRSEC and PaX in the kernel; this adds compile-time and run-time security checks and defeats malicious probing through address randomization (Docker needs no configuration to benefit).
Use container templates with enhanced security features.
Users can define stricter access-control mechanisms to customize security policy.
When mounting host filesystems into a container, configure them read-only so applications in the container cannot modify the host through them, especially directories that reflect system runtime state.
Container resource control
Linux cgroups (other limits exist too: see /etc/security/limits.conf)
The operation interface Linux cgroups expose to users is a filesystem:
it is organized as files and directories under /sys/fs/cgroup.
Check the mounts with: mount -t cgroup
Under /sys/fs/cgroup there are subdirectories such as cpuset, cpu, and memory; these are called subsystems (controllers).
Under each subsystem, one control group (a new directory) is created per container.
The values written into that group's resource files come from the flags passed to docker run.
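For the CPU controller, the two key files combine into a percentage: a group may run for cpu.cfs_quota_us out of every cpu.cfs_period_us microseconds. The 20% limit used in the demo below works out as:

```shell
period=100000   # cpu.cfs_period_us: CFS scheduling period, 100 ms
quota=20000     # cpu.cfs_quota_us: runtime allowed per period, 20 ms
echo "$((100 * quota / period))% CPU"   # prints: 20% CPU
```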
## How resources are controlled
# Docker hooks directly into the Linux kernel
### CPU limits
[root@docker cgroup]# ls
blkio (disk I/O control)  cpu,cpuacct  freezer  net_cls            perf_event
cpu cpuset hugetlb net_cls,net_prio pids
cpuacct devices memory net_prio systemd
[root@docker cgroup]# pwd
/sys/fs/cgroup
[root@docker cpu]# ls
cgroup.clone_children cpuacct.usage cpu.rt_runtime_us release_agent
cgroup.event_control cpuacct.usage_percpu cpu.shares system.slice
cgroup.procs cpu.cfs_period_us cpu.stat tasks
cgroup.sane_behavior cpu.cfs_quota_us docker user.slice
cpuacct.stat cpu.rt_period_us notify_on_release
[root@docker cpu]# mkdir x1   # a child of the parent group; inherits copies of the parent's resource files
[root@docker cpu]# cd x1/
[root@docker x1]# ls
cgroup.clone_children cpuacct.usage_percpu cpu.shares
cgroup.event_control cpu.cfs_period_us cpu.stat
cgroup.procs cpu.cfs_quota_us notify_on_release
cpuacct.stat cpu.rt_period_us tasks
cpuacct.usage cpu.rt_runtime_us
If left unmodified, the contents match the parent group.
Besides containers, cgroups can also constrain ordinary processes of the host operating system.
[root@docker x1]# cat cpu.cfs_period_us   # CFS scheduling period: 100 ms
100000   # microseconds
[root@docker x1]# cat cpu.cfs_quota_us    # -1 means unlimited: the group may use the whole period
-1
These two files work together: quota/period is the share of CPU the group may use.
[root@docker x1]# echo 20000 > cpu.cfs_quota_us   # 20 ms per 100 ms = 20% CPU; overwrite with echo redirection, not an editor
[root@docker x1]# cat cpu.cfs_quota_us
20000
Test: top shows no connection to our limit yet, because no task belongs to the group.
[root@docker x1]# dd if=/dev/zero of=/dev/null &   # burns CPU without consuming memory
[1] 12572
12572 root      20   0  107992    608    516 R 99.9  0.1   0:13.84 dd
# top still shows dd at ~100%: the limit is not applied to this process
How to attach a process to the group
[root@docker x1]# ls
cgroup.clone_children cpuacct.usage_percpu cpu.shares
cgroup.event_control cpu.cfs_period_us cpu.stat
cgroup.procs cpu.cfs_quota_us notify_on_release
cpuacct.stat cpu.rt_period_us tasks
cpuacct.usage cpu.rt_runtime_us
[root@docker x1]# echo 12572 > tasks   # add dd's PID to the group
12572 root      20   0  107992    608    516 R 20.0  0.1   1:40.57 dd
The limit now takes effect: top shows dd capped at 20%.
Controlling the processes of a Docker container
"""
(For reference: Docker creates a control group per container automatically.)
[root@docker docker]# ls
3f5a12193cb1e6b3782a30a647edce9118a4e38e86edf305a0cbb7c78df0e494   # the running container's group
cgroup.clone_children
cgroup.event_control
cgroup.procs
cpuacct.stat
cpuacct.usage
cpuacct.usage_percpu
cpu.cfs_period_us
cpu.cfs_quota_us
cpu.rt_period_us
cpu.rt_runtime_us
cpu.shares
cpu.stat
notify_on_release
tasks
[root@docker docker]# pwd
/sys/fs/cgroup/cpu/docker
[root@docker docker]# cd 3f5a12193cb1e6b3782a30a647edce9118a4e38e86edf305a0cbb7c78df0e494   # holds copies of the parent's resource files
[root@docker 3f5a12193cb1e6b3782a30a647edce9118a4e38e86edf305a0cbb7c78df0e494]# ls
cgroup.clone_children cpuacct.usage_percpu cpu.shares
cgroup.event_control cpu.cfs_period_us cpu.stat
cgroup.procs cpu.cfs_quota_us notify_on_release
cpuacct.stat cpu.rt_period_us tasks
cpuacct.usage cpu.rt_runtime_us
[root@docker 3f5a12193cb1e6b3782a30a647edce9118a4e38e86edf305a0cbb7c78df0e494]#
[root@docker 3f5a12193cb1e6b3782a30a647edce9118a4e38e86edf305a0cbb7c78df0e494]#
"""
[root@docker ~]# docker run --help | grep cpu
      --cpu-period int                 Limit CPU CFS (Completely Fair
      --cpu-quota int                  Limit CPU CFS (Completely Fair
      --cpu-rt-period int              Limit CPU real-time period in
      --cpu-rt-runtime int             Limit CPU real-time runtime in
  -c, --cpu-shares int                 CPU shares (relative weight)
      --cpus decimal                   Number of CPUs
      --cpuset-cpus string             CPUs in which to allow execution
      --cpuset-mems string             MEMs in which to allow execution
[root@docker ~]# docker run -it --name vm2 --cpu-period 100000 --cpu-quota 20000 ubuntu
root@d3b1336b2adb:/#
[root@docker docker]# cd d3b1336b2adb72c766227795eaa03dde575143abcbfda939c9b79a19b8839622/
[root@docker d3b1336b2adb72c766227795eaa03dde575143abcbfda939c9b79a19b8839622]# ls
cgroup.clone_children cpuacct.usage_percpu cpu.shares
cgroup.event_control cpu.cfs_period_us cpu.stat
cgroup.procs cpu.cfs_quota_us notify_on_release
cpuacct.stat cpu.rt_period_us tasks
cpuacct.usage cpu.rt_runtime_us
[root@docker d3b1336b2adb72c766227795eaa03dde575143abcbfda939c9b79a19b8839622]# cat cpu.rt_period_us
1000000
[root@docker d3b1336b2adb72c766227795eaa03dde575143abcbfda939c9b79a19b8839622]# cat cpu.cfs_quota_us
20000
[root@docker d3b1336b2adb72c766227795eaa03dde575143abcbfda939c9b79a19b8839622]#
Test whether the limit takes effect (the container still consumes the physical host's CPU):
[root@docker ~]# docker run -it --name vm2 --cpu-period 100000 --cpu-quota 20000 ubuntu
root@d3b1336b2adb:/# dd if=/dev/zero of=/dev/null &
12877 root      20   0    4364    360    280 R 20.0  0.0   0:03.65 dd
# top on the physical host shows dd capped at 20%, since the container really runs on the host CPU
## Memory limits
Memory available to a container has two parts: physical RAM (used first) and the swap partition.
[root@docker ~]# cd /sys/fs/cgroup/
[root@docker cgroup]#
[root@docker cgroup]# ls
blkio cpu,cpuacct freezer net_cls perf_event
cpu cpuset hugetlb net_cls,net_prio pids
cpuacct devices memory net_prio systemd
[root@docker cgroup]# cd memory/
[root@docker memory]# pwd
/sys/fs/cgroup/memory
[root@docker memory]# ls
cgroup.clone_children memory.memsw.failcnt
cgroup.event_control memory.memsw.limit_in_bytes
cgroup.procs memory.memsw.max_usage_in_bytes
cgroup.sane_behavior memory.memsw.usage_in_bytes
docker memory.move_charge_at_immigrate
memory.failcnt memory.numa_stat
memory.force_empty memory.oom_control
memory.kmem.failcnt memory.pressure_level
memory.kmem.limit_in_bytes memory.soft_limit_in_bytes
memory.kmem.max_usage_in_bytes memory.stat
memory.kmem.slabinfo memory.swappiness
memory.kmem.tcp.failcnt memory.usage_in_bytes
memory.kmem.tcp.limit_in_bytes memory.use_hierarchy
memory.kmem.tcp.max_usage_in_bytes notify_on_release
memory.kmem.tcp.usage_in_bytes release_agent
memory.kmem.usage_in_bytes system.slice
memory.limit_in_bytes tasks
memory.max_usage_in_bytes user.slice
[root@docker memory]# mkdir x2
[root@docker memory]# cd x2/
[root@docker x2]# ls   # contents copied straight from the parent group
cgroup.clone_children memory.memsw.failcnt
cgroup.event_control memory.memsw.limit_in_bytes
cgroup.procs memory.memsw.max_usage_in_bytes
memory.failcnt memory.memsw.usage_in_bytes
memory.force_empty memory.move_charge_at_immigrate
memory.kmem.failcnt memory.numa_stat
memory.kmem.limit_in_bytes memory.oom_control
memory.kmem.max_usage_in_bytes memory.pressure_level
memory.kmem.slabinfo memory.soft_limit_in_bytes
memory.kmem.tcp.failcnt memory.stat
memory.kmem.tcp.limit_in_bytes memory.swappiness
memory.kmem.tcp.max_usage_in_bytes memory.usage_in_bytes
memory.kmem.tcp.usage_in_bytes memory.use_hierarchy
memory.kmem.usage_in_bytes notify_on_release
memory.limit_in_bytes tasks
memory.max_usage_in_bytes
[root@docker x2]# cat memory.limit_in_bytes   # unlimited by default: use as much as is available
9223372036854771712   # bytes
Goal: limit the group to 256 MB.
"""
Byte conversion:
[root@foundation0 ~]# bc
bc 1.06.95
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
256 * 1024 * 1024
268435456
"""
[root@docker x2]# cat memory.limit_in_bytes
9223372036854771712
[root@docker x2]# echo 268435456 > memory.limit_in_bytes
[root@docker x2]# cat memory.limit_in_bytes
268435456
[root@docker x2]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/rhel-root 17811456 5025844 12785612 29% /
devtmpfs 495544 0 495544 0% /dev
tmpfs 507780 0 507780 0% /dev/shm
tmpfs 507780 13228 494552 3% /run
tmpfs 507780 0 507780 0% /sys/fs/cgroup
/dev/sda1 1038336 132704 905632 13% /boot
tmpfs 101560 0 101560 0% /run/user/0
[root@docker x2]# free -m
total used free shared buff/cache available
Mem: 991 138 176 12 676 652
Swap: 2047 0 2047
[root@docker x2]# cd /dev/shm   # note: /dev/shm is tmpfs, so files written here consume RAM
[root@docker shm]# dd if=/dev/zero of=bigfile bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.0328056 s, 3.2 GB/s
[root@docker shm]# free -m
total used free shared buff/cache available
Mem: 991 138 77 112 776 553
Swap: 2047 0 2047
[root@docker shm]# dd if=/dev/zero of=bigfile bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.101575 s, 2.1 GB/s
[root@docker shm]# free -m
total used free shared buff/cache available
Mem: 991 135 72 210 783 458
Swap: 2047 2 2045
[root@docker shm]# dd if=/dev/zero of=bigfile bs=1M count=300
300+0 records in
300+0 records out
314572800 bytes (315 MB) copied, 0.135642 s, 2.3 GB/s
The limit appears to have no effect: the dd ran outside the x2 group.
[root@docker shm]# free -m
total used free shared buff/cache available
Mem: 991 134 83 309 773 363
Swap: 2047 3 2044
[root@docker shm]# id root
uid=0(root) gid=0(root) groups=0(root)
[root@docker shm]# cd /sys/fs/cgroup/memory/
[root@docker memory]# cd x2/
[root@docker x2]# cd -
/sys/fs/cgroup/memory
How to attach the process
dd exits too quickly to write its PID into tasks by hand, so launch it inside the group with cgexec instead.
[root@docker memory]# cd /dev/shm
[root@docker shm]# cgexec -g memory:x2 dd if=/dev/zero of=bigfile bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.0366051 s, 2.9 GB/s
[root@docker shm]# cgexec -g memory:x2 dd if=/dev/zero of=bigfile bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.0755738 s, 2.8 GB/s
Why does this still succeed? 300 MB exceeds our 256 MB limit.
[root@docker shm]# cgexec -g memory:x2 dd if=/dev/zero of=bigfile bs=1M count=300
300+0 records in
300+0 records out
314572800 bytes (315 MB) copied, 0.352022 s, 894 MB/s
[root@docker shm]# free -m
total used free shared buff/cache available
Mem: 991 191 126 209 674 405
Swap: 2047 103 1944
[root@docker shm]# cgexec -g memory:x2 dd if=/dev/zero of=bigfile bs=1M count=400
400+0 records in
400+0 records out
419430400 bytes (419 MB) copied, 0.33249 s, 1.3 GB/s
[root@docker shm]# free -m
total used free shared buff/cache available
Mem: 991 134 126 266 730 405
Swap: 2047 146 1901
[root@docker shm]# cgexec -g memory:x2 dd if=/dev/zero of=bigfile bs=1M count=450
450+0 records in
450+0 records out
471859200 bytes (472 MB) copied, 0.411024 s, 1.1 GB/s
[root@docker shm]# free -m
total used free shared buff/cache available
Mem: 991 134 125 267 731 405
When physical memory runs out, the group spills over into swap, which memory.limit_in_bytes alone does not bound.
[root@docker x2]# cat memory.limit_in_bytes
268435456
[root@docker x2]# echo 268435456 >memory.memsw.limit_in_bytes
-bash: echo: write error: Device or resource busy
[root@docker x2]# cd -
/sys/fs/cgroup/memory
[root@docker memory]# ls
cgroup.clone_children memory.memsw.limit_in_bytes
cgroup.event_control memory.memsw.max_usage_in_bytes
cgroup.procs memory.memsw.usage_in_bytes
cgroup.sane_behavior memory.move_charge_at_immigrate
docker memory.numa_stat
memory.failcnt memory.oom_control
memory.force_empty memory.pressure_level
memory.kmem.failcnt memory.soft_limit_in_bytes
memory.kmem.limit_in_bytes memory.stat
memory.kmem.max_usage_in_bytes memory.swappiness
memory.kmem.slabinfo memory.usage_in_bytes
memory.kmem.tcp.failcnt memory.use_hierarchy
memory.kmem.tcp.limit_in_bytes notify_on_release
memory.kmem.tcp.max_usage_in_bytes release_agent
memory.kmem.tcp.usage_in_bytes system.slice
memory.kmem.usage_in_bytes tasks
memory.limit_in_bytes user.slice
memory.max_usage_in_bytes x2
memory.memsw.failcnt
[root@docker memory]# cd /dev/shm/
[root@docker shm]# ls
bigfile
[root@docker shm]# rm -rf bigfile   # frees the pages charged to the group; the earlier "Device or resource busy" write then succeeds
[root@docker shm]# cd -
/sys/fs/cgroup/memory
[root@docker memory]# cd x2/
[root@docker x2]# echo 268435456 > memory.memsw.limit_in_bytes
memory.limit_in_bytes and memory.memsw.limit_in_bytes together mean: physical RAM plus swap may total at most 256 MB.
#### Now the experiment succeeds
[root@docker shm]# cgexec -g memory:x2 dd if=/dev/zero of=bigfile bs=1M count=300
Killed
[root@docker shm]# free -m
total used free shared buff/cache available
Mem: 991 133 126 267 731 405
Swap: 2047 0 2047
docker run -it --memory 256M --memory-swap=256M ubuntu
# If only -m/--memory is given and --memory-swap is not, --memory-swap defaults to twice the -m value
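The doubling rule can be checked with shell arithmetic (a sketch; values in bytes):

```shell
mem=$((256 * 1024 * 1024))   # --memory 256M, in bytes
memsw=$((mem * 2))           # default --memory-swap when the flag is omitted
echo "memory=${mem} memory+swap=${memsw}"   # 268435456 and 536870912
```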
### Block I/O limits (disk read/write throttling)
Bytes read/written per second:
blkio.throttle.read_bps_device
blkio.throttle.write_bps_device
I/O operations per second:
blkio.throttle.read_iops_device
blkio.throttle.write_iops_device
[root@docker blkio]# docker run --help | grep device
      --blkio-weight-device list       Block IO weight (relative device
      --device list                    Add a host device to the container
      --device-cgroup-rule list        Add a rule to the cgroup allowed
                                       devices list
      --device-read-bps list           Limit read rate (bytes per second)
                                       from a device (default [])
      --device-read-iops list          Limit read rate (IO per second)
                                       from a device (default [])
      --device-write-bps list          Limit write rate (bytes per
                                       second) to a device (default [])
      --device-write-iops list         Limit write rate (IO per second)
                                       to a device (default [])
[root@docker blkio]# ll /dev/vda
brw-rw---- 1 root disk 252, 0 Oct 24 15:41 /dev/vda   # 252, 0 is the major:minor device number
[root@docker blkio]# echo "252:0 1048576" > blkio.throttle.write_bps_device
[root@docker blkio]# cat blkio.throttle.write_bps_device
252:0 1048576   ## 1 MiB/s = 1024 * 1024 bytes
Test: at first it seems not to take effect.
Note: the blkio throttle currently applies only to direct I/O; writes going through the page cache are not limited.
[root@docker ~]# cgexec -g blkio:x3 dd if=/dev/zero of=testfile bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.00381529 s, 2.7 GB/s
[root@docker blkio]# ll /dev/vda
brw-rw---- 1 root disk 252, 0 Oct 24 15:48 /dev/vda
[root@docker blkio]# echo "252:0 1048576" > blkio.throttle.write_bps_device
[root@docker blkio]# cgexec -g blkio:x3 dd if=/dev/zero of=/mnt/westosfile bs=1M count=10 oflag=direct
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 10.0025 s, 1.0 MB/s   # throttled precisely to the limit
[root@docker blkio]# docker run -it --name vm2 --device-write-bps /dev/sda:1MB ubuntu
root@b9e61aad4a99:/# dd if=/dev/zero of=westos bs=1M count=10 oflag=direct   # oflag=direct bypasses the page cache
"""
In direct mode, a write request is wrapped into an I/O command and sent straight to the disk.
In non-direct mode, data goes to the page cache, the I/O is immediately reported successful, and the operating system decides when the cached data is flushed to disk.
"""
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 10.0021 s, 1.0 MB/s
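The observed ~10.0 s is exactly what a 1 MiB/s throttle predicts for a 10 MiB direct write:

```shell
bytes=$((10 * 1024 * 1024))   # dd wrote 10 MiB
rate=1048576                  # blkio.throttle.write_bps_device: 1 MiB/s
echo "$((bytes / rate)) seconds expected"   # prints: 10 seconds expected
```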
[root@docker ~]# cd /sys/fs/cgroup/
[root@docker cgroup]# ls
blkio cpu,cpuacct freezer net_cls perf_event
cpu cpuset hugetlb net_cls,net_prio pids
cpuacct devices memory net_prio systemd
[root@docker cgroup]# cd blkio/
[root@docker blkio]# ls
blkio.io_merged blkio.throttle.read_bps_device
blkio.io_merged_recursive blkio.throttle.read_iops_device
blkio.io_queued blkio.throttle.write_bps_device
blkio.io_queued_recursive blkio.throttle.write_iops_device
blkio.io_service_bytes blkio.time
blkio.io_service_bytes_recursive blkio.time_recursive
blkio.io_serviced blkio.weight
blkio.io_serviced_recursive blkio.weight_device
blkio.io_service_time cgroup.clone_children
blkio.io_service_time_recursive cgroup.event_control
blkio.io_wait_time cgroup.procs
blkio.io_wait_time_recursive cgroup.sane_behavior
blkio.leaf_weight docker
blkio.leaf_weight_device notify_on_release
blkio.reset_stats release_agent
blkio.sectors system.slice
blkio.sectors_recursive tasks
blkio.throttle.io_service_bytes user.slice
blkio.throttle.io_serviced x3
[root@docker blkio]# cd docker/
[root@docker docker]# ls
b9e61aad4a99aedcb42c9e827c82909a2ff6efac705055b6d5fbedc990eb9e07
blkio.io_merged
blkio.io_merged_recursive
blkio.io_queued
blkio.io_queued_recursive
blkio.io_service_bytes
blkio.io_service_bytes_recursive
blkio.io_serviced
blkio.io_serviced_recursive
blkio.io_service_time
blkio.io_service_time_recursive
blkio.io_wait_time
blkio.io_wait_time_recursive
blkio.leaf_weight
blkio.leaf_weight_device
blkio.reset_stats
blkio.sectors
blkio.sectors_recursive
blkio.throttle.io_service_bytes
blkio.throttle.io_serviced
blkio.throttle.read_bps_device
blkio.throttle.read_iops_device
blkio.throttle.write_bps_device
blkio.throttle.write_iops_device
blkio.time
blkio.time_recursive
blkio.weight
blkio.weight_device
cgroup.clone_children
cgroup.event_control
cgroup.procs
notify_on_release
tasks
[root@docker docker]# cd b9e61aad4a99aedcb42c9e827c82909a2ff6efac705055b6d5fbedc990eb9e07
[root@docker b9e61aad4a99aedcb42c9e827c82909a2ff6efac705055b6d5fbedc990eb9e07]# ls
blkio.io_merged blkio.sectors_recursive
blkio.io_merged_recursive blkio.throttle.io_service_bytes
blkio.io_queued blkio.throttle.io_serviced
blkio.io_queued_recursive blkio.throttle.read_bps_device
blkio.io_service_bytes blkio.throttle.read_iops_device
blkio.io_service_bytes_recursive blkio.throttle.write_bps_device
blkio.io_serviced blkio.throttle.write_iops_device
blkio.io_serviced_recursive blkio.time
blkio.io_service_time blkio.time_recursive
blkio.io_service_time_recursive blkio.weight
blkio.io_wait_time blkio.weight_device
blkio.io_wait_time_recursive cgroup.clone_children
blkio.leaf_weight cgroup.event_control
blkio.leaf_weight_device cgroup.procs
blkio.reset_stats notify_on_release
blkio.sectors tasks
[root@docker b9e61aad4a99aedcb42c9e827c82909a2ff6efac705055b6d5fbedc990eb9e07]# cat blkio.throttle.write_bps_device
8:0 1048576
[root@docker b9e61aad4a99aedcb42c9e827c82909a2ff6efac705055b6d5fbedc990eb9e07]#
"""
[root@docker ~]# ll /dev/sda
brw-rw---- 1 root disk 8, 0 Oct 24 15:48 /dev/sda
"""
Limiting users
[root@docker shm]# cat /etc/cgrules.conf
# /etc/cgrules.conf
# The format of this file is described in cgrules.conf(5)
# manual page.
#
# Example:
#@student       cpu,memory      usergroup/student/
#peter          cpu             test1/
#%              memory          test2/
# End of file
To cap the memory used by the dd user, append this rule:
dd              memory          x2/
[root@docker ~]# systemctl start cgred
[root@docker ~]# systemctl status cgred
● cgred.service - CGroups Rules Engine Daemon
Loaded: loaded (/usr/lib/systemd/system/cgred.service; disabled; vendor preset: disabled)
Active: active (running) since Thu 2019-10-24 16:17:27 CST; 5s ago
Process: 13737 ExecStart=/usr/sbin/cgrulesengd $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 13738 (cgrulesengd)
Tasks: 1
Memory: 3.1M
CGroup: /system.slice/cgred.service
└─13738 /usr/sbin/cgrulesengd -s -g cgred
Oct 24 16:17:27 docker systemd[1]: Starting CGroups Rules Engine Daemon…
Oct 24 16:17:27 docker systemd[1]: Started CGroups Rules Engine Daemon.
[root@docker ~]# su - dd
[dd@docker ~]$ id dd
uid=1000(dd) gid=1000(dd) groups=1000(dd)
[dd@docker ~]$ cd /dev/shm/
[dd@docker shm]$ ls
[dd@docker shm]$ dd if=/dev/zero of=dd bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.0361387 s, 2.9 GB/s
[dd@docker shm]$ dd if=/dev/zero of=dd bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.0688063 s, 3.0 GB/s
[dd@docker shm]$ dd if=/dev/zero of=dd bs=1M count=300
Killed
## Isolation is not complete
[root@docker x2]# docker run -it --memory-swap 256M --memory 256M ubuntu
root@b6b9af37fc2f:/# free -m   # shows the host's totals, not the container's 256 MB limit
total used free shared buffers cached
Mem: 991 871 119 266 6 614
-/+ buffers/cache: 250 740
Swap: 2047 1 2046
root@b6b9af37fc2f:/# exit
exit
[root@docker x2]# free -m
total used free shared buff/cache available
Mem: 991 140 143 266 707 400
Swap: 2047 1 2046
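The reason: /proc is not governed by the memory cgroup, so free(1) inside the container reads the host's /proc/meminfo and reports host-wide totals (lxcfs is one common workaround that mounts per-container meminfo). This dependency is easy to confirm on any Linux host:

```shell
# free(1) derives its numbers from /proc/meminfo; inside a container this
# file still reflects the whole host, regardless of cgroup memory limits.
grep '^MemTotal' /proc/meminfo
```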