Docker makes it possible (via the -c switch of the run command) to specify the number of CPU shares available to a container. This is a relative weight and has nothing to do with the actual processor speed. In fact, there is no way to say that a container should have access to only 1 GHz of CPU. Keep that in mind.

Every new container will have 1024 CPU shares by default. This value means nothing on its own. But if we start two containers and both use 100% of the CPU, the CPU time will be divided equally between them, because they both have the same number of CPU shares (for the sake of simplicity I assume that there are no other processes running).

If we set one container's CPU shares to 512, it will receive half the CPU time of the other container. But this does not mean that it can use only half of the CPU: if the other container (with 1024 shares) is idle, our container will be allowed to use 100% of the CPU. That's another thing to note.
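To put concrete numbers on it: under full contention, each container gets its shares divided by the sum of the shares of all containers competing for the CPU. With 1024 and 512 shares that works out to:

1024 / (1024 + 512) ≈ 67%
512 / (1024 + 512) ≈ 33%

which is the 2:1 split you will see measured in the example below.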

Limits are enforced only when they should be. Cgroups do not limit processes up front (for example, by refusing to let them run fast even when there are free resources). Instead, they give each process as much as it can use and limit it only when necessary (for example, when many processes start to use the CPU heavily at the same time).
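You can see this behavior with the stress image used in the examples below (a quick sketch): start a single container with a reduced shares value on an otherwise idle host:

$ docker run -it --rm -c 512 stress --cpu 4

Even though it has only half the default shares, top on the host will show the stress workers consuming all available CPU, because no other process is competing for it.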

Of course, it's not easy (I would even say impossible) to predict how many resources will actually be assigned to your process. It really depends on how the other processes behave and how many shares are assigned to them.

Example: managing the CPU shares of a container

As I mentioned before, you can use the -c switch to manage the value of shares assigned to all processes running inside a Docker container. Since I have 4 cores available on my machine, I'll tell stress to use all 4:

$ docker run -it --rm stress --cpu 4
stress: info: [1] dispatching hogs: 4 cpu, 0 io, 0 vm, 0 hdd

If we start two containers the same way, both will use around 50% of the CPU. But what happens if we modify the CPU shares for one container?

$ docker run -it --rm -c 512 stress --cpu 4
stress: info: [1] dispatching hogs: 4 cpu, 0 io, 0 vm, 0 hdd

As you can see, the CPU is divided between the two containers in such a way that the first container uses ~60% of the CPU and the other ~30%. This seems to be the expected result.

Note: the missing ~10% of the CPU was taken by GNOME, Chrome and my music player, in case you were wondering.
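If you want to confirm the shares value a container ended up with, you can read it straight from the cgroup filesystem (a sketch, assuming the cgroup v1 layout described in the "CGroups fs" section at the end of this article; $FULL_CONTAINER_ID stands for the full ID of the second container):

$ cat /sys/fs/cgroup/cpu/system.slice/docker-$FULL_CONTAINER_ID.scope/cpu.shares
512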

Attaching containers to cores

Besides limiting CPU shares, we can do one more thing: we can pin the container's processes to a particular processor (core). To do this, we use the --cpuset switch of the docker run command.

To allow execution only on the first core:

$ docker run -it --rm --cpuset=0 stress --cpu 1

To allow execution only on the first two cores:

$ docker run -it --rm --cpuset=0,1 stress --cpu 2

You can of course mix the --cpuset option with -c.

Note: share enforcement only takes place when processes run on the same core. This means that if you pin one container to the first core and the other container to the second core, each will use 100% of its core, even if they have different CPU share values set (once again, I assume that only these two containers are running on the host).
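To verify the pinning, the cpuset controller exposes the allowed cores for the container's scope (a sketch, assuming the cpuset hierarchy mirrors the same scope layout as the cpu one shown in the "CGroups fs" section; $FULL_CONTAINER_ID is the full ID of the container started with --cpuset=0,1):

$ cat /sys/fs/cgroup/cpuset/system.slice/docker-$FULL_CONTAINER_ID.scope/cpuset.cpus
0-1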

Changing the shares value for a running container

It is possible to change the shares value for a running container (or any other process, of course). You can interact with the cgroups filesystem directly, but since we have systemd, we can leverage it to manage this for us (it manages the processes anyhow). For this purpose we'll use the systemctl command with the set-property argument.

Every new container created using the docker run command is automatically assigned a systemd scope under which all of its processes are executed. To change the CPU shares for all processes in the container, we just need to change it for the scope, like so:

$ sudo systemctl set-property docker-4be96b853089bc6044b29cb873cac460b429cfcbdd0e877c0868eb2a901dbf80.scope CPUShares=512

Note: add --runtime to make the change temporary. Otherwise, this setting will be remembered across host restarts.

This changes the default value of 1024 to 512. You can see the result below; the change happens somewhere in the middle of the recording. Please note the CPU usage: in systemd-cgtop, 100% means full use of one core, which is correct since I bound both containers to the same core.

Note: to show all properties, you can use the systemctl show docker-4be96b853089bc6044b29cb873cac460b429cfcbdd0e877c0868eb2a901dbf80.scope command. To list all available properties, take a look at man systemd.resource-control.
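If you prefer not to go through systemd, the same knob is exposed as a plain file in the cgroup filesystem, and writing to it takes effect immediately for all processes in the scope (a sketch, using the same scope name as above):

$ echo 512 | sudo tee /sys/fs/cgroup/cpu/system.slice/docker-4be96b853089bc6044b29cb873cac460b429cfcbdd0e877c0868eb2a901dbf80.scope/cpu.shares

Unlike systemctl set-property without --runtime, a direct write like this is never persisted and is lost once the scope goes away.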

CGroups fs

You can find all the CPU-related information for a specific container under /sys/fs/cgroup/cpu/system.slice/docker-$FULL_CONTAINER_ID.scope/, for example:

$ ls /sys/fs/cgroup/cpu/system.slice/docker-6935854d444d78abe52d629cb9d680334751a0cda82e11d2610e041d77a62b3f.scope/
cgroup.clone_children  cpuacct.usage_percpu  cpu.rt_runtime_us  tasks
cgroup.procs           cpu.cfs_period_us     cpu.shares
cpuacct.stat           cpu.cfs_quota_us      cpu.stat
cpuacct.usage          cpu.rt_period_us      notify_on_release

Note: more information about these files can be found in the RHEL Resource Management Guide. The information is spread across the cpu, cpuacct and cpuset sections.
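These files are also handy for quick accounting. For example, cpuacct.usage holds the total CPU time (in nanoseconds) consumed by all processes in the container, and cpu.stat reports how often the group was throttled by the CFS quota (a sketch, using the same container ID as above):

$ cat /sys/fs/cgroup/cpu/system.slice/docker-6935854d444d78abe52d629cb9d680334751a0cda82e11d2610e041d77a62b3f.scope/cpuacct.usage
$ cat /sys/fs/cgroup/cpu/system.slice/docker-6935854d444d78abe52d629cb9d680334751a0cda82e11d2610e041d77a62b3f.scope/cpu.stat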