Using docker can become addictive. Once you see how easy it is to download and start a container running an application that normally requires heavy configuration – a web server, for example – you start to think of everything in terms of docker containers. But this convenience comes with risks. A container’s memory footprint grows as the application inside it consumes memory. If you’re not careful, any one of them can expand to fill all the available RAM, causing system slowdowns and instability. For this reason, it’s important to impose memory limits on docker containers.
While the same risks apply even to processes running outside docker, we have special tools with docker that make it easy to implement memory limits, so we don’t have to rely on clumsy system-wide tools.
No Default Docker Memory Limit
By default, docker inherits the memory preferences of the host system. Unless specifically configured otherwise, any individual docker container can expand in size to eat up the entire available memory on the host. So, for example, if a process has a memory leak or, god forbid, is maliciously configured, even if your system has tons of free memory, the docker container implementing that process can consume it all.
Competing with Other Processes
If a docker container grows in size to hog all the available memory, it starts to compete with other processes on the system. This can even trigger the “Out of Memory” (OOM) killer, which takes it upon itself to start shutting down processes that it deems to be of low priority. I’ve covered this before in my article on /dev/shm, where the OOM killer comes into play if an application goes rogue.
The OOM killer isn’t very discriminating. It might shut down the offending container itself, but there’s no guarantee that it will, and there’s a good chance that it will wreak havoc on other critical processes before it reaches that stage.
Needless to say, we all want to avoid this fate. For this reason, we need robust controls for every docker container to ensure that each stays within its limits.
How to Set a Docker Memory Limit
There are multiple ways to set up a docker container so that it has memory limits.
Using the -m Flag with docker run
When starting a docker container, you can specify the memory limit with the “docker run” command like this:
docker run -dit --name funny_mirzakhani -m 512m alpine sh
In this case, I’m running the “Alpine” container with a shell, and I’m limiting it to 512 MB of memory. I’ve given the container the name “funny_mirzakhani”. Now, to see the memory limits applied to this container, I use the following command:
docker inspect funny_mirzakhani | grep -i memory
And here’s the relevant part of the output (docker inspect reports the values in bytes):

"Memory": 536870912,
"MemorySwap": 1073741824,

As you can see, the memory allotted to the container is exactly 512 MB. The “MemorySwap” setting is the combined RAM + swap total, which docker defaults to twice the memory limit – here, 1 GB, meaning the container can use up to 512 MB of additional swap. If the container tries to use more than this combined RAM + swap total, the OOM killer will terminate it.
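Incidentally, you don’t have to stop and recreate a container to change these limits. Assuming the container from the example above is still running, the docker update command can adjust them in place (a minimal sketch using the real --memory and --memory-swap flags):

```shell
# Raise the running container's limit to 1 GB of RAM,
# with RAM + swap capped at 2 GB combined
docker update --memory 1g --memory-swap 2g funny_mirzakhani

# Verify the new limits took effect
docker inspect funny_mirzakhani | grep -i memory
```

This is handy when a container turns out to need more headroom than you first gave it.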
Using the docker-compose.yml File
The docker compose utility allows you to create multiple docker containers at once. Let’s say you want to set up a web server and a database at the same time: you can use pre-built configurations that do everything in one go. To configure these containers so that they work together properly, you use “docker-compose.yml” files. In these files, you specify how containers start, and there’s an option for memory limits. For example, here’s what a sample docker-compose.yml snippet might look like:
services:
  my_service:
    image: my_container
    deploy:
      resources:
        limits:
          memory: 512M
In this snippet, we configure the container to start with a memory limit of 512 MB.
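Putting this together with the web-server-plus-database scenario mentioned above, a fuller docker-compose.yml might look like the following sketch. The service names, images, and the placeholder password are hypothetical choices, not requirements:

```yaml
services:
  web:
    image: nginx:alpine              # hypothetical web server image
    ports:
      - "8080:80"
    deploy:
      resources:
        limits:
          memory: 512M               # web server capped at 512 MB
  db:
    image: mariadb:latest            # hypothetical database image
    environment:
      MARIADB_ROOT_PASSWORD: example # placeholder only
    deploy:
      resources:
        limits:
          memory: 1G                 # database capped at 1 GB
```

Each service gets its own limit, so a runaway web server can’t starve the database of memory, and vice versa.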
Implementing Soft Limits on Containers
In addition to the hard limits that you can set in the yml file, docker also allows you to set a “soft” limit, which says that we would prefer it if the container remained within these limits, but it can exceed them if it has to.
Here’s the code for it in the yml file – the reservations block sits alongside limits under resources:

      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
In the above snippet, the system will try to restrict the docker container’s memory to 256 MB. Since this is a “soft” limit, the container can exceed it as long as there’s sufficient memory for all other processes. But if the system finds itself short on memory, the container can be throttled, and memory above the reservation can be reclaimed.
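For one-off containers started with docker run, the equivalent of a compose reservation is the --memory-reservation flag. A minimal sketch, reusing the alpine image from earlier (the container name is hypothetical):

```shell
# Hard limit of 512 MB, soft limit (reservation) of 256 MB
docker run -dit --name reserved_example -m 512m --memory-reservation 256m alpine sh
```

The reservation must be lower than the hard limit for it to be meaningful.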
Interestingly, even though the specification of a soft limit is in the docker-compose.yml file, the actual handling of the throttling isn’t done by docker, but by the system via the use of cgroups.
Implementing Swapping Limits
In the docker-compose.yml file, you can specify not only the amount of RAM that a container can use but also how much swap memory it has access to. Swap isn’t configured under deploy.resources.limits, though; docker compose uses the service-level memswap_limit key (together with mem_limit) for this. For example, the following snippet:

services:
  my_service:
    mem_limit: 512M
    memswap_limit: 1G

Specifies that the maximum RAM this container can use is 512 MB, while RAM and swap combined are capped at 1 GB – in other words, up to 512 MB of additional swap. This flexibility allows the container to budget its memory and push data that doesn’t need to be retrieved quickly out to swap, without being restricted to the hard limit on RAM.
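If you start the container with docker run instead, the --memory-swap flag sets this ceiling. Note that in the docker CLI this value counts RAM and swap combined, so 1g here means up to 512 MB of swap on top of the 512 MB memory limit. The container name is hypothetical:

```shell
# 512 MB of RAM; RAM + swap capped at 1 GB (i.e., up to 512 MB of swap)
docker run -dit --name swap_example -m 512m --memory-swap 1g alpine sh
```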
Controlling the Level of Swapping
The docker-compose.yml file can specify not only how much swap memory the container should use but also the level of “swappiness” it should exhibit, via the service-level mem_swappiness key. For example:

services:
  my_service:
    mem_limit: 512M
    mem_swappiness: 60
Memory “swappiness” is a measure of how eagerly the container moves pages from RAM to disk when memory starts to run low. The default is 60. A value of zero means that the container will avoid swapping as much as possible, while a value of 100 tells it to swap aggressively.
Applications like databases benefit the most from keeping their working data in RAM. So, for a database container, it’s beneficial to set a low “swappiness” value. But if your application holds a lot of data that doesn’t need to be accessed frequently, consider setting a higher value, between 70 and 100.
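The docker run equivalent is the --memory-swappiness flag. One caveat: this knob relies on cgroup v1, and on hosts running cgroup v2 (the default on most modern distributions) it is ignored – so treat this as a sketch that assumes a cgroup v1 host. The container name is hypothetical:

```shell
# Low swappiness for a database-style workload: prefer keeping pages in RAM
docker run -dit --name low_swap_db -m 512m --memory-swappiness 10 alpine sh
```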
Checking the Memory Usage of Docker Containers
To check the current memory usage of a docker container, use the “docker stats” command like this:

docker stats funny_mirzakhani

The output shows that my basic Alpine container is currently using only 476 KB out of a possible 512 MB – hardly anything for the moment. Run without a container name, “docker stats” gives an overview of all the currently running docker containers, so you can see at a glance which ones are in danger of hitting their memory limits by looking at the “MEM %” column, and take corrective action if need be.
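If you want to script this check, docker stats supports a --no-stream flag for a one-shot snapshot and a --format flag that takes a Go template; the fields used below come from the docker CLI:

```shell
# One-shot snapshot of every running container's memory usage
docker stats --no-stream --format "{{.Name}}: {{.MemUsage}} ({{.MemPerc}})"
```

This is useful in cron jobs or monitoring scripts, where the default live-updating display would get in the way.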
Conclusion
Docker has a robust mechanism for setting both hard and soft memory limits for its containers. You can configure not just the RAM and swap memory but also the eagerness with which the container makes use of the swap memory. These limits are ultimately enforced by the kernel’s cgroups rather than by docker itself, but together they give you a complete memory management solution, so that your docker containers stay organized with as little waste as possible.

I’m a NameHero team member, and an expert on WordPress and web hosting. I’ve been in this industry since 2008. I’ve also developed apps on Android and have written extensive tutorials on managing Linux servers. You can contact me on my website WP-Tweaks.com!