In Linux, /dev/shm is a special location where you can store files and folders. It's available to every application on the system and is incredibly fast because the data isn't stored on disk but in RAM (it's typically implemented as a tmpfs file system). In addition, the data is only stored temporarily and is lost after a reboot, which makes it volatile.
The speed of /dev/shm makes it viable for a variety of use cases as shown below. But first, let’s see how it works in its raw form.
How /dev/shm Works
Let's say I want to store some text in a file under /dev/shm. I can write to it as if it were any other folder, like this:
echo "Hello from /dev/shm" > /dev/shm/testfile
As you can see, storing data in /dev/shm requires using the same commands that you’re already familiar with. There’s no special syntax. Just treat it as a regular file system and you’ll be good to go!
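To see the whole round trip, here's a short sketch that writes, reads back, and removes a file in /dev/shm (the name testfile is just an example):

```shell
# write to /dev/shm exactly like any other directory (tmpfs-backed)
echo "Hello from /dev/shm" > /dev/shm/testfile

# read it back with plain cat
cat /dev/shm/testfile

# remove it when done, freeing the RAM it occupied
rm /dev/shm/testfile
```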
Getting the Size of the /dev/shm Folder
RAM is a more precious resource than physical storage, so we have to keep a constant eye on how much memory /dev/shm is using. We don't want to starve other applications of memory they might need for their operations, particularly if you're running heavy workloads like scientific computing or large databases.
To see how much memory is currently allocated to /dev/shm, use the following command:
df -h /dev/shm
On my system, this reports an allocated size of 1.9 GB for /dev/shm. In general, this size defaults to around half of your physical RAM. To compare, check the total physical RAM on the same system with the following command:
free -h
On this system, free reports 3.8 GB of physical RAM, and the 1.9 GB available to /dev/shm is exactly half of that.
No Pre-Allocation – Only Dynamic Allocation
You might think that earmarking 50% of all available memory is a huge waste for something that might never be used. Imagine handing over half of your precious RAM like this! In reality, it's not as bad as it sounds: the memory available to /dev/shm isn't pre-allocated, and other applications can use it whenever they need it.
The size of /dev/shm is only the maximum amount of memory it can use; it doesn't mean the system reserves 50% of your RAM for this file system. Priority goes to other active processes. So if the rest of the processes on your system use up 80% of the available physical RAM, /dev/shm only gets to use what's left, namely 20%.
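You can see this dynamic allocation in action by writing a file and watching the Used column change (a quick sketch; the 100 MB size is arbitrary):

```shell
# "Used" starts low because tmpfs pages are allocated only on write
df -h /dev/shm

# create a 100 MB file; only now does /dev/shm consume that RAM
dd if=/dev/zero of=/dev/shm/blob bs=1M count=100 status=none
df -h /dev/shm   # "Used" grows by roughly 100M

# deleting the file returns the memory to the system immediately
rm /dev/shm/blob
```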
Monitoring Excess Usage of /dev/shm
But even though other applications take priority over /dev/shm, a rogue process or an unexpectedly memory-hungry application might end up filling it, leaving the rest of your processes starved for memory. In such situations, Linux starts using swap space to compensate for the lack of room.
Once swap is exhausted, the kernel invokes its "Out of Memory" (OOM) killer. This is a destructive process that starts terminating applications it judges to be lower priority and heavy users of memory. It doesn't prioritize cleaning out /dev/shm, so it might end up killing regular Linux applications instead. If at all possible, we want to avoid triggering the OOM killer.
For this reason, it's a good idea to always keep an eye on how much space /dev/shm is using. You can do this in several ways: check the usage from time to time, reduce the amount of memory available to it, or set up a notification that triggers when it uses more memory than you're comfortable allowing. It doesn't happen often, but it's worth keeping in the back of your mind.
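One lightweight way to automate this is a small shell check you could drop into cron; the 80% threshold and the echo-based alert are placeholders to adapt to your own setup:

```shell
#!/bin/sh
# alert when /dev/shm usage crosses a threshold (assumed here: 80%)
THRESHOLD=80

# df --output=pcent prints the Use% column; strip everything but digits
usage=$(df --output=pcent /dev/shm | tail -n 1 | tr -dc '0-9')

if [ "$usage" -gt "$THRESHOLD" ]; then
    # replace echo with mail, notify-send, or your alerting tool of choice
    echo "WARNING: /dev/shm is ${usage}% full"
fi
```

If you'd rather cap the size outright, a tmpfs mount can also be shrunk at runtime with something like `sudo mount -o remount,size=512M /dev/shm`, though you should check your distribution's defaults before changing this.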
Using /dev/shm for Inter-Process Communication
As we saw earlier, when you declare a variable in a bash script, it remains local to that script and can't be accessed from outside it. While this behavior is by design, it can make inter-process communication (IPC) difficult. Script writers can instead use the /dev/shm location to store data that they want other processes to access. Keeping in mind the storage limitations we discussed earlier, it's a very convenient way to temporarily store things you'll need shortly down the line.
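As a minimal sketch, one script can leave a status file in /dev/shm for another process to pick up (the file name jobstatus and the messages here are hypothetical):

```shell
# producer: records progress where any other process can see it
echo "step 3 of 10 complete" > /dev/shm/jobstatus

# consumer: could be a completely separate script or shell session
status=$(cat /dev/shm/jobstatus)
echo "producer reports: $status"

# clean up once all consumers are done with it
rm /dev/shm/jobstatus
```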
Danger of Losing Data
Because /dev/shm lives in RAM, its contents are volatile and are lost after a reboot. For this reason, we can't use it as a form of long-term storage. It's best used when you need the data later in the same process, or when two processes are communicating in real time. Anything beyond that, and there's a danger the RAM contents will be lost and your process won't work as intended. User beware!
Other Locations that Are Shareable
The /dev/shm folder isn’t the only one that’s accessible to all programs. There are plenty of other folders that share similar characteristics. For example:
- /tmp
- /var/tmp
- /usr/share
The above locations are shared across all accounts, and users can store temporary files in them. Some, like /var/tmp, even persist between reboots and can be used for more than just temporary storage. Moreover, they don't share the same strict restrictions on space that /dev/shm does, because they're typically backed by disk, and physical disk space is much less costly.
So why would anyone use /dev/shm, with its volatile memory and limited space? The reason, of course, is speed. RAM is much faster than a physical disk, and when you need to process data quickly, as in some database applications, a disk can simply be too slow. If such operations are repeated over and over, /dev/shm can make a real difference.
A side benefit of using /dev/shm is that it reduces wear and tear on physical disks, though this is rarely a concern with modern hardware and SSDs. Still, it could matter in some specialized situations.
/dev/shm vs Local Variables
While the most obvious benefit of /dev/shm is sharing data across applications, you can also use it to stash data within the same application. The obvious question then is: why not use variables instead? After all, variables also live in RAM and are easily accessible.
The main reason is size. You can store very large objects in /dev/shm that would be messy to keep in variables, which are best suited to small operational values like strings. Storing blobs of data in a variable involves things like serialization and memory allocation, which drive up both execution time and code complexity.
So if a program has to store a big chunk of data temporarily, it can use /dev/shm to quickly access it down the line instead of using a variable.
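In a shell script, that pattern might look like the following sketch; mktemp and the trap-based cleanup are conventional safeguards rather than anything specific to /dev/shm:

```shell
#!/bin/sh
# stage a large intermediate result in RAM instead of a shell variable
scratch=$(mktemp /dev/shm/scratch.XXXXXX)

# remove the scratch file even if the script exits early
trap 'rm -f "$scratch"' EXIT

# some expensive step writes a big chunk of data...
seq 1 100000 > "$scratch"

# ...and a later step reads it back at RAM speed
wc -l < "$scratch"
```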
Conclusion
As you can see, using /dev/shm is easy. It may be a little too easy since things can go badly when it swells to its maximum size and starts denying RAM to other processes. But if you use it responsibly, it can make your life easier for IPC and usage within the same program context.
I’m a NameHero team member, and an expert on WordPress and web hosting. I’ve been in this industry since 2008. I’ve also developed apps on Android and have written extensive tutorials on managing Linux servers. You can contact me on my website WP-Tweaks.com!