In this article we discuss the major concepts concerning memory management in the Linux operating system.
Table of contents.
- Virtual memory.
- Page caching.
- Huge pages.
- Anonymous memory.
- Nodes and Zones.
- Reclaiming pages.
- OOM killer.
Programs are brought into memory and placed within a process for execution.
Binding of instructions and data to memory addresses can happen at three points:
- Compile time: if the memory location is known in advance, absolute code can be generated.
- Load time: if the location is not known at compile time, relocatable code is generated and addresses are fixed when the program is loaded.
- Execution time: binding is delayed until run time, which allows a process to be moved in memory during its execution.
Virtual memory.
In a computer system physical memory is a finite resource, and managing it directly is complex; this is why the concept of virtual memory exists.
Virtual memory abstracts the details of physical memory away from application software, keeps only the needed data in physical memory, and protects and controls data sharing between processes.
With virtual memory, every memory access uses a virtual address which the hardware translates to a physical address.
Physical memory is divided into pages which can be mapped as virtual pages.
Mappings are described by hierarchically organized page tables that allow translation from a virtual address to a physical address.
The lowest-level tables hold the physical addresses of the actual pages used by an application, while higher-level tables hold the physical addresses of lower-level tables.
A dedicated register (for example, CR3 on x86) holds the physical address of the top-level page table.
During address translation this register is used to access the top-level table, which is indexed by the highest bits of the virtual address; the next groups of bits index the tables at each lower level in turn.
Page caching.
Physical memory is volatile, so to use data it must first be read from a file on a storage device into memory.
When a file is read, the data is placed into the page cache to avoid expensive disk accesses on subsequent reads.
When a file is written, the data is also placed into the page cache and later written back to the storage device.
Pages holding modified data are marked dirty; before such a page can be reused, its contents must be synchronized back to the file.
Address translation requires several memory accesses, and memory access is slow relative to processor speed.
To avoid spending processor cycles on every translation, the CPU maintains a cache of recent translations called the TLB (Translation Lookaside Buffer).
Huge pages.
Huge pages are mappings of larger units, for example 2M or 1G, created using entries of the higher levels of the page table hierarchy.
Since TLB entries are a scarce resource, this mechanism reduces TLB pressure and improves the TLB hit rate, thereby improving overall system performance.
Linux supports mapping physical memory with huge pages through two mechanisms: the HugeTLB file system (hugetlbfs) and Transparent HugePages (THP).
The former is a RAM-backed file system whose files reside in memory and are mapped with huge pages; users or system administrators must explicitly configure it, for example by reserving huge pages in advance. With the latter, the kernel transparently maps eligible parts of a process's memory with huge pages, with no application changes required.
Anonymous memory.
Anonymous memory is memory that is not backed by a file system; it makes up the virtual memory areas a program accesses for its own data.
It is created implicitly for a program's stack and heap, or explicitly by calls to the mmap system call.
A read access to such an area results in a page table entry referencing a special physical page filled with zeros.
A write access results in the allocation of a real physical page to hold the written data.
That page is marked dirty, and if the kernel decides to reuse it, its contents must first be swapped out.
Nodes and Zones.
Multi-processor systems often have non-uniform memory access (NUMA): memory is arranged into banks whose access latency depends on their distance from the processor. Each such bank is called a node.
In Linux each node has its own set of zones, each with lists of free and used pages and statistics counters.
Hardware also imposes restrictions on how physical memory can be accessed, e.g. some devices can only perform DMA to a limited range of addresses.
In Linux memory pages are therefore grouped into zones according to their possible uses. The set of zones is hardware dependent, since not all architectures define all zones and DMA requirements differ across platforms.
Reclaiming pages.
This is the process of freeing memory pages and repurposing them.
Depending on use, physical pages can hold kernel data structures, DMA-able buffers for device drivers, data read from the file system, memory allocated by processes, and so on.
Pages that can be freed at any time, because their contents can be recreated or written elsewhere, are referred to as reclaimable, e.g. page cache and anonymous memory.
Unreclaimable pages cannot be repurposed; they remain in use until explicitly freed by their owner.
Reclaiming of pages can happen synchronously or asynchronously.
As a system runs, memory becomes fragmented, and sometimes an allocation of a large physically contiguous area is needed, e.g. when a device driver requires a large buffer for DMA.
This problem is addressed by compaction, whereby occupied pages are moved from the lower part of a memory zone to free pages in the upper part of the zone.
The free pages left behind then join together to form a large physically contiguous region.
Like reclamation, compaction can happen synchronously or asynchronously.
OOM killer.
A system's memory may become so exhausted that the kernel is unable to reclaim enough of it to continue operating.
The OOM (out-of-memory) killer then selects a sacrificial task for the sake of overall system health; this task is killed in the hope that enough memory is freed for normal operation to proceed.
To summarize, virtual memory is an abstraction over physical memory that lets the kernel give each process its own address space.
Whenever a process modifies data, the corresponding page is marked dirty; dirty pages are later written back to their backing storage before they can be reused.
With this article at OpenGenus, you should have a solid understanding of memory management in Linux.