Question: What are dirty pages and what is their purpose?

Whenever an application or database process needs to bring a virtual page into physical memory and no free physical pages are left, the OS must evict some of the old pages.

If an old page has never been written to, it does not need to be saved: it can simply be re-read from the data file. But if the page has already been modified, it must be preserved somewhere so the application/database can reuse it later on. Such a modified page is called a dirty page.

The OS stores such dirty pages in swap files (so they can be removed from physical memory and make room for 'new' pages). If a lot of data is moved from the page cache into the dirty page area, this can cause a significant IO bottleneck when the swap device sits on a local disk (e.g. sda), and it causes further issues if that same local disk also hosts the root (OS) filesystem.

The page cache in Linux is simply a disk cache that improves OS performance, helping with intensive reads and writes on files.
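A quick way to see the page cache in action (assuming the procps `free` utility is available) is:

```shell
# "buff/cache" is memory currently used by the kernel page cache
# and buffers; it is reclaimed automatically when applications
# need more RAM, so it is not "lost" memory
free -h
```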

A by-product of the page cache is the dirty page, as explained in the example above. Dirty pages can also be observed whenever an application writes to or creates a file: the first write lands in the page cache area, which is why creating a 100 MB file can be really fast:

# dd if=/dev/zero of=testfile.txt bs=1M count=100
100+0 records in
100+0 records out
10485760 bytes (100 MB) copied, 0,1121043 s, 866 MB/s

That's because the file is created in a memory region, not on the actual disk, hence the response time is really fast. Under the OS this shows up in /proc/meminfo, in the 'Dirty' row:

Before the above command is executed, note down the 'Dirty' row in /proc/meminfo:

# more /proc/meminfo | grep -i dirty
Dirty: 96 kB

After the command is executed:

# more /proc/meminfo | grep -i dirty
Dirty: 102516 kB

Periodically the OS or the application/database will initiate a sync, which writes the actual testfile.txt to disk:

# more /proc/meminfo | grep -i dirty
Dirty: 76 kB
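The writeback can also be triggered by hand with the `sync` command, which flushes all dirty pages to their backing storage; a minimal check:

```shell
# flush all dirty pages, then re-read the counter;
# the Dirty value should drop close to zero shortly after
sync
grep -i dirty /proc/meminfo
```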

Oracle Database, for example, does not allow such writes to stay only in a memory region: if the OS crashes or a SAN LUN fails, the data would be compromised. That's why Oracle Database requires data to be 'in sync', meaning all writes must be confirmed by the backend (disk/LUN) before the database issues more write requests.
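To see the cost of confirmed writes, the earlier `dd` test can be repeated with synchronous output (`oflag=sync`); the file name here is just an illustration, and throughput will drop to roughly the real speed of the underlying disk:

```shell
# each 1 MB block is flushed to the storage layer before dd
# continues, so the page cache no longer hides the disk latency
dd if=/dev/zero of=testfile_sync.txt bs=1M count=100 oflag=sync
rm -f testfile_sync.txt
```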

Normally databases/applications periodically drop their cache, so dirty pages are written to disk in small chunks. In some cases dirty pages can grow in size, for example when the application/database has not configured the page cache mechanism properly.
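On Linux, clean page cache can also be dropped manually through procfs (root required); a sketch, noting that `sync` must come first because drop_caches only discards clean pages:

```shell
# sync writes dirty pages out first; drop_caches then discards
# clean page cache (1), dentries/inodes (2), or both (3)
sync
echo 3 > /proc/sys/vm/drop_caches
```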

So dirty pages can be written to swap files (the swap area) but also to a special region on disk (LUN/filesystem). If, for example, we push more than 100 MB into a swap file that will later be re-read from it, we might cause unnecessary IO issues on the swap device. Enterprise systems store swap files and the swap area on solid state drives (SSD) or a dedicated LUN so that local disk performance is not impacted (normally the swap region is created on the local disk).
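Where the swap area actually lives can be checked from /proc (and, where util-linux is available, with `swapon --show`):

```shell
# lists each swap device/file, its size and current usage
cat /proc/swaps
swapon --show 2>/dev/null || true
```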

In some cases the application/database might have internal issues and dirty pages will be written to swap but never re-used. This causes the swap area to grow, generates unnecessary IO on the local disk, and leads to large swap usage under the OS.

To find out at what point the OS will try to flush dirty pages back to the disk layer, check the official kernel documentation on virtual memory sysctls and look for settings like:

vm.dirty_background_ratio
vm.dirty_ratio
vm.dirty_background_bytes
vm.dirty_expire_centisecs
vm.swappiness

The above settings need to be tuned per database/application requirement, as the OS has no universal 'best practice' values for them; they are tuned per DB/APP load and configuration.
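The current values can be inspected with `sysctl` or directly via procfs:

```shell
# read the writeback/swap tunables currently in effect
sysctl vm.dirty_background_ratio vm.dirty_ratio vm.swappiness 2>/dev/null
# the same values are exposed as files under /proc/sys/vm
cat /proc/sys/vm/dirty_ratio
```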

Whenever the application/database demands free pages in physical memory, the OS, which tends to keep everything in the page cache, will need to reclaim some pages and write the dirty ones out first. This process works fine if the application/database side is properly tuned and scaled; otherwise it causes very aggressive swapping, as the OS will need to write dirty pages out to the swap disk. This behaviour can be controlled via the vm.swappiness setting.

If the application/database causes aggressive swapping, it can generate serious IO writes on the swap device and lead to serious system stalls. Always make sure that applications/databases are properly configured in terms of memory management.

As explained, not all pages are marked as dirty: mostly, unused pages simply get discarded rather than marked dirty (it all depends on whether the already-allocated pages were modified or not).

To verify which PIDs are using the swap area, the command below can be used:

for file in /proc/*/status
do
    # print "Name VmSwap kB" for each process; a printf format
    # string avoids breakage when fields contain % characters
    awk '/^(VmSwap|Name)/{printf "%s %s ", $2, $3}END{print ""}' "$file"
done | sort -k 2 -n -r
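A related one-liner (a sketch using the same /proc/*/status source) sums total per-process swap usage:

```shell
# sum VmSwap across all processes; sum+0 forces a numeric 0
# even when no process has any swapped-out pages
awk '/^VmSwap/{sum+=$2} END{print sum+0, "kB"}' /proc/*/status 2>/dev/null
```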

Releasing 'consumed' swap space is quite limited. Normally, if a PID exits cleanly or is shut down gracefully, its swap space is reclaimed, but killing a PID, or an abnormal end such as a segfault, might still leave swap space consumed. Another option is to reboot, since running the swapoff and swapon commands can cause serious issues or even put the system into a panic state.