Mel Gorman posted the seventh version of his Memory Compaction patches, asking, "are there any further obstacles to merging?" The patches, first posted in May of 2007, provide a mechanism for moving GFP_MOVABLE pages into a smaller number of pageblocks, reducing external memory fragmentation. Mel explained that 'compaction' is one of several methods of defragmenting memory: "for example, lumpy reclaim is a form of defragmentation as was slub 'defragmentation' (really a form of targeted reclaim). Hence, this is called 'compaction' to distinguish it from other forms of defragmentation."
The core compaction patch explains that memory is compacted in a zone by relocating movable pages towards the end of the zone:
"A single compaction run involves a migration scanner and a free scanner. Both scanners operate on pageblock-sized areas in the zone. The migration scanner starts at the bottom of the zone and searches for all movable pages within each area, isolating them onto a private list called migratelist. The free scanner starts at the top of the zone and searches for suitable areas and consumes the free pages within making them available for the migration scanner. The pages isolated for migration are then migrated to the newly isolated free pages."
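The quoted two-scanner design can be modeled in a few lines of Python. This is an illustrative simplification, not the kernel's implementation: the zone is a flat list of page states, pages are moved one at a time rather than in pageblock-sized batches, and the function name is invented for the sketch.

```python
# Toy model of a single compaction run. "M" = movable page in use,
# "F" = free page, "U" = unmovable page. The migration scanner walks
# up from the bottom of the zone; the free scanner walks down from
# the top; movable pages are migrated into free pages near the top.

def compact_zone(zone):
    migrate = 0              # migration scanner: starts at the bottom
    free = len(zone) - 1     # free scanner: starts at the top
    while migrate < free:
        if zone[migrate] != "M":   # not movable: skip it
            migrate += 1
            continue
        if zone[free] != "F":      # not free: keep scanning down
            free -= 1
            continue
        # "Migrate" the movable page into the free slot near the top,
        # leaving a free page behind where it used to be.
        zone[free], zone[migrate] = zone[migrate], "F"
        migrate += 1
        free -= 1
    return zone
```

After a run, movable pages have been packed toward the end of the zone and the freed pages collect at the bottom as larger contiguous runs, which is the point of the exercise.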
"The objective of this patchset is to keep the system in a state where actions such as page reclaim or memory compaction will reduce external fragmentation in the system," wrote Mel Gorman, describing his set of thirteen patches labeled "reduce external fragmentation by grouping pages by mobility v30". He explained, "it works by grouping pages of similar mobility together in PAGEBLOCK_NR_PAGES areas." He defined the four mobility types as: "UNMOVABLE - Pages that cannot be trivially reclaimed or moved; MOVABLE - Pages that can be moved using the page migration mechanism; RECLAIMABLE - Pages that the kernel can often directly reclaim such as those used for inode caches; RESERVE - The areas where min_free_kbytes-related pages should be stored". Mel added:
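The grouping idea can be sketched as follows. This is an assumption-laden illustration, not kernel code: each pageblock is tagged with one migrate type, and an allocation of a given mobility is satisfied from a matching pageblock when possible, falling back to any pageblock only when it must. The class, function, and the `PAGEBLOCK_NR_PAGES` value here are all invented for the sketch.

```python
# Sketch of grouping pages by mobility: allocations of one type are
# steered into pageblocks of the same type, so that, for example,
# unmovable pages do not end up scattered through movable areas.

PAGEBLOCK_NR_PAGES = 4  # illustrative; the real value is arch-dependent

MIGRATE_TYPES = ("UNMOVABLE", "MOVABLE", "RECLAIMABLE", "RESERVE")

class Pageblock:
    def __init__(self, migratetype):
        self.migratetype = migratetype
        self.free = PAGEBLOCK_NR_PAGES  # free pages remaining

def alloc_page(pageblocks, migratetype):
    """Prefer a pageblock of the matching mobility; fall back to any."""
    candidates = [b for b in pageblocks if b.free > 0]
    matching = [b for b in candidates if b.migratetype == migratetype]
    block = (matching or candidates)[0]
    block.free -= 1
    return block
```

The fallback path is where fragmentation avoidance can still degrade: once allocations of one type spill into pageblocks of another, the grouping guarantee weakens, which is why compaction remains useful on top of it.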
"This grouping clearly requires additional work in the page allocator. kernbench shows effectively no performance difference varying between -0.2% and +1% on a variety of test machines. Success rates for huge page allocation are dramatically increased. For example, on a ppc64 machine, the vanilla kernel was only able to allocate 1% of memory as a hugepage and this was due to a single hugepage reserved as min_free_kbytes. With these patches applied, 40% was allocatable as superpages."
Mel Gorman offered a first release of a patchset that compacts memory: "this is a prototype for compacting memory to reduce external fragmentation so that free memory exists as fewer, but larger contiguous blocks. Rather than being a full defragmentation solution, this focuses exclusively on pages that are movable via the page migration mechanism." He noted that the patchset was incomplete and that memory was compacted only manually, not automatically: "this version of the patchset is mainly concerned with getting the compaction mechanism correct." Mel went on to describe how it works:
"A single compaction run involves two scanners operating within a zone - a migration and a free scanner. The migration scanner starts at the beginning of a zone and finds all movable pages within one pageblock_nr_pages-sized area and isolates them on a migratepages list. The free scanner begins at the end of the zone and searches on a per-area basis for enough free pages to migrate all the pages on the migratepages list. As each area is respectively migrated or exhausted of free pages, the scanners are advanced one area. A compaction run completes within a zone when the two scanners meet."
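The per-area control flow described in the quote, where the free scanner consumes whole areas until it has gathered enough free pages for the migratepages list and the run ends when the scanners meet, can be sketched as follows. This is a hedged simplification under stated assumptions: areas are reduced to page counts, and none of these names come from the kernel source.

```python
# Sketch of per-area scanner advancement. areas_movable[i] holds the
# number of movable pages in area i; areas_free[i] the number of free
# pages. The migration scanner advances one area per step from the low
# end; the free scanner advances from the high end as areas are
# exhausted of free pages; the run completes when the scanners meet.

def compaction_run(areas_movable, areas_free):
    lo, hi = 0, len(areas_movable) - 1
    migrated = 0
    while lo < hi:
        need = areas_movable[lo]   # pages isolated onto the migratepages list
        got = 0
        # Free scanner: consume areas until enough free pages are found
        # or the scanners are about to meet.
        while got < need and hi > lo:
            got += areas_free[hi]
            hi -= 1
        migrated += min(need, got)
        lo += 1   # this area has been migrated: advance migration scanner
    return migrated
```

Note that the termination condition means a zone is never fully compacted in one pass if movable and free pages interleave near the midpoint; the real code accepts this, since the goal is larger free blocks, not a perfect layout.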