mm, compaction: fix fast_isolate_around() to stay within boundaries

Depending on the memory configuration, isolate_freepages_block() may scan
pages outside the target range and cause a panic.

The panic can occur on systems where multiple zones share a single pageblock.

The reason it is rare is that it only happens in such special
configurations.  Depending on how many similar systems there are, it
may be a good idea to fix this problem in older kernels as well.

The problem is that the pfn passed to fast_isolate_around() can lie outside
the target range [start_pfn, end_pfn].  Therefore we have to consider both
the case where pfn < start_pfn and the case where end_pfn < pfn, as
illustrated by the two cases and the sketch below.

This problem should have been addressed by commit 6e2b7044c1 ("mm,
compaction: make fast_isolate_freepages() stay within zone"), but there was
an oversight.

 Case 1: pfn < start_pfn

  <at memory compaction for node Y>
  |  node X's zone  | node Y's zone
  +-----------------+------------------------------...
   pageblock    ^   ^     ^
  +-----------+-----------+-----------+-----------+...
                ^   ^     ^
                ^   ^      end_pfn
                ^    start_pfn = cc->zone->zone_start_pfn
                 pfn
                <---------> scanned range by "Scan After"

 Case 2: end_pfn < pfn

  <at memory compaction for node X>
  |  node X's zone  | node Y's zone
  +-----------------+------------------------------...
   pageblock  ^     ^   ^
  +-----------+-----------+-----------+-----------+...
              ^     ^   ^
              ^     ^    pfn
              ^      end_pfn
               start_pfn
              <---------> scanned range by "Scan Before"
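
To see how each case escapes the clamped range, here is a sketch of the scan
logic this patch removes (intermediate lines are omitted; the two clamping
lines are paraphrased from commit 6e2b7044c1 and are shown only for context,
they are not part of the hunks below):

    /* Pageblock boundaries, clamped to the zone being compacted */
    start_pfn = max(pageblock_start_pfn(pfn), cc->zone->zone_start_pfn);
    end_pfn = min(pageblock_end_pfn(pfn), zone_end_pfn(cc->zone));

    /* Scan before: [start_pfn, pfn) -- runs past end_pfn in Case 2 */
    if (start_pfn != pfn) {
            isolate_freepages_block(cc, &start_pfn, pfn, &cc->freepages, 1, false);
            if (cc->nr_freepages >= cc->nr_migratepages)
                    return;
    }

    /* Scan after: [pfn + nr_isolated, end_pfn) -- starts below start_pfn in Case 1 */
    start_pfn = pfn + nr_isolated;
    if (start_pfn < end_pfn)
            isolate_freepages_block(cc, &start_pfn, end_pfn, &cc->freepages, 1, false);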

It seems that there is no good reason to skip nr_isolated pages just after
the given pfn.  So let's perform a simple scan from start_pfn to end_pfn
instead of dividing the scan into "Before" and "After".
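
As a sanity check, the boundary arithmetic can be exercised outside the
kernel.  The following standalone C sketch plugs made-up Case 1 numbers into
the same max()/min() clamping; the zone layout, the pfn values, the pageblock
order, and the simplified pageblock helpers are all hypothetical stand-ins
for the kernel's own macros, not kernel code:

    /* Standalone illustration of Case 1; all values are hypothetical. */
    #include <stdio.h>

    #define PAGEBLOCK_ORDER     9                       /* assumed: 2^9 = 512 pages */
    #define PAGEBLOCK_NR_PAGES  (1UL << PAGEBLOCK_ORDER)

    /* Simplified stand-ins for the kernel's pageblock helpers */
    static unsigned long pageblock_start_pfn(unsigned long pfn)
    {
            return pfn & ~(PAGEBLOCK_NR_PAGES - 1);
    }

    static unsigned long pageblock_end_pfn(unsigned long pfn)
    {
            return pageblock_start_pfn(pfn) + PAGEBLOCK_NR_PAGES;
    }

    static unsigned long max_ul(unsigned long a, unsigned long b) { return a > b ? a : b; }
    static unsigned long min_ul(unsigned long a, unsigned long b) { return a < b ? a : b; }

    int main(void)
    {
            /* Hypothetical layout: the compacted zone starts mid-pageblock. */
            unsigned long zone_start_pfn = 0x10100;     /* cc->zone->zone_start_pfn */
            unsigned long zone_end_pfn   = 0x20000;
            unsigned long pfn            = 0x100c0;     /* below zone_start_pfn (Case 1) */
            unsigned long nr_isolated    = 32;

            unsigned long start_pfn = max_ul(pageblock_start_pfn(pfn), zone_start_pfn);
            unsigned long end_pfn   = min_ul(pageblock_end_pfn(pfn), zone_end_pfn);

            /* Old "Scan after" range: may begin below the zone being compacted. */
            printf("old scan after : [%#lx, %#lx) -- starts %s zone_start_pfn %#lx\n",
                   pfn + nr_isolated, end_pfn,
                   pfn + nr_isolated < zone_start_pfn ? "BELOW" : "at/above",
                   zone_start_pfn);

            /* New single scan: clamped to the part of the pageblock inside the zone. */
            printf("new single scan: [%#lx, %#lx) -- always within the zone\n",
                   start_pfn, end_pfn);

            return 0;
    }

With these made-up numbers the old "Scan after" range starts 32 pages below
zone_start_pfn, i.e. inside the neighbouring node's zone, while the clamped
[start_pfn, end_pfn) range scanned by the fixed code never leaves the zone.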

Link: https://lkml.kernel.org/r/20221026112438.236336-1-a.naribayashi@fujitsu.com
Fixes: 6e2b7044c1 ("mm, compaction: make fast_isolate_freepages() stay within zone")
Signed-off-by: NARIBAYASHI Akira <a.naribayashi@fujitsu.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
commit be21b32afe (parent eba39236f1)
NARIBAYASHI Akira, 2022-10-26 20:24:38 +09:00; committed by Andrew Morton
1 file changed, 5 insertions(+), 13 deletions(-)


--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1344,7 +1344,7 @@ move_freelist_tail(struct list_head *freelist, struct page *freepage)
 }
 
 static void
-fast_isolate_around(struct compact_control *cc, unsigned long pfn, unsigned long nr_isolated)
+fast_isolate_around(struct compact_control *cc, unsigned long pfn)
 {
         unsigned long start_pfn, end_pfn;
         struct page *page;
@@ -1365,21 +1365,13 @@ fast_isolate_around(struct compact_control *cc, unsigned long pfn, unsigned long
         if (!page)
                 return;
 
-        /* Scan before */
-        if (start_pfn != pfn) {
-                isolate_freepages_block(cc, &start_pfn, pfn, &cc->freepages, 1, false);
-                if (cc->nr_freepages >= cc->nr_migratepages)
-                        return;
-        }
-
-        /* Scan after */
-        start_pfn = pfn + nr_isolated;
-        if (start_pfn < end_pfn)
-                isolate_freepages_block(cc, &start_pfn, end_pfn, &cc->freepages, 1, false);
+        isolate_freepages_block(cc, &start_pfn, end_pfn, &cc->freepages, 1, false);
 
         /* Skip this pageblock in the future as it's full or nearly full */
         if (cc->nr_freepages < cc->nr_migratepages)
                 set_pageblock_skip(page);
+
+        return;
 }
 
 /* Search orders in round-robin fashion */
@@ -1556,7 +1548,7 @@ fast_isolate_freepages(struct compact_control *cc)
                 return cc->free_pfn;
 
         low_pfn = page_to_pfn(page);
-        fast_isolate_around(cc, low_pfn, nr_isolated);
+        fast_isolate_around(cc, low_pfn);
         return low_pfn;
 }
 