mm/memory-failure: check the mapcount of the precise page

commit c79c5a0a00a9457718056b588f312baadf44e471 upstream.

A process may map only some of the pages in a folio, and might be missed
if it maps the poisoned page but not the head page.  Or it might be
unnecessarily hit if it maps the head page, but not the poisoned page.
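
To illustrate, here is a minimal userspace sketch (not kernel code: struct
toy_folio, mapped_via_head() and mapped_precise() are hypothetical stand-ins
for struct page, page_mapped() on the head page and page_mapped() on the
precise page).  With only a tail page mapped, the head-page check misses the
mapping while the per-page check finds it:

#include <stdbool.h>
#include <stdio.h>

#define FOLIO_NR_PAGES 8

/* Hypothetical stand-in for a folio with per-subpage mapcounts. */
struct toy_folio {
	int mapcount[FOLIO_NR_PAGES];	/* index 0 is the head page */
};

/* Old check: only the head page's mapcount is consulted. */
static bool mapped_via_head(const struct toy_folio *f)
{
	return f->mapcount[0] > 0;
}

/* New check: consult the mapcount of the precise (poisoned) page. */
static bool mapped_precise(const struct toy_folio *f, int idx)
{
	return f->mapcount[idx] > 0;
}

int main(void)
{
	/* A process maps only subpage 3; the head page is unmapped. */
	struct toy_folio f = { .mapcount = { [3] = 1 } };
	int poisoned = 3;

	/* Head-page check misses the mapping; precise check finds it. */
	printf("head check:    %s\n", mapped_via_head(&f) ? "mapped" : "missed");
	printf("precise check: %s\n",
	       mapped_precise(&f, poisoned) ? "mapped" : "missed");
	return 0;
}

The converse case (head page mapped, poisoned tail page not) is the one where
a head-page check would report a mapping that does not exist for the poisoned
page and kill the process unnecessarily.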

Link: https://lkml.kernel.org/r/20231218135837.3310403-3-willy@infradead.org
Fixes: 7af446a841 ("HWPOISON, hugetlb: enable error handling path for hugepage")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 mm/memory-failure.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1421,7 +1421,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
	 * This check implies we don't kill processes if their pages
	 * are in the swap cache early. Those are always late kills.
	 */
-	if (!page_mapped(hpage))
+	if (!page_mapped(p))
 		return true;
 
 	if (PageKsm(p)) {
@@ -1477,10 +1477,10 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
 		try_to_unmap(folio, ttu);
 	}
 
-	unmap_success = !page_mapped(hpage);
+	unmap_success = !page_mapped(p);
 	if (!unmap_success)
 		pr_err("%#lx: failed to unmap page (mapcount=%d)\n",
-		       pfn, page_mapcount(hpage));
+		       pfn, page_mapcount(p));
 
 	/*
 	 * try_to_unmap() might put mlocked page in lru cache, so call