Another memory error injection interface, debugfs:hwpoison/corrupt-pfn, also
takes a bogus refcount for hwpoison_filter(). This is justified because it
only does a coarse filter, expecting that memory_failure() redoes the check
properly.
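For context, a rough sketch of that injection path, paraphrased from memory
of mm/hwpoison-inject.c (capability/validity checks and the filter-enable
knob are elided, and details may differ):

  /* debugfs:hwpoison/corrupt-pfn write handler: the page handed to
   * hwpoison_filter() here carries no properly taken refcount -- it is
   * only a coarse pre-check, memory_failure() re-validates later. */
  static int hwpoison_inject(void *data, u64 val)
  {
          unsigned long pfn = val;
          struct page *p = pfn_to_page(pfn);

          if (hwpoison_filter(p))
                  return 0;

          return memory_failure(pfn, 0);
  }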
Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Aristeu Rozanski <aris@ruivo.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Dmitry Yakunin <zeil@yandex-team.ru>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Oscar Salvador <osalvador@suse.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20200922135650.1634-4-osalvador@suse.de
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
hpage is never used after try_to_split_thp_page() in memory_failure(), so
we don't have to update it there. Let's not recalculate/use hpage.
Suggested-by: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Aristeu Rozanski <aris@ruivo.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Dmitry Yakunin <zeil@yandex-team.ru>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Oscar Salvador <osalvador@suse.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20200922135650.1634-3-osalvador@suse.de
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Patch series "HWPOISON: soft offline rework", v7.
This patchset fixes a couple of issues that the patchset Naoya sent [1]
contained due to rebasing problems and a misunderstanding.
The main focus of this series is to stabilize soft offline. Historically,
soft offlined pages have suffered from race conditions because PageHWPoison
is used a little too aggressively, which (directly or indirectly) invades
other mm code that cares little about hwpoison. This results in
unexpected behavior or a kernel panic, which is very far from soft offline's
"do not disturb userspace or other kernel components" policy. An example
of this can be found here [2].
Along with several cleanups, this series refactors and changes the way soft
offline works. The main point of this change set is to contain the target
page either "via the buddy allocator" or in the migration path. For the
former we first free the target page as we do for normal pages, and once it
has reached the buddy allocator and has been taken off the freelists, we flag
it as HWPoison. For the latter we never get to release the page in
unmap_and_move(), so the page is under our control and we can handle it in
hwpoison code.
[1] https://patchwork.kernel.org/cover/11704083/
[2] https://lore.kernel.org/linux-mm/20190826104144.GA7849@linux/T/#u
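As an illustration of the buddy-containment idea above, a minimal sketch
(helper names follow the series, e.g. take_page_off_buddy() which it
introduces; the real code also deals with hugetlb and error paths):

  /* Once the target page has been freed back to the buddy allocator,
   * pull it off the freelists and only then mark it HWPoison, so no
   * other mm path can pick it up in a half-poisoned state. */
  static bool page_handle_poison(struct page *page)
  {
          if (!take_page_off_buddy(page))
                  return false;           /* raced with an allocation */

          SetPageHWPoison(page);
          page_ref_inc(page);             /* hwpoison keeps its own reference */
          num_poisoned_pages_inc();
          return true;
  }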
This patch (of 14):
Drop the PageHuge check, which is dead code since memory_failure() forks
into memory_failure_hugetlb() for hugetlb pages.
memory_failure() and memory_failure_hugetlb() share some functions like
hwpoison_user_mappings() and identify_page_state(), so those should
properly handle 4kB pages, thp, and hugetlb.
Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Dmitry Yakunin <zeil@yandex-team.ru>
Cc: Qian Cai <cai@lca.pw>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Aristeu Rozanski <aris@ruivo.org>
Cc: Oscar Salvador <osalvador@suse.com>
Link: https://lkml.kernel.org/r/20200922135650.1634-1-osalvador@suse.de
Link: https://lkml.kernel.org/r/20200922135650.1634-2-osalvador@suse.de
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The file_ra_state being passed into page_cache_sync_readahead() was being
ignored in favour of using the one embedded in the struct file. The only
caller for which this makes a difference is the fsverity code if the file
has been marked as POSIX_FADV_RANDOM, but it's confusing and worth fixing.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Eric Biggers <ebiggers@google.com>
Link: https://lkml.kernel.org/r/20200903140844.14194-10-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Fold ra_submit() into its last remaining user and pass the
readahead_control struct to both do_page_cache_ra() and
page_cache_sync_ra().
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Eric Biggers <ebiggers@google.com>
Link: https://lkml.kernel.org/r/20200903140844.14194-9-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Reimplement page_cache_sync_readahead() and page_cache_async_readahead()
as wrappers around versions of the function which take a readahead_control
in preparation for making do_sync_mmap_readahead() pass down an RAC
struct.
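As a sketch of the resulting shape (paraphrased from the pagemap.h of that
era; field and parameter details are from memory and may differ slightly):

  /* The old entry point keeps its signature but just builds a
   * readahead_control on the stack and calls the ractl-based version. */
  static inline void page_cache_sync_readahead(struct address_space *mapping,
                  struct file_ra_state *ra, struct file *file, pgoff_t index,
                  unsigned long req_count)
  {
          DEFINE_READAHEAD(ractl, file, mapping, index);

          page_cache_sync_ra(&ractl, ra, req_count);
  }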
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Eric Biggers <ebiggers@google.com>
Link: https://lkml.kernel.org/r/20200903140844.14194-8-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Reimplement force_page_cache_readahead() as a wrapper around
force_page_cache_ra(). Pass the existing readahead_control from
page_cache_sync_readahead().
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Eric Biggers <ebiggers@google.com>
Link: https://lkml.kernel.org/r/20200903140844.14194-7-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Make ondemand_readahead() take a readahead_control struct in preparation
for making do_sync_mmap_readahead() pass down an RAC struct.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Eric Biggers <ebiggers@google.com>
Link: https://lkml.kernel.org/r/20200903140844.14194-6-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Rename __do_page_cache_readahead() to do_page_cache_ra() and call it
directly from ondemand_readahead() instead of indirecting via ra_submit().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Eric Biggers <ebiggers@google.com>
Link: https://lkml.kernel.org/r/20200903140844.14194-5-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Define it in the callers instead of in page_cache_ra_unbounded().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Eric Biggers <ebiggers@google.com>
Link: https://lkml.kernel.org/r/20200903140844.14194-4-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Patch series "Readahead patches for 5.9/5.10".
These are infrastructure for both the THP patchset and for the fscache
rewrite.
For both pieces of infrastructure being built on top of this patchset, we
want the ractl to be available higher in the call stack.
For David's work, he wants to add the 'critical page' to the ractl so that
he knows which page NEEDS to be brought in from storage, and which ones
are nice-to-have. We might want something similar in block storage too.
It used to be simple -- the first page was the critical one, but then mmap
added fault-around, and so for that use case the middle page is the
critical one. Anyway, I don't have any code to show that yet; we just
know that the lowest point in the callchain where we have that information
is do_sync_mmap_readahead(), and so the ractl needs to start its life
there.
For THP, we have the code that needs it. It's actually the apex patch to
the series; the one which finally starts to allocate THPs and present them
to consenting filesystems:
798bcf30ab
This patch (of 8):
Allow for a more concise definition of a struct readahead_control.
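Roughly along these lines (a sketch of the helper; the exact field names are
an assumption from memory):

  /* Declare and initialise a readahead_control on the stack in one go. */
  #define DEFINE_READAHEAD(rac, f, m, i)                                  \
          struct readahead_control rac = {                                \
                  .file = f,                                              \
                  .mapping = m,                                           \
                  ._index = i,                                            \
          }

Callers can then write DEFINE_READAHEAD(ractl, file, mapping, index);
instead of an open-coded initializer.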
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Eric Biggers <ebiggers@google.com>
Cc: David Howells <dhowells@redhat.com>
Link: https://lkml.kernel.org/r/20200903140844.14194-1-willy@infradead.org
Link: https://lkml.kernel.org/r/20200903140844.14194-3-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The nr_thps counter is to support THPs in the page cache when the
filesystem doesn't understand THPs. Eventually it will be removed, but we
should still support filesystems which do not understand THPs yet. Move
the nr_thp manipulation functions to filemap.h since they're page-cache
specific.
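A sketch of those helpers (hedged; the config guard and field name are from
memory):

  /* Page-cache-private accounting of THPs sitting in a mapping whose
   * filesystem does not (yet) understand THPs. */
  static inline void filemap_nr_thps_inc(struct address_space *mapping)
  {
  #ifdef CONFIG_READ_ONLY_THP_FOR_FS
          atomic_inc(&mapping->nr_thps);
  #endif
  }

  static inline void filemap_nr_thps_dec(struct address_space *mapping)
  {
  #ifdef CONFIG_READ_ONLY_THP_FOR_FS
          atomic_dec(&mapping->nr_thps);
  #endif
  }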
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Song Liu <songliubraving@fb.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Christoph Hellwig <hch@infradead.org>
Link: https://lkml.kernel.org/r/20200916032717.22917-2-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The page cache needs to know whether the filesystem supports THPs so that
it doesn't send THPs to filesystems which can't handle them. Dave Chinner
points out that getting from the page mapping to the filesystem type is
too many steps (mapping->host->i_sb->s_type->fs_flags) so cache that
information in the address space flags.
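A sketch of how the cached bit can be queried (AS_THP_SUPPORT as the flag
name is an assumption based on the series):

  /* Set once by filesystems that can handle THPs in their page cache;
   * the hot path then only tests a mapping flag instead of chasing
   * mapping->host->i_sb->s_type->fs_flags. */
  static inline bool mapping_thp_support(struct address_space *mapping)
  {
          return test_bit(AS_THP_SUPPORT, &mapping->flags);
  }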
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Song Liu <songliubraving@fb.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Christoph Hellwig <hch@infradead.org>
Link: https://lkml.kernel.org/r/20200916032717.22917-1-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Remove the assumption that a compound page has HPAGE_PMD_NR pins from the
page cache.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: SeongJae Park <sjpark@amazon.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: "Huang, Ying" <ying.huang@intel.com>
Link: https://lkml.kernel.org/r/20200908195539.25896-12-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
page->mapping is undefined for tail pages, so operate exclusively on the
head page.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: SeongJae Park <sjpark@amazon.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Link: https://lkml.kernel.org/r/20200908195539.25896-11-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Remove the assumption that a compound page is HPAGE_PMD_SIZE, and the
assumption that any page is PAGE_SIZE.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: SeongJae Park <sjpark@amazon.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Link: https://lkml.kernel.org/r/20200908195539.25896-10-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Ask the page what size it is instead of assuming it's PMD size. Do this
for anon pages as well as file pages for when someone decides to support
that. Leave the assumption alone for pages which are PMD mapped; we don't
currently grow THPs beyond PMD size, so we don't need to change this code
yet.
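The pattern, in sketch form (thp_size() is the existing helper; the
surrounding lines are illustrative only, not a hunk from this patch):

  /* Before: hard-coded PMD-sized assumption */
  haddr = address & HPAGE_PMD_MASK;
  len   = HPAGE_PMD_SIZE;

  /* After: ask the page itself, so non-PMD-sized THPs keep working */
  haddr = address & ~(thp_size(page) - 1);
  len   = thp_size(page);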
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: SeongJae Park <sjpark@amazon.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Link: https://lkml.kernel.org/r/20200908195539.25896-9-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Ask the page how many subpages it has instead of assuming it's PMD size.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: SeongJae Park <sjpark@amazon.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: "Huang, Ying" <ying.huang@intel.com>
Link: https://lkml.kernel.org/r/20200908195539.25896-8-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Ask the page what size it is instead of assuming it's PMD size.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: SeongJae Park <sjpark@amazon.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Link: https://lkml.kernel.org/r/20200908195539.25896-7-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
File THPs may now be of arbitrary size, and we can't rely on that size
after doing the split, so remember the number of pages before we start the
split.
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: SeongJae Park <sjpark@amazon.de>
Cc: Huang Ying <ying.huang@intel.com>
Link: https://lkml.kernel.org/r/20200908195539.25896-6-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
File THPs may now be of arbitrary order.
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: SeongJae Park <sjpark@amazon.de>
Cc: Huang Ying <ying.huang@intel.com>
Link: https://lkml.kernel.org/r/20200908195539.25896-5-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The implementation of split_page_owner() prefers a count rather than the
old order of the page. When we support a variable size THP, we won't
have the order at this point, but we will have the number of pages.
So change the interface to what the caller and callee would prefer.
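In sketch form (a hedged paraphrase of the interface change, not the exact
hunk):

  /* Old: callers passed the compound order. */
  void split_page_owner(struct page *page, unsigned int order);

  /* New: callers pass the number of subpages, which the split code
   * already has on hand, e.g. via thp_nr_pages(head). */
  void split_page_owner(struct page *page, unsigned int nr);

So a call site that used to pass HPAGE_PMD_ORDER can simply pass the page
count it already computed.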
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: SeongJae Park <sjpark@amazon.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Link: https://lkml.kernel.org/r/20200908195539.25896-4-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
A compound page in the page cache will not necessarily be of PMD size,
so check explicitly.
[willy@infradead.org: fix remove page fault assumption of compound page size]
Link: https://lkml.kernel.org/r/20201001152259.14932-1-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Link: https://lkml.kernel.org/r/20200908195539.25896-3-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Patch series "Remove assumptions of THP size".
There are a number of places in the VM which assume that a THP is a PMD in
size. That's true today, and remains true after this patch series, but
this is a prerequisite for switching to arbitrary-sized THPs.
thp_nr_pages() still returns either HPAGE_PMD_NR or 1, but will be changed
later.
This patch (of 11):
page_cache_free_page() assumes THPs are PMD_SIZE; fix that assumption.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Link: https://lkml.kernel.org/r/20200908195539.25896-1-willy@infradead.org
Link: https://lkml.kernel.org/r/20200908195539.25896-2-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
When a THP is removed from the page cache by reclaim, we replace it with a
shadow entry that occupies all slots of the XArray previously occupied by
the THP. If the user then accesses that page again, we only allocate a
single page, but storing it into the shadow entry replaces all entries
with that one page. That leads to bugs like
page dumped because: VM_BUG_ON_PAGE(page_to_pgoff(page) != offset)
------------[ cut here ]------------
kernel BUG at mm/filemap.c:2529!
https://bugzilla.kernel.org/show_bug.cgi?id=206569
This is hard to reproduce with mainline, but happens regularly with the
THP patchset (as so many more THPs are created). This solution is taken
from the THP patchset. It splits the shadow entry into order-0 pieces at
the time that we bring a new page into the cache.
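A condensed sketch of that splitting step (paraphrased from the fix to the
page-cache insertion path; the real code also deals with locking order and
allocation failure):

  /* If the slot holds a shadow entry of higher order than the page being
   * inserted, split it into order-0 entries first so that storing the
   * new page only overwrites its own index. */
  void *entry = xas_load(&xas);

  if (xa_is_value(entry)) {
          unsigned int order = xa_get_order(xas.xa, xas.xa_index);

          if (order > thp_order(page)) {
                  xas_split_alloc(&xas, entry, order, gfp);
                  xas_split(&xas, entry, order);
          }
  }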
Fixes: 99cb0dbd47 ("mm,thp: add read-only THP support for (non-shmem) FS")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Song Liu <songliubraving@fb.com>
Cc: "Kirill A . Shutemov" <kirill@shutemov.name>
Cc: Qian Cai <cai@lca.pw>
Link: https://lkml.kernel.org/r/20200903183029.14930-4-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
In order to use multi-index entries for huge pages in the page cache, we
need to be able to split a multi-index entry (eg if a file is truncated in
the middle of a huge page entry). This version does not support splitting
more than one level of the tree at a time. This is an acceptable
limitation for the page cache as we do not expect to support order-12
pages in the near future.
[akpm@linux-foundation.org: export xas_split_alloc() to modules]
[willy@infradead.org: fix xarray split]
Link: https://lkml.kernel.org/r/20200910175450.GV6583@casper.infradead.org
[willy@infradead.org: fix xarray]
Link: https://lkml.kernel.org/r/20201001233943.GW20115@casper.infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: "Kirill A . Shutemov" <kirill@shutemov.name>
Cc: Qian Cai <cai@lca.pw>
Cc: Song Liu <songliubraving@fb.com>
Link: https://lkml.kernel.org/r/20200903183029.14930-3-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Patch series "Fix read-only THP for non-tmpfs filesystems".
As described more verbosely in the [3/3] changelog, we can inadvertently
put an order-0 page in the page cache which occupies 512 consecutive
entries. Users are running into this if they enable the
READ_ONLY_THP_FOR_FS config option; see
https://bugzilla.kernel.org/show_bug.cgi?id=206569 and Qian Cai has also
reported it here:
https://lore.kernel.org/lkml/20200616013309.GB815@lca.pw/
This is a rather intrusive way of fixing the problem, but has the
advantage that I've actually been testing it with the THP patches, which
means that it sees far more use than it does upstream -- indeed, Song has
been entirely unable to reproduce it. It also has the advantage that it
removes a few patches from my gargantuan backlog of THP patches.
This patch (of 3):
This function returns the order of the entry at the index. We need this
because there isn't space in the shadow entry to encode its order.
[akpm@linux-foundation.org: export xa_get_order to modules]
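Usage is then as simple as (illustrative):

  /* Order of the (possibly multi-index) entry covering @index;
   * 0 for a plain single-index entry. */
  int order = xa_get_order(&mapping->i_pages, index);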
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: "Kirill A . Shutemov" <kirill@shutemov.name>
Cc: Qian Cai <cai@lca.pw>
Cc: Song Liu <songliubraving@fb.com>
Link: https://lkml.kernel.org/r/20200903183029.14930-1-willy@infradead.org
Link: https://lkml.kernel.org/r/20200903183029.14930-2-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This will help in adding proper locks in a later patch.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lkml.kernel.org/r/20200902114222.181353-9-aneesh.kumar@linux.ibm.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
set_pte_at() should not be used to set a pte entry at locations that
already hold a valid pte entry. Architectures like ppc64 don't do a TLB
invalidate in set_pte_at() and hence expect it to be used only on locations
that do not hold a valid PTE.
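In test code the rule translates to tearing down the old entry before
installing a new one at the same address, roughly (a sketch, not the exact
debug_vm_pgtable hunk):

  /* Wrong on ppc64: silently overwrites a valid pte, no TLB invalidate. */
  set_pte_at(mm, vaddr, ptep, pte_a);
  set_pte_at(mm, vaddr, ptep, pte_b);

  /* OK: clear the old mapping first, then install the new one. */
  set_pte_at(mm, vaddr, ptep, pte_a);
  ptep_get_and_clear(mm, vaddr, ptep);
  set_pte_at(mm, vaddr, ptep, pte_b);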
Link: https://lkml.kernel.org/r/20200902114222.181353-8-aneesh.kumar@linux.ibm.com
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Saved write support was added to track the write bit of a pte after
marking the pte protnone. This was done so that AUTONUMA can convert a
write pte to protnone and still track the old write bit. When converting
it back, we set the pte write bit correctly, thereby avoiding another write
fault. Hence enable the test only when CONFIG_NUMA_BALANCING is enabled,
and use protnone protection flags.
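A sketch of what the guarded test ends up looking like (function and helper
names as used by mm/debug_vm_pgtable.c, to the best of my recollection):

  static void __init pte_savedwrite_tests(unsigned long pfn, pgprot_t prot)
  {
          /* @prot is now expected to be the protnone protection */
          pte_t pte = pfn_pte(pfn, prot);

          if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
                  return;

          WARN_ON(!pte_savedwrite(pte_mk_savedwrite(pte_clear_savedwrite(pte))));
          WARN_ON(pte_savedwrite(pte_clear_savedwrite(pte_mk_savedwrite(pte))));
  }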
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lkml.kernel.org/r/20200902114222.181353-6-aneesh.kumar@linux.ibm.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
ppc64 uses bit 62 to indicate a pte entry (_PAGE_PTE). Avoid setting that
bit in the random value.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lkml.kernel.org/r/20200902114222.181353-4-aneesh.kumar@linux.ibm.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
powerpc used to set the pte-specific flags in set_pte_at(). This is
different from other architectures. To be consistent with other
architectures, update pfn_pte() to set _PAGE_PTE on ppc64. Also, drop the
now-unused pte_mkpte().
We add a VM_WARN_ON() to catch callers of set_pte_at() that don't set the
_PAGE_PTE bit. We will remove that after a few releases.
With respect to huge pmd entries, pmd_mkhuge() takes care of adding the
_PAGE_PTE bit.
[akpm@linux-foundation.org: whitespace fix, per Christophe]
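A minimal sketch of the new warning (the real check lives in the ppc64
book3s set_pte_at()/__set_pte_at() path; exact placement and raw-pte
handling may differ):

  static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
                                  pte_t *ptep, pte_t pte, int percpu)
  {
          /* Catch callers that built the pte by hand instead of going
           * through pfn_pte()/pmd_mkhuge(), which now set _PAGE_PTE. */
          VM_WARN_ON(!(pte_val(pte) & _PAGE_PTE));

          /* ... existing code that stores the pte ... */
  }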
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lkml.kernel.org/r/20200902114222.181353-3-aneesh.kumar@linux.ibm.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The conversion to request_mem_region() is broken because it assumes that
the range is marked busy prior to release. However, due to the way that
the kmem driver manipulates the IORESOURCE_BUSY flag (clears it to let
{add,remove}_memory() handle busy) it requires a manual release_resource()
to perform cleanup.
Given that the actual 'struct resource *' needs to be recalled, not just
the range, add that tracking to the kmem driver-data.
Fixes: 0513bd5bb1 ("device-dax/kmem: replace release_resource() with release_mem_region()")
Reported-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Brice Goglin <Brice.Goglin@inria.fr>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Jia He <justin.he@arm.com>
Cc: Joao Martins <joao.m.martins@oracle.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lkml.kernel.org/r/160272252925.3136502.17220638073995895400.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This KUnit fixes update consists of several kunit tool bug fixes: flag
handling, running outside the kernel tree, make errors, and generating
results.
Merge tag 'linux-kselftest-kunit-fixes-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest
Pull Kunit updates from Shuah Khan:
"Several kunit tool bug fixes in flag handling, run outside kernel
tree, make errors, and generating results"
* tag 'linux-kselftest-kunit-fixes-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
kunit: tool: fix display of make errors
kunit: tool: handle when .kunit exists but .kunitconfig does not
kunit: tool: fix --alltests flag
kunit: tool: allow generating test results in JSON
kunit: tool: fix running kunit_tool from outside kernel tree
This kselftest update for Linux 5.10-rc1 consists of enhancements to
-- speed up headers_install done during selftest build
-- add generic make nesting support
-- add support to select individual tests:
- Selftests build/install generates run_kselftest.sh script to run
selftests on a target system. Currently the script doesn't have
support for selecting individual tests. Add support for it.
With this enhancement, users can select test collections (or tests)
individually, e.g.:
run_kselftest.sh -c seccomp -t timers:posix_timers -t timers:nanosleep
It also adds a way to list all known tests with "-l", show usage
with "-h", and perform a dry run without running tests with "-n".
Merge tag 'linux-kselftest-next-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest
Pull kselftest updates from Shuah Khan:
- speed up headers_install done during selftest build
- add generic make nesting support
- add support to select individual tests:
Selftests build/install generates run_kselftest.sh script to run
selftests on a target system. Currently the script doesn't have
support for selecting individual tests. Add support for it.
With this enhancement, users can select test collections (or tests)
individually, e.g.:
run_kselftest.sh -c seccomp -t timers:posix_timers -t timers:nanosleep
It also adds a way to list all known tests with "-l", show usage with
"-h", and perform a dry run without running tests with "-n".
* tag 'linux-kselftest-next-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
doc: dev-tools: kselftest.rst: Update examples and paths
selftests/run_kselftest.sh: Make each test individually selectable
selftests: Extract run_kselftest.sh and generate stand-alone test list
selftests: Add missing gitignore entries
selftests: more general make nesting support
selftests: use "$(MAKE)" instead of "make" for headers_install
Pull trivial updates from Jiri Kosina:
"The latest advances in computer science from the trivial queue"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial:
xtensa: fix Kconfig typo
spelling.txt: Remove some duplicate entries
mtd: rawnand: oxnas: cleanup/simplify code
selftests: vm: add fragment CONFIG_GUP_BENCHMARK
perf: Fix opt help text for --no-bpf-event
HID: logitech-dj: Fix spelling in comment
bootconfig: Fix kernel message mentioning CONFIG_BOOT_CONFIG
MAINTAINERS: rectify MMP SUPPORT after moving cputype.h
scif: Fix spelling of EACCES
printk: fix global comment
lib/bitmap.c: fix spello
fs: Fix missing 'bit' in comment
Pull HID updates from Jiri Kosina:
- Lenovo X1 Tablet support improvements from Mikael Wikström
- "heartbeat" report fix for several Wacom devices from Jason Gerecke
- bounds checking fix in hid-roccat from Dan Carpenter
- stylus battery reporting fix from Dmitry Torokhov
- i2c-hid support for wakeup from suspend-to-idle from Kai-Heng Feng
- new driver for Vivaldi devices from Sean O'Brien
- other assorted small fixes and device ID additions
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid:
HID: i2c-hid: Enable wakeup capability from Suspend-to-Idle
HID: add vivaldi HID driver
HID: hid-input: fix stylus battery reporting
HID: wacom: Avoid entering wacom_wac_pen_report for pad / battery
HID: i2c-hid: fix kerneldoc warnings in i2c-hid-core.c
HID: core: fix kerneldoc warnings in hid-core.c
HID: multitouch: Lenovo X1 Tablet Gen2 trackpoint and buttons
HID: multitouch: Lenovo X1 Tablet Gen3 trackpoint and buttons
HID: alps: clean up indentation issue
HID: intel-ish-hid: simplify the return expression of ishtp_bus_remove_device()
HID: hid-debug: fix nonblocking read semantics wrt EIO/ERESTARTSYS
HID: i2c-hid: Prefer asynchronous probe
HID: ite: Add USB id match for Acer One S1003 keyboard dock
HID: roccat: add bounds checking in kone_sysfs_write_settings()
HID: wiimote: narrow spinlock range in wiimote_hid_event()
HID: wiimote: make handlers[] const
HID: apple: Add support for Matias wireless keyboard
HID: cp2112: Use irqchip template
Pull livepatching update from Jiri Kosina:
"livepatching kselftest output fix from Miroslav Benes"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/livepatching/livepatching:
selftests/livepatch: Do not check order when using "comm" for dmesg checking
Merge tag 'dio_for_v5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs
Pull direct-io fix from Jan Kara:
"Fix for unaligned direct IO read past EOF in legacy DIO code"
* tag 'dio_for_v5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs:
direct-io: defer alignment check until after the EOF check
direct-io: don't force writeback for reads beyond EOF
direct-io: clean up error paths of do_blockdev_direct_IO
Merge tag 'fs_for_v5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs
Pull UDF, reiserfs, ext2, quota fixes from Jan Kara:
- a couple of UDF fixes for issues found by syzbot fuzzing
- a couple of reiserfs fixes for issues found by syzbot fuzzing
- some minor ext2 cleanups
- quota patches to support grace times beyond year 2038 for XFS quota
APIs
* tag 'fs_for_v5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs:
reiserfs: Fix oops during mount
udf: Limit sparing table size
udf: Remove pointless union in udf_inode_info
udf: Avoid accessing uninitialized data on failed inode read
quota: clear padding in v2r1_mem2diskdqb()
reiserfs: Initialize inode keys properly
udf: Fix memory leak when mounting
udf: Remove redundant initialization of variable ret
reiserfs: only call unlock_new_inode() if I_NEW
ext2: Fix some kernel-doc warnings in balloc.c
quota: Expand comment describing d_itimer
quota: widen timestamps for the fs_disk_quota structure
reiserfs: Fix memory leak in reiserfs_parse_options()
udf: Use kvzalloc() in udf_sb_alloc_bitmap()
ext2: remove duplicate include
- various cleanups for the configfs samples (Bartosz Golaszewski)
Merge tag 'configfs-5.10' of git://git.infradead.org/users/hch/configfs
Pull configfs updates from Christoph Hellwig:
"Various cleanups for the configfs samples (Bartosz Golaszewski)"
* tag 'configfs-5.10' of git://git.infradead.org/users/hch/configfs:
samples: configfs: prefer pr_err() over bare printk(KERN_ERR
samples: configfs: don't use spaces before tabs
samples: configfs: consolidate local variables of the same type
samples: configfs: don't reinitialize variables which are already zeroed
samples: configfs: replace simple_strtoul() with kstrtoint()
samples: configfs: fix alignment in item struct
samples: configfs: drop unnecessary ternary operators
samples: configfs: remove redundant newlines
MAINTAINERS: add the sample directory to the configfs entry