author: David Hildenbrand <david@redhat.com> 2024-02-26 15:13:23 +0100
committer: Andrew Morton <akpm@linux-foundation.org> 2024-03-04 17:01:21 -0800
commit: b4d02baa9f3ed9fc4311e0bd69966861a2f5eb83
tree: e27959dd8d293736e022b600f3066b79b90cfa73
parent: fc4d182316bd5309b4066fd9ef21529ea397a7d4
mm/memfd: refactor memfd_tag_pins() and memfd_wait_for_pins()
Patch series "mm: remove total_mapcount()", v2.
Let's remove the remaining user from mm/memfd.c so we can get rid of
total_mapcount().
This patch (of 2):
Both functions are the remaining users of total_mapcount(). Let's get rid
of the calls by converting the code to folios.
As it turns out, the code is unnecessarily complicated, especially:
1) We can query the number of pagecache references for a folio simply via
   folio_nr_pages(). This will correctly handle other folio sizes in the
   future.
2) The xas_set(xas, page->index + cache_count) call to increment the
iterator for large folios is not required. Remove it.
Further, simplify the XA_CHECK_SCHED check, counting each entry exactly
once.
Memfd pages can be swapped out when using shmem; leave xa_is_value()
checks in place.
Link: https://lkml.kernel.org/r/20240226141324.278526-1-david@redhat.com
Link: https://lkml.kernel.org/r/20240226141324.278526-2-david@redhat.com
Co-developed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>