linux-mainline/mm
Thomas Hellstrom b2a403fdd1 mm/mapping_dirty_helpers: update huge page-table entry callbacks
Following the pagewalk code update in commit a07984d48146 ("mm:
pagewalk: add p4d_entry() and pgd_entry()"), we can modify the
mapping_dirty_helpers' huge page-table entry callbacks to avoid
splitting when a huge pud or pmd is encountered.
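
For illustration, a minimal sketch of what such a non-splitting
pmd_entry callback can look like with the reworked pagewalk API. It is
loosely modeled on mm/mapping_dirty_helpers.c and is not the verbatim
upstream diff; the key piece is walk->action = ACTION_CONTINUE, which
tells the walker to move on to the next pmd instead of splitting the
huge entry and descending to the pte level:

#include <linux/pagewalk.h>

/* Sketch only -- not the exact upstream wp_clean_pmd_entry(). */
static int wp_clean_pmd_entry(pmd_t *pmd, unsigned long addr,
			      unsigned long end, struct mm_walk *walk)
{
	pmd_t pmdval = pmd_read_atomic(pmd);

	/* An empty pmd maps nothing: nothing to write-protect or clean. */
	if (pmd_none(pmdval))
		return 0;

	/*
	 * A huge (or devmap) pmd should never be writable or dirty for
	 * the dirty-tracking users of these helpers, so rather than
	 * splitting it, skip the pte level underneath it entirely.
	 */
	if (pmd_trans_huge(pmdval) || pmd_devmap(pmdval)) {
		WARN_ON(pmd_write(pmdval) || pmd_dirty(pmdval));
		walk->action = ACTION_CONTINUE;
	}

	return 0;
}

A pud-level counterpart would follow the same pattern with the
corresponding pud_* helpers.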

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Steven Price <steven.price@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/20200203154305.15045-1-thomas_os@shipmail.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-04-02 09:35:29 -07:00
kasan
backing-dev.c
balloon_compaction.c
cleancache.c
cma_debug.c
cma.c
cma.h
compaction.c
debug_page_ref.c
debug.c mm: dump_page(): additional diagnostics for huge pinned pages 2020-04-02 09:35:27 -07:00
dmapool.c
early_ioremap.c
fadvise.c
failslab.c
filemap.c
frame_vector.c
frontswap.c
gup_benchmark.c mm/gup_benchmark: support pin_user_pages() and related calls 2020-04-02 09:35:27 -07:00
gup.c mm/gup: fix omission of check on FOLL_LONGTERM in gup fast path 2020-04-02 09:35:27 -07:00
highmem.c
hmm.c
huge_memory.c mm/gup: track FOLL_PIN pages 2020-04-02 09:35:27 -07:00
hugetlb_cgroup.c
hugetlb.c mm/gup: page->hpage_pinned_refcount: exact pin counts for huge pages 2020-04-02 09:35:27 -07:00
hwpoison-inject.c
init-mm.c
internal.h mm: swap: make page_evictable() inline 2020-04-02 09:35:27 -07:00
interval_tree.c
Kconfig
Kconfig.debug
khugepaged.c
kmemleak-test.c
kmemleak.c
ksm.c
list_lru.c mm: memcg/slab: use mem_cgroup_from_obj() 2020-04-02 09:35:28 -07:00
maccess.c
madvise.c
Makefile
mapping_dirty_helpers.c mm/mapping_dirty_helpers: update huge page-table entry callbacks 2020-04-02 09:35:29 -07:00
memblock.c
memcontrol.c mm: memcg: make memory.oom.group tolerable to task migration 2020-04-02 09:35:29 -07:00
memfd.c
memory_hotplug.c
memory-failure.c
memory.c
mempolicy.c
mempool.c
memremap.c
memtest.c
migrate.c
mincore.c
mlock.c
mm_init.c
mmap.c
mmu_context.c
mmu_gather.c
mmu_notifier.c
mmzone.c
mprotect.c
mremap.c
msync.c
nommu.c
oom_kill.c
page_alloc.c mm: kmem: rename memcg_kmem_(un)charge() into memcg_kmem_(un)charge_page() 2020-04-02 09:35:28 -07:00
page_counter.c mm, memcg: prevent memory.min load/store tearing 2020-04-02 09:35:29 -07:00
page_ext.c
page_idle.c
page_io.c
page_isolation.c
page_owner.c
page_poison.c
page_vma_mapped.c
page-writeback.c mm/gup/writeback: add callbacks for inaccessible pages 2020-04-02 09:35:27 -07:00
pagewalk.c
percpu-internal.h
percpu-km.c
percpu-stats.c
percpu-vm.c
percpu.c
pgtable-generic.c
process_vm_access.c
ptdump.c
readahead.c
rmap.c mm/gup: page->hpage_pinned_refcount: exact pin counts for huge pages 2020-04-02 09:35:27 -07:00
rodata_test.c
shmem.c
shuffle.c
shuffle.h
slab_common.c mm, memcg: fix build error around the usage of kmem_caches 2020-04-02 09:35:28 -07:00
slab.c
slab.h mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to __memcg_kmem_(un)charge() 2020-04-02 09:35:28 -07:00
slob.c
slub.c
sparse-vmemmap.c
sparse.c
swap_cgroup.c
swap_slots.c mm/swap_slots.c: assign|reset cache slot by value directly 2020-04-02 09:35:27 -07:00
swap_state.c mm/swap_state.c: use the same way to count page in [add_to|delete_from]_swap_cache 2020-04-02 09:35:28 -07:00
swap.c mm: swap: use smp_mb__after_atomic() to order LRU bit set 2020-04-02 09:35:28 -07:00
swapfile.c mm/swapfile: fix data races in try_to_unuse() 2020-04-02 09:35:27 -07:00
truncate.c
usercopy.c
userfaultfd.c
util.c
vmacache.c
vmalloc.c
vmpressure.c
vmscan.c mm: swap: make page_evictable() inline 2020-04-02 09:35:27 -07:00
vmstat.c mm/gup: /proc/vmstat: pin_user_pages (FOLL_PIN) reporting 2020-04-02 09:35:27 -07:00
workingset.c
z3fold.c
zbud.c
zpool.c
zsmalloc.c
zswap.c