From e74a68468062d7ebd8ce17069e12ccc64cc6a58c Mon Sep 17 00:00:00 2001
From: Peter Collingbourne
Date: Tue, 14 Feb 2023 21:09:11 -0800
Subject: [PATCH 1/7] arm64: Reset KASAN tag in copy_highpage with HW tags only
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

During page migration, the copy_highpage function is used to copy the
page data to the target page. If the source page is a userspace page
with MTE tags, the KASAN tag of the target page must be the match-all
tag in order to avoid tag check faults during subsequent accesses to
the page by the kernel. However, the target page may have been
allocated in a number of ways, some of which will use the KASAN
allocator and will therefore end up setting the KASAN tag to a
non-match-all tag. Therefore, update the target page's KASAN tag to
match the source page.

We ended up unintentionally fixing this issue as a result of a bad
merge conflict resolution between commit e059853d14ca ("arm64: mte:
Fix/clarify the PG_mte_tagged semantics") and commit 20794545c146
("arm64: kasan: Revert "arm64: mte: reset the page tag in
page->flags""), which preserved a tag reset for PG_mte_tagged pages
that was considered unnecessary at the time. Because SW tags KASAN
uses separate tag storage, update the code to only reset the tags when
HW tags KASAN is enabled.

Signed-off-by: Peter Collingbourne
Link: https://linux-review.googlesource.com/id/If303d8a709438d3ff5af5fd85706505830f52e0c
Reported-by: "Kuan-Ying Lee (李冠穎)"
Cc: <stable@vger.kernel.org> # 6.1
Fixes: 20794545c146 ("arm64: kasan: Revert "arm64: mte: reset the page tag in page->flags"")
Reviewed-by: Andrey Konovalov
Link: https://lore.kernel.org/r/20230215050911.1433132-1-pcc@google.com
Signed-off-by: Catalin Marinas
---
 arch/arm64/mm/copypage.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 8dd5a8fe64b4..4aadcfb01754 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -22,7 +22,8 @@ void copy_highpage(struct page *to, struct page *from)
 	copy_page(kto, kfrom);
 
 	if (system_supports_mte() && page_mte_tagged(from)) {
-		page_kasan_tag_reset(to);
+		if (kasan_hw_tags_enabled())
+			page_kasan_tag_reset(to);
 		/* It's a new page, shouldn't have been tagged yet */
 		WARN_ON_ONCE(!try_page_mte_tagging(to));
 		mte_copy_page_tags(kto, kfrom);

From 0269680e5eb88f6223c53a8b3138cbfa60ba7657 Mon Sep 17 00:00:00 2001
From: Mark Brown
Date: Thu, 9 Feb 2023 20:04:07 +0000
Subject: [PATCH 2/7] arm64/fpsimd: Remove warning for SME without SVE

Support for SME without SVE is architecturally valid and has now been
tested well enough, so let's remove the warning message that is
displayed at boot.

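For reference, the check being dropped is visible in the diff below;
if the information were still wanted for debugging, a quieter variant
along these lines would preserve it (a sketch only, not part of this
change):

	/* SME without SVE is a valid configuration; no need to warn. */
	if (cpu_have_named_feature(SME) && !cpu_have_named_feature(SVE))
		pr_debug("SME is implemented but not SVE\n");
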
Signed-off-by: Mark Brown
Link: https://lore.kernel.org/r/20230209-arm64-sme-no-sve-v1-1-74eb3df2f878@kernel.org
Signed-off-by: Catalin Marinas
---
 arch/arm64/kernel/fpsimd.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index c11cb445ffca..7e823ee7ffa2 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -2122,9 +2122,6 @@ static int __init fpsimd_init(void)
 		pr_notice("Advanced SIMD is not implemented\n");
 
-	if (cpu_have_named_feature(SME) && !cpu_have_named_feature(SVE))
-		pr_notice("SME is implemented but not SVE\n");
-
 	sve_sysctl_init();
 	sme_sysctl_init();

From b61b82f81e095fe265b0614045d17b08e6ee5c72 Mon Sep 17 00:00:00 2001
From: Sangmoon Kim
Date: Mon, 20 Feb 2023 16:34:41 +0900
Subject: [PATCH 3/7] arm64: pass ESR_ELx to die() of cfi_handler

Commit 0f2cb928a154 ("arm64: consistently pass ESR_ELx to die()")
caused all callers to pass the ESR_ELx value to die(). For
consistency, this patch also adds esr to the die() call of
cfi_handler. Also, when a CFI error occurs, die handlers can use the
ESR_ELx value.

Signed-off-by: Sangmoon Kim
Acked-by: Mark Rutland
Reviewed-by: Mark Brown
Link: https://lore.kernel.org/r/20230220073441.2753-1-sangmoon.kim@samsung.com
Signed-off-by: Catalin Marinas
---
 arch/arm64/kernel/traps.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 0ccc063daccb..4a623e2e982b 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -990,7 +990,7 @@ static int cfi_handler(struct pt_regs *regs, unsigned long esr)
 
 	switch (report_cfi_failure(regs, regs->pc, &target, type)) {
 	case BUG_TRAP_TYPE_BUG:
-		die("Oops - CFI", regs, 0);
+		die("Oops - CFI", regs, esr);
 		break;
 
 	case BUG_TRAP_TYPE_WARN:

From 060a2c92d1b627c86c5c42ca69baf00457c00c5a Mon Sep 17 00:00:00 2001
From: Catalin Marinas
Date: Wed, 22 Feb 2023 17:52:32 +0000
Subject: [PATCH 4/7] arm64: mm: hugetlb: Disable HUGETLB_PAGE_OPTIMIZE_VMEMMAP

Revert the HUGETLB_PAGE_FREE_VMEMMAP selection from commit 1e63ac088f20
("arm64: mm: hugetlb: enable HUGETLB_PAGE_FREE_VMEMMAP for arm64") but
keep the flush_dcache_page() compound_head() change as it aligns with
the corresponding check in the __sync_icache_dcache() function.

The original config option was renamed in commit 47010c040dec ("mm:
hugetlb_vmemmap: cleanup CONFIG_HUGETLB_PAGE_FREE_VMEMMAP*") to
HUGETLB_PAGE_OPTIMIZE_VMEMMAP and the flush_dcache_page() check was
further simplified by commit 2da1c30929a2 ("mm: hugetlb_vmemmap:
delete hugetlb_optimize_vmemmap_enabled()").

The reason for the revert is that the generic vmemmap_remap_pte()
function changes both the permissions (writeable to read-only) and the
output address (pfn) of the vmemmap ptes. This is deemed UNPREDICTABLE
by the Arm architecture without a break-before-make sequence (make the
PTE invalid, TLBI, write the new valid PTE). However, such a sequence
is not possible since the vmemmap may be concurrently accessed by the
kernel. Disable the optimisation until a better solution is found.

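For reference, the break-before-make sequence required by the
architecture looks roughly as follows (a sketch of the three steps
only; it cannot be used by vmemmap_remap_pte() precisely because the
vmemmap may be accessed between the invalidation and the final write):

	pte_clear(&init_mm, addr, ptep);		/* 1. make the PTE invalid  */
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);	/* 2. TLBI                  */
	set_pte(ptep, new_pte);				/* 3. write the new valid PTE */
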
Fixes: 1e63ac088f20 ("arm64: mm: hugetlb: enable HUGETLB_PAGE_FREE_VMEMMAP for arm64")
Cc: <stable@vger.kernel.org> # 5.19.x
Cc: Muchun Song
Cc: Will Deacon
Cc: Anshuman Khandual
Link: https://lore.kernel.org/r/Y9pZALdn3pKiJUeQ@arm.com
Reviewed-by: Anshuman Khandual
Link: https://lore.kernel.org/r/20230222175232.540851-1-catalin.marinas@arm.com
Signed-off-by: Catalin Marinas
---
 arch/arm64/Kconfig | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 619ab046744a..71c35178e017 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -100,7 +100,6 @@ config ARM64
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
 	select ARCH_WANT_FRAME_POINTERS
 	select ARCH_WANT_HUGE_PMD_SHARE if ARM64_4K_PAGES || (ARM64_16K_PAGES && !ARM64_VA_BITS_36)
-	select ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
 	select ARCH_WANT_LD_ORPHAN_WARN
 	select ARCH_WANTS_NO_INSTR
 	select ARCH_WANTS_THP_SWAP if ARM64_4K_PAGES

From 1b561d3949f8478c5403c9752b5533211a757226 Mon Sep 17 00:00:00 2001
From: Sudeep Holla
Date: Thu, 23 Feb 2023 13:57:42 +0000
Subject: [PATCH 5/7] arm64: acpi: Fix possible memory leak of ffh_ctxt

The allocated 'ffh_ctxt' memory can leak if the SMCCC version and
conduit checks fail and -EOPNOTSUPP is returned without freeing it.
Fix this by moving the allocation after the SMCCC version and conduit
checks.

Fixes: 1d280ce099db ("arm64: Add architecture specific ACPI FFH Opregion callbacks")
Cc: <stable@vger.kernel.org> # 6.2.x
Cc: Will Deacon
Reported-by: kernel test robot
Reported-by: Dan Carpenter
Suggested-by: Dan Carpenter
Link: https://lore.kernel.org/r/202302191417.dAl9NuE8-lkp@intel.com/
Signed-off-by: Sudeep Holla
Link: https://lore.kernel.org/r/20230223135742.2952091-1-sudeep.holla@arm.com
Signed-off-by: Catalin Marinas
---
 arch/arm64/kernel/acpi.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
index 378453faa87e..dba8fcec7f33 100644
--- a/arch/arm64/kernel/acpi.c
+++ b/arch/arm64/kernel/acpi.c
@@ -435,10 +435,6 @@ int acpi_ffh_address_space_arch_setup(void *handler_ctxt, void **region_ctxt)
 	enum arm_smccc_conduit conduit;
 	struct acpi_ffh_data *ffh_ctxt;
 
-	ffh_ctxt = kzalloc(sizeof(*ffh_ctxt), GFP_KERNEL);
-	if (!ffh_ctxt)
-		return -ENOMEM;
-
 	if (arm_smccc_get_version() < ARM_SMCCC_VERSION_1_2)
 		return -EOPNOTSUPP;
 
@@ -448,6 +444,10 @@ int acpi_ffh_address_space_arch_setup(void *handler_ctxt, void **region_ctxt)
 		return -EOPNOTSUPP;
 	}
 
+	ffh_ctxt = kzalloc(sizeof(*ffh_ctxt), GFP_KERNEL);
+	if (!ffh_ctxt)
+		return -ENOMEM;
+
 	if (conduit == SMCCC_CONDUIT_SMC) {
 		ffh_ctxt->invoke_ffh_fn = __arm_smccc_smc;
 		ffh_ctxt->invoke_ffh64_fn = arm_smccc_1_2_smc;

From b3f11af9b2ce14d4662753d097a21e1d37a06fda Mon Sep 17 00:00:00 2001
From: Mark Rutland
Date: Mon, 27 Feb 2023 11:58:19 +0000
Subject: [PATCH 6/7] arm64: ftrace: forbid CALL_OPS with CC_OPTIMIZE_FOR_SIZE

Florian reports that when building with CONFIG_CC_OPTIMIZE_FOR_SIZE=y,
he sees "Misaligned patch-site" warnings at boot, e.g.

| Misaligned patch-site bcm2836_arm_irqchip_handle_irq+0x0/0x88
| WARNING: CPU: 0 PID: 0 at arch/arm64/kernel/ftrace.c:120 ftrace_call_adjust+0x4c/0x70

This is because GCC will silently ignore `-falign-functions=N` when
passed `-Os`, resulting in functions not being aligned as we expect.
This is a known issue, and to account for this we modified the kernel
to avoid `-Os` generally. Unfortunately we forgot to account for
CONFIG_CC_OPTIMIZE_FOR_SIZE.

Forbid the use of CALL_OPS with CONFIG_CC_OPTIMIZE_FOR_SIZE=y to
prevent this issue. All existing ftrace features will work as before,
though without the performance benefit of CALL_OPS.

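The warning quoted above stems from an alignment check of this shape
(a simplified sketch, not the exact ftrace_call_adjust() code):
CALL_OPS stores an ops pointer in the patchable region before each
function, so patch-sites must be naturally aligned for it to be
updated atomically.

	/* Patch-sites must be 8-byte aligned so the ops pointer
	 * stored before the function can be updated atomically.
	 */
	if (!IS_ALIGNED(addr, 8))
		WARN(1, "Misaligned patch-site %pS\n", (void *)addr);
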
Reported-by: Florian Fainelli
Link: http://lore.kernel.org/linux-arm-kernel/2d9284c3-3805-402b-5423-520ced56d047@gmail.com
Signed-off-by: Mark Rutland
Cc: Marc Zyngier
Cc: Stefan Wahren
Cc: Steven Rostedt
Cc: Will Deacon
Tested-by: Florian Fainelli
Link: https://lore.kernel.org/r/20230227115819.365630-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas
---
 arch/arm64/Kconfig | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 71c35178e017..f7521695a474 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -188,7 +188,8 @@ config ARM64
 	select HAVE_DYNAMIC_FTRACE_WITH_ARGS \
 		if $(cc-option,-fpatchable-function-entry=2)
 	select HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS \
-		if (DYNAMIC_FTRACE_WITH_ARGS && !CFI_CLANG)
+		if (DYNAMIC_FTRACE_WITH_ARGS && !CFI_CLANG && \
+		    !CC_OPTIMIZE_FOR_SIZE)
 	select FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY \
 		if DYNAMIC_FTRACE_WITH_ARGS
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS

From 010338d729c1090036eb40d2a60b7b7bce2445b8 Mon Sep 17 00:00:00 2001
From: Ard Biesheuvel
Date: Thu, 23 Feb 2023 21:41:01 +0100
Subject: [PATCH 7/7] arm64: kaslr: don't pretend KASLR is enabled if offset < MIN_KIMG_ALIGN

Our virtual KASLR displacement is a randomly chosen multiple of 2 MiB
plus an offset that is equal to the physical placement modulo 2 MiB.
This arrangement ensures that we can always use 2 MiB block mappings
(or contiguous PTE mappings for 16k or 64k pages) to map the kernel.

This means that a KASLR offset of less than 2 MiB is simply the
product of this physical displacement, and no randomization has
actually taken place. Currently, we use 'kaslr_offset() > 0' to decide
whether or not randomization has occurred, and so we misidentify this
case.

If the kernel image placement is not randomized, modules are allocated
from a dedicated region below the kernel mapping, which is only used
for modules and not for other vmalloc() or vmap() calls.

When randomization is enabled, the kernel image is vmap()'ed randomly
inside the vmalloc region, and modules are allocated in the vicinity
of this mapping to ensure that relative references are always in
range. However, unlike the dedicated module region below the vmalloc
region, this region is not reserved exclusively for modules, and so
ordinary vmalloc() calls may end up overlapping with it. This should
rarely happen, given that vmalloc allocates bottom up, although it
cannot be ruled out entirely.

The misidentified case results in a placement of the kernel image
within 2 MiB of its default address. However, the logic that
randomizes the module region is still invoked, and this could result
in the module region overlapping with the start of the vmalloc region,
instead of using the dedicated region below it. If this happens, a
single large vmalloc() or vmap() call will use up the entire region,
and leave no space for loading modules after that.

Since commit 82046702e288 ("efi/libstub/arm64: Replace 'preferred'
offset with alignment check"), this is much more likely to occur on
systems that boot via EFI but lack an implementation of the EFI RNG
protocol, as in that case, the EFI stub will decide to leave the image
where it found it, and the EFI firmware uses 64k alignment only.

Fix this by correctly identifying the case where the virtual
displacement is a result of the physical displacement only.

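To illustrate the invariant the fix relies on (a sketch; the variable
names are made up):

	/*
	 * kaslr_offset() decomposes into a seeded, 2 MiB aligned part
	 * and the physical placement modulo 2 MiB:
	 */
	u64 seeded    = kaslr_offset() & ~(MIN_KIMG_ALIGN - 1);
	u64 phys_slip = kaslr_offset() &  (MIN_KIMG_ALIGN - 1);

	/* No seed means seeded == 0, i.e. kaslr_offset() < MIN_KIMG_ALIGN. */
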
Signed-off-by: Ard Biesheuvel
Reviewed-by: Mark Brown
Acked-by: Mark Rutland
Link: https://lore.kernel.org/r/20230223204101.1500373-1-ardb@kernel.org
Signed-off-by: Catalin Marinas
---
 arch/arm64/include/asm/memory.h | 11 +++++++++++
 arch/arm64/kernel/cpufeature.c  |  2 +-
 arch/arm64/kernel/kaslr.c       |  2 +-
 3 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 9dd08cd339c3..78e5163836a0 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -180,6 +180,7 @@
 #include <linux/compiler.h>
 #include <linux/mmdebug.h>
 #include <linux/types.h>
+#include <asm/boot.h>
 #include <asm/bug.h>
 
 #if VA_BITS > 48
@@ -203,6 +204,16 @@ static inline unsigned long kaslr_offset(void)
 	return kimage_vaddr - KIMAGE_VADDR;
 }
 
+static inline bool kaslr_enabled(void)
+{
+	/*
+	 * The KASLR offset modulo MIN_KIMG_ALIGN is taken from the physical
+	 * placement of the image rather than from the seed, so a displacement
+	 * of less than MIN_KIMG_ALIGN means that no seed was provided.
+	 */
+	return kaslr_offset() >= MIN_KIMG_ALIGN;
+}
+
 /*
  * Allow all memory at the discovery stage. We will clip it later.
  */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 45a42cf2191c..5643a9ca502a 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1633,7 +1633,7 @@ bool kaslr_requires_kpti(void)
 		return false;
 	}
 
-	return kaslr_offset() > 0;
+	return kaslr_enabled();
 }
 
 static bool __meltdown_safe = true;
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 325455d16dbc..e7477f21a4c9 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -41,7 +41,7 @@ static int __init kaslr_init(void)
 		return 0;
 	}
 
-	if (!kaslr_offset()) {
+	if (!kaslr_enabled()) {
 		pr_warn("KASLR disabled due to lack of seed\n");
 		return 0;
 	}
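
As a usage sketch of the new predicate (the module region helpers
named here are hypothetical, for illustration only):

	if (kaslr_enabled()) {
		/* A seed was provided: the image is truly randomized. */
		randomize_module_region();		/* hypothetical */
	} else {
		/* Displacement is physical only: keep the dedicated
		 * module region below the kernel mapping.
		 */
		use_default_module_region();		/* hypothetical */
	}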