Commit Graph

141 Commits

Author SHA1 Message Date
Will Deacon 9d84ad425d Merge branch 'for-next/trivial' into for-next/core
* for-next/trivial:
  arm64: alternatives: add __init/__initconst to some functions/variables
  arm64/asm: Remove unused assembler DAIF save/restore macros
  arm64/kpti: Move DAIF masking to C code
  Revert "arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)"
  arm64/mm: Drop unused restore_ttbr1
  arm64: alternatives: make apply_alternatives_vdso() static
  arm64/mm: Drop idmap_pg_end[] declaration
  arm64/mm: Drop redundant BUG_ON(!pgtable_alloc)
  arm64: make is_ttbrX_addr() noinstr-safe
  arm64/signal: Document our convention for choosing magic numbers
  arm64: atomics: lse: remove stale dependency on JUMP_LABEL
  arm64: paravirt: remove conduit check in has_pv_steal_clock
  arm64: entry: Fix typo
  arm64/booting: Add missing colon to FA64 entry
  arm64/mm: Drop ARM64_KERNEL_USES_PMD_MAPS
  arm64/asm: Remove unused enable_da macro
2022-12-06 11:33:29 +00:00
Mark Brown d503d01e50 arm64/asm: Remove unused assembler DAIF save/restore macros
There are no longer any users of the assembler macros for saving and
restoring DAIF, so remove them to prevent further users being added;
C equivalents are available.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20221123180209.634650-3-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2022-11-25 12:17:53 +00:00
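The C equivalents referred to above are the local_daif_*() helpers in
<asm/daifflags.h>; what follows is a minimal sketch of the underlying
PSTATE.DAIF accesses, not the kernel's exact implementation (the helper
names here are illustrative):

```c
#include <linux/types.h>

/* Sketch only: save-and-mask and restore of PSTATE.DAIF, in the spirit
 * of the kernel's local_daif_save()/local_daif_restore() helpers. */
static inline unsigned long daif_save_and_mask(void)
{
	unsigned long flags;

	asm volatile("mrs %0, daif" : "=r" (flags));
	asm volatile("msr daifset, #0xf" ::: "memory"); /* mask D, A, I, F */
	return flags;
}

static inline void daif_restore(unsigned long flags)
{
	asm volatile("msr daif, %0" :: "r" (flags) : "memory");
}
```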
Anshuman Khandual 5b468dad6e arm64/mm: Drop unused restore_ttbr1
The restore_ttbr1 procedure is not used anywhere, so just drop it.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20221117123144.403582-1-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2022-11-18 14:37:24 +00:00
Anshuman Khandual a4ee28615c arm64/mm: Simplify and document pte_to_phys() for 52 bit addresses
The pte_to_phys() assembly definition does multiple bit-field transformations
to derive the physical address embedded inside a page table entry. Unlike its
C counterpart, i.e. __pte_to_phys(), pte_to_phys() is not very apparent.
Simplify these operations via a new macro, PTE_ADDR_HIGH_SHIFT, indicating
how far the pte-encoded higher address bits need to be left-shifted. While
here, also update __pte_to_phys() and __phys_to_pte_val().

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20221107141753.2938621-1-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2022-11-09 18:13:18 +00:00
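For illustration, the transformation described above boils down to the
following, assuming 64K pages with a 52-bit PA where PA[51:48] live in pte
bits [15:12]. This is a sketch: the constant values are worked out from
those assumptions, and the kernel's real macros operate on pte_t and
phys_addr_t under config options.

```c
#define PTE_ADDR_LOW		(((1UL << (48 - 16)) - 1) << 16) /* pte[47:16] = PA[47:16] */
#define PTE_ADDR_HIGH		(0xfUL << 12)                    /* pte[15:12] = PA[51:48] */
#define PTE_ADDR_HIGH_SHIFT	36                               /* 12 + 36 == 48 */

#define __pte_to_phys(pte) \
	(((pte) & PTE_ADDR_LOW) | (((pte) & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT))

#define __phys_to_pte_val(phys) \
	(((phys) | ((phys) >> PTE_ADDR_HIGH_SHIFT)) & (PTE_ADDR_LOW | PTE_ADDR_HIGH))
```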
Mark Brown e8e5104118 arm64/asm: Remove unused enable_da macro
We no longer use the enable_da macro; remove it to avoid having to think
about maintaining it.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20221019120346.72289-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2022-11-07 16:19:19 +00:00
Catalin Marinas c704cf27a1 Merge branch 'for-next/alternatives' into for-next/core
* for-next/alternatives:
  : Alternatives (code patching) improvements
  arm64: fix the build with binutils 2.27
  arm64: avoid BUILD_BUG_ON() in alternative-macros
  arm64: alternatives: add shared NOP callback
  arm64: alternatives: add alternative_has_feature_*()
  arm64: alternatives: have callbacks take a cap
  arm64: alternatives: make alt_region const
  arm64: alternatives: hoist print out of __apply_alternatives()
  arm64: alternatives: proton-pack: prepare for cap changes
  arm64: alternatives: kvm: prepare for cap changes
  arm64: cpufeature: make cpus_have_cap() noinstr-safe
2022-09-30 09:18:22 +01:00
Mark Rutland 4c0bd995d7 arm64: alternatives: have callbacks take a cap
Today, callback alternatives are special-cased within
__apply_alternatives(), and are applied alongside patching for system
capabilities as ARM64_NCAPS is not part of the boot_capabilities feature
mask.

This special-casing is less than ideal. Giving special meaning to
ARM64_NCAPS for this requires some structures and loops to use
ARM64_NCAPS + 1 (AKA ARM64_NPATCHABLE), while others use ARM64_NCAPS.
It's also not immediately clear that callback alternatives are only applied
when applying alternatives for system-wide features.

To make this a bit clearer, change the way that callback alternatives
are identified to remove the special-casing of ARM64_NCAPS, and allow
callback alternatives to be associated with a cpucap, as with all other
alternatives.

New cpucaps, ARM64_ALWAYS_BOOT and ARM64_ALWAYS_SYSTEM, are added, which
are always detected alongside boot CPU capabilities and system
capabilities respectively. All existing callback alternatives are made
to use ARM64_ALWAYS_SYSTEM, and so will be patched at the same point
during the boot flow as before.

Subsequent patches will make more use of these new cpucaps.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220912162210.3626215-7-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2022-09-16 17:15:03 +01:00
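A hedged sketch of the encoding this moves to: callback alternatives carry
a real cpucap plus a flag bit instead of the reserved value ARM64_NCAPS.
ARM64_CB_BIT follows the series; the two helpers are illustrative, not the
kernel's code.

```c
#define ARM64_CB_BIT	(1UL << 15)	/* "this alternative uses a callback" */

static inline bool alt_has_cb(unsigned long cpucap)
{
	return cpucap & ARM64_CB_BIT;
}

static inline unsigned long alt_cpucap(unsigned long cpucap)
{
	return cpucap & ~ARM64_CB_BIT;	/* e.g. ARM64_ALWAYS_SYSTEM */
}
```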
Mark Brown fcf37b38ff arm64/sysreg: Add _EL1 into ID_AA64DFR0_EL1 definition names
Normally we include the full register name in the defines for fields within
registers but this has not been followed for ID registers. In preparation
for automatic generation of defines add the _EL1s into the defines for
ID_AA64DFR0_EL1 to follow the convention. No functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-3-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2022-09-16 12:38:57 +01:00
Mark Brown c0357a73fa arm64/sysreg: Align field names in ID_AA64DFR0_EL1 with architecture
The naming scheme the architecture uses for the fields in ID_AA64DFR0_EL1
does not align well with kernel conventions, using as it does a lot of
MixedCase in various arrangements. In preparation for automatically
generating the defines for this register rename the defines used to match
what is in the architecture.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-2-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2022-09-16 12:38:57 +01:00
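Taken together, the two renames above look like this for a sample field
(PMUVer, ID_AA64DFR0_EL1[11:8]); an illustrative before/after, not a quote
from the patches:

```c
/* before: register name lacks _EL1, field name is all-caps */
#define ID_AA64DFR0_PMUVER_SHIFT	8

/* after: full register name plus the architecture's MixedCase field name */
#define ID_AA64DFR0_EL1_PMUVer_SHIFT	8
```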
Mark Brown 8f40baded4 arm64/sysreg: Standardise naming for ID_AA64MMFR2_EL1.VARange
The kernel refers to ID_AA64MMFR2_EL1.VARange as LVA. In preparation for
automatic generation of defines for the system registers bring the naming
used by the kernel in sync with that of DDI0487H.a. No functional change.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-12-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2022-09-09 10:59:03 +01:00
Mark Brown 55adc08d7e arm64/sysreg: Add _EL1 into ID_AA64PFR0_EL1 definition names
Normally we include the full register name in the defines for fields within
registers but this has not been followed for ID registers. In preparation
for automatic generation of defines add the _EL1s into the defines for
ID_AA64PFR0_EL1 to follow the convention. No functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-7-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2022-09-09 10:59:02 +01:00
Mark Brown a957c6be2b arm64/sysreg: Add _EL1 into ID_AA64MMFR2_EL1 definition names
Normally we include the full register name in the defines for fields within
registers but this has not been followed for ID registers. In preparation
for automatic generation of defines add the _EL1s into the defines for
ID_AA64MMFR2_EL1 to follow the convention. No functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-6-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2022-09-09 10:59:02 +01:00
Mark Brown 2d987e64e8 arm64/sysreg: Add _EL1 into ID_AA64MMFR0_EL1 definition names
Normally we include the full register name in the defines for fields within
registers but this has not been followed for ID registers. In preparation
for automatic generation of defines add the _EL1s into the defines for
ID_AA64MMFR0_EL1 to follow the convention. No functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-5-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2022-09-09 10:59:02 +01:00
Will Deacon f96d67a8af Merge branch 'for-next/boot' into for-next/core
* for-next/boot: (34 commits)
  arm64: fix KASAN_INLINE
  arm64: Add an override for ID_AA64SMFR0_EL1.FA64
  arm64: Add the arm64.nosve command line option
  arm64: Add the arm64.nosme command line option
  arm64: Expose a __check_override primitive for oddball features
  arm64: Allow the idreg override to deal with variable field width
  arm64: Factor out checking of a feature against the override into a macro
  arm64: Allow sticky E2H when entering EL1
  arm64: Save state of HCR_EL2.E2H before switch to EL1
  arm64: Rename the VHE switch to "finalise_el2"
  arm64: mm: fix booting with 52-bit address space
  arm64: head: remove __PHYS_OFFSET
  arm64: lds: use PROVIDE instead of conditional definitions
  arm64: setup: drop early FDT pointer helpers
  arm64: head: avoid relocating the kernel twice for KASLR
  arm64: kaslr: defer initialization to initcall where permitted
  arm64: head: record CPU boot mode after enabling the MMU
  arm64: head: populate kernel page tables with MMU and caches on
  arm64: head: factor out TTBR1 assignment into a macro
  arm64: idreg-override: use early FDT mapping in ID map
  ...
2022-07-25 10:59:15 +01:00
Tong Tiangen e4208e80a3 arm64: extable: move _cond_extable to _cond_uaccess_extable
Currently, we use _cond_extable for the cache maintenance uaccess helper
caches_clean_inval_user_pou(), so this should be moved over to
EX_TYPE_UACCESS_ERR_ZERO, and _cond_extable renamed to _cond_uaccess_extable
for clarity.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-6-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
2022-06-28 12:11:47 +01:00
Ard Biesheuvel c0be8f18a3 arm64: head: factor out TTBR1 assignment into a macro
Create a macro load_ttbr1 to avoid having to repeat the same instruction
sequence 3 times in a subsequent patch. No functional change intended.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220624150651.1358849-17-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24 17:18:10 +01:00
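The repeated instruction sequence being factored out is essentially a
TTBR1_EL1 write followed by a synchronisation barrier; sketched from C for
illustration (the real load_ttbr1 is a GAS macro and also handles the
52-bit PA TTBR offset):

```c
static inline void load_ttbr1(unsigned long ttbr)
{
	asm volatile(
		"msr	ttbr1_el1, %0\n"
		"isb"			/* synchronise the translation change */
		:: "r" (ttbr) : "memory");
}
```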
Ard Biesheuvel e8d13cced5 arm64: head: move assignment of idmap_t0sz to C code
Setting idmap_t0sz involves fiddling with the caches if done with the
MMU off. Since we will be creating an initial ID map with the MMU and
caches off, and the permanent ID map with the MMU and caches on, let's
move this assignment of idmap_t0sz out of the startup code, and replace
it with a macro that simply issues the three instructions needed to
calculate the value wherever it is needed before the MMU is turned on.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220624150651.1358849-4-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2022-06-24 17:18:09 +01:00
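The three-instruction computation referred to above amounts to counting the
leading zeros above the highest address the ID map must cover, floored at
VA_BITS_MIN. A hedged C rendering, with illustrative parameter names:

```c
/* T0SZ = 64 - (VA bits needed); __builtin_clzl gives exactly that once
 * the span is OR'ed with a VA_BITS_MIN-sized floor. Illustrative only. */
static inline unsigned int idmap_get_t0sz(unsigned long kernel_end,
					  unsigned int va_bits_min)
{
	unsigned long span = kernel_end | ((1UL << va_bits_min) - 1);

	return __builtin_clzl(span);
}
```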
Will Deacon 641d804157 Merge branch 'for-next/spectre-bhb' into for-next/core
Merge in the latest Spectre mess to fix up conflicts with what was
already queued for 5.18 when the embargo finally lifted.

* for-next/spectre-bhb: (21 commits)
  arm64: Do not include __READ_ONCE() block in assembly files
  arm64: proton-pack: Include unprivileged eBPF status in Spectre v2 mitigation reporting
  arm64: Use the clearbhb instruction in mitigations
  KVM: arm64: Allow SMCCC_ARCH_WORKAROUND_3 to be discovered and migrated
  arm64: Mitigate spectre style branch history side channels
  arm64: proton-pack: Report Spectre-BHB vulnerabilities as part of Spectre-v2
  arm64: Add percpu vectors for EL1
  arm64: entry: Add macro for reading symbol addresses from the trampoline
  arm64: entry: Add vectors that have the bhb mitigation sequences
  arm64: entry: Add non-kpti __bp_harden_el1_vectors for mitigations
  arm64: entry: Allow the trampoline text to occupy multiple pages
  arm64: entry: Make the kpti trampoline's kpti sequence optional
  arm64: entry: Move trampoline macros out of ifdef'd section
  arm64: entry: Don't assume tramp_vectors is the start of the vectors
  arm64: entry: Allow tramp_alias to access symbols after the 4K boundary
  arm64: entry: Move the trampoline data page before the text page
  arm64: entry: Free up another register on kpti's tramp_exit path
  arm64: entry: Make the trampoline cleanup optional
  KVM: arm64: Allow indirect vectors to be used without SPECTRE_V3A
  arm64: spectre: Rename spectre_v4_patch_fw_mitigation_conduit
  ...
2022-03-14 19:08:31 +00:00
Joey Gouly e33c89256e Revert "arm64: Mitigate MTE issues with str{n}cmp()"
This reverts commit 59a68d4138.

Now that the str{n}cmp functions have been updated to handle MTE
properly, the workaround to use the generic functions is no longer
needed.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220301101435.19327-4-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2022-03-07 21:57:02 +00:00
James Morse 228a26b912 arm64: Use the clearbhb instruction in mitigations
Future CPUs may implement a clearbhb instruction that is sufficient
to mitigate SpectreBHB. CPUs that implement this instruction, but
not CSV2.3, must be affected by Spectre-BHB.

Add support to use this instruction as the BHB mitigation on CPUs
that support it. The instruction is in the hint space, so it will
be treated as a NOP by older CPUs.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2022-02-24 14:02:44 +00:00
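Because the instruction sits in the hint space, it can be emitted
unconditionally; a sketch of the encoding (HINT #22 is CLRBHB), noting
that the kernel emits it from its assembly vectors rather than from C:

```c
static inline void clearbhb(void)
{
	asm volatile("hint #22" ::: "memory");	/* CLRBHB; NOP on older CPUs */
}
```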
James Morse 558c303c97 arm64: Mitigate spectre style branch history side channels
Speculation attacks against some high-performance processors can
make use of branch history to influence future speculation.
When taking an exception from user-space, a sequence of branches
or a firmware call overwrites or invalidates the branch history.

The sequence of branches is added to the vectors, and should appear
before the first indirect branch. For systems using KPTI the sequence
is added to the kpti trampoline where it has a free register as the exit
from the trampoline is via a 'ret'. For systems not using KPTI, the same
register tricks are used to free up a register in the vectors.

For the firmware call, arch-workaround-3 clobbers 4 registers, so
there is no choice but to save them to the EL1 stack. This only happens
for entry from EL0, so if we take an exception due to the stack access,
it will not become re-entrant.

For KVM, the existing branch-predictor-hardening vectors are used.
When a spectre version of these vectors is in use, the firmware call
is sufficient to mitigate against Spectre-BHB. For the non-spectre
versions, the sequence of branches is added to the indirect vector.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2022-02-24 13:58:52 +00:00
James Morse ba2689234b arm64: entry: Add vectors that have the bhb mitigation sequences
Some CPUs affected by Spectre-BHB need a sequence of branches, or a
firmware call to be run before any indirect branch. This needs to go
in the vectors. No CPU needs both.

While this can be patched in, it would run on all CPUs as there is a
single set of vectors. If only one part of a big/little combination is
affected, the unaffected CPUs have to run the mitigation too.

Create extra vectors that include the sequence. Subsequent patches will
allow affected CPUs to select this set of vectors. Later patches will
modify the loop count to match what the CPU requires.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2022-02-16 13:16:08 +00:00
Mark Brown 9be34be87c arm64: Add macro version of the BTI instruction
BTI is only available from v8.5 so we need to encode it using HINT in
generic code and for older toolchains. Add an assembler macro based on
one written by Mark Rutland which lets us use the mnemonic and update
the existing users.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20211214152714.2380849-2-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-12-14 18:12:58 +00:00
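The macro works because the BTI instructions occupy the hint space; their
encodings are shown here as illustrative C wrappers rather than the
kernel's GAS macro:

```c
#define bti()		asm volatile("hint #32")	/* bti */
#define bti_c()		asm volatile("hint #34")	/* bti c */
#define bti_j()		asm volatile("hint #36")	/* bti j */
#define bti_jc()	asm volatile("hint #38")	/* bti jc */
```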
Will Deacon d8a2c0fba5 Merge branch 'for-next/kexec' into for-next/core
* for-next/kexec:
  arm64: trans_pgd: remove trans_pgd_map_page()
  arm64: kexec: remove cpu-reset.h
  arm64: kexec: remove the pre-kexec PoC maintenance
  arm64: kexec: keep MMU enabled during kexec relocation
  arm64: kexec: install a copy of the linear-map
  arm64: kexec: use ld script for relocation function
  arm64: kexec: relocate in EL1 mode
  arm64: kexec: configure EL2 vectors for kexec
  arm64: kexec: pass kimage as the only argument to relocation function
  arm64: kexec: Use dcache ops macros instead of open-coding
  arm64: kexec: skip relocation code for inplace kexec
  arm64: kexec: flush image and lists during kexec load time
  arm64: hibernate: abstract ttrb0 setup function
  arm64: trans_pgd: hibernate: Add trans_pgd_copy_el2_vectors
  arm64: kernel: add helper for booted at EL2 and not VHE
2021-10-29 12:24:47 +01:00
Mark Rutland 819771cc28 arm64: extable: consolidate definitions
In subsequent patches we'll alter the structure and usage of struct
exception_table_entry. For inline assembly, we create these using the
`_ASM_EXTABLE()` CPP macro defined in <asm/uaccess.h>, and for plain
assembly code we use the `_asm_extable()` GAS macro defined in
<asm/assembler.h>, which are largely identical save for different
escaping and stringification requirements.

This patch moves the common definitions to a new <asm/asm-extable.h>
header, so that it's easier to keep the two in sync, and to remove the
implication that these are only used for uaccess helpers (as e.g.
load_unaligned_zeropad() is only used on kernel memory, and depends upon
`_ASM_EXTABLE()`).

At the same time, a few minor modifications are made for clarity and in
preparation for subsequent patches:

* The structure creation is factored out into an `__ASM_EXTABLE_RAW()`
  macro. This will make it easier to support different fixup variants in
  subsequent patches without needing to update all users of
  `_ASM_EXTABLE()`, and makes it easier to see that the CPP and GAS
  variants of the macros are structurally identical.

  For the CPP macro, the stringification of fields is left to the
  wrapper macro, `_ASM_EXTABLE()`, as in subsequent patches it will be
  necessary to stringify fields in wrapper macros to safely concatenate
  strings which cannot be token-pasted together in CPP.

* The fields of the structure are created separately on their own lines.
  This will make it easier to add/remove/modify individual fields
  clearly.

* Additional parentheses are added around the use of macro arguments in
  field definitions to avoid any potential problems with evaluation due
  to operator precedence, and to make errors upon misuse clearer.

* USER() is moved into <asm/asm-uaccess.h>, as it is not required by all
  assembly code, and is already referred to by comments in that file.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-8-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-10-21 10:45:22 +01:00
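A rough sketch of the shared entry shape described above, limited to the
two original fields. Treat this as illustrative: the real header also
carries a GAS twin with different escaping, and the alignment and exact
layout are assumptions here.

```c
#define __ASM_EXTABLE_RAW(insn, fixup)			\
	".pushsection	__ex_table, \"a\"\n"		\
	".align		3\n"				\
	".long		((" insn ") - .)\n"		\
	".long		((" fixup ") - .)\n"		\
	".popsection\n"

/* stringification left to the wrapper, as the commit describes */
#define _ASM_EXTABLE(insn, fixup) \
	__ASM_EXTABLE_RAW(#insn, #fixup)
```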
Pasha Tatashin 3744b5280e arm64: kexec: install a copy of the linear-map
To perform the kexec relocation with the MMU enabled, we need a copy
of the linear map.

Create one, and install it from the relocation code. This has to be done
from the assembly code as it will be idmapped with TTBR0. The kernel
runs in TTBR1, so it can't use the break-before-make sequence on the
mapping it is executing from.

This makes no difference yet, as the relocation code runs with the MMU
disabled.

Suggested-by: James Morse <james.morse@arm.com>
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210930143113.1502553-12-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-10-01 13:31:00 +01:00
Pasha Tatashin 3036ec5993 arm64: kexec: Use dcache ops macros instead of open-coding
kexec does dcache maintenance when it re-writes all memory. Our
dcache_by_line_op macro depends on reading the sanitised DminLine
from memory. Kexec may have overwritten this, so it open-codes the
sequence.

dcache_by_line_op is a whole set of macros; it uses dcache_line_size,
which uses read_ctr for the sanitised DminLine. Reading the DminLine
is the first thing dcache_by_line_op does.

Rename dcache_by_line_op to dcache_by_myline_op and take DminLine as
an argument. Kexec can now use the slightly smaller macro.

This makes up-coming changes to the dcache maintenance easier on
the eye.

Code generated by the existing callers is unchanged.

Suggested-by: James Morse <james.morse@arm.com>
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210930143113.1502553-7-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-10-01 13:31:00 +01:00
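For reference, the DminLine value that dcache_by_line_op depends on is
CTR_EL0[19:16], the log2 of the smallest D-cache line in 4-byte words; a
sketch of the derivation (the kernel does this in assembly via read_ctr):

```c
static inline unsigned long dcache_line_size(void)
{
	unsigned long ctr;

	asm volatile("mrs %0, ctr_el0" : "=r" (ctr));
	return 4UL << ((ctr >> 16) & 0xf);	/* DminLine: CTR_EL0[19:16] */
}
```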
Robin Murphy 59a68d4138 arm64: Mitigate MTE issues with str{n}cmp()
As with strlen(), the patches importing the updated str{n}cmp()
implementations were originally developed and tested before the
advent of CONFIG_KASAN_HW_TAGS, and have subsequently been revealed
not to be MTE-safe. Since in-kernel MTE is still a rather niche
case, let them temporarily fall back to the generic C versions for
correctness until we can figure out the best fix.

Fixes: 758602c044 ("arm64: Import latest version of Cortex Strings' strcmp")
Fixes: 020b199bc7 ("arm64: Import latest version of Cortex Strings' strncmp")
Cc: <stable@vger.kernel.org> # 5.14.x
Reported-by: Branislav Rankov <branislav.rankov@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/34dc4d12eec0adae49b0ac927df642ed10089d40.1631890770.git.robin.murphy@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-09-21 14:50:19 +01:00
Will Deacon 25377204eb Merge branch 'for-next/caches' into for-next/core
Big cleanup of our cache maintenance routines, which were confusingly
named and inconsistent in their implementations.

* for-next/caches:
  arm64: Rename arm64-internal cache maintenance functions
  arm64: Fix cache maintenance function comments
  arm64: sync_icache_aliases to take end parameter instead of size
  arm64: __clean_dcache_area_pou to take end parameter instead of size
  arm64: __clean_dcache_area_pop to take end parameter instead of size
  arm64: __clean_dcache_area_poc to take end parameter instead of size
  arm64: __flush_dcache_area to take end parameter instead of size
  arm64: dcache_by_line_op to take end parameter instead of size
  arm64: __inval_dcache_area to take end parameter instead of size
  arm64: Fix comments to refer to correct function __flush_icache_range
  arm64: Move documentation of dcache_by_line_op
  arm64: assembler: remove user_alt
  arm64: Downgrade flush_icache_range to invalidate
  arm64: Do not enable uaccess for invalidate_icache_range
  arm64: Do not enable uaccess for flush_icache_range
  arm64: Apply errata to swsusp_arch_suspend_exit
  arm64: assembler: add conditional cache fixups
  arm64: assembler: replace `kaddr` with `addr`
2021-06-24 13:33:02 +01:00
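The (start, end) calling convention this series standardises on makes the
by-line loops straightforward. Below is a minimal sketch of one such
routine (clean to PoC); the name and shape are illustrative, not the
kernel's implementation:

```c
static inline void clean_dcache_range(unsigned long start, unsigned long end)
{
	unsigned long ctr, line;

	asm volatile("mrs %0, ctr_el0" : "=r" (ctr));
	line = 4UL << ((ctr >> 16) & 0xf);	/* smallest D-cache line */

	for (start &= ~(line - 1); start < end; start += line)
		asm volatile("dc cvac, %0" :: "r" (start) : "memory");
	asm volatile("dsb ish" ::: "memory");	/* complete the maintenance */
}
```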
Mark Rutland e176e2677c arm64: assembler: add set_this_cpu_offset
There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-05-26 22:45:45 +01:00
Fuad Tabba 163d3f8069 arm64: dcache_by_line_op to take end parameter instead of size
To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-12-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-05-25 19:27:49 +01:00
Fuad Tabba 06b7a568ca arm64: Move documentation of dcache_by_line_op
The comment describing the macro dcache_by_line_op is placed right
before the macro preceding the one it describes, which is a bit
confusing. Move it to the macro it describes (dcache_by_line_op).

No functional change intended.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-9-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-05-25 19:27:48 +01:00
Mark Rutland d11b187760 arm64: assembler: add conditional cache fixups
It would be helpful if we could use both `dcache_by_line_op` and
`invalidate_icache_by_line` for user memory without accidentally fixing
up unexpected faults when performing maintenance on kernel addresses.

Let's make this possible by having both macros take an optional fixup
label, and only generating an extable entry if a label is provided.

At the same time, let's clean up the labels used to be globally unique
using \@ as we do for other macros.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Fuad Tabba <tabba@google.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-3-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-05-25 19:27:48 +01:00
Mark Rutland e89d6cc510 arm64: assembler: replace `kaddr` with `addr`
The `__dcache_op_workaround_clean_cache` and `dcache_by_line_op` macros
are only expected to be used on kernel memory, without a user fault
fixup, and so we named their address variables `kaddr` to make this
clear.

Subsequent patches will modify these to also work on user memory with an
(optional) user fault fixup, where `kaddr` won't make as much sense. To
aid the legibility of patches, this patch (only) replaces `kaddr` with
`addr` as a preparatory step.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Fuad Tabba <tabba@google.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-2-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-05-25 19:27:48 +01:00
Linus Torvalds 152d32aa84 ARM:
- Stage-2 isolation for the host kernel when running in protected mode
 
 - Guest SVE support when running in nVHE mode
 
 - Force W^X hypervisor mappings in nVHE mode
 
 - ITS save/restore for guests using direct injection with GICv4.1
 
 - nVHE panics now produce readable backtraces
 
 - Guest support for PTP using the ptp_kvm driver
 
 - Performance improvements in the S2 fault handler
 
 x86:
 
 - Optimizations and cleanup of nested SVM code
 
 - AMD: Support for virtual SPEC_CTRL
 
 - Optimizations of the new MMU code: fast invalidation,
   zap under read lock, enable/disably dirty page logging under
   read lock
 
 - /dev/kvm API for AMD SEV live migration (guest API coming soon)
 
 - support SEV virtual machines sharing the same encryption context
 
 - support SGX in virtual machines
 
 - add a few more statistics
 
 - improved directed yield heuristics
 
 - Lots and lots of cleanups
 
 Generic:
 
 - Rework of MMU notifier interface, simplifying and optimizing
 the architecture-specific code
 
 - Some selftests improvements

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull kvm updates from Paolo Bonzini:
 "This is a large update by KVM standards, including AMD PSP (Platform
  Security Processor, aka "AMD Secure Technology") and ARM CoreSight
  (debug and trace) changes.

  ARM:

   - CoreSight: Add support for ETE and TRBE

   - Stage-2 isolation for the host kernel when running in protected
     mode

   - Guest SVE support when running in nVHE mode

   - Force W^X hypervisor mappings in nVHE mode

   - ITS save/restore for guests using direct injection with GICv4.1

   - nVHE panics now produce readable backtraces

   - Guest support for PTP using the ptp_kvm driver

   - Performance improvements in the S2 fault handler

  x86:

   - AMD PSP driver changes

   - Optimizations and cleanup of nested SVM code

   - AMD: Support for virtual SPEC_CTRL

   - Optimizations of the new MMU code: fast invalidation, zap under
     read lock, enable/disably dirty page logging under read lock

   - /dev/kvm API for AMD SEV live migration (guest API coming soon)

   - support SEV virtual machines sharing the same encryption context

   - support SGX in virtual machines

   - add a few more statistics

   - improved directed yield heuristics

   - Lots and lots of cleanups

  Generic:

   - Rework of MMU notifier interface, simplifying and optimizing the
     architecture-specific code

   - a handful of "Get rid of oprofile leftovers" patches

   - Some selftests improvements"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (379 commits)
  KVM: selftests: Speed up set_memory_region_test
  selftests: kvm: Fix the check of return value
  KVM: x86: Take advantage of kvm_arch_dy_has_pending_interrupt()
  KVM: SVM: Skip SEV cache flush if no ASIDs have been used
  KVM: SVM: Remove an unnecessary prototype declaration of sev_flush_asids()
  KVM: SVM: Drop redundant svm_sev_enabled() helper
  KVM: SVM: Move SEV VMCB tracking allocation to sev.c
  KVM: SVM: Explicitly check max SEV ASID during sev_hardware_setup()
  KVM: SVM: Unconditionally invoke sev_hardware_teardown()
  KVM: SVM: Enable SEV/SEV-ES functionality by default (when supported)
  KVM: SVM: Condition sev_enabled and sev_es_enabled on CONFIG_KVM_AMD_SEV=y
  KVM: SVM: Append "_enabled" to module-scoped SEV/SEV-ES control variables
  KVM: SEV: Mask CPUID[0x8000001F].eax according to supported features
  KVM: SVM: Move SEV module params/variables to sev.c
  KVM: SVM: Disable SEV/SEV-ES if NPT is disabled
  KVM: SVM: Free sev_asid_bitmap during init if SEV setup fails
  KVM: SVM: Zero out the VMCB array used to track SEV ASID association
  x86/sev: Drop redundant and potentially misleading 'sev_enabled'
  KVM: x86: Move reverse CPUID helpers to separate header file
  KVM: x86: Rename GPR accessors to make mode-aware variants the defaults
  ...
2021-05-01 10:14:08 -07:00
Catalin Marinas a1e1eddef2 Merge branches 'for-next/misc', 'for-next/kselftest', 'for-next/xntable', 'for-next/vdso', 'for-next/fiq', 'for-next/epan', 'for-next/kasan-vmalloc', 'for-next/fgt-boot-init', 'for-next/vhe-only' and 'for-next/neon-softirqs-disabled', remote-tracking branch 'arm64/for-next/perf' into for-next/core
* for-next/misc:
  : Miscellaneous patches
  arm64/sve: Add compile time checks for SVE hooks in generic functions
  arm64/kernel/probes: Use BUG_ON instead of if condition followed by BUG.
  arm64/sve: Remove redundant system_supports_sve() tests
  arm64: mte: Remove unused mte_assign_mem_tag_range()
  arm64: Add __init section marker to some functions
  arm64/sve: Rework SVE access trap to convert state in registers
  docs: arm64: Fix a grammar error
  arm64: smp: Add missing prototype for some smp.c functions
  arm64: setup: name `tcr` register
  arm64: setup: name `mair` register
  arm64: stacktrace: Move start_backtrace() out of the header
  arm64: barrier: Remove spec_bar() macro
  arm64: entry: remove test_irqs_unmasked macro
  ARM64: enable GENERIC_FIND_FIRST_BIT
  arm64: defconfig: Use DEBUG_INFO_REDUCED

* for-next/kselftest:
  : Various kselftests for arm64
  kselftest: arm64: Add BTI tests
  kselftest/arm64: mte: Report filename on failing temp file creation
  kselftest/arm64: mte: Fix clang warning
  kselftest/arm64: mte: Makefile: Fix clang compilation
  kselftest/arm64: mte: Output warning about failing compiler
  kselftest/arm64: mte: Use cross-compiler if specified
  kselftest/arm64: mte: Fix MTE feature detection
  kselftest/arm64: mte: common: Fix write() warnings
  kselftest/arm64: mte: user_mem: Fix write() warning
  kselftest/arm64: mte: ksm_options: Fix fscanf warning
  kselftest/arm64: mte: Fix pthread linking
  kselftest/arm64: mte: Fix compilation with native compiler

* for-next/xntable:
  : Add hierarchical XN permissions for all page tables
  arm64: mm: use XN table mapping attributes for user/kernel mappings
  arm64: mm: use XN table mapping attributes for the linear region
  arm64: mm: add missing P4D definitions and use them consistently

* for-next/vdso:
  : Minor improvements to the compat vdso and sigpage
  arm64: compat: Poison the compat sigpage
  arm64: vdso: Avoid ISB after reading from cntvct_el0
  arm64: compat: Allow signal page to be remapped
  arm64: vdso: Remove redundant calls to flush_dcache_page()
  arm64: vdso: Use GFP_KERNEL for allocating compat vdso and signal pages

* for-next/fiq:
  : Support arm64 FIQ controller registration
  arm64: irq: allow FIQs to be handled
  arm64: Always keep DAIF.[IF] in sync
  arm64: entry: factor irq triage logic into macros
  arm64: irq: rework root IRQ handler registration
  arm64: don't use GENERIC_IRQ_MULTI_HANDLER
  genirq: Allow architectures to override set_handle_irq() fallback

* for-next/epan:
  : Support for Enhanced PAN (execute-only permissions)
  arm64: Support execute-only permissions with Enhanced PAN

* for-next/kasan-vmalloc:
  : Support CONFIG_KASAN_VMALLOC on arm64
  arm64: Kconfig: select KASAN_VMALLOC if KANSAN_GENERIC is enabled
  arm64: kaslr: support randomized module area with KASAN_VMALLOC
  arm64: Kconfig: support CONFIG_KASAN_VMALLOC
  arm64: kasan: abstract _text and _end to KERNEL_START/END
  arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC

* for-next/fgt-boot-init:
  : Booting clarifications and fine grained traps setup
  arm64: Require that system registers at all visible ELs be initialized
  arm64: Disable fine grained traps on boot
  arm64: Document requirements for fine grained traps at boot

* for-next/vhe-only:
  : Dealing with VHE-only CPUs (a.k.a. M1)
  arm64: Get rid of CONFIG_ARM64_VHE
  arm64: Cope with CPUs stuck in VHE mode
  arm64: cpufeature: Allow early filtering of feature override

* arm64/for-next/perf:
  arm64: perf: Remove redundant initialization in perf_event.c
  perf/arm_pmu_platform: Clean up with dev_printk
  perf/arm_pmu_platform: Fix error handling
  perf/arm_pmu_platform: Use dev_err_probe() for IRQ errors
  docs: perf: Address some html build warnings
  docs: perf: Add new description on HiSilicon uncore PMU v2
  drivers/perf: hisi: Add support for HiSilicon PA PMU driver
  drivers/perf: hisi: Add support for HiSilicon SLLC PMU driver
  drivers/perf: hisi: Update DDRC PMU for programmable counter
  drivers/perf: hisi: Add new functions for HHA PMU
  drivers/perf: hisi: Add new functions for L3C PMU
  drivers/perf: hisi: Add PMU version for uncore PMU drivers.
  drivers/perf: hisi: Refactor code for more uncore PMUs
  drivers/perf: hisi: Remove unnecessary check of counter index
  drivers/perf: Simplify the SMMUv3 PMU event attributes
  drivers/perf: convert sysfs sprintf family to sysfs_emit
  drivers/perf: convert sysfs scnprintf family to sysfs_emit_at() and sysfs_emit()
  drivers/perf: convert sysfs snprintf family to sysfs_emit

* for-next/neon-softirqs-disabled:
  : Run kernel mode SIMD with softirqs disabled
  arm64: fpsimd: run kernel mode NEON with softirqs disabled
  arm64: assembler: introduce wxN aliases for wN registers
  arm64: assembler: remove conditional NEON yield macros
2021-04-15 14:00:38 +01:00
Marc Zyngier 3284cd638b Merge remote-tracking branch 'arm64/for-next/neon-softirqs-disabled' into kvmarm-master/next
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-13 15:46:58 +01:00
Ard Biesheuvel 13150149aa arm64: fpsimd: run kernel mode NEON with softirqs disabled
Kernel mode NEON can be used in task or softirq context, but only in
a non-nesting manner, i.e., softirq context is only permitted if the
interrupt was not taken at a point where the kernel was using the NEON
in task context.

This means all users of kernel mode NEON have to be aware of this
limitation, and either need to provide scalar fallbacks that may be much
slower (up to 20x for AES instructions) and potentially less safe, or
use an asynchronous interface that defers processing to a later time
when the NEON is guaranteed to be available.

Given that grabbing and releasing the NEON is cheap, we can relax this
restriction, by increasing the granularity of kernel mode NEON code, and
always disabling softirq processing while the NEON is being used in task
context.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210302090118.30666-4-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-12 11:55:34 +01:00
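The usage pattern this enables: hold the NEON for short, bounded sections
(during which softirqs are now off) instead of carrying scalar fallbacks.
A hedged sketch using the real kernel_neon_begin()/kernel_neon_end() API;
the function and the chunk size are illustrative:

```c
#include <linux/types.h>
#include <asm/neon.h>

static void neon_process(u8 *dst, const u8 *src, size_t len)
{
	while (len) {
		size_t chunk = len > 4096 ? 4096 : len;	/* keep sections bounded */

		kernel_neon_begin();	/* grabs the NEON; softirqs now disabled */
		/* ... process 'chunk' bytes of src into dst with NEON ... */
		kernel_neon_end();

		dst += chunk;
		src += chunk;
		len -= chunk;
	}
}
```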
Ard Biesheuvel 4c4dcd3541 arm64: assembler: introduce wxN aliases for wN registers
The AArch64 asm syntax has this slightly tedious property that the names
used in mnemonics to refer to registers depend on whether the opcode in
question targets the entire 64-bits (xN), or only the least significant
8, 16 or 32 bits (wN). When writing parameterized code such as macros,
this can be annoying, as macro arguments don't lend themselves to
indexed lookups, and so generating a reference to wN in a macro that
receives xN as an argument is problematic.

For instance, an upcoming patch that modifies the implementation of the
cond_yield macro to be able to refer to 32-bit registers would need to
modify invocations such as

  cond_yield	3f, x8

to

  cond_yield	3f, 8

so that the second argument can be token pasted after x or w to emit the
correct register reference. Unfortunately, this interferes with the self
documenting nature of the first example, where the second argument is
obviously a register, whereas in the second example, one would need to
go and look at the code to find out what '8' means.

So let's fix this by defining wxN aliases for all xN registers, which
resolve to the 32-bit alias of each respective 64-bit register. This
allows the macro implementation to paste the xN reference after a w to
obtain the correct register name.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210302090118.30666-3-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-12 11:55:33 +01:00
Ard Biesheuvel 27248fe1ab arm64: assembler: remove conditional NEON yield macros
The users of the conditional NEON yield macros have all been switched to
the simplified cond_yield macro, and so the NEON specific ones can be
removed.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210302090118.30666-2-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-12 11:55:33 +01:00
Marc Zyngier 755db23420 KVM: arm64: Generate final CTR_EL0 value when running in Protected mode
In protected mode, late CPUs are not allowed to boot (enforced by
the PSCI relay). We can thus specialise the read_ctr macro to
always return a pre-computed, sanitised value. Special care is
taken to prevent the use of this custom version outside of
protected mode.

Reviewed-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-03-25 11:00:33 +00:00
Hector Martin f0098155d3 arm64: Always keep DAIF.[IF] in sync
Apple SoCs (A11 and newer) have some interrupt sources hardwired to the
FIQ line. We implement support for this by simply treating IRQs and FIQs
the same way in the interrupt vectors.

To support these systems, the FIQ mask bit needs to be kept in sync with
the IRQ mask bit, so both kinds of exceptions are masked together. No
other platforms should be delivering FIQ exceptions right now, and we
already unmask FIQ in normal process context, so this should not have an
effect on other systems - if spurious FIQs were arriving, they would
already panic the kernel.

Signed-off-by: Hector Martin <marcan@marcan.st>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Hector Martin <marcan@marcan.st>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210315115629.57191-6-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-24 20:19:30 +00:00
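Keeping the two mask bits in sync boils down to always setting and
clearing DAIF.I and DAIF.F together; illustrated below with the
daifset/daifclr immediates (I is bit 1, F is bit 0). This is a sketch,
not the kernel's irqflags code, and the helper names are illustrative:

```c
static inline void irq_fiq_disable(void)
{
	asm volatile("msr daifset, #3" ::: "memory");	/* mask I and F together */
}

static inline void irq_fiq_enable(void)
{
	asm volatile("msr daifclr, #3" ::: "memory");	/* unmask I and F together */
}
```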
Quentin Perret 8f4de66e24 arm64: asm: Provide set_sctlr_el2 macro
We will soon need to turn the EL2 stage 1 MMU on and off in nVHE
protected mode, so refactor the set_sctlr_el1 macro to make it usable
for that purpose.

Acked-by: Will Deacon <will@kernel.org>
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210319100146.1149909-17-qperret@google.com
2021-03-19 12:01:21 +00:00
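The refactor parameterises the target system register; the same shape can
be sketched in C with string pasting (illustrative only; the kernel's
version is a GAS macro):

```c
#define set_sctlr(sreg, val) \
	asm volatile("msr " sreg ", %0\n\tisb" :: "r" (val) : "memory")

#define set_sctlr_el1(val)	set_sctlr("sctlr_el1", val)
#define set_sctlr_el2(val)	set_sctlr("sctlr_el2", val)
```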
Will Deacon f96a816fa5 Merge branch 'for-next/crypto' into for-next/core
Introduce a new macro to allow yielding the vector unit if preemption
is required. The initial users of this are being merged via the crypto
tree for 5.12.

* for-next/crypto:
  arm64: assembler: add cond_yield macro
2021-02-12 14:54:55 +00:00
Marc Zyngier 8cc8a32415 arm64: Turn the MMU-on sequence into a macro
Turning the MMU on is a popular sport in the arm64 kernel, and
we do it more than once, or even twice. As we are about to add
even more, let's turn it into a macro.

No expected functional change.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-4-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2021-02-08 12:51:26 +00:00
Ard Biesheuvel d13c613f13 arm64: assembler: add cond_yield macro
Add a macro cond_yield that branches to a specified label when called, if
the TIF_NEED_RESCHED flag is set and decreasing the preempt count would
make the task preemptible again, causing a schedule to occur. This
can be used by kernel mode SIMD code that keeps a lot of state in SIMD
registers, which would make chunking the input in order to perform the
cond_resched() check from C code disproportionately costly.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210203113626.220151-2-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2021-02-03 20:48:02 +00:00
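In C terms, the condition the macro tests is roughly the following — a
sketch of the commit's description, not the macro's exact encoding of the
thread_info fields (the helper name is illustrative):

```c
#include <linux/preempt.h>
#include <linux/sched.h>

static inline bool should_cond_yield(void)
{
	/* a reschedule is pending, and only the caller's own disable-count
	 * stands in the way of preemption */
	return need_resched() && preempt_count() == PREEMPT_DISABLE_OFFSET;
}
```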
Andrey Konovalov 0fea6e9af8 kasan, arm64: expand CONFIG_KASAN checks
Some #ifdef CONFIG_KASAN checks are only relevant for software KASAN modes
(either related to shadow memory or compiler instrumentation).  Expand
those into CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS.

Link: https://lkml.kernel.org/r/e6971e432dbd72bb897ff14134ebb7e169bdcf0c.1606161801.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Branislav Rankov <Branislav.Rankov@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Marco Elver <elver@google.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-12-22 12:55:08 -08:00
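The mechanical shape of the change, sketched with an illustrative
declaration (the function name here is made up for the example):

```c
/* before: guards shadow-related code for all KASAN modes */
#ifdef CONFIG_KASAN
void kasan_init_shadow(void);
#endif

/* after: only the software modes have shadow memory / instrumentation */
#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
void kasan_init_shadow(void);
#endif
```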
David Brazdil ea391027d3 kvm: arm64: Remove hyp_adr/ldr_this_cpu
The hyp_adr/ldr_this_cpu helpers were introduced for use in hyp code
because they always needed to use TPIDR_EL2 for base, while
adr/ldr_this_cpu from kernel proper would select between TPIDR_EL2 and
_EL1 based on VHE/nVHE.

Simplify this now that the hyp mode case can be handled using the
__KVM_VHE/NVHE_HYPERVISOR__ macros.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Andrew Scull <ascull@google.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200922204910.7265-6-dbrazdil@google.com
2020-09-30 08:37:07 +01:00
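A sketch of the compile-time selection this enables. Note that in kernel
proper the VHE/nVHE choice is actually made with a runtime alternative;
the macro names below are illustrative:

```c
#if defined(__KVM_NVHE_HYPERVISOR__) || defined(__KVM_VHE_HYPERVISOR__)
#define PERCPU_BASE_REG	"tpidr_el2"	/* hyp code always uses EL2 */
#else
#define PERCPU_BASE_REG	"tpidr_el1"	/* kernel proper (nVHE host) */
#endif

#define this_cpu_offset(res) \
	asm ("mrs %0, " PERCPU_BASE_REG : "=r" (res))
```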
Mark Brown 3a9b136c99 arm64: asm: Provide a mechanism for generating ELF note for BTI
ELF files built for BTI should have a program property note section which
identifies them as such. The linker expects to find this note in all
object files it is linking into a BTI-annotated output; the compiler will
ensure that this happens for C files, but for assembler files we need to
do this in the source, so provide a macro which can be used for this
purpose. To support likely future requirements for additional notes, we
split the definition of the flags to set for BTI code from the macro that
creates the note itself.

This is mainly for use in the vDSO which should be a normal ELF shared
library and should therefore include BTI annotations when built for BTI.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20200506195138.22086-9-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07 17:53:20 +01:00
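For reference, the note the macro emits is a .note.gnu.property entry
whose AArch64 feature word has the BTI bit set. Its layout, sketched as a
C struct using the well-known constants (the struct name is illustrative;
the kernel emits this from assembly):

```c
#include <stdint.h>

struct gnu_bti_note {
	uint32_t n_namesz;	/* 4 ("GNU\0") */
	uint32_t n_descsz;	/* 16: one 8-byte-aligned property */
	uint32_t n_type;	/* NT_GNU_PROPERTY_TYPE_0 == 5 */
	char	 n_name[4];	/* "GNU" */
	uint32_t pr_type;	/* GNU_PROPERTY_AARCH64_FEATURE_1_AND == 0xc0000000 */
	uint32_t pr_datasz;	/* 4 */
	uint32_t pr_data;	/* GNU_PROPERTY_AARCH64_FEATURE_1_BTI == 0x1 */
	uint32_t pr_pad;	/* pad the descriptor to 8 bytes */
};
```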
Catalin Marinas da12d2739f Merge branches 'for-next/memory-hotremove', 'for-next/arm_sdei', 'for-next/amu', 'for-next/final-cap-helper', 'for-next/cpu_ops-cleanup', 'for-next/misc' and 'for-next/perf' into for-next/core
* for-next/memory-hotremove:
  : Memory hot-remove support for arm64
  arm64/mm: Enable memory hot remove
  arm64/mm: Hold memory hotplug lock while walking for kernel page table dump

* for-next/arm_sdei:
  : SDEI: fix double locking on return from hibernate and clean-up
  firmware: arm_sdei: clean up sdei_event_create()
  firmware: arm_sdei: Use cpus_read_lock() to avoid races with cpuhp
  firmware: arm_sdei: fix possible double-lock on hibernate error path
  firmware: arm_sdei: fix double-lock on hibernate with shared events

* for-next/amu:
  : ARMv8.4 Activity Monitors support
  clocksource/drivers/arm_arch_timer: validate arch_timer_rate
  arm64: use activity monitors for frequency invariance
  cpufreq: add function to get the hardware max frequency
  Documentation: arm64: document support for the AMU extension
  arm64/kvm: disable access to AMU registers from kvm guests
  arm64: trap to EL1 accesses to AMU counters from EL0
  arm64: add support for the AMU extension v1

* for-next/final-cap-helper:
  : Introduce cpus_have_final_cap_helper(), migrate arm64 KVM to it
  arm64: kvm: hyp: use cpus_have_final_cap()
  arm64: cpufeature: add cpus_have_final_cap()

* for-next/cpu_ops-cleanup:
  : cpu_ops[] access code clean-up
  arm64: Introduce get_cpu_ops() helper function
  arm64: Rename cpu_read_ops() to init_cpu_ops()
  arm64: Declare ACPI parking protocol CPU operation if needed

* for-next/misc:
  : Various fixes and clean-ups
  arm64: define __alloc_zeroed_user_highpage
  arm64/kernel: Simplify __cpu_up() by bailing out early
  arm64: remove redundant blank for '=' operator
  arm64: kexec_file: Fixed code style.
  arm64: add blank after 'if'
  arm64: fix spelling mistake "ca not" -> "cannot"
  arm64: entry: unmask IRQ in el0_sp()
  arm64: efi: add efi-entry.o to targets instead of extra-$(CONFIG_EFI)
  arm64: csum: Optimise IPv6 header checksum
  arch/arm64: fix typo in a comment
  arm64: remove gratuitious/stray .ltorg stanzas
  arm64: Update comment for ASID() macro
  arm64: mm: convert cpu_do_switch_mm() to C
  arm64: fix NUMA Kconfig typos

* for-next/perf:
  : arm64 perf updates
  arm64: perf: Add support for ARMv8.5-PMU 64-bit counters
  KVM: arm64: limit PMU version to PMUv3 for ARMv8.1
  arm64: cpufeature: Extract capped perfmon fields
  arm64: perf: Clean up enable/disable calls
  perf: arm-ccn: Use scnprintf() for robustness
  arm64: perf: Support new DT compatibles
  arm64: perf: Refactor PMU init callbacks
  perf: arm_spe: Remove unnecessary zero check on 'nr_pages'
2020-03-25 11:10:32 +00:00