RISC-V Patches for the 6.3 Merge Window, Part 1
There's a bunch of fixes/cleanups throughout the tree as usual, but we also
have a handful of new features:

 - Various improvements to the extension detection and alternative
   patching infrastructure

 - Zbb-optimized string routines

 - Support for cpu-capacity in the RISC-V DT bindings

 - Zicbom no longer depends on toolchain support

 - Some performance and code size improvements to ftrace

 - Support for ARCH_WANT_LD_ORPHAN_WARN

 - Oops now contain the faulting instruction

Merge tag 'riscv-for-linus-6.3-mw1' of
git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux

Pull RISC-V updates from Palmer Dabbelt (67 commits):

* RISC-V: add a spin_shadow_stack declaration
* riscv: mm: hugetlb: Enable ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
* riscv: Add header include guards to insn.h
* riscv: alternative: proceed one more instruction for auipc/jalr pair
* riscv: Avoid enabling interrupts in die()
* riscv, mm: Perform BPF exhandler fixup on page fault
* RISC-V: take text_mutex during alternative patching
* riscv: hwcap: Don't alphabetize ISA extension IDs
* RISC-V: fix ordering of Zbb extension
* riscv: jump_label: Fixup unaligned arch_static_branch function
* RISC-V: Only provide the single-letter extensions in HWCAP
* riscv: mm: fix regression due to update_mmu_cache change
* scripts/decodecode: Add support for RISC-V
* riscv: Add instruction dump to RISC-V splats
* riscv: select ARCH_WANT_LD_ORPHAN_WARN for !XIP_KERNEL
* riscv: vmlinux.lds.S: explicitly catch .init.bss sections from EFI stub
* riscv: vmlinux.lds.S: explicitly catch .riscv.attributes sections
* riscv: vmlinux.lds.S: explicitly catch .rela.dyn symbols
* riscv: lds: define RUNTIME_DISCARD_EXIT
* RISC-V: move some stray __RISCV_INSN_FUNCS definitions from kprobes
* ...
This commit is contained in:
commit 01687e7c93
@@ -259,7 +259,7 @@ properties:
   capacity-dmips-mhz:
     description:
-      u32 value representing CPU capacity (see ./cpu-capacity.txt) in
+      u32 value representing CPU capacity (see ../cpu/cpu-capacity.txt) in
       DMIPS/MHz, relative to highest capacity-dmips-mhz
       in the system.
@@ -1,12 +1,12 @@
 ==========================================
-ARM CPUs capacity bindings
+CPU capacity bindings
 ==========================================

 ==========================================
 1 - Introduction
 ==========================================

-ARM systems may be configured to have cpus with different power/performance
+Some systems may be configured to have cpus with different power/performance
 characteristics within the same chip. In this case, additional information has
 to be made available to the kernel for it to be aware of such differences and
 take decisions accordingly.
@@ -114,6 +114,12 @@ properties:
         List of phandles to idle state nodes supported
         by this hart (see ./idle-states.yaml).

+  capacity-dmips-mhz:
+    description:
+      u32 value representing CPU capacity (see ../cpu/cpu-capacity.txt) in
+      DMIPS/MHz, relative to highest capacity-dmips-mhz
+      in the system.
+
   required:
     - riscv,isa
     - interrupt-controller
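The binding added above can be illustrated with a small devicetree fragment. This is a hypothetical example (hart layout and values invented for illustration; not part of the patch):

```dts
/* Two hart classes with different per-MHz throughput. The values are
 * relative: the fastest hart class conventionally gets 1024. */
cpus {
	#address-cells = <1>;
	#size-cells = <0>;

	cpu@0 {
		device_type = "cpu";
		compatible = "riscv";
		reg = <0>;
		capacity-dmips-mhz = <1024>;	/* highest-capacity hart */
	};

	cpu@1 {
		device_type = "cpu";
		compatible = "riscv";
		reg = <1>;
		capacity-dmips-mhz = <512>;	/* half the per-MHz throughput */
	};
};
```

The arch_topology driver scales these values by each CPU's maximum frequency to derive the scheduler's CPU capacity.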
@@ -3,4 +3,46 @@
 RISC-V Linux User ABI
 =====================

+ISA string ordering in /proc/cpuinfo
+------------------------------------
+
+The canonical order of ISA extension names in the ISA string is defined in
+chapter 27 of the unprivileged specification.
+The specification uses vague wording, such as should, when it comes to ordering,
+so for our purposes the following rules apply:
+
+#. Single-letter extensions come first, in canonical order.
+   The canonical order is "IMAFDQLCBKJTPVH".
+
+#. All multi-letter extensions will be separated from other extensions by an
+   underscore.
+
+#. Additional standard extensions (starting with 'Z') will be sorted after
+   single-letter extensions and before any higher-privileged extensions.
+
+#. For additional standard extensions, the first letter following the 'Z'
+   conventionally indicates the most closely related alphabetical
+   extension category. If multiple 'Z' extensions are named, they will be
+   ordered first by category, in canonical order, as listed above, then
+   alphabetically within a category.
+
+#. Standard supervisor-level extensions (starting with 'S') will be listed
+   after standard unprivileged extensions. If multiple supervisor-level
+   extensions are listed, they will be ordered alphabetically.
+
+#. Standard machine-level extensions (starting with 'Zxm') will be listed
+   after any lower-privileged, standard extensions. If multiple machine-level
+   extensions are listed, they will be ordered alphabetically.
+
+#. Non-standard extensions (starting with 'X') will be listed after all standard
+   extensions. If multiple non-standard extensions are listed, they will be
+   ordered alphabetically.
+
+An example string following the order is::
+
+   rv64imadc_zifoo_zigoo_zafoo_sbar_scar_zxmbaz_xqux_xrux
+
+Misaligned accesses
+-------------------
+
+Misaligned accesses are supported in userspace, but they may perform poorly.
@@ -260,7 +260,7 @@ for that purpose.

 The arm and arm64 architectures directly map this to the arch_topology driver
 CPU scaling data, which is derived from the capacity-dmips-mhz CPU binding; see
-Documentation/devicetree/bindings/arm/cpu-capacity.txt.
+Documentation/devicetree/bindings/cpu/cpu-capacity.txt.

 3.2 Frequency invariance
 ------------------------
@@ -233,7 +233,7 @@ CFS调度类基于实体负载跟踪机制(Per-Entity Load Tracking, PELT)

 arm和arm64架构直接把这个信息映射到arch_topology驱动的CPU scaling数据中(译注:参考
 arch_topology.h的percpu变量cpu_scale),它是从capacity-dmips-mhz CPU binding中衍生计算
-出来的。参见Documentation/devicetree/bindings/arm/cpu-capacity.txt。
+出来的。参见Documentation/devicetree/bindings/cpu/cpu-capacity.txt。

 3.2 频率不变性
 --------------
@@ -14,10 +14,11 @@ config RISCV
 	def_bool y
 	select ARCH_ENABLE_HUGEPAGE_MIGRATION if HUGETLB_PAGE && MIGRATION
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
+	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_BINFMT_FLAT
 	select ARCH_HAS_CURRENT_STACK_POINTER
-	select ARCH_HAS_DEBUG_VM_PGTABLE
 	select ARCH_HAS_DEBUG_VIRTUAL if MMU
+	select ARCH_HAS_DEBUG_VM_PGTABLE
 	select ARCH_HAS_DEBUG_WX
 	select ARCH_HAS_FORTIFY_SOURCE
 	select ARCH_HAS_GCOV_PROFILE_ALL
@@ -44,12 +45,14 @@ config RISCV
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
 	select ARCH_WANT_FRAME_POINTERS
 	select ARCH_WANT_GENERAL_HUGETLB
+	select ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
 	select ARCH_WANT_HUGE_PMD_SHARE if 64BIT
+	select ARCH_WANT_LD_ORPHAN_WARN if !XIP_KERNEL
 	select ARCH_WANTS_THP_SWAP if HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	select BINFMT_FLAT_NO_DATA_START_OFFSET if !MMU
 	select BUILDTIME_TABLE_SORT if MMU
-	select CLONE_BACKWARDS
 	select CLINT_TIMER if !MMU
+	select CLONE_BACKWARDS
 	select COMMON_CLK
 	select CPU_PM if CPU_IDLE
 	select EDAC_SUPPORT
@@ -84,16 +87,16 @@ config RISCV
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
+	select HAVE_ARCH_THREAD_STRUCT_WHITELIST
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE if 64BIT && MMU
-	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
-	select HAVE_ARCH_THREAD_STRUCT_WHITELIST
 	select HAVE_ARCH_VMAP_STACK if MMU && 64BIT
 	select HAVE_ASM_MODVERSIONS
 	select HAVE_CONTEXT_TRACKING_USER
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DMA_CONTIGUOUS if MMU
 	select HAVE_EBPF_JIT if MMU
+	select HAVE_FUNCTION_ARG_ACCESS_API
 	select HAVE_FUNCTION_ERROR_INJECTION
 	select HAVE_GCC_PLUGINS
 	select HAVE_GENERIC_VDSO if MMU && 64BIT
@@ -110,10 +113,9 @@ config RISCV
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_POSIX_CPU_TIMERS_TASK_WORK
 	select HAVE_REGS_AND_STACK_ACCESS_API
-	select HAVE_FUNCTION_ARG_ACCESS_API
+	select HAVE_RSEQ
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS
-	select HAVE_RSEQ
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
 	select MODULES_USE_ELF_RELA if MODULES
@@ -137,7 +139,7 @@ config RISCV
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE
 	select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
 	select HAVE_FUNCTION_GRAPH_TRACER
-	select HAVE_FUNCTION_TRACER if !XIP_KERNEL
+	select HAVE_FUNCTION_TRACER if !XIP_KERNEL && !PREEMPTION

 config ARCH_MMAP_RND_BITS_MIN
 	default 18 if 64BIT
@@ -234,9 +236,9 @@ config LOCKDEP_SUPPORT
 config RISCV_DMA_NONCOHERENT
 	bool
 	select ARCH_HAS_DMA_PREP_COHERENT
-	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
-	select ARCH_HAS_SYNC_DMA_FOR_CPU
 	select ARCH_HAS_SETUP_DMA_OPS
+	select ARCH_HAS_SYNC_DMA_FOR_CPU
+	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
 	select DMA_DIRECT_REMAP

 config AS_HAS_INSN
@@ -351,11 +353,11 @@ endchoice
 config NUMA
 	bool "NUMA Memory Allocation and Scheduler Support"
 	depends on SMP && MMU
-	select GENERIC_ARCH_NUMA
-	select OF_NUMA
 	select ARCH_SUPPORTS_NUMA_BALANCING
-	select USE_PERCPU_NUMA_NODE_ID
+	select GENERIC_ARCH_NUMA
+	select NEED_PER_CPU_EMBED_FIRST_CHUNK
+	select OF_NUMA
+	select USE_PERCPU_NUMA_NODE_ID
 	help
 	  Enable NUMA (Non-Uniform Memory Access) support.
@@ -400,8 +402,8 @@ config RISCV_ISA_SVPBMT
 	bool "SVPBMT extension support"
 	depends on 64BIT && MMU
 	depends on !XIP_KERNEL
-	select RISCV_ALTERNATIVE
 	default y
+	select RISCV_ALTERNATIVE
 	help
 	   Adds support to dynamically detect the presence of the SVPBMT
 	   ISA-extension (Supervisor-mode: page-based memory types) and
@@ -415,20 +417,36 @@ config RISCV_ISA_SVPBMT

 	   If you don't know what to do here, say Y.

-config TOOLCHAIN_HAS_ZICBOM
+config TOOLCHAIN_HAS_ZBB
 	bool
 	default y
-	depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64ima_zicbom)
-	depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zicbom)
-	depends on LLD_VERSION >= 150000 || LD_VERSION >= 23800
+	depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64ima_zbb)
+	depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zbb)
+	depends on LLD_VERSION >= 150000 || LD_VERSION >= 23900
 	depends on AS_IS_GNU

+config RISCV_ISA_ZBB
+	bool "Zbb extension support for bit manipulation instructions"
+	depends on TOOLCHAIN_HAS_ZBB
+	depends on !XIP_KERNEL && MMU
+	select RISCV_ALTERNATIVE
+	default y
+	help
+	   Adds support to dynamically detect the presence of the ZBB
+	   extension (basic bit manipulation) and enable its usage.
+
+	   The Zbb extension provides instructions to accelerate a number
+	   of bit-specific operations (count bit population, sign extending,
+	   bitrotation, etc).
+
+	   If you don't know what to do here, say Y.
+
 config RISCV_ISA_ZICBOM
 	bool "Zicbom extension support for non-coherent DMA operation"
-	depends on TOOLCHAIN_HAS_ZICBOM
 	depends on !XIP_KERNEL && MMU
-	select RISCV_DMA_NONCOHERENT
-	select RISCV_ALTERNATIVE
 	default y
+	select RISCV_ALTERNATIVE
+	select RISCV_DMA_NONCOHERENT
 	help
 	   Adds support to dynamically detect the presence of the ZICBOM
 	   extension (Cache Block Management Operations) and enable its
@@ -490,9 +508,9 @@ config RISCV_BOOT_SPINWAIT

 config KEXEC
 	bool "Kexec system call"
-	select KEXEC_CORE
-	select HOTPLUG_CPU if SMP
 	depends on MMU
+	select HOTPLUG_CPU if SMP
+	select KEXEC_CORE
 	help
 	  kexec is a system call that implements the ability to shutdown your
 	  current kernel, and to start another kernel. It is like a reboot
@@ -503,10 +521,10 @@ config KEXEC

 config KEXEC_FILE
 	bool "kexec file based systmem call"
+	depends on 64BIT && MMU
+	select HAVE_IMA_KEXEC if IMA
 	select KEXEC_CORE
 	select KEXEC_ELF
-	select HAVE_IMA_KEXEC if IMA
-	depends on 64BIT && MMU
 	help
 	  This is new version of kexec system call. This system call is
 	  file based and takes file descriptors as system call argument
@@ -595,15 +613,15 @@ config EFI_STUB
 config EFI
 	bool "UEFI runtime support"
 	depends on OF && !XIP_KERNEL
-	select LIBFDT
-	select UCS2_STRING
-	select EFI_PARAMS_FROM_FDT
-	select EFI_STUB
-	select EFI_GENERIC_STUB
-	select EFI_RUNTIME_WRAPPERS
-	select RISCV_ISA_C
 	depends on MMU
 	default y
+	select EFI_GENERIC_STUB
+	select EFI_PARAMS_FROM_FDT
+	select EFI_RUNTIME_WRAPPERS
+	select EFI_STUB
+	select LIBFDT
+	select RISCV_ISA_C
+	select UCS2_STRING
 	help
 	  This option provides support for runtime services provided
 	  by UEFI firmware (such as non-volatile variables, realtime
@@ -682,8 +700,8 @@ config PORTABLE
 	bool
 	default !NONPORTABLE
 	select EFI
-	select OF
 	select MMU
+	select OF

 menu "Power management options"
@@ -43,7 +43,7 @@ config ARCH_SUNXI

+config ARCH_VIRT
+	def_bool SOC_VIRT
+
 config SOC_VIRT
 	bool "QEMU Virt Machine"
 	select CLINT_TIMER if RISCV_M_MODE
@@ -88,7 +88,8 @@ config SOC_CANAAN_K210_DTB_BUILTIN
 	  If unsure, say Y.

 config ARCH_CANAAN_K210_DTB_SOURCE
-	def_bool SOC_CANAAN_K210_DTB_SOURCE
+	string
+	default SOC_CANAAN_K210_DTB_SOURCE

 config SOC_CANAAN_K210_DTB_SOURCE
 	string "Source file for the Canaan Kendryte K210 builtin DTB"
@@ -11,7 +11,11 @@ LDFLAGS_vmlinux :=
 ifeq ($(CONFIG_DYNAMIC_FTRACE),y)
 	LDFLAGS_vmlinux := --no-relax
 	KBUILD_CPPFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY
-	CC_FLAGS_FTRACE := -fpatchable-function-entry=8
+ifeq ($(CONFIG_RISCV_ISA_C),y)
+	CC_FLAGS_FTRACE := -fpatchable-function-entry=4
+else
+	CC_FLAGS_FTRACE := -fpatchable-function-entry=2
+endif
 endif

 ifeq ($(CONFIG_CMODEL_MEDLOW),y)
@@ -58,9 +62,6 @@ riscv-march-$(CONFIG_RISCV_ISA_C) := $(riscv-march-y)c
 toolchain-need-zicsr-zifencei := $(call cc-option-yn, -march=$(riscv-march-y)_zicsr_zifencei)
 riscv-march-$(toolchain-need-zicsr-zifencei) := $(riscv-march-y)_zicsr_zifencei

-# Check if the toolchain supports Zicbom extension
-riscv-march-$(CONFIG_TOOLCHAIN_HAS_ZICBOM) := $(riscv-march-y)_zicbom
-
 # Check if the toolchain supports Zihintpause extension
 riscv-march-$(CONFIG_TOOLCHAIN_HAS_ZIHINTPAUSE) := $(riscv-march-y)_zihintpause
@@ -4,6 +4,7 @@
  */

 #include <linux/kernel.h>
+#include <linux/memory.h>
 #include <linux/module.h>
 #include <linux/string.h>
 #include <linux/bug.h>
@@ -107,7 +108,10 @@ void __init_or_module sifive_errata_patch_func(struct alt_entry *begin,

 		tmp = (1U << alt->errata_id);
 		if (cpu_req_errata & tmp) {
-			patch_text_nosync(alt->old_ptr, alt->alt_ptr, alt->alt_len);
+			mutex_lock(&text_mutex);
+			patch_text_nosync(ALT_OLD_PTR(alt), ALT_ALT_PTR(alt),
+					  alt->alt_len);
+			mutex_unlock(&text_mutex);
 			cpu_apply_errata |= tmp;
 		}
 	}
@@ -5,6 +5,7 @@

 #include <linux/bug.h>
 #include <linux/kernel.h>
+#include <linux/memory.h>
 #include <linux/module.h>
 #include <linux/string.h>
 #include <linux/uaccess.h>
@@ -87,6 +88,7 @@ void __init_or_module thead_errata_patch_func(struct alt_entry *begin, struct al
 	struct alt_entry *alt;
 	u32 cpu_req_errata = thead_errata_probe(stage, archid, impid);
 	u32 tmp;
+	void *oldptr, *altptr;

 	for (alt = begin; alt < end; alt++) {
 		if (alt->vendor_id != THEAD_VENDOR_ID)
@@ -96,12 +98,17 @@ void __init_or_module thead_errata_patch_func(struct alt_entry *begin, struct al

 		tmp = (1U << alt->errata_id);
 		if (cpu_req_errata & tmp) {
+			oldptr = ALT_OLD_PTR(alt);
+			altptr = ALT_ALT_PTR(alt);
+
 			/* On vm-alternatives, the mmu isn't running yet */
-			if (stage == RISCV_ALTERNATIVES_EARLY_BOOT)
-				memcpy((void *)__pa_symbol(alt->old_ptr),
-				       (void *)__pa_symbol(alt->alt_ptr), alt->alt_len);
-			else
-				patch_text_nosync(alt->old_ptr, alt->alt_ptr, alt->alt_len);
+			if (stage == RISCV_ALTERNATIVES_EARLY_BOOT) {
+				memcpy(oldptr, altptr, alt->alt_len);
+			} else {
+				mutex_lock(&text_mutex);
+				patch_text_nosync(oldptr, altptr, alt->alt_len);
+				mutex_unlock(&text_mutex);
+			}
 		}
 	}
@@ -7,11 +7,11 @@
 #ifdef __ASSEMBLY__

 .macro ALT_ENTRY oldptr newptr vendor_id errata_id new_len
-	RISCV_PTR \oldptr
-	RISCV_PTR \newptr
-	REG_ASM \vendor_id
-	REG_ASM \new_len
-	.word \errata_id
+	.4byte \oldptr - .
+	.4byte \newptr - .
+	.2byte \vendor_id
+	.2byte \new_len
+	.4byte \errata_id
 .endm

 .macro ALT_NEW_CONTENT vendor_id, errata_id, enable = 1, new_c : vararg
@@ -59,11 +59,11 @@
 #include <linux/stringify.h>

 #define ALT_ENTRY(oldptr, newptr, vendor_id, errata_id, newlen)	\
-	RISCV_PTR " " oldptr "\n"				\
-	RISCV_PTR " " newptr "\n"				\
-	REG_ASM " " vendor_id "\n"				\
-	REG_ASM " " newlen "\n"					\
-	".word " errata_id "\n"
+	".4byte ((" oldptr ") - .) \n"				\
+	".4byte ((" newptr ") - .) \n"				\
+	".2byte " vendor_id "\n"				\
+	".2byte " newlen "\n"					\
+	".4byte " errata_id "\n"

 #define ALT_NEW_CONTENT(vendor_id, errata_id, enable, new_c)	\
 	".if " __stringify(enable) " == 1\n"			\
@@ -23,17 +23,25 @@
 #define RISCV_ALTERNATIVES_MODULE	1 /* alternatives applied during module-init */
 #define RISCV_ALTERNATIVES_EARLY_BOOT	2 /* alternatives applied before mmu start */

+/* add the relative offset to the address of the offset to get the absolute address */
+#define __ALT_PTR(a, f)		((void *)&(a)->f + (a)->f)
+#define ALT_OLD_PTR(a)		__ALT_PTR(a, old_offset)
+#define ALT_ALT_PTR(a)		__ALT_PTR(a, alt_offset)
+
 void __init apply_boot_alternatives(void);
 void __init apply_early_boot_alternatives(void);
 void apply_module_alternatives(void *start, size_t length);

+void riscv_alternative_fix_offsets(void *alt_ptr, unsigned int len,
+				   int patch_offset);
+
 struct alt_entry {
-	void *old_ptr;		 /* address of original instruciton or data */
-	void *alt_ptr;		 /* address of replacement instruction or data */
-	unsigned long vendor_id; /* cpu vendor id */
-	unsigned long alt_len;   /* The replacement size */
-	unsigned int errata_id;  /* The errata id */
-} __packed;
+	s32 old_offset;		/* offset relative to original instruction or data */
+	s32 alt_offset;		/* offset relative to replacement instruction or data */
+	u16 vendor_id;		/* cpu vendor id */
+	u16 alt_len;		/* The replacement size */
+	u32 errata_id;		/* The errata id */
+};

 struct errata_checkfunc_id {
 	unsigned long vendor_id;
@@ -14,6 +14,7 @@
 #include <asm/auxvec.h>
 #include <asm/byteorder.h>
 #include <asm/cacheinfo.h>
+#include <asm/hwcap.h>

 /*
  * These are used to set parameters in the core dumps.
@@ -59,12 +60,13 @@ extern bool compat_elf_check_arch(Elf32_Ehdr *hdr);
 #define STACK_RND_MASK		(0x3ffff >> (PAGE_SHIFT - 12))
 #endif
 #endif

 /*
- * This yields a mask that user programs can use to figure out what
- * instruction set this CPU supports. This could be done in user space,
- * but it's not easy, and we've already done it here.
+ * Provides information on the availiable set of ISA extensions to userspace,
+ * via a bitmap that coorespends to each single-letter ISA extension. This is
+ * essentially defunct, but will remain for compatibility with userspace.
  */
-#define ELF_HWCAP	(elf_hwcap)
+#define ELF_HWCAP	(elf_hwcap & ((1UL << RISCV_ISA_EXT_BASE) - 1))
 extern unsigned long elf_hwcap;

 /*
@@ -7,6 +7,8 @@

 #include <asm/alternative.h>
 #include <asm/csr.h>
+#include <asm/insn-def.h>
+#include <asm/hwcap.h>
 #include <asm/vendorid_list.h>

 #ifdef CONFIG_ERRATA_SIFIVE
@@ -22,10 +24,6 @@
 #define ERRATA_THEAD_NUMBER 3
 #endif

-#define CPUFEATURE_SVPBMT 0
-#define CPUFEATURE_ZICBOM 1
-#define CPUFEATURE_NUMBER 2
-
 #ifdef __ASSEMBLY__

 #define ALT_INSN_FAULT(x)						\
@@ -55,7 +53,7 @@ asm(ALTERNATIVE("sfence.vma %0", "sfence.vma", SIFIVE_VENDOR_ID,	\
 #define ALT_SVPBMT(_val, prot)						\
 	asm(ALTERNATIVE_2("li %0, 0\t\nnop",				\
 			  "li %0, %1\t\nslli %0,%0,%3", 0,		\
-			  CPUFEATURE_SVPBMT, CONFIG_RISCV_ISA_SVPBMT,	\
+			  RISCV_ISA_EXT_SVPBMT, CONFIG_RISCV_ISA_SVPBMT,\
			  "li %0, %2\t\nslli %0,%0,%4", THEAD_VENDOR_ID,\
			  ERRATA_THEAD_PBMT, CONFIG_ERRATA_THEAD_PBMT)	\
		: "=r"(_val)						\
@@ -125,11 +123,11 @@ asm volatile(ALTERNATIVE_2(					\
	"mv a0, %1\n\t"							\
	"j 2f\n\t"							\
	"3:\n\t"							\
-	"cbo." __stringify(_op) " (a0)\n\t"				\
+	CBO_##_op(a0)							\
	"add a0, a0, %0\n\t"						\
	"2:\n\t"							\
	"bltu a0, %2, 3b\n\t"						\
-	"nop", 0, CPUFEATURE_ZICBOM, CONFIG_RISCV_ISA_ZICBOM,		\
+	"nop", 0, RISCV_ISA_EXT_ZICBOM, CONFIG_RISCV_ISA_ZICBOM,	\
	"mv a0, %1\n\t"							\
	"j 2f\n\t"							\
	"3:\n\t"							\
@@ -42,6 +42,14 @@ struct dyn_arch_ftrace {
  * 2) jalr: setting low-12 offset to ra, jump to ra, and set ra to
  *          return address (original pc + 4)
  *
+ *<ftrace enable>:
+ * 0: auipc  t0/ra, 0x?
+ * 4: jalr   t0/ra, ?(t0/ra)
+ *
+ *<ftrace disable>:
+ * 0: nop
+ * 4: nop
+ *
  * Dynamic ftrace generates probes to call sites, so we must deal with
  * both auipc and jalr at the same time.
  */
@@ -52,25 +60,43 @@ struct dyn_arch_ftrace {
 #define AUIPC_OFFSET_MASK	(0xfffff000)
 #define AUIPC_PAD		(0x00001000)
 #define JALR_SHIFT		20
-#define JALR_BASIC		(0x000080e7)
-#define AUIPC_BASIC		(0x00000097)
+#define JALR_RA			(0x000080e7)
+#define AUIPC_RA		(0x00000097)
+#define JALR_T0			(0x000282e7)
+#define AUIPC_T0		(0x00000297)
 #define NOP4			(0x00000013)

-#define make_call(caller, callee, call)					\
+#define to_jalr_t0(offset)						\
+	(((offset & JALR_OFFSET_MASK) << JALR_SHIFT) | JALR_T0)
+
+#define to_auipc_t0(offset)						\
+	((offset & JALR_SIGN_MASK) ?					\
+	(((offset & AUIPC_OFFSET_MASK) + AUIPC_PAD) | AUIPC_T0) :	\
+	((offset & AUIPC_OFFSET_MASK) | AUIPC_T0))
+
+#define make_call_t0(caller, callee, call)				\
 do {									\
-	call[0] = to_auipc_insn((unsigned int)((unsigned long)callee -	\
-				(unsigned long)caller));		\
-	call[1] = to_jalr_insn((unsigned int)((unsigned long)callee -	\
-			       (unsigned long)caller));			\
+	unsigned int offset =						\
+		(unsigned long) callee - (unsigned long) caller;	\
+	call[0] = to_auipc_t0(offset);					\
+	call[1] = to_jalr_t0(offset);					\
 } while (0)

-#define to_jalr_insn(offset)						\
-	(((offset & JALR_OFFSET_MASK) << JALR_SHIFT) | JALR_BASIC)
+#define to_jalr_ra(offset)						\
+	(((offset & JALR_OFFSET_MASK) << JALR_SHIFT) | JALR_RA)

-#define to_auipc_insn(offset)						\
+#define to_auipc_ra(offset)						\
	((offset & JALR_SIGN_MASK) ?					\
-	(((offset & AUIPC_OFFSET_MASK) + AUIPC_PAD) | AUIPC_BASIC) :	\
-	((offset & AUIPC_OFFSET_MASK) | AUIPC_BASIC))
+	(((offset & AUIPC_OFFSET_MASK) + AUIPC_PAD) | AUIPC_RA) :	\
+	((offset & AUIPC_OFFSET_MASK) | AUIPC_RA))
+
+#define make_call_ra(caller, callee, call)				\
+do {									\
+	unsigned int offset =						\
+		(unsigned long) callee - (unsigned long) caller;	\
+	call[0] = to_auipc_ra(offset);					\
+	call[1] = to_jalr_ra(offset);					\
+} while (0)

 /*
  * Let auipc+jalr be the basic *mcount unit*, so we make it 8 bytes here.
@@ -8,24 +8,11 @@
 #ifndef _ASM_RISCV_HWCAP_H
 #define _ASM_RISCV_HWCAP_H

+#include <asm/alternative-macros.h>
+#include <asm/errno.h>
+#include <linux/bits.h>
 #include <uapi/asm/hwcap.h>

-#ifndef __ASSEMBLY__
-#include <linux/jump_label.h>
-/*
- * This yields a mask that user programs can use to figure out what
- * instruction set this cpu supports.
- */
-#define ELF_HWCAP		(elf_hwcap)
-
-enum {
-	CAP_HWCAP = 1,
-};
-
-extern unsigned long elf_hwcap;
-
 #define RISCV_ISA_EXT_a		('a' - 'a')
 #define RISCV_ISA_EXT_c		('c' - 'a')
 #define RISCV_ISA_EXT_d		('d' - 'a')
@@ -37,42 +24,31 @@ extern unsigned long elf_hwcap;
 #define RISCV_ISA_EXT_u		('u' - 'a')

 /*
- * Increse this to higher value as kernel support more ISA extensions.
+ * These macros represent the logical IDs of each multi-letter RISC-V ISA
+ * extension and are used in the ISA bitmap. The logical IDs start from
+ * RISCV_ISA_EXT_BASE, which allows the 0-25 range to be reserved for single
+ * letter extensions. The maximum, RISCV_ISA_EXT_MAX, is defined in order
+ * to allocate the bitmap and may be increased when necessary.
+ *
+ * New extensions should just be added to the bottom, rather than added
+ * alphabetically, in order to avoid unnecessary shuffling.
  */
-#define RISCV_ISA_EXT_MAX	64
-#define RISCV_ISA_EXT_NAME_LEN_MAX 32
+#define RISCV_ISA_EXT_BASE		26

-/* The base ID for multi-letter ISA extensions */
-#define RISCV_ISA_EXT_BASE 26
+#define RISCV_ISA_EXT_SSCOFPMF		26
+#define RISCV_ISA_EXT_SSTC		27
+#define RISCV_ISA_EXT_SVINVAL		28
+#define RISCV_ISA_EXT_SVPBMT		29
+#define RISCV_ISA_EXT_ZBB		30
+#define RISCV_ISA_EXT_ZICBOM		31
+#define RISCV_ISA_EXT_ZIHINTPAUSE	32

-/*
- * This enum represent the logical ID for each multi-letter RISC-V ISA extension.
- * The logical ID should start from RISCV_ISA_EXT_BASE and must not exceed
- * RISCV_ISA_EXT_MAX. 0-25 range is reserved for single letter
- * extensions while all the multi-letter extensions should define the next
- * available logical extension id.
- */
-enum riscv_isa_ext_id {
-	RISCV_ISA_EXT_SSCOFPMF = RISCV_ISA_EXT_BASE,
-	RISCV_ISA_EXT_SVPBMT,
-	RISCV_ISA_EXT_ZICBOM,
-	RISCV_ISA_EXT_ZIHINTPAUSE,
-	RISCV_ISA_EXT_SSTC,
-	RISCV_ISA_EXT_SVINVAL,
-	RISCV_ISA_EXT_ID_MAX
-};
-static_assert(RISCV_ISA_EXT_ID_MAX <= RISCV_ISA_EXT_MAX);
+#define RISCV_ISA_EXT_MAX		64
+#define RISCV_ISA_EXT_NAME_LEN_MAX	32

-/*
- * This enum represents the logical ID for each RISC-V ISA extension static
- * keys. We can use static key to optimize code path if some ISA extensions
- * are available.
- */
-enum riscv_isa_ext_key {
-	RISCV_ISA_EXT_KEY_FPU,		/* For 'F' and 'D' */
-	RISCV_ISA_EXT_KEY_SVINVAL,
-	RISCV_ISA_EXT_KEY_MAX,
-};
+#ifndef __ASSEMBLY__
+
+#include <linux/jump_label.h>

 struct riscv_isa_ext_data {
 	/* Name of the extension displayed to userspace via /proc/cpuinfo */
@@ -81,20 +57,40 @@ struct riscv_isa_ext_data {
 	unsigned int isa_ext_id;
 };

-extern struct static_key_false riscv_isa_ext_keys[RISCV_ISA_EXT_KEY_MAX];
-
-static __always_inline int riscv_isa_ext2key(int num)
+static __always_inline bool
+riscv_has_extension_likely(const unsigned long ext)
 {
-	switch (num) {
-	case RISCV_ISA_EXT_f:
-		return RISCV_ISA_EXT_KEY_FPU;
-	case RISCV_ISA_EXT_d:
-		return RISCV_ISA_EXT_KEY_FPU;
-	case RISCV_ISA_EXT_SVINVAL:
-		return RISCV_ISA_EXT_KEY_SVINVAL;
-	default:
-		return -EINVAL;
-	}
+	compiletime_assert(ext < RISCV_ISA_EXT_MAX,
+			   "ext must be < RISCV_ISA_EXT_MAX");
+
+	asm_volatile_goto(
+	ALTERNATIVE("j	%l[l_no]", "nop", 0, %[ext], 1)
+	:
+	: [ext] "i" (ext)
+	:
+	: l_no);
+
+	return true;
+l_no:
+	return false;
+}
+
+static __always_inline bool
+riscv_has_extension_unlikely(const unsigned long ext)
+{
+	compiletime_assert(ext < RISCV_ISA_EXT_MAX,
+			   "ext must be < RISCV_ISA_EXT_MAX");
+
+	asm_volatile_goto(
+	ALTERNATIVE("nop", "j	%l[l_yes]", 0, %[ext], 1)
+	:
+	: [ext] "i" (ext)
+	:
+	: l_yes);
+
+	return false;
+l_yes:
+	return true;
 }

 unsigned long riscv_isa_extension_base(const unsigned long *isa_bitmap);
@@ -12,6 +12,12 @@
 #define INSN_R_RD_SHIFT		 7
 #define INSN_R_OPCODE_SHIFT	 0

+#define INSN_I_SIMM12_SHIFT	20
+#define INSN_I_RS1_SHIFT	15
+#define INSN_I_FUNC3_SHIFT	12
+#define INSN_I_RD_SHIFT		 7
+#define INSN_I_OPCODE_SHIFT	 0
+
 #ifdef __ASSEMBLY__

 #ifdef CONFIG_AS_HAS_INSN
@@ -20,6 +26,10 @@
	.insn r \opcode, \func3, \func7, \rd, \rs1, \rs2
	.endm

+	.macro insn_i, opcode, func3, rd, rs1, simm12
+	.insn i \opcode, \func3, \rd, \rs1, \simm12
+	.endm
+
 #else

 #include <asm/gpr-num.h>
@ -33,9 +43,18 @@
|
|||
(.L__gpr_num_\rs2 << INSN_R_RS2_SHIFT))
|
||||
.endm
|
||||
|
||||
.macro insn_i, opcode, func3, rd, rs1, simm12
|
||||
.4byte ((\opcode << INSN_I_OPCODE_SHIFT) | \
|
||||
(\func3 << INSN_I_FUNC3_SHIFT) | \
|
||||
(.L__gpr_num_\rd << INSN_I_RD_SHIFT) | \
|
||||
(.L__gpr_num_\rs1 << INSN_I_RS1_SHIFT) | \
|
||||
(\simm12 << INSN_I_SIMM12_SHIFT))
|
||||
.endm
|
||||
|
||||
#endif
|
||||
|
||||
#define __INSN_R(...) insn_r __VA_ARGS__
|
||||
#define __INSN_I(...) insn_i __VA_ARGS__
|
||||
|
||||
#else /* ! __ASSEMBLY__ */
|
||||
|
||||
|
@ -44,6 +63,9 @@
|
|||
#define __INSN_R(opcode, func3, func7, rd, rs1, rs2) \
|
||||
".insn r " opcode ", " func3 ", " func7 ", " rd ", " rs1 ", " rs2 "\n"
|
||||
|
||||
#define __INSN_I(opcode, func3, rd, rs1, simm12) \
|
||||
".insn i " opcode ", " func3 ", " rd ", " rs1 ", " simm12 "\n"
|
||||
|
||||
#else
|
||||
|
||||
#include <linux/stringify.h>
|
||||
|
@ -60,14 +82,32 @@
|
|||
" (.L__gpr_num_\\rs2 << " __stringify(INSN_R_RS2_SHIFT) "))\n" \
|
||||
" .endm\n"
|
||||
|
||||
#define DEFINE_INSN_I \
|
||||
__DEFINE_ASM_GPR_NUMS \
|
||||
" .macro insn_i, opcode, func3, rd, rs1, simm12\n" \
|
||||
" .4byte ((\\opcode << " __stringify(INSN_I_OPCODE_SHIFT) ") |" \
|
||||
" (\\func3 << " __stringify(INSN_I_FUNC3_SHIFT) ") |" \
|
||||
" (.L__gpr_num_\\rd << " __stringify(INSN_I_RD_SHIFT) ") |" \
|
||||
" (.L__gpr_num_\\rs1 << " __stringify(INSN_I_RS1_SHIFT) ") |" \
|
||||
" (\\simm12 << " __stringify(INSN_I_SIMM12_SHIFT) "))\n" \
|
||||
" .endm\n"
|
||||
|
||||
#define UNDEFINE_INSN_R \
|
||||
" .purgem insn_r\n"
|
||||
|
||||
#define UNDEFINE_INSN_I \
|
||||
" .purgem insn_i\n"
|
||||
|
||||
#define __INSN_R(opcode, func3, func7, rd, rs1, rs2) \
|
||||
DEFINE_INSN_R \
|
||||
"insn_r " opcode ", " func3 ", " func7 ", " rd ", " rs1 ", " rs2 "\n" \
|
||||
UNDEFINE_INSN_R
|
||||
|
||||
#define __INSN_I(opcode, func3, rd, rs1, simm12) \
|
||||
DEFINE_INSN_I \
|
||||
"insn_i " opcode ", " func3 ", " rd ", " rs1 ", " simm12 "\n" \
|
||||
UNDEFINE_INSN_I
|
||||
|
||||
#endif
|
||||
|
||||
#endif /* ! __ASSEMBLY__ */
|
||||
|
@ -76,9 +116,14 @@
|
|||
__INSN_R(RV_##opcode, RV_##func3, RV_##func7, \
|
||||
RV_##rd, RV_##rs1, RV_##rs2)
|
||||
|
||||
#define INSN_I(opcode, func3, rd, rs1, simm12) \
|
||||
__INSN_I(RV_##opcode, RV_##func3, RV_##rd, \
|
||||
RV_##rs1, RV_##simm12)
|
||||
|
||||
#define RV_OPCODE(v) __ASM_STR(v)
|
||||
#define RV_FUNC3(v) __ASM_STR(v)
|
||||
#define RV_FUNC7(v) __ASM_STR(v)
|
||||
#define RV_SIMM12(v) __ASM_STR(v)
|
||||
#define RV_RD(v) __ASM_STR(v)
|
||||
#define RV_RS1(v) __ASM_STR(v)
|
||||
#define RV_RS2(v) __ASM_STR(v)
|
||||
|
@ -87,6 +132,7 @@
|
|||
#define RV___RS1(v) __RV_REG(v)
|
||||
#define RV___RS2(v) __RV_REG(v)
|
||||
|
||||
#define RV_OPCODE_MISC_MEM RV_OPCODE(15)
|
||||
#define RV_OPCODE_SYSTEM RV_OPCODE(115)
|
||||
|
||||
#define HFENCE_VVMA(vaddr, asid) \
|
||||
|
@ -134,4 +180,16 @@
|
|||
INSN_R(OPCODE_SYSTEM, FUNC3(0), FUNC7(51), \
|
||||
__RD(0), RS1(gaddr), RS2(vmid))
|
||||
|
||||
#define CBO_inval(base) \
|
||||
INSN_I(OPCODE_MISC_MEM, FUNC3(2), __RD(0), \
|
||||
RS1(base), SIMM12(0))
|
||||
|
||||
#define CBO_clean(base) \
|
||||
INSN_I(OPCODE_MISC_MEM, FUNC3(2), __RD(0), \
|
||||
RS1(base), SIMM12(1))
|
||||
|
||||
#define CBO_flush(base) \
|
||||
INSN_I(OPCODE_MISC_MEM, FUNC3(2), __RD(0), \
|
||||
RS1(base), SIMM12(2))
|
||||
|
||||
#endif /* __ASM_INSN_DEF_H */
|
||||
|
|
|
@@ -0,0 +1,381 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * Copyright (C) 2020 SiFive
 */

#ifndef _ASM_RISCV_INSN_H
#define _ASM_RISCV_INSN_H

#include <linux/bits.h>

#define RV_INSN_FUNCT3_MASK	GENMASK(14, 12)
#define RV_INSN_FUNCT3_OPOFF	12
#define RV_INSN_OPCODE_MASK	GENMASK(6, 0)
#define RV_INSN_OPCODE_OPOFF	0
#define RV_INSN_FUNCT12_OPOFF	20

#define RV_ENCODE_FUNCT3(f_)	(RVG_FUNCT3_##f_ << RV_INSN_FUNCT3_OPOFF)
#define RV_ENCODE_FUNCT12(f_)	(RVG_FUNCT12_##f_ << RV_INSN_FUNCT12_OPOFF)

/* The bit field of immediate value in I-type instruction */
#define RV_I_IMM_SIGN_OPOFF	31
#define RV_I_IMM_11_0_OPOFF	20
#define RV_I_IMM_SIGN_OFF	12
#define RV_I_IMM_11_0_OFF	0
#define RV_I_IMM_11_0_MASK	GENMASK(11, 0)

/* The bit field of immediate value in J-type instruction */
#define RV_J_IMM_SIGN_OPOFF	31
#define RV_J_IMM_10_1_OPOFF	21
#define RV_J_IMM_11_OPOFF	20
#define RV_J_IMM_19_12_OPOFF	12
#define RV_J_IMM_SIGN_OFF	20
#define RV_J_IMM_10_1_OFF	1
#define RV_J_IMM_11_OFF		11
#define RV_J_IMM_19_12_OFF	12
#define RV_J_IMM_10_1_MASK	GENMASK(9, 0)
#define RV_J_IMM_11_MASK	GENMASK(0, 0)
#define RV_J_IMM_19_12_MASK	GENMASK(7, 0)

/*
 * U-type IMMs contain the upper 20bits [31:12] of an immediate with
 * the rest filled in by zeros, so no shifting required. Similarly,
 * bit31 contains the signed state, so no sign extension necessary.
 */
#define RV_U_IMM_SIGN_OPOFF	31
#define RV_U_IMM_31_12_OPOFF	0
#define RV_U_IMM_31_12_MASK	GENMASK(31, 12)

/* The bit field of immediate value in B-type instruction */
#define RV_B_IMM_SIGN_OPOFF	31
#define RV_B_IMM_10_5_OPOFF	25
#define RV_B_IMM_4_1_OPOFF	8
#define RV_B_IMM_11_OPOFF	7
#define RV_B_IMM_SIGN_OFF	12
#define RV_B_IMM_10_5_OFF	5
#define RV_B_IMM_4_1_OFF	1
#define RV_B_IMM_11_OFF		11
#define RV_B_IMM_10_5_MASK	GENMASK(5, 0)
#define RV_B_IMM_4_1_MASK	GENMASK(3, 0)
#define RV_B_IMM_11_MASK	GENMASK(0, 0)

/* The register offset in RVG instruction */
#define RVG_RS1_OPOFF		15
#define RVG_RS2_OPOFF		20
#define RVG_RD_OPOFF		7
#define RVG_RD_MASK		GENMASK(4, 0)

/* The bit field of immediate value in RVC J instruction */
#define RVC_J_IMM_SIGN_OPOFF	12
#define RVC_J_IMM_4_OPOFF	11
#define RVC_J_IMM_9_8_OPOFF	9
#define RVC_J_IMM_10_OPOFF	8
#define RVC_J_IMM_6_OPOFF	7
#define RVC_J_IMM_7_OPOFF	6
#define RVC_J_IMM_3_1_OPOFF	3
#define RVC_J_IMM_5_OPOFF	2
#define RVC_J_IMM_SIGN_OFF	11
#define RVC_J_IMM_4_OFF		4
#define RVC_J_IMM_9_8_OFF	8
#define RVC_J_IMM_10_OFF	10
#define RVC_J_IMM_6_OFF		6
#define RVC_J_IMM_7_OFF		7
#define RVC_J_IMM_3_1_OFF	1
#define RVC_J_IMM_5_OFF		5
#define RVC_J_IMM_4_MASK	GENMASK(0, 0)
#define RVC_J_IMM_9_8_MASK	GENMASK(1, 0)
#define RVC_J_IMM_10_MASK	GENMASK(0, 0)
#define RVC_J_IMM_6_MASK	GENMASK(0, 0)
#define RVC_J_IMM_7_MASK	GENMASK(0, 0)
#define RVC_J_IMM_3_1_MASK	GENMASK(2, 0)
#define RVC_J_IMM_5_MASK	GENMASK(0, 0)

/* The bit field of immediate value in RVC B instruction */
#define RVC_B_IMM_SIGN_OPOFF	12
#define RVC_B_IMM_4_3_OPOFF	10
#define RVC_B_IMM_7_6_OPOFF	5
#define RVC_B_IMM_2_1_OPOFF	3
#define RVC_B_IMM_5_OPOFF	2
#define RVC_B_IMM_SIGN_OFF	8
#define RVC_B_IMM_4_3_OFF	3
#define RVC_B_IMM_7_6_OFF	6
#define RVC_B_IMM_2_1_OFF	1
#define RVC_B_IMM_5_OFF		5
#define RVC_B_IMM_4_3_MASK	GENMASK(1, 0)
#define RVC_B_IMM_7_6_MASK	GENMASK(1, 0)
#define RVC_B_IMM_2_1_MASK	GENMASK(1, 0)
#define RVC_B_IMM_5_MASK	GENMASK(0, 0)

#define RVC_INSN_FUNCT4_MASK	GENMASK(15, 12)
#define RVC_INSN_FUNCT4_OPOFF	12
#define RVC_INSN_FUNCT3_MASK	GENMASK(15, 13)
#define RVC_INSN_FUNCT3_OPOFF	13
#define RVC_INSN_J_RS2_MASK	GENMASK(6, 2)
#define RVC_INSN_OPCODE_MASK	GENMASK(1, 0)
#define RVC_ENCODE_FUNCT3(f_)	(RVC_FUNCT3_##f_ << RVC_INSN_FUNCT3_OPOFF)
#define RVC_ENCODE_FUNCT4(f_)	(RVC_FUNCT4_##f_ << RVC_INSN_FUNCT4_OPOFF)

/* The register offset in RVC op=C0 instruction */
#define RVC_C0_RS1_OPOFF	7
#define RVC_C0_RS2_OPOFF	2
#define RVC_C0_RD_OPOFF		2

/* The register offset in RVC op=C1 instruction */
#define RVC_C1_RS1_OPOFF	7
#define RVC_C1_RS2_OPOFF	2
#define RVC_C1_RD_OPOFF		7

/* The register offset in RVC op=C2 instruction */
#define RVC_C2_RS1_OPOFF	7
#define RVC_C2_RS2_OPOFF	2
#define RVC_C2_RD_OPOFF		7

/* parts of opcode for RVG */
#define RVG_OPCODE_FENCE	0x0f
#define RVG_OPCODE_AUIPC	0x17
#define RVG_OPCODE_BRANCH	0x63
#define RVG_OPCODE_JALR		0x67
#define RVG_OPCODE_JAL		0x6f
#define RVG_OPCODE_SYSTEM	0x73

/* parts of opcode for RVC */
#define RVC_OPCODE_C0		0x0
#define RVC_OPCODE_C1		0x1
#define RVC_OPCODE_C2		0x2

/* parts of funct3 code for I, M, A extension */
#define RVG_FUNCT3_JALR		0x0
#define RVG_FUNCT3_BEQ		0x0
#define RVG_FUNCT3_BNE		0x1
#define RVG_FUNCT3_BLT		0x4
#define RVG_FUNCT3_BGE		0x5
#define RVG_FUNCT3_BLTU		0x6
#define RVG_FUNCT3_BGEU		0x7

/* parts of funct3 code for C extension */
#define RVC_FUNCT3_C_BEQZ	0x6
#define RVC_FUNCT3_C_BNEZ	0x7
#define RVC_FUNCT3_C_J		0x5
#define RVC_FUNCT3_C_JAL	0x1
#define RVC_FUNCT4_C_JR		0x8
#define RVC_FUNCT4_C_JALR	0x9
#define RVC_FUNCT4_C_EBREAK	0x9

#define RVG_FUNCT12_EBREAK	0x1
#define RVG_FUNCT12_SRET	0x102

#define RVG_MATCH_AUIPC		(RVG_OPCODE_AUIPC)
#define RVG_MATCH_JALR		(RV_ENCODE_FUNCT3(JALR) | RVG_OPCODE_JALR)
#define RVG_MATCH_JAL		(RVG_OPCODE_JAL)
#define RVG_MATCH_FENCE		(RVG_OPCODE_FENCE)
#define RVG_MATCH_BEQ		(RV_ENCODE_FUNCT3(BEQ) | RVG_OPCODE_BRANCH)
#define RVG_MATCH_BNE		(RV_ENCODE_FUNCT3(BNE) | RVG_OPCODE_BRANCH)
#define RVG_MATCH_BLT		(RV_ENCODE_FUNCT3(BLT) | RVG_OPCODE_BRANCH)
#define RVG_MATCH_BGE		(RV_ENCODE_FUNCT3(BGE) | RVG_OPCODE_BRANCH)
#define RVG_MATCH_BLTU		(RV_ENCODE_FUNCT3(BLTU) | RVG_OPCODE_BRANCH)
#define RVG_MATCH_BGEU		(RV_ENCODE_FUNCT3(BGEU) | RVG_OPCODE_BRANCH)
#define RVG_MATCH_EBREAK	(RV_ENCODE_FUNCT12(EBREAK) | RVG_OPCODE_SYSTEM)
#define RVG_MATCH_SRET		(RV_ENCODE_FUNCT12(SRET) | RVG_OPCODE_SYSTEM)
#define RVC_MATCH_C_BEQZ	(RVC_ENCODE_FUNCT3(C_BEQZ) | RVC_OPCODE_C1)
#define RVC_MATCH_C_BNEZ	(RVC_ENCODE_FUNCT3(C_BNEZ) | RVC_OPCODE_C1)
#define RVC_MATCH_C_J		(RVC_ENCODE_FUNCT3(C_J) | RVC_OPCODE_C1)
#define RVC_MATCH_C_JAL		(RVC_ENCODE_FUNCT3(C_JAL) | RVC_OPCODE_C1)
#define RVC_MATCH_C_JR		(RVC_ENCODE_FUNCT4(C_JR) | RVC_OPCODE_C2)
#define RVC_MATCH_C_JALR	(RVC_ENCODE_FUNCT4(C_JALR) | RVC_OPCODE_C2)
#define RVC_MATCH_C_EBREAK	(RVC_ENCODE_FUNCT4(C_EBREAK) | RVC_OPCODE_C2)

#define RVG_MASK_AUIPC		(RV_INSN_OPCODE_MASK)
#define RVG_MASK_JALR		(RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK)
#define RVG_MASK_JAL		(RV_INSN_OPCODE_MASK)
#define RVG_MASK_FENCE		(RV_INSN_OPCODE_MASK)
#define RVC_MASK_C_JALR		(RVC_INSN_FUNCT4_MASK | RVC_INSN_J_RS2_MASK | RVC_INSN_OPCODE_MASK)
#define RVC_MASK_C_JR		(RVC_INSN_FUNCT4_MASK | RVC_INSN_J_RS2_MASK | RVC_INSN_OPCODE_MASK)
#define RVC_MASK_C_JAL		(RVC_INSN_FUNCT3_MASK | RVC_INSN_OPCODE_MASK)
#define RVC_MASK_C_J		(RVC_INSN_FUNCT3_MASK | RVC_INSN_OPCODE_MASK)
#define RVG_MASK_BEQ		(RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK)
#define RVG_MASK_BNE		(RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK)
#define RVG_MASK_BLT		(RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK)
#define RVG_MASK_BGE		(RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK)
#define RVG_MASK_BLTU		(RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK)
#define RVG_MASK_BGEU		(RV_INSN_FUNCT3_MASK | RV_INSN_OPCODE_MASK)
#define RVC_MASK_C_BEQZ		(RVC_INSN_FUNCT3_MASK | RVC_INSN_OPCODE_MASK)
#define RVC_MASK_C_BNEZ		(RVC_INSN_FUNCT3_MASK | RVC_INSN_OPCODE_MASK)
#define RVC_MASK_C_EBREAK	0xffff
#define RVG_MASK_EBREAK		0xffffffff
#define RVG_MASK_SRET		0xffffffff

#define __INSN_LENGTH_MASK	_UL(0x3)
#define __INSN_LENGTH_GE_32	_UL(0x3)
#define __INSN_OPCODE_MASK	_UL(0x7F)
#define __INSN_BRANCH_OPCODE	_UL(RVG_OPCODE_BRANCH)

#define __RISCV_INSN_FUNCS(name, mask, val)				\
static __always_inline bool riscv_insn_is_##name(u32 code)		\
{									\
	BUILD_BUG_ON(~(mask) & (val));					\
	return (code & (mask)) == (val);				\
}									\

#if __riscv_xlen == 32
/* C.JAL is an RV32C-only instruction */
__RISCV_INSN_FUNCS(c_jal, RVC_MASK_C_JAL, RVC_MATCH_C_JAL)
#else
#define riscv_insn_is_c_jal(opcode)	0
#endif
__RISCV_INSN_FUNCS(auipc, RVG_MASK_AUIPC, RVG_MATCH_AUIPC)
__RISCV_INSN_FUNCS(jalr, RVG_MASK_JALR, RVG_MATCH_JALR)
__RISCV_INSN_FUNCS(jal, RVG_MASK_JAL, RVG_MATCH_JAL)
__RISCV_INSN_FUNCS(c_jr, RVC_MASK_C_JR, RVC_MATCH_C_JR)
__RISCV_INSN_FUNCS(c_jalr, RVC_MASK_C_JALR, RVC_MATCH_C_JALR)
__RISCV_INSN_FUNCS(c_j, RVC_MASK_C_J, RVC_MATCH_C_J)
__RISCV_INSN_FUNCS(beq, RVG_MASK_BEQ, RVG_MATCH_BEQ)
__RISCV_INSN_FUNCS(bne, RVG_MASK_BNE, RVG_MATCH_BNE)
__RISCV_INSN_FUNCS(blt, RVG_MASK_BLT, RVG_MATCH_BLT)
__RISCV_INSN_FUNCS(bge, RVG_MASK_BGE, RVG_MATCH_BGE)
__RISCV_INSN_FUNCS(bltu, RVG_MASK_BLTU, RVG_MATCH_BLTU)
__RISCV_INSN_FUNCS(bgeu, RVG_MASK_BGEU, RVG_MATCH_BGEU)
__RISCV_INSN_FUNCS(c_beqz, RVC_MASK_C_BEQZ, RVC_MATCH_C_BEQZ)
__RISCV_INSN_FUNCS(c_bnez, RVC_MASK_C_BNEZ, RVC_MATCH_C_BNEZ)
__RISCV_INSN_FUNCS(c_ebreak, RVC_MASK_C_EBREAK, RVC_MATCH_C_EBREAK)
__RISCV_INSN_FUNCS(ebreak, RVG_MASK_EBREAK, RVG_MATCH_EBREAK)
__RISCV_INSN_FUNCS(sret, RVG_MASK_SRET, RVG_MATCH_SRET)
__RISCV_INSN_FUNCS(fence, RVG_MASK_FENCE, RVG_MATCH_FENCE);

/* special case to catch _any_ system instruction */
static __always_inline bool riscv_insn_is_system(u32 code)
{
	return (code & RV_INSN_OPCODE_MASK) == RVG_OPCODE_SYSTEM;
}

/* special case to catch _any_ branch instruction */
static __always_inline bool riscv_insn_is_branch(u32 code)
{
	return (code & RV_INSN_OPCODE_MASK) == RVG_OPCODE_BRANCH;
}

#define RV_IMM_SIGN(x) (-(((x) >> 31) & 1))
#define RVC_IMM_SIGN(x) (-(((x) >> 12) & 1))
#define RV_X(X, s, mask)  (((X) >> (s)) & (mask))
#define RVC_X(X, s, mask) RV_X(X, s, mask)

#define RV_EXTRACT_RD_REG(x) \
	({typeof(x) x_ = (x); \
	(RV_X(x_, RVG_RD_OPOFF, RVG_RD_MASK)); })

#define RV_EXTRACT_UTYPE_IMM(x) \
	({typeof(x) x_ = (x); \
	(RV_X(x_, RV_U_IMM_31_12_OPOFF, RV_U_IMM_31_12_MASK)); })

#define RV_EXTRACT_JTYPE_IMM(x) \
	({typeof(x) x_ = (x); \
	(RV_X(x_, RV_J_IMM_10_1_OPOFF, RV_J_IMM_10_1_MASK) << RV_J_IMM_10_1_OFF) | \
	(RV_X(x_, RV_J_IMM_11_OPOFF, RV_J_IMM_11_MASK) << RV_J_IMM_11_OFF) | \
	(RV_X(x_, RV_J_IMM_19_12_OPOFF, RV_J_IMM_19_12_MASK) << RV_J_IMM_19_12_OFF) | \
	(RV_IMM_SIGN(x_) << RV_J_IMM_SIGN_OFF); })

#define RV_EXTRACT_ITYPE_IMM(x) \
	({typeof(x) x_ = (x); \
	(RV_X(x_, RV_I_IMM_11_0_OPOFF, RV_I_IMM_11_0_MASK)) | \
	(RV_IMM_SIGN(x_) << RV_I_IMM_SIGN_OFF); })

#define RV_EXTRACT_BTYPE_IMM(x) \
	({typeof(x) x_ = (x); \
	(RV_X(x_, RV_B_IMM_4_1_OPOFF, RV_B_IMM_4_1_MASK) << RV_B_IMM_4_1_OFF) | \
	(RV_X(x_, RV_B_IMM_10_5_OPOFF, RV_B_IMM_10_5_MASK) << RV_B_IMM_10_5_OFF) | \
	(RV_X(x_, RV_B_IMM_11_OPOFF, RV_B_IMM_11_MASK) << RV_B_IMM_11_OFF) | \
	(RV_IMM_SIGN(x_) << RV_B_IMM_SIGN_OFF); })

#define RVC_EXTRACT_JTYPE_IMM(x) \
	({typeof(x) x_ = (x); \
	(RVC_X(x_, RVC_J_IMM_3_1_OPOFF, RVC_J_IMM_3_1_MASK) << RVC_J_IMM_3_1_OFF) | \
	(RVC_X(x_, RVC_J_IMM_4_OPOFF, RVC_J_IMM_4_MASK) << RVC_J_IMM_4_OFF) | \
	(RVC_X(x_, RVC_J_IMM_5_OPOFF, RVC_J_IMM_5_MASK) << RVC_J_IMM_5_OFF) | \
	(RVC_X(x_, RVC_J_IMM_6_OPOFF, RVC_J_IMM_6_MASK) << RVC_J_IMM_6_OFF) | \
	(RVC_X(x_, RVC_J_IMM_7_OPOFF, RVC_J_IMM_7_MASK) << RVC_J_IMM_7_OFF) | \
	(RVC_X(x_, RVC_J_IMM_9_8_OPOFF, RVC_J_IMM_9_8_MASK) << RVC_J_IMM_9_8_OFF) | \
	(RVC_X(x_, RVC_J_IMM_10_OPOFF, RVC_J_IMM_10_MASK) << RVC_J_IMM_10_OFF) | \
	(RVC_IMM_SIGN(x_) << RVC_J_IMM_SIGN_OFF); })

#define RVC_EXTRACT_BTYPE_IMM(x) \
	({typeof(x) x_ = (x); \
	(RVC_X(x_, RVC_B_IMM_2_1_OPOFF, RVC_B_IMM_2_1_MASK) << RVC_B_IMM_2_1_OFF) | \
	(RVC_X(x_, RVC_B_IMM_4_3_OPOFF, RVC_B_IMM_4_3_MASK) << RVC_B_IMM_4_3_OFF) | \
	(RVC_X(x_, RVC_B_IMM_5_OPOFF, RVC_B_IMM_5_MASK) << RVC_B_IMM_5_OFF) | \
	(RVC_X(x_, RVC_B_IMM_7_6_OPOFF, RVC_B_IMM_7_6_MASK) << RVC_B_IMM_7_6_OFF) | \
	(RVC_IMM_SIGN(x_) << RVC_B_IMM_SIGN_OFF); })

/*
 * Get the immediate from a J-type instruction.
 *
 * @insn: instruction to process
 * Return: immediate
 */
static inline s32 riscv_insn_extract_jtype_imm(u32 insn)
{
	return RV_EXTRACT_JTYPE_IMM(insn);
}

/*
 * Update a J-type instruction with an immediate value.
 *
 * @insn: pointer to the jtype instruction
 * @imm: the immediate to insert into the instruction
 */
static inline void riscv_insn_insert_jtype_imm(u32 *insn, s32 imm)
{
	/* drop the old IMMs, all jal IMM bits sit at 31:12 */
	*insn &= ~GENMASK(31, 12);
	*insn |= (RV_X(imm, RV_J_IMM_10_1_OFF, RV_J_IMM_10_1_MASK) << RV_J_IMM_10_1_OPOFF) |
		 (RV_X(imm, RV_J_IMM_11_OFF, RV_J_IMM_11_MASK) << RV_J_IMM_11_OPOFF) |
		 (RV_X(imm, RV_J_IMM_19_12_OFF, RV_J_IMM_19_12_MASK) << RV_J_IMM_19_12_OPOFF) |
		 (RV_X(imm, RV_J_IMM_SIGN_OFF, 1) << RV_J_IMM_SIGN_OPOFF);
}

/*
 * Put together one immediate from a U-type and I-type instruction pair.
 *
 * The U-type contains an upper immediate, meaning bits[31:12] with [11:0]
 * being zero, while the I-type contains a 12bit immediate.
 * Combined these can encode larger 32bit values and are used for example
 * in auipc + jalr pairs to allow larger jumps.
 *
 * @utype_insn: instruction containing the upper immediate
 * @itype_insn: instruction
 * Return: combined immediate
 */
static inline s32 riscv_insn_extract_utype_itype_imm(u32 utype_insn, u32 itype_insn)
{
	s32 imm;

	imm = RV_EXTRACT_UTYPE_IMM(utype_insn);
	imm += RV_EXTRACT_ITYPE_IMM(itype_insn);

	return imm;
}

/*
 * Update a set of two instructions (U-type + I-type) with an immediate value.
 *
 * Used for example in auipc+jalr pairs, the U-type instruction contains
 * a 20bit upper immediate representing bits[31:12], while the I-type
 * instruction contains a 12bit immediate representing bits[11:0].
 *
 * This also takes into account that both separate immediates are
 * considered as signed values, so if the I-type immediate becomes
 * negative (BIT(11) set) the U-type part gets adjusted.
 *
 * @utype_insn: pointer to the utype instruction of the pair
 * @itype_insn: pointer to the itype instruction of the pair
 * @imm: the immediate to insert into the two instructions
 */
static inline void riscv_insn_insert_utype_itype_imm(u32 *utype_insn, u32 *itype_insn, s32 imm)
{
	/* drop possible old IMM values */
	*utype_insn &= ~(RV_U_IMM_31_12_MASK);
	*itype_insn &= ~(RV_I_IMM_11_0_MASK << RV_I_IMM_11_0_OPOFF);

	/* add the adapted IMMs */
	*utype_insn |= (imm & RV_U_IMM_31_12_MASK) + ((imm & BIT(11)) << 1);
	*itype_insn |= ((imm & RV_I_IMM_11_0_MASK) << RV_I_IMM_11_0_OPOFF);
}
#endif /* _ASM_RISCV_INSN_H */

@@ -18,6 +18,7 @@ static __always_inline bool arch_static_branch(struct static_key * const key,
					       const bool branch)
{
	asm_volatile_goto(
		"	.align		2		\n\t"
		"	.option push			\n\t"
		"	.option norelax			\n\t"
		"	.option norvc			\n\t"

@@ -39,6 +40,7 @@ static __always_inline bool arch_static_branch_jump(struct static_key * const ke
					       const bool branch)
{
	asm_volatile_goto(
		"	.align		2		\n\t"
		"	.option push			\n\t"
		"	.option norelax			\n\t"
		"	.option norvc			\n\t"

@@ -5,6 +5,7 @@
#define _ASM_RISCV_MODULE_H

#include <asm-generic/module.h>
#include <linux/elf.h>

struct module;
unsigned long module_emit_got_entry(struct module *mod, unsigned long val);

@@ -111,4 +112,19 @@ static inline struct plt_entry *get_plt_entry(unsigned long val,

#endif /* CONFIG_MODULE_SECTIONS */

static inline const Elf_Shdr *find_section(const Elf_Ehdr *hdr,
					   const Elf_Shdr *sechdrs,
					   const char *name)
{
	const Elf_Shdr *s, *se;
	const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;

	for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) {
		if (strcmp(name, secstrs + s->sh_name) == 0)
			return s;
	}

	return NULL;
}

#endif /* _ASM_RISCV_MODULE_H */

@@ -1,219 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * Copyright (C) 2020 SiFive
 */

#include <linux/bits.h>

/* The bit field of immediate value in I-type instruction */
#define I_IMM_SIGN_OPOFF	31
#define I_IMM_11_0_OPOFF	20
#define I_IMM_SIGN_OFF		12
#define I_IMM_11_0_OFF		0
#define I_IMM_11_0_MASK		GENMASK(11, 0)

/* The bit field of immediate value in J-type instruction */
#define J_IMM_SIGN_OPOFF	31
#define J_IMM_10_1_OPOFF	21
#define J_IMM_11_OPOFF		20
#define J_IMM_19_12_OPOFF	12
#define J_IMM_SIGN_OFF		20
#define J_IMM_10_1_OFF		1
#define J_IMM_11_OFF		11
#define J_IMM_19_12_OFF		12
#define J_IMM_10_1_MASK		GENMASK(9, 0)
#define J_IMM_11_MASK		GENMASK(0, 0)
#define J_IMM_19_12_MASK	GENMASK(7, 0)

/* The bit field of immediate value in B-type instruction */
#define B_IMM_SIGN_OPOFF	31
#define B_IMM_10_5_OPOFF	25
#define B_IMM_4_1_OPOFF		8
#define B_IMM_11_OPOFF		7
#define B_IMM_SIGN_OFF		12
#define B_IMM_10_5_OFF		5
#define B_IMM_4_1_OFF		1
#define B_IMM_11_OFF		11
#define B_IMM_10_5_MASK		GENMASK(5, 0)
#define B_IMM_4_1_MASK		GENMASK(3, 0)
#define B_IMM_11_MASK		GENMASK(0, 0)

/* The register offset in RVG instruction */
#define RVG_RS1_OPOFF		15
#define RVG_RS2_OPOFF		20
#define RVG_RD_OPOFF		7

/* The bit field of immediate value in RVC J instruction */
#define RVC_J_IMM_SIGN_OPOFF	12
#define RVC_J_IMM_4_OPOFF	11
#define RVC_J_IMM_9_8_OPOFF	9
#define RVC_J_IMM_10_OPOFF	8
#define RVC_J_IMM_6_OPOFF	7
#define RVC_J_IMM_7_OPOFF	6
#define RVC_J_IMM_3_1_OPOFF	3
#define RVC_J_IMM_5_OPOFF	2
#define RVC_J_IMM_SIGN_OFF	11
#define RVC_J_IMM_4_OFF		4
#define RVC_J_IMM_9_8_OFF	8
#define RVC_J_IMM_10_OFF	10
#define RVC_J_IMM_6_OFF		6
#define RVC_J_IMM_7_OFF		7
#define RVC_J_IMM_3_1_OFF	1
#define RVC_J_IMM_5_OFF		5
#define RVC_J_IMM_4_MASK	GENMASK(0, 0)
#define RVC_J_IMM_9_8_MASK	GENMASK(1, 0)
#define RVC_J_IMM_10_MASK	GENMASK(0, 0)
#define RVC_J_IMM_6_MASK	GENMASK(0, 0)
#define RVC_J_IMM_7_MASK	GENMASK(0, 0)
#define RVC_J_IMM_3_1_MASK	GENMASK(2, 0)
#define RVC_J_IMM_5_MASK	GENMASK(0, 0)

/* The bit field of immediate value in RVC B instruction */
#define RVC_B_IMM_SIGN_OPOFF	12
#define RVC_B_IMM_4_3_OPOFF	10
#define RVC_B_IMM_7_6_OPOFF	5
#define RVC_B_IMM_2_1_OPOFF	3
#define RVC_B_IMM_5_OPOFF	2
#define RVC_B_IMM_SIGN_OFF	8
#define RVC_B_IMM_4_3_OFF	3
#define RVC_B_IMM_7_6_OFF	6
#define RVC_B_IMM_2_1_OFF	1
#define RVC_B_IMM_5_OFF		5
#define RVC_B_IMM_4_3_MASK	GENMASK(1, 0)
#define RVC_B_IMM_7_6_MASK	GENMASK(1, 0)
#define RVC_B_IMM_2_1_MASK	GENMASK(1, 0)
#define RVC_B_IMM_5_MASK	GENMASK(0, 0)

/* The register offset in RVC op=C0 instruction */
#define RVC_C0_RS1_OPOFF	7
#define RVC_C0_RS2_OPOFF	2
#define RVC_C0_RD_OPOFF		2

/* The register offset in RVC op=C1 instruction */
#define RVC_C1_RS1_OPOFF	7
#define RVC_C1_RS2_OPOFF	2
#define RVC_C1_RD_OPOFF		7

/* The register offset in RVC op=C2 instruction */
#define RVC_C2_RS1_OPOFF	7
#define RVC_C2_RS2_OPOFF	2
#define RVC_C2_RD_OPOFF		7

/* parts of opcode for RVG */
#define OPCODE_BRANCH		0x63
#define OPCODE_JALR		0x67
#define OPCODE_JAL		0x6f
#define OPCODE_SYSTEM		0x73

/* parts of opcode for RVC */
#define OPCODE_C_0		0x0
#define OPCODE_C_1		0x1
#define OPCODE_C_2		0x2

/* parts of funct3 code for I, M, A extension */
#define FUNCT3_JALR		0x0
#define FUNCT3_BEQ		0x0
#define FUNCT3_BNE		0x1000
#define FUNCT3_BLT		0x4000
#define FUNCT3_BGE		0x5000
#define FUNCT3_BLTU		0x6000
#define FUNCT3_BGEU		0x7000

/* parts of funct3 code for C extension */
#define FUNCT3_C_BEQZ		0xc000
#define FUNCT3_C_BNEZ		0xe000
#define FUNCT3_C_J		0xa000
#define FUNCT3_C_JAL		0x2000
#define FUNCT4_C_JR		0x8000
#define FUNCT4_C_JALR		0xf000

#define FUNCT12_SRET		0x10200000

#define MATCH_JALR		(FUNCT3_JALR | OPCODE_JALR)
#define MATCH_JAL		(OPCODE_JAL)
#define MATCH_BEQ		(FUNCT3_BEQ | OPCODE_BRANCH)
#define MATCH_BNE		(FUNCT3_BNE | OPCODE_BRANCH)
#define MATCH_BLT		(FUNCT3_BLT | OPCODE_BRANCH)
#define MATCH_BGE		(FUNCT3_BGE | OPCODE_BRANCH)
#define MATCH_BLTU		(FUNCT3_BLTU | OPCODE_BRANCH)
#define MATCH_BGEU		(FUNCT3_BGEU | OPCODE_BRANCH)
#define MATCH_SRET		(FUNCT12_SRET | OPCODE_SYSTEM)
#define MATCH_C_BEQZ		(FUNCT3_C_BEQZ | OPCODE_C_1)
#define MATCH_C_BNEZ		(FUNCT3_C_BNEZ | OPCODE_C_1)
#define MATCH_C_J		(FUNCT3_C_J | OPCODE_C_1)
#define MATCH_C_JAL		(FUNCT3_C_JAL | OPCODE_C_1)
#define MATCH_C_JR		(FUNCT4_C_JR | OPCODE_C_2)
#define MATCH_C_JALR		(FUNCT4_C_JALR | OPCODE_C_2)

#define MASK_JALR		0x707f
#define MASK_JAL		0x7f
#define MASK_C_JALR		0xf07f
#define MASK_C_JR		0xf07f
#define MASK_C_JAL		0xe003
#define MASK_C_J		0xe003
#define MASK_BEQ		0x707f
#define MASK_BNE		0x707f
#define MASK_BLT		0x707f
#define MASK_BGE		0x707f
#define MASK_BLTU		0x707f
#define MASK_BGEU		0x707f
#define MASK_C_BEQZ		0xe003
#define MASK_C_BNEZ		0xe003
#define MASK_SRET		0xffffffff

#define __INSN_LENGTH_MASK	_UL(0x3)
#define __INSN_LENGTH_GE_32	_UL(0x3)
#define __INSN_OPCODE_MASK	_UL(0x7F)
#define __INSN_BRANCH_OPCODE	_UL(OPCODE_BRANCH)

/* Define a series of is_XXX_insn functions to check if the value INSN
 * is an instance of instruction XXX.
 */
#define DECLARE_INSN(INSN_NAME, INSN_MATCH, INSN_MASK) \
static inline bool is_ ## INSN_NAME ## _insn(long insn) \
{ \
	return (insn & (INSN_MASK)) == (INSN_MATCH); \
}

#define RV_IMM_SIGN(x) (-(((x) >> 31) & 1))
#define RVC_IMM_SIGN(x) (-(((x) >> 12) & 1))
#define RV_X(X, s, mask)  (((X) >> (s)) & (mask))
#define RVC_X(X, s, mask) RV_X(X, s, mask)

#define EXTRACT_JTYPE_IMM(x) \
	({typeof(x) x_ = (x); \
	(RV_X(x_, J_IMM_10_1_OPOFF, J_IMM_10_1_MASK) << J_IMM_10_1_OFF) | \
	(RV_X(x_, J_IMM_11_OPOFF, J_IMM_11_MASK) << J_IMM_11_OFF) | \
	(RV_X(x_, J_IMM_19_12_OPOFF, J_IMM_19_12_MASK) << J_IMM_19_12_OFF) | \
	(RV_IMM_SIGN(x_) << J_IMM_SIGN_OFF); })

#define EXTRACT_ITYPE_IMM(x) \
	({typeof(x) x_ = (x); \
	(RV_X(x_, I_IMM_11_0_OPOFF, I_IMM_11_0_MASK)) | \
	(RV_IMM_SIGN(x_) << I_IMM_SIGN_OFF); })

#define EXTRACT_BTYPE_IMM(x) \
	({typeof(x) x_ = (x); \
	(RV_X(x_, B_IMM_4_1_OPOFF, B_IMM_4_1_MASK) << B_IMM_4_1_OFF) | \
	(RV_X(x_, B_IMM_10_5_OPOFF, B_IMM_10_5_MASK) << B_IMM_10_5_OFF) | \
	(RV_X(x_, B_IMM_11_OPOFF, B_IMM_11_MASK) << B_IMM_11_OFF) | \
	(RV_IMM_SIGN(x_) << B_IMM_SIGN_OFF); })

#define EXTRACT_RVC_J_IMM(x) \
	({typeof(x) x_ = (x); \
	(RVC_X(x_, RVC_J_IMM_3_1_OPOFF, RVC_J_IMM_3_1_MASK) << RVC_J_IMM_3_1_OFF) | \
	(RVC_X(x_, RVC_J_IMM_4_OPOFF, RVC_J_IMM_4_MASK) << RVC_J_IMM_4_OFF) | \
	(RVC_X(x_, RVC_J_IMM_5_OPOFF, RVC_J_IMM_5_MASK) << RVC_J_IMM_5_OFF) | \
	(RVC_X(x_, RVC_J_IMM_6_OPOFF, RVC_J_IMM_6_MASK) << RVC_J_IMM_6_OFF) | \
	(RVC_X(x_, RVC_J_IMM_7_OPOFF, RVC_J_IMM_7_MASK) << RVC_J_IMM_7_OFF) | \
	(RVC_X(x_, RVC_J_IMM_9_8_OPOFF, RVC_J_IMM_9_8_MASK) << RVC_J_IMM_9_8_OFF) | \
	(RVC_X(x_, RVC_J_IMM_10_OPOFF, RVC_J_IMM_10_MASK) << RVC_J_IMM_10_OFF) | \
	(RVC_IMM_SIGN(x_) << RVC_J_IMM_SIGN_OFF); })

#define EXTRACT_RVC_B_IMM(x) \
	({typeof(x) x_ = (x); \
	(RVC_X(x_, RVC_B_IMM_2_1_OPOFF, RVC_B_IMM_2_1_MASK) << RVC_B_IMM_2_1_OFF) | \
	(RVC_X(x_, RVC_B_IMM_4_3_OPOFF, RVC_B_IMM_4_3_MASK) << RVC_B_IMM_4_3_OFF) | \
	(RVC_X(x_, RVC_B_IMM_5_OPOFF, RVC_B_IMM_5_MASK) << RVC_B_IMM_5_OFF) | \
	(RVC_X(x_, RVC_B_IMM_7_6_OPOFF, RVC_B_IMM_7_6_MASK) << RVC_B_IMM_7_6_OFF) | \
	(RVC_IMM_SIGN(x_) << RVC_B_IMM_SIGN_OFF); })

@@ -31,7 +31,7 @@
#define PTRS_PER_PTE	(PAGE_SIZE / sizeof(pte_t))

/*
 * Half of the kernel address space (half of the entries of the page global
 * Half of the kernel address space (1/4 of the entries of the page global
 * directory) is for the direct mapping.
 */
#define KERN_VIRT_SIZE		((PTRS_PER_PGD / 2 * PGDIR_SIZE) / 2)

@@ -415,7 +415,7 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
	 * Relying on flush_tlb_fix_spurious_fault would suffice, but
	 * the extra traps reduce performance. So, eagerly SFENCE.VMA.
	 */
	flush_tlb_page(vma, address);
	local_flush_tlb_page(address);
}

#define __HAVE_ARCH_UPDATE_MMU_TLB

@@ -7,6 +7,6 @@
#include <uapi/asm/ptrace.h>

asmlinkage __visible
void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags);
void do_work_pending(struct pt_regs *regs, unsigned long thread_info_flags);

#endif

@@ -18,6 +18,16 @@ extern asmlinkage void *__memcpy(void *, const void *, size_t);
#define __HAVE_ARCH_MEMMOVE
extern asmlinkage void *memmove(void *, const void *, size_t);
extern asmlinkage void *__memmove(void *, const void *, size_t);

#define __HAVE_ARCH_STRCMP
extern asmlinkage int strcmp(const char *cs, const char *ct);

#define __HAVE_ARCH_STRLEN
extern asmlinkage __kernel_size_t strlen(const char *);

#define __HAVE_ARCH_STRNCMP
extern asmlinkage int strncmp(const char *cs, const char *ct, size_t count);

/* For those files which don't want to check by kasan. */
#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
#define memcpy(dst, src, len) __memcpy(dst, src, len)

@@ -59,7 +59,8 @@ static inline void __switch_to_aux(struct task_struct *prev,
 
 static __always_inline bool has_fpu(void)
 {
-	return static_branch_likely(&riscv_isa_ext_keys[RISCV_ISA_EXT_KEY_FPU]);
+	return riscv_has_extension_likely(RISCV_ISA_EXT_f) ||
+	       riscv_has_extension_likely(RISCV_ISA_EXT_d);
 }
 #else
 static __always_inline bool has_fpu(void) { return false; }
@@ -43,6 +43,7 @@
 #ifndef __ASSEMBLY__
 
 extern long shadow_stack[SHADOW_OVERFLOW_STACK_SIZE / sizeof(long)];
+extern unsigned long spin_shadow_stack;
 
 #include <asm/processor.h>
 #include <asm/csr.h>
@@ -28,8 +28,12 @@
 #define COMPAT_VDSO_SYMBOL(base, name) \
	(void __user *)((unsigned long)(base) + compat__vdso_##name##_offset)
 
+extern char compat_vdso_start[], compat_vdso_end[];
+
 #endif /* CONFIG_COMPAT */
 
+extern char vdso_start[], vdso_end[];
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* CONFIG_MMU */
@@ -11,10 +11,14 @@
 #include <linux/cpu.h>
 #include <linux/uaccess.h>
 #include <asm/alternative.h>
+#include <asm/module.h>
 #include <asm/sections.h>
+#include <asm/vdso.h>
 #include <asm/vendorid_list.h>
 #include <asm/sbi.h>
 #include <asm/csr.h>
+#include <asm/insn.h>
+#include <asm/patch.h>
 
 struct cpu_manufacturer_info_t {
	unsigned long vendor_id;
@@ -53,6 +57,88 @@ static void __init_or_module riscv_fill_cpu_mfr_info(struct cpu_manufacturer_inf
	}
 }
 
+static u32 riscv_instruction_at(void *p)
+{
+	u16 *parcel = p;
+
+	return (u32)parcel[0] | (u32)parcel[1] << 16;
+}
+
+static void riscv_alternative_fix_auipc_jalr(void *ptr, u32 auipc_insn,
+					     u32 jalr_insn, int patch_offset)
+{
+	u32 call[2] = { auipc_insn, jalr_insn };
+	s32 imm;
+
+	/* get and adjust new target address */
+	imm = riscv_insn_extract_utype_itype_imm(auipc_insn, jalr_insn);
+	imm -= patch_offset;
+
+	/* update instructions */
+	riscv_insn_insert_utype_itype_imm(&call[0], &call[1], imm);
+
+	/* patch the call place again */
+	patch_text_nosync(ptr, call, sizeof(u32) * 2);
+}
+
+static void riscv_alternative_fix_jal(void *ptr, u32 jal_insn, int patch_offset)
+{
+	s32 imm;
+
+	/* get and adjust new target address */
+	imm = riscv_insn_extract_jtype_imm(jal_insn);
+	imm -= patch_offset;
+
+	/* update instruction */
+	riscv_insn_insert_jtype_imm(&jal_insn, imm);
+
+	/* patch the call place again */
+	patch_text_nosync(ptr, &jal_insn, sizeof(u32));
+}
+
+void riscv_alternative_fix_offsets(void *alt_ptr, unsigned int len,
+				   int patch_offset)
+{
+	int num_insn = len / sizeof(u32);
+	int i;
+
+	for (i = 0; i < num_insn; i++) {
+		u32 insn = riscv_instruction_at(alt_ptr + i * sizeof(u32));
+
+		/*
+		 * May be the start of an auipc + jalr pair
+		 * Needs to check that at least one more instruction
+		 * is in the list.
+		 */
+		if (riscv_insn_is_auipc(insn) && i < num_insn - 1) {
+			u32 insn2 = riscv_instruction_at(alt_ptr + (i + 1) * sizeof(u32));
+
+			if (!riscv_insn_is_jalr(insn2))
+				continue;
+
+			/* if instruction pair is a call, it will use the ra register */
+			if (RV_EXTRACT_RD_REG(insn) != 1)
+				continue;
+
+			riscv_alternative_fix_auipc_jalr(alt_ptr + i * sizeof(u32),
+							 insn, insn2, patch_offset);
+			i++;
+		}
+
+		if (riscv_insn_is_jal(insn)) {
+			s32 imm = riscv_insn_extract_jtype_imm(insn);
+
+			/* Don't modify jumps inside the alternative block */
+			if ((alt_ptr + i * sizeof(u32) + imm) >= alt_ptr &&
+			    (alt_ptr + i * sizeof(u32) + imm) < (alt_ptr + len))
+				continue;
+
+			riscv_alternative_fix_jal(alt_ptr + i * sizeof(u32),
+						  insn, patch_offset);
+		}
+	}
+}
+
 /*
  * This is called very early in the boot process (directly after we run
  * a feature detect on the boot CPU). No need to worry about other CPUs
@@ -77,6 +163,31 @@ static void __init_or_module _apply_alternatives(struct alt_entry *begin,
			    stage);
 }
 
+#ifdef CONFIG_MMU
+static void __init apply_vdso_alternatives(void)
+{
+	const Elf_Ehdr *hdr;
+	const Elf_Shdr *shdr;
+	const Elf_Shdr *alt;
+	struct alt_entry *begin, *end;
+
+	hdr = (Elf_Ehdr *)vdso_start;
+	shdr = (void *)hdr + hdr->e_shoff;
+	alt = find_section(hdr, shdr, ".alternative");
+	if (!alt)
+		return;
+
+	begin = (void *)hdr + alt->sh_offset,
+	end = (void *)hdr + alt->sh_offset + alt->sh_size,
+
+	_apply_alternatives((struct alt_entry *)begin,
+			    (struct alt_entry *)end,
+			    RISCV_ALTERNATIVES_BOOT);
+}
+#else
+static void __init apply_vdso_alternatives(void) { }
+#endif
+
 void __init apply_boot_alternatives(void)
 {
	/* If called on non-boot cpu things could go wrong */
@@ -85,6 +196,8 @@ void __init apply_boot_alternatives(void)
	_apply_alternatives((struct alt_entry *)__alt_start,
			    (struct alt_entry *)__alt_end,
			    RISCV_ALTERNATIVES_BOOT);
+
+	apply_vdso_alternatives();
 }
 
 /*
@@ -144,30 +144,54 @@ arch_initcall(riscv_cpuinfo_init);
		.uprop = #UPROP, \
		.isa_ext_id = EXTID, \
	}
 
 /*
- * Here are the ordering rules of extension naming defined by RISC-V
- * specification :
- * 1. All extensions should be separated from other multi-letter extensions
- *    by an underscore.
- * 2. The first letter following the 'Z' conventionally indicates the most
+ * The canonical order of ISA extension names in the ISA string is defined in
+ * chapter 27 of the unprivileged specification.
+ *
+ * Ordinarily, for in-kernel data structures, this order is unimportant but
+ * isa_ext_arr defines the order of the ISA string in /proc/cpuinfo.
+ *
+ * The specification uses vague wording, such as should, when it comes to
+ * ordering, so for our purposes the following rules apply:
+ *
+ * 1. All multi-letter extensions must be separated from other extensions by an
+ *    underscore.
+ *
+ * 2. Additional standard extensions (starting with 'Z') must be sorted after
+ *    single-letter extensions and before any higher-privileged extensions.
+ *
+ * 3. The first letter following the 'Z' conventionally indicates the most
  *    closely related alphabetical extension category, IMAFDQLCBKJTPVH.
-  *    If multiple 'Z' extensions are named, they should be ordered first
- *    by category, then alphabetically within a category.
- * 3. Standard supervisor-level extensions (starts with 'S') should be
- *    listed after standard unprivileged extensions. If multiple
- *    supervisor-level extensions are listed, they should be ordered
+ *    If multiple 'Z' extensions are named, they must be ordered first by
+ *    category, then alphabetically within a category.
+ *
+ * 3. Standard supervisor-level extensions (starting with 'S') must be listed
+ *    after standard unprivileged extensions. If multiple supervisor-level
+ *    extensions are listed, they must be ordered alphabetically.
+ *
+ * 4. Standard machine-level extensions (starting with 'Zxm') must be listed
+ *    after any lower-privileged, standard extensions. If multiple
+ *    machine-level extensions are listed, they must be ordered
+ *    alphabetically.
- * 4. Non-standard extensions (starts with 'X') must be listed after all
- *    standard extensions. They must be separated from other multi-letter
- *    extensions by an underscore.
+ *
+ * 5. Non-standard extensions (starting with 'X') must be listed after all
+ *    standard extensions. If multiple non-standard extensions are listed, they
+ *    must be ordered alphabetically.
+ *
+ * An example string following the order is:
+ *    rv64imadc_zifoo_zigoo_zafoo_sbar_scar_zxmbaz_xqux_xrux
+ *
+ * New entries to this struct should follow the ordering rules described above.
  */
 static struct riscv_isa_ext_data isa_ext_arr[] = {
+	__RISCV_ISA_EXT_DATA(zicbom, RISCV_ISA_EXT_ZICBOM),
+	__RISCV_ISA_EXT_DATA(zihintpause, RISCV_ISA_EXT_ZIHINTPAUSE),
+	__RISCV_ISA_EXT_DATA(zbb, RISCV_ISA_EXT_ZBB),
	__RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF),
	__RISCV_ISA_EXT_DATA(sstc, RISCV_ISA_EXT_SSTC),
	__RISCV_ISA_EXT_DATA(svinval, RISCV_ISA_EXT_SVINVAL),
	__RISCV_ISA_EXT_DATA(svpbmt, RISCV_ISA_EXT_SVPBMT),
-	__RISCV_ISA_EXT_DATA(zicbom, RISCV_ISA_EXT_ZICBOM),
-	__RISCV_ISA_EXT_DATA(zihintpause, RISCV_ISA_EXT_ZIHINTPAUSE),
	__RISCV_ISA_EXT_DATA("", RISCV_ISA_EXT_MAX),
 };
@@ -10,6 +10,7 @@
 #include <linux/ctype.h>
 #include <linux/libfdt.h>
 #include <linux/log2.h>
+#include <linux/memory.h>
 #include <linux/module.h>
 #include <linux/of.h>
 #include <asm/alternative.h>
@@ -29,9 +30,6 @@ unsigned long elf_hwcap __read_mostly;
 /* Host ISA bitmap */
 static DECLARE_BITMAP(riscv_isa, RISCV_ISA_EXT_MAX) __read_mostly;
 
-DEFINE_STATIC_KEY_ARRAY_FALSE(riscv_isa_ext_keys, RISCV_ISA_EXT_KEY_MAX);
-EXPORT_SYMBOL(riscv_isa_ext_keys);
-
 /**
  * riscv_isa_extension_base() - Get base extension word
  *
@@ -222,12 +220,14 @@ void __init riscv_fill_hwcap(void)
				set_bit(nr, this_isa);
			}
		} else {
+			/* sorted alphabetically */
			SET_ISA_EXT_MAP("sscofpmf", RISCV_ISA_EXT_SSCOFPMF);
-			SET_ISA_EXT_MAP("svpbmt", RISCV_ISA_EXT_SVPBMT);
-			SET_ISA_EXT_MAP("zicbom", RISCV_ISA_EXT_ZICBOM);
-			SET_ISA_EXT_MAP("zihintpause", RISCV_ISA_EXT_ZIHINTPAUSE);
			SET_ISA_EXT_MAP("sstc", RISCV_ISA_EXT_SSTC);
			SET_ISA_EXT_MAP("svinval", RISCV_ISA_EXT_SVINVAL);
+			SET_ISA_EXT_MAP("svpbmt", RISCV_ISA_EXT_SVPBMT);
+			SET_ISA_EXT_MAP("zbb", RISCV_ISA_EXT_ZBB);
+			SET_ISA_EXT_MAP("zicbom", RISCV_ISA_EXT_ZICBOM);
+			SET_ISA_EXT_MAP("zihintpause", RISCV_ISA_EXT_ZIHINTPAUSE);
		}
 #undef SET_ISA_EXT_MAP
	}
@@ -266,81 +266,38 @@ void __init riscv_fill_hwcap(void)
		if (elf_hwcap & BIT_MASK(i))
			print_str[j++] = (char)('a' + i);
	pr_info("riscv: ELF capabilities %s\n", print_str);
-
-	for_each_set_bit(i, riscv_isa, RISCV_ISA_EXT_MAX) {
-		j = riscv_isa_ext2key(i);
-		if (j >= 0)
-			static_branch_enable(&riscv_isa_ext_keys[j]);
-	}
 }
 
 #ifdef CONFIG_RISCV_ALTERNATIVE
-static bool __init_or_module cpufeature_probe_svpbmt(unsigned int stage)
-{
-	if (!IS_ENABLED(CONFIG_RISCV_ISA_SVPBMT))
-		return false;
-
-	if (stage == RISCV_ALTERNATIVES_EARLY_BOOT)
-		return false;
-
-	return riscv_isa_extension_available(NULL, SVPBMT);
-}
-
-static bool __init_or_module cpufeature_probe_zicbom(unsigned int stage)
-{
-	if (!IS_ENABLED(CONFIG_RISCV_ISA_ZICBOM))
-		return false;
-
-	if (stage == RISCV_ALTERNATIVES_EARLY_BOOT)
-		return false;
-
-	if (!riscv_isa_extension_available(NULL, ZICBOM))
-		return false;
-
-	riscv_noncoherent_supported();
-	return true;
-}
-
-/*
- * Probe presence of individual extensions.
- *
- * This code may also be executed before kernel relocation, so we cannot use
- * addresses generated by the address-of operator as they won't be valid in
- * this context.
- */
-static u32 __init_or_module cpufeature_probe(unsigned int stage)
-{
-	u32 cpu_req_feature = 0;
-
-	if (cpufeature_probe_svpbmt(stage))
-		cpu_req_feature |= BIT(CPUFEATURE_SVPBMT);
-
-	if (cpufeature_probe_zicbom(stage))
-		cpu_req_feature |= BIT(CPUFEATURE_ZICBOM);
-
-	return cpu_req_feature;
-}
-
 void __init_or_module riscv_cpufeature_patch_func(struct alt_entry *begin,
						  struct alt_entry *end,
						  unsigned int stage)
 {
-	u32 cpu_req_feature = cpufeature_probe(stage);
	struct alt_entry *alt;
-	u32 tmp;
+	void *oldptr, *altptr;
+
+	if (stage == RISCV_ALTERNATIVES_EARLY_BOOT)
+		return;
 
	for (alt = begin; alt < end; alt++) {
		if (alt->vendor_id != 0)
			continue;
-		if (alt->errata_id >= CPUFEATURE_NUMBER) {
-			WARN(1, "This feature id:%d is not in kernel cpufeature list",
+		if (alt->errata_id >= RISCV_ISA_EXT_MAX) {
+			WARN(1, "This extension id:%d is not in ISA extension list",
			     alt->errata_id);
			continue;
		}
 
-		tmp = (1U << alt->errata_id);
-		if (cpu_req_feature & tmp)
-			patch_text_nosync(alt->old_ptr, alt->alt_ptr, alt->alt_len);
+		if (!__riscv_isa_extension_available(NULL, alt->errata_id))
+			continue;
+
+		oldptr = ALT_OLD_PTR(alt);
+		altptr = ALT_ALT_PTR(alt);
+
+		mutex_lock(&text_mutex);
+		patch_text_nosync(oldptr, altptr, alt->alt_len);
+		riscv_alternative_fix_offsets(oldptr, alt->alt_len, oldptr - altptr);
+		mutex_unlock(&text_mutex);
	}
 }
 #endif
@@ -55,12 +55,15 @@ static int ftrace_check_current_call(unsigned long hook_pos,
 }
 
 static int __ftrace_modify_call(unsigned long hook_pos, unsigned long target,
-				bool enable)
+				bool enable, bool ra)
 {
	unsigned int call[2];
	unsigned int nops[2] = {NOP4, NOP4};
 
-	make_call(hook_pos, target, call);
+	if (ra)
+		make_call_ra(hook_pos, target, call);
+	else
+		make_call_t0(hook_pos, target, call);
 
	/* Replace the auipc-jalr pair at once. Return -EPERM on write error. */
	if (patch_text_nosync
@@ -70,42 +73,13 @@ static int __ftrace_modify_call(unsigned long hook_pos, unsigned long target,
	return 0;
 }
 
-/*
- * Put 5 instructions with 16 bytes at the front of function within
- * patchable function entry nops' area.
- *
- * 0: REG_S  ra, -SZREG(sp)
- * 1: auipc  ra, 0x?
- * 2: jalr   -?(ra)
- * 3: REG_L  ra, -SZREG(sp)
- *
- * So the opcodes is:
- * 0: 0xfe113c23 (sd)/0xfe112e23 (sw)
- * 1: 0x???????? -> auipc
- * 2: 0x???????? -> jalr
- * 3: 0xff813083 (ld)/0xffc12083 (lw)
- */
-#if __riscv_xlen == 64
-#define INSN0	0xfe113c23
-#define INSN3	0xff813083
-#elif __riscv_xlen == 32
-#define INSN0	0xfe112e23
-#define INSN3	0xffc12083
-#endif
-
-#define FUNC_ENTRY_SIZE	16
-#define FUNC_ENTRY_JMP	4
-
 int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 {
-	unsigned int call[4] = {INSN0, 0, 0, INSN3};
-	unsigned long target = addr;
-	unsigned long caller = rec->ip + FUNC_ENTRY_JMP;
+	unsigned int call[2];
 
-	call[1] = to_auipc_insn((unsigned int)(target - caller));
-	call[2] = to_jalr_insn((unsigned int)(target - caller));
+	make_call_t0(rec->ip, addr, call);
 
-	if (patch_text_nosync((void *)rec->ip, call, FUNC_ENTRY_SIZE))
+	if (patch_text_nosync((void *)rec->ip, call, MCOUNT_INSN_SIZE))
		return -EPERM;
 
	return 0;
@@ -114,15 +88,14 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
		    unsigned long addr)
 {
-	unsigned int nops[4] = {NOP4, NOP4, NOP4, NOP4};
+	unsigned int nops[2] = {NOP4, NOP4};
 
-	if (patch_text_nosync((void *)rec->ip, nops, FUNC_ENTRY_SIZE))
+	if (patch_text_nosync((void *)rec->ip, nops, MCOUNT_INSN_SIZE))
		return -EPERM;
 
	return 0;
 }
 
 /*
  * This is called early on, and isn't wrapped by
  * ftrace_arch_code_modify_{prepare,post_process}() and therefor doesn't hold
@@ -144,10 +117,10 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
 int ftrace_update_ftrace_func(ftrace_func_t func)
 {
	int ret = __ftrace_modify_call((unsigned long)&ftrace_call,
-				       (unsigned long)func, true);
+				       (unsigned long)func, true, true);
	if (!ret) {
		ret = __ftrace_modify_call((unsigned long)&ftrace_regs_call,
-					   (unsigned long)func, true);
+					   (unsigned long)func, true, true);
	}
 
	return ret;
@@ -159,16 +132,16 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
		       unsigned long addr)
 {
	unsigned int call[2];
-	unsigned long caller = rec->ip + FUNC_ENTRY_JMP;
+	unsigned long caller = rec->ip;
	int ret;
 
-	make_call(caller, old_addr, call);
+	make_call_t0(caller, old_addr, call);
	ret = ftrace_check_current_call(caller, call);
 
	if (ret)
		return ret;
 
-	return __ftrace_modify_call(caller, addr, true);
+	return __ftrace_modify_call(caller, addr, true, false);
 }
 #endif
@@ -203,12 +176,12 @@ int ftrace_enable_ftrace_graph_caller(void)
	int ret;
 
	ret = __ftrace_modify_call((unsigned long)&ftrace_graph_call,
-				   (unsigned long)&prepare_ftrace_return, true);
+				   (unsigned long)&prepare_ftrace_return, true, true);
	if (ret)
		return ret;
 
	return __ftrace_modify_call((unsigned long)&ftrace_graph_regs_call,
-				    (unsigned long)&prepare_ftrace_return, true);
+				    (unsigned long)&prepare_ftrace_return, true, true);
 }
 
 int ftrace_disable_ftrace_graph_caller(void)
@@ -216,12 +189,12 @@ int ftrace_disable_ftrace_graph_caller(void)
	int ret;
 
	ret = __ftrace_modify_call((unsigned long)&ftrace_graph_call,
-				   (unsigned long)&prepare_ftrace_return, false);
+				   (unsigned long)&prepare_ftrace_return, false, true);
	if (ret)
		return ret;
 
	return __ftrace_modify_call((unsigned long)&ftrace_graph_regs_call,
-				    (unsigned long)&prepare_ftrace_return, false);
+				    (unsigned long)&prepare_ftrace_return, false, true);
 }
 #endif /* CONFIG_DYNAMIC_FTRACE */
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
@@ -11,7 +11,7 @@
 #include <linux/string.h>
 #include <asm/cacheflush.h>
 #include <asm/gdb_xml.h>
-#include <asm/parse_asm.h>
+#include <asm/insn.h>
 
 enum {
	NOT_KGDB_BREAK = 0,
@@ -23,27 +23,6 @@ enum {
 static unsigned long stepped_address;
 static unsigned int stepped_opcode;
 
-#if __riscv_xlen == 32
-/* C.JAL is an RV32C-only instruction */
-DECLARE_INSN(c_jal, MATCH_C_JAL, MASK_C_JAL)
-#else
-#define is_c_jal_insn(opcode) 0
-#endif
-DECLARE_INSN(jalr, MATCH_JALR, MASK_JALR)
-DECLARE_INSN(jal, MATCH_JAL, MASK_JAL)
-DECLARE_INSN(c_jr, MATCH_C_JR, MASK_C_JR)
-DECLARE_INSN(c_jalr, MATCH_C_JALR, MASK_C_JALR)
-DECLARE_INSN(c_j, MATCH_C_J, MASK_C_J)
-DECLARE_INSN(beq, MATCH_BEQ, MASK_BEQ)
-DECLARE_INSN(bne, MATCH_BNE, MASK_BNE)
-DECLARE_INSN(blt, MATCH_BLT, MASK_BLT)
-DECLARE_INSN(bge, MATCH_BGE, MASK_BGE)
-DECLARE_INSN(bltu, MATCH_BLTU, MASK_BLTU)
-DECLARE_INSN(bgeu, MATCH_BGEU, MASK_BGEU)
-DECLARE_INSN(c_beqz, MATCH_C_BEQZ, MASK_C_BEQZ)
-DECLARE_INSN(c_bnez, MATCH_C_BNEZ, MASK_C_BNEZ)
-DECLARE_INSN(sret, MATCH_SRET, MASK_SRET)
-
 static int decode_register_index(unsigned long opcode, int offset)
 {
	return (opcode >> offset) & 0x1F;
@@ -65,23 +44,25 @@ static int get_step_address(struct pt_regs *regs, unsigned long *next_addr)
	if (get_kernel_nofault(op_code, (void *)pc))
		return -EINVAL;
	if ((op_code & __INSN_LENGTH_MASK) != __INSN_LENGTH_GE_32) {
-		if (is_c_jalr_insn(op_code) || is_c_jr_insn(op_code)) {
+		if (riscv_insn_is_c_jalr(op_code) ||
+		    riscv_insn_is_c_jr(op_code)) {
			rs1_num = decode_register_index(op_code, RVC_C2_RS1_OPOFF);
			*next_addr = regs_ptr[rs1_num];
-		} else if (is_c_j_insn(op_code) || is_c_jal_insn(op_code)) {
-			*next_addr = EXTRACT_RVC_J_IMM(op_code) + pc;
-		} else if (is_c_beqz_insn(op_code)) {
+		} else if (riscv_insn_is_c_j(op_code) ||
+			   riscv_insn_is_c_jal(op_code)) {
+			*next_addr = RVC_EXTRACT_JTYPE_IMM(op_code) + pc;
+		} else if (riscv_insn_is_c_beqz(op_code)) {
			rs1_num = decode_register_index_short(op_code,
							      RVC_C1_RS1_OPOFF);
			if (!rs1_num || regs_ptr[rs1_num] == 0)
-				*next_addr = EXTRACT_RVC_B_IMM(op_code) + pc;
+				*next_addr = RVC_EXTRACT_BTYPE_IMM(op_code) + pc;
			else
				*next_addr = pc + 2;
-		} else if (is_c_bnez_insn(op_code)) {
+		} else if (riscv_insn_is_c_bnez(op_code)) {
			rs1_num =
			    decode_register_index_short(op_code, RVC_C1_RS1_OPOFF);
			if (rs1_num && regs_ptr[rs1_num] != 0)
-				*next_addr = EXTRACT_RVC_B_IMM(op_code) + pc;
+				*next_addr = RVC_EXTRACT_BTYPE_IMM(op_code) + pc;
			else
				*next_addr = pc + 2;
		} else {
@@ -90,7 +71,7 @@ static int get_step_address(struct pt_regs *regs, unsigned long *next_addr)
	} else {
		if ((op_code & __INSN_OPCODE_MASK) == __INSN_BRANCH_OPCODE) {
			bool result = false;
-			long imm = EXTRACT_BTYPE_IMM(op_code);
+			long imm = RV_EXTRACT_BTYPE_IMM(op_code);
			unsigned long rs1_val = 0, rs2_val = 0;
 
			rs1_num = decode_register_index(op_code, RVG_RS1_OPOFF);
@@ -100,34 +81,34 @@ static int get_step_address(struct pt_regs *regs, unsigned long *next_addr)
			if (rs2_num)
				rs2_val = regs_ptr[rs2_num];
 
-			if (is_beq_insn(op_code))
+			if (riscv_insn_is_beq(op_code))
				result = (rs1_val == rs2_val) ? true : false;
-			else if (is_bne_insn(op_code))
+			else if (riscv_insn_is_bne(op_code))
				result = (rs1_val != rs2_val) ? true : false;
-			else if (is_blt_insn(op_code))
+			else if (riscv_insn_is_blt(op_code))
				result =
				    ((long)rs1_val <
				     (long)rs2_val) ? true : false;
-			else if (is_bge_insn(op_code))
+			else if (riscv_insn_is_bge(op_code))
				result =
				    ((long)rs1_val >=
				     (long)rs2_val) ? true : false;
-			else if (is_bltu_insn(op_code))
+			else if (riscv_insn_is_bltu(op_code))
				result = (rs1_val < rs2_val) ? true : false;
-			else if (is_bgeu_insn(op_code))
+			else if (riscv_insn_is_bgeu(op_code))
				result = (rs1_val >= rs2_val) ? true : false;
			if (result)
				*next_addr = imm + pc;
			else
				*next_addr = pc + 4;
-		} else if (is_jal_insn(op_code)) {
-			*next_addr = EXTRACT_JTYPE_IMM(op_code) + pc;
-		} else if (is_jalr_insn(op_code)) {
+		} else if (riscv_insn_is_jal(op_code)) {
+			*next_addr = RV_EXTRACT_JTYPE_IMM(op_code) + pc;
+		} else if (riscv_insn_is_jalr(op_code)) {
			rs1_num = decode_register_index(op_code, RVG_RS1_OPOFF);
			if (rs1_num)
				*next_addr = ((unsigned long *)regs)[rs1_num];
-			*next_addr += EXTRACT_ITYPE_IMM(op_code);
-		} else if (is_sret_insn(op_code)) {
+			*next_addr += RV_EXTRACT_ITYPE_IMM(op_code);
+		} else if (riscv_insn_is_sret(op_code)) {
			*next_addr = pc;
		} else {
			*next_addr = pc + 4;
@@ -13,8 +13,8 @@
 
	.text
 
-#define FENTRY_RA_OFFSET	12
-#define ABI_SIZE_ON_STACK	72
+#define FENTRY_RA_OFFSET	8
+#define ABI_SIZE_ON_STACK	80
 #define ABI_A0			0
 #define ABI_A1			8
 #define ABI_A2			16
@@ -23,10 +23,10 @@
 #define ABI_A5			40
 #define ABI_A6			48
 #define ABI_A7			56
-#define ABI_RA			64
+#define ABI_T0			64
+#define ABI_RA			72
 
	.macro SAVE_ABI
-	addi	sp, sp, -SZREG
	addi	sp, sp, -ABI_SIZE_ON_STACK
 
	REG_S	a0, ABI_A0(sp)
@@ -37,6 +37,7 @@
	REG_S	a5, ABI_A5(sp)
	REG_S	a6, ABI_A6(sp)
	REG_S	a7, ABI_A7(sp)
+	REG_S	t0, ABI_T0(sp)
	REG_S	ra, ABI_RA(sp)
	.endm
 
@@ -49,24 +50,18 @@
	REG_L	a5, ABI_A5(sp)
	REG_L	a6, ABI_A6(sp)
	REG_L	a7, ABI_A7(sp)
+	REG_L	t0, ABI_T0(sp)
	REG_L	ra, ABI_RA(sp)
 
	addi	sp, sp, ABI_SIZE_ON_STACK
-	addi	sp, sp, SZREG
	.endm
 
 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
	.macro SAVE_ALL
-	addi	sp, sp, -SZREG
	addi	sp, sp, -PT_SIZE_ON_STACK
 
-	REG_S	x1, PT_EPC(sp)
-	addi	sp, sp, PT_SIZE_ON_STACK
-	REG_L	x1, (sp)
-	addi	sp, sp, -PT_SIZE_ON_STACK
+	REG_S	t0, PT_EPC(sp)
	REG_S	x1, PT_RA(sp)
-	REG_L	x1, PT_EPC(sp)
 
	REG_S	x2, PT_SP(sp)
	REG_S	x3, PT_GP(sp)
	REG_S	x4, PT_TP(sp)
@@ -100,15 +95,11 @@
	.endm
 
	.macro RESTORE_ALL
+	REG_L	t0, PT_EPC(sp)
	REG_L	x1, PT_RA(sp)
-	addi	sp, sp, PT_SIZE_ON_STACK
-	REG_S	x1, (sp)
-	addi	sp, sp, -PT_SIZE_ON_STACK
-	REG_L	x1, PT_EPC(sp)
	REG_L	x2, PT_SP(sp)
	REG_L	x3, PT_GP(sp)
	REG_L	x4, PT_TP(sp)
-	REG_L	x5, PT_T0(sp)
	REG_L	x6, PT_T1(sp)
	REG_L	x7, PT_T2(sp)
	REG_L	x8, PT_S0(sp)
@@ -137,17 +128,16 @@
	REG_L	x31, PT_T6(sp)
 
	addi	sp, sp, PT_SIZE_ON_STACK
-	addi	sp, sp, SZREG
	.endm
 #endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
 
 ENTRY(ftrace_caller)
	SAVE_ABI
 
-	addi	a0, ra, -FENTRY_RA_OFFSET
+	addi	a0, t0, -FENTRY_RA_OFFSET
	la	a1, function_trace_op
	REG_L	a2, 0(a1)
-	REG_L	a1, ABI_SIZE_ON_STACK(sp)
+	mv	a1, ra
	mv	a3, sp
 
 ftrace_call:
@@ -155,8 +145,8 @@ ftrace_call:
	call	ftrace_stub
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-	addi	a0, sp, ABI_SIZE_ON_STACK
-	REG_L	a1, ABI_RA(sp)
+	addi	a0, sp, ABI_RA
+	REG_L	a1, ABI_T0(sp)
	addi	a1, a1, -FENTRY_RA_OFFSET
 #ifdef HAVE_FUNCTION_GRAPH_FP_TEST
	mv	a2, s0
@@ -166,17 +156,17 @@ ftrace_graph_call:
	call	ftrace_stub
 #endif
	RESTORE_ABI
-	ret
+	jr	t0
 ENDPROC(ftrace_caller)
 
 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
 ENTRY(ftrace_regs_caller)
	SAVE_ALL
 
-	addi	a0, ra, -FENTRY_RA_OFFSET
+	addi	a0, t0, -FENTRY_RA_OFFSET
	la	a1, function_trace_op
	REG_L	a2, 0(a1)
-	REG_L	a1, PT_SIZE_ON_STACK(sp)
+	mv	a1, ra
	mv	a3, sp
 
 ftrace_regs_call:
@@ -196,6 +186,6 @@ ftrace_graph_regs_call:
 #endif
 
	RESTORE_ALL
-	ret
+	jr	t0
 ENDPROC(ftrace_regs_caller)
 #endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
@@ -268,6 +268,13 @@ static int apply_r_riscv_align_rela(struct module *me, u32 *location,
	return -EINVAL;
 }
 
+static int apply_r_riscv_add16_rela(struct module *me, u32 *location,
+				    Elf_Addr v)
+{
+	*(u16 *)location += (u16)v;
+	return 0;
+}
+
 static int apply_r_riscv_add32_rela(struct module *me, u32 *location,
				    Elf_Addr v)
 {
@@ -282,6 +289,13 @@ static int apply_r_riscv_add64_rela(struct module *me, u32 *location,
	return 0;
 }
 
+static int apply_r_riscv_sub16_rela(struct module *me, u32 *location,
+				    Elf_Addr v)
+{
+	*(u16 *)location -= (u16)v;
+	return 0;
+}
+
 static int apply_r_riscv_sub32_rela(struct module *me, u32 *location,
				    Elf_Addr v)
 {
@@ -315,8 +329,10 @@ static int (*reloc_handlers_rela[]) (struct module *me, u32 *location,
	[R_RISCV_CALL] = apply_r_riscv_call_rela,
	[R_RISCV_RELAX] = apply_r_riscv_relax_rela,
	[R_RISCV_ALIGN] = apply_r_riscv_align_rela,
+	[R_RISCV_ADD16] = apply_r_riscv_add16_rela,
	[R_RISCV_ADD32] = apply_r_riscv_add32_rela,
	[R_RISCV_ADD64] = apply_r_riscv_add64_rela,
+	[R_RISCV_SUB16] = apply_r_riscv_sub16_rela,
	[R_RISCV_SUB32] = apply_r_riscv_sub32_rela,
	[R_RISCV_SUB64] = apply_r_riscv_sub64_rela,
 };
@@ -429,21 +445,6 @@ void *module_alloc(unsigned long size)
 }
 #endif
 
-static const Elf_Shdr *find_section(const Elf_Ehdr *hdr,
-				    const Elf_Shdr *sechdrs,
-				    const char *name)
-{
-	const Elf_Shdr *s, *se;
-	const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
-
-	for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) {
-		if (strcmp(name, secstrs + s->sh_name) == 0)
-			return s;
-	}
-
-	return NULL;
-}
-
 int module_finalize(const Elf_Ehdr *hdr,
		    const Elf_Shdr *sechdrs,
		    struct module *me)
@@ -136,13 +136,6 @@ bool __kprobes simulate_auipc(u32 opcode, unsigned long addr, struct pt_regs *re
 #define branch_offset(opcode) \
	sign_extend32((branch_imm(opcode)), 12)
 
-#define BRANCH_BEQ	0x0
-#define BRANCH_BNE	0x1
-#define BRANCH_BLT	0x4
-#define BRANCH_BGE	0x5
-#define BRANCH_BLTU	0x6
-#define BRANCH_BGEU	0x7
-
 bool __kprobes simulate_branch(u32 opcode, unsigned long addr, struct pt_regs *regs)
 {
	/*
@@ -169,22 +162,22 @@ bool __kprobes simulate_branch(u32 opcode, unsigned long addr, struct pt_regs *r
 
	offset_tmp = branch_offset(opcode);
	switch (branch_funct3(opcode)) {
-	case BRANCH_BEQ:
+	case RVG_FUNCT3_BEQ:
		offset = (rs1_val == rs2_val) ? offset_tmp : 4;
		break;
-	case BRANCH_BNE:
+	case RVG_FUNCT3_BNE:
		offset = (rs1_val != rs2_val) ? offset_tmp : 4;
		break;
-	case BRANCH_BLT:
+	case RVG_FUNCT3_BLT:
		offset = ((long)rs1_val < (long)rs2_val) ? offset_tmp : 4;
		break;
-	case BRANCH_BGE:
+	case RVG_FUNCT3_BGE:
		offset = ((long)rs1_val >= (long)rs2_val) ? offset_tmp : 4;
		break;
-	case BRANCH_BLTU:
+	case RVG_FUNCT3_BLTU:
		offset = (rs1_val < rs2_val) ? offset_tmp : 4;
		break;
-	case BRANCH_BGEU:
+	case RVG_FUNCT3_BGEU:
		offset = (rs1_val >= rs2_val) ? offset_tmp : 4;
		break;
	default:
@@ -3,14 +3,7 @@
 #ifndef _RISCV_KERNEL_PROBES_SIMULATE_INSN_H
 #define _RISCV_KERNEL_PROBES_SIMULATE_INSN_H
 
-#define __RISCV_INSN_FUNCS(name, mask, val) \
-static __always_inline bool riscv_insn_is_##name(probe_opcode_t code) \
-{ \
-	BUILD_BUG_ON(~(mask) & (val)); \
-	return (code & (mask)) == (val); \
-} \
-bool simulate_##name(u32 opcode, unsigned long addr, \
-		     struct pt_regs *regs)
+#include <asm/insn.h>
 
 #define RISCV_INSN_REJECTED(name, code) \
	do { \
@@ -19,9 +12,6 @@ bool simulate_##name(u32 opcode, unsigned long addr, \
	} \
	} while (0)
 
-__RISCV_INSN_FUNCS(system, 0x7f, 0x73);
-__RISCV_INSN_FUNCS(fence, 0x7f, 0x0f);
-
 #define RISCV_INSN_SET_SIMULATE(name, code) \
	do { \
		if (riscv_insn_is_##name(code)) { \
@@ -30,18 +20,9 @@ __RISCV_INSN_FUNCS(fence, 0x7f, 0x0f);
	} \
	} while (0)
 
-__RISCV_INSN_FUNCS(c_j, 0xe003, 0xa001);
-__RISCV_INSN_FUNCS(c_jr, 0xf07f, 0x8002);
-__RISCV_INSN_FUNCS(c_jal, 0xe003, 0x2001);
-__RISCV_INSN_FUNCS(c_jalr, 0xf07f, 0x9002);
-__RISCV_INSN_FUNCS(c_beqz, 0xe003, 0xc001);
-__RISCV_INSN_FUNCS(c_bnez, 0xe003, 0xe001);
-__RISCV_INSN_FUNCS(c_ebreak, 0xffff, 0x9002);
-
-__RISCV_INSN_FUNCS(auipc, 0x7f, 0x17);
-__RISCV_INSN_FUNCS(branch, 0x7f, 0x63);
-
-__RISCV_INSN_FUNCS(jal, 0x7f, 0x6f);
-__RISCV_INSN_FUNCS(jalr, 0x707f, 0x67);
+bool simulate_auipc(u32 opcode, unsigned long addr, struct pt_regs *regs);
+bool simulate_branch(u32 opcode, unsigned long addr, struct pt_regs *regs);
+bool simulate_jal(u32 opcode, unsigned long addr, struct pt_regs *regs);
+bool simulate_jalr(u32 opcode, unsigned long addr, struct pt_regs *regs);
 
 #endif /* _RISCV_KERNEL_PROBES_SIMULATE_INSN_H */
@@ -12,6 +12,9 @@
 EXPORT_SYMBOL(memset);
 EXPORT_SYMBOL(memcpy);
 EXPORT_SYMBOL(memmove);
+EXPORT_SYMBOL(strcmp);
+EXPORT_SYMBOL(strlen);
+EXPORT_SYMBOL(strncmp);
 EXPORT_SYMBOL(__memset);
 EXPORT_SYMBOL(__memcpy);
 EXPORT_SYMBOL(__memmove);
@@ -300,6 +300,9 @@ void __init setup_arch(char **cmdline_p)
 	riscv_init_cbom_blocksize();
 	riscv_fill_hwcap();
 	apply_boot_alternatives();
+	if (IS_ENABLED(CONFIG_RISCV_ISA_ZICBOM) &&
+	    riscv_isa_extension_available(NULL, ZICBOM))
+		riscv_noncoherent_supported();
 }
 
 static int __init topology_init(void)
@@ -29,22 +29,46 @@ int show_unhandled_signals = 1;
 
 static DEFINE_SPINLOCK(die_lock);
 
+static void dump_kernel_instr(const char *loglvl, struct pt_regs *regs)
+{
+	char str[sizeof("0000 ") * 12 + 2 + 1], *p = str;
+	const u16 *insns = (u16 *)instruction_pointer(regs);
+	long bad;
+	u16 val;
+	int i;
+
+	for (i = -10; i < 2; i++) {
+		bad = get_kernel_nofault(val, &insns[i]);
+		if (!bad) {
+			p += sprintf(p, i == 0 ? "(%04hx) " : "%04hx ", val);
+		} else {
+			printk("%sCode: Unable to access instruction at 0x%px.\n",
+			       loglvl, &insns[i]);
+			return;
+		}
+	}
+	printk("%sCode: %s\n", loglvl, str);
+}
+
 void die(struct pt_regs *regs, const char *str)
 {
 	static int die_counter;
 	int ret;
+	long cause;
+	unsigned long flags;
 
 	oops_enter();
 
-	spin_lock_irq(&die_lock);
+	spin_lock_irqsave(&die_lock, flags);
 	console_verbose();
 	bust_spinlocks(1);
 
 	pr_emerg("%s [#%d]\n", str, ++die_counter);
 	print_modules();
-	if (regs)
+	if (regs) {
 		show_regs(regs);
+		dump_kernel_instr(KERN_EMERG, regs);
+	}
 
-	ret = notify_die(DIE_OOPS, str, regs, 0, regs->cause, SIGSEGV);
+	cause = regs ? regs->cause : -1;
+	ret = notify_die(DIE_OOPS, str, regs, 0, cause, SIGSEGV);
 
@@ -54,7 +78,7 @@ void die(struct pt_regs *regs, const char *str)
 
 	bust_spinlocks(0);
 	add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
-	spin_unlock_irq(&die_lock);
+	spin_unlock_irqrestore(&die_lock, flags);
 	oops_exit();
 
 	if (in_interrupt())
@@ -22,11 +22,6 @@ struct vdso_data {
 };
 #endif
 
-extern char vdso_start[], vdso_end[];
-#ifdef CONFIG_COMPAT
-extern char compat_vdso_start[], compat_vdso_end[];
-#endif
-
 enum vvar_pages {
 	VVAR_DATA_PAGE_OFFSET,
 	VVAR_TIMENS_PAGE_OFFSET,
@@ -40,6 +40,13 @@ SECTIONS
 	. = 0x800;
 	.text		: { *(.text .text.*) }		:text
 
+	. = ALIGN(4);
+	.alternative : {
+		__alt_start = .;
+		*(.alternative)
+		__alt_end = .;
+	}
+
 	.data		: {
 		*(.got.plt) *(.got)
 		*(.data .data.* .gnu.linkonce.d.*)
@@ -5,6 +5,7 @@
  */
 
 #define RO_EXCEPTION_TABLE_ALIGN	4
+#define RUNTIME_DISCARD_EXIT
 
 #ifdef CONFIG_XIP_KERNEL
 #include "vmlinux-xip.lds.S"
@@ -85,6 +86,9 @@ SECTIONS
 	/* Start of init data section */
 	__init_data_begin = .;
 	INIT_DATA_SECTION(16)
+	.init.bss : {
+		*(.init.bss)	/* from the EFI stub */
+	}
 	.exit.data :
 	{
 		EXIT_DATA
@@ -95,6 +99,10 @@ SECTIONS
 		*(.rel.dyn*)
 	}
 
+	.rela.dyn : {
+		*(.rela*)
+	}
+
 	__init_data_end = .;
 
 	. = ALIGN(8);
@@ -140,6 +148,7 @@ SECTIONS
 	STABS_DEBUG
 	DWARF_DEBUG
 	ELF_DETAILS
+	.riscv.attributes 0 : { *(.riscv.attributes) }
 
 	DISCARDS
 }
@@ -15,8 +15,7 @@
 #include <asm/hwcap.h>
 #include <asm/insn-def.h>
 
-#define has_svinval()	\
-	static_branch_unlikely(&riscv_isa_ext_keys[RISCV_ISA_EXT_KEY_SVINVAL])
+#define has_svinval()	riscv_has_extension_unlikely(RISCV_ISA_EXT_SVINVAL)
 
 void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,
 					  gpa_t gpa, gpa_t gpsz,
@@ -3,6 +3,9 @@ lib-y			+= delay.o
 lib-y			+= memcpy.o
 lib-y			+= memset.o
 lib-y			+= memmove.o
+lib-y			+= strcmp.o
+lib-y			+= strlen.o
+lib-y			+= strncmp.o
 lib-$(CONFIG_MMU)	+= uaccess.o
 lib-$(CONFIG_64BIT)	+= tishift.o
@@ -0,0 +1,121 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include <linux/linkage.h>
+#include <asm/asm.h>
+#include <asm-generic/export.h>
+#include <asm/alternative-macros.h>
+#include <asm/errata_list.h>
+
+/* int strcmp(const char *cs, const char *ct) */
+SYM_FUNC_START(strcmp)
+
+	ALTERNATIVE("nop", "j strcmp_zbb", 0, RISCV_ISA_EXT_ZBB, CONFIG_RISCV_ISA_ZBB)
+
+	/*
+	 * Returns
+	 *   a0 - comparison result, value like strcmp
+	 *
+	 * Parameters
+	 *   a0 - string1
+	 *   a1 - string2
+	 *
+	 * Clobbers
+	 *   t0, t1
+	 */
+1:
+	lbu	t0, 0(a0)
+	lbu	t1, 0(a1)
+	addi	a0, a0, 1
+	addi	a1, a1, 1
+	bne	t0, t1, 2f
+	bnez	t0, 1b
+	li	a0, 0
+	ret
+2:
+	/*
+	 * strcmp only needs to return (< 0, 0, > 0) values
+	 * not necessarily -1, 0, +1
+	 */
+	sub	a0, t0, t1
+	ret
+
+/*
+ * Variant of strcmp using the ZBB extension if available
+ */
+#ifdef CONFIG_RISCV_ISA_ZBB
+strcmp_zbb:
+
+.option push
+.option arch,+zbb
+
+	/*
+	 * Returns
+	 *   a0 - comparison result, value like strcmp
+	 *
+	 * Parameters
+	 *   a0 - string1
+	 *   a1 - string2
+	 *
+	 * Clobbers
+	 *   t0, t1, t2, t3, t4, t5
+	 */
+
+	or	t2, a0, a1
+	li	t4, -1
+	and	t2, t2, SZREG-1
+	bnez	t2, 3f
+
+	/* Main loop for aligned string.  */
+	.p2align 3
+1:
+	REG_L	t0, 0(a0)
+	REG_L	t1, 0(a1)
+	orc.b	t3, t0
+	bne	t3, t4, 2f
+	addi	a0, a0, SZREG
+	addi	a1, a1, SZREG
+	beq	t0, t1, 1b
+
+	/*
+	 * Words don't match, and no null byte in the first
+	 * word. Get bytes in big-endian order and compare.
+	 */
+#ifndef CONFIG_CPU_BIG_ENDIAN
+	rev8	t0, t0
+	rev8	t1, t1
+#endif
+
+	/* Synthesize (t0 >= t1) ? 1 : -1 in a branchless sequence.  */
+	sltu	a0, t0, t1
+	neg	a0, a0
+	ori	a0, a0, 1
+	ret
+
+2:
+	/*
+	 * Found a null byte.
+	 * If words don't match, fall back to simple loop.
+	 */
+	bne	t0, t1, 3f
+
+	/* Otherwise, strings are equal.  */
+	li	a0, 0
+	ret
+
+	/* Simple loop for misaligned strings.  */
+	.p2align 3
+3:
+	lbu	t0, 0(a0)
+	lbu	t1, 0(a1)
+	addi	a0, a0, 1
+	addi	a1, a1, 1
+	bne	t0, t1, 4f
+	bnez	t0, 3b
+
+4:
+	sub	a0, t0, t1
+	ret
+
+.option pop
+#endif
+SYM_FUNC_END(strcmp)
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include <linux/linkage.h>
+#include <asm/asm.h>
+#include <asm-generic/export.h>
+#include <asm/alternative-macros.h>
+#include <asm/errata_list.h>
+
+/* int strlen(const char *s) */
+SYM_FUNC_START(strlen)
+
+	ALTERNATIVE("nop", "j strlen_zbb", 0, RISCV_ISA_EXT_ZBB, CONFIG_RISCV_ISA_ZBB)
+
+	/*
+	 * Returns
+	 *   a0 - string length
+	 *
+	 * Parameters
+	 *   a0 - String to measure
+	 *
+	 * Clobbers:
+	 *   t0, t1
+	 */
+	mv	t1, a0
+1:
+	lbu	t0, 0(t1)
+	beqz	t0, 2f
+	addi	t1, t1, 1
+	j	1b
+2:
+	sub	a0, t1, a0
+	ret
+
+/*
+ * Variant of strlen using the ZBB extension if available
+ */
+#ifdef CONFIG_RISCV_ISA_ZBB
+strlen_zbb:
+
+#ifdef CONFIG_CPU_BIG_ENDIAN
+# define CZ	clz
+# define SHIFT	sll
+#else
+# define CZ	ctz
+# define SHIFT	srl
+#endif
+
+.option push
+.option arch,+zbb
+
+	/*
+	 * Returns
+	 *   a0 - string length
+	 *
+	 * Parameters
+	 *   a0 - String to measure
+	 *
+	 * Clobbers
+	 *   t0, t1, t2, t3
+	 */
+
+	/* Number of irrelevant bytes in the first word. */
+	andi	t2, a0, SZREG-1
+
+	/* Align pointer. */
+	andi	t0, a0, -SZREG
+
+	li	t3, SZREG
+	sub	t3, t3, t2
+	slli	t2, t2, 3
+
+	/* Get the first word.  */
+	REG_L	t1, 0(t0)
+
+	/*
+	 * Shift away the partial data we loaded to remove the irrelevant bytes
+	 * preceding the string with the effect of adding NUL bytes at the
+	 * end of the string's first word.
+	 */
+	SHIFT	t1, t1, t2
+
+	/* Convert non-NUL into 0xff and NUL into 0x00.  */
+	orc.b	t1, t1
+
+	/* Convert non-NUL into 0x00 and NUL into 0xff.  */
+	not	t1, t1
+
+	/*
+	 * Search for the first set bit (corresponding to a NUL byte in the
+	 * original chunk).
+	 */
+	CZ	t1, t1
+
+	/*
+	 * The first chunk is special: compare against the number
+	 * of valid bytes in this chunk.
+	 */
+	srli	a0, t1, 3
+	bgtu	t3, a0, 3f
+
+	/* Prepare for the word comparison loop. */
+	addi	t2, t0, SZREG
+	li	t3, -1
+
+	/*
+	 * Our critical loop is 4 instructions and processes data in
+	 * 4 byte or 8 byte chunks.
+	 */
+	.p2align 3
+1:
+	REG_L	t1, SZREG(t0)
+	addi	t0, t0, SZREG
+	orc.b	t1, t1
+	beq	t1, t3, 1b
+2:
+	not	t1, t1
+	CZ	t1, t1
+
+	/* Get number of processed words.  */
+	sub	t2, t0, t2
+
+	/* Add number of characters in the first word.  */
+	add	a0, a0, t2
+	srli	t1, t1, 3
+
+	/* Add number of characters in the last word.  */
+	add	a0, a0, t1
+3:
+	ret
+
+.option pop
+#endif
+SYM_FUNC_END(strlen)
@@ -0,0 +1,139 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include <linux/linkage.h>
+#include <asm/asm.h>
+#include <asm-generic/export.h>
+#include <asm/alternative-macros.h>
+#include <asm/errata_list.h>
+
+/* int strncmp(const char *cs, const char *ct, size_t count) */
+SYM_FUNC_START(strncmp)
+
+	ALTERNATIVE("nop", "j strncmp_zbb", 0, RISCV_ISA_EXT_ZBB, CONFIG_RISCV_ISA_ZBB)
+
+	/*
+	 * Returns
+	 *   a0 - comparison result, value like strncmp
+	 *
+	 * Parameters
+	 *   a0 - string1
+	 *   a1 - string2
+	 *   a2 - number of characters to compare
+	 *
+	 * Clobbers
+	 *   t0, t1, t2
+	 */
+	li	t2, 0
+1:
+	beq	a2, t2, 2f
+	lbu	t0, 0(a0)
+	lbu	t1, 0(a1)
+	addi	a0, a0, 1
+	addi	a1, a1, 1
+	bne	t0, t1, 3f
+	addi	t2, t2, 1
+	bnez	t0, 1b
+2:
+	li	a0, 0
+	ret
+3:
+	/*
+	 * strncmp only needs to return (< 0, 0, > 0) values
+	 * not necessarily -1, 0, +1
+	 */
+	sub	a0, t0, t1
+	ret
+
+/*
+ * Variant of strncmp using the ZBB extension if available
+ */
+#ifdef CONFIG_RISCV_ISA_ZBB
+strncmp_zbb:
+
+.option push
+.option arch,+zbb
+
+	/*
+	 * Returns
+	 *   a0 - comparison result, like strncmp
+	 *
+	 * Parameters
+	 *   a0 - string1
+	 *   a1 - string2
+	 *   a2 - number of characters to compare
+	 *
+	 * Clobbers
+	 *   t0, t1, t2, t3, t4, t5, t6
+	 */
+
+	or	t2, a0, a1
+	li	t5, -1
+	and	t2, t2, SZREG-1
+	add	t4, a0, a2
+	bnez	t2, 4f
+
+	/* Adjust limit for fast-path.  */
+	andi	t6, t4, -SZREG
+
+	/* Main loop for aligned string.  */
+	.p2align 3
+1:
+	bgt	a0, t6, 3f
+	REG_L	t0, 0(a0)
+	REG_L	t1, 0(a1)
+	orc.b	t3, t0
+	bne	t3, t5, 2f
+	addi	a0, a0, SZREG
+	addi	a1, a1, SZREG
+	beq	t0, t1, 1b
+
+	/*
+	 * Words don't match, and no null byte in the first
+	 * word. Get bytes in big-endian order and compare.
+	 */
+#ifndef CONFIG_CPU_BIG_ENDIAN
+	rev8	t0, t0
+	rev8	t1, t1
+#endif
+
+	/* Synthesize (t0 >= t1) ? 1 : -1 in a branchless sequence.  */
+	sltu	a0, t0, t1
+	neg	a0, a0
+	ori	a0, a0, 1
+	ret
+
+2:
+	/*
+	 * Found a null byte.
+	 * If words don't match, fall back to simple loop.
+	 */
+	bne	t0, t1, 3f
+
+	/* Otherwise, strings are equal.  */
+	li	a0, 0
+	ret
+
+	/* Simple loop for misaligned strings.  */
+3:
+	/* Restore limit for slow-path.  */
+	.p2align 3
+4:
+	bge	a0, t4, 6f
+	lbu	t0, 0(a0)
+	lbu	t1, 0(a1)
+	addi	a0, a0, 1
+	addi	a1, a1, 1
+	bne	t0, t1, 5f
+	bnez	t0, 4b
+
+5:
+	sub	a0, t0, t1
+	ret
+
+6:
+	li	a0, 0
+	ret
+
+.option pop
+#endif
SYM_FUNC_END(strncmp)
@@ -267,10 +267,12 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
 
-	if (!user_mode(regs) && addr < TASK_SIZE &&
-	    unlikely(!(regs->status & SR_SUM)))
-		die_kernel_fault("access to user memory without uaccess routines",
-				 addr, regs);
+	if (!user_mode(regs) && addr < TASK_SIZE && unlikely(!(regs->status & SR_SUM))) {
+		if (fixup_exception(regs))
+			return;
+
+		die_kernel_fault("access to user memory without uaccess routines", addr, regs);
+	}
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 
@@ -2,6 +2,7 @@
 OBJECT_FILES_NON_STANDARD := y
 
 purgatory-y := purgatory.o sha256.o entry.o string.o ctype.o memcpy.o memset.o
+purgatory-y += strcmp.o strlen.o strncmp.o
 
 targets += $(purgatory-y)
 PURGATORY_OBJS = $(addprefix $(obj)/,$(purgatory-y))
@@ -18,6 +19,15 @@ $(obj)/memcpy.o: $(srctree)/arch/riscv/lib/memcpy.S FORCE
 $(obj)/memset.o: $(srctree)/arch/riscv/lib/memset.S FORCE
 	$(call if_changed_rule,as_o_S)
 
+$(obj)/strcmp.o: $(srctree)/arch/riscv/lib/strcmp.S FORCE
+	$(call if_changed_rule,as_o_S)
+
+$(obj)/strlen.o: $(srctree)/arch/riscv/lib/strlen.S FORCE
+	$(call if_changed_rule,as_o_S)
+
+$(obj)/strncmp.o: $(srctree)/arch/riscv/lib/strncmp.S FORCE
+	$(call if_changed_rule,as_o_S)
+
 $(obj)/sha256.o: $(srctree)/lib/crypto/sha256.c FORCE
 	$(call if_changed_rule,cc_o_c)
 
@@ -77,6 +87,9 @@ CFLAGS_ctype.o += $(PURGATORY_CFLAGS)
 AFLAGS_REMOVE_entry.o		+= -Wa,-gdwarf-2
 AFLAGS_REMOVE_memcpy.o		+= -Wa,-gdwarf-2
 AFLAGS_REMOVE_memset.o		+= -Wa,-gdwarf-2
+AFLAGS_REMOVE_strcmp.o		+= -Wa,-gdwarf-2
+AFLAGS_REMOVE_strlen.o		+= -Wa,-gdwarf-2
+AFLAGS_REMOVE_strncmp.o	+= -Wa,-gdwarf-2
 
 $(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE
 		$(call if_changed,ld)
@@ -93,6 +93,11 @@ disas() {
 		${CROSS_COMPILE}strip $t.o
 	fi
 
+	if [ "$ARCH" = "riscv" ]; then
+		OBJDUMPFLAGS="-M no-aliases --section=.text -D"
+		${CROSS_COMPILE}strip $t.o
+	fi
+
 	if [ $pc_sub -ne 0 ]; then
 		if [ $PC ]; then
 			adj_vma=$(( $PC - $pc_sub ))
@@ -126,8 +131,13 @@ get_substr_opcode_bytes_num()
 	do
 		substr+="$opc"
 
+		opcode="$substr"
+		if [ "$ARCH" = "riscv" ]; then
+			opcode=$(echo $opcode | tr ' ' '\n' | tac | tr -d '\n')
+		fi
+
 		# return if opcode bytes do not match @opline anymore
-		if ! echo $opline | grep -q "$substr";
+		if ! echo $opline | grep -q "$opcode";
 		then
 			break
 		fi