Commit Graph

17 Commits

Author SHA1 Message Date
Chen Lifu c08b4848f5
riscv: lib: uaccess: fix CSR_STATUS SR_SUM bit
Since commit 5d8544e2d0 ("RISC-V: Generic library routines and assembly")
and commit ebcbd75e39 ("riscv: Fix the bug in memory access fixup code"),
if __clear_user and __copy_user return from a fixup branch, the
CSR_STATUS SR_SUM bit is left set. This is a vulnerability: S-mode
memory accesses to pages that are accessible by U-mode will succeed.
The fixup code must clear the SR_SUM bit to disable S-mode access to
U-mode memory.
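
A minimal sketch of the corrected fixup shape (the label and the
byte-count handling are illustrative; in lib/uaccess.S t6 already holds
SR_SUM from the prologue):

        /* prologue: enable S-mode access to U-mode memory */
        li      t6, SR_SUM
        csrs    CSR_STATUS, t6
        ...
        /* fixup: must clear SR_SUM again, not set it */
10:
        csrc    CSR_STATUS, t6  /* disable S-mode access to U-mode memory */
        mv      a0, a2          /* illustrative: return bytes not processed */
        ret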

Fixes: 5d8544e2d0 ("RISC-V: Generic library routines and assembly")
Fixes: ebcbd75e39 ("riscv: Fix the bug in memory access fixup code")
Signed-off-by: Chen Lifu <chenlifu@huawei.com>
Reviewed-by: Ben Dooks <ben.dooks@codethink.co.uk>
Link: https://lore.kernel.org/r/20220615014714.1650349-1-chenlifu@huawei.com
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-08-10 14:06:31 -07:00
Jisheng Zhang 6dd10d9166
riscv: extable: consolidate definitions
This is a riscv port of commit 819771cc28 ("arm64: extable:
consolidate definitions").
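
For reference, the consolidated entry shape on arm64, which this port
mirrors, is roughly (macro and field names follow the arm64 commit; the
riscv version is analogous):

        .macro  __ASM_EXTABLE_RAW insn, fixup, type, data
        .pushsection    __ex_table, "a"
        .balign         4
        .long           (\insn - .)     /* relative insn offset */
        .long           (\fixup - .)    /* relative fixup offset */
        .short          (\type)         /* handler type, e.g. EX_TYPE_FIXUP */
        .short          (\data)         /* handler-specific data */
        .popsection
        .endm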

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-01-05 17:52:47 -08:00
Jisheng Zhang 9d504f9aa5
riscv: lib: uaccess: fold fixups into body
uaccess functions such as __asm_copy_to_user(), __asm_copy_from_user()
and __clear_user() place their exception fixups in the `.fixup` section
without any clear association with themselves. If we backtrace the
fixup code, it is symbolized as an offset from the nearest preceding
symbol.

As arm64 does, move the fixups into the body of the functions
themselves, after the usual fast-path returns.
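
The resulting layout, as a sketch (the label and the byte-count
computation are illustrative):

ENTRY(__clear_user)
        ...
        li      a0, 0
        ret                     /* usual fast-path return */
        /* fixup lives in the body, so it symbolizes as __clear_user+off */
10:
        csrc    CSR_STATUS, t6  /* disable access to user memory */
        mv      a0, a1          /* illustrative: bytes not cleared */
        ret
ENDPROC(__clear_user)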

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-01-05 17:52:39 -08:00
Jisheng Zhang bb1f85d604
riscv: switch to relative exception tables
As other architectures such as arm64 and x86 do, use offsets relative
to the exception table entry values rather than absolute addresses for
both the exception location and the fixup.

However, a RISC-V label difference will actually produce two
relocations, a pair of R_RISCV_ADD32 and R_RISCV_SUB32. Take the simple
code below for example:

$ cat test.S
.section .text
1:
        nop
.section __ex_table,"a"
        .balign 4
        .long (1b - .)
.previous

$ riscv64-linux-gnu-gcc -c test.S
$ riscv64-linux-gnu-readelf -r test.o
Relocation section '.rela__ex_table' at offset 0x100 contains 2 entries:
  Offset          Info           Type           Sym. Value    Sym. Name + Addend
000000000000  000600000023 R_RISCV_ADD32     0000000000000000 .L1^B1 + 0
000000000000  000500000027 R_RISCV_SUB32     0000000000000000 .L0  + 0

modpost will complain about the R_RISCV_SUB32 relocation, so we need to
patch modpost.c to skip this relocation for the .rela__ex_table
section.

After this patch, the __ex_table section size of a defconfig vmlinux is
reduced from 7072 bytes to 3536 bytes.
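
Each entry then stores two 32-bit offsets instead of two absolute
pointers; as a sketch, extending the example above (labels
illustrative):

100:    REG_L   a4, 0(a1)       /* the instruction that may fault */
        ...
        .pushsection __ex_table, "a"
        .balign 4
        .long (100b - .)        /* offset to the exception location */
        .long (.Lfixup - .)     /* offset to the fixup code */
        .popsection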

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-01-05 17:52:20 -08:00
Akira Tsukamoto ea196c548c
riscv: __asm_copy_to-from_user: Fix: Typos in comments
Fix typos and grammar mistakes in the comments and use a more intuitive
label name.

Signed-off-by: Akira Tsukamoto <akira.tsukamoto@gmail.com>
Fixes: ca6eaaa210 ("riscv: __asm_copy_to-from_user: Optimize unaligned memory access and pipeline stall")
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-07-23 17:49:12 -07:00
Akira Tsukamoto d4b3e0105e
riscv: __asm_copy_to-from_user: Remove unnecessary size check
Clean up:

A size of 0 will be evaluated in the next step anyway, so the check is
not required here.

Signed-off-by: Akira Tsukamoto <akira.tsukamoto@gmail.com>
Fixes: ca6eaaa210 ("riscv: __asm_copy_to-from_user: Optimize unaligned memory access and pipeline stall")
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-07-23 17:49:07 -07:00
Akira Tsukamoto 22b5f16ffe
riscv: __asm_copy_to-from_user: Fix: fail on RV32
There was a bug when converting bytes to bits when the cpu was rv32.

Register a3 contains the number of bytes, and multiplying it by 8
yields the number of bits. LGREG is 2 for RV32 and 3 for RV64, so to
multiply by 8 the shift amount must always be the constant 3. The 2 was
mistakenly used for rv32.
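
The fix, as a sketch (register names illustrative):

        /* a3 = byte count; bits = bytes * 8, so shift by the constant 3 */
        slli    t3, a3, 3       /* not LGREG, which is 2 on RV32 */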

Signed-off-by: Akira Tsukamoto <akira.tsukamoto@gmail.com>
Fixes: ca6eaaa210 ("riscv: __asm_copy_to-from_user: Optimize unaligned memory access and pipeline stall")
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-07-23 17:49:01 -07:00
Akira Tsukamoto 6010d300f9
riscv: __asm_copy_to-from_user: Fix: overrun copy
There were two causes of the overrun memory access.

First, the threshold size was too small: aligning dst requires one
SZREG and the unrolled word copy requires 8*SZREG, so the total must be
at least 9*SZREG.

Second, inside the unrolled copy, subtracting -(8*SZREG-1) made the
iteration run one extra loop. The proper value is -(8*SZREG).
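
Both fixes, as a sketch (register and label names are illustrative):

        li      a3, 9*SZREG     /* 1*SZREG to align dst + 8*SZREG per pass */
        bltu    a2, a3, .Lbyte_copy
        ...
        addi    t0, t0, -(8*SZREG)      /* not -(8*SZREG-1), which ran one
                                           extra iteration */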

Signed-off-by: Akira Tsukamoto <akira.tsukamoto@gmail.com>
Fixes: ca6eaaa210 ("riscv: __asm_copy_to-from_user: Optimize unaligned memory access and pipeline stall")
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-07-23 17:48:52 -07:00
Akira Tsukamoto ca6eaaa210
riscv: __asm_copy_to-from_user: Optimize unaligned memory access and pipeline stall
This patch reduces cpu usage dramatically in kernel space, especially
for applications that make syscalls with large buffer sizes, such as
network applications. The main reason is that every unaligned memory
access raises an exception and switches between S-mode and M-mode,
causing large overhead.

First, copy single bytes until the destination address reaches the
first word-aligned boundary. This is the preparation before the bulk
aligned word copy.

The destination address is aligned now, but oftentimes the source
address is not. To reduce unaligned memory accesses, read the data from
the source on aligned boundaries; the loaded data then carries an
offset, so combine it with the data from the next iteration, fixing the
offset with shifts, before writing to the destination. The majority of
the copy-speed improvement comes from this shift copy.
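
A sketch of the shift copy (simplified from the hand-written loop in
lib/uaccess.S; register roles are illustrative):

        /* t3 = 8 * (src offset), t4 = SZREG*8 - t3, a5 = previous word */
1:
        srl     a4, a5, t3      /* low bits come from the previous word */
        REG_L   a5, SZREG(a1)   /* next load stays on an aligned boundary */
        addi    a1, a1, SZREG
        sll     a2, a5, t4      /* high bits come from the new word */
        or      a2, a2, a4      /* combine into one whole word */
        REG_S   a2, 0(a0)       /* aligned store to the destination */
        addi    a0, a0, SZREG
        bltu    a0, t0, 1b      /* t0 = end of the aligned region */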

In the lucky situation that both the source and destination addresses
are on aligned boundaries, perform register-sized loads and stores to
copy the data. Without unrolling this would reduce the speed, since a
store instruction that uses a register immediately after the load into
that register would stall the pipeline.
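
A sketch of the unrolled aligned copy (the real loop unrolls by eight
words; four are shown here):

2:
        REG_L   a4,       0(a1)         /* issue all loads first ... */
        REG_L   a5,   SZREG(a1)
        REG_L   a6, 2*SZREG(a1)
        REG_L   a7, 3*SZREG(a1)
        REG_S   a4,       0(a0)         /* ... so no store consumes a */
        REG_S   a5,   SZREG(a0)         /* register right after its load */
        REG_S   a6, 2*SZREG(a0)
        REG_S   a7, 3*SZREG(a0)
        addi    a1, a1, 4*SZREG
        addi    a0, a0, 4*SZREG
        bltu    a0, t0, 2b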

At last, copy the remainder one byte at a time.

Signed-off-by: Akira Tsukamoto <akira.tsukamoto@gmail.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2021-07-06 15:09:48 -07:00
Palmer Dabbelt abc71bf0a7
RISC-V: Stop using LOCAL for the uaccess fixups
LLVM's integrated assembler doesn't support the LOCAL directive, which we're
using when generating our uaccess fixup tables.  Luckily the table fragment is
small enough that there's only one internal symbol, so using a relative symbol
reference doesn't really complicate anything.
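
The fixup macro now uses a numeric local label and a backward reference
instead of LOCAL, roughly:

        .macro fixup op reg addr lbl
100:
        \op \reg, \addr
        .section __ex_table, "a"
        .balign RISCV_SZPTR
        RISCV_PTR 100b, \lbl
        .previous
        .endm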

Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-03-03 10:45:14 -08:00
Luc Van Oostenryck 4d47ce158e riscv: fix compile failure with EXPORT_SYMBOL() & !MMU
When support for !MMU was added, the declarations of
__asm_copy_to_user() & __asm_copy_from_user() were #ifdefed out, hence
their EXPORT_SYMBOL() gives error messages like:
  .../riscv_ksyms.c:13:15: error: '__asm_copy_to_user' undeclared here
  .../riscv_ksyms.c:14:15: error: '__asm_copy_from_user' undeclared here

Since these symbols are not defined with !MMU, it's wrong to export
them. The same goes for __clear_user() (even though this one is also
declared in include/asm-generic/uaccess.h and thus doesn't give an
error message).

Fix this by doing the EXPORT_SYMBOL() directly where these symbols
are defined: inside lib/uaccess.S itself.
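
The shape of the fix, as a sketch:

#include <asm-generic/export.h>

ENTRY(__asm_copy_to_user)
        ...
ENDPROC(__asm_copy_to_user)
EXPORT_SYMBOL(__asm_copy_to_user)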

Fixes: 6bd33e1ece ("riscv: add nommu support")
Reported-by: kbuild test robot <lkp@intel.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
2019-12-27 21:44:36 -08:00
Christoph Hellwig a4c3733d32 riscv: abstract out CSR names for supervisor vs machine mode
Many of the privileged CSRs exist in a supervisor and machine version
that are used very similarly.  Provide versions of the CSR names and
fields that map to either the S-mode or M-mode variant depending on
a new CONFIG_RISCV_M_MODE kconfig symbol.
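
The mapping, roughly as it appears in asm/csr.h (abridged):

#ifdef CONFIG_RISCV_M_MODE
# define CSR_STATUS     CSR_MSTATUS
# define CSR_TVEC       CSR_MTVEC
# define SR_IE          SR_MIE
#else
# define CSR_STATUS     CSR_SSTATUS
# define CSR_TVEC       CSR_STVEC
# define SR_IE          SR_SIE
#endif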

Contains contributions from Damien Le Moal <Damien.LeMoal@wdc.com>
and Paul Walmsley <paul.walmsley@sifive.com>.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de> # for drivers/clocksource, drivers/irqchip
[paul.walmsley@sifive.com: updated to apply]
Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
2019-11-05 09:20:42 -08:00
Bin Meng 4f3f900846 riscv: Using CSR numbers to access CSRs
Since commit a3182c91ef ("RISC-V: Access CSRs using CSR numbers"),
we should prefer accessing CSRs using their CSR numbers, but there
are several leftovers like sstatus / sptbr that we missed.
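
In lib/uaccess.S this amounts to replacements of the form (sketch):

        csrs    sstatus, t6             /* before: symbolic CSR name */
        csrs    CSR_SSTATUS, t6         /* after: CSR number from asm/csr.h */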

Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
2019-08-30 11:04:19 -07:00
Palmer Dabbelt e0e0c87c02
RISC-V: Make our port sparse-clean
This patch set contains a handful of fixes that clean up the sparse
results for the RISC-V port.  These patches shouldn't have any
functional difference.  The patches:

* Use NULL instead of 0.
* Clean up __user annotations.
* Split __copy_user into two functions, to make the __user annotations
  valid.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
2018-06-11 09:09:49 -07:00
Luc Van Oostenryck 86406d51d3
riscv: split the declaration of __copy_user
We use a single __copy_user assembly function to copy memory both from
and to userspace. While this works, it triggers sparse errors because
we're implicitly casting between the kernel and user address spaces by
calling __copy_user.

This patch splits the C declaration into a pair of functions,
__asm_copy_{to,from}_user, that have sane semantics WRT __user. This
split makes things fine from sparse's point of view. The assembly
implementation keeps a single definition but adds a double ENTRY() for
it, one for __asm_copy_to_user and another for __asm_copy_from_user.
The result is a sparse-safe implementation that pays no performance or
code size penalty.
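
On the assembly side the single body simply gains a second entry point,
as a sketch:

ENTRY(__asm_copy_to_user)
ENTRY(__asm_copy_from_user)
        /* one shared copy loop serves both directions */
        ...
ENDPROC(__asm_copy_to_user)
ENDPROC(__asm_copy_from_user)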

Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2018-06-09 12:34:31 -07:00
Alan Kao ebcbd75e39
riscv: Fix the bug in memory access fixup code
A piece of fixup code is currently shared by __copy_user and
__clear_user. It first disables access to user-space memory and then
returns the "n" argument, which represents the number of bytes not
processed. However, __copy_user's "n" is in register a2, while
__clear_user's is in a1, so the shared code returns the wrong value for
one of them and causes errors for programs like the setdomainname02
testcase in LTP.

This patch fixes the issue by separating their fixup code, so each
returns the right value and the kernel can handle the fault properly.
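
The separated fixups, as a sketch (register choices follow the
description above; the SR_SUM handling shown is the corrected form from
the top commit in this log):

        /* __copy_user's fixup: remaining byte count lives in a2 */
10:
        csrc    sstatus, t6     /* disable access to user memory */
        mv      a0, a2
        ret
        /* __clear_user's fixup: remaining byte count lives in a1 */
11:
        csrc    sstatus, t6
        mv      a0, a1
        ret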

Signed-off-by: Alan Kao <alankao@andestech.com>
Cc: Greentime Hu <greentime@andestech.com>
Cc: Zong Li <zong@andestech.com>
Cc: Vincent Chen <vincentc@andestech.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2018-06-04 13:33:31 -07:00
Palmer Dabbelt 5d8544e2d0 RISC-V: Generic library routines and assembly
This patch contains code that is more specific to the RISC-V ISA than it
is to Linux.  It contains string and math operations, C wrappers for
various assembly instructions, stack walking code, and uaccess.

Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
2017-09-26 15:26:45 -07:00