/* SPDX-License-Identifier: GPL-2.0 */
/*
* arch/alpha/kernel/entry.S
*
* Kernel entry-points.
*/
#include <asm/asm-offsets.h>
#include <asm/thread_info.h>
#include <asm/pal.h>
#include <asm/errno.h>
#include <asm/unistd.h>
.text
.set noat
.cfi_sections .debug_frame
/* Stack offsets. */
#define SP_OFF 184
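/*
* SP_OFF is the size of the pt_regs area built by SAVE_ALL below:
* 23 quadwords (184 bytes) covering $0-$8, $19-$28, the cached HAE
* value and the copies of the PAL-saved $16-$18.  The 48-byte frame
* pushed by PALcode itself sits above it, which is why the saved PS
* is read from SP_OFF($sp) on the exit paths.
*/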
#define SWITCH_STACK_SIZE 64
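/*
* SWITCH_STACK_SIZE is the frame built by do_switch_stack: eight
* quadwords (64 bytes) for the callee-saved $9-$15 plus $26.
*/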
.macro CFI_START_OSF_FRAME func
.align 4
.globl \func
.type \func,@function
\func:
.cfi_startproc simple
.cfi_return_column 64
.cfi_def_cfa $sp, 48
.cfi_rel_offset 64, 8
.cfi_rel_offset $gp, 16
.cfi_rel_offset $16, 24
.cfi_rel_offset $17, 32
.cfi_rel_offset $18, 40
.endm
.macro CFI_END_OSF_FRAME func
.cfi_endproc
.size \func, . - \func
.endm
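/*
* These macros emit CFI for the 48-byte exception frame that PALcode
* pushes before entering the kernel: ps at 0(sp), pc at 8(sp), gp at
* 16(sp) and a0-a2 at 24-40(sp).  Column 64 is a pseudo-register
* standing in for the interrupted pc, so unwinders use it as the
* return address.
*/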
/*
* This defines the normal kernel pt-regs layout.
*
* regs 9-15 preserved by C code
* regs 16-18 saved by PAL-code
* regs 29-30 saved and set up by PAL-code
* JRP - Save regs 16-18 in a special area of the stack, so that
* the palcode-provided values are available to the signal handler.
*/
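/*
* For reference, the offsets from $sp after SAVE_ALL (reconstructed
* from the stores below):
*
*	  0- 64   $0-$8
*	 72-136   $19-$27
*	    144   $28
*	    152   cached HAE value
*	160-176   copies of the PAL-saved $16-$18
*	   184+   PAL frame (ps, pc, gp, a0-a2)
*/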
.macro SAVE_ALL
subq $sp, SP_OFF, $sp
.cfi_adjust_cfa_offset SP_OFF
stq $0, 0($sp)
stq $1, 8($sp)
stq $2, 16($sp)
stq $3, 24($sp)
stq $4, 32($sp)
stq $28, 144($sp)
.cfi_rel_offset $0, 0
.cfi_rel_offset $1, 8
.cfi_rel_offset $2, 16
.cfi_rel_offset $3, 24
.cfi_rel_offset $4, 32
.cfi_rel_offset $28, 144
lda $2, alpha_mv
stq $5, 40($sp)
stq $6, 48($sp)
stq $7, 56($sp)
stq $8, 64($sp)
stq $19, 72($sp)
stq $20, 80($sp)
stq $21, 88($sp)
ldq $2, HAE_CACHE($2)
stq $22, 96($sp)
stq $23, 104($sp)
stq $24, 112($sp)
stq $25, 120($sp)
stq $26, 128($sp)
stq $27, 136($sp)
stq $2, 152($sp)
stq $16, 160($sp)
stq $17, 168($sp)
stq $18, 176($sp)
.cfi_rel_offset $5, 40
.cfi_rel_offset $6, 48
.cfi_rel_offset $7, 56
.cfi_rel_offset $8, 64
.cfi_rel_offset $19, 72
.cfi_rel_offset $20, 80
.cfi_rel_offset $21, 88
.cfi_rel_offset $22, 96
.cfi_rel_offset $23, 104
.cfi_rel_offset $24, 112
.cfi_rel_offset $25, 120
.cfi_rel_offset $26, 128
.cfi_rel_offset $27, 136
.endm
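/*
* SAVE_ALL stores the machine vector's cached HAE value at 152($sp);
* RESTORE_ALL below compares that snapshot against the current
* HAE_CACHE and, only if they differ, writes the saved value back to
* both the cache and the hardware register (through HAE_REG),
* avoiding a chip access on the common path.
*/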
.macro RESTORE_ALL
lda $19, alpha_mv
ldq $0, 0($sp)
ldq $1, 8($sp)
ldq $2, 16($sp)
ldq $3, 24($sp)
ldq $21, 152($sp)
ldq $20, HAE_CACHE($19)
ldq $4, 32($sp)
ldq $5, 40($sp)
ldq $6, 48($sp)
ldq $7, 56($sp)
subq $20, $21, $20
ldq $8, 64($sp)
beq $20, 99f
ldq $20, HAE_REG($19)
stq $21, HAE_CACHE($19)
stq $21, 0($20)
99: ldq $19, 72($sp)
ldq $20, 80($sp)
ldq $21, 88($sp)
ldq $22, 96($sp)
ldq $23, 104($sp)
ldq $24, 112($sp)
ldq $25, 120($sp)
ldq $26, 128($sp)
ldq $27, 136($sp)
ldq $28, 144($sp)
addq $sp, SP_OFF, $sp
.cfi_restore $0
.cfi_restore $1
.cfi_restore $2
.cfi_restore $3
.cfi_restore $4
.cfi_restore $5
.cfi_restore $6
.cfi_restore $7
.cfi_restore $8
.cfi_restore $19
.cfi_restore $20
.cfi_restore $21
.cfi_restore $22
.cfi_restore $23
.cfi_restore $24
.cfi_restore $25
.cfi_restore $26
.cfi_restore $27
.cfi_restore $28
.cfi_adjust_cfa_offset -SP_OFF
.endm
.macro DO_SWITCH_STACK
bsr $1, do_switch_stack
.cfi_adjust_cfa_offset SWITCH_STACK_SIZE
.cfi_rel_offset $9, 0
.cfi_rel_offset $10, 8
.cfi_rel_offset $11, 16
.cfi_rel_offset $12, 24
.cfi_rel_offset $13, 32
.cfi_rel_offset $14, 40
.cfi_rel_offset $15, 48
.endm
.macro UNDO_SWITCH_STACK
bsr $1, undo_switch_stack
.cfi_restore $9
.cfi_restore $10
.cfi_restore $11
.cfi_restore $12
.cfi_restore $13
.cfi_restore $14
.cfi_restore $15
.cfi_adjust_cfa_offset -SWITCH_STACK_SIZE
.endm
/*
* Non-syscall kernel entry points.
*/
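/*
* Each entry point recovers the current thread_info with the
* "lda $8, 0x3fff; bic $sp, $8, $8" pair: kernel stacks are 16 KB
* aligned with thread_info at the bottom, so clearing the low 14 bits
* of $sp yields its address; roughly, in C:
*
*	ti = (struct thread_info *)((unsigned long)sp & ~0x3fffUL);
*
* $8 then stays loaded with current_thread_info() across the C calls.
*/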
CFI_START_OSF_FRAME entInt
SAVE_ALL
lda $8, 0x3fff
lda $26, ret_from_sys_call
bic $sp, $8, $8
mov $sp, $19
jsr $31, do_entInt
CFI_END_OSF_FRAME entInt
CFI_START_OSF_FRAME entArith
SAVE_ALL
lda $8, 0x3fff
lda $26, ret_from_sys_call
bic $sp, $8, $8
mov $sp, $18
jsr $31, do_entArith
CFI_END_OSF_FRAME entArith
CFI_START_OSF_FRAME entMM
SAVE_ALL
/* save $9 - $15 so the inline exception code can manipulate them. */
subq $sp, 56, $sp
.cfi_adjust_cfa_offset 56
stq $9, 0($sp)
stq $10, 8($sp)
stq $11, 16($sp)
stq $12, 24($sp)
stq $13, 32($sp)
stq $14, 40($sp)
stq $15, 48($sp)
.cfi_rel_offset $9, 0
.cfi_rel_offset $10, 8
.cfi_rel_offset $11, 16
.cfi_rel_offset $12, 24
.cfi_rel_offset $13, 32
.cfi_rel_offset $14, 40
.cfi_rel_offset $15, 48
addq $sp, 56, $19
/* handle the fault */
lda $8, 0x3fff
bic $sp, $8, $8
jsr $26, do_page_fault
/* reload the registers after the exception code played. */
ldq $9, 0($sp)
ldq $10, 8($sp)
ldq $11, 16($sp)
ldq $12, 24($sp)
ldq $13, 32($sp)
ldq $14, 40($sp)
ldq $15, 48($sp)
addq $sp, 56, $sp
.cfi_restore $9
.cfi_restore $10
.cfi_restore $11
.cfi_restore $12
.cfi_restore $13
.cfi_restore $14
.cfi_restore $15
.cfi_adjust_cfa_offset -56
/* finish up the syscall as normal. */
br ret_from_sys_call
CFI_END_OSF_FRAME entMM
CFI_START_OSF_FRAME entIF
SAVE_ALL
lda $8, 0x3fff
lda $26, ret_from_sys_call
bic $sp, $8, $8
mov $sp, $17
jsr $31, do_entIF
CFI_END_OSF_FRAME entIF
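/*
* Unaligned faults get a private 256-byte frame with every integer
* register spilled, since the fixup code may read or rewrite any of
* them; the kernel-mode path below returns straight through PAL_rti
* instead of ret_from_sys_call.
*/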
CFI_START_OSF_FRAME entUna
lda $sp, -256($sp)
.cfi_adjust_cfa_offset 256
stq $0, 0($sp)
.cfi_rel_offset $0, 0
.cfi_remember_state
ldq $0, 256($sp) /* get PS */
stq $1, 8($sp)
stq $2, 16($sp)
stq $3, 24($sp)
and $0, 8, $0 /* user mode? */
stq $4, 32($sp)
bne $0, entUnaUser /* yup -> do user-level unaligned fault */
stq $5, 40($sp)
stq $6, 48($sp)
stq $7, 56($sp)
stq $8, 64($sp)
stq $9, 72($sp)
stq $10, 80($sp)
stq $11, 88($sp)
stq $12, 96($sp)
stq $13, 104($sp)
stq $14, 112($sp)
stq $15, 120($sp)
/* 16-18 PAL-saved */
stq $19, 152($sp)
stq $20, 160($sp)
stq $21, 168($sp)
stq $22, 176($sp)
stq $23, 184($sp)
stq $24, 192($sp)
stq $25, 200($sp)
stq $26, 208($sp)
stq $27, 216($sp)
stq $28, 224($sp)
mov $sp, $19
stq $gp, 232($sp)
.cfi_rel_offset $1, 1*8
.cfi_rel_offset $2, 2*8
.cfi_rel_offset $3, 3*8
.cfi_rel_offset $4, 4*8
.cfi_rel_offset $5, 5*8
.cfi_rel_offset $6, 6*8
.cfi_rel_offset $7, 7*8
.cfi_rel_offset $8, 8*8
.cfi_rel_offset $9, 9*8
.cfi_rel_offset $10, 10*8
.cfi_rel_offset $11, 11*8
.cfi_rel_offset $12, 12*8
.cfi_rel_offset $13, 13*8
.cfi_rel_offset $14, 14*8
.cfi_rel_offset $15, 15*8
.cfi_rel_offset $19, 19*8
.cfi_rel_offset $20, 20*8
.cfi_rel_offset $21, 21*8
.cfi_rel_offset $22, 22*8
.cfi_rel_offset $23, 23*8
.cfi_rel_offset $24, 24*8
.cfi_rel_offset $25, 25*8
.cfi_rel_offset $26, 26*8
.cfi_rel_offset $27, 27*8
.cfi_rel_offset $28, 28*8
.cfi_rel_offset $29, 29*8
lda $8, 0x3fff
stq $31, 248($sp)
bic $sp, $8, $8
jsr $26, do_entUna
ldq $0, 0($sp)
ldq $1, 8($sp)
ldq $2, 16($sp)
ldq $3, 24($sp)
ldq $4, 32($sp)
ldq $5, 40($sp)
ldq $6, 48($sp)
ldq $7, 56($sp)
ldq $8, 64($sp)
ldq $9, 72($sp)
ldq $10, 80($sp)
ldq $11, 88($sp)
ldq $12, 96($sp)
ldq $13, 104($sp)
ldq $14, 112($sp)
ldq $15, 120($sp)
/* 16-18 PAL-saved */
ldq $19, 152($sp)
ldq $20, 160($sp)
ldq $21, 168($sp)
ldq $22, 176($sp)
ldq $23, 184($sp)
ldq $24, 192($sp)
ldq $25, 200($sp)
ldq $26, 208($sp)
ldq $27, 216($sp)
ldq $28, 224($sp)
ldq $gp, 232($sp)
lda $sp, 256($sp)
.cfi_restore $1
.cfi_restore $2
.cfi_restore $3
.cfi_restore $4
.cfi_restore $5
.cfi_restore $6
.cfi_restore $7
.cfi_restore $8
.cfi_restore $9
.cfi_restore $10
.cfi_restore $11
.cfi_restore $12
.cfi_restore $13
.cfi_restore $14
.cfi_restore $15
.cfi_restore $19
.cfi_restore $20
.cfi_restore $21
.cfi_restore $22
.cfi_restore $23
.cfi_restore $24
.cfi_restore $25
.cfi_restore $26
.cfi_restore $27
.cfi_restore $28
.cfi_restore $29
.cfi_adjust_cfa_offset -256
call_pal PAL_rti
.align 4
entUnaUser:
.cfi_restore_state
ldq $0, 0($sp) /* restore original $0 */
lda $sp, 256($sp) /* pop entUna's stack frame */
.cfi_restore $0
.cfi_adjust_cfa_offset -256
SAVE_ALL /* setup normal kernel stack */
lda $sp, -56($sp)
.cfi_adjust_cfa_offset 56
stq $9, 0($sp)
stq $10, 8($sp)
stq $11, 16($sp)
stq $12, 24($sp)
stq $13, 32($sp)
stq $14, 40($sp)
stq $15, 48($sp)
.cfi_rel_offset $9, 0
.cfi_rel_offset $10, 8
.cfi_rel_offset $11, 16
.cfi_rel_offset $12, 24
.cfi_rel_offset $13, 32
.cfi_rel_offset $14, 40
.cfi_rel_offset $15, 48
lda $8, 0x3fff
addq $sp, 56, $19
bic $sp, $8, $8
jsr $26, do_entUnaUser
ldq $9, 0($sp)
ldq $10, 8($sp)
ldq $11, 16($sp)
ldq $12, 24($sp)
ldq $13, 32($sp)
ldq $14, 40($sp)
ldq $15, 48($sp)
lda $sp, 56($sp)
.cfi_restore $9
.cfi_restore $10
.cfi_restore $11
.cfi_restore $12
.cfi_restore $13
.cfi_restore $14
.cfi_restore $15
.cfi_adjust_cfa_offset -56
br ret_from_sys_call
CFI_END_OSF_FRAME entUna
CFI_START_OSF_FRAME entDbg
SAVE_ALL
lda $8, 0x3fff
lda $26, ret_from_sys_call
bic $sp, $8, $8
mov $sp, $16
jsr $31, do_entDbg
CFI_END_OSF_FRAME entDbg
/*
* The system call entry point is special. Most importantly, it looks
* like a function call to userspace as far as clobbered registers are
* concerned.  We do preserve the argument registers (for syscall
* restarts) and $26 (for leaf syscall functions).
*
* So much for theory. We don't take advantage of this yet.
*
* Note that a0-a2 are not saved by PALcode as with the other entry points.
*/
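/*
* Dispatch sketch: the syscall number arrives in v0 ($0), is
* bounds-checked against NR_syscalls, and indexes sys_call_table via
* s8addq (scale by 8 and add), with sys_ni_syscall as the
* out-of-range fallback.  Roughly, in C:
*
*	f = nr < NR_syscalls ? sys_call_table[nr] : sys_ni_syscall;
*	v0 = f(a0, a1, a2, ...);
*
* The third operand of "jsr $26, ($27), sys_ni_syscall" is only a
* static branch-prediction hint; the jump always goes through $27.
*/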
.align 4
.globl entSys
.type entSys, @function
.cfi_startproc simple
.cfi_return_column 64
.cfi_def_cfa $sp, 48
.cfi_rel_offset 64, 8
.cfi_rel_offset $gp, 16
entSys:
SAVE_ALL
lda $8, 0x3fff
bic $sp, $8, $8
lda $4, NR_syscalls($31)
stq $16, SP_OFF+24($sp)
lda $5, sys_call_table
lda $27, sys_ni_syscall
cmpult $0, $4, $4
ldl $3, TI_FLAGS($8)
stq $17, SP_OFF+32($sp)
s8addq $0, $5, $5
stq $18, SP_OFF+40($sp)
.cfi_rel_offset $16, SP_OFF+24
.cfi_rel_offset $17, SP_OFF+32
.cfi_rel_offset $18, SP_OFF+40
#ifdef CONFIG_AUDITSYSCALL
lda $6, _TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT
and $3, $6, $3
bne $3, strace
#else
blbs $3, strace /* check for SYSCALL_TRACE in disguise */
#endif
beq $4, 1f
ldq $27, 0($5)
1: jsr $26, ($27), sys_ni_syscall
ldgp $gp, 0($26)
blt $0, $syscall_error /* the call failed */
$ret_success:
stq $0, 0($sp)
stq $31, 72($sp) /* a3=0 => no error */
.align 4
.globl ret_from_sys_call
ret_from_sys_call:
cmovne $26, 0, $18 /* $18 = 0 => non-restartable */
ldq $0, SP_OFF($sp)
and $0, 8, $0
beq $0, ret_to_kernel
ret_to_user:
/* Make sure need_resched and sigpending don't change between
sampling and the rti. */
lda $16, 7
call_pal PAL_swpipl
ldl $17, TI_FLAGS($8)
and $17, _TIF_WORK_MASK, $2
bne $2, work_pending
restore_all:
ldl $2, TI_STATUS($8)
and $2, TS_SAVED_FP | TS_RESTORE_FP, $3
bne $3, restore_fpu
restore_other:
.cfi_remember_state
RESTORE_ALL
call_pal PAL_rti
ret_to_kernel:
.cfi_restore_state
lda $16, 7
call_pal PAL_swpipl
br restore_other
.align 3
$syscall_error:
/*
* Some system calls (e.g., ptrace) can return arbitrary
* values which might normally be mistaken for error numbers.
* Those functions must zero $0 (v0) directly in the stack
* frame to indicate that a negative return value wasn't an
* error number.
*/
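/*
* OSF/1 error convention: a3 (saved at 72($sp)) acts as an error
* flag.  On failure v0 is rewritten with the positive errno (hence
* the negation below) and a3 is set to 1; on success a3 is 0 and v0
* holds the plain return value.
*/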
ldq $18, 0($sp) /* old syscall nr (zero if success) */
beq $18, $ret_success
ldq $19, 72($sp) /* .. and this a3 */
subq $31, $0, $0 /* with error in v0 */
addq $31, 1, $1 /* set a3 for errno return */
stq $0, 0($sp)
mov $31, $26 /* tell "ret_from_sys_call" we can restart */
stq $1, 72($sp) /* a3 for return */
br ret_from_sys_call
/*
* Do all cleanup when returning from all interrupts and system calls.
*
* Arguments:
* $8: current.
* $17: TI_FLAGS.
* $18: The old syscall number, or zero if this is not a return
* from a syscall that errored and is possibly restartable.
* $19: The old a3 value
*/
.align 4
.type work_pending, @function
work_pending:
and $17, _TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL, $2
bne $2, $work_notifysig
$work_resched:
/*
* We can get here only if we returned from syscall without SIGPENDING
* or got through work_notifysig already. Either case means no syscall
* restarts for us, so let $18 and $19 burn.
*/
jsr $26, schedule
mov 0, $18
br ret_to_user
$work_notifysig:
mov $sp, $16
DO_SWITCH_STACK
jsr $26, do_work_pending
UNDO_SWITCH_STACK
br restore_all
/*
* PTRACE syscall handler
*/
.align 4
.type strace, @function
strace:
/* set up signal stack, call syscall_trace */
// NB: if anyone adds preemption, this block will need to be protected
ldl $1, TI_STATUS($8)
and $1, TS_SAVED_FP, $3
or $1, TS_SAVED_FP, $2
bne $3, 1f
stl $2, TI_STATUS($8)
bsr $26, __save_fpu
1:
DO_SWITCH_STACK
jsr $26, syscall_trace_enter /* returns the syscall number */
UNDO_SWITCH_STACK
/* get the arguments back.. */
ldq $16, SP_OFF+24($sp)
ldq $17, SP_OFF+32($sp)
ldq $18, SP_OFF+40($sp)
ldq $19, 72($sp)
ldq $20, 80($sp)
ldq $21, 88($sp)
/* get the system call pointer.. */
lda $1, NR_syscalls($31)
lda $2, sys_call_table
lda $27, sys_ni_syscall
cmpult $0, $1, $1
s8addq $0, $2, $2
beq $1, 1f
ldq $27, 0($2)
1: jsr $26, ($27), sys_gettimeofday
ret_from_straced:
ldgp $gp, 0($26)
/* check return.. */
blt $0, $strace_error /* the call failed */
$strace_success:
stq $31, 72($sp) /* a3=0 => no error */
stq $0, 0($sp) /* save return value */
DO_SWITCH_STACK
jsr $26, syscall_trace_leave
UNDO_SWITCH_STACK
br $31, ret_from_sys_call
.align 3
$strace_error:
ldq $18, 0($sp) /* old syscall nr (zero if success) */
beq $18, $strace_success
ldq $19, 72($sp) /* .. and this a3 */
subq $31, $0, $0 /* with error in v0 */
addq $31, 1, $1 /* set a3 for errno return */
stq $0, 0($sp)
stq $1, 72($sp) /* a3 for return */
DO_SWITCH_STACK
mov $18, $9 /* save old syscall number */
mov $19, $10 /* save old a3 */
jsr $26, syscall_trace_leave
mov $9, $18
mov $10, $19
UNDO_SWITCH_STACK
mov $31, $26 /* tell "ret_from_sys_call" we can restart */
br ret_from_sys_call
CFI_END_OSF_FRAME entSys
/*
* Save and restore the switch stack -- aka the balance of the user context.
*/
.align 4
.type do_switch_stack, @function
.cfi_startproc simple
.cfi_return_column 64
.cfi_def_cfa $sp, 0
.cfi_register 64, $1
do_switch_stack:
lda $sp, -SWITCH_STACK_SIZE($sp)
.cfi_adjust_cfa_offset SWITCH_STACK_SIZE
stq $9, 0($sp)
stq $10, 8($sp)
stq $11, 16($sp)
stq $12, 24($sp)
stq $13, 32($sp)
stq $14, 40($sp)
stq $15, 48($sp)
stq $26, 56($sp)
ret $31, ($1), 1
.cfi_endproc
.size do_switch_stack, .-do_switch_stack
.align 4
.type undo_switch_stack, @function
.cfi_startproc simple
.cfi_def_cfa $sp, 0
.cfi_register 64, $1
undo_switch_stack:
ldq $9, 0($sp)
ldq $10, 8($sp)
ldq $11, 16($sp)
ldq $12, 24($sp)
ldq $13, 32($sp)
ldq $14, 40($sp)
ldq $15, 48($sp)
ldq $26, 56($sp)
lda $sp, SWITCH_STACK_SIZE($sp)
ret $31, ($1), 1
.cfi_endproc
.size undo_switch_stack, .-undo_switch_stack
#define FR(n) n * 8 + TI_FP($8)
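/*
* FR(n) addresses quadword n of the FPU save area in thread_info
* (TI_FP); slot 31, which the always-zero $f31 never needs, is
* reused for the fpcr.
*/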
.align 4
.globl __save_fpu
.type __save_fpu, @function
__save_fpu:
#define V(n) stt $f##n, FR(n)
V( 0); V( 1); V( 2); V( 3)
V( 4); V( 5); V( 6); V( 7)
V( 8); V( 9); V(10); V(11)
V(12); V(13); V(14); V(15)
V(16); V(17); V(18); V(19)
V(20); V(21); V(22); V(23)
V(24); V(25); V(26); V(27)
mf_fpcr $f0 # get fpcr
V(28); V(29); V(30)
stt $f0, FR(31) # save fpcr in slot of $f31
ldt $f0, FR(0) # don't let "__save_fpu" change fp state.
ret
#undef V
.size __save_fpu, .-__save_fpu
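/*
* restore_fpu clears both lazy-FPU status bits and reloads the FP
* registers and fpcr from thread_info only if TS_RESTORE_FP was set;
* either way the updated status word is stored back before rejoining
* the normal exit path at restore_other.
*/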
.align 4
restore_fpu:
and $3, TS_RESTORE_FP, $3
bic $2, TS_SAVED_FP | TS_RESTORE_FP, $2
beq $3, 1f
#define V(n) ldt $f##n, FR(n)
ldt $f30, FR(31) # get saved fpcr
V( 0); V( 1); V( 2); V( 3)
mt_fpcr $f30 # install saved fpcr
V( 4); V( 5); V( 6); V( 7)
V( 8); V( 9); V(10); V(11)
V(12); V(13); V(14); V(15)
V(16); V(17); V(18); V(19)
V(20); V(21); V(22); V(23)
V(24); V(25); V(26); V(27)
V(28); V(29); V(30)
1: stl $2, TI_STATUS($8)
br restore_other
#undef V
/*
* The meat of the context switch code.
*/
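/*
* alpha_switch_to saves the callee-saved integer registers on a
* switch_stack frame and handles the FPU lazily via the TI_STATUS
* flags (saving it now if needed, marking it for restore on return
* to user mode); PAL_swpctx then swaps the hardware context.  On
* return, $8 is recomputed for the new kernel stack and the $17
* argument is handed back in $0 for the switch_to() caller.
*/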
.align 4
.globl alpha_switch_to
.type alpha_switch_to, @function
.cfi_startproc
alpha_switch_to:
DO_SWITCH_STACK
ldl $1, TI_STATUS($8)
and $1, TS_RESTORE_FP, $3
bne $3, 1f
or $1, TS_RESTORE_FP | TS_SAVED_FP, $2
and $1, TS_SAVED_FP, $3
stl $2, TI_STATUS($8)
bne $3, 1f
bsr $26, __save_fpu
1:
call_pal PAL_swpctx
lda $8, 0x3fff
UNDO_SWITCH_STACK
bic $sp, $8, $8
mov $17, $0
ret
.cfi_endproc
.size alpha_switch_to, .-alpha_switch_to
/*
* New processes begin life here.
*/
.globl ret_from_fork
.align 4
.ent ret_from_fork
ret_from_fork:
lda $26, ret_to_user
mov $17, $16
jmp $31, schedule_tail
.end ret_from_fork
/*
* ... and new kernel threads - here
*/
.align 4
.globl ret_from_kernel_thread
.ent ret_from_kernel_thread
ret_from_kernel_thread:
mov $17, $16
jsr $26, schedule_tail
mov $9, $27
mov $10, $16
jsr $26, ($9)
br $31, ret_to_user
.end ret_from_kernel_thread
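/*
* The trampoline above relies on copy_thread() having parked the
* thread function in $9 and its argument in $10 (callee-saved, so
* they survive until the first schedule); $27 is loaded with the
* function address so the callee can derive its gp, and the call
* goes through $9.
*/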
/*
* Special system calls.  Most of these are special in that they
* have to play switch_stack games.
*/
.macro fork_like name
.align 4
.globl alpha_\name
.ent alpha_\name
alpha_\name:
.prologue 0
bsr $1, do_switch_stack
// NB: if anyone adds preemption, this block will need to be protected
ldl $1, TI_STATUS($8)
and $1, TS_SAVED_FP, $3
or $1, TS_SAVED_FP, $2
bne $3, 1f
stl $2, TI_STATUS($8)
bsr $26, __save_fpu
1:
jsr $26, sys_\name
ldq $26, 56($sp)
lda $sp, SWITCH_STACK_SIZE($sp)
ret
.end alpha_\name
.endm
fork_like fork
fork_like vfork
fork_like clone
.macro sigreturn_like name
.align 4
.globl sys_\name
.ent sys_\name
sys_\name:
.prologue 0
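/*
* The dispatcher's return address is still in $26 here.  Only a call
* from the straced path (the jsr immediately before ret_from_straced)
* leaves $26 >= ret_from_straced, so the cmpult below tells the two
* callers apart and the traced one gets the syscall_trace_leave()
* call it would otherwise lose by bypassing the straced epilogue.
*/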
lda $9, ret_from_straced
cmpult $26, $9, $9
lda $sp, -SWITCH_STACK_SIZE($sp)
jsr $26, do_\name
bne $9, 1f
jsr $26, syscall_trace_leave
1: br $1, undo_switch_stack
br ret_from_sys_call
.end sys_\name
.endm
sigreturn_like sigreturn
sigreturn_like rt_sigreturn
.align 4
.globl alpha_syscall_zero
.ent alpha_syscall_zero
alpha_syscall_zero:
.prologue 0
/* Special because it needs to do something opposite to
force_successful_syscall_return(). We use the saved
syscall number for that, zero meaning "not an error".
That works nicely, but for real syscall 0 we need to
make sure that this logic doesn't get confused.  Store a
non-zero value there; the -ENOSYS we need in a register for
our return value will do just fine.
*/
lda $0, -ENOSYS
unop
stq $0, 0($sp)
ret
.end alpha_syscall_zero