License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to licenses
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". The results were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later.
In total, over 70 hours of logged manual review was done by Kate,
Philippe, and Thomas on the spreadsheet to determine the SPDX license
identifiers to apply to the source files, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors, and they have been fixed to
reflect the correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual inspection
and review of the 12,461 patched files from the initial patch version,
with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (more than 500
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally Greg ran the script using the .csv files to
generate the patches.
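As an example of the two comment types the script distinguishes
(illustrative, following the kernel convention the script encodes,
not quoted from a specific patched file):
    /* SPDX-License-Identifier: GPL-2.0 */   for headers and assembly
    // SPDX-License-Identifier: GPL-2.0      for .c source files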
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * arch/alpha/kernel/entry.S
 *
 * Kernel entry-points.
 */

#include <asm/asm-offsets.h>
#include <asm/thread_info.h>
#include <asm/pal.h>
#include <asm/errno.h>
#include <asm/unistd.h>

	.text
	.set noat
	.cfi_sections	.debug_frame

/* Stack offsets. */
#define SP_OFF			184
#define SWITCH_STACK_SIZE	64

.macro	CFI_START_OSF_FRAME	func
	.align	4
	.globl	\func
	.type	\func,@function
\func:
	.cfi_startproc simple
	.cfi_return_column 64
	.cfi_def_cfa	$sp, 48
	.cfi_rel_offset	64, 8
	.cfi_rel_offset	$gp, 16
	.cfi_rel_offset	$16, 24
	.cfi_rel_offset	$17, 32
	.cfi_rel_offset	$18, 40
.endm

.macro	CFI_END_OSF_FRAME	func
	.cfi_endproc
	.size	\func, . - \func
.endm

/*
 * This defines the normal kernel pt-regs layout.
 *
 * regs 9-15 preserved by C code
 * regs 16-18 saved by PAL-code
 * regs 29-30 saved and set up by PAL-code
 * JRP - Save regs 16-18 in a special area of the stack, so that
 * the palcode-provided values are available to the signal handler.
 */

.macro	SAVE_ALL
	subq	$sp, SP_OFF, $sp
	.cfi_adjust_cfa_offset	SP_OFF
	stq	$0, 0($sp)
	stq	$1, 8($sp)
	stq	$2, 16($sp)
	stq	$3, 24($sp)
	stq	$4, 32($sp)
	stq	$28, 144($sp)
	.cfi_rel_offset	$0, 0
	.cfi_rel_offset	$1, 8
	.cfi_rel_offset	$2, 16
	.cfi_rel_offset	$3, 24
	.cfi_rel_offset	$4, 32
	.cfi_rel_offset	$28, 144
	lda	$2, alpha_mv
	stq	$5, 40($sp)
	stq	$6, 48($sp)
	stq	$7, 56($sp)
	stq	$8, 64($sp)
	stq	$19, 72($sp)
	stq	$20, 80($sp)
	stq	$21, 88($sp)
	ldq	$2, HAE_CACHE($2)
	stq	$22, 96($sp)
	stq	$23, 104($sp)
	stq	$24, 112($sp)
	stq	$25, 120($sp)
	stq	$26, 128($sp)
	stq	$27, 136($sp)
	stq	$2, 152($sp)
	stq	$16, 160($sp)
	stq	$17, 168($sp)
	stq	$18, 176($sp)
	.cfi_rel_offset	$5, 40
	.cfi_rel_offset	$6, 48
	.cfi_rel_offset	$7, 56
	.cfi_rel_offset	$8, 64
	.cfi_rel_offset	$19, 72
	.cfi_rel_offset	$20, 80
	.cfi_rel_offset	$21, 88
	.cfi_rel_offset	$22, 96
	.cfi_rel_offset	$23, 104
	.cfi_rel_offset	$24, 112
	.cfi_rel_offset	$25, 120
	.cfi_rel_offset	$26, 128
	.cfi_rel_offset	$27, 136
.endm

.macro	RESTORE_ALL
	lda	$19, alpha_mv
	ldq	$0, 0($sp)
	ldq	$1, 8($sp)
	ldq	$2, 16($sp)
	ldq	$3, 24($sp)
	ldq	$21, 152($sp)
	ldq	$20, HAE_CACHE($19)
	ldq	$4, 32($sp)
	ldq	$5, 40($sp)
	ldq	$6, 48($sp)
	ldq	$7, 56($sp)
	subq	$20, $21, $20
	ldq	$8, 64($sp)
	beq	$20, 99f
	ldq	$20, HAE_REG($19)
	stq	$21, HAE_CACHE($19)
	stq	$21, 0($20)
99:	ldq	$19, 72($sp)
	ldq	$20, 80($sp)
	ldq	$21, 88($sp)
	ldq	$22, 96($sp)
	ldq	$23, 104($sp)
	ldq	$24, 112($sp)
	ldq	$25, 120($sp)
	ldq	$26, 128($sp)
	ldq	$27, 136($sp)
	ldq	$28, 144($sp)
	addq	$sp, SP_OFF, $sp
	.cfi_restore	$0
	.cfi_restore	$1
	.cfi_restore	$2
	.cfi_restore	$3
	.cfi_restore	$4
	.cfi_restore	$5
	.cfi_restore	$6
	.cfi_restore	$7
	.cfi_restore	$8
	.cfi_restore	$19
	.cfi_restore	$20
	.cfi_restore	$21
	.cfi_restore	$22
	.cfi_restore	$23
	.cfi_restore	$24
	.cfi_restore	$25
	.cfi_restore	$26
	.cfi_restore	$27
	.cfi_restore	$28
	.cfi_adjust_cfa_offset	-SP_OFF
.endm

.macro	DO_SWITCH_STACK
	bsr	$1, do_switch_stack
	.cfi_adjust_cfa_offset	SWITCH_STACK_SIZE
	.cfi_rel_offset	$9, 0
	.cfi_rel_offset	$10, 8
	.cfi_rel_offset	$11, 16
	.cfi_rel_offset	$12, 24
	.cfi_rel_offset	$13, 32
	.cfi_rel_offset	$14, 40
	.cfi_rel_offset	$15, 48
.endm

.macro	UNDO_SWITCH_STACK
	bsr	$1, undo_switch_stack
	.cfi_restore	$9
	.cfi_restore	$10
	.cfi_restore	$11
	.cfi_restore	$12
	.cfi_restore	$13
	.cfi_restore	$14
	.cfi_restore	$15
	.cfi_adjust_cfa_offset	-SWITCH_STACK_SIZE
.endm

/*
 * Non-syscall kernel entry points.
 */
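/*
 * Common pattern below: SAVE_ALL spills the PAL-saved frame, the
 * 0x3fff mask turns $sp into the current thread_info pointer in $8
 * (kernel stacks are 16 KB aligned), and $26 is usually preloaded
 * with ret_from_sys_call so the C handler returns straight into the
 * common exit path.
 */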
CFI_START_OSF_FRAME entInt
	SAVE_ALL
	lda	$8, 0x3fff
	lda	$26, ret_from_sys_call
	bic	$sp, $8, $8
	mov	$sp, $19
	jsr	$31, do_entInt
CFI_END_OSF_FRAME entInt

CFI_START_OSF_FRAME entArith
	SAVE_ALL
	lda	$8, 0x3fff
	lda	$26, ret_from_sys_call
	bic	$sp, $8, $8
	mov	$sp, $18
	jsr	$31, do_entArith
CFI_END_OSF_FRAME entArith

CFI_START_OSF_FRAME entMM
	SAVE_ALL
	/* save $9 - $15 so the inline exception code can manipulate them. */
	subq	$sp, 56, $sp
	.cfi_adjust_cfa_offset	56
	stq	$9, 0($sp)
	stq	$10, 8($sp)
	stq	$11, 16($sp)
	stq	$12, 24($sp)
	stq	$13, 32($sp)
	stq	$14, 40($sp)
	stq	$15, 48($sp)
	.cfi_rel_offset	$9, 0
	.cfi_rel_offset	$10, 8
	.cfi_rel_offset	$11, 16
	.cfi_rel_offset	$12, 24
	.cfi_rel_offset	$13, 32
	.cfi_rel_offset	$14, 40
	.cfi_rel_offset	$15, 48
	addq	$sp, 56, $19
	/* handle the fault */
	lda	$8, 0x3fff
	bic	$sp, $8, $8
	jsr	$26, do_page_fault
	/* reload the registers after the exception code played. */
	ldq	$9, 0($sp)
	ldq	$10, 8($sp)
	ldq	$11, 16($sp)
	ldq	$12, 24($sp)
	ldq	$13, 32($sp)
	ldq	$14, 40($sp)
	ldq	$15, 48($sp)
	addq	$sp, 56, $sp
	.cfi_restore	$9
	.cfi_restore	$10
	.cfi_restore	$11
	.cfi_restore	$12
	.cfi_restore	$13
	.cfi_restore	$14
	.cfi_restore	$15
	.cfi_adjust_cfa_offset	-56
	/* finish up the syscall as normal. */
	br	ret_from_sys_call
CFI_END_OSF_FRAME entMM

CFI_START_OSF_FRAME entIF
	SAVE_ALL
	lda	$8, 0x3fff
	lda	$26, ret_from_sys_call
	bic	$sp, $8, $8
	mov	$sp, $17
	jsr	$31, do_entIF
CFI_END_OSF_FRAME entIF

CFI_START_OSF_FRAME entUna
	lda	$sp, -256($sp)
	.cfi_adjust_cfa_offset	256
	stq	$0, 0($sp)
	.cfi_rel_offset	$0, 0
	.cfi_remember_state
	ldq	$0, 256($sp)	/* get PS */
	stq	$1, 8($sp)
	stq	$2, 16($sp)
	stq	$3, 24($sp)
	and	$0, 8, $0	/* user mode? */
	stq	$4, 32($sp)
	bne	$0, entUnaUser	/* yup -> do user-level unaligned fault */
	stq	$5, 40($sp)
	stq	$6, 48($sp)
	stq	$7, 56($sp)
	stq	$8, 64($sp)
	stq	$9, 72($sp)
	stq	$10, 80($sp)
	stq	$11, 88($sp)
	stq	$12, 96($sp)
	stq	$13, 104($sp)
	stq	$14, 112($sp)
	stq	$15, 120($sp)
	/* 16-18 PAL-saved */
	stq	$19, 152($sp)
	stq	$20, 160($sp)
	stq	$21, 168($sp)
	stq	$22, 176($sp)
	stq	$23, 184($sp)
	stq	$24, 192($sp)
	stq	$25, 200($sp)
	stq	$26, 208($sp)
	stq	$27, 216($sp)
	stq	$28, 224($sp)
	mov	$sp, $19
	stq	$gp, 232($sp)
	.cfi_rel_offset	$1, 1*8
	.cfi_rel_offset	$2, 2*8
	.cfi_rel_offset	$3, 3*8
	.cfi_rel_offset	$4, 4*8
	.cfi_rel_offset	$5, 5*8
	.cfi_rel_offset	$6, 6*8
	.cfi_rel_offset	$7, 7*8
	.cfi_rel_offset	$8, 8*8
	.cfi_rel_offset	$9, 9*8
	.cfi_rel_offset	$10, 10*8
	.cfi_rel_offset	$11, 11*8
	.cfi_rel_offset	$12, 12*8
	.cfi_rel_offset	$13, 13*8
	.cfi_rel_offset	$14, 14*8
	.cfi_rel_offset	$15, 15*8
	.cfi_rel_offset	$19, 19*8
	.cfi_rel_offset	$20, 20*8
	.cfi_rel_offset	$21, 21*8
	.cfi_rel_offset	$22, 22*8
	.cfi_rel_offset	$23, 23*8
	.cfi_rel_offset	$24, 24*8
	.cfi_rel_offset	$25, 25*8
	.cfi_rel_offset	$26, 26*8
	.cfi_rel_offset	$27, 27*8
	.cfi_rel_offset	$28, 28*8
	.cfi_rel_offset	$29, 29*8
	lda	$8, 0x3fff
	stq	$31, 248($sp)
	bic	$sp, $8, $8
	jsr	$26, do_entUna
	ldq	$0, 0($sp)
	ldq	$1, 8($sp)
	ldq	$2, 16($sp)
	ldq	$3, 24($sp)
	ldq	$4, 32($sp)
	ldq	$5, 40($sp)
	ldq	$6, 48($sp)
	ldq	$7, 56($sp)
	ldq	$8, 64($sp)
	ldq	$9, 72($sp)
	ldq	$10, 80($sp)
	ldq	$11, 88($sp)
	ldq	$12, 96($sp)
	ldq	$13, 104($sp)
	ldq	$14, 112($sp)
	ldq	$15, 120($sp)
	/* 16-18 PAL-saved */
	ldq	$19, 152($sp)
	ldq	$20, 160($sp)
	ldq	$21, 168($sp)
	ldq	$22, 176($sp)
	ldq	$23, 184($sp)
	ldq	$24, 192($sp)
	ldq	$25, 200($sp)
	ldq	$26, 208($sp)
	ldq	$27, 216($sp)
	ldq	$28, 224($sp)
	ldq	$gp, 232($sp)
	lda	$sp, 256($sp)
	.cfi_restore	$1
	.cfi_restore	$2
	.cfi_restore	$3
	.cfi_restore	$4
	.cfi_restore	$5
	.cfi_restore	$6
	.cfi_restore	$7
	.cfi_restore	$8
	.cfi_restore	$9
	.cfi_restore	$10
	.cfi_restore	$11
	.cfi_restore	$12
	.cfi_restore	$13
	.cfi_restore	$14
	.cfi_restore	$15
	.cfi_restore	$19
	.cfi_restore	$20
	.cfi_restore	$21
	.cfi_restore	$22
	.cfi_restore	$23
	.cfi_restore	$24
	.cfi_restore	$25
	.cfi_restore	$26
	.cfi_restore	$27
	.cfi_restore	$28
	.cfi_restore	$29
	.cfi_adjust_cfa_offset	-256
	call_pal PAL_rti

	.align	4
entUnaUser:
	.cfi_restore_state
	ldq	$0, 0($sp)	/* restore original $0 */
	lda	$sp, 256($sp)	/* pop entUna's stack frame */
	.cfi_restore	$0
	.cfi_adjust_cfa_offset	-256
	SAVE_ALL		/* setup normal kernel stack */
	lda	$sp, -56($sp)
	.cfi_adjust_cfa_offset	56
	stq	$9, 0($sp)
	stq	$10, 8($sp)
	stq	$11, 16($sp)
	stq	$12, 24($sp)
	stq	$13, 32($sp)
	stq	$14, 40($sp)
	stq	$15, 48($sp)
	.cfi_rel_offset	$9, 0
	.cfi_rel_offset	$10, 8
	.cfi_rel_offset	$11, 16
	.cfi_rel_offset	$12, 24
	.cfi_rel_offset	$13, 32
	.cfi_rel_offset	$14, 40
	.cfi_rel_offset	$15, 48
	lda	$8, 0x3fff
	addq	$sp, 56, $19
	bic	$sp, $8, $8
	jsr	$26, do_entUnaUser
	ldq	$9, 0($sp)
	ldq	$10, 8($sp)
	ldq	$11, 16($sp)
	ldq	$12, 24($sp)
	ldq	$13, 32($sp)
	ldq	$14, 40($sp)
	ldq	$15, 48($sp)
	lda	$sp, 56($sp)
	.cfi_restore	$9
	.cfi_restore	$10
	.cfi_restore	$11
	.cfi_restore	$12
	.cfi_restore	$13
	.cfi_restore	$14
	.cfi_restore	$15
	.cfi_adjust_cfa_offset	-56
	br	ret_from_sys_call
CFI_END_OSF_FRAME entUna

CFI_START_OSF_FRAME entDbg
	SAVE_ALL
	lda	$8, 0x3fff
	lda	$26, ret_from_sys_call
	bic	$sp, $8, $8
	mov	$sp, $16
	jsr	$31, do_entDbg
CFI_END_OSF_FRAME entDbg

/*
 * The system call entry point is special. Most importantly, it looks
 * like a function call to userspace as far as clobbered registers. We
 * do preserve the argument registers (for syscall restarts) and $26
 * (for leaf syscall functions).
 *
 * So much for theory. We don't take advantage of this yet.
 *
 * Note that a0-a2 are not saved by PALcode as with the other entry points.
 */

	.align	4
	.globl	entSys
	.type	entSys, @function
	.cfi_startproc simple
	.cfi_return_column 64
	.cfi_def_cfa	$sp, 48
	.cfi_rel_offset	64, 8
	.cfi_rel_offset	$gp, 16
entSys:
	SAVE_ALL
	lda	$8, 0x3fff
	bic	$sp, $8, $8
	lda	$4, NR_syscalls($31)
	stq	$16, SP_OFF+24($sp)
	lda	$5, sys_call_table
	lda	$27, sys_ni_syscall
	cmpult	$0, $4, $4
	ldl	$3, TI_FLAGS($8)
	stq	$17, SP_OFF+32($sp)
	s8addq	$0, $5, $5
	stq	$18, SP_OFF+40($sp)
	.cfi_rel_offset	$16, SP_OFF+24
	.cfi_rel_offset	$17, SP_OFF+32
	.cfi_rel_offset	$18, SP_OFF+40
#ifdef CONFIG_AUDITSYSCALL
	lda	$6, _TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT
	and	$3, $6, $3
	bne	$3, strace
#else
	blbs	$3, strace	/* check for SYSCALL_TRACE in disguise */
#endif
	beq	$4, 1f
	ldq	$27, 0($5)
1:	jsr	$26, ($27), sys_ni_syscall
	ldgp	$gp, 0($26)
	blt	$0, $syscall_error	/* the call failed */
$ret_success:
	stq	$0, 0($sp)
	stq	$31, 72($sp)	/* a3=0 => no error */

	.align	4
	.globl	ret_from_sys_call
ret_from_sys_call:
	cmovne	$26, 0, $18	/* $18 = 0 => non-restartable */
	ldq	$0, SP_OFF($sp)
	and	$0, 8, $0
	beq	$0, ret_to_kernel
ret_to_user:
	/* Make sure need_resched and sigpending don't change between
	   sampling and the rti. */
	lda	$16, 7
	call_pal PAL_swpipl
	ldl	$17, TI_FLAGS($8)
	and	$17, _TIF_WORK_MASK, $2
	bne	$2, work_pending
restore_all:
	ldl	$2, TI_STATUS($8)
	and	$2, TS_SAVED_FP | TS_RESTORE_FP, $3
	bne	$3, restore_fpu
restore_other:
	.cfi_remember_state
	RESTORE_ALL
	call_pal PAL_rti

ret_to_kernel:
	.cfi_restore_state
	lda	$16, 7
	call_pal PAL_swpipl
	br	restore_other

	.align	3
$syscall_error:
	/*
	 * Some system calls (e.g., ptrace) can return arbitrary
	 * values which might normally be mistaken as error numbers.
	 * Those functions must zero $0 (v0) directly in the stack
	 * frame to indicate that a negative return value wasn't an
	 * error number.
	 */
	ldq	$18, 0($sp)	/* old syscall nr (zero if success) */
	beq	$18, $ret_success

	ldq	$19, 72($sp)	/* .. and this a3 */
	subq	$31, $0, $0	/* with error in v0 */
	addq	$31, 1, $1	/* set a3 for errno return */
	stq	$0, 0($sp)
	mov	$31, $26	/* tell "ret_from_sys_call" we can restart */
	stq	$1, 72($sp)	/* a3 for return */
	br	ret_from_sys_call

/*
 * Do all cleanup when returning from all interrupts and system calls.
 *
 * Arguments:
 *	$8: current.
 *	$17: TI_FLAGS.
 *	$18: The old syscall number, or zero if this is not a return
 *	     from a syscall that errored and is possibly restartable.
 *	$19: The old a3 value
 */

	.align	4
	.type	work_pending, @function
work_pending:
	and	$17, _TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL, $2
	bne	$2, $work_notifysig

$work_resched:
	/*
	 * We can get here only if we returned from syscall without SIGPENDING
	 * or got through work_notifysig already. Either case means no syscall
	 * restarts for us, so let $18 and $19 burn.
	 */
	jsr	$26, schedule
	mov	0, $18
	br	ret_to_user

$work_notifysig:
	mov	$sp, $16
	DO_SWITCH_STACK
	jsr	$26, do_work_pending
	UNDO_SWITCH_STACK
	br	restore_all

/*
 * PTRACE syscall handler
 */

	.align	4
	.type	strace, @function
strace:
	/* set up signal stack, call syscall_trace */
	// NB: if anyone adds preemption, this block will need to be protected
	ldl	$1, TI_STATUS($8)
	and	$1, TS_SAVED_FP, $3
	or	$1, TS_SAVED_FP, $2
	bne	$3, 1f
	stl	$2, TI_STATUS($8)
	bsr	$26, __save_fpu
1:
	DO_SWITCH_STACK
	jsr	$26, syscall_trace_enter	/* returns the syscall number */
	UNDO_SWITCH_STACK

	/* get the arguments back.. */
	ldq	$16, SP_OFF+24($sp)
	ldq	$17, SP_OFF+32($sp)
	ldq	$18, SP_OFF+40($sp)
	ldq	$19, 72($sp)
	ldq	$20, 80($sp)
	ldq	$21, 88($sp)

	/* get the system call pointer.. */
	lda	$1, NR_syscalls($31)
	lda	$2, sys_call_table
	lda	$27, sys_ni_syscall
	cmpult	$0, $1, $1
	s8addq	$0, $2, $2
	beq	$1, 1f
	ldq	$27, 0($2)
1:	jsr	$26, ($27), sys_gettimeofday
ret_from_straced:
	ldgp	$gp, 0($26)

	/* check return.. */
	blt	$0, $strace_error	/* the call failed */
$strace_success:
	stq	$31, 72($sp)	/* a3=0 => no error */
	stq	$0, 0($sp)	/* save return value */

	DO_SWITCH_STACK
	jsr	$26, syscall_trace_leave
	UNDO_SWITCH_STACK
	br	$31, ret_from_sys_call

	.align	3
$strace_error:
	ldq	$18, 0($sp)	/* old syscall nr (zero if success) */
	beq	$18, $strace_success
	ldq	$19, 72($sp)	/* .. and this a3 */

	subq	$31, $0, $0	/* with error in v0 */
	addq	$31, 1, $1	/* set a3 for errno return */
	stq	$0, 0($sp)
	stq	$1, 72($sp)	/* a3 for return */

	DO_SWITCH_STACK
	mov	$18, $9		/* save old syscall number */
	mov	$19, $10	/* save old a3 */
	jsr	$26, syscall_trace_leave
	mov	$9, $18
	mov	$10, $19
	UNDO_SWITCH_STACK

	mov	$31, $26	/* tell "ret_from_sys_call" we can restart */
	br	ret_from_sys_call
CFI_END_OSF_FRAME entSys

/*
 * Save and restore the switch stack -- aka the balance of the user context.
 */

	.align	4
	.type	do_switch_stack, @function
	.cfi_startproc simple
	.cfi_return_column 64
	.cfi_def_cfa	$sp, 0
	.cfi_register	64, $1
do_switch_stack:
	lda	$sp, -SWITCH_STACK_SIZE($sp)
	.cfi_adjust_cfa_offset	SWITCH_STACK_SIZE
	stq	$9, 0($sp)
	stq	$10, 8($sp)
	stq	$11, 16($sp)
	stq	$12, 24($sp)
	stq	$13, 32($sp)
	stq	$14, 40($sp)
	stq	$15, 48($sp)
	stq	$26, 56($sp)
	ret	$31, ($1), 1
	.cfi_endproc
	.size	do_switch_stack, .-do_switch_stack

	.align	4
	.type	undo_switch_stack, @function
	.cfi_startproc simple
	.cfi_def_cfa	$sp, 0
	.cfi_register	64, $1
undo_switch_stack:
	ldq	$9, 0($sp)
	ldq	$10, 8($sp)
	ldq	$11, 16($sp)
	ldq	$12, 24($sp)
	ldq	$13, 32($sp)
	ldq	$14, 40($sp)
	ldq	$15, 48($sp)
	ldq	$26, 56($sp)
	lda	$sp, SWITCH_STACK_SIZE($sp)
	ret	$31, ($1), 1
	.cfi_endproc
	.size	undo_switch_stack, .-undo_switch_stack
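/*
 * Lazy FPU switching: the FP registers live in a save area in
 * thread_info (TI_FP) rather than on the switch_stack.  TS_SAVED_FP
 * in TI_STATUS says the save area is current; TS_RESTORE_FP says it
 * must be reloaded before returning to user mode.
 */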
#define FR(n) n * 8 + TI_FP($8)
	.align	4
	.globl	__save_fpu
	.type	__save_fpu, @function
__save_fpu:
#define V(n) stt $f##n, FR(n)
	V( 0); V( 1); V( 2); V( 3)
	V( 4); V( 5); V( 6); V( 7)
	V( 8); V( 9); V(10); V(11)
	V(12); V(13); V(14); V(15)
	V(16); V(17); V(18); V(19)
	V(20); V(21); V(22); V(23)
	V(24); V(25); V(26); V(27)
	mf_fpcr	$f0		# get fpcr
	V(28); V(29); V(30)
	stt	$f0, FR(31)	# save fpcr in slot of $f31
	ldt	$f0, FR(0)	# don't let "__save_fpu" change fp state.
	ret
#undef V
	.size	__save_fpu, .-__save_fpu

	.align	4
restore_fpu:
	and	$3, TS_RESTORE_FP, $3
	bic	$2, TS_SAVED_FP | TS_RESTORE_FP, $2
	beq	$3, 1f
#define V(n) ldt $f##n, FR(n)
	ldt	$f30, FR(31)	# get saved fpcr
	V( 0); V( 1); V( 2); V( 3)
	mt_fpcr	$f30		# install saved fpcr
	V( 4); V( 5); V( 6); V( 7)
	V( 8); V( 9); V(10); V(11)
	V(12); V(13); V(14); V(15)
	V(16); V(17); V(18); V(19)
	V(20); V(21); V(22); V(23)
	V(24); V(25); V(26); V(27)
	V(28); V(29); V(30)
1:	stl	$2, TI_STATUS($8)
	br	restore_other
#undef V

/*
 * The meat of the context switch code.
 */
	.align	4
	.globl	alpha_switch_to
	.type	alpha_switch_to, @function
	.cfi_startproc
alpha_switch_to:
	DO_SWITCH_STACK
	ldl	$1, TI_STATUS($8)
	and	$1, TS_RESTORE_FP, $3
	bne	$3, 1f
	or	$1, TS_RESTORE_FP | TS_SAVED_FP, $2
	and	$1, TS_SAVED_FP, $3
	stl	$2, TI_STATUS($8)
	bne	$3, 1f
	bsr	$26, __save_fpu
1:
	call_pal PAL_swpctx
	lda	$8, 0x3fff
	UNDO_SWITCH_STACK
	bic	$sp, $8, $8
	mov	$17, $0
	ret
	.cfi_endproc
	.size	alpha_switch_to, .-alpha_switch_to

/*
 * New processes begin life here.
 */

	.globl	ret_from_fork
	.align	4
	.ent	ret_from_fork
ret_from_fork:
	lda	$26, ret_to_user
	mov	$17, $16
	jmp	$31, schedule_tail
	.end	ret_from_fork

/*
 * ... and new kernel threads - here
 */
	.align	4
	.globl	ret_from_kernel_thread
	.ent	ret_from_kernel_thread
ret_from_kernel_thread:
	mov	$17, $16
	jsr	$26, schedule_tail
	mov	$9, $27
	mov	$10, $16
	jsr	$26, ($9)
	br	$31, ret_to_user
	.end	ret_from_kernel_thread
/*
 * Special system calls.  Most of these are special in that they
 * have to play switch_stack games.
 */
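/*
 * fork/vfork/clone build a full switch_stack (and make sure the FPU
 * state is parked in the thread_info save area) before entering the
 * C syscall body, so that copy_thread() can seed the child from the
 * parent's callee-saved registers.
 */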
.macro fork_like name
	.align	4
	.globl	alpha_\name
	.ent	alpha_\name
alpha_\name:
	.prologue 0
	bsr	$1, do_switch_stack
	// NB: if anyone adds preemption, this block will need to be protected
	ldl	$1, TI_STATUS($8)
	and	$1, TS_SAVED_FP, $3
	or	$1, TS_SAVED_FP, $2
	bne	$3, 1f
	stl	$2, TI_STATUS($8)
	bsr	$26, __save_fpu
1:
	jsr	$26, sys_\name
	ldq	$26, 56($sp)
	lda	$sp, SWITCH_STACK_SIZE($sp)
	ret
	.end	alpha_\name
.endm

fork_like fork
fork_like vfork
fork_like clone

.macro sigreturn_like name
	.align	4
	.globl	sys_\name
	.ent	sys_\name
sys_\name:
	.prologue 0
	lda	$9, ret_from_straced
	cmpult	$26, $9, $9
	lda	$sp, -SWITCH_STACK_SIZE($sp)
	jsr	$26, do_\name
	bne	$9, 1f
	jsr	$26, syscall_trace_leave
1:	br	$1, undo_switch_stack
	br	ret_from_sys_call
	.end	sys_\name
.endm

sigreturn_like sigreturn
sigreturn_like rt_sigreturn

	.align	4
	.globl	alpha_syscall_zero
	.ent	alpha_syscall_zero
alpha_syscall_zero:
	.prologue 0
	/* Special because it needs to do something opposite to
	   force_successful_syscall_return().  We use the saved
	   syscall number for that, zero meaning "not an error".
	   That works nicely, but for real syscall 0 we need to
	   make sure that this logic doesn't get confused.
	   Store a non-zero value there - the -ENOSYS we need in
	   the register for our return value will do just fine. */
	lda	$0, -ENOSYS
	unop
	stq	$0, 0($sp)
	ret
	.end	alpha_syscall_zero