In wasm64, the signatures of some library functions and global variables
defined in Emscripten change:
- `emscripten_longjmp`: `(i32, i32) -> ()` -> `(i64, i32) -> ()`
This changes because the first argument is the address of a memory
buffer. This in turn causes more changes below.
- `setThrew`: `(i32, i32) -> ()` -> `(i64, i32) -> ()`
`emscripten_longjmp` calls `setThrew` with the i64 buffer argument as
the first parameter.
- `__THREW__` (global var): `i32` -> `i64`
`setThrew`'s first argument is set to this `__THREW__` variable, so it
should change to i64 as well.
- `testSetjmp`: `(i32, i32*, i32) -> (i32)` -> `(i64, i32*, i32) -> (i32)`
In the code transformation done in this pass, the value of `__THREW__`
is passed as the first parameter of `testSetjmp`.
This patch adds some helper functions to conveniently get types that differ
between wasm32 and wasm64, and uses them to update the various function
signatures and code transformations (a sketch of one such helper follows).
It also updates the tests with WASM32/WASM64 check lines.
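A minimal sketch of the kind of helper this refers to, assuming LLVM's C++ IR
APIs (the helper name and exact implementation are illustrative, not
necessarily what the patch uses):

```cpp
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"

// Returns i32 on wasm32 and i64 on wasm64, matching the target's pointer
// width. Hypothetical helper; the name is illustrative.
static llvm::Type *getAddrIntType(llvm::Module *M) {
  return llvm::Type::getIntNTy(M->getContext(),
                               M->getDataLayout().getPointerSizeInBits(0));
}
```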
(Untested) Emscripten side patch: https://github.com/emscripten-core/emscripten/pull/14108
Reviewed By: aardappel
Differential Revision: https://reviews.llvm.org/D101985
This extends any frame record created in the function to include the async
context parameter, passed in X22.
The new record looks like [X22, FP, LR] in memory, and FP is stored with 0b0001
in bits 63:60 (CodeGen assumes they are 0b0000 in normal operation). The effect
of this is that tools walking the stack should expect to see one of three
values there:
* 0b0000 => a normal, non-extended record with just [FP, LR]
* 0b0001 => the extended record [X22, FP, LR]
* 0b1111 => kernel space, and a non-extended record.
All other values are currently reserved.
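For illustration, a hedged sketch of how a stack-walking tool might classify a
frame record from the tag in bits 63:60 of the saved FP (names are invented
for the example):

```cpp
#include <cstdint>

enum class FrameKind { Normal, Extended, Kernel, Reserved };

// Classify a frame record by the tag bits (63:60) of the saved FP value.
FrameKind classifyFrame(uint64_t SavedFP) {
  switch (SavedFP >> 60) {
  case 0b0000: return FrameKind::Normal;   // [FP, LR]
  case 0b0001: return FrameKind::Extended; // [X22, FP, LR]
  case 0b1111: return FrameKind::Kernel;   // kernel space, non-extended
  default:     return FrameKind::Reserved;
  }
}
```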
If compiling for arm64e this context pointer is address-discriminated with the
discriminator 0xc31a and the DB (process-specific) key.
There is also an "i8** @llvm.swift.async.context.addr()" intrinsic providing
front-ends access to this slot (and forcing its creation initialized to nullptr
if necessary).
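A sketch of how a front-end might emit a call to this intrinsic through the
C++ API (treat the exact helper names and signatures as assumptions):

```cpp
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/Module.h"

// Emit a call to @llvm.swift.async.context.addr, yielding the i8** slot.
llvm::Value *emitAsyncContextAddr(llvm::Module &M, llvm::IRBuilder<> &B) {
  llvm::Function *Decl = llvm::Intrinsic::getDeclaration(
      &M, llvm::Intrinsic::swift_async_context_addr);
  return B.CreateCall(Decl);
}
```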
As noticed on D90170, the recursion depth for matching a maximum of an i128 bitwidth was too high.
@lebedev.ri mentioned that we can probably do better by limiting the number of collected Values instead of just depth, but I'll look at that later.
There are three essentially different cases to handle:
* -O1, no LSE. The IR is expanded to ldxp/stxp and we need patterns to select
them.
* -O0, no LSE. We get G_ATOMIC_CMPXCHG, and need to produce CMP_SWAP_N
pseudos. The registers are all 64-bit so this is easy.
* LSE. We get G_ATOMIC_CMPXCHG and need to produce a CASP instruction with
XSeqPair registers.
The last case is by far the hardest, and it adds 128-bit GPR support as a
byproduct.
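For reference, a small C++ example that exercises this lowering when built for
AArch64 (which of the three cases you hit depends on the optimization level
and whether +lse is available; this is a sketch, not a test from the patch):

```cpp
#include <atomic>

// A 128-bit compare-and-swap; lowers to ldxp/stxp, CMP_SWAP pseudos, or
// CASP depending on -O0/-O1 and LSE availability.
bool cas128(std::atomic<__int128> &A, __int128 &Expected, __int128 Desired) {
  return A.compare_exchange_strong(Expected, Desired);
}
```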
As discussed in D102437, the VF argument to isScalarWithPredication
seems redundant, so this is intended to be a non-functional change. It
seems wrong to query the widening decision at this point. Removing the
operand and code to get the widening decision causes no unit/regression
tests to fail. I've also found no issues running the LLVM test-suite.
This subsequently removes the VF argument from isPredicatedInst as well,
since it is no longer required.
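A hedged sketch of the resulting interface shape (the actual declarations live
in LoopVectorize's cost model; signatures here are approximations):

```cpp
namespace llvm { class Instruction; }

// Approximate post-change interface: the ElementCount VF parameter is gone
// from both queries. Class/method names follow the commit text.
class LoopVectorizationCostModel {
public:
  bool isScalarWithPredication(llvm::Instruction *I) const;
  bool isPredicatedInst(llvm::Instruction *I) const;
};
```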
A consequence is that checkInstOffsetsDoNotOverlap can now distinguish
sp+offset from fp+offset, so it knows that it shouldn't try to work out
whether the accesses overlap just by comparing the offsets. For example
in these two instructions:
MIR:
BUFFER_STORE_DWORD_OFFSET %0:vgpr_32(s32), $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr32, 4, 0, 0, 0, 0, 0, implicit $exec :: (dereferenceable store 4 into stack + 4, addrspace 5)
%4:vgpr_32 = BUFFER_LOAD_DWORD_OFFEN %stack.0.alloca, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr32, 0, 0, 0, 0, 0, 0, implicit $exec :: (load 4 from `i8 addrspace(5)* undef`, addrspace 5)
ISA:
buffer_store_dword v0, off, s[0:3], s32 offset:4
buffer_load_dword v0, off, s[0:3], s34
Differential Revision: https://reviews.llvm.org/D73957
The Arm Architecture Reference Manual says that the SystemHintOp_BTI
opcode is preferred when CRm:op2 matches 0100:xx0, but llvm-mc
currently accepts 0100:xxx, which isn't right.
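A hedged sketch of the corrected matching rule, CRm:op2 == 0100:xx0, i.e. CRm
is 0b0100 and op2 is even (the function name is illustrative):

```cpp
// BTI is the preferred disassembly only when CRm == 0b0100 and the low bit
// of op2 is clear; 0100:xx1 must not match.
bool isPreferredBTIEncoding(unsigned CRm, unsigned Op2) {
  return CRm == 0b0100 && (Op2 & 0b001) == 0;
}
```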
Differential Revision: https://reviews.llvm.org/D102415
On Windows, the native path char type is wchar_t; therefore, this test
didn't actually do the conversion it was supposed to exercise.
The charset conversions on Windows do cause extra allocations outside of
the provided allocator though, so that bit of the test has to be waived
now that the test actually does something. (Other tests have similar
TEST_NOT_WIN32() waivers on allocation checks for charset conversions.)
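To illustrate the conversion in question (a sketch, not the test itself): on
Windows, path::value_type is wchar_t, so constructing a path from a char
string performs a charset conversion, while a wchar_t source does not:

```cpp
#include <filesystem>

void demo() {
  std::filesystem::path FromNarrow("dir/file.txt"); // charset conversion on Windows
  std::filesystem::path FromWide(L"dir/file.txt");  // already the native char type
}
```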
Also fix a typo, and amend the path.native.obs/string_alloc test to
test char8_t, too.
Differential Revision: https://reviews.llvm.org/D102360
Unlike its legacy SSE XMM XORPS version, which measures as being 1-cycle,
this one is certainly a zero-cycle instruction, in addition to both of them
being dependency-breaking.
This is confirmed by exegesis measurements and the reference docs.
For gfx10, gradient (g16) and address (a16) can be independent. The previous
implementation assumed that a16 implied g16.
There are some other changes that fix the verification (as well as asm/disasm)
and that are required for the included test to pass; the XFAIL will be removed
in those changes.
This also includes the required fixes for GlobalISel.
Differential Revision: https://reviews.llvm.org/D102066
Change-Id: I7d171cc90994de05f41669b66a6d0ffa2ed05d09
A16 support for image instruction assembly/disassembly (gfx10) was missing.
Also refactors the MIMG op address size calculations into a common function;
the same operation was previously being done in three places.
One test is now marked XFAIL until a related codegen patch is in place.
Differential Revision: https://reviews.llvm.org/D102231
Change-Id: I7e86e730ef8c71901457855cba570581f4f576bb
Since 5de2d189e6, this particular warning hasn't had the location of the
source file containing the inline assembly.
Fix this by reporting via LLVMContext, which means that we no longer have
the "instantiated into assembly here" lines, but those were going to point
to the start of the inline asm string anyway.
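A sketch of what reporting via LLVMContext looks like (constructor arguments
approximate; DiagnosticInfoInlineAsm carries a location cookie rather than a
SourceMgr location):

```cpp
#include "llvm/ADT/Twine.h"
#include "llvm/IR/DiagnosticInfo.h"
#include "llvm/IR/LLVMContext.h"
#include <cstdint>

// Route an inline-asm warning through the context's diagnostic handler.
void reportAsmWarning(llvm::LLVMContext &Ctx, uint64_t LocCookie,
                      const llvm::Twine &Msg) {
  Ctx.diagnose(llvm::DiagnosticInfoInlineAsm(LocCookie, Msg, llvm::DS_Warning));
}
```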
This message is already tested via IR in llvm. However, we won't have the
required location info there, so I've added a C file test in clang to cover
it (though strictly speaking, this is testing llvm code).
Reviewed By: ychen
Differential Revision: https://reviews.llvm.org/D102244
A fix was implemented in the ittapi repo to solve an error about cross-compiling ITTAPI in LLVM with mingw.
The problem occurred in the cross-compilation environment for Julia's dependencies.
The corresponding issue in the ittapi repo: https://github.com/intel/ittapi/issues/19
A new tag was created in the ittapi repo for that fix.
This patch updates the ittapi tag used in LLVM.
Reviewed By: bader
Differential Revision: https://reviews.llvm.org/D102471
This moves the isOverwrite function into the DSEState so that it can
share the analyses and members from the state.
A few extra loop tests were also added to test stores in and around
multi block loops for D100464.
This is separate from (but builds on) the support added in ec6b71df70 for
emitting LinkGraphs in the context of an active materialization. This commit
makes LinkGraphs a first-class data structure with features equivalent to
object files within ObjectLinkingLayer.
Clang's coverage data for auto-generated switch cases is really, really
large. Before this change, when I enable code coverage, SemaDeclAttr.obj
is 4.0GB. Naturally, this fails to link.
Replacing the RISCV builtin id check with a comparison reduces object
file size from 4.0GB to 330MB. Replacing the AArch64 SVE range check
reduces the size again down to 17MB, which is reasonable.
I think the RISCV switch is larger in coverage data because it uses more
levels of macro expansion, while the SVE intrinsics only use one. In any
case, please try to avoid switches with 1000+ cases; they usually don't
optimize well.
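A hedged sketch of the shape of the change; FirstRVVBuiltin/LastRVVBuiltin are
hypothetical stand-ins, not identifiers from the patch:

```cpp
// Hypothetical bounds; the real code compares against generated enum values.
constexpr unsigned FirstRVVBuiltin = 1;
constexpr unsigned LastRVVBuiltin = 2000;

// Replaces a ~1000-case auto-generated switch over every RISCV builtin ID
// with a single range comparison.
bool isRVVBuiltin(unsigned BuiltinID) {
  return BuiltinID >= FirstRVVBuiltin && BuiltinID <= LastRVVBuiltin;
}
```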
LLD already produces a nice error message when sections exceed 4GB, and
this setRVA assertion causes LLD to crash instead of diagnosing the
error properly.
No test because we don't want slow tests that create 4GB files.
Group functions/structs in namespaces for better code readability.
Depends On D102123
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D102124
Make "target rank" a pass option of VectorToSCF.
Depends On D102101
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D102123
With zero-sized allocations we don't actually end up storing the
address tag to the memory tag space, so store it in the first byte of
the chunk instead so that we can find it later in getInlineErrorInfo().
Differential Revision: https://reviews.llvm.org/D102442
It's more likely that we have a UAF than an OOB in blocks that are
more than 1 block away from the fault address, so the UAF should
appear first in the error report.
Differential Revision: https://reviews.llvm.org/D102379
On x32, size_t == unsigned int, not unsigned long int:
../../../../../src-master/libsanitizer/sanitizer_common/sanitizer_linux_libcdep.cpp: In function ‘void __sanitizer::InitTlsSize()’:
../../../../../src-master/libsanitizer/sanitizer_common/sanitizer_linux_libcdep.cpp:209:55: error: invalid conversion from ‘__sanitizer::uptr*’ {aka ‘long unsigned int*’} to ‘size_t*’ {aka ‘unsigned int*’} [-fpermissive]
  209 |   ((void (*)(size_t *, size_t *))get_tls_static_info)(&g_tls_size, &tls_align);
      |                                                       ^~~~~~~~~~~
      |                                                       |
      |                                                       __sanitizer::uptr* {aka long unsigned int*}
Fix this by using size_t for g_tls_size. This is to fix:
https://bugs.llvm.org/show_bug.cgi?id=50332
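A sketch of the fix implied by the error above (declaration only; the real
change is in sanitizer_linux_libcdep.cpp):

```cpp
#include <cstddef>

// Declare g_tls_size with size_t so the call to get_tls_static_info matches
// glibc's (size_t *, size_t *) signature on x32 as well.
static size_t g_tls_size; // previously __sanitizer::uptr (long unsigned int on LP64)
```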
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D102446