We don't parse ObjC v1 types from the runtime metadata the way we do for ObjC v2; instead we were creating empty types, which was ruining the i386 v1 debugging experience.
<rdar://problem/24093343>
llvm-svn: 289233
Reapplied with fix for PR31323 - X86 SSE2 vXi16 multiplies for illegal types were creating CONCAT_VECTORS nodes with vector inputs that might not total the number of elements in the result type.
llvm-svn: 289232
PCH files store the macro history for a given macro, and the whole history list
for one identifier is given to the Preprocessor at once via
Preprocessor::setLoadedMacroDirective(). This contained an assert that no macro
history exists yet for that identifier. That's usually true, but it's not true
for builtin macros, which are created in Preprocessor() before flags and PCHs
are processed. Luckily, ASTWriter stops writing macro history lists at builtins
(see shouldIgnoreMacro() in ASTWriter.cpp), so the head of the history list was
missing for builtin macros. So make the assert weaker, and splice the history
list to the existing single define for builtins.
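As a rough illustration of the splice (a self-contained sketch with a
hypothetical Directive node, not Clang's actual MacroDirective API): the
loaded history is walked to its oldest entry and the pre-existing builtin
define is chained behind it.

  #include <cassert>

  // Hypothetical stand-in for one entry in a macro's directive history.
  struct Directive {
    Directive *Previous = nullptr; // older directive, or null at the tail
  };

  // Instead of asserting that no history exists yet, splice the loaded
  // history list onto the existing single (builtin) define.
  void spliceLoadedHistory(Directive *&Head, Directive *Loaded) {
    assert(Loaded && "expected a loaded history list");
    if (!Head) {                 // common case: nothing registered yet
      Head = Loaded;
      return;
    }
    Directive *Tail = Loaded;    // find the oldest loaded directive
    while (Tail->Previous)
      Tail = Tail->Previous;
    Tail->Previous = Head;       // builtin define becomes the oldest entry
    Head = Loaded;
  }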
https://reviews.llvm.org/D27545
llvm-svn: 289228
This saves two pointers from FunctionDecl that were being used for some
rare and questionable C-only functionality. The DeclsInPrototypeScope
ArrayRef was added in r151712 in order to parse this kind of C code:
enum e {x, y};
int f(enum {y, x} n) {
  return x; // should return 1, not 0
}
The challenge is that we parse 'int f(enum {y, x} n)' in its own
function prototype scope that gets popped before we build the
FunctionDecl for 'f'. The original change was doing two questionable
things:
1. Saving all tag decls introduced in prototype scope on a TU-global
Sema variable. This is problematic when you have cases like this, where
'x' and 'y' shouldn't be visible in 'f':
void f(void (*fp)(enum { x, y } e)) { /* no x */ }
This patch fixes that, so now 'f' can't see 'x', which is consistent
with GCC.
2. Storing the decls in FunctionDecl in ActOnFunctionDeclarator so that
they could be used in ActOnStartOfFunctionDef. This is just an
inefficient way to move information around. The AST lives forever, but
the list of non-parameter decls in prototype scope is short lived.
Moving these things to the Declarator solves both of these issues.
Reviewers: rsmith
Subscribers: jmolloy, cfe-commits
Differential Revision: https://reviews.llvm.org/D27279
llvm-svn: 289225
Retrying after fixing overly aggressive load-store forwarding optimization.
Simplify Consecutive Merge Store Candidate Search
Now that address aliasing is much less conservative, push through a
simplified store-merging search which only checks for parallel stores
through the chain subgraph. This is cleaner, as it separates the handling
of non-interfering loads/stores from the store-merging logic.
When merging stores, search up the chain through a single load, and
find all possible stores by looking back down through a load and a
TokenFactor to all the stores visited. This improves the quality of the
output SelectionDAG and generally the output CodeGen (with some
exceptions).
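A heavily simplified, self-contained sketch of that candidate search
(hypothetical Node type, not the actual DAGCombiner code): walk up the
store's chain, optionally through a single load, then collect the stores
reachable back down through that chain node and one TokenFactor.

  #include <vector>

  // Hypothetical stand-ins for SelectionDAG nodes and their chain edges.
  struct Node {
    enum Kind { Store, Load, TokenFactor, Other } K;
    Node *Chain = nullptr;            // incoming chain operand
    std::vector<Node *> ChainUsers;   // nodes using this node as a chain
  };

  std::vector<Node *> findMergeCandidates(Node *St) {
    Node *Root = St->Chain;
    if (Root && Root->K == Node::Load) // look up through a single load
      Root = Root->Chain;

    std::vector<Node *> Candidates;
    if (!Root)
      return Candidates;

    // Look back down: parallel stores hang directly off the chain node
    // or off a TokenFactor that uses it.
    for (Node *User : Root->ChainUsers) {
      if (User->K == Node::Store)
        Candidates.push_back(User);
      else if (User->K == Node::TokenFactor)
        for (Node *TFUser : User->ChainUsers)
          if (TFUser->K == Node::Store)
            Candidates.push_back(TFUser);
    }
    return Candidates;
  }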
Additional Minor Changes:
1. Finishes removing unused AliasLoad code
2. Unifies the chain aggregation in the merged stores across
code paths
3. Re-add the Store node to the worklist after calling
SimplifyDemandedBits.
4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
arbitrary, but seemed sufficient to not cause regressions in
tests.
This finishes the change Matt Arsenault started in r246307 and
jyknight's original patch.
Many tests required some changes as memory operations are now
reorderable. Some tests relying on the order were changed to use
volatile memory operations.
Noteworthy tests:
CodeGen/AArch64/argument-blocks.ll -
It's not entirely clear what the test_varargs_stackalign test is
supposed to be asserting, but the new code looks right.
CodeGen/AArch64/arm64-memset-inline.ll -
CodeGen/AArch64/arm64-stur.ll -
CodeGen/ARM/memset-inline.ll -
The backend now generates *worse* code due to store merging
succeeding, as we do not do a 16-byte constant-zero store efficiently.
CodeGen/AArch64/merge-store.ll -
Improved, but there still seems to be an extraneous vector insert
from an element to itself?
CodeGen/PowerPC/ppc64-align-long-double.ll -
Worse code emitted in this case, due to the improved store->load
forwarding.
CodeGen/X86/dag-merge-fast-accesses.ll -
CodeGen/X86/MergeConsecutiveStores.ll -
CodeGen/X86/stores-merging.ll -
CodeGen/Mips/load-store-left-right.ll -
Restored correct merging of non-aligned stores
CodeGen/AMDGPU/promote-alloca-stored-pointer-value.ll -
Improved. Correctly merges buffer_store_dword calls
CodeGen/AMDGPU/si-triv-disjoint-mem-access.ll -
Improved. Sidesteps loading a stored value and
merges two stores
CodeGen/X86/pr18023.ll -
This test has been removed, as it was asserting incorrect
behavior. Non-volatile stores *CAN* be moved past volatile loads,
and now are.
CodeGen/X86/vector-idiv.ll -
CodeGen/X86/vector-lzcnt-128.ll -
It's basically impossible to tell what these tests are actually
testing. But it looks like the code got better due to the memory
operations being recognized as non-aliasing.
CodeGen/X86/win32-eh.ll -
Both loads of the securitycookie are now merged.
Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle
Subscribers: wdng, nhaehnle, nemanjai, arsenm, weimingz, niravd, RKSimon, aemerson, qcolombet, dsanders, resistor, tstellarAMD, t.p.northover, spatel
Differential Revision: https://reviews.llvm.org/D14834
llvm-svn: 289221
Part of the work for PR31323 - add extra asserts checking that the input vectors are of consistent type and together supply the correct number of vector elements for the result.
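For illustration, the checks are roughly of the following shape (a hedged
sketch with plain element counts, not the exact assertions added):

  #include <cassert>
  #include <vector>

  // All concatenated inputs must share one element count, and together
  // they must supply exactly the number of elements in the result type.
  void checkConcatInputs(unsigned ResultNumElts,
                         const std::vector<unsigned> &InputNumElts) {
    assert(!InputNumElts.empty() && "expected at least one input vector");
    for (unsigned N : InputNumElts)
      assert(N == InputNumElts.front() && "inconsistent input vector types");
    assert(InputNumElts.front() * InputNumElts.size() == ResultNumElts &&
           "inputs do not total the result element count");
  }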
llvm-svn: 289214
Adds support for bitcasting a little endian 'small element' vector to a 'large element' scalar/vector (e.g. v16i8 to v4i32 or v2i32 to i64), which is required for PR30845. We extract the known bits for each 'small element' part and concatenate the results together.
We can add support for big endian and 'large element' scalar/vector to 'small element' vector bitcasting once we have test cases for them.
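As a concrete illustration (a minimal sketch using plain integer masks,
not the SelectionDAG API): for a little endian v2i32 -> i64 bitcast, the
known bits of element 0 become bits [31:0] of the result and element 1
becomes bits [63:32].

  #include <cstdint>

  struct Known32 { uint32_t Zero, One; }; // per-element known-bit masks
  struct Known64 { uint64_t Zero, One; };

  // Concatenate the known bits of the two i32 elements into the i64 value.
  Known64 concatKnownBits(Known32 Elt0, Known32 Elt1) {
    Known64 K;
    K.Zero = (uint64_t(Elt1.Zero) << 32) | Elt0.Zero; // element 0 is low
    K.One  = (uint64_t(Elt1.One)  << 32) | Elt0.One;
    return K;
  }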
Differential Revision: https://reviews.llvm.org/D27129
llvm-svn: 289200
This test links against liblldb, so it can only run when the target arch is the
same arch as liblldb. We already have a decorator for that, so apply it.
While I'm in there, also mark the test as debug-info independent.
llvm-svn: 289199
The i386 glibc ld.so expects the .got.plt entry that is relocated by a
R_386_IRELATIVE relocation to point directly at the ifunc resolver and
not the address of the PLT entry + 6 (thus entering the lazy resolver).
This is also the case for ARM and I suspect it is because these use REL
relocations and can't use the addend field to store the address of the
ifunc resolver. If the lazy resolver is used we get an error message
stating that only R_386_JUMP_SLOT is supported.
As ARM and i386 share the same code, I've removed the ARM specific test
and added a writeIgotPlt() function that by default calls writeGotPlt().
ARM and i386 override this to write the address of the ifunc resolver.
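A hedged sketch of the hook shape described above (illustrative class and
signatures, not lld's actual interfaces): the default implementation
forwards to the regular GOT-PLT writer, and the i386/ARM writers override
it to store the ifunc resolver address directly.

  #include <cstdint>
  #include <cstring>

  // Hypothetical symbol record carrying the two addresses of interest.
  struct Sym { uint64_t PltEntryAddr; uint64_t IfuncResolverAddr; };

  struct TargetWriter {
    virtual ~TargetWriter() = default;

    // Ordinary .got.plt slot: point back into the PLT (entry + 6 on i386)
    // so the lazy resolver gets a chance to run.
    virtual void writeGotPlt(uint8_t *Buf, const Sym &S) const {
      uint32_t V = uint32_t(S.PltEntryAddr + 6);
      std::memcpy(Buf, &V, sizeof(V));
    }

    // New hook for slots relocated by R_386_IRELATIVE; by default it
    // behaves exactly like writeGotPlt().
    virtual void writeIgotPlt(uint8_t *Buf, const Sym &S) const {
      writeGotPlt(Buf, S);
    }
  };

  struct X86Writer : TargetWriter {
    // i386 (and ARM) write the ifunc resolver address itself so the
    // relocation never routes through the lazy resolver.
    void writeIgotPlt(uint8_t *Buf, const Sym &S) const override {
      uint32_t V = uint32_t(S.IfuncResolverAddr);
      std::memcpy(Buf, &V, sizeof(V));
    }
  };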
Differential Revision: https://reviews.llvm.org/D27581
llvm-svn: 289198
This patch changes where the C++ ABI headers are put during the build. Previously
they were put in the top level include directory (not the libc++ header directory),
which just pollutes the top level directory. Instead this patch creates a special
directory to put them in. The reason they can't be put under c++/v1 until after the
build is that libc++ uses the in-source headers, so we can't add the include path
of the libc++ headers in the object dir.
Additionally this patch teaches the test suite how to find the ABI headers,
and adds a demangling utility to help with debugging tests.
llvm-svn: 289195
This reverts commit r288916 as it is currently causing a crasher in
Halide. Reproducer on llvm.org/PR31323. While it might be that Halide is
generating invalid IR, llc shouldn't crash.
llvm-svn: 289194
sse_load_f32/f64 can also match loads that are zero extended to vectors. We shouldn't match that because we wouldn't be able to get the instruction to zero the upper bits like the intrinsic semantics would require for such a case.
There is a test case that does depend on this behavior.
llvm-svn: 289193
We could previously select an integer which would hit an assertion error
in pseudo expansion.
The new type will also generate the appropriate fixups if needed, which
wasn't done beforehand.
llvm-svn: 289192
Summary:
Scalar intrinsics have specific semantics about which input's upper bits are passed through to the output. The same input is also supposed to be the one used for the lower element when the mask bit is 0 in a masked operation. We aren't currently keeping these semantics with instruction selection.
This patch corrects this by introducing new scalar FMA ISD nodes that indicate whether operand 1 (one of the multiply inputs) or operand 3 (the addition/subtraction input) should pass through its upper bits.
We use this information to select the 213/132 forms for the operand 1 version and the 231 form for the operand 3 version.
We also use this information to suppress combining FNEG operations on the passthru input since semantically the passthru bits aren't negated. This is stronger than the earlier check added for the user being a SELECTS node, so we can remove that.
This fixes PR30913.
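A hedged, self-contained illustration of the instruction semantics being
relied on (simplified element-wise model, not the ISD node definitions):
in every scalar FMA form the upper elements of the result come from the
first (destination) operand, so the form is chosen to put the desired
passthru operand there.

  #include <array>
  using V4F = std::array<float, 4>; // 128-bit vector of 4 floats

  // 213 form: dst = src1 * dst + src2 in the low element; the upper
  // elements are passed through from dst. Use this (or 132) when the
  // passthru value is a multiply input.
  V4F fmadd213ss(V4F dst, V4F src1, V4F src2) {
    dst[0] = src1[0] * dst[0] + src2[0];
    return dst;
  }

  // 231 form: dst = src1 * src2 + dst in the low element; the upper
  // elements again come from dst. Use this when the passthru value is
  // the addition input.
  V4F fmadd231ss(V4F dst, V4F src1, V4F src2) {
    dst[0] = src1[0] * src2[0] + dst[0];
    return dst;
  }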
Reviewers: delena, zvi, v_klochkov
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D27144
llvm-svn: 289190
These were selecting directly to the VOP2 form instead
of VOP3 like the i32 instructions. Fixes regressions in
future commits where an immediate isn't folded because it was
initially used for the second operand.
Because uniform 16-bit operations are promoted to i32, it's
difficult to get a simple testcase where this matters. Fold
failures in SIFoldOperands here tend to be hidden by commute
and fold in SIShrinkInstructions.
llvm-svn: 289189