https://reviews.llvm.org/D25932 made it so that clang always checks if
libLTO.dylib is present on disk, even if -flto is not being used. The
motivation for that change was that if a dependency happens to contain bitcode,
ld64 will try to load libLTO without -flto explicitly being enabled. However,
the change had the undesirable side effect of warning if libLTO.dylib doesn't
exist even if it isn't needed.
Change things so that -lto_library is always passed, regardless of whether the
dylib exists. ld64 only looks at this flag if it uses LTO. If the dylib
exists, all is well. If it doesn't, and LTO is not being used, all is well too.
If ld64 does end up using LTO and the dylib does not exist, ld64 will print
something like
ld: could not process llvm bitcode object file, because foo/libLTO.dylib could not be loaded file 'test.o' for architecture x86_64
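For illustration, a hedged sketch of the resulting ld64 invocation (visible via
clang -###; the path is illustrative):

  "/usr/bin/ld" ... "-lto_library" "<clang-resource-dir>/lib/libLTO.dylib" ... "test.o" "-o" "a.out"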
https://reviews.llvm.org/D26984
llvm-svn: 287685
SCCs.
These will be fairly expensive routines to call and might be abused in
real code, but they are quite useful when debugging or in asserts, and are
reasonable and well-formed properties to query.
I've used one of them in an assert that was requested in a code review
here. In subsequent commits I'll start using these routines more
heavily, for example in unittests etc. But this at least gets the
groundwork in place.
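As a hedged sketch of the assert-style usage (routine and variable names
illustrative, not necessarily the exact API):

  // Expensive graph-walking query; suitable for asserts and debugging only.
  assert(SourceC.isAncestorOf(TargetC) &&
         "Source SCC must be an ancestor of the target SCC!");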
Differential Revision: https://reviews.llvm.org/D25506
llvm-svn: 287682
GNU LD allows `ASSERT` commands to be in output section descriptions.
Note that LD also mandates that `ASSERT` commands in this context must
end with a semicolon.
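For example, a script along these lines is now accepted (sketch; the section
name, size bound, and message are illustrative; note the trailing semicolon):

  SECTIONS {
    .text : {
      *(.text)
      ASSERT(SIZEOF(.text) < 0x100000, ".text is too big");
    }
  }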
llvm-svn: 287677
This occurs during UINT_TO_FP v2f64 lowering.
We can easily generalize this to other horizontal ops (FHSUB, PACKSS, PACKUS) as required; we are already doing something similar with PACKUS in lowerV2I64VectorShuffle.
llvm-svn: 287676
Due to the way the preprocessor works, nodes can be half in a macro or a
different file. This means checking the file name of the start location
of a Decl is not a correct way of checking whether the entire Decl is in that
file. Remove that flawed assumption and replace it with a more effective
check: if the point we're looking for lies *between* the begin and end
locations of a Decl, look inside it.
This should make clang-rename more reliable (for example macro'd namespaces
threw it off before, as seen in the test case) and maybe a little faster
by pruning off more stuff that the RecursiveASTVisitor doesn't have to
drill into.
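A hedged sketch of the kind of input that used to throw it off (reduced; macro
and variable names illustrative):

  #define NAMESPACE_BEGIN namespace a {
  NAMESPACE_BEGIN      // the namespace Decl now *begins* inside a macro
  int Foo;             // yet Foo, inside it, is plainly in this file
  }

Filtering Decls by the file of their start location would prune the whole
namespace here, Foo included; the begin/end containment check does not.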
llvm-svn: 287649
Add missing unaligned store macros (ush/usw) and fix the existing
implementation of the unaligned load macros in order to generate
identical expansions with the GNU assembler.
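A hedged sketch of the macros in question (registers illustrative; the exact
expansions depend on endianness):

  ulw $4, 0($5)   # unaligned load word: expands to an lwl/lwr pair
  usw $4, 0($5)   # unaligned store word: expands to an swl/swr pair
  ush $4, 0($5)   # unaligned store half: byte stores plus a shift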
llvm-svn: 287646
No one actually had a mangler handy when calling this function, and
getSymbol itself went most of the way towards getting its own mangler
(with a local TLOF variable), so forcing all callers to supply one was
just extra complication.
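A hedged before/after sketch of a typical call site (assuming the
TargetMachine::getSymbol entry point):

  // Before:
  Mangler Mang;
  MCSymbol *Sym = TM.getSymbol(GV, Mang);
  // After:
  MCSymbol *Sym = TM.getSymbol(GV);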
llvm-svn: 287645
Summary: Splat vectors are canonicalized to BUILD_VECTORs, so the code can be simplified. NFC-ish.
Reviewers: craig.topper, delena, RKSimon, andreadb
Subscribers: RKSimon, llvm-commits
Differential Revision: https://reviews.llvm.org/D26678
llvm-svn: 287643
Summary:
The new error information contains the type of error (e.g. overlap or bad file path)
and the replacement(s) causing the error. This enables us to resolve some errors.
For example, for an insertion conflict at the same location, we need to know the
existing replacement that conflicts with the new replacement in order to calculate
the new position at which to insert before/after the existing replacement (for merging).
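A hedged sketch of how a caller might now react to a conflict (class and
accessor names illustrative, not necessarily the final API):

  if (llvm::Error Err = Replaces.add(R)) {
    llvm::handleAllErrors(std::move(Err),
                          [&](const clang::tooling::ReplacementError &RE) {
      // e.g. on an insertion conflict, shift the new replacement past the
      // existing replacement carried in the error, then retry the add.
    });
  }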
Reviewers: klimek, bkramer
Subscribers: djasper, cfe-commits
Differential Revision: https://reviews.llvm.org/D26853
llvm-svn: 287639
Summary:
Improve detection of global vs local variables.
Currently, when a global variable is optimized out or otherwise has an unknown
location (DW_AT_location is empty), it gets reported as local.
I added two new heuristics:
- if a mangled name is present, the variable is global (or static)
- if DW_AT_location is present but invalid, the variable is global (or static)
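Taken together, a hedged sketch of the combined decision (names hypothetical):

  // Report as global (or static) if either heuristic fires; otherwise fall
  // back to the existing classification.
  bool is_global_or_static =
      has_mangled_name || (has_DW_AT_location && !location_is_valid);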
Subscribers: lldb-commits
Differential Revision: https://reviews.llvm.org/D26908
llvm-svn: 287636
Add basic ComputeNumSignBits support for TRUNCATE ops for cases where the source's number of sign bits overlaps with the truncated size.
Improves X86 SIGN_EXTEND_IN_REG vector cases which were needlessly sign extending boolean vector results.
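For example (numbers illustrative): an i32 source with 20 known sign bits truncated to i16 discards the top 16 bits, leaving 20 - (32 - 16) = 4 known sign bits; a source with only 10 sign bits does not overlap the truncated size, so we learn nothing beyond the default of 1.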
Differential Revision: https://reviews.llvm.org/D26851
llvm-svn: 287635
/proc/self/maps can't be read atomically; this leads to episodic
crashes in libignore, as it thinks that a module is loaded twice.
See the new test for an example.
dl_iterate_phdr does not have this problem.
Switch libignore to dl_iterate_phdr.
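For reference, a minimal sketch of the replacement mechanism (callback body
illustrative):

  #include <link.h>

  static int OnModule(struct dl_phdr_info *Info, size_t Size, void *Data) {
    // Info->dlpi_name and Info->dlpi_addr identify one loaded module; the
    // loader presents a consistent snapshot, unlike a racy read of
    // /proc/self/maps.
    return 0;  // non-zero would stop the iteration early
  }

  // Usage: dl_iterate_phdr(OnModule, nullptr);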
llvm-svn: 287632
This commit handles cases where the size qualifier of an indirect memory reference operand in Intel syntax is missing (e.g. "vaddps xmm1, xmm2, [a]").
GCC will deduce the size qualifier for AVX512 vector and broadcast memory operands based on the possible matches:
"vaddps xmm1, xmm2, [a]" matches only “XMMWORD PTR” qualifier.
"vaddps xmm1, xmm2, [a]{1to4}" matches only “DWORD PTR” qualifier.
This is different from the current behavior of LLVM, which deduces the size qualifier based on the size of the memory operand.
For "vaddps xmm1, xmm2, [a]"
"char a;" will imply "BYTE PTR" qualifier
"short a;" will imply "WORD PTR" qualifier.
This commit aligns LLVM to GCC’s behavior.
This is the LLVM part of the review.
The Clang part of the review: https://reviews.llvm.org/D26587
Differential Revision: https://reviews.llvm.org/D26586
llvm-svn: 287630
Drop instructions that do not influence the memory impact of a basic block.
They are not needed to reproduce the original bug (verified) and would cause
random test noise should we later decide to model only the instructions that
have visible side effects.
llvm-svn: 287626
Add two store instructions at the end of the basic blocks that are required to
reproduce the original bug, to ensure we always process and model these basic
blocks. This makes the test case stable even if we later decide to bail out
early on basic blocks that do not modify the global state. Also add
additional check lines to verify how we model the basic block.
llvm-svn: 287625
There were several cases in X86 where we were unable to fully factor a ScopeMatcher but created nested ScopeMatchers for some portions of it. Then we created a SwitchType that split it up and further factored it so that we ended up with something like this:
SwitchType
  Scope
    Scope
      Sequence of matchers
      Some other sequence of matchers
    EndScope
    Another sequence of matchers
  EndScope
  ...Next type
This change turns it into this:
SwitchType
  Scope
    Sequence of matchers
    Some other sequence of matchers
    Another sequence of matchers
  EndScope
  ...Next type
Several other in-tree targets had similar nested scopes like this. Overall this doesn't save many bytes, but makes the isel output a little more regular.
llvm-svn: 287624
We add CHECK lines to this test case to make it easier to see the difference
between affine and non-affine memory accesses. We also change the test case to
use a parametric index expression, as otherwise our range analysis will
understand that the non-affine memory access can only access input[1],
which makes it difficult to see that the memory access is in fact modeled as a
non-affine access.
llvm-svn: 287623
There are two different reg/reg versions of movq, one from a GPR and one from the lower 64 bits of an XMM register. We were also inconsistent about which patterns used which instruction between VEX and EVEX, and I'm sure this caused the load size to misprint in Intel syntax output. This changes the load folding table to use the single i64mem memory form for folding both cases. But we need to use TB_NO_REVERSE to prevent a duplicate entry in the unfolding table.
llvm-svn: 287622
Summary:
The index and one of the table operands can be swapped by changing the opcode to the other version. Neither of these operands is the one that can load from memory, so this can't be used to increase memory folding opportunities.
We need to handle the unmasked forms and the kz forms. Since the load operand isn't being commuted, we can commute the load and broadcast instructions too.
Reviewers: igorb, delena, Ayal, Farhana, RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D25652
llvm-svn: 287621
We would attempt to access the symbol's section without ensuring that the symbol
was not absolute. When a relocation referenced by the assembler does not evaluate
to an absolute value, we record the relocation and query the section of the symbol.
Because the symbol is absolute, it has no section associated with it,
triggering an assertion. Just be more careful about accessing the section.
Addresses PR31064!
llvm-svn: 287619
Because we currently default-bind compound values for unions in the
store, this quick fix avoids the crash for this case.
Patch by Ilya Palachev and independently by Alexander Shaposhnikov!
Differential Revision: https://reviews.llvm.org/D26442
llvm-svn: 287618
Summary:
Shuffle lowering widens the element size of a shuffle if the elements are contiguous. This sometimes helps because wider element types have more shuffle options. If the shuffle is one of the arguments to a vselect, this shuffle widening can introduce a bitcast between the vselect and the shuffle. This will prevent isel from selecting a masked operation. If the shuffle can be written equally efficiently with a different element size that matches the vselect type, we should change the shuffle type to allow masking.
This patch does this conversion for all VALIGND/VALIGNQ sizes. It also supports turning 128-bit PALIGNR into VALIGND/VALIGNQ. This fixes the case shown in PR31018.
I plan to add support for more operations in future patches.
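A hedged IR-level sketch of the pattern in question (values illustrative):

  ; a 128-bit two-input rotate (PALIGNR-style shuffle) feeding a vector select
  %s = shufflevector <4 x i32> %a, <4 x i32> %b,
                     <4 x i32> <i32 1, i32 2, i32 3, i32 4>
  %r = select <4 x i1> %m, <4 x i32> %s, <4 x i32> %passthru
  ; with this change the pair can be selected as a single masked VALIGND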
Reviewers: RKSimon, zvi, delena
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D26902
llvm-svn: 287612