xchg with a mem operand has different locking semantics: if we unfold it
into an xchg r,r we lose the implicit lock. Likewise, we never want
to fold a register xchg into a memory one, as it would be a lot slower.
This triggers during LLVM selfhost.
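For context, a minimal C++ illustration of why the implicit lock matters (background only, not part of the patch): on x86, xchg with a memory operand is atomic even without a lock prefix, which is what std::atomic exchange relies on.
```cpp
#include <atomic>

std::atomic<int> Flag{0};

// On x86 this typically compiles to a single memory-form `xchg`, which is
// implicitly locked, making the exchange atomic. A register-register xchg
// carries no such guarantee, so unfolding the memory form into the register
// form would silently drop the atomicity.
int acquireFlag() { return Flag.exchange(1, std::memory_order_seq_cst); }
```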
llvm-svn: 304163
Following the request made in https://reviews.llvm.org/D32871, the
general documentation of the Vectorization Plan is hereby placed
under docs/Proposals.
llvm-svn: 304161
error C2971: 'llvm::ManagedStatic': template parameter 'Creator': 'CreateDefaultTimerGroup': a variable with non-static storage duration cannot be used as a non-type argument
llvm-svn: 304157
This used to be just leaked. r295370 made it use magic statics. This adds
a global destructor, which is something we'd like to avoid. It also creates
a weird situation where the mutex used by TimerGroup is re-created during
global shutdown and leaked.
Using a ManagedStatic here is also subtle as it relies on the mutex
inside of ManagedStatic to be recursive. I've added a test for that
in a previous change.
llvm-svn: 304156
The extending load possibility was missed in:
https://reviews.llvm.org/rL304072
We might want to handle these cases as a follow-up, but bailing out for now
to avoid miscompiling.
llvm-svn: 304153
Use VLREP when inserting one or more loads into a vector. This is more
efficient than first loading and then using a VLVGP.
Review: Ulrich Weigand
llvm-svn: 304152
Create a helper to deal with the common code for merging incoming values
together after they've been split during call lowering. There's likely
more stuff that can be commoned up here, but we'll leave that for later.
llvm-svn: 304143
Summary:
clang -c -mcpu=pwr9 test/CodeGen/PowerPC/build-vector-tests.ll causes an assertion failure during the binary encoding.
The failure occurs when a D-form load instruction takes two register operands instead of a register + an immediate.
This patch fixes the problem and also adds an assertion to catch this failure earlier before the binary encoding (i.e. during lit test).
The fix is from Nemanja Ivanovic @nemanjai.
Differential Revision: https://reviews.llvm.org/D33482
llvm-svn: 304133
Clang coerces structs into arrays, so it's a good idea to support them.
Most of the support boils down to getting the splitToValueTypes helper
to actually split types. We then use G_INSERT/G_EXTRACT to deal with the
parts.
llvm-svn: 304132
This is really a workaround for ThinLTO in particular - since it can
import partial CUs that may end up looking very similar/the same as
the same partial import in another ThinLTO compile.
An alternative fix would be to change the DICompileUnit metadata to
include a "primary file" or the like - and when importing for ThinLTO
set the primary file to the name of the DICompileUnit that is being
imported into. This involves changing the schema and would reduce the
excessive uniqueness in the hash that this change creates - allowing
diagnosing of more duplicate CUs than will be caught with this change.
But duplicate CUs can still be caught in non-ThinLTO builds & are mostly
a nuisance rather than a particularly deliberate/effective tool for
finding broken code. (arguably the hash could always include the dwo
file and nothing in fission would break, I think..)
Reapply of r304119 after adding a triple to the test and moving it
to the X86 directory.
llvm-svn: 304130
When the only use of a CU is for a subprogram that's only emitted into
the using CU (to avoid cross-CU references in DWO files), avoid creating
that CU at all.
Reapply of r304111 after adding a triple to the test and moving it
to the X86 directory.
llvm-svn: 304129
The reverted change introduced assertions such as:
```
MachineBasicBlock::succ_iterator
llvm::MachineBasicBlock::removeSuccessor(succ_iterator, bool): Assertion
`I != Successors.end() && "Not a current successor!"'
```
Mikael, the original committer, wrote me that he is working on a fix, but that
it will likely take some time to get this resolved. As this bug is one of the
last two issues that keep the AOSP buildbot from turning green, I am reverting
the original commit r302876.
I look forward to seeing this recommitted after the assertion has been
resolved.
llvm-svn: 304128
This was reverted due to buildbot breakages, and I was not familiar enough
with this code to investigate it. But while trying to get a
useful backtrace for the author, it turned out the fix was very
obvious. Resubmitting this patch as is; the fix will follow
separately so that it is not hidden in the larger
CL.
llvm-svn: 304122
This reverts commit 28cb1003507f287726f43c771024a1dc102c45fe as well
as all subsequent followups. llvm-tblgen currently segfaults with
this change, and it seems it has been broken on the bots all
day with no fixes in preparation. See, for example:
http://lab.llvm.org:8011/builders/clang-x86-windows-msvc2015/
llvm-svn: 304121
ConvertUTF.cpp has only a small dependency on LLVM, and since the code extensively uses fall-through switches,
I prefer disabling the warning for the whole file rather than adding attributes to each case.
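For illustration (not the actual ConvertUTF code), this is the per-case alternative being avoided; the decoder fragment below is hypothetical and uses the C++17 attribute where LLVM code of this era would use the LLVM_FALLTHROUGH macro:
```cpp
#include <cstdint>

// Every intentional fall-through needs its own annotation, which becomes
// noisy in long cascading switches such as UTF-8 byte accumulation.
uint32_t accumulate(const uint8_t *Src, int BytesRemaining) {
  uint32_t Ch = 0;
  switch (BytesRemaining) {
  case 3: Ch += *Src++; Ch <<= 6; [[fallthrough]];
  case 2: Ch += *Src++; Ch <<= 6; [[fallthrough]];
  case 1: Ch += *Src++; break;
  }
  return Ch;
}
```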
llvm-svn: 304120
This is really a workaround for ThinLTO in particular - since it can
import partial CUs that may end up looking very similar/the same as
the same partial import in another ThinLTO compile.
An alternative fix would be to change the DICompileUnit metadata to
include a "primary file" or the like - and when importing for ThinLTO
set the primary file to the name of the DICompileUnit that is being
imported into. This involves changing the schema and would reduce the
excessive uniqueness in the hash that this change creates - allowing
diagnosing of more duplicate CUs than will be caught with this change.
But duplicate CUs can still be caught in non-ThinLTO builds & are mostly
a nuisance rather than a particularly deliberate/effective tool for
finding broken code. (arguably the hash could always include the dwo
file and nothing in fission would break, I think..)
llvm-svn: 304119
Summary: CPI does not read the status register, but only writes it.
Reviewers: dylanmckay
Reviewed By: dylanmckay
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33223
llvm-svn: 304116
When the only use of a CU is for a subprogram that's only emitted into
the using CU (to avoid cross-CU references in DWO files), avoid creating
that CU at all.
llvm-svn: 304111
- Remove all uses of base sched model entries and set them all to
Unsupported so all the opcodes are described in
AArch64SchedFalkorDetails.td.
- Remove entries for unsupported half-float opcodes.
- Remove entries for unsupported LSE extension opcodes.
- Add entry for MOVbaseTLS (and set Sched in base td file entry to
WriteSys) and a few other pseudo ops.
- Fix a few FP load/store with reg offset entries to use the LSLfast
predicates.
- Add Q size BIF/BIT/BSL entries.
- Fix swapped Q/D sized CLS/CLZ/CNT/RBIT entries.
- Fix pre/post increment address register latency (this operand is
always dest 0).
- Fix swapped FCVTHD/FCVTHS/FCVTDH/FCVTDS entries.
- Fix XYZ resource over-usage on LD[1-4] opcodes.
llvm-svn: 304108
The X86 backend holds huge tables in order to map between the register and memory forms of each instruction.
This TableGen backend automatically generates all these tables with the appropriate flags for each entry.
Differential Revision: https://reviews.llvm.org/D32684
llvm-svn: 304088
Some register-register instructions can be encoded in 2 different ways, this happens when 2 register operands can be folded (separately).
For example if we look at the MOV8rr and MOV8rr_REV, both instructions perform exactly the same operation, but are encoded differently. Here is the relevant information about these instructions from Intel's 64-ia-32-architectures-software-developer-manual:
```
Opcode  Instruction   Op/En  64-Bit Mode  Compat/Leg Mode  Description
8A /r   MOV r8,r/m8   RM     Valid        Valid            Move r/m8 to r8.
88 /r   MOV r/m8,r8   MR     Valid        Valid            Move r8 to r/m8.
```
Here we can see that in order to enable folding of both the output and input registers, two "encodings" had to be defined, and as a result we got two 8-bit register-register move instructions.
In the X86 backend we define both of these instructions; usually one has a regular name (MOV8rr) while the other has a "_REV" suffix (MOV8rr_REV), must be marked with the isCodeGenOnly flag, and is not emitted from CodeGen.
Automatically generating the memory folding tables relies on matching instruction encodings, but in cases like this, where we want to map both memory forms of the 8-bit mov (MOV8rm & MOV8mr) to MOV8rr (not to MOV8rr_REV), we have to somehow point from MOV8rr_REV to the "regular" instruction, which here is MOV8rr.
This field enables that "pointing" mechanism, which the TableGen backend uses when generating the memory folding tables.
Differential Revision: https://reviews.llvm.org/D32683
llvm-svn: 304087
Summary:
I believe https://reviews.llvm.org/rL302576 introduced two bugs:
1) It produces duplicate distinct variables for every dbg.value describing the same variable.
To fix the problem I switched from getDistinct() to get() in the reparenting lambda in DebugLoc.cpp:
```
auto reparentVar = [&](DILocalVariable *Var) {
  return DILocalVariable::getDistinct(
```
2) It passes the plain name of NewFunction as the linkageName parameter to the DISubprogram constructor, which breaks this assert:
```
|| DeclLinkageName.empty()) || LinkageName == DeclLinkageName) && "decl has a linkage name and it is different"' failed.
#9 0x00007f5010261b75 llvm::DwarfUnit::applySubprogramDefinitionAttributes(llvm::DISubprogram const*, llvm::DIE&) /home/gor/llvm/lib/CodeGen/AsmPrinter/DwarfUnit.cpp:1173:3
```
(Edit: reproducer added)
Here is how https://reviews.llvm.org/rL302576 broke coroutine debug info.
Coroutine body of the original function is split into several parts by cloning and removing unneeded code.
All parts describe the original function and variables present in the original function.
For a simple case, prior to the split, the original function has these two blocks:
```
PostSpill:                                ; preds = %AllocaSpillBB
  call void @llvm.dbg.value(metadata i32 %x, i64 0, metadata !14, metadata !15), !dbg !13
  store i32 %x, i32* %x.addr, align 4
  ...
```
and
```
sw.epilog:                                ; preds = %sw.bb
  %x.addr.reload.addr = getelementptr inbounds %f.Frame, %f.Frame* %FramePtr, i32 0, i32 4, !dbg !20
  %4 = load i32, i32* %x.addr.reload.addr, align 4, !dbg !20
  call void @llvm.dbg.value(metadata i32 %4, i64 0, metadata !14, metadata !15), !dbg !13
!14 = !DILocalVariable(name: "x", arg: 1, scope: !6, file: !7, line: 55, type: !11)
```
Note that in the two blocks different expressions represent the same original user variable x.
Before rL302576, for every cloned function there was exactly one cloned DILocalVariable(name: "x"), as in:
```
define i8* @f(i32 %x) #0 !dbg !6 {
...
!6 = distinct !DISubprogram(name: "f", scope: !7, file: !7, line: 55, type: !8, isLocal: false, isDefinition: true, scopeLine: 55, flags: DIFlagPrototyped,
...
!14 = !DILocalVariable(name: "x", arg: 1, scope: !6, file: !7, line: 55, type: !11)
define internal fastcc void @f.resume(%f.Frame* %FramePtr) #0 !dbg !25 {
...
!25 = distinct !DISubprogram(name: "f", scope: !7, file: !7, line: 55, type: !8, isLocal: false, isDefinition: true, scopeLine: 55, flags: DIFlagPrototyped, isOptimized: false, unit: !0, variables: !2)
!28 = !DILocalVariable(name: "x", arg: 1, scope: !25, file: !7, line: 55, type: !11)
```
After rL302576, for every cloned function there were as many DILocalVariable(name: "x") nodes as there were "call void @llvm.dbg.value" calls for that variable.
This was causing asserts in VerifyDebugInfo and AssemblyPrinter.
Example:
```
!27 = distinct !DISubprogram(name: "f", linkageName: "f.resume", scope: !7, file: !7, line: 55, type: !8, isLocal: false, isDefinition: true, scopeLine: 55,
!29 = distinct !DILocalVariable(name: "x", arg: 1, scope: !27, file: !7, line: 55, type: !11)
!39 = distinct !DILocalVariable(name: "x", arg: 1, scope: !27, file: !7, line: 55, type: !11)
!41 = distinct !DILocalVariable(name: "x", arg: 1, scope: !27, file: !7, line: 55, type: !11)
```
Second problem:
Prior to rL302576, all clones were described by a DISubprogram referring to the original function.
```
define i8* @f(i32 %x) #0 !dbg !6 {
...
!6 = distinct !DISubprogram(name: "f", scope: !7, file: !7, line: 55, type: !8, isLocal: false, isDefinition: true, scopeLine: 55, flags: DIFlagPrototyped,
define internal fastcc void @f.resume(%f.Frame* %FramePtr) #0 !dbg !25 {
...
!25 = distinct !DISubprogram(name: "f", scope: !7, file: !7, line: 55, type: !8, isLocal: false, isDefinition: true, scopeLine: 55, flags: DIFlagPrototyped,
```
After rL302576, the DISubprogram for clones is of two minds: the plain name refers to the original name, while linkageName refers to the plain name of the clone.
```
!27 = distinct !DISubprogram(name: "f", linkageName: "f.resume", scope: !7, file: !7, line: 55, type: !8, isLocal: false, isDefinition: true, scopeLine: 55,
```
I think the assumption in AsmPrinter is that both name and linkageName should refer to the same entity. It asserts here when they are not:
```
|| DeclLinkageName.empty()) || LinkageName == DeclLinkageName) && "decl has a linkage name and it is different"' failed.
#9 0x00007f5010261b75 llvm::DwarfUnit::applySubprogramDefinitionAttributes(llvm::DISubprogram const*, llvm::DIE&) /home/gor/llvm/lib/CodeGen/AsmPrinter/DwarfUnit.cpp:1173:3
```
After this fix, behavior (with respect to coroutines) reverts to exactly what it was before, making coroutines debuggable again or, even more importantly, compilable with "-g".
Reviewers: dblaikie, echristo, aprantl
Reviewed By: dblaikie
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33614
llvm-svn: 304079
With a fix for an uninitialized variable.
Original commit message:
This change is intended for use by LLD in D33183.
The problem we have in LLD when building .gdb_index is that we need to know which section an address range belongs to.
Previously it was solved on the LLD side by providing fake section addresses through the llvm::LoadedObjectInfo
interface. We assigned file offsets as addresses. Then, after obtaining the ranges lists, for each range we had to find the section IDs.
That was not only slow, but also complicated the implementation and was the reason for incorrect behavior when
sections share the same offsets, as D33176 shows.
This patch makes the DWARF parsers return the section index as well. That solves the problem mentioned above.
Differential revision: https://reviews.llvm.org/D33184
llvm-svn: 304078
DagInits are allocated in a BumpPtrAllocator, so they are never destructed. This means the destructor for the SmallVector never runs.
To fix this we now allocate the vectors in the BumpPtrAllocator too, using TrailingObjects.
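A minimal sketch of the trailing-objects idea, independent of the actual DagInit/llvm::TrailingObjects code (all names here are invented): the variable-length payload lives in the same bump allocation as the object itself, so there is no separately owned buffer whose destructor would need to run.
```cpp
#include <cstddef>
#include <new>

// Toy bump allocator: memory is reclaimed wholesale; destructors never run.
struct Bump {
  alignas(alignof(std::max_align_t)) char Buf[1 << 16];
  size_t Used = 0;
  void *alloc(size_t N) { void *P = Buf + Used; Used += N; return P; }
};

struct Node {
  unsigned NumArgs;
  // The arguments are stored immediately after the Node object.
  int *args() { return reinterpret_cast<int *>(this + 1); }

  static Node *create(Bump &A, const int *Args, unsigned N) {
    void *Mem = A.alloc(sizeof(Node) + N * sizeof(int));
    Node *Res = new (Mem) Node{N};
    for (unsigned I = 0; I != N; ++I)
      Res->args()[I] = Args[I];
    return Res;
  }
};
```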
llvm-svn: 304077
The optimistic delinearization implemented in LLVM detects array sizes by
looking for non-linear products between parameters and induction variables.
In OpenCL code, such products often look like:
A[get_global_id(0) * N + get_global_id(1)]
Hence, the IV is hidden in the get_global_id() call and consequently
delinearization would fail as no induction variable is available that helps
us to identify N as an array size parameter.
We now use a very simple heuristic to change this. We assume that each parameter
that comes directly from a function call is a hidden induction variable. As
a result, we can delinearize the access above to:
A[get_global_id(0)][get_global_id(1)]
llvm-svn: 304073
If we have (extract_subvector(load wide vector)) with no other users,
that can just be (load narrow vector). This is intentionally conservative.
Follow-ups may loosen the one-use constraint to account for the extract cost
or just remove the one-use check.
The memop chain updating is based on code that already exists multiple times
in x86 lowering, so that should be pulled into a helper function as a follow-up.
Background: this is a potential improvement noticed via regressions caused by
making x86's peekThroughBitcasts() not loop on consecutive bitcasts (see
comments in D33137).
Differential Revision: https://reviews.llvm.org/D33578
llvm-svn: 304072
These used to hold std::unique_ptrs that managed the allocations for the various *Init objects so that they would be deleted on exit. Everything is allocated in a BumpPtrAllocator now, so there is no reason for these to still exist.
llvm-svn: 304066
I've taken the approach from the LoopInfo test:
* Rather than running in the pass manager, just build the analyses manually (a sketch of the pattern follows below)
* Split out the common parts (makeLLVMModule, runWithDomTree) into helpers
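Roughly, the "build the analyses manually" pattern looks like this (the IR string and function name are placeholders, not the test's actual content):
```cpp
#include "llvm/AsmParser/Parser.h"
#include "llvm/IR/Dominators.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/SourceMgr.h"
#include <memory>

// Parse a module and construct the analysis directly, no pass manager.
static void runOnAssembly(const char *Assembly) {
  llvm::LLVMContext Ctx;
  llvm::SMDiagnostic Err;
  std::unique_ptr<llvm::Module> M =
      llvm::parseAssemblyString(Assembly, Err, Ctx);
  if (llvm::Function *F = M->getFunction("f")) {
    llvm::DominatorTree DT(*F); // computed eagerly, ready to query
    (void)DT;
  }
}
```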
Differential Revision: https://reviews.llvm.org/D33617
llvm-svn: 304061
Summary:
This fixes introduction of an incorrect inttoptr/ptrtoint pair in
the included test case which makes use of non-integral pointers. I
suspect there are more cases like this left, but this takes care of
the one I was seeing at the moment.
Reviewers: sanjoy
Subscribers: mzolotukhin, llvm-commits
Differential Revision: https://reviews.llvm.org/D33129
llvm-svn: 304058
Rewrite fixupKills() to use the LivePhysRegs class. This simplifies the
code and fixes a bug where the CSR registers in return blocks were missed,
leading to invalid kill flags. Also remove the unnecessary rule that we
wouldn't set kill flags on tied operands.
No tests as I have an upcoming commit improving MachineVerifier checks
to catch these cases in multiple existing lit tests.
llvm-svn: 304055
This reverts commit r299287 plus clean-ups.
The localizer pass is a helper pass that could be run at O0 in the GISel
pipeline to work around the deficiency of the fast register allocator.
It basically shortens the live-ranges of the constants so that the
allocator does not spill all over the place.
Long term fix would be to make the greedy allocator fast.
llvm-svn: 304051
The recommit fixes a bug with the ExtractValue and InsertValue ops. For those
ops, some varargs inside GVN::Expression are not value numbers but raw index
numbers. It is wrong to phi-translate raw index numbers, and the fix is
to stop doing that.
Right now scalarpre doesn't have phi-translate support, so it misses some
simple PRE opportunities. In the following testcase, current scalarpre cannot
recognize that the last "a * b" is fully redundant, because the a and b used by
the last "a * b" expr are both defined by PHIs.
```
long a[100], b[100], g1, g2, g3;
__attribute__((pure)) long goo();
void foo(long a, long b, long c, long d) {
  g1 = a * b;
  if (__builtin_expect(g2 > 3, 0)) {
    a = c;
    b = d;
    g2 = a * b;
  }
  g3 = a * b; // fully redundant.
}
```
The patch adds phi-translate support in scalarpre. This is only a temporary
solution before the newpre based on newgvn is available.
Differential Revision: https://reviews.llvm.org/D32252
llvm-svn: 304050
One case in BranchRelaxation did not compute live-ins after creating a
new block. This is caught by existing tests with an upcoming commit
that will improve MachineVerifier checking of live-in lists.
llvm-svn: 304049
- Rewrite livein calculation to use the computeLiveIns() helper
function. This is slightly less efficient but easier to reason about
and doesn't unnecessarily add pristine and reserved registers[1]
- Zero the status register at the beginning of the loop to make sure it
has a defined value.
- Remove kill flags of values that need to stay alive throughout the loop.
[1] An upcoming commit of mine will tighten the MachineVerifier to catch
these.
llvm-svn: 304048
Previously, we called simplifyPossiblyCastedAndOrOfICmps twice with the operands commuted, but the call to simplifyAndOrOfICmpsWithConstants further down already handles commuting and doesn't need to be called both ways.
This patch pushes double calls further down to just the individual routines that need to be called twice.
Differential Revision: https://reviews.llvm.org/D33603
llvm-svn: 304044
Wrong assembly code is generated for a simple program with
clang. If clang only produces IR and llc is used
for IR lowering and optimization, correct assembly
code is generated.
The main reason is that clang feeds the default Reloc::Static
to llvm while llc feeds no RelocMode, in which case the
BPF backend picks Reloc::PIC_ mode. This leads to different
IR lowering behavior: clang permits global_addr+off folding
while llc doesn't.
This patch introduces an isOffsetFoldingLegal function into the
BPF backend that always returns false. This makes clang and llc
behave the same for the lowering.
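A sketch of what such an override looks like (isOffsetFoldingLegal is a real TargetLowering virtual hook; the body below simply mirrors the always-false behavior described above and omits the header declaration):
```cpp
// BPFISelLowering.cpp (illustrative): never fold a constant offset into a
// global address node, so clang and llc lower globals identically.
bool BPFTargetLowering::isOffsetFoldingLegal(
    const GlobalAddressSDNode *GA) const {
  return false;
}
```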
Bug https://bugs.llvm.org//show_bug.cgi?id=33183
has more detailed explanation.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
llvm-svn: 304043
It's a workaround because the test was only passing flakily to begin with, but
it looks like (going off commit history) it really did want to test in
the presence of debug info, so keep that behavior (by adding something
to the CU so it's not dropped) and restore the flaky pass in the process.
(I added a FIXME in case someone else decides to look at it later.)
llvm-svn: 304042
[AMDGPU] add intrinsic for s_getpc
Summary: The s_getpc instruction is exposed as intrinsic llvm.amdgcn.s.getpc.
Patch by Tim Corringham
llvm-svn: 304031
Summary:
r295737 included a fix for leaking libraries loaded via DynamicLibrary::addPermanentLibrary.
This created a problem where static constructors in a library could insert llvm::ManagedStatic objects before DynamicLibrary would register its own ManagedStatic, meaning a crash could occur at shutdown.
r301562 exacerbated this problem by cleaning up the DynamicLibrary ManagedStatic during llvm_shutdown.
Reviewers: v.g.vassilev, lhames, efriedma
Reviewed By: efriedma
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33581
llvm-svn: 304027
This code was replicated two additional times to handle commuted cases, but I think a commutable matcher can take care of it.
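For reference, this is the kind of collapse a commutative matcher enables (an illustrative pattern, not the exact one touched by this patch):
```cpp
#include "llvm/IR/PatternMatch.h"
#include "llvm/IR/Value.h"

using namespace llvm;
using namespace llvm::PatternMatch;

// m_c_And tries both operand orders, so one call replaces a hand-written
// pair of checks for (A & ~B) and (~B & A).
static bool matchAndOfValueAndNot(Value *V, Value *A, Value *B) {
  return match(V, m_c_And(m_Specific(A), m_Not(m_Specific(B))));
}
```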
Differential Revision: https://reviews.llvm.org/D33585
llvm-svn: 304022
The tests here have operands commuted to provide more coverage. I also commuted one of the instructions in the scalar tests so the 4 tests cover the 4 commuted variations.
llvm-svn: 304021
Consistent with GCC and addresses a shortcoming with ThinLTO where many
imported CUs may end up being empty (because the functions imported from
them either ended up not being used (and were then discarded, since
they're imported as available_externally) or optimized away entirely).
Test cases previously testing empty CUs (either intentionally, or
because they didn't need anything more complicated) had a trivial 'int'
or similar basic type added to their retained types list.
This is a first order approximation - a deeper implementation could do
things like:
1) Be more lazy about construction of the CU - for example if two CUs
containing a single identical retained type are linked together, with
this change one of the two CUs will be produced but empty (since a
duplicate type won't be produced).
2) Go further and invert all the CU links the same way the subprogram
link is inverted - keep named CU lists of retained types, macros, etc,
and have those link back to the CU. Then if they're emitted, the CU is
emitted, but never otherwise - this would allow the metadata itself to
be dropped earlier too, though it seems unlikely that's an important
optimization as there shouldn't be many CUs relative to the number of
other entities.
llvm-svn: 304020
The whole-program-devirt pass needs to run at -O0 because only it
knows about the llvm.type.checked.load intrinsic: it needs to both
lower the intrinsic itself and handle it in the summary.
Differential Revision: https://reviews.llvm.org/D33571
llvm-svn: 304019
Every other place in InstCombine that uses these methods in ValueTracking already passes this information. This makes the remaining sites consistent.
Differential Revision: https://reviews.llvm.org/D33567
llvm-svn: 304018
This produced 'strange' DWARF anyway - the CU would have no ranges (or
at least not a range including the inlined code) nor any subprogram or
inlined_subroutine - yet the line table would have entries for these
instructions.
(this actually becomes more relevant with changes coming after this,
where a CU without any contents will be omitted entirely - so there
would be no line table to put this on anyway)
llvm-svn: 304004
This change is intended for use by LLD in D33183.
The problem we have in LLD when building .gdb_index is that we need to know which section an address range belongs to.
Previously it was solved on the LLD side by providing fake section addresses through the llvm::LoadedObjectInfo
interface. We assigned file offsets as addresses. Then, after obtaining the ranges lists, for each range we had to find the section IDs.
That was not only slow, but also complicated the implementation and was the reason for incorrect behavior when
sections share the same offsets, as D33176 shows.
This patch makes the DWARF parsers return the section index as well. That solves the problem mentioned above.
Differential revision: https://reviews.llvm.org/D33184
llvm-svn: 304002
Re-commit r303938 and r303954 with a fix for addLiveIns(): the internal
addPristines() function must be called on an empty set or it may
accidentally reset saved registers.
- addLiveOutsNoPristines() needs to add callee saved registers that are
actually saved and restored somewhere to the set (they are not
pristine).
- Cleanup/rewrite the code for addLiveOuts()/addLiveOutsNoPristines().
This fixes the problem from D32156.
Differential Revision: https://reviews.llvm.org/D32464
llvm-svn: 304001
In the best case:
extract (binop (concat X1, X2), (concat Y1, Y2)), N --> binop XN, YN
...we kill all of the extract/concat and just have narrow binops remaining.
If only one of the binop operands is amenable, this transform is still
worthwhile because we kill some of the extract/concat.
Optional bitcasting makes the code more complicated, but there doesn't
seem to be a way to avoid that.
The TODO about extending to more than bitwise logic is there because we really
will regress several x86 tests including madd, psad, and even a plain
integer-multiply-by-2 or shift-left-by-1. I don't think there's anything
fundamentally wrong with this patch that would cause those regressions; those
folds are just missing or brittle.
If we extend to more binops, I found that this patch will fire on at least one
non-x86 regression test. There's an ARM NEON test in
test/CodeGen/ARM/coalesce-subregs.ll with a pattern like:
```
  t5: v2f32 = vector_shuffle<0,3> t2, t4
  t6: v1i64 = bitcast t5
  t8: v1i64 = BUILD_VECTOR Constant:i64<0>
  t9: v2i64 = concat_vectors t6, t8
  t10: v4f32 = bitcast t9
  t12: v4f32 = fmul t11, t10
  t13: v2i64 = bitcast t12
  t16: v1i64 = extract_subvector t13, Constant:i32<0>
```
There was no functional change in the codegen from this transform from what I
could see though.
For the x86 test changes:
1. PR32790() is the closest call. We don't reduce the AVX1 instruction count in that case,
but we improve throughput. Also, on a core like Jaguar that double-pumps 256-bit ops,
there's an unseen win because two 128-bit ops have the same cost as the wider 256-bit op.
SSE/AVX2/AVX512 are not affected, which is expected because only AVX1 has the extract/concat
ops to match the pattern.
2. do_not_use_256bit_op() is the best case. Everyone wins by avoiding the concat/extract.
Related bug for IR filed as: https://bugs.llvm.org/show_bug.cgi?id=33026
3. The SSE diffs in vector-trunc-math.ll are just scheduling/RA, so nothing real AFAICT.
4. The AVX1 diffs in vector-tzcnt-256.ll are all the same pattern: we reduced the instruction
count by one in each case by eliminating two insert/extract while adding one narrower logic op.
https://bugs.llvm.org/show_bug.cgi?id=32790
Differential Revision: https://reviews.llvm.org/D33137
llvm-svn: 303997
Currently getOptimalMemOpType returns i32 for large enough sizes without
checking for alignment, leading to poor code generation when misaligned accesses
aren't permitted, as we generate a word store and then later split it up into byte
stores. This means we inadvertently go over the MaxStoresPerMemcpy limit, and for
memset we splat the memset value into a word and then immediately split it up
again.
Fix this by leaving it up to FindOptimalMemOpLowering to figure out which type
to use, but also fix a bug there where it wasn't correctly checking whether
misaligned memory accesses are allowed.
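The missing check is roughly of this shape (a sketch only: allowsMisalignedMemoryAccesses is the real TargetLowering hook, but its exact parameter list has varied across LLVM versions, and the surrounding logic is simplified here):
```cpp
// Inside type selection for a memcpy/memset lowering (illustrative):
bool Fast = false;
if (Align < VT.getStoreSize() &&
    !TLI.allowsMisalignedMemoryAccesses(VT, AddrSpace, Align, &Fast))
  VT = MVT::i8; // the wide type is unusable; fall back to narrower accesses
```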
Differential Revision: https://reviews.llvm.org/D33442
llvm-svn: 303990
With a fix for the test compilation.
Initial commit message:
This change is intended for use by LLD in D33183.
The problem we have in LLD when building .gdb_index is that we need to know
which section an address range belongs to.
Previously it was solved on the LLD side by providing fake section addresses
through the llvm::LoadedObjectInfo interface. We assigned file offsets as addresses.
Then, after obtaining the ranges lists, for each range we had to find the section IDs.
That was not only slow, but also complicated the implementation and was the reason
for incorrect behavior when sections share the same offsets, as D33176 shows.
This patch makes the DWARF parsers return the section index as well.
That solves the problem mentioned above.
Differential revision: https://reviews.llvm.org/D33184
llvm-svn: 303983
Running unittests/Support/DynamicLibrary/DynamicLibraryTests fails when LLVM is
configured with LLVM_EXPORT_SYMBOLS_FOR_PLUGINS=ON, because the test's version
script only contains symbols extracted from the static libraries that the test
links with, but not those from the main object/executable itself. The patch
explicitly exports the one symbol needed by the test.
This change fixes https://bugs.llvm.org/show_bug.cgi?id=32893
Patch authored by Momchil Velikov.
Differential Revision: https://reviews.llvm.org/D33490
llvm-svn: 303979
This change is intended for use by LLD in D33183.
The problem we have in LLD when building .gdb_index is that we need to know
which section an address range belongs to.
Previously it was solved on the LLD side by providing fake section addresses
through the llvm::LoadedObjectInfo interface. We assigned file offsets as addresses.
Then, after obtaining the ranges lists, for each range we had to find the section IDs.
That was not only slow, but also complicated the implementation and was the reason
for incorrect behavior when sections share the same offsets, as D33176 shows.
This patch makes the DWARF parsers return the section index as well.
That solves the problem mentioned above.
Differential revision: https://reviews.llvm.org/D33184
llvm-svn: 303978
The patch rL303730 was reverted because the test lsr-expand-quadratic.ll failed on
many non-X86 configs with this patch. The reason is that the patch
makes a correct fix that changes the optimizer's behavior for this test.
Without the change, LSR was making an overconfident simplification based on a
wrong SCEV. Apparently it did not need the IV analysis to do this. With the
change, it chose a different way to simplify (one that wasn't so confident), and
this way required the IV analysis. Now, following the right execution path,
LSR tries to make a transformation relying on the IV Users analysis. This analysis
is target-dependent due to this code:
is target-dependent due to this code:
```
// LSR is not APInt clean, do not touch integers bigger than 64-bits.
// Also avoid creating IVs of non-native types. For example, we don't want a
// 64-bit IV in 32-bit code just because the loop has one 64-bit cast.
uint64_t Width = SE->getTypeSizeInBits(I->getType());
if (Width > 64 || !DL.isLegalInteger(Width))
  return false;
```
To make the expected transformation in this test case, the type i32 needs to be
legal for the specified data layout. When the test runs on some non-X86
configuration (e.g. pure ARM 64), opt gets confused by the specified target,
does not use it, and rejects the specified data layout as well. Instead,
it uses some default layout that does not treat i32 as a legal type
(currently the layout that is used when none is specified has no
legal types at all). As a result, the transformation we expect does
not happen for this test.
This re-enabling patch has no source code changes compared to the
original patch rL303730. The only difference is that the failing test has been
moved to the X86 directory and now requires running on x86 only, to comply
with the specified target triple and data layout.
Differential Revision: https://reviews.llvm.org/D33543
llvm-svn: 303971
Re-commit r303937 + r303949 as they were not the cause for the build
failures.
We do not track liveness of reserved registers so adding them to the
liveins list in computeLiveIns() was completely unnecessary.
llvm-svn: 303970
block.
This allows writing much more natural and readable range-based for loops
directly over the PHI nodes. It also takes advantage of the same tricks
for terminating the sequence as the hand-coded versions.
I've replaced one example of this, mostly to showcase the difference, and
I've added a unit test to make sure the facilities really work the way
they're intended. I want to use this inside of SimpleLoopUnswitch, but it
seems generally nice.
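A usage sketch, assuming the range is exposed as an accessor on BasicBlock (the accessor name below is inferred from the description, not quoted from the patch):
```cpp
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instructions.h"

// Redirect every PHI's incoming edge from OldPred to NewPred, iterating
// the PHI nodes directly instead of scanning from begin() until the first
// non-PHI instruction.
static void redirectPHIs(llvm::BasicBlock &BB, llvm::BasicBlock *OldPred,
                         llvm::BasicBlock *NewPred) {
  for (llvm::PHINode &PN : BB.phis()) { // assumed range accessor
    int Idx = PN.getBasicBlockIndex(OldPred);
    if (Idx >= 0)
      PN.setIncomingBlock(Idx, NewPred);
  }
}
```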
Differential Revision: https://reviews.llvm.org/D33533
llvm-svn: 303964
Summary:
RelocVisitor had too many, too-small functions. This patch groups them
by architecture rather than having one function per relocation type.
Reviewers: grimar, dblaikie
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33580
llvm-svn: 303950
We have a lot of complicated logic to determine where padding
is in a record, and the debug info doesn't always provide enough
information to figure it out with laser precision. In this case
we were putting the padding in the wrong place, causing an
out-of-bounds access on a BitVector.
Right now we decide that any trailing padding of a child type
will be truncated during record layout, but this is only true
insofar as the class is still sized properly to end on an
alignment boundary, which the algorithm doesn't yet know about.
For now, just don't crash, even though we display padding twice
in this case.
llvm-svn: 303946
Summary:
For various clang analyzer tests, which were unsupported, I got lit
exceptions, similar to the following:
Exception during script execution:
```
Traceback (most recent call last):
  File "utils/lit/lit/run.py", line 190, in execute_test
    result = test.config.test_format.execute(test, lit_config)
  File "tools/clang/test/Analysis/analyzer_test.py", line 11, in execute
    if result.code == lit.Test.FAIL:
AttributeError: 'tuple' object has no attribute 'code'
```
This is because executeShTest() in utils/lit/lit/TestRunner.py is
supposed to return a lit.Test.Result object, but in case of unsupported
tests, it returns a plain tuple.
Fix this by returning a properly initialized lit.Test.Result object
instead.
Reviewers: rnk, rafael, modocache
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33579
llvm-svn: 303943
Prevailing symbol resolution is necessary for correctness. Without
this we can end up dropping a referenced linkonce symbol from the link.
Differential Revision: https://reviews.llvm.org/D33570
llvm-svn: 303939
- addLiveOutsNoPristines() needs to add callee saved registers that are
actually saved and restored somewhere to the set (they are not
pristine).
- Cleanup/rewrite the code for addLiveOuts()/addLiveOutsNoPristines().
This fixes the problem from D32156.
Differential Revision: https://reviews.llvm.org/D32464
llvm-svn: 303938
Merging two type streams is one of the most time consuming
parts of generating a PDB, and as such it needs to be as
fast as possible. The visitor abstractions used for interoperating
nicely with many different types of inputs and outputs have
been used widely and help greatly for testability and implementing
tools, but the abstractions build up and get in the way of
performance.
This patch removes all of the visitation stuff from the type
stream merger, essentially re-inventing the leaf / member switch
and loop, but at a very low level. This allows us many other
optimizations, such as not actually deserializing *any* records
(even member records which don't describe their own length), as
the operation of "figure out how long this record is" is somewhat
faster than "figure out how long this record *and* get all its
fields out". Furthermore, whereas before we had to deserialize,
re-write type indices, then re-serialize, now we don't have to
do any of those 3 steps. We just find out where the type indices
are and pull them directly out of the byte stream and re-write
them.
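The core trick, reduced to a toy form (offsets and helper names are invented for illustration; the real code derives the index locations from the record kind):
```cpp
#include <cstdint>
#include <cstring>

// Patch a 32-bit type index in place inside a serialized record, without
// deserializing the record into a C++ structure first.
static void rewriteIndexAt(uint8_t *Record, size_t Offset,
                           uint32_t NewIndex) {
  std::memcpy(Record + Offset, &NewIndex, sizeof(NewIndex));
}

// For a (hypothetical) record kind whose single type index lives at byte
// offset 4, remapping is one read-modify-write on the raw bytes.
static void remapRecord(uint8_t *Record, const uint32_t *IndexMap) {
  uint32_t Old;
  std::memcpy(&Old, Record + 4, sizeof(Old));
  rewriteIndexAt(Record, 4, IndexMap[Old]);
}
```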
This is worth a 50-60% performance increase. On top of all other
optimizations that have been applied this week, I now get the
following numbers when linking lld.exe and lld.pdb
MSVC: 25.67s
Before This Patch: 18.59s
After This Patch: 8.92s
So this is a huge performance win.
Differential Revision: https://reviews.llvm.org/D33564
llvm-svn: 303935
Previously this code was defensive to the situation in which the debug
info scopes would lead to a different subprogram from the subprogram in
the CU's subprogram list (this could've happened with linkonce
functions, etc as per the comment being removed). Since the CU<>SP link
reversal this is no longer possible.
llvm-svn: 303933
I forgot to forward the chain, causing some missing instruction
dependencies. The test crashes the compiler without this patch.
Inspired by the test case, D33519 also tries to remove the extra sync.
Differential Revision: https://reviews.llvm.org/D33573
llvm-svn: 303931
We have wrappers for several other ValueTracking methods that take care of passing all of the analysis and assumption cache parameters. This extends it to isKnownToBeAPowerOfTwo.
llvm-svn: 303924
Right now scalarpre doesn't have phi-translate support, so it misses some
simple PRE opportunities. In the following testcase, current scalarpre cannot
recognize that the last "a * b" is fully redundant, because the a and b used by
the last "a * b" expr are both defined by PHIs.
```
long a[100], b[100], g1, g2, g3;
__attribute__((pure)) long goo();
void foo(long a, long b, long c, long d) {
  g1 = a * b;
  if (__builtin_expect(g2 > 3, 0)) {
    a = c;
    b = d;
    g2 = a * b;
  }
  g3 = a * b; // fully redundant.
}
```
The patch adds phi-translate support in scalarpre. This is only a temporary
solution before the newpre based on newgvn is available.
Differential Revision: https://reviews.llvm.org/D32252
llvm-svn: 303923
Rename the DEBUG_TYPE to match the names of corresponding passes where
it makes sense. Also establish the pattern of simply referencing
DEBUG_TYPE instead of repeating the pass name where possible.
llvm-svn: 303921
Originally this was intended to be set up so that when linking
a PDB which refers to a type server, it would only visit the
PDB once, and on subsequent visitations it would just skip it
since all the records had already been added.
Due to some C++ scoping issues, this was not occurring and it
was revisiting the type server every time, which caused every
record to end up being thrown away on all subsequent visitations.
This doesn't affect the performance of linking clang-cl generated
object files because we don't use type servers, but when linking
object files and libraries generated with /Zi via MSVC, this means
only 1 object file has to be linked instead of N object files, so
the speedup is quite large.
llvm-svn: 303920
Previously, every time we wanted to serialize a field list record, we
would create a new copy of FieldListRecordBuilder, which would in turn
create a temporary instance of TypeSerializer, which itself had a
std::vector<> that was about 128K in size. So this 128K allocation was
happening every time. We can re-use the same instance over and over; we
just have to clear its internal hash table and seen-records list between
each run. This saves us from the constant re-allocations.
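The reuse pattern in miniature (class and member names invented; the real classes are FieldListRecordBuilder and TypeSerializer):
```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

struct Serializer {
  std::vector<uint8_t> Buffer;       // large; expensive to recreate
  std::unordered_set<uint64_t> Seen; // hashes of records already emitted

  // Reset logical state but keep Buffer's capacity for the next run.
  void reset() {
    Buffer.clear();
    Seen.clear();
  }
};

// One long-lived instance serves every field list record, instead of
// constructing (and allocating ~128K inside) a fresh serializer each time.
```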
This is worth an ~18.5% speed increase (3.75s -> 3.05s) in my tests.
Differential Revision: https://reviews.llvm.org/D33506
llvm-svn: 303919
Previously it would do a character-by-character search for a null
terminator, to account for the fact that an arbitrary stream need not
store its data contiguously, so you couldn't just do a memchr. However, the
stream API has a function which will return the longest contiguous chunk
without doing a copy, and by using this function we can do a memchr on the
individual chunks. For certain types of streams, like data from object
files, this is guaranteed to find the null terminator with only a
single memchr; but even with discontiguous streams such as
MappedBlockStream, it's rare that any given string will cross a block
boundary, so even those will almost always be satisfied with a single
memchr.
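A sketch of the technique (the chunk representation here is invented; the real code pulls contiguous runs from the stream API mentioned above):
```cpp
#include <cstdint>
#include <cstring>
#include <utility>
#include <vector>

using Chunk = std::pair<const uint8_t *, size_t>;

// Return the offset of the first '\0' across all chunks, or -1 if none:
// one memchr per contiguous run instead of a byte-at-a-time scan.
long findNullTerminator(const std::vector<Chunk> &Chunks) {
  long Base = 0;
  for (const Chunk &C : Chunks) {
    if (const void *P = std::memchr(C.first, '\0', C.second))
      return Base + (static_cast<const uint8_t *>(P) - C.first);
    Base += static_cast<long>(C.second);
  }
  return -1;
}
```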
This optimization is worth a 10-12% reduction in link time (4.2 seconds ->
3.75 seconds)
Differential Revision: https://reviews.llvm.org/D33503
llvm-svn: 303918
Summary:
DbiStreamBuilder calculated the offset of the source file names inside
the file info substream as the size of the file info substream minus
the size of the file names. Since the file info substream is padded to
a multiple of 4 bytes, this caused the first file name to be aligned
on a 4-byte boundary. By contrast, DbiModuleList would read the file
names immediately after the file name offset table, without skipping
to the next 4-byte boundary. This change makes it so that the file
names are written to the location where DbiModuleList expects them,
and puts any necessary padding for the file info substream after the
file names instead of before it.
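In miniature, the offset math before and after (illustrative, with invented parameter names):
```cpp
#include <cstddef>

static size_t alignTo4(size_t N) { return (N + 3) & ~static_cast<size_t>(3); }

// Before: the names offset was derived from the padded substream size minus
// the name bytes, which can land past where DbiModuleList starts reading.
size_t namesOffsetBefore(size_t HeaderAndTables, size_t NamesSize) {
  return alignTo4(HeaderAndTables + NamesSize) - NamesSize;
}

// After: names start immediately after the file name offset table; padding
// to the 4-byte boundary is appended after the names instead.
size_t namesOffsetAfter(size_t HeaderAndTables) { return HeaderAndTables; }
```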
Reviewers: amccarth, rnk, zturner
Reviewed By: amccarth, zturner
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33475
llvm-svn: 303917
It was using the number of blocks of the entire PDB file as the number
of blocks of each stream that was created. This was only an issue in
the readLongestContiguousChunk function, which had never been called before.
This bug surfaced when I updated an algorithm to use this function and
the algorithm broke.
llvm-svn: 303916
Also, include global entries for all data symbols, not
just external ones, since these are referenced by the
relocation records.
Add a test case that includes unnamed data.
Differential Revision: https://reviews.llvm.org/D33079
llvm-svn: 303915
A profile shows the majority of time doing type merging is spent
deserializing records from sequences of bytes into friendly C++ structures
that we can easily access members of in order to find the type indices to
re-write.
Records are prefixed with their length, however, and most records have
type indices that appear at fixed offsets in the record. For these
records, we can save some cycles by just looking at the right place in the
byte sequence and re-writing the value, then skipping the record in the
type stream. This saves us from the costly deserialization of examining
every field, including potentially null-terminated strings, which are the
slowest, even though doing so was unnecessary to begin with.
In addition, we apply another optimization. Previously, after
deserializing a record and re-writing its type indices, we would
unconditionally re-serialize it in order to compute the hash of the
re-written record. This would result in an alloc and memcpy for every
record. If no type indices were re-written, however, this was an
unnecessary allocation. In this patch re-writing is made two phase. The
first phase discovers the indices that need to be rewritten and their new
values. This information is passed through to the de-duplication code,
which only copies and re-writes type indices in the serialized byte
sequence if at least one type index is different.
Some records have type indices which only appear after variable length
strings, or which have lists of type indices, or various other situations
that can make it tricky to make this optimization. While I'm not giving up
on optimizing these cases as well, for now we can get the easy cases out
of the way and lay the groundwork for more complicated cases later.
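A schematic of the two-phase flow (structure and names invented for the sketch):
```cpp
#include <cstdint>
#include <cstring>
#include <vector>

struct Fixup {
  size_t Offset;     // byte offset of a type index within the record
  uint32_t NewValue; // remapped type index
};

// Phase 1 (not shown) walks the record and fills Fixups only for indices
// that actually change. Phase 2 below is skipped entirely when Fixups is
// empty, so unchanged records are hashed and de-duplicated from their
// original bytes with no copy at all.
std::vector<uint8_t> applyFixups(const uint8_t *Rec, size_t Len,
                                 const std::vector<Fixup> &Fixups) {
  std::vector<uint8_t> Out(Rec, Rec + Len);
  for (const Fixup &F : Fixups)
    std::memcpy(Out.data() + F.Offset, &F.NewValue, sizeof(F.NewValue));
  return Out;
}
```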
This patch yields another 50% speedup on top of the already large speedups
submitted over the past 2 days. In two tests I have run, I went from 9
seconds to 3 seconds, and from 16 seconds to 8 seconds.
Differential Revision: https://reviews.llvm.org/D33480
llvm-svn: 303914
By default, CMake uses a 32-bit toolchain, even when on a 64-bit platform targeting a 64-bit build. However, due to the size of the binaries involved, this can cause linker instability (such as the linker running out of memory). Guide people to the correct solution for getting CMake to use the native toolchain.
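For reference, with Visual Studio generators the 64-bit host tools are selected via the -T (toolset) option; one example invocation (the generator name will vary by VS version):
```
cmake -G "Visual Studio 15 2017 Win64" -Thost=x64 path\to\llvm
```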
llvm-svn: 303912
PPC::GETtlsADDR is lowered to a branch and a nop by the assembly
printer. Its size was incorrectly marked as 4; correct it to 8. The
incorrect size can cause incorrect branch relaxation in
PPCBranchSelector under the right conditions.
llvm-svn: 303904