This is a vestige from the GCC-3 days, which disables IPO passes
when set. I don't think anybody actually uses it: several IPO
passes still run with this flag set, and nobody has complained or
noticed. This reduces the delta between the
current and new pass manager and allows us to easily review
the difference when we decide to flip the switch (or audit
which passes should run, FWIW).
llvm-svn: 315043
Summary:
If the extracted region has multiple exported data flows toward the same BB that is not included in the region, correct restore instructions and PHI nodes won't be generated inside the exitStub. The solution is simply to put the restore instructions right after the definition of the output values instead of putting them in the exitStub.
A unit test for this bug is included.
Author: myhsu
Reviewers: chandlerc, davide, lattner, silvas, davidxl, wmi, kuhar
Subscribers: dberlin, kuhar, mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D37902
llvm-svn: 315041
The patch that this assert comes with fixes a bug in MBP. The assert
itself, however, is invalid.
Thanks to @sergey.k.okunev for finding this
Currently this fails SPECCPU2006 LTO. I will add a test case when I do more
investigation and have one.
llvm-svn: 315032
It is possible for two modules to define the same set of external
symbols without causing a duplicate symbol error at link time,
as long as each of the symbols is a comdat member. So we cannot
use them as part of a unique id for the module.
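As a hedged illustration (my example, not from the patch): the typical way two modules legitimately export the same symbol is an inline function defined in a shared header, which each translation unit emits as a comdat.
```
// common.h -- hypothetical header included by two different modules.
// Each module emits its own copy of get_answer() in a comdat section,
// and the linker keeps just one instead of reporting a duplicate symbol.
inline int get_answer() { return 42; }
```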
Differential Revision: https://reviews.llvm.org/D38602
llvm-svn: 315026
Add extending loads and constant offset patterns
A bit more refactoring of the tablegen to make the patterns fairly nice and
uniform between the regular and atomic loads.
Differential Revision: https://reviews.llvm.org/D38523
llvm-svn: 315022
Summary: In SamplePGO, when an indirect call was promoted in the profiled binary, it will be promoted and inlined before profile annotation. The current implementation does not mark a VP profile on the original indirect call, which becomes an issue when the profile is stale. This patch annotates the VP profile on indirect calls during annotation.
Reviewers: tejohnson
Reviewed By: tejohnson
Subscribers: sanjoy, llvm-commits
Differential Revision: https://reviews.llvm.org/D38477
llvm-svn: 315016
Summary:
Xcode's dsymutil emits a __swift_ast DWARF section, which is required for debugging,
and which contains a byte-for-byte dump of the swiftmodule file.
Add this feature to llvm-dsymutil.
Tested with `gobjdump --dwarf=info -s`, by verifying that the contents of
`__DWARF.__swift_ast` match between Xcode's dsymutil and llvm-dsymutil
(Xcode's dwarfdump and llvm-dwarfdump don't currently recognize the
__swift_ast section).
Reviewers: aprantl, friss
Subscribers: llvm-commits, JDevlieghere
Differential Revision: https://reviews.llvm.org/D38504
llvm-svn: 315014
The new format is changeAddrMode_xx_yy, where xx is the current mode,
and yy is the new one.
Old name:                New name:
  getBaseWithImmOffset     changeAddrMode_abs_io
  getAbsoluteForm          changeAddrMode_io_abs
  getBaseWithRegOffset     changeAddrMode_io_rr
  xformRegToImmOffset      changeAddrMode_rr_io
  getBaseWithLongOffset    changeAddrMode_rr_ur
  getRegShlForm            changeAddrMode_ur_rr
llvm-svn: 315013
Ensure the program_headers call will fail correctly if the program
headers are larger than the underlying buffer.
Patch by Parker Thompson!
llvm-svn: 315012
Summary:
Xcode's dsymutil emits a __swift_ast DWARF section, which is required for debugging,
and which contains a byte-for-byte dump of the swiftmodule file.
Add this feature to llvm-dsymutil.
Tested with `gobjdump --dwarf=info -s`, by verifying that the contents of
`__DWARF.__swift_ast` match between Xcode's dsymutil and llvm-dsymutil
(Xcode's dwarfdump and llvm-dwarfdump don't currently recognize the
__swift_ast section).
Reviewers: aprantl, friss
Subscribers: llvm-commits, JDevlieghere
Differential Revision: https://reviews.llvm.org/D38504
llvm-svn: 315004
This is the same exact change we did for the current pass manager
in rL314997, but the new pass manager pipeline already happened
to run GlobalOpt after the inliner, so we just insert a run of
GDCE here.
llvm-svn: 315003
Summary: Make test robust enough to not fail due to CFG changes and re-enable for ARM/AArch64.
Reviewers: rovka, fhahn
Reviewed By: fhahn
Subscribers: fhahn, aemerson, rengolin, mcrosier, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D38590
llvm-svn: 315002
Sink the insertion of "pop ebp" out of the frame size calculation
branches. They all check for HasFP.
Our handling of CLEANUPRET and CATCHRET was equivalent, both are
funclets and use the same frame size. We can eliminate the CLEANUPRET
case.
Hoist the hasFP(MF) query into a local bool.
Rename TargetMBB to CatchRetTarget to be more descriptive.
Eliminate the Optional<unsigned> RetOpcode local, now that it has one
use.
It's only a net savings of 10 lines, but hopefully it's *slightly* more
readable.
llvm-svn: 315000
The inliner performs some kind of dead code elimination as it goes,
but there are cases that are not really caught by it. We might
at some point consider teaching the inliner about them, but it
is OK for now to run GlobalOpt + GlobalDCE in tandem as their
benefits generally outweigh the cost, making the whole pipeline
faster.
This fixes PR34652.
Differential Revision: https://reviews.llvm.org/D38154
llvm-svn: 314997
AbstractLatticeFunction and SparseSolver are class templates parameterized by a
lattice value, so we need to move these member functions over to the header.
Differential Revision: https://reviews.llvm.org/D38561
llvm-svn: 314996
Implement the .set dspr2 directive with appropriate feature bits. This
directive is a counterpart of the -mattr=dspr2 command-line option, with the
exception that it does not influence ELF header flags.
Patch by Milos Stojanovic.
Differential Revision: https://reviews.llvm.org/D38537
llvm-svn: 314994
The old algorithm was not correct, although it worked most of the time.
Avoid the complex reachability analysis and simply calculate the maximal
registers out of the set of all referenced registers.
llvm-svn: 314991
There is data racing to the static variable RecordIndex in index profile reader
when merging in multiple threads. Make it a member variable in
IndexedInstrProfReader to fix this.
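A minimal sketch of the shape of the fix, with illustrative member and method names rather than the actual IndexedInstrProfReader code:
```
// Before: a file-static counter shared by all readers -- racy when several
// threads merge profiles at once.
//   static uint64_t RecordIndex = 0;
//
// After: per-reader state, so concurrent readers no longer share a counter.
class IndexedInstrProfReader {
  uint64_t RecordIndex = 0;   // illustrative; the real class has more state
public:
  uint64_t nextRecordIndex() { return RecordIndex++; }
};
```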
Differential Revision: https://reviews.llvm.org/D38431
llvm-svn: 314990
The code which lowers BUILD_VECTOR of consecutive loads into a single vector
load doesn't update chains properly. As a result the vector load can be
reordered with the store to the same location.
The current code in EltsFromConsecutiveLoads only updates the chain following
the first load. The fix is to update the chains following all the loads
comprising the vector.
This is a fix for PR10114.
Reviewed By: niravd
Differential Revision: https://reviews.llvm.org/D38547
llvm-svn: 314988
When ignoring a load that participates in an interleaved group, make sure to
move a cast that needs to sink after it.
Testcase derived from reproducer of PR34743.
Differential Revision: https://reviews.llvm.org/D38338
llvm-svn: 314986
Instead of trying to keep LastWidenRecipe updated after creating each recipe,
have tryToWiden() retrieve the last recipe of the current VPBasicBlock and check
if it's a VPWidenRecipe when attempting to extend its range. This ensures that
such extensions, which assume the original instruction order is preserved, happen
only when the instructions actually keep their relative order. The latter does
not always hold, e.g., when a cast needs to sink to unravel first order
recurrence (r306884).
Testcase derived from reproducer of PR34711.
Differential Revision: https://reviews.llvm.org/D38339
llvm-svn: 314981
Previously, instructions that were defined to use the FGR64 register class
were associated with the Mips64 table which was incorrect.
Reviewers: nitesh.jain, atanasyan
Differential Revision: https://reviews.llvm.org/D38454
llvm-svn: 314976
Summary:
When reinserting debug values after register allocation, make sure to
insert debug values after each redefinition of debug value register in
the slot index range. The reason for this is that DwarfDebug will end
the range of a debug variable when the physical reg is defined. For
instructions with e.g. tied operands this results in a prematurely ended
debug range.
This resolves PR34545.
Patch by Karl-Johan Karlsson and Bjorn Pettersson
Reviewers: rnk, aprantl
Reviewed By: rnk
Subscribers: bjope, llvm-commits
Differential Revision: https://reviews.llvm.org/D38229
llvm-svn: 314974
Currently llvm-mc just hangs in an infinite loop
while trying to parse a file which has ".section .с" inside,
where the section name is a non-English character.
The patch fixes the issue.
In this patch I also moved the content of non-english-characters.s
to the test/MC/AsmParser/Inputs folder so that non-english-characters.s
becomes a single test case for all invalid inputs containing non-English
characters. That is convenient because llvm-mc otherwise tries
to parse and tokenize the whole test case file, including the tool
invocations, and it is harder to isolate the issue.
Differential revision: https://reviews.llvm.org/D38545
llvm-svn: 314973
We were using an i1 type and then zero extending to a vector. Instead just create the 0/1 directly as a ConstantInt with the correct type. No need to ask ConstantExpr to zero extend for us.
This bug is a bit tricky to hit because it requires us to visit a zext of an icmp that would normally be simplified to true/false, but that icmp hasn't been visited yet. In the test case this zext and icmp were created by visiting a udiv, and due to worklist ordering we got to the zext first.
Fixes PR34841.
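Roughly, the change looks like the following sketch (a simplified stand-in with a hypothetical helper name, not the actual InstCombine code):
```
#include "llvm/IR/Constants.h"
using namespace llvm;

// Hypothetical helper: materialize the known 0/1 result of the icmp
// directly in the zext's destination type.
static Constant *makeBoolConstant(Type *DestTy, bool Known) {
  // Before: ConstantExpr::getZExt(<i1 constant of Known>, DestTy)
  // After: one step; works for scalar and vector destination types.
  return ConstantInt::get(DestTy, Known ? 1 : 0);
}
```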
llvm-svn: 314971
Summary: This is to avoid e.g. merging two cheap icmps if the target is not going to expand to something nice later.
Reviewers: dberlin, spatel
Subscribers: davide, nemanjai
Differential Revision: https://reviews.llvm.org/D38232
llvm-svn: 314970
Summary:
FastISel::hasTrivialKill() was the only user of the "IntPtrTy" version of
Cast::isNoopCast(). According to review comments in D37894 we could instead
use the "DataLayout" version of the method, and thus get rid of the
"IntPtrTy" versions of isNoopCast() completely.
With the above done, the remaining isNoopCast() could then be simplified
a bit more.
Reviewers: arsenm
Reviewed By: arsenm
Subscribers: wdng, llvm-commits
Differential Revision: https://reviews.llvm.org/D38497
llvm-svn: 314969
Summary:
The arg1 logging handler changed in compiler-rt to start writing a
different type for entries encountered when logging the first argument
of XRay-instrumented functions. This change allows the trace loader to
support reading these record types as well as prepare for when the
basic (naive) mode implementation starts writing down the argument
payloads.
Without this change, binaries with arg1 logging support enabled start
writing unreadable logs for any of the XRay tracing tools.
Reviewers: pelikan
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38550
llvm-svn: 314967
Adds the option 'new-pass-manager' to the gold plugin to enable using the
new pass manager during the LTO/ThinLTO link step.
Patch by Graham Yiu.
Differential Revision: https://reviews.llvm.org/D38517
llvm-svn: 314963
We can support ashr similar to lshr, if we know that none of the shifted in bits are used. In that case SimplifyDemandedBits would normally convert it to lshr. But that conversion doesn't happen if the shift has additional users.
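An illustrative C++ source pattern (mine, not from the patch) where the ashr's shifted-in sign bits are never used, so it can be treated like an lshr even though the shift has other users:
```
unsigned low_byte(int x) {
  // The sign bits shifted in occupy the top bits of the result, which the
  // mask discards, so arithmetic and logical shifts give the same value here.
  return (unsigned)(x >> 8) & 0xFFu;
}
```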
Differential Revision: https://reviews.llvm.org/D38521
llvm-svn: 314945
Summary:
If the pointer width is 32 bits and the calculated GEP offset is
negative, we call APInt::getLimitedValue(), which does a
*zero*-extension of the offset. That's wrong -- we should do an sext.
Fixes a bug introduced in rL314362 and found by Evgeny Astigeevich.
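A small worked example of the difference (plain C++, just to show the arithmetic, not the actual APInt code):
```
#include <cstdint>
#include <iostream>

int main() {
  int32_t Offset = -8;                              // 32-bit pointer width
  uint64_t AsZExt = static_cast<uint32_t>(Offset);  // 4294967288 -- wrong
  int64_t AsSExt = static_cast<int64_t>(Offset);    // -8 -- what we want
  std::cout << AsZExt << " vs " << AsSExt << '\n';
}
```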
Reviewers: efriedma
Subscribers: sanjoy, javed.absar, llvm-commits, eastig
Differential Revision: https://reviews.llvm.org/D38557
llvm-svn: 314935
But now include a check for CPU_COUNT so we still build on 10-year-old
versions of glibc.
Original message:
Use sched_getaffinity instead of std::thread::hardware_concurrency.
The issue with std::thread::hardware_concurrency is that it forwards
to libc and some implementations (like glibc) don't take thread
affinity into consideration.
With this change an LLVM program that can execute on only 2 cores will
use 2 threads, even if the machine has 32 cores.
This makes benchmarking a lot easier, but should also help if someone
doesn't want to use all cores for compilation for example.
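A hedged sketch of the idea (glibc-specific, and not the exact LLVM code; sched_getaffinity and CPU_COUNT need _GNU_SOURCE, which g++/libstdc++ define by default):
```
#include <sched.h>
#include <thread>

static unsigned effectiveConcurrency() {
#if defined(CPU_COUNT)
  cpu_set_t Set;
  // Count only the CPUs this process may actually run on (taskset, cgroups).
  if (sched_getaffinity(0, sizeof(Set), &Set) == 0)
    return CPU_COUNT(&Set);
#endif
  // Fallback for libcs without CPU_COUNT (pre-2.6 glibc, other platforms).
  return std::thread::hardware_concurrency();
}
```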
llvm-svn: 314931
This is a follow-up to https://reviews.llvm.org/D38138.
I fixed the capitalization of some functions because we're changing those
lines anyway and that helped verify that we weren't accidentally dropping
any options by using default param values.
llvm-svn: 314930
The function isLoweredToCall can only accept a non-null function pointer, but the function pointer can be null for an indirect function call. So check it before calling isLoweredToCall from getInstructionLatency.
Differential Revision: https://reviews.llvm.org/D38204
llvm-svn: 314927
Recommitting r314517 with the fix for handling ConstantExpr.
Original commit message:
Currently, getGEPCost() returns TCC_FREE whenever a GEP is a legal addressing
mode in the target. However, since it doesn't check its actual users, it will
return FREE even in cases where the GEP cannot be folded away as a part of
the actual addressing mode. For example, if a user of the GEP is a call
instruction taking the GEP as a parameter, then the GEP may not be folded in
isel.
llvm-svn: 314923
Summary:
This reverts D38481. The change breaks systems with older versions of glibc. It
injects a use of CPU_COUNT() from sched.h without checking to ensure that the
function exists first.
llvm-svn: 314922
It broke the Chromium / SQLite build; see PR34830.
> Summary:
> 1/ Operand folding during complex pattern matching for LEAs has been
> extended, such that it promotes Scale to accommodate similar operand
> appearing in the DAG.
> e.g.
> T1 = A + B
> T2 = T1 + 10
> T3 = T2 + A
> For above DAG rooted at T3, X86AddressMode will now look like
> Base = B , Index = A , Scale = 2 , Disp = 10
>
> 2/ During OptimizeLEAPass down the pipeline factorization is now performed over LEAs
> so that if there is an opportunity then complex LEAs (having 3 operands)
> could be factored out.
> e.g.
> leal 1(%rax,%rcx,1), %rdx
> leal 1(%rax,%rcx,2), %rcx
> will be factored as follows
> leal 1(%rax,%rcx,1), %rdx
> leal (%rdx,%rcx) , %edx
>
> 3/ Aggressive operand folding for AM based selection for LEAs is sensitive to loops,
> thus avoiding creation of any complex LEAs within a loop.
>
> Reviewers: lsaba, RKSimon, craig.topper, qcolombet, jmolloy
>
> Reviewed By: lsaba
>
> Subscribers: jmolloy, spatel, igorb, llvm-commits
>
> Differential Revision: https://reviews.llvm.org/D35014
llvm-svn: 314919
Somehow a few massive errors slipped through the cracks of testing.
1. The code in Segment::finalize was left over from the old layout
algorithm. In certain situations this would cause very strange issues
with segment layout. For instance in the shift-segments.test case it
would cause the second segment to have the same offset as the first.
2. In debugging this I discovered another issue. Namely section alignment
was not being computed based on Section->Align but instead
Section->Offset which is bizarre and makes no sense. I have no clue how
it worked in the first place. This issue is also fixed
3. Fixing #2 exposed a bug where things were not being written past the end
of the file that technically should have been. This was because in
certain cases (like overlapping-segments) the end of the file wouldn't
always be bumped if the offset could be chosen relative to an existing
segment that already had its offset chosen. For fully nested segments
this is fine but for overlapping segments this leaves the end of the
file short. So I changed how the offset is bumped when looping though
segments.
Differential Revision: https://reviews.llvm.org/D38436
llvm-svn: 314918
Summary:
This patch teaches `DT.applyUpdates` to take the fast path when applying zero or just one update and makes it not run the internal batch updater machinery.
With this patch, it should no longer make sense to have a special check in user's code that checks the update sequence size before applying them, e.g.
```
if (!MyUpdates.empty())
DT.applyUpdates(MyUpdates);
```
or
```
if (MyUpdates.size() == 1)
if (...)
DT.insertEdge(...)
else
DT.deleteEdge(...)
```
Reviewers: dberlin, brzycki, davide, grosser, sanjoy
Reviewed By: dberlin, davide
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38541
llvm-svn: 314917
Summary:
normpath() was being called on an empty string and the result appended to
the environment variable in the case where the environment variable
was unset. This led to ":." being appended to the path, since
normpath() of an empty string is '.', presumably to represent cwd.
Reviewers: zturner, sqlbyme, modocache
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38542
llvm-svn: 314915
This patch redefines the MOVSS/MOVSD instructions to take VR128 as its second input. This allows the MOVSS/SD->BLEND commute to work without requiring a COPY to be inserted.
This should fix PR33079
Overall this looks to be an improvement in the generated code. I haven't checked the EXPENSIVE_CHECKS build but I'll do that and update with results.
Differential Revision: https://reviews.llvm.org/D38449
llvm-svn: 314914
r314857 changed the CFG that resulted in the flaky test MachineBranchProb.ll to
fail the bots again. Marking it as unsupported for ARM/AArch64 again until we
find the cause.
llvm-svn: 314912
Before the patch this was in Analysis. Moving it to IR and making it an
implicit part of LLVMContext::diagnose allows the full opt-remark facility to be used
outside passes e.g. the pass manager. Jessica is planning to use this to
report function size after each pass. The same could be used for time
reports.
Tested with BUILD_SHARED_LIBS=On.
llvm-svn: 314909
We can likely remove most of these as redundant in the near future,
but I'm trying to make sure I don't introduce any regressions with D38514.
llvm-svn: 314907
Early out from vector shift by immediates that will exceed eltsize - don't bother making an unnecessary ComputeNumSignBits recursive call.
llvm-svn: 314903
Previously, on long branches (relative jumps of >4 kB), an assertion
failure was hit, as AVRInstrInfo::insertIndirectBranch was not
implemented. Despite its name, it is called by the branch relaxation pass
for *all* unconditional jumps.
Patch by Thomas Backman.
llvm-svn: 314891
In some cases, the code generator attempts to generate instructions such as:
lddw r24, Y+63
which expands to:
ldd r24, Y+63
ldd r25, Y+64 # Oops! This is actually ld r25, Y in the binary
This commit limits the first offset to 62, and thus the second to 63.
It also updates some asserts in AVRExpandPseudoInsts.cpp, including for
INW and OUTW, which appear to be unused.
Patch by Thomas Backman.
llvm-svn: 314890
This adds diagnostics for invalid immediate operands to the MOVW and MOVT
instructions (ARM and Thumb).
Differential revision: https://reviews.llvm.org/D31879
llvm-svn: 314888
Currently, our diagnostics for assembly operands are not consistent.
Some start with (for example) "immediate operand must be ...",
and some with "operand must be an immediate ...". I think the latter
form is preferable for a few reasons:
* It's unambiguous that it is referring to the expected type of operand, not
the type the user provided. For example, the user could provide a register
operand, and get a message talking about the operand as if it were already an
immediate, just not in the accepted range.
* It allows us to have a consistent style once we add diagnostics for operands
that could take two forms, for example a label or pc-relative memory operand.
Differential revision: https://reviews.llvm.org/D36689
llvm-svn: 314887
Summary:
1/ Operand folding during complex pattern matching for LEAs has been
extended, such that it promotes Scale to accommodate similar operand
appearing in the DAG.
e.g.
T1 = A + B
T2 = T1 + 10
T3 = T2 + A
For above DAG rooted at T3, X86AddressMode will now look like
Base = B , Index = A , Scale = 2 , Disp = 10
2/ During OptimizeLEAPass down the pipeline factorization is now performed over LEAs
so that if there is an opportunity then complex LEAs (having 3 operands)
could be factored out.
e.g.
leal 1(%rax,%rcx,1), %rdx
leal 1(%rax,%rcx,2), %rcx
will be factored as follows
leal 1(%rax,%rcx,1), %rdx
leal (%rdx,%rcx) , %edx
3/ Aggressive operand folding for AM based selection for LEAs is sensitive to loops,
thus avoiding creation of any complex LEAs within a loop.
Reviewers: lsaba, RKSimon, craig.topper, qcolombet, jmolloy
Reviewed By: lsaba
Subscribers: jmolloy, spatel, igorb, llvm-commits
Differential Revision: https://reviews.llvm.org/D35014
llvm-svn: 314886
I found that llvm-mc does not like non-English characters even in comments,
which it tries to tokenize.
The problem happens because functions like isdigit() and isalnum() take an
int argument and expect it to be non-negative.
At the same time, MCParser uses a char* to store the input buffer pointer, and char may be signed,
so it is possible to pass a negative value to one of the functions above,
which triggers an assert.
A test case is provided for demonstration.
To fix the issue, helper functions were introduced in StringExtras.h.
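The underlying trap, shown with a small standalone example (standard C/C++ behavior, not the new StringExtras.h helpers themselves):
```
#include <cctype>

// Passing a plain char holding a byte >= 0x80 to isdigit() is undefined when
// char is signed, because the value is negative. Casting through unsigned
// char first keeps the argument in the valid range.
static bool isDigitSafe(char C) {
  return std::isdigit(static_cast<unsigned char>(C)) != 0;
}
```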
Differential revision: https://reviews.llvm.org/D38461
llvm-svn: 314883
This time invoking llc with "-march=x86-64" in the testcase, so we don't assume
the default target is x86.
Summary:
If we have
%vreg0<def> = PHI %vreg2<undef>, <BB#0>, %vreg3, <BB#2>; GR32:%vreg0,%vreg2,%vreg3
%vreg3<def,tied1> = ADD32ri8 %vreg0<kill,tied0>, 1, %EFLAGS<imp-def>; GR32:%vreg3,%vreg0
then we can't just change %vreg0 into %vreg3, since %vreg2 is actually
undef. We would have to also copy the undef flag to be able to change the
register.
Instead we deal with this case like other cases where we can't just
replace the register: we insert a COPY. The code creating the COPY already
copied all flags from the PHI input, so the undef flag will be transferred
as it should.
Reviewers: kparzysz
Reviewed By: kparzysz
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38235
llvm-svn: 314882
We have found some corner cases connected to range intersection where IRCE does
the wrong thing when the latch condition is unsigned. The fix for that will go in as a follow-up.
This patch temporarily disables IRCE for unsigned latch conditions until the issue is fixed.
The unsigned latch conditions were introduced to IRCE by rL310027.
Differential Revision: https://reviews.llvm.org/D38529
llvm-svn: 314881
Summary:
If we have
%vreg0<def> = PHI %vreg2<undef>, <BB#0>, %vreg3, <BB#2>; GR32:%vreg0,%vreg2,%vreg3
%vreg3<def,tied1> = ADD32ri8 %vreg0<kill,tied0>, 1, %EFLAGS<imp-def>; GR32:%vreg3,%vreg0
then we can't just change %vreg0 into %vreg3, since %vreg2 is actually
undef. We would have to also copy the undef flag to be able to change the
register.
Instead we deal with this case like other cases where we can't just
replace the register: we insert a COPY. The code creating the COPY already
copied all flags from the PHI input, so the undef flag will be transferred
as it should.
Reviewers: kparzysz
Reviewed By: kparzysz
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D38235
llvm-svn: 314879
The previous version didn't work if the jump table base address didn't
fit in 32 bits, since it was encoded as an immediate offset. And in case
the jump table is encoded as 32-bit label differences, we need to
load and add them to the table base first.
This solves the first half of the issues mentioned in PR34720.
Also fix some of the errors pointed out by -verify-machineinstrs, by
using GR32_NOSPRegClass.
Differential Revision: https://reviews.llvm.org/D38333
llvm-svn: 314876
Test needs some slight adjustment because we no longer check the existence of
BFI but rather that the actual hotness is set on the remark. If entry_count
is not set getBlockProfileCount returns None.
llvm-svn: 314874
This is because lib/Fuzzer doesn't really depend on LLVM infrastructure.
It's not easy to access the LLVM hardware_concurrency here.
Differential Revision: https://reviews.llvm.org/D38481
llvm-svn: 314870
Summary:
After r308422 we defer optimizations that can destroy loop canonical forms to
LateSimplifyCFG. Running LateSimplifyCFG after expanding atomic operations
can exploit more control-flow opportunities.
Reviewers: mcrosier, t.p.northover, efriedma
Reviewed By: efriedma
Subscribers: aemerson, rengolin, javed.absar, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D38262
llvm-svn: 314857
This patch makes DT::eraseNode mark DFSInfo as invalid.
Not marking it as invalid leads to DFS numbers getting corrupted
and failing VerifyDFSNumbers check.
This patch also makes children iterator const (NFC).
llvm-svn: 314847
This was dead when it landed in r252578. We have this functionality, not
for stack probe calls, but for regular calls, in
X86CallFrameOptimization.cpp.
llvm-svn: 314845
An archive looks like
<header>
<symbol table>
<tail>
The symbol table refers to offsets in the tail. A complication is that
we would like to support symbol tables that use 64 bit offsets if it
turns out that any of the offsets is too big.
This patch changes the archive writer to first compute the tail. We
cannot just compute one big StringRef since that would require reading
every member upfront, but we can represent it as a series of
StringRefs.
Having done that it is much easier to compute the symbol table and all
offsets are computed before it is written. With this, if there is an
accounting problem, it will show up with a regular symbol table, not
just when a 64-bit one is needed.
llvm-svn: 314844
Surprisingly, we have zero coverage for these patterns.
Many of these are handled in InstSimplify, but it's not obvious
what the rule for folding each case should be, so I've just
stamped out everything.
It should be possible to fold every case, but currently, we
miss these:
int ashr_slt(int x) {
return (x >> 1) < 1;
}
int ashr_sgt(int x) {
return (x >> 1) > 0;
}
https://godbolt.org/g/aB2hLE
llvm-svn: 314837
This commit does two things. Firstly, it cleans up some of the benefit
calculation wrt outlined functions and candidates. Secondly, it fixes an
off-by-one bug in the cost model which was caused by the benefit value of
an OutlinedFunction and Candidate differing by 1. It updates the remarks test
to reflect this change.
llvm-svn: 314836
Summary:
For the amdpal OS type:
We write an AMDGPU_PAL_METADATA record in the .note section in the ELF
(or as an assembler directive). It contains key=value pairs of 32 bit
ints. It is a merge of metadata from codegen of the shaders, and
metadata provided by the frontend as _amdgpu_pal_metadata IR metadata.
Where both sources have a key=value with the same key, the two values
are ORed together.
This .note record is part of the amdpal ABI and will be documented in
docs/AMDGPUUsage.rst in a future commit.
Eventually the amdpal OS type will stop generating the .AMDGPU.config
section once the frontend has safely moved over to using the .note
records above instead of .AMDGPU.config.
Reviewers: arsenm, nhaehnle, dstuttard
Subscribers: kzhuravl, wdng, yaxunl, llvm-commits, t-tye
Differential Revision: https://reviews.llvm.org/D37753
llvm-svn: 314829
The test fails on Linux; see follow-up email on the llvm-commits list.
> Add the option to lookup an address in the debug information and print
> out the file, function, block and line table details.
>
> Differential revision: https://reviews.llvm.org/D38409
This also reverts the follow-up r314818:
> [test] Fix llvm-dwarfdump/cmdline.test
>
> Fixes test/tools/llvm-dwarfdump/cmdline.test
llvm-svn: 314825
All the buildbots are red, e.g.
http://lab.llvm.org:8011/builders/clang-cmake-aarch64-lld/builds/2436/
> Summary:
> This patch tries to vectorize loads of consecutive memory accesses, accessed
> in a non-consecutive or jumbled way. An earlier attempt was made with patch D26905
> which was reverted due to a basic issue with representing the 'use mask' of
> jumbled accesses.
>
> This patch fixes the mask representation by recording the 'use mask' in the usertree entry.
>
> Change-Id: I9fe7f5045f065d84c126fa307ef6ebe0787296df
>
> Reviewers: mkuper, loladiro, Ayal, zvi, danielcdh
>
> Reviewed By: Ayal
>
> Subscribers: hans, mzolotukhin
>
> Differential Revision: https://reviews.llvm.org/D36130
llvm-svn: 314824
The list of register ids was previously written out in a couple of different
places. This puts it in a .def file and also adds a few more registers (e.g.
the x87 regs) which should lead to more readable dumps, but I didn't include
the whole list since that seems unnecessary.
X86_MC::initLLVMToSEHAndCVRegMapping is pretty ugly, but at least it's not
relying on magic constants anymore. The TODO of using tablegen still stands.
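For readers unfamiliar with the pattern, a .def file is just an X-macro list; here is a hedged sketch with illustrative names and ids (not the actual X86 list), inlined so it stays self-contained:
```
// The .def file would contain one HANDLE(...) line per register.
#define X86_SAMPLE_REGS(HANDLE) \
  HANDLE(EAX, 17)               \
  HANDLE(ECX, 18)               \
  HANDLE(ST0, 128)

struct RegEntry { const char *Name; unsigned CVId; };

// Expand the list once to build the name -> id mapping table.
#define MAKE_ENTRY(NAME, ID) {#NAME, ID},
static const RegEntry CVRegMap[] = { X86_SAMPLE_REGS(MAKE_ENTRY) };
#undef MAKE_ENTRY
```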
Differential revision: https://reviews.llvm.org/D38480
llvm-svn: 314821
Summary:
This should fix a regression introduced by r313786, which switched from
MachineInstr::isIndirectDebugValue() to checking if operand 1 is an
immediate. I didn't have a test case for it until now.
A single UserValue, which approximates a user variable, may have many
DBG_VALUE instructions that disagree about whether the variable is in
memory or in a virtual register. This will become much more common once
we have llvm.dbg.addr, but you can construct such a test case manually
today with llvm.dbg.value.
Before this change, we would get two UserValues: one for direct and one
for indirect DBG_VALUE instructions describing the same variable. If we
build separate interval maps for direct and indirect locations, we will
end up accidentally coalescing identical DBG_VALUE intervals that need
to remain separate because they are broken up by intervals of the
opposite direct-ness.
Reviewers: aprantl
Subscribers: llvm-commits, hiraditya
Differential Revision: https://reviews.llvm.org/D37932
llvm-svn: 314819
Add the option to lookup an address in the debug information and print
out the file, function, block and line table details.
Differential revision: https://reviews.llvm.org/D38409
llvm-svn: 314817
The issue with std::thread::hardware_concurrency is that it forwards
to libc and some implementations (like glibc) don't take thread
affinity into consideration.
With this change an LLVM program that can execute on only 2 cores will
use 2 threads, even if the machine has 32 cores.
This makes benchmarking a lot easier, but should also help if someone
doesn't want to use all cores for compilation for example.
llvm-svn: 314809
Summary:
This patch tries to vectorize loads of consecutive memory accesses, accessed
in a non-consecutive or jumbled way. An earlier attempt was made with patch D26905
which was reverted due to a basic issue with representing the 'use mask' of
jumbled accesses.
This patch fixes the mask representation by recording the 'use mask' in the usertree entry.
Change-Id: I9fe7f5045f065d84c126fa307ef6ebe0787296df
Reviewers: mkuper, loladiro, Ayal, zvi, danielcdh
Reviewed By: Ayal
Subscribers: hans, mzolotukhin
Differential Revision: https://reviews.llvm.org/D36130
llvm-svn: 314806
This switches the ARM AsmParser to use assembly operand diagnostics from
tablegen, rather than a switch statement on the ARMMatchResultTy. It
moves the existing diagnostic strings to tablegen, but adds no new ones,
so this is NFC except for one diagnostic string that had an off-by-1 error
in the hand-written switch statement.
Differential revision: https://reviews.llvm.org/D31607
llvm-svn: 314804
This adds a DiagnosticString member to the AsmOperand tablegen class, so
that the diagnostic text to be used when an assembly operand is
incorrect can be stored in the tablegen description of the operand,
rather than in a separate switch statement in the AsmParser.
If DiagnosticString is used for any operands, tablegen will emit a
getMatchKindDiag function, to map from diagnostic enums to strings.
Differential revision: https://reviews.llvm.org/D31606
llvm-svn: 314803
Summary:
This patch teaches the DominatorTree verifier to check DFS In/Out numbers which are used to answer dominance queries.
DFS number verification is done in O(nlogn), so it shouldn't add much overhead on top of the O(n^3) sibling property verification.
This check should detect errors like the one spotted in PR34466 and related bug reports.
The patch also cleans up the DFS calculation a bit, as all constructed trees should have a single root now.
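A rough sketch of the kind of invariant being verified (illustrative data layout, not the DominatorTree verifier code): in a dominator-tree DFS numbering, every node's [In, Out] interval must nest strictly inside its parent's.
```
#include <vector>

struct Node { unsigned In, Out, Parent; };

// Nodes[0] is the root; Parent indexes into the same vector.
static bool verifyDFSNumbers(const std::vector<Node> &Nodes) {
  for (unsigned I = 1; I < Nodes.size(); ++I) {
    const Node &N = Nodes[I];
    const Node &P = Nodes[N.Parent];
    if (!(P.In < N.In && N.Out < P.Out))   // child not nested in parent -> corrupt
      return false;
  }
  return true;
}
```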
I see 2 new test failures when running check-all after this change:
```
Failing Tests (2):
Polly :: Isl/CodeGen/OpenMP/reference-argument-from-non-affine-region.ll
Polly :: Isl/CodeGen/OpenMP/two-parallel-loops-reference-outer-indvar.ll
```
which seem to happen just after `Create LLVM-IR from SCoPs` -- I XFAILed them in r314800.
Reviewers: dberlin, grosser, davide, zhendongsu, bollu
Reviewed By: dberlin
Subscribers: nandini12396, bollu, Meinersbur, brzycki, llvm-commits
Differential Revision: https://reviews.llvm.org/D38331
llvm-svn: 314801
tryParseRegister advances the lexer, so we need to take copies of the start and
end locations of the register operand before calling it.
Previously, the caret in the diagnostic pointed to the comma after the r0
operand in the test, rather than to the start of the operand.
Differential revision: https://reviews.llvm.org/D31537
llvm-svn: 314799
The dsp register class is an alias of the gpr register class, so
we have to define instructions for spilling and reloading.
Reviewers: atanasyan
Differential Revision: https://reviews.llvm.org/D38038
llvm-svn: 314798
Currently optimizeMemoryInst requires that all of the AddrModes it sees are
identical. This patch makes it capable of tracking multiple AddrModes, so long
as they differ in at most one field.
This patch does nothing by itself, but later patches will make use of it to
insert or reuse phi or select instructions for the differing fields.
Differential Revision: https://reviews.llvm.org/D38278
llvm-svn: 314795
This lets us optimize away selects that perform the same address computation in
two different ways and is also the first step towards being able to handle
selects between two different, but compatible, address computations.
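An illustrative source pattern (my example, not from the patch) where a select chooses between two ways of computing the same address:
```
int load_it(int *Base, bool Cond, int I) {
  // Both arms compute Base + I; with this change the select no longer
  // blocks folding the computation into a single addressing mode.
  int *Addr = Cond ? &Base[I] : Base + I;
  return *Addr;
}
```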
Differential Revision: https://reviews.llvm.org/D38242
llvm-svn: 314794
In this code, we use ~0U as a sentinel value for any operand class that doesn't
have a user-friendly error message, but this value isn't in range of the
MatchClassKind enum, so we need to ensure it does not get passed to isSubclass.
llvm-svn: 314793
If the upper bits of a truncation shuffle pattern's inputs have at least the minimum number of sign/zero bits, then we can safely use PACKSS/PACKUS as shuffles.
Partial fix for https://bugs.llvm.org/show_bug.cgi?id=34773
Differential Revision: https://reviews.llvm.org/D38472
llvm-svn: 314788
The code responsible for analysis of inbounds GEPs is extracted into a separate
function: CallAnalyzer::canFoldInboundsGEP. With the patch SROA
enabling/disabling code is localized in one place instead of spread across
the code of CallAnalyzer::visitGetElementPtr.
Differential Revision: https://reviews.llvm.org/D38233
llvm-svn: 314787
Summary:
Take the target's endianness into account when splitting the
debug information in DAGTypeLegalizer::SetExpandedInteger.
This patch fixes so that, for big-endian targets, the fragment
expression corresponding to the high part of a split integer
value is placed at offset 0, in order to correctly represent
the memory address order.
I have attached a PPC32 reproducer where the resulting DWARF
pieces for a 64-bit integer were incorrectly reversed.
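A small endianness illustration (plain C++, not the legalizer code) of why the high fragment belongs at offset 0 on big-endian targets:
```
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
  uint64_t V = 0x0011223344556677ULL;
  unsigned char Bytes[8];
  std::memcpy(Bytes, &V, sizeof(V));
  // On a big-endian target Bytes[0..3] hold the high half (00 11 22 33), so
  // the DWARF fragment describing the high part must start at offset 0.
  // On little-endian the low half comes first and the old behavior was fine.
  for (unsigned char B : Bytes)
    std::printf("%02x ", B);
  std::printf("\n");
}
```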
Original patch was reverted due to using -stop-after=isel in
the test case (but that is only working when AMDGPU target
is included in the llc build). The test case has now been
updated to use -stop-before=expand-isel-pseudos instead.
Patch by: dstenb
Reviewers: JDevlieghere, aprantl, dblaikie
Reviewed By: JDevlieghere, aprantl, dblaikie
Subscribers: nemanjai
Differential Revision: https://reviews.llvm.org/D38172
llvm-svn: 314781
This converts the ARM AsmParser to use the new assembly matcher error
reporting mechanism, which allows errors to be reported for multiple
instruction encodings when it is ambiguous which one the user intended
to use.
By itself this doesn't improve many error messages, because we don't have
diagnostic text for most operand types, but as we add that then this will allow
more of those diagnostic strings to be used when they are relevant.
Differential revision: https://reviews.llvm.org/D31530
llvm-svn: 314779