This change adds a hasFileAtIndex method. getChildDeclContext can call this method first, and if it returns true it knows it can then look up the resolved path cache for the given file index. If we hit that cache then we don't even have to call getFileNameByIndex.
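A sketch of the resulting lookup path (the cache and helper names are illustrative, and getFileNameByIndex's signature is quoted from memory, so treat this as a sketch rather than the actual dsymutil change):
```
#include "llvm/DebugInfo/DIContext.h"
#include "llvm/DebugInfo/DWARF/DWARFDebugLine.h"
#include <map>
#include <string>
using namespace llvm;

// ResolvedPaths is an illustrative cache mapping file index -> resolved path.
const std::string &resolveFileName(const DWARFDebugLine::LineTable &LT,
                                   uint64_t FileNum, const char *CompDir,
                                   std::map<uint64_t, std::string> &ResolvedPaths) {
  static const std::string Empty;
  if (!LT.hasFileAtIndex(FileNum))   // new cheap existence check
    return Empty;
  auto Cached = ResolvedPaths.find(FileNum);
  if (Cached != ResolvedPaths.end()) // cache hit: no getFileNameByIndex call
    return Cached->second;
  std::string File;
  LT.getFileNameByIndex(
      FileNum, CompDir,
      DILineInfoSpecifier::FileLineInfoKind::AbsoluteFilePath, File);
  return ResolvedPaths.emplace(FileNum, std::move(File)).first->second;
}
```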
Running dsymutil against the Swift executable built from GitHub gives a 20% performance improvement without any change in the binary.
Differential Revision: https://reviews.llvm.org/D22655
Reviewed by friss.
llvm-svn: 276380
Summary:
Clang inserts a GetElementPtrInst, so findAllocaForValue was not
able to find the allocas.
PR27453
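The fix, in spirit (a simplified sketch; the real findAllocaForValue also has to cope with casts and other instructions):
```
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Walk through GEPs back to the underlying alloca, if there is one.
static AllocaInst *findAllocaForValue(Value *V) {
  if (auto *AI = dyn_cast<AllocaInst>(V))
    return AI;
  if (auto *GEP = dyn_cast<GetElementPtrInst>(V))
    return findAllocaForValue(GEP->getPointerOperand());
  return nullptr; // not (recognizably) derived from an alloca
}
```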
Reviewers: kcc, eugenis
Differential Revision: https://reviews.llvm.org/D22657
llvm-svn: 276374
If `-irce-skip-profitability-checks` is passed, IRCE will kick in in
every case where it is legal for it to do so. This flag is intended to
help diagnose and analyze performance issues.
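The flag is declared along these lines (a sketch; the exact description string is illustrative):
```
#include "llvm/Support/CommandLine.h"
using namespace llvm;

static cl::opt<bool> SkipProfitabilityChecks(
    "irce-skip-profitability-checks", cl::Hidden, cl::init(false),
    cl::desc("Run IRCE whenever it is legal, ignoring profitability"));
```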
llvm-svn: 276372
This makes it easy to swap out the default stylesheet for a custom one.
It also shaves ~6.62 MB out of the report directory for a full coverage
build of llvm+clang.
While we're at it, prune the CSS and add tests for it.
llvm-svn: 276359
Do not clone stored values unless they are GEPs that are special-cased to avoid
hoisting them without hoisting their associated ld/st.
Differential revision: https://reviews.llvm.org/D22652
llvm-svn: 276358
When we fail to parse a function in the MIR parser, we should abort
the whole compilation instead of continuing in a weird state. Indeed,
this was creating strange machine function pass failures that were
hard to understand, until we noticed that the function actually did not
get parsed correctly!
llvm-svn: 276348
This variant is (as documented in the TD) for disassembler use only, and should
not be used in patterns - it is longer, and is broken on 64-bit.
llvm-svn: 276347
when constraint "w" is used on a 32-bit operand.
This enables compiling the following code, which used to error out in
the backend:
void foo1(int a) {
  asm volatile ("sqxtn h0, %s0\n" : : "w"(a):);
}
Fixes PR28633.
llvm-svn: 276344
This does not change anything by default since LLVM_INCLUDE_UTILS is already set
to TRUE by default. In addition, since LLVM_INCLUDE_TESTS => LLVM_INCLUDE_UTILS,
the only way that this can cause changes is in the case where LLVM_INCLUDE_UTILS
is set to TRUE, but LLVM_INCLUDE_TESTS is FALSE. In that case, building gtest is
not a huge cost.
The reason to do this is that without this change, one cannot turn off
LLVM_INCLUDE_TESTS in downstream projects that also use gtest for unittests. It
also makes more sense in general, since LLVM_INCLUDE_UTILS gates FileCheck
and other utilities that are along the same lines as gtest.
Additionally, from talking with chandlerc, this was not done for any specific
reason, so there is no reason not to do it and lots of benefit to doing it.
llvm-svn: 276342
rL245171 exposed a hole in InstSimplify that manifested in a strange way in PR28466:
https://llvm.org/bugs/show_bug.cgi?id=28466
It's possible to use trunc + icmp sgt/slt in place of an and + icmp eq/ne, so we need to
recognize that pattern to eliminate selects that are choosing between some value and some
bitmasked version of that value.
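For instance, the sign-bit flavor of the pattern can be recognized roughly like this (a sketch using PatternMatch, not the exact InstSimplify code; SI is the select being simplified):
```
#include "llvm/IR/Instructions.h"
#include "llvm/IR/PatternMatch.h"
using namespace llvm;
using namespace llvm::PatternMatch;

// Matches: select (icmp slt (trunc X), 0), TV, FV
// which keys the select on the sign bit of the truncated value, just like
//          select (icmp ne (and X, SignMask), 0), TV, FV
static bool isSignBitSelectOfTrunc(Value *SI, Value *&X) {
  ICmpInst::Predicate Pred;
  return match(SI, m_Select(m_ICmp(Pred, m_Trunc(m_Value(X)), m_Zero()),
                            m_Value(), m_Value())) &&
         Pred == ICmpInst::ICMP_SLT;
}
```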
Note that there is significant room for improvement (refactoring) and enhancement (more
patterns, possibly in InstCombine rather than here).
Differential Revision: https://reviews.llvm.org/D22537
llvm-svn: 276341
This patch moves the update instruction for vectorized integer induction phi
nodes to the end of the latch block. This ensures consistent placement of all
induction updates across all the kinds of int inductions we create (scalar,
splat vector, or vector phi).
Differential Revision: https://reviews.llvm.org/D22416
llvm-svn: 276339
std::numeric_limits<int64_t>::max() is not constexpr in VC 2013 headers,
and Clang complains that it isn't. MSVC 2013 itself is emitting a
dynamic initializer for this thing. Instead, use an enum.
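The change is essentially this (Unknown is an illustrative name for the constant in question):
```
#include <cstdint>
#include <limits>

// Before: not constexpr with the VC 2013 headers, so MSVC emits a dynamic
// initializer:
//   static const int64_t Unknown = std::numeric_limits<int64_t>::max();
// After: an enumerator is always a genuine compile-time constant:
enum : int64_t { Unknown = INT64_MAX };
```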
llvm-svn: 276334
Having the added `\brief` made doxygen interpret it as the summary for
the `llvm` namespace (visible at:
http://llvm.org/doxygen/namespaces.html).
llvm-svn: 276331
Move needsComdatForCounter() from
lib/Transforms/Instrumentation/InstrProfiling.cpp to
lib/ProfileData/InstrProf.cpp to make it available to other files.
Differential Revision: https://reviews.llvm.org/D22643
llvm-svn: 276330
Given that other proposals are making their way through, it's better if we
specify which GitHub proposal this is, in case there are others that also
involve GitHub but not sub-modules.
llvm-svn: 276325
Summary:
The llvm.invariant.start and llvm.invariant.end intrinsics currently
support specifying invariant memory objects only in the default address space.
With this change, these intrinsics are overloaded for memory objects in any
address space, and the invariant intrinsics can now be used in non-default address spaces.
Example: llvm.invariant.start.p1i8(i64 4, i8 addrspace(1)* %ptr)
This overloaded intrinsic is needed to represent final or invariant memory in managed languages.
Reviewers: tstellarAMD, reames, apilipenko
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D22519
llvm-svn: 276316
Previously LLVM_HAS_GLOBAL_ISEL would directly get the value of
LLVM_BUILD_GLOBAL_ISEL, which could be any integer value, not just ON
and OFF. The problem is that lit.cfg was checking for ON to decide whether
global-isel was supported, so if LLVM_BUILD_GLOBAL_ISEL was set to an
integer value, say 1, the test would fail even though we do build
global-isel and want to test it.
llvm-svn: 276307
Previously LLVM_BUILD_GLOBAL_ISEL was a plain boolean variable. Although
that is strictly identical to an option, it did not convey the
information that the user may set it. Options are there for exactly that.
llvm-svn: 276306
Breaking some of these ~1000-line functions into smaller pieces should make
it easier to incrementally upgrade them to handle vector types.
llvm-svn: 276304
Summary:
This change also changes findMatchingInsn and
findMatchingUpdateInsnForward to take DBG_VALUE opcodes into account
when tracking register defs and uses, which could potentially inhibit
these optimizations in the presence of debug information.
Reviewers: mcrosier
Subscribers: aemerson, rengolin, mcrosier, llvm-commits
Differential Revision: https://reviews.llvm.org/D22582
llvm-svn: 276293
Under normal circumstances we prefer the higher performance MOVD to extract the 0'th element of a v8i16 vector instead of PEXTRW.
But as detailed in PR27265, this prevents the SSE41 implementation of PEXTRW from folding the store of the 0'th element. Additionally, it prevents us from making use of the fact that the (SSE2) reg-reg version of PEXTRW implicitly zero-extends the i16 element to the i32/i64 destination register.
This patch preferentially lowers to MOVD only if we will not be zero-extending the extracted i16 and will not prevent a store from being folded (on SSE41).
Fix for PR27265.
Differential Revision: https://reviews.llvm.org/D22509
llvm-svn: 276289
As requested on D22509, I've pulled out the v8i16 extraction lowering as the SSE41 and pre-SSE41 implementations are effectively the same.
llvm-svn: 276285
As reported on PR26235, we don't currently make use of the VBROADCASTF128/VBROADCASTI128 instructions (or the AVX512 equivalents) to load+splat a 128-bit vector to both lanes of a 256-bit vector.
This patch enables lowering from subvector insertion/concatenation patterns and auto-upgrades the llvm.x86.avx.vbroadcastf128.pd.256 / llvm.x86.avx.vbroadcastf128.ps.256 intrinsics to match.
We could possibly investigate using VBROADCASTF128/VBROADCASTI128 to load repeated constants as well (similar to how we already do for scalar broadcasts).
Differential Revision: https://reviews.llvm.org/D22460
llvm-svn: 276281
This provides an elegant pattern to solve the "construct if not in map
already" problem we have many times in LLVM. Without try_emplace we
either have to rely on a sentinel value (nullptr) or do two lookups.
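Typical use looks like this (a sketch; computeValue is a placeholder):
```
#include "llvm/ADT/DenseMap.h"
#include "llvm/IR/Value.h"
using namespace llvm;

Value *computeValue(unsigned Key); // placeholder

Value *lookupOrCompute(DenseMap<unsigned, Value *> &Cache, unsigned Key) {
  auto Result = Cache.try_emplace(Key); // single hash lookup
  if (Result.second)                    // newly inserted: fill it in
    Result.first->second = computeValue(Key);
  return Result.first->second;
}
```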
llvm-svn: 276277
The clearance calculation did not take into account registers defined as outputs or clobbers in inline assembly machine instructions because these register defs are implicit.
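Conceptually, the fix is to visit every register def, implicit ones included (a sketch; processDef stands in for the clearance bookkeeping):
```
#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;

void processDef(const MachineOperand &MO); // placeholder

void visitDefs(const MachineInstr &MI) {
  // Inline asm outputs and clobbers only show up as implicit defs, so
  // iterate over all operands rather than just the explicit ones.
  for (const MachineOperand &MO : MI.operands())
    if (MO.isReg() && MO.isDef())
      processDef(MO);
}
```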
Differential Revision: http://reviews.llvm.org/D22580
llvm-svn: 276266
This patch fixes a very subtle bug in regmask calculation. Thanks to zan
jyu Wong <zyfwong@gmail.com> for bringing this to notice.
For example, if only CL is clobbered, then CH should not be marked
clobbered, but CX, ECX, and RCX should be. Previously, markRegClobbred()
in RegUsageInfoCollector.cpp marked all aliases of each modified register
as clobbered, but that is wrong: when CL is clobbered,
MRI::isPhysRegModified() correctly returns true for CL, CX, ECX, and RCX,
but we then also marked CH as clobbered, since CH is aliased to CX, ECX,
and RCX. markRegClobbred() is not required at all, because
isPhysRegModified() already takes proper care of register aliasing. A very
simple test case has been added to verify this change.
Relevant bug report: http://llvm.org/PR28567
Patch by Vivek Pandya <vivekvpandya@gmail.com>
Differential Revision: https://reviews.llvm.org/D22400
llvm-svn: 276235
classifyLEAReg() deals with switching operands from 32-bit to 64-bit in
order to use a LEA64_32 instruction (for three-address-code goodness).
It currently performs a liveness analysis to determine the kill/undef
flag for the newly added operand. This should not be necessary:
- If the previous operand had a kill flag, then the 32-bit part of the
register gets killed; this kills the super register as well.
- If the previous operand had an undef flag, then we didn't care what
value we read; just use the same flag on the new operand.
(No matter what, an operand with an undef flag won't affect liveness.)
This makes the code independent of the presence of kill flags because it
avoids a call to MachineBasicBlock::computeRegisterLiveness().
Differential Revision: http://reviews.llvm.org/D22283
llvm-svn: 276222
They were all auto-incremented from 0 anyway, and I'm getting really annoying
conflicts and runtime failures when different people add more for GlobalISel
(and even when I'm refactoring my own patches).
NFC.
llvm-svn: 276204
(Also, refactor our constexpr handling to be less insane).
This patch lets us track field offsets in the CFL Graph, which is the
first step to making CFLAA field/offset sensitive. Woohoo! Note that
this patch shouldn't visibly change our behavior (since we make no use
of the offsets we're now tracking), so we can't quite add tests for this
yet.
Patch by Jia Chen.
Differential Revision: https://reviews.llvm.org/D22598
llvm-svn: 276201
The earlier change added a hotness attribute to missed-optimization
remarks. This follows up with the analysis remarks (the ones explaining
the reason for the missed optimization).
llvm-svn: 276192
This helps because LoopAccessReport is passed around as a const
reference and we derive the basic block passed as the Value parameter
from the instruction in LoopAccessReport.
llvm-svn: 276191
At -O0, cmpxchg survives AtomicExpand: it's mostly straightforward
to select it in fast-isel, and let the pseudo be expanded later.
extractvalues on the result are the tricky part: the generic logic
only works for legal types (and it would be painful to make it
support illegal types), so we can only support i32/i64 cmpxchg.
llvm-svn: 276183
We can replace the return values with undef if we replaced all
the call uses with a constant/undef.
Differential Revision: https://reviews.llvm.org/D22336
llvm-svn: 276174
Summary:
Previously we wouldn't move loads/stores across instructions that had
side-effects, where that was defined as may-write or may-throw. But
this is not sufficiently restrictive: Stores can't safely be moved
across instructions that may load.
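The tightened legality check is, in essence (a simplified sketch, not the exact vectorizer code):
```
#include "llvm/IR/Instruction.h"
using namespace llvm;

// A store must not be reordered across anything that may read or write
// memory, or that may throw; may-write/may-throw alone was not enough.
static bool isSafeToMoveStoreAcross(const Instruction &I) {
  return !I.mayReadOrWriteMemory() && !I.mayThrow();
}
```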
This patch also adds a DEBUG check that all instructions in our chain
are either loads or stores.
Reviewers: asbirlea
Subscribers: llvm-commits, jholewinski, arsenm, mzolotukhin
Differential Revision: https://reviews.llvm.org/D22547
llvm-svn: 276171
Summary:
Previously if we had a chain that contained a side-effecting
instruction, we wouldn't vectorize it at all. Now we'll vectorize
everything that comes before the side-effecting instruction.
Reviewers: asbirlea
Subscribers: arsenm, jholewinski, llvm-commits, mzolotukhin
Differential Revision: https://reviews.llvm.org/D22536
llvm-svn: 276170
A seemingly common use for the walker's getClobberingMemoryAccess
function is:
```
MemoryAccess *getClobber(MemorySSAWalker *W, MemoryUseOrDef *MUD) {
  const Instruction *I = MUD->getMemoryInst();
  return W->getClobberingMemoryAccess(I);
}
```
Which is kind of redundant, since walkers will ultimately query MSSA to
find out which MemoryAccess `I` maps to (...which is always `MUD`).
So, this patch adds an overload of getClobberingMemoryAccess that
accepts MemoryAccesses directly. As a result, the Instruction overload
of getClobberingMemoryAccess becomes a lightweight wrapper around our
new overload.
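With the new overload, the helper above collapses to a direct call:
```
return W->getClobberingMemoryAccess(MUD); // no redundant lookup of I's access
```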
Additionally, this patch un`virtual`izes the Instruction overload of
getClobberingMemoryAccess, since there doesn't seem to be a walker that
benefits from that being virtual, and I can't think of how else one
would implement it. Happy to make it virtual again if we would benefit
from doing so.
llvm-svn: 276169
This should be all the low-level instruction selection needs to determine how
to implement an operation, with the remaining context taken from the opcode
(e.g. G_ADD vs G_FADD) or other flags not based on type (e.g. fast-math).
llvm-svn: 276158