This is a no-op/NFC at the moment, but it generally makes the code
somewhat cleaner and less reliant on assumptions about what will produce
a debug_addr section.
It's still a bit "spooky action at a distance": the add-ranges code
pre-emptively inserts addresses into the address pool that it knows will
eventually be used by the range emission code (or by low/high pc).
The 'ideal' would be to actually compute the addresses needed for
range (and loc) emission earlier - but that would mean decanonicalizing
the range/loc representation earlier to account for whether it is going
to use addrx encodings or not (which would be unfortunate, though it
could probably be refactored to be relatively unobtrusive).
Alternatively, emitting the range/loc sections earlier would cause them
to request the needed addresses sooner - but then you end up having to
split finalizeModuleInfo, because some things need to be handled there
before the ranges/locs are emitted, I think...
LVI and its consumers currently have quite a bit of complexity
related to dominator tree management. However, none of it appears
to be actually needed...
The only use of the dominator tree is inside isValidAssumeForContext().
However, due to the way LVI queries work, it is not needed:
If we query a value for some block, we will first get the edge values
from all predecessor blocks; these already incorporate an intersection
with the assumptions that apply at each predecessor's terminator. As such,
we will already have processed all assumptions from predecessor blocks
(this is actually stronger than what isValidAssumeForContext() does
with a DT, because this is capable of combining non-dominating
assumptions). The only additional assumptions we need to take into
account are those in the block being queried. And we don't need a
dominator tree for that.
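A minimal sketch in C++ of why this is stronger (clang's
__builtin_assume used purely for illustration):

  int f(int x, bool c) {
    if (c)
      __builtin_assume(x > 0);
    else
      __builtin_assume(x > 10);
    // Merge block: neither assume dominates this point, but each
    // predecessor edge was already intersected with its own assumption,
    // so LVI knows x > 0 here and can fold the comparison to true.
    return x > 0;
  }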
This patch only removes the use of the DT; I will drop the machinery
around it in a follow-up.
Differential Revision: https://reviews.llvm.org/D76797
Summary: Adds a matcher called `hasOperands` for `BinaryOperator`s, for cases where you need to match both operands but their order isn't important, typically with commutative operators.
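A usage sketch (the surrounding setup is assumed; `hasOperands` itself is
the new matcher):

  #include "clang/ASTMatchers/ASTMatchers.h"
  using namespace clang::ast_matchers;

  // Matches both `x == 0` and `0 == x`. Without hasOperands this would
  // need an anyOf of two hasLHS/hasRHS permutations.
  auto M = binaryOperator(hasOperatorName("=="),
                          hasOperands(integerLiteral(equals(0)),
                                      declRefExpr()));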
Reviewers: klimek, aaron.ballman, gribozavr2, alexfh
Reviewed By: aaron.ballman
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D80054
This build vector lowering pattern came up in D79886.
I've tried to limit the improvement to cases where it looks
clearly better to load, but we could remove the 'TODO'
predicates already if we are willing to overlook some
corner cases.
Differential Revision: https://reviews.llvm.org/D80013
This is required to get avr-gdb to show values at the right
addresses. This problem was discovered by using debug symbols in an
external program to look up values in an AVR simulator.
This was originally in D79116.
Converting from a narrow-enough FP source value to integer and
back to FP guarantees that the conversion to FP is exact because
of UB/poison-on-overflow.
This was suggested in PR36617:
https://bugs.llvm.org/show_bug.cgi?id=36617#c19
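The reasoning, sketched in C++ (the concrete types here are arbitrary
illustrations; the transform works on the IR equivalents):

  double roundtrip(double x) {
    long long i = (long long)x; // UB if the value overflows the integer,
                                // so we may assume it fits
    return (double)i;           // exact: i == trunc(x), and trunc(x) is
                                // always representable in the FP type
  }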
When the callee requires a dynamic stack realignment,
it is not possible to correctly access the incoming
stack arguments using the stack pointer. In such cases
we reserve a base pointer to access the function arguments
inside the callee. The base pointer holds the incoming
stack pointer value before any delta is added to it.
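A hypothetical callee hitting this case (illustrative only; the
register/stack argument split depends on the calling convention): it
receives an argument on the stack and also realigns its stack
dynamically, so the incoming arguments can only be addressed through the
saved base pointer:

  int callee(int a, int b, int c, int d, int e, int f, int g, int h,
             int stack_arg) {        // passed on the stack on many targets
    alignas(128) char buf[128];      // forces dynamic stack realignment
    buf[0] = (char)a;
    return stack_arg + buf[0];       // loaded via the base pointer
  }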
Reviewed By: arsenm, scott.linder
Differential Revision: https://reviews.llvm.org/D78811
Spurious assertion failures are symptoms of a race condition in the handling
of detached tasks:
Assertion failure at kmp_tasking.cpp(3744): taskdata->td_flags.complete == 1.
Assertion failure at kmp_tasking.cpp(710): taskdata->td_flags.executing == 0.
In the case of detach=true, all accesses to taskdata in __kmp_task_finish need
to happen before (~line 873):
taskdata->td_flags.proxy = TASK_PROXY;
This assignment signals to __kmp_fulfill_event that the task will need to be
freed there. So, conceptually, the ownership of taskdata is moved.
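For context, a minimal sketch of a detached task using the OpenMP 5.0
user-level API (the shape here is an illustrative assumption; the race is
between the runtime entry points named above):

  #include <omp.h>

  void example(void) {
    omp_event_handle_t ev;
  #pragma omp task detach(ev)
    {
      // task body; the task is not complete when this returns
    }
    // ... later, possibly from another thread:
    omp_fulfill_event(ev);  // may race with the body's epilogue
  #pragma omp taskwait      // returns only once the event is fulfilled
  }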
Reviewed By: AndreyChurbanov
Differential Revision: https://reviews.llvm.org/D79702
When compiling with -Werror=overloaded-virtual, gcc emits this:
====
llvm/include/llvm/Pass.h:102:16: error: ‘virtual bool llvm::Pass::doInitialization(llvm::Module&)’ was hidden [-Werror=overloaded-virtual]
virtual bool doInitialization(Module &) { return false; }
^~~~~~~~~~~~~~~~
In file included from llvm/lib/Transforms/IPO/Inliner.cpp:20:0:
llvm/include/llvm/Transforms/IPO/Inliner.h:38:8: error: by ‘virtual bool llvm::LegacyInlinerBase::doInitialization(llvm::CallGraph&)’ [-Werror=overloaded-virtual]
bool doInitialization(CallGraph &CG) override;
^~~~~~~~~~~~~~~~
====
This is an old issue that has just started biting our downstream after
a slight rearrangement of includes around Inliner.
Fix it in the same way doFinalization was fixed years ago.
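Reduced to its essentials, the hiding and the fix look roughly like this
(a sketch using the names from the diagnostic; the real classes differ):

  struct Module;
  struct CallGraph;

  struct Pass {
    virtual ~Pass() = default;
    virtual bool doInitialization(Module &) { return false; }
  };

  struct LegacyInlinerBase : Pass {
    using Pass::doInitialization;          // re-expose the hidden overload
    bool doInitialization(CallGraph &CG);  // would otherwise hide the base
  };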
Summary:
On XMEGA, the I/O address space is the same as the data address space - there is
no 0x20 offset, because the CPU general-purpose registers are not mapped into
the data address space.
From https://en.wikipedia.org/wiki/AVR_microcontrollers
> In the XMEGA variant, the working register file is not mapped into the data address space; as such, it is not possible to treat any of the XMEGA's working registers as though they were SRAM. Instead, the I/O registers are mapped into the data address space starting at the very beginning of the address space.
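In other words (hypothetical macros, purely for illustration):

  // Classic AVR: data addresses 0x00-0x1F are the GPRs, so I/O address N
  // appears in the data space at N + 0x20.
  #define CLASSIC_IO_TO_DATA(n) ((n) + 0x20)
  // XMEGA: the GPRs are not memory-mapped and I/O starts at data
  // address 0, so no offset is needed.
  #define XMEGA_IO_TO_DATA(n)   (n)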
Reviewers: dylanmckay
Reviewed By: dylanmckay
Subscribers: hiraditya, Jim, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D77207
Patch by Vlastimil Labsky.
Since setting COMPILER_RT_USE_BUILTINS_LIBRARY removes the -z,defs
flag, a missing builtins library would previously go unnoticed during
the build. Explicitly emit an error in that case.
Differential Revision: https://reviews.llvm.org/D79470
We know the pointer is somewhere on the stack, we just don't know
exactly where, since the index may be variable.
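For example (hypothetical):

  int f(unsigned i) {
    int buf[16];
    int *p = &buf[i & 15]; // points somewhere into buf, but the exact
                           // offset is not known statically
    return *p;
  }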
Differential Revision: https://reviews.llvm.org/D80060
Along the lines of D77454 and D79968. Unlike loads and stores, the
default alignment is getPrefTypeAlign, to match the existing handling in
various places, including SelectionDAG and InstCombine.
Differential Revision: https://reviews.llvm.org/D80044
This ensures we create mem operands for these instructions, fixing PR45949.
Unfortunately, it increases the size of X86GenDAGISel.inc, but some DAG
combine canonicalization could reduce the kinds of loads we need to match.
If isSized is passed a SmallPtrSet, it uses that set to catch infinitely
recursive types (for example, a struct that has itself as a member).
Otherwise, it just crashes on such types.
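A sketch of the recursive case and the guarded query (using the LLVM C++
API; the wrapper function is illustrative):

  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/LLVMContext.h"
  using namespace llvm;

  bool isRecursiveStructSized() {
    LLVMContext Ctx;
    StructType *T = StructType::create(Ctx, "T");
    T->setBody(T); // T contains itself by value: infinitely recursive
    SmallPtrSet<Type *, 4> Visited;
    return T->isSized(&Visited); // terminates; without the set this
                                 // would recurse forever
  }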
I've also made a stab at imposing some more order on where and how we add
attributes; this part should be NFC. I wasn't sure whether the CUDA use
case for libdevice should propagate CPU/features attributes, so there's a
bit of unnecessary duplication.
This is split off from D79799 - where I was proposing to fully iterate
over a function until there are no more transforms. I suspect we are
still going to want to do something like that eventually.
But we can achieve the same gains much more efficiently on the current
set of regression tests just by reversing the order that we visit the
instructions.
This may also reduce the motivation for D79078, but we are still not
getting the optimal pattern for a reduction.
Given a VQMOVN(VSHR), we can fold that into a VQSHRN simply enough using
a few tablegen patterns.
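Per lane, the fused operation computes (scalar sketch; the 8-bit shift
amount is an arbitrary example):

  #include <stdint.h>

  int16_t vqshrn_lane(int32_t x) {
    int32_t s = x >> 8;             // VSHR (arithmetic shift right)
    if (s < -0x8000) s = -0x8000;   // saturate to the narrow range...
    if (s >  0x7fff) s =  0x7fff;
    return (int16_t)s;              // ...and narrow: fused, one VQSHRN
  }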
Differential Revision: https://reviews.llvm.org/D77720
This is basically the same patch as D63233, but converted to
funnel shifts rather than regular shifts. I did not see a
way to effectively share code between the two cases, though.
This follows D79718 and D79827 to re-fix PR37426, because
that pattern now gets canonicalized to funnel shift intrinsics in IR.
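For reference, the PR37426 shape is the guarded variable rotate, which
instcombine turns into a funnel-shift intrinsic (a C++ sketch; 32-bit
unsigned assumed):

  unsigned rotl(unsigned x, unsigned n) {
    // The n == 0 guard avoids shifting by the full bit width; the whole
    // select gets canonicalized to llvm.fshl(x, x, n) in IR.
    return n ? (x << n) | (x >> (32 - n)) : x;
  }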
I did draft an alternative patch as an enhancement to
"shouldSinkOperands()", but that was awkward because
we have to key the transform off the select and then
look at both its users and its operands.
This adds two combines for VMOVN: one to fold
VMOVN[tb](c, VQMOVNb(a, b)) => VQMOVN[tb](c, b)
The other performs demanded-bits analysis on the lanes of a VMOVN. We
know that only the bottom lanes of the second operand and either the top
or bottom lanes of the Qd operand are needed in the result, depending on
whether the VMOVN is a bottom or top variant.
Differential Revision: https://reviews.llvm.org/D77718
This adds some custom lowering for VQMOVN, an instruction that can be
used to perform saturating truncates from a pair of min(max(X, -0x8000),
0x7fff), provided those constants are correct. This leaves a VQMOVNBs,
which saturates the value and inserts it into the bottom lanes of an
existing vector. We then need to do something with the other lanes,
extending the value using a vmovlb.
Ideally, as will often be the case, only the bottom lane of what remains
will be demanded, allowing the vmovlb to be removed. This should mean
the instruction is neutral or a win most of the time, and it allows
some extra follow-up folding to happen.
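The source pattern being targeted looks roughly like this in C++ (a
sketch; the function shape is an illustrative assumption, not code from
the patch):

  #include <stdint.h>

  void sat_narrow(const int32_t *src, int16_t *dst, int n) {
    for (int i = 0; i < n; i++) {
      int32_t x = src[i];
      if (x < -0x8000) x = -0x8000; // the min/max clamp to [-0x8000, 0x7fff]
      if (x >  0x7fff) x =  0x7fff;
      dst[i] = (int16_t)x;          // lowers to VQMOVN when vectorized
    }
  }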
Differential Revision: https://reviews.llvm.org/D77590