Commit Graph

7184 Commits

Author SHA1 Message Date
Simon Pilgrim 31d3c0b333 [ADT] Fix Wshift-overflow gcc warning in isPowerOf2 unit test 2021-10-18 17:00:36 +01:00
Gil Rapaport 1156bd4fc3 [LV] Record memory widening decisions (NFCI)
Record widening decisions for memory operations within the planned recipes and
use the recorded decisions in code-gen rather than querying the cost model.

Differential Revision: https://reviews.llvm.org/D110479
2021-10-18 18:03:35 +03:00
Simon Pilgrim ac1c0dd317 [ADT] Add some basic APInt::isPowerOf2() unit test coverage 2021-10-18 15:01:28 +01:00
Nikita Popov 274b2439f8 [ConstantRange] Add fast signed multiply
The multiply() implementation is very slow -- it performs six
multiplications in double the bitwidth, which means that it will
typically work on allocated APInts and bypass fast-path
implementations. Add an additional implementation that doesn't
try to produce anything better than a full range if overflow is
possible. At least for the BasicAA use-case, we really don't care
about more precise modeling of overflow behavior. The current
use of multiply() is fine while the implementation is limited to
a single index, but extending it to the multiple-index case makes
the compile-time impact untenable.
2021-10-17 16:41:49 +02:00
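
The shape of such a fast path can be sketched as follows (an illustrative simplification, not the in-tree implementation; it ignores sign-wrapped input ranges and simply bails out to the full range whenever an endpoint product overflows):

```
#include "llvm/ADT/APInt.h"
#include "llvm/IR/ConstantRange.h"
using namespace llvm;

// Illustrative sketch only: compute the four endpoint products with overflow
// checks; if any of them overflows, give up and return the full range rather
// than modelling the wrap precisely.
static ConstantRange smulFastSketch(const ConstantRange &L,
                                    const ConstantRange &R) {
  unsigned BW = L.getBitWidth();
  if (L.isEmptySet() || R.isEmptySet())
    return ConstantRange::getEmpty(BW);
  bool O1, O2, O3, O4;
  APInt AC = L.getSignedMin().smul_ov(R.getSignedMin(), O1);
  APInt AD = L.getSignedMin().smul_ov(R.getSignedMax(), O2);
  APInt BC = L.getSignedMax().smul_ov(R.getSignedMin(), O3);
  APInt BD = L.getSignedMax().smul_ov(R.getSignedMax(), O4);
  if (O1 || O2 || O3 || O4)
    return ConstantRange::getFull(BW); // overflow possible: don't try harder
  APInt Lo = APIntOps::smin(APIntOps::smin(AC, AD), APIntOps::smin(BC, BD));
  APInt Hi = APIntOps::smax(APIntOps::smax(AC, AD), APIntOps::smax(BC, BD));
  return ConstantRange::getNonEmpty(Lo, Hi + 1);
}
```
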
Nikita Popov 492a4a428f [APInt] Fix 1-bit edge case in smul_ov()
The sdiv used to check for overflow can itself overflow if the
LHS is signed min and the RHS is -1. The code tried to account for
this by also checking the commuted version. However, for 1-bit
values, signed min and -1 are the same value, so both divisions
overflow. As such, the overflow for -1 * -1 was not detected
(which results in -1 rather than 1 for 1-bit values). Fix this by
explicitly checking for this case instead.

Noticed while adding exhaustive test coverage for smul_ov(),
which is also part of this commit.
2021-10-16 20:31:04 +02:00
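
The edge case is easy to see with APInt directly (illustrative snippet: in a 1-bit integer the only values are 0 and -1, so -1 * -1 = 1 is not representable and must report overflow):

```
#include "llvm/ADT/APInt.h"
#include <cassert>
using namespace llvm;

int main() {
  bool Overflow = false;
  APInt MinusOne(/*numBits=*/1, /*val=*/1); // the 1-bit pattern '1' is -1 when signed
  APInt Res = MinusOne.smul_ov(MinusOne, Overflow);
  assert(Overflow && "after the fix, -1 * -1 overflows in 1 bit");
  (void)Res;
}
```
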
Tomasz Miąsko 41a6fc8438 [Demangle] Extract nonMicrosoftDemangle from llvm::demangle
Introduce a new demangling function that supports symbols using Itanium
mangling and Rust v0 mangling, and is expected in the near future to
include support for D mangling as well.

Unlike llvm::demangle, the function does not accept extra underscore
decoration. The callers generally know exactly when symbols should
include the extra decoration and so they should be responsible for
stripping it.

Functionally the only intended change is to allow demangling Rust
symbols with an extra underscore decoration through llvm::demangle,
which matches the existing behaviour for Itanium symbols.

Reviewed By: dblaikie, jhenderson

Part of https://reviews.llvm.org/D110664
2021-10-16 13:32:16 +02:00
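
For reference, the general-purpose entry point looks roughly like this in use (a minimal illustrative snippet; the mangled name shown is an ordinary Itanium symbol, and Rust v0 symbols go through the same call):

```
#include "llvm/Demangle/Demangle.h"
#include <iostream>

int main() {
  // "_Z3fooi" is Itanium-mangled "foo(int)". llvm::demangle also accepts
  // Rust v0 symbols and, per the change above, now tolerates an extra
  // leading underscore on them just as it already did for Itanium symbols.
  std::cout << llvm::demangle("_Z3fooi") << "\n";
}
```
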
Nikita Popov 587493b441 [ConstantRange] Compute precise shl range for single elements
For the common case where the shift amount is constant (a single
element range) we can easily compute a precise range (up to the
unsigned envelope), so do that.
2021-10-15 23:44:41 +02:00
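
Illustratively (a hedged example; the endpoints in the comment are only what one would expect from the description above, not quoted from the patch's tests):

```
#include "llvm/ADT/APInt.h"
#include "llvm/IR/ConstantRange.h"
using namespace llvm;

// [3, 6) shifted left by the single element 4: every element x in {3, 4, 5}
// maps to x << 4, so a precise result would be [48, 81) -- no clamping needed
// here since nothing exceeds the 8-bit unsigned envelope.
ConstantRange shlExample() {
  ConstantRange X(APInt(8, 3), APInt(8, 6));
  ConstantRange Amt(APInt(8, 4)); // single-element shift amount
  return X.shl(Amt);
}
```
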
Nikita Popov 9eb8040a28 [ConstantRange] Support checking optimality for subset of inputs (NFC)
We always want to check correctness, but for some operations we
can only guarantee optimality for a subset of inputs. Accept an
additional predicate that determines whether optimality for a
given pair of ranges should be checked.
2021-10-15 22:48:07 +02:00
Nikita Popov 82e858d1bf [ConstantRange] Better diagnostic for correctness test failure (NFC)
Print a friendly error message including the inputs, result and
not-contained element if an exhaustive correctness test fails,
same as we do if the optimality test fails.
2021-10-15 21:52:17 +02:00
Mubashar Ahmad 97809c828f [AArch64]Enabling Cortex-A510 Support
This patch enables support for Cortex-A510 CPUs.

Reviewed By: MarkMurrayARM, dmgreen

Differential Revision: https://reviews.llvm.org/D109825
2021-10-15 14:31:18 +01:00
djtodoro 8c3adce81d [JSON] Handle uint64_t type
There was no handling of uint64_t in the LLVM JSON library.
This patch adds support for that. The motivation is
https://reviews.llvm.org/D109217.

Differential Revision: https://reviews.llvm.org/D109347
2021-10-15 11:18:22 +02:00
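
Assuming the new support follows the shape of the rest of the json::Value API (this is an illustrative sketch, not code from the patch), values above INT64_MAX can now be stored and printed without being squeezed into int64_t or double:

```
#include "llvm/Support/JSON.h"
#include "llvm/Support/raw_ostream.h"
#include <cstdint>

void emitBigCounter() {
  uint64_t Big = (uint64_t(1) << 63) + 5; // larger than INT64_MAX
  llvm::json::Object O;
  O["counter"] = Big; // assumes the new uint64_t handling added here
  llvm::outs() << llvm::json::Value(std::move(O)) << "\n";
}
```
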
Jeremy Morse b5426ced71 [DebugInfo][InstrRef] Place variable-values PHI using LLVM utilities
This patch is very similar to D110173 / a3936a6c19, but for variable
values rather than machine values. This is for the second instr-ref
problem, calculating the correct variable value on entry to each block.
The previous lattice-based implementation was broken; we now use LLVM's
existing PHI placement utilities to work out where values need to merge,
then eliminate unnecessary ones through value propagation.

Most of the deletions here happen in vlocJoin: it was trying (badly) to
pick a location for PHIs to live in, leading to an infinite loop in the
added MIR test, where it would repeatedly switch between register
locations. The new approach is simpler: either PHIs can be eliminated or
they can't, and the location of the value is a separate problem.

Various bits and pieces move to the header so that they can be tested in
the unit tests. The DbgValue class grows a "VPHI" kind to represent
variable value PHIs that haven't been eliminated yet.

Differential Revision: https://reviews.llvm.org/D110630
2021-10-14 14:43:43 +01:00
Jeremy Morse a3936a6c19 [DebugInfo][InstrRef] Use PHI placement utilities for machine locations
InstrRefBasedLDV used to try to determine which values are in which
registers using a lattice approach; however, this was hard to understand and
broken in various ways. This patch replaces that approach with a standard
SSA approach using existing LLVM utilities. PHIs are placed at dominance
frontiers; value propagation then eliminates unnecessary PHIs.

This patch also adds a bunch of unit tests that should cover many of the
weirder forms of control flow.

Differential Revision: https://reviews.llvm.org/D110173
2021-10-13 12:49:04 +01:00
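
The underlying utility is the iterated-dominance-frontier calculator; a rough IR-level sketch of the technique follows (InstrRefBasedLDV itself works on the machine level, so treat this as illustrative only):

```
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/IteratedDominanceFrontier.h"
#include "llvm/IR/Dominators.h"
using namespace llvm;

// Blocks that define a value feed the iterated dominance frontier; the blocks
// it returns are exactly where merges (PHIs) for that value must be placed.
// Unnecessary PHIs are then pruned by value propagation, as described above.
void placePHIsSketch(DominatorTree &DT,
                     const SmallPtrSetImpl<BasicBlock *> &DefBlocks,
                     SmallVectorImpl<BasicBlock *> &PHIBlocks) {
  ForwardIDFCalculator IDF(DT);
  IDF.setDefiningBlocks(DefBlocks);
  IDF.calculate(PHIBlocks);
}
```
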
Fangrui Song c2d4fe51bb [X86] Remove little support we had for MPX
GCC 9.1 removed Intel MPX support. The Linux kernel removed MPX in 2019.
glibc 2.35 will remove MPX.

Our support is limited: we support assembling of bndmov but not bnd.
Just remove it.

Reviewed By: pengfei, skan

Differential Revision: https://reviews.llvm.org/D111517
2021-10-12 16:18:51 -07:00
Lang Hames adf55ac665 [ORC] Call ExecutorProcessControl::disconnect in unit tests that require it.
Another follow-up to 2815ed57e3 and 19b4e3cfc6. For unit tests that don't use
an ExecutionSession we need to call ExecutorProcessControl::disconnect directly
to wait for the dispatcher to shut down.

https://llvm.org/PR52153
2021-10-12 14:59:39 -07:00
Lang Hames 19b4e3cfc6 [ORC] Call ExecutionSession::endSession in unit tests.
2815ed57e3 added calls from ExecutorProcessControl::disconnect implementations
to shut down the TaskDispatcher. We still need to call endSession to trigger
disconnection though. This commit adds the necessary calls to the failing unit
tests.

https://llvm.org/PR52153
2021-10-12 14:27:39 -07:00
Lang Hames bbc2fc548b [Support][ORC] Add an explicit release operation to OwningMemoryBlock.
This gives OwningMemoryBlock clients a way to check for errors on release.

rdar://84127175
2021-10-12 10:13:43 -07:00
Jeremy Morse 838b4a533e [DebugInfo][NFC] Move LiveDebugValues class to header
This patch shifts the InstrRefBasedLDV class declaration to a header.
Partially because it's already massive, but mostly so that I can start
writing some unit tests for it. This patch also adds the boilerplate for
said unit tests.

Differential Revision: https://reviews.llvm.org/D110165
2021-10-12 16:07:26 +01:00
Lang Hames 962a2479b5 Re-apply e50aea58d5, "Major JITLinkMemoryManager refactor". with fixes.
Adds explicit narrowing casts to JITLinkMemoryManager.cpp.

Honors -slab-address option in llvm-jitlink.cpp, which was accidentally
dropped in the refactor.

This effectively reverts commit 6641d29b70.
2021-10-11 21:39:00 -07:00
Lang Hames 6641d29b70 Revert "[JITLink][ORC] Major JITLinkMemoryManager refactor."
This reverts commit e50aea58d5 while I
investigate bot failures.
2021-10-11 19:23:41 -07:00
Lang Hames e50aea58d5 [JITLink][ORC] Major JITLinkMemoryManager refactor.
This commit substantially refactors the JITLinkMemoryManager API to: (1) add
asynchronous versions of key operations, (2) give memory manager implementations
full control over link graph address layout, (3) enable more efficient tracking
of allocated memory, and (4) support "allocation actions" and finalize-lifetime
memory.

Together these changes provide a more usable API, and enable more powerful and
efficient memory manager implementations.

To support these changes the JITLinkMemoryManager::Allocation inner class has
been split into two new classes: InFlightAllocation, and FinalizedAllocation.
The allocate method returns an InFlightAllocation that tracks memory (both
working and executor memory) prior to finalization. The finalize method returns
a FinalizedAllocation object, and the InFlightAllocation is discarded. Breaking
Allocation into InFlightAllocation and FinalizedAllocation allows
InFlightAllocation subclasses to be written more naturally, and FinalizedAlloc
to be implemented and used efficiently (see (3) below).

In addition to the memory manager changes this commit also introduces a new
MemProt type to represent memory protections (MemProt replaces use of
sys::Memory::ProtectionFlags in JITLink), and a new MemDeallocPolicy type that
can be used to indicate when a section should be deallocated (see (4) below).

Plugin/pass writers who were using sys::Memory::ProtectionFlags will have to
switch to MemProt -- this should be straightforward. Clients with out-of-tree
memory managers will need to update their implementations. Clients using
in-tree memory managers should mostly be able to ignore it.

Major features:

(1) More asynchrony:

The allocate and deallocate methods are now asynchronous by default, with
synchronous convenience wrappers supplied. The asynchronous versions allow
clients (including JITLink) to request and deallocate memory without blocking.

(2) Improved control over graph address layout:

Instead of a SegmentRequestMap, JITLinkMemoryManager::allocate now takes a
reference to the LinkGraph to be allocated. The memory manager is responsible
for calculating the memory requirements for the graph, and laying out the graph
(setting working and executor memory addresses) within the allocated memory.
This gives memory managers full control over JIT'd memory layout. For clients
that don't need or want this degree of control the new "BasicLayout" utility can
be used to get a segment-based view of the graph, similar to the one provided by
SegmentRequestMap. Once segment addresses are assigned the BasicLayout::apply
method can be used to automatically lay out the graph.

(3) Efficient tracking of allocated memory.

The FinalizedAlloc type is a wrapper for an ExecutorAddr and requires only
64 bits of storage in the controller. The meaning of the address held by the
FinalizedAlloc is left up to the memory manager implementation, but the
FinalizedAlloc type enforces a requirement that deallocate be called on any
non-default values prior to destruction. The deallocate method takes a
vector<FinalizedAlloc>, allowing for bulk deallocation of many allocations in a
single call.

Memory manager implementations will typically store the address of some
allocation metadata in the executor in the FinalizedAlloc, as holding this
metadata in the executor is often cheaper and may allow for clean deallocation
even in failure cases where the connection with the controller is lost.

(4) Support for "allocation actions" and finalize-lifetime memory.

Allocation actions are pairs (finalize_act, deallocate_act) of JITTargetAddress
triples (fn, arg_buffer_addr, arg_buffer_size), that can be attached to a
finalize request. At finalization time, after memory protections have been
applied, each of the "finalize_act" elements will be called in order (skipping
any elements whose fn value is zero) as

((char*(*)(const char *, size_t))fn)((const char *)arg_buffer_addr,
                                     (size_t)arg_buffer_size);

At deallocation time the deallocate elements will be run in reverse order (again
skipping any elements where fn is zero).

The returned char * should be null to indicate success, or a non-null
heap-allocated string error message to indicate failure.
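
Concretely, a function intended as the target of a finalize or deallocate action would match that convention like so (an illustrative, hypothetical action body; only the signature and return convention come from the description above):

```
#include <cstddef>
#include <cstdlib>
#include <cstring>

// Hypothetical finalize-action target. ArgData/ArgSize describe the argument
// buffer passed in the action triple; returning nullptr reports success, and
// a heap-allocated string reports failure.
extern "C" char *registerExampleSection(const char *ArgData, size_t ArgSize) {
  if (!ArgData || ArgSize == 0) {
    const char *Msg = "registerExampleSection: empty argument buffer";
    char *Err = static_cast<char *>(std::malloc(std::strlen(Msg) + 1));
    std::strcpy(Err, Msg);
    return Err;
  }
  // ... register eh-frames / TLS / metadata described by the buffer ...
  return nullptr;
}
```
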

These actions allow finalization and deallocation to be extended to include
operations like registering and deregistering eh-frames, TLS sections,
initializers and deinitializers, and language metadata sections. Previously these
operations required separate callWrapper invocations. Compared to callWrapper
invocations, actions require no extra IPC/RPC, reducing costs and eliminating
a potential source of errors.

Finalize-lifetime memory complements finalize actions: sections with finalize
lifetime should be destroyed by memory managers immediately after finalization
actions have been run. This allows finalize actions (e.g. ones carrying extra
metadata, or synthesized finalize actions) to be supported without incurring
permanent memory overhead.
2021-10-11 19:12:42 -07:00
Lang Hames 8abf46d39a [ORC] Propagate out-of-band errors in callAsync.
Returned out-of-band errors should be wrapped as llvm::Errors and passed to the
SendDeserializedResult function. Failure to do this results in an assertion when
we try to deserialize from the WrapperFunctionResult while it's in the
out-of-band error state.
2021-10-11 14:23:50 -07:00
Roman Lebedev 684cbae89a [KnownBits] Introduce `countMaxActiveBits()` and use it in a few places 2021-10-11 23:36:06 +03:00
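
A small illustration of what the new helper reports (illustrative values, not taken from the patch):

```
#include "llvm/Support/KnownBits.h"
#include <cassert>
using namespace llvm;

int main() {
  KnownBits Known(8);
  Known.Zero.setHighBits(3); // top three bits known to be zero
  // At most five low bits can be set, so the value is known to fit in 5 bits.
  assert(Known.countMaxActiveBits() == 5);
}
```
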
Victor Campos 3550e242fa [Clang][ARM][AArch64] Add support for Armv9-A, Armv9.1-A and Armv9.2-A
armv9-a, armv9.1-a and armv9.2-a can be targeted using the -march option
both in ARM and AArch64.

 - Armv9-A maps to Armv8.5-A.
 - Armv9.1-A maps to Armv8.6-A.
 - Armv9.2-A maps to Armv8.7-A.
 - The SVE2 extension is enabled by default on these architectures.
 - The cryptographic extensions are disabled by default on these
 architectures.

The Armv9-A architecture is described in the Arm® Architecture Reference
Manual Supplement Armv9, for Armv9-A architecture profile
(https://developer.arm.com/documentation/ddi0608/latest).

Reviewed By: SjoerdMeijer

Differential Revision: https://reviews.llvm.org/D109517
2021-10-11 17:44:09 +01:00
Lang Hames c59ebe4c4c [ORC] Add TaskDispatcher::shutdown calls to TaskDispatchTest.cpp unit tests.
These calls were left out of 4d7cea3d2e. In the InPlaceDispatcher test case
the operation is a no-op, but it's good form to include it. In the
DynamicThreadPoolTaskDispatcher test the shutdown call is required to ensure
that we don't exit the test (and tear down the dispatcher) before the thread
running the dispatch has completed.
2021-10-10 21:09:29 -07:00
Esme-Yi a00ff71668 [XCOFF] Improve error message context.
Summary: This patch improves the error message context of the
XCOFF interfaces by providing more details.

Reviewed By: jhenderson

Differential Revision: https://reviews.llvm.org/D110320
2021-10-11 02:52:20 +00:00
Lang Hames f341161689 [ORC] Add TaskDispatch API and thread it through ExecutorProcessControl.
ExecutorProcessControl objects will now have a TaskDispatcher member which
should be used to dispatch work (in particular, handling incoming packets in
remote EPC implementations like SimpleRemoteEPC).

The GenericNamedTask template can be used to wrap function objects that are
callable as 'void()' (along with an optional name to describe the task).
The makeGenericNamedTask functions can be used to create GenericNamedTask
instances without having to name the function object type.

In a future patch ExecutionSession will be updated to use the
ExecutorProcessControl's dispatcher, instead of its DispatchTaskFunction.
2021-10-10 18:39:55 -07:00
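
Usage presumably looks something like the following (a speculative sketch based only on the names mentioned above; the exact signatures and how the dispatcher is obtained may differ):

```
#include "llvm/ExecutionEngine/Orc/TaskDispatch.h"
using namespace llvm::orc;

void dispatchExample(TaskDispatcher &D) {
  // Wrap a void() callable, give it a human-readable name, and dispatch it.
  auto T = makeGenericNamedTask(
      [] { /* handle an incoming packet, run a lookup, ... */ },
      "example task");
  D.dispatch(std::move(T));
}
```
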
Lang Hames da7f993a8d [ORC] Reorder callWrapperAsync and callSPSWrapperAsync parameters.
The callee address is now the first parameter and the 'SendResult' function
the second. This change improves consistency with the non-async functions
where the callee is the first address and the return value the second.
2021-10-10 13:10:43 -07:00
Reid Kleckner 89b57061f7 Move TargetRegistry.(h|cpp) from Support to MC
This moves the registry higher in the LLVM library dependency stack.
Every client of the target registry needs to link against MC anyway to
actually use the target, so we might as well move this out of Support.

This allows us to ensure that Support doesn't have includes from MC/*.

Differential Revision: https://reviews.llvm.org/D111454
2021-10-08 14:51:48 -07:00
Jake Egan 8037481cb2 [AIX] Disable tests failing due to missing DWARF sections
The following tests are failing due to missing DWARF sections. This patch marks these tests as XFAIL/DISABLED on AIX until a more permanent solution is implemented.

Reviewed By: shchenz

Differential Revision: https://reviews.llvm.org/D111336
2021-10-08 12:06:38 -04:00
Amara Emerson 08b3c0d995 [GlobalISel] Combine G_UMULH x, (1 << c)) -> x >> (bitwidth - c)
In order to not generate an unnecessary G_CTLZ, I extended the constant folder
in the CSEMIRBuilder to handle G_CTLZ. I also added some extra handling of
vector constants too. It seems we don't have any support for doing constant
folding of vector constants, so the tests show some other useless G_SUB
instructions too.

Differential Revision: https://reviews.llvm.org/D111036
2021-10-07 23:51:37 -07:00
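
The arithmetic identity behind the combine is easy to check in plain C++ (illustrative only; nothing here is GlobalISel code):

```
#include <cassert>
#include <cstdint>

int main() {
  const uint32_t X = 0xDEADBEEF;
  for (uint32_t C = 1; C < 32; ++C) {
    // umulh(x, 1 << c): the high 32 bits of the 64-bit product.
    uint32_t High = uint32_t((uint64_t(X) * (uint64_t(1) << C)) >> 32);
    assert(High == (X >> (32 - C)) && "high half equals x >> (bitwidth - c)");
  }
}
```
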
Jack Andersen bd4dad87f4 [MachineInstr] Move MIParser's DBG_VALUE RegState::Debug invariant into MachineInstr::addOperand
Based on the reasoning of D53903, register operands of DBG_VALUE are
invariably treated as RegState::Debug operands. This change enforces
this invariant as part of MachineInstr::addOperand so that all passes
emit this flag consistently.

RegState::Debug is inconsistently set on DBG_VALUE registers throughout
LLVM. This runs the risk of a filtering iterator such as
MachineRegisterInfo::reg_nodbg_iterator processing these operands
erroneously when they are not parsed from MIR sources.

This issue was observed in the development of the llvm-mos fork which
adds a backend that relies on physical register operands much more than
existing targets. Physical RegUnit 0 has the same numeric encoding as
$noreg (indicating an undef for DBG_VALUE). Allowing debug operands into
the machine scheduler correlates $noreg with RegUnit 0 (i.e. a collision
of register numbers with different zero semantics). Eventually, this
causes an assert where DBG_VALUE instructions are prohibited from
participating in live register ranges.

Reviewed By: MatzeB, StephenTozer

Differential Revision: https://reviews.llvm.org/D110105
2021-10-07 16:08:52 +01:00
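
The enforced invariant amounts to something like the following (a hedged sketch of the idea, not the in-tree MachineInstr::addOperand code):

```
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineOperand.h"
using namespace llvm;

// Before an operand is attached to a debug instruction, make sure register
// operands carry the debug flag so reg_nodbg iterators will skip them,
// regardless of which pass created the operand.
MachineOperand adjustForDebugSketch(const MachineInstr &MI, MachineOperand Op) {
  if (MI.isDebugInstr() && Op.isReg())
    Op.setIsDebug(true);
  return Op;
}
```
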
Sanjay Patel 519752062c [PatternMatch] add matchers for commutative logical and/or
We need these to add folds with the same structure as
regular commuted logic ops.
2021-10-07 10:37:34 -04:00
David Green 73346f5848 [ARM] Introduce a MQPRCopy
Currently when creating tail predicated loops, we need to validate that
all the live-outs of a loop will be equivalent with and without tail
predication, and if they are not we cannot legally create a
tail-predicated loop, leaving expensive vctp and vpst instructions in
the loop. These notably can include register-allocation instructions
like stack loads and stores, and copies lowered from COPYs to MVE_VORRs.

Instead of trying to prove this is valid late in the pipeline, this
patch introduces a MQPRCopy pseudo instruction that COPY is lowered to.
This can then either be converted to a MVE_VORR where possible, or to a
couple of VMOVD instructions if not. This way they do not behave
differently within and outside of tail-predicated regions, and we can
know by construction that they are always valid. The idea is that we can
do the same with stack loads and stores, converting them to VLDR/VSTR or
VLDM/VSTM where required to prove tail predication is always valid.

This does unfortunately mean inserting multiple VMOVD instructions,
instead of a single MVE_VORR, but my experiments show it to be an
improvement in general.

Differential Revision: https://reviews.llvm.org/D111048
2021-10-07 12:52:12 +01:00
Arthur Eubanks 05392466f0 Reland [IR] Increase max alignment to 4GB
Currently the max alignment representable is 1GB, see D108661.
Setting the align of an object to 4GB is desirable in some cases to make sure the lower 32 bits are clear which can be used for some optimizations, e.g. https://crbug.com/1016945.

This uses an extra bit in instructions that carry an alignment. We can store 15 bits of "free" information, and with this change some instructions (e.g. AtomicCmpXchgInst) use 14 bits.
We can increase the max alignment representable above 4GB (up to 2^62) since we're only using 33 of the 64 values, but I've just limited it to 4GB for now.

The one place we have to update the bitcode format is for the alloca instruction. It stores its alignment into 5 bits of a 32 bit bitfield. I've added another field which is 8 bits and should be future proof for a while. For backward compatibility, we check if the old field has a value and use that, otherwise use the new field.

Updating clang's max allowed alignment will come in a future patch.

Reviewed By: hans

Differential Revision: https://reviews.llvm.org/D110451
2021-10-06 13:29:23 -07:00
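
The motivation is easiest to see outside of IR (a plain C++ illustration of the pointer-tagging idea that clear low bits enable; this is an interpretation of the motivation, unrelated to the bitcode encoding changes):

```
#include <cstdint>

// If an object is aligned to 4GB, the low 32 bits of its address are zero and
// can temporarily carry auxiliary data that is masked off before the pointer
// is used.
constexpr uint64_t FourGiB = uint64_t(1) << 32;

uint64_t tag(uint64_t AlignedAddr, uint32_t Data) { return AlignedAddr | Data; }
uint64_t address(uint64_t Tagged) { return Tagged & ~(FourGiB - 1); }
uint32_t payload(uint64_t Tagged) { return uint32_t(Tagged & (FourGiB - 1)); }
```
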
Chris Lattner ad37a45a2e [APInt] Fix isAllOnes and extractBits for zero width values.
isAllOnes() should return true for zero-bit values because
they contain no zero bits.

Thanks to Jay Foad for pointing this out.

Differential Revision: https://reviews.llvm.org/D111241
2021-10-06 12:37:53 -07:00
Arthur Eubanks 569346f274 Revert "Reland [IR] Increase max alignment to 4GB"
This reverts commit 8d64314ffe.
2021-10-06 11:38:11 -07:00
Arthur Eubanks 8d64314ffe Reland [IR] Increase max alignment to 4GB
Currently the max alignment representable is 1GB, see D108661.
Setting the align of an object to 4GB is desirable in some cases to make sure the lower 32 bits are clear which can be used for some optimizations, e.g. https://crbug.com/1016945.

This uses an extra bit in instructions that carry an alignment. We can store 15 bits of "free" information, and with this change some instructions (e.g. AtomicCmpXchgInst) use 14 bits.
We can increase the max alignment representable above 4GB (up to 2^62) since we're only using 33 of the 64 values, but I've just limited it to 4GB for now.

The one place we have to update the bitcode format is for the alloca instruction. It stores its alignment into 5 bits of a 32 bit bitfield. I've added another field which is 8 bits and should be future proof for a while. For backward compatibility, we check if the old field has a value and use that, otherwise use the new field.

Updating clang's max allowed alignment will come in a future patch.

Reviewed By: hans

Differential Revision: https://reviews.llvm.org/D110451
2021-10-06 11:03:51 -07:00
Arthur Eubanks 72cf8b6044 Revert "[IR] Increase max alignment to 4GB"
This reverts commit df84c1fe78.

Breaks some bots
2021-10-06 10:21:35 -07:00
Arthur Eubanks df84c1fe78 [IR] Increase max alignment to 4GB
Currently the max alignment representable is 1GB, see D108661.
Setting the align of an object to 4GB is desirable in some cases to make sure the lower 32 bits are clear which can be used for some optimizations, e.g. https://crbug.com/1016945.

This uses an extra bit in instructions that carry an alignment. We can store 15 bits of "free" information, and with this change some instructions (e.g. AtomicCmpXchgInst) use 14 bits.
We can increase the max alignment representable above 4GB (up to 2^62) since we're only using 33 of the 64 values, but I've just limited it to 4GB for now.

The one place we have to update the bitcode format is for the alloca instruction. It stores its alignment into 5 bits of a 32 bit bitfield. I've added another field which is 8 bits and should be future proof for a while. For backward compatibility, we check if the old field has a value and use that, otherwise use the new field.

Updating clang's max allowed alignment will come in a future patch.

Reviewed By: hans

Differential Revision: https://reviews.llvm.org/D110451
2021-10-06 09:54:14 -07:00
Simon Pilgrim 2e5daac217 [llvm] Update report_fatal_error calls from raw_string_ostream to use Twine(OS.str())
As described on D111049, we're trying to remove the <string> dependency from error handling and replace uses of report_fatal_error(const std::string&) with the Twine() variant which can be forward declared.

We can use the raw_string_ostream::str() method to perform the implicit flush() and return a reference to the std::string container that we can then wrap inside Twine().
2021-10-05 18:42:12 +01:00
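
The replacement pattern looks like this (a minimal sketch of the idiom described above):

```
#include "llvm/ADT/Twine.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/raw_ostream.h"
#include <string>

[[noreturn]] void failWithDetails(int Code) {
  std::string Msg;
  llvm::raw_string_ostream OS(Msg);
  OS << "operation failed with code " << Code;
  // str() flushes the stream and returns the underlying std::string, which
  // Twine can reference without Support needing <string> in its interface.
  llvm::report_fatal_error(llvm::Twine(OS.str()));
}
```
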
Chris Lattner cc697fc292 [APInt] Make insertBits and concat work with zero width APInts.
These should both clearly work with our current model for zero-width
integers, but didn't until now!

Differential Revision: https://reviews.llvm.org/D111113
2021-10-05 08:41:53 -07:00
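
With the fix, the degenerate zero-width cases behave as no-ops (illustrative expectations based on the description above):

```
#include "llvm/ADT/APInt.h"
#include <cassert>
using namespace llvm;

int main() {
  APInt Empty(/*numBits=*/0, /*val=*/0); // zero-width APInt
  APInt Byte(8, 0xAB);
  // Concatenating a zero-width value onto the low end changes nothing...
  assert(Byte.concat(Empty) == Byte);
  // ...and inserting zero bits is likewise a no-op.
  APInt Copy = Byte;
  Copy.insertBits(Empty, /*bitPosition=*/0);
  assert(Copy == Byte);
}
```
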
Kazu Hirata 3081de8c72 [llvm] Migrate from getNumArgOperands to arg_size (NFC)
Note that getNumArgOperands is considered a legacy name.  See
llvm/include/llvm/IR/InstrTypes.h for details.
2021-10-05 08:29:19 -07:00
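
The migration is mechanical (sketch):

```
#include "llvm/IR/InstrTypes.h"

// getNumArgOperands() and arg_size() return the same count; the latter is the
// preferred, non-legacy spelling.
unsigned countCallArgs(const llvm::CallBase &CB) {
  // Before: return CB.getNumArgOperands();
  return CB.arg_size();
}
```
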
Bjorn Pettersson 8ed0e6b2cf [SelectionDAG] Replace error prone index check in BaseIndexOffset::computeAliasing
Deriving NoAlias based on having the same index in two BaseIndexOffset
expressions seemed weird (and, as shown in the added unittest, the
correctness of doing so depended on undocumented pre-conditions that the
user of BaseIndexOffset::computeAliasing would need to take care of).

This patch removes the code that derived NoAlias based on indices
being the same. As compensation, to avoid regressions/diffs in
various lit tests, we also add a new check. The new check derives
NoAlias in case the two base pointers are based on two different
GlobalValues (neither of them being a GlobalAlias).

Reviewed By: niravd

Differential Revision: https://reviews.llvm.org/D110256
2021-10-05 12:15:55 +02:00
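
The added check boils down to roughly the following predicate (a hypothetical stand-alone helper written for illustration, not the in-tree code):

```
#include "llvm/IR/GlobalAlias.h"
#include "llvm/IR/GlobalValue.h"
#include "llvm/Support/Casting.h"
using namespace llvm;

// Two accesses whose bases are *different* GlobalValues cannot alias -- unless
// one of them is a GlobalAlias, which may simply be another name for the other
// global (as the related fix below illustrates), in which case stay conservative.
static bool basesProvablyNoAlias(const GlobalValue *A, const GlobalValue *B) {
  if (!A || !B || A == B)
    return false;
  if (isa<GlobalAlias>(A) || isa<GlobalAlias>(B))
    return false;
  return true;
}
```
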
Bjorn Pettersson 1896fb2cff [SelectionDAG] Assume that a GlobalAlias may alias other global values
This fixes a bug detected in DAGCombiner when using global alias
variables. Here is an example:
  @foo = global i16 0, align 1
  @aliasFoo = alias i16, i16 * @foo
  define i16 @bar() {
    ...
    store i16 7, i16 * @foo, align 1
    store i16 8, i16 * @aliasFoo, align 1
    ...
  }

BaseIndexOffset::computeAliasing would incorrectly derive NoAlias
for the two accesses in the example above, resulting in DAGCombiner
miscompiles.

This patch fixes the problem by a defensive approach letting
BaseIndexOffset::computeAliasing return false, i.e. that the aliasing
couldn't be determined, when comparing two global values and at least
one is a GlobalAlias. In the future we might improve this with a
deeper analysis to look at the aliasee for the GlobalAlias etc. But
that is a bit more complicated considering that we could have
'local_unnamed_addr' and situations with several 'alias' variables.

Fixes PR51878.

Differential Revision: https://reviews.llvm.org/D110064
2021-10-05 12:15:55 +02:00
Amara Emerson 8bde5e58c0 Delay outgoing register assignments to last.
The delayed stack protector feature which is currently used for SDAG (and thus
allows for more commonly generating tail calls) depends on being able to extract
the tail call into a separate return block. To do this it also has to extract
the vreg->physreg copies that set up the call's arguments, since if it doesn't
then the call inst ends up using undefined physregs in its newly spliced block.

SelectionDAG implementations can do this because they delay emitting register
copies until *after* the stack arguments are set up. GISel, however, just
processes and emits the arguments in IR order, so stack arguments always end up
last, and thus this breaks the code that looks for any register arg copies that
precede the call instruction.

This patch adds a thunk argument to the assignValueToReg() and custom assignment
hooks. For outgoing arguments, register assignments use this return param to
return a thunk that does the actual generation of the copies. We collect these
until all the outgoing stack assignments have been done and then execute them,
so that the copies (and perhaps some artifacts like G_SEXTs) are placed after
any stores.

Differential Revision: https://reviews.llvm.org/D110610
2021-10-04 12:33:20 -07:00
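
A language-level sketch of the thunk mechanism (plain C++, not the actual CallLowering code; the names are made up for illustration):

```
#include <functional>
#include <vector>

// Instead of emitting a vreg->physreg copy immediately, the assignment hook
// records a closure. All recorded closures run only after the stack stores
// have been emitted, so every register copy ends up after the stores and the
// tail call (plus its argument copies) can be split into its own block.
struct DelayedRegAssignments {
  std::vector<std::function<void()>> Thunks;

  void assignValueToRegSketch(unsigned VReg, unsigned PhysReg) {
    Thunks.push_back([=] { /* build COPY PhysReg <- VReg here */
                           (void)VReg; (void)PhysReg; });
  }

  void runAfterStackAssignments() {
    for (auto &T : Thunks)
      T();
    Thunks.clear();
  }
};
```
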
Wang, Pengfei 92ac146bb9 [demangle] Add a unittest for _Float16 demangling. NFC 2021-10-04 22:05:08 +08:00
Jay Foad a9bceb2b05 [APInt] Stop using soft-deprecated constructors and methods in llvm. NFC.
Stop using APInt constructors and methods that were soft-deprecated in
D109483. This fixes all the uses I found in llvm, except for the APInt
unit tests which should still test the deprecated methods.

Differential Revision: https://reviews.llvm.org/D110807
2021-10-04 08:57:44 +01:00
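
Representative renames from that migration (a partial, illustrative list; see D109483 for the full set):

```
#include "llvm/ADT/APInt.h"
using namespace llvm;

void exampleRenames() {
  APInt Zero = APInt::getZero(32);    // was: APInt::getNullValue(32)
  APInt Ones = APInt::getAllOnes(32); // was: APInt::getAllOnesValue(32)
  (void)Zero.isZero();                // was: isNullValue()
  (void)Ones.isAllOnes();             // was: isAllOnesValue()
}
```
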
Dávid Bolvanský b1fcca3884 Fixed warnings in LLVM produced by -Wbitwise-instead-of-logical 2021-10-03 13:04:18 +02:00
Min-Yih Hsu 475de8da01 [IR]PATCH 2/2: Add MDNode::printTree and dumpTree
This patch adds functionality to print an MDNode in tree shape. For
example, instead of printing an MDNode like this:
```
<0x5643e1166888> = !DILocalVariable(name: "foo", arg: 2, scope: <0x5643e11c9740>, file: <0x5643e11c6ec0>, line: 8, type: <0x5643e11ca8e0>, flags: DIFlagPublic | DIFlagFwdDecl, align: 8)
```
The printTree/dumpTree functions can give you:
```
<0x5643e1166888> = !DILocalVariable(name: "foo", arg: 2, scope: <0x5643e11c9740>, file: <0x5643e11c6ec0>, line: 8, type: <0x5643e11ca8e0>, flags: DIFlagPublic | DIFlagFwdDecl, align: 8)
  <0x5643e11c9740> = distinct !DISubprogram(scope: null, spFlags: 0)
  <0x5643e11c6ec0> = distinct !DIFile(filename: "file.c", directory: "/path/to/dir")
  <0x5643e11ca8e0> = distinct !DIDerivedType(tag: DW_TAG_pointer_type, baseType: <0x5643e11668d8>, size: 1, align: 2)
    <0x5643e11668d8> = !DIBasicType(tag: DW_TAG_unspecified_type, name: "basictype")
```
This is useful when working in a debugger, where printing the
whole module just to see all the MDNodes can be too expensive.

Differential Revision: https://reviews.llvm.org/D110113
2021-10-02 21:19:52 -07:00
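
Typical use from a debugger or ad-hoc debugging code would be along these lines (illustrative; `N` is assumed to be some `MDNode *` already in hand):

```
#include "llvm/IR/Metadata.h"
#include "llvm/Support/raw_ostream.h"

void dumpVariableNode(const llvm::MDNode *N) {
  // One-line form, as before:
  N->print(llvm::errs());
  // New tree form, expanding operands recursively as shown above:
  N->printTree(llvm::errs());
}
```
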