On Windows, we won't go into the `host_os != "win"` block, so `defines`
won't have been defined, and we'll run into an undefined identifier
error when we later try to append to it. Unconditionally define it at
the start and append to it everywhere else.
Differential Revision: https://reviews.llvm.org/D55617
llvm-svn: 348993
Summary:
In addition to knowing that an instruction has changed, it's also useful to
know when it's about to change. For example, a listener might print the
instruction so you can track the changes in a debug log, remove it from some
queue while it's being worked on, or change several instructions as a single
transaction and act on all the changes at once.
Added changingInstr() to all existing uses of changedInstr().
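A sketch of the intended pairing (GISelChangeObserver and MachineInstr are
the real classes; the helper function is illustrative):

  #include "llvm/CodeGen/GlobalISel/GISelChangeObserver.h"
  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/MC/MCInstrDesc.h"
  using namespace llvm;

  // Bracket an in-place mutation with before/after callbacks so a
  // listener can log, dequeue, or batch the change as one transaction.
  static void mutateOpcode(GISelChangeObserver &Observer, MachineInstr &MI,
                           const MCInstrDesc &NewDesc) {
    Observer.changingInstr(MI); // about to change
    MI.setDesc(NewDesc);
    Observer.changedInstr(MI);  // change complete
  }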
Reviewers: aditya_nandakumar
Reviewed By: aditya_nandakumar
Subscribers: rovka, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D55623
llvm-svn: 348992
The previous assertion was relatively easy to trigger, and will likely
remain easy to trigger going forward, since EmitDelegateCallArg is
relatively popular.
This cleanly diagnoses PR28299 while I work on a proper solution.
llvm-svn: 348991
When loops are deleted, we don't keep track of variables modified inside
the loops, so the debug info will contain the wrong value for these.
e.g.

  int b() {
    int i;
    for (i = 0; i < 2; i++)
      ;
    patatino();
    return a;
  }
  int main() { b(); }

Stopped in the debugger after the loop has been deleted, we get a stale
value:

  -> 6   patatino();
     7   return a;
     8 }
     9 int main() { b(); }
  (lldb) frame var i
  (int) i = 0
Instead, we mark these values as unavailable by inserting a
@llvm.dbg.value(undef, ...) to make sure we don't end up printing an
incorrect value in the debugger. We could consider doing something fancier,
e.g. for constants, in the future.
PR39868.
rdar://problem/46418795
Differential Revision: https://reviews.llvm.org/D55299
llvm-svn: 348988
This fixes https://bugs.llvm.org/show_bug.cgi?id=39908.
The evaluateGEPOffsetExpression() function simplifies GEP offsets for
use in comparisons against zero, basically by converting X*Scale+Offset==0
to X+Offset/Scale==0 if Scale divides Offset. However, before this is done,
Offset is masked down to the pointer size. This produces incorrect
results for negative Offsets, because we basically end up dividing the
32-bit offset *zero* extended to 64 bits (rather than sign extended).
Fix this by explicitly sign extending the truncated value.
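The zext/sext difference, sketched with APInt (the values are
illustrative; this is not the patch's code):

  #include "llvm/ADT/APInt.h"
  using namespace llvm;

  void sketch() {
    // A negative offset, e.g. -8, held in a 64-bit APInt.
    APInt Offset(64, uint64_t(-8), /*isSigned=*/true);
    // Masked down to a 32-bit pointer width.
    APInt Trunc = Offset.trunc(32);
    APInt Wrong = Trunc.zext(64); // 0x00000000FFFFFFF8, a large positive value
    APInt Right = Trunc.sext(64); // 0xFFFFFFFFFFFFFFF8, still -8
    (void)Wrong; (void)Right;
  }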
Differential Revision: https://reviews.llvm.org/D55449
llvm-svn: 348987
Summary:
In both Elf{32,64}_Phdr, the field `Elf{32,64}_Word p_type` is uint32_t.
Also reorder the fields to follow Elf64_Phdr (whose field order differs
from Elf32_Phdr's, though the two are otherwise quite similar).
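For reference, the two program-header layouts from the ELF spec, sketched
with fixed-width types (the spec spells these Elf32_Word, Elf64_Xword,
etc.):

  #include <cstdint>

  struct Elf32_Phdr {   // p_flags near the end
    uint32_t p_type;
    uint32_t p_offset;
    uint32_t p_vaddr;
    uint32_t p_paddr;
    uint32_t p_filesz;
    uint32_t p_memsz;
    uint32_t p_flags;
    uint32_t p_align;
  };

  struct Elf64_Phdr {   // same fields, p_flags moves up to second
    uint32_t p_type;    // uint32_t in both variants
    uint32_t p_flags;
    uint64_t p_offset;
    uint64_t p_vaddr;
    uint64_t p_paddr;
    uint64_t p_filesz;
    uint64_t p_memsz;
    uint64_t p_align;
  };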
Reviewers: rupprecht, jhenderson, jakehehrlich, alexshap, espindola
Reviewed By: rupprecht
Subscribers: emaste, arichardson, llvm-commits
Differential Revision: https://reviews.llvm.org/D55618
llvm-svn: 348985
Summary:
The TLS_SLOT_TSAN slot is available starting in N, but its location (8)
is incompatible with the proposed solution for implementing ELF TLS on
Android (i.e. bump ARM/AArch64 alignment to reserve an 8-word TCB).
Instead, starting in Q, Bionic replaced TLS_SLOT_DLERROR(6) with
TLS_SLOT_SANITIZER(6). Switch compiler-rt to the new slot.
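A sketch of the slot assignments described above (values taken from this
summary, not from Bionic's headers):

  enum AndroidTlsSlot {
    TLS_SLOT_DLERROR = 6,   // pre-Q
    TLS_SLOT_SANITIZER = 6, // Q and later, reuses the dlerror slot
    TLS_SLOT_TSAN = 8,      // N..P; clashes with the 8-word TCB reservation
  };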
Reviewers: eugenis, srhines, enh
Reviewed By: eugenis
Subscribers: ruiu, srhines, kubamracek, javed.absar, kristof.beyls, delcypher, llvm-commits, #sanitizers
Differential Revision: https://reviews.llvm.org/D55581
llvm-svn: 348984
Summary:
The change is needed to support ELF TLS in Android. See D55581 for the
same change in compiler-rt.
Reviewers: srhines, eugenis
Reviewed By: eugenis
Subscribers: srhines, llvm-commits
Differential Revision: https://reviews.llvm.org/D55592
llvm-svn: 348983
As mentioned in D55604, there are 2 bugs here:
1. The new pass manager is speculating wildly by default.
2. The old pass manager is not converting this to funnel shift.
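For context, the kind of source pattern involved is a rotate like the
following, which we want lowered to a funnel-shift intrinsic rather than
speculated or left as separate shifts (a generic example, not the commit's
test case):

  #include <cstdint>

  // Classic UB-free rotate-left; ideally recognized as llvm.fshl.
  uint32_t rotl32(uint32_t X, uint32_t S) {
    return (X << (S & 31)) | (X >> ((32 - S) & 31));
  }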
llvm-svn: 348980
Summary:
Add a check that TLS_SLOT_TSAN / TLS_SLOT_SANITIZER, whichever
android_get_tls_slot is using, does not conflict with TLS_SLOT_DLERROR.
Reviewers: rprichard, vitalybuka
Subscribers: srhines, kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D55587
llvm-svn: 348979
__builtin_cpu_supports and __builtin_cpu_is use information in __cpu_model to decide cpu features. Before this change, __cpu_model was not declared as dso_local, so the generated code looks up its address in the GOT when reading it. This makes it impossible to use these functions in an ifunc resolver, because at that point GOT entries have not been relocated yet. This change makes __cpu_model dso_local.
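A sketch of the ifunc pattern this enables (the function names are
hypothetical; the builtins and the ifunc attribute are real, ELF-only):

  // The resolver runs during relocation, before GOT entries are usable,
  // so reading __cpu_model must not go through the GOT.
  static void foo_avx2() { /* AVX2 path */ }
  static void foo_generic() { /* fallback */ }

  extern "C" void (*resolve_foo())() {
    __builtin_cpu_init();
    return __builtin_cpu_supports("avx2") ? &foo_avx2 : &foo_generic;
  }

  void foo() __attribute__((ifunc("resolve_foo")));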
Differential Revision: https://reviews.llvm.org/D53850
llvm-svn: 348978
Summary:
Currently the Clang AST doesn't store information about how the callee of a CallExpr was found, specifically whether it was found using ADL.
However, this information is invaluable to tooling. Consider a tool which renames usages of a function. If the original CallExpr was formed using ADL, then the tooling may need to additionally qualify the replacement.
Without information about how the callee was found, the tooling is left scratching its head. Additionally, we want to be able to match ADL calls as quickly as possible, which means avoiding computing the answer on the fly.
This patch changes `CallExpr` to store whether its callee was found using ADL. It does not change the size of any AST nodes.
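For illustration, the kind of call whose lookup result matters to a
renaming tool (names hypothetical):

  namespace lib {
    struct Widget {};
    void process(Widget) {}
  }

  int main() {
    lib::Widget W;
    process(W); // unqualified; the callee lib::process is found via ADL
    // A tool rewriting this call to a function outside namespace lib
    // must emit a qualified name, since ADL would no longer find it.
  }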
Reviewers: fowles, rsmith, klimek, shafik
Reviewed By: rsmith
Subscribers: aaron.ballman, riccibruno, calabrese, titus, cfe-commits
Differential Revision: https://reviews.llvm.org/D55534
llvm-svn: 348977
Summary:
There's little of interest that can be done to an already-erased instruction.
You can't inspect it, write it to a debug log, etc. It ought to be a notification
that we're about to erase it. Rename the function to clarify the timing of the
event and reflect current usage.
Also fixed one case where we were trying to print an erased instruction.
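A sketch of the intended ordering after the rename (the observer API is
real; the helper is illustrative):

  #include "llvm/CodeGen/GlobalISel/GISelChangeObserver.h"
  #include "llvm/CodeGen/MachineInstr.h"
  using namespace llvm;

  // Notify observers while the instruction can still be inspected and
  // printed, then erase it.
  static void eraseAndNotify(GISelChangeObserver &Observer, MachineInstr &MI) {
    Observer.erasingInstr(MI); // about to erase
    MI.eraseFromParent();
  }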
Reviewers: aditya_nandakumar
Reviewed By: aditya_nandakumar
Subscribers: rovka, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D55611
llvm-svn: 348976
MULX has somewhat improved register allocation constraints compared to the legacy MUL instruction. Both output registers are encoded instead of fixed to EAX/EDX, but EDX is used as input. It also doesn't touch flags. Unfortunately, the encoding is longer.
Preferring it whenever BMI2 is enabled is probably not optimal. Choosing it should somehow be a function of register allocation constraints, like converting adds to three-address form. gcc and icc definitely don't pick MULX by default. Not sure what rules, if any, they have for using it.
Differential Revision: https://reviews.llvm.org/D55565
llvm-svn: 348975
A future patch may stop using MULX by default so use MIR to ensure we're always testing MULX.
Add the 32-bit case that we couldn't do in the 64-bit mode IR test due to it being promoted to a 64-bit mul.
llvm-svn: 348972
Updated the annotate-kernel-features pass to support the propagation of the uniform-work-group attribute from the kernel to the called functions. Once this pass is run, all kernels, even the ones which initially did not have the attribute, will be able to indicate whether or not they have a uniform work group size, depending on the value of the attribute.
Differential Revision: https://reviews.llvm.org/D50200
llvm-svn: 348971
Use the replacement execute-once threading support in LLVM for PPC64le. It
seems that GCC does not define `__ppc__`, so we would actually call out to
the C++ runtime there, which is not what the current code intended. Check
both `__ppc__` and `__PPC__`. This avoids the need for checking the
endianness.
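A sketch of the guard (the macro spellings come from this message; the
USE_* name is made up):

  // GCC reportedly defines only __PPC__; Clang also defines __ppc__.
  // Checking both selects the native execute-once path on either
  // compiler, regardless of endianness.
  #if defined(__ppc__) || defined(__PPC__)
  #define USE_NATIVE_EXECUTE_ONCE 1
  #else
  #define USE_NATIVE_EXECUTE_ONCE 0
  #endif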
Thanks to nemanjai for the hint about GCC's behaviour and the fact that the
reviewed condition could be simplified.
Original patch by Sarvesh Tamba!
llvm-svn: 348970
The __builtin_unpredictable implementation is confused by the implicit
casts that C++ inserts. This patch strips those off so that if/switch
statements now work with it in C++.
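For example (a generic illustration; the builtin is Clang's):

  // In C++ the bool condition is implicitly converted to the builtin's
  // integer parameter type; that cast previously hid the hint from the
  // if/switch lowering.
  int classify(int X) {
    if (__builtin_unpredictable(X > 0))
      return 1;
    return 0;
  }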
Change-Id: I73c3bf4f1775cd906703880944f4fcdc29fffb0a
llvm-svn: 348969
Doesn't handle varargs and other fun things, but it's a start. (It also
doesn't print these strictly as valid C++ when it's a pointer to
function: it prints "void(int)*" instead of "void (*)(int)".)
llvm-svn: 348965
Continue to present HSA metadata as YAML in ASM and when output by tools
(e.g. llvm-readobj), but encode it in MessagePack in the code object.
Differential Revision: https://reviews.llvm.org/D48179
llvm-svn: 348963
This lays the foundation for dumping types not referenced by DW_AT_type
attributes (in the near-term, that'll be DW_AT_containing_type for a
DW_TAG_ptr_to_member_type - in the future, potentially dumping the
pretty printed name next to the DW_TAG for the type, rather than only
when the type is referenced from elsewhere).
llvm-svn: 348961
I'm hoping we can just replace SETCC_CARRY with SBB. This is another step towards that.
I've explicitly used zero as the input to the setcc to avoid a false dependency that we've had with the SETCC_CARRY. I changed one of the patterns that used NEG to instead use an explicit compare with 0 on the LHS. We needed the zero anyway to avoid the false dependency. The negate would clobber its input register; by using a CMP we can avoid that, which could be useful.
Differential Revision: https://reviews.llvm.org/D55414
llvm-svn: 348959
Indices for getelementptr can be signed, so we should use
getMinSignedBits instead of getActiveBits here. The function later calls
getSExtValue to get the int64_t value, which also checks
getMinSignedBits.
This fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=11647.
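The difference, sketched with APInt (values illustrative):

  #include "llvm/ADT/APInt.h"
  using namespace llvm;

  void sketch() {
    // A 128-bit GEP index with value 2^63: positive, but too wide to be
    // read back via getSExtValue().
    APInt Idx = APInt::getOneBitSet(128, 63);
    unsigned Active = Idx.getActiveBits();    // 64: an "<= 64" gate passes
    unsigned Signed = Idx.getMinSignedBits(); // 65: getSExtValue() asserts
    (void)Active; (void)Signed;
  }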
Reviewers: mssimpso, efriedma, davide
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D55536
llvm-svn: 348957
This patch introduces a generic function to determine whether a given vector type is known to be a splat value for the specified demanded elements, recursing up the DAG looking for BUILD_VECTOR or VECTOR_SHUFFLE splat patterns.
It also keeps track of the elements that are known to be UNDEF - it returns true if all the demanded elements are UNDEF (as this may be useful under some circumstances), so this needs to be handled by the caller.
A wrapper variant is also provided that doesn't take the DemandedElts or UndefElts arguments, for cases where we just want to know if the SDValue is a splat or not (with/without UNDEFs).
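Roughly, the interface described (a declaration sketch paraphrased from
this summary; the functions live on llvm::SelectionDAG):

  // Returns true if V is known to splat one value across DemandedElts.
  // UndefElts reports the elements known to be UNDEF; note this also
  // returns true when all demanded elements are UNDEF, so callers must
  // handle that case themselves.
  bool isSplatValue(SDValue V, const APInt &DemandedElts, APInt &UndefElts);

  // Wrapper: just "is this a splat?", with or without allowing UNDEFs.
  bool isSplatValue(SDValue V, bool AllowUndefs = false);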
I had hoped to completely remove the X86 local version of this function, but I'm seeing some regressions in shift/rotate codegen that will take a little longer to fix and I hope to get this in sooner so I can continue work on PR38243 which needs more capable splat detection.
Differential Revision: https://reviews.llvm.org/D55426
llvm-svn: 348953
If a module has function references but no functions themselves, we may
end up never calling runOnMachineFunction and therefore would never
initialize the nvptxSubtarget field, which would eventually cause a
crash.
Instead of relying on nvptxSubtarget being initialized by
one of the methods, retrieve subtarget info directly.
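In essence, the fix is to query the subtarget from the MachineFunction at
hand instead of a cached field (a sketch; getSubtarget is the real
accessor):

  #include "llvm/CodeGen/MachineFunction.h"
  #include "llvm/CodeGen/TargetSubtargetInfo.h"
  using namespace llvm;

  // A field assigned in runOnMachineFunction() may never be set if no
  // function is ever processed; this accessor is always valid.
  static const TargetSubtargetInfo &subtargetFor(const MachineFunction &MF) {
    return MF.getSubtarget();
  }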
Differential Revision: https://reviews.llvm.org/D55580
llvm-svn: 348952
CallGraph previously would just show the normal name of a function,
which gets really confusing when using it on large C++ projects. This
patch switches the printName call to a printQualifiedName, so that the
namespaces are included.
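The change in miniature (both printers are real NamedDecl methods; the
function is illustrative):

  #include "clang/AST/Decl.h"
  #include "llvm/Support/raw_ostream.h"

  void printNode(const clang::NamedDecl *ND) {
    // Before: ND->printName(llvm::outs()); prints e.g. "run".
    // After: prints e.g. "proj::detail::run", unambiguous in large
    // projects.
    ND->printQualifiedName(llvm::outs());
  }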
Change-Id: Ie086d863f6b2251be92109ea1b0946825b28b49a
llvm-svn: 348950
This extends the code that handles 16-bit add promotion to form LEA to also allow 8-bit adds.
That allows us to combine add ops with register moves and save some instructions. This is
another step towards allowing add truncation in generic DAGCombiner (see D54640).
Differential Revision: https://reviews.llvm.org/D55494
llvm-svn: 348946
Version.inc.in processing has a potentially interesting part which I've punted
on for now (LLD_REVISION and LLD_REPOSITORY are set to empty strings for now).
lld now builds in the gn build, but no symlinks to it are created yet, so
it can't meaningfully be run.
Differential Revision: https://reviews.llvm.org/D55593
llvm-svn: 348945
When multiple loop transformations are defined in a loop's metadata, their order of execution is defined by the order of their respective passes in the pass pipeline. For instance,

  #pragma clang loop unroll_and_jam(enable)
  #pragma clang loop distribute(enable)

is the same as

  #pragma clang loop distribute(enable)
  #pragma clang loop unroll_and_jam(enable)

and will try to loop-distribute before Unroll-And-Jam because the LoopDistribute pass is scheduled after the UnrollAndJam pass. UnrollAndJamPass only supports one inner loop, i.e. it will necessarily fail after loop distribution. It is not possible to specify another execution order. Also, the order of passes in the pipeline is subject to change between versions of LLVM, optimization options, and which pass manager is used.
This patch adds 'followup' attributes to various loop transformation passes. These attributes define which attributes the resulting loop of a transformation should have. For instance,
  !0 = !{!0, !1, !2}
  !1 = !{!"llvm.loop.unroll_and_jam.enable"}
  !2 = !{!"llvm.loop.unroll_and_jam.followup_inner", !3}
  !3 = !{!"llvm.loop.distribute.enable"}

defines a loop ID (!0) to be unrolled-and-jammed (!1) and then the attribute !3 to be added to the jammed inner loop (via !2), which instructs the inner loop to be distributed.
Currently, in both pass managers, pass execution is in a fixed order and UnrollAndJamPass will not execute again after LoopDistribute. We hope to fix this in the future by allowing pass managers to run passes until a fixpoint is reached, by using Polly to perform these transformations, or by adding a loop transformation pass which takes the order issue into account.
For mandatory/forced transformations (e.g. those declared by #pragma omp simd), the user must be notified when a transformation could not be performed. The responsible pass cannot itself emit such a warning, because the transformation might be 'hidden' in a followup attribute when it is executed, or the pass might not be present in the pipeline at all. For this reason, this patch introduces a WarnMissedTransformations pass to warn about orphaned transformations.
Since this changes the user-visible diagnostic message when a transformation is applied, two test cases in the clang repository need to be updated.
To ensure that no other transformation is executed before the intended one, the attribute `llvm.loop.disable_nonforced` can be added, which disables transformation heuristics before the intended transformation is applied. E.g. it would be surprising if a loop were distributed before a #pragma unroll_and_jam is applied.
With more supported code transformations (loop fusion, interchange, stripmining, offloading, etc.), transformations can be used as building blocks for more complex transformations (e.g. stripmining+stripmining+interchange -> tiling).
Reviewed By: hfinkel, dmgreen
Differential Revision: https://reviews.llvm.org/D49281
Differential Revision: https://reviews.llvm.org/D55288
llvm-svn: 348944