Summary:
Add an SMLoc to CodeInit that records the source line it originated from.
This allows tablegen to point precisely at portions of code when reporting
errors within the CodeInit. For example, in the upcoming GlobalISel
combiner, it can report undefined expansions and point at the instance of
the expansion. This is achieved using something like:
SMLoc::getFromPointer(SMLoc::getPointer() +
(StringRef - CodeInit::getValue()))
The location is lost when producing a CodeInit by string concatenation, so
a fallback SMLoc is required (e.g. Record::getLoc()), but that's pretty
rare for CodeInits.
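For illustration, a minimal sketch of how an error location inside the code string can be derived, assuming the recorded location is exposed via a CodeInit::getLoc() accessor and Slice is a StringRef pointing into CodeInit::getValue():
```
#include "llvm/Support/SMLoc.h"
#include "llvm/TableGen/Record.h"
#include <cstddef>
using namespace llvm;

// Derive a precise SMLoc for a slice of a CodeInit's text, falling back to a
// caller-provided location (e.g. Record::getLoc()) when the CodeInit was built
// by string concatenation and carries no location of its own.
static SMLoc locateInCode(const CodeInit *CI, StringRef Slice, SMLoc Fallback) {
  SMLoc CodeLoc = CI->getLoc(); // assumed accessor name
  if (!CodeLoc.isValid())
    return Fallback;
  ptrdiff_t Offset = Slice.data() - CI->getValue().data();
  return SMLoc::getFromPointer(CodeLoc.getPointer() + Offset);
}
```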
There's a reasonable case for extending this tracking to a couple of other Init
objects; for example, StringInits are often parsed and it would be good to
point inside the string when reporting errors about them. However, location
tracking also harms de-duplication. This is fine for CodeInit, where there are
only a few hundred of them (~160 for X86), and it may be worth it for
StringInit (~86k rising to ~1.9M, roughly a 15MB increase, for X86).
However, origin tracking would be a _terrible_ idea for IntInit, BitInit,
and UnsetInit. I haven't measured any of those three, but for BitInit it would
most likely mean growing the current 2 BitInit values into something on the
order of billions.
Reviewers: volkan, aditya_nandakumar, bogner, paquette, aemerson
Reviewed By: paquette
Subscribers: javed.absar, kristof.beyls, dexonsmith, llvm-commits, kristina
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58141
llvm-svn: 355245
Add statistics for abstract origins, function, variable and parameter
locations; break the 'variable' counts down into variables and
parameters. Also update call site counting to check for
DW_AT_call_{file,line} in addition to DW_TAG_call_site.
Differential revision: https://reviews.llvm.org/D58849
llvm-svn: 355243
IntrArgMemOnly implies that only memory pointed to by pointer-typed arguments will be accessed. But these intrinsics allow you to pass null as the pointer argument and put the full address into the index argument. Other passes won't be able to understand this.
A colleague found that ISPC was creating gathers like this, and dead store elimination then removed some stores because it didn't understand what the gather was doing, since the pointer argument was null.
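For illustration only (this is not ISPC's actual output), an AVX2 intrinsic call of the problematic shape: the base pointer argument is null and the complete addresses live in the index operand, so reasoning that only pointer arguments are dereferenced draws the wrong conclusion:
```
#include <immintrin.h>
#include <cstdint>

// Gather four 64-bit values by absolute address: null base, scale 1, and the
// full pointers packed into the index vector. Attribute-based reasoning that
// only inspects the pointer argument sees null and wrongly concludes that no
// argument memory is read.
__m256i gather_by_absolute_address(const int64_t *p0, const int64_t *p1,
                                   const int64_t *p2, const int64_t *p3) {
  __m256i addrs = _mm256_set_epi64x((int64_t)p3, (int64_t)p2,
                                    (int64_t)p1, (int64_t)p0);
  return _mm256_i64gather_epi64((const long long *)nullptr, addrs, 1);
}
```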
Differential Revision: https://reviews.llvm.org/D58805
llvm-svn: 355228
This was sometimes causing clang or llvm-mc to crash, and in other
cases could emit a bogus DWARF line-table header. I did an interim
patch in r352541; this patch should be a cleaner and more complete
fix, and retains the test.
Addresses PR40538.
Differential Revision: https://reviews.llvm.org/D58750
llvm-svn: 355226
Previously we had build_vector PatFrags that called ISD::isBuildVectorAllZeros/Ones. Internally, ISD::isBuildVectorAllZeros/Ones look through bitcasts, but we aren't able to take advantage of that in isel. Instead, we have to canonicalize the types of the all-zeros/ones build_vectors and insert bitcasts. Then we have to pattern match those exact bitcasts.
By emitting specific matchers for these 2 nodes, we can make isel look through any bitcasts without needing to explicitly match them. We should also be able to remove the canonicalization to vXi32 from lowering, but I've left that for a follow-up.
This removes something like 40,000 bytes from the X86 isel table.
Differential Revision: https://reviews.llvm.org/D58595
llvm-svn: 355224
We have two sources of known bits:
1. For adds, leading ones of either operand are preserved. For subs,
leading zeros of the LHS and leading ones of the RHS become leading zeros
in the result (a sketch of this for usub.sat follows below).
2. The saturating math is a select between the add/sub and an all-ones/
zero value. As such we can carry out the add/sub known bits
calculation, and only preserve the known one/zero bits respectively.
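As a rough sketch of source 1 for usub.sat (not the in-tree implementation; the real code also folds in the generic add/sub known-bits computation from source 2):
```
#include "llvm/Support/KnownBits.h"
#include <algorithm>
using namespace llvm;

// Known bits for usub.sat(LHS, RHS) from the "leading bits" argument alone:
// leading zeros of the LHS bound the result from above, leading ones of the
// RHS bound the non-saturated difference from above, and the saturated case
// is 0, so the larger of the two counts gives known leading zeros.
static KnownBits knownBitsForUSubSat(const KnownBits &LHS, const KnownBits &RHS) {
  KnownBits Known(LHS.getBitWidth());
  unsigned LeadingZeros = std::max(LHS.countMinLeadingZeros(),
                                   RHS.countMinLeadingOnes());
  Known.Zero.setHighBits(LeadingZeros);
  return Known;
}
```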
Differential Revision: https://reviews.llvm.org/D58329
llvm-svn: 355223
I'm assuming that the NaN propagation logic in InstructionSimplify's handling of fadd and fsub is correct, and applying the same to atomicrmw.
Differential Revision: https://reviews.llvm.org/D58836
llvm-svn: 355222
This lets us detect file size overflows when creating a 64-bit binary on
a 32-bit machine.
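Schematically (an illustrative check, not the patch itself): once offsets are tracked in a 64-bit type, an output too large for a 32-bit host can be diagnosed instead of silently wrapping:
```
#include <cstddef>
#include <cstdint>
#include <limits>

// With file offsets kept as uint64_t, a 64-bit output that does not fit in the
// 32-bit host's size_t can be reported as an error rather than truncated.
inline bool fitsInHostAddressSpace(uint64_t FileSize) {
  return FileSize <= std::numeric_limits<std::size_t>::max();
}
```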
Differential Revision: https://reviews.llvm.org/D58840
llvm-svn: 355218
This patch fixes an issue where we would compute an unnecessarily small alignment during scalar promotion when no store is guaranteed to execute, but we've proven load speculation safety. Since speculating a load requires proving the existing alignment is valid at the new location (see Loads.cpp), we can use the alignment fact from the load.
For non-atomics, this is a performance problem. For atomics, this is a correctness issue, though an *incredibly* rare one to see in practice. For atomics, we might not be able to lower an improperly aligned load or store (e.g. an i32 with align 1). If such an instruction makes it all the way to codegen, we *may* fail to codegen the operation, or we may simply generate a slow call to a library function. The part that makes this super hard to see in practice is that the memory location actually *is* well aligned, and instcombine knows that. So, to see a failure, you have to a) hit the bug in LICM, b) somehow hit a depth limit in InstCombine/ValueTracking to avoid fixing the alignment, and c) then have generated an instruction which fails codegen rather than simply emitting a slow libcall. All around, pretty hard to hit.
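A minimal sketch of the idea (not the in-tree LICM code; the names are illustrative): when promotion is justified by a speculable load rather than a guaranteed store, the load's alignment is already known to hold at the insertion point, so it can be folded into the promoted access's alignment:
```
#include "llvm/IR/Instructions.h"
#include <algorithm>
using namespace llvm;

// Pick the alignment for the promoted load/store pair. SomeLoad is a load of
// the promoted location whose speculation safety has been proven, so its
// alignment fact holds at the insertion point and must not be dropped.
static unsigned promotedAlignment(unsigned AlignmentFromStores,
                                  const LoadInst *SomeLoad) {
  unsigned Alignment = AlignmentFromStores;
  if (SomeLoad)
    Alignment = std::max(Alignment, SomeLoad->getAlignment());
  return Alignment;
}
```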
Differential Revision: https://reviews.llvm.org/D58809
llvm-svn: 355217
An idempotent atomicrmw is one that does not change memory in the process of execution. We have already added handling for the various integer operations; this patch extends the same handling to floating point operations which were recently added to IR.
Note: At the moment, we canonicalize idempotent fsub to fadd when ordering requirements prevent us from using a load. As discussed in the review, I will be replacing this with canonicalizing both floating point ops to integer ops in the near future.
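A sketch of the idempotence check for the FP cases, assuming the usual identities x + (-0.0) == x and x - (+0.0) == x (which hold for every x, including both signed zeros):
```
#include "llvm/IR/Constants.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Return true if the atomicrmw cannot change the stored value.
static bool isIdempotentFPRMW(const AtomicRMWInst &RMWI) {
  const auto *C = dyn_cast<ConstantFP>(RMWI.getValOperand());
  if (!C)
    return false;
  switch (RMWI.getOperation()) {
  case AtomicRMWInst::FAdd: // x + -0.0 == x
    return C->isZero() && C->isNegative();
  case AtomicRMWInst::FSub: // x - +0.0 == x
    return C->isZero() && !C->isNegative();
  default:
    return false;
  }
}
```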
Differential Revision: https://reviews.llvm.org/D58251
llvm-svn: 355210
Summary:
This patch will obtain the section name for symbols that refer to a section. Prior to this patch the Name field for STT_SECTION symbols was blank; now it is populated.
Before:
```
Symbol table '.symtab' contains 6 entries:
   Num:    Value          Size Type    Bind   Vis      Ndx Name
     0: 0000000000000000     0 NOTYPE  LOCAL  DEFAULT  UND
     1: 0000000000000000     0 SECTION LOCAL  DEFAULT    1
     2: 0000000000000000     0 SECTION LOCAL  DEFAULT    3
     3: 0000000000000000     0 SECTION LOCAL  DEFAULT    4
     4: 0000000000000000     0 NOTYPE  GLOBAL DEFAULT  UND _GLOBAL_OFFSET_TABLE_
     5: 0000000000000000     0 TLS     GLOBAL DEFAULT  UND sym
```
With this patch:
```
Symbol table '.symtab' contains 6 entries:
   Num:    Value          Size Type    Bind   Vis      Ndx Name
     0: 0000000000000000     0 NOTYPE  LOCAL  DEFAULT  UND
     1: 0000000000000000     0 SECTION LOCAL  DEFAULT    1 .text
     2: 0000000000000000     0 SECTION LOCAL  DEFAULT    3 .data
     3: 0000000000000000     0 SECTION LOCAL  DEFAULT    4 .bss
     4: 0000000000000000     0 NOTYPE  GLOBAL DEFAULT  UND _GLOBAL_OFFSET_TABLE_
     5: 0000000000000000     0 TLS     GLOBAL DEFAULT  UND sym
```
This fixes PR40788
Reviewers: jhenderson, rupprecht, espindola
Reviewed By: rupprecht
Subscribers: emaste, javed.absar, arichardson, MaskRay, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58796
llvm-svn: 355207
GCC correctly moans that PlainCFGBuilder::isExternalDef(llvm::Value*) and
StackSafetyDataFlowAnalysis::verifyFixedPoint() are defined but not used
in Release builds. Hide them behind 'ifndef NDEBUG'.
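The pattern, with illustrative names rather than the functions above:
```
#include <cassert>

namespace {
#ifndef NDEBUG
// Only referenced from the assert below; guarding the definition the same way
// keeps Release (-DNDEBUG) builds free of unused-function warnings.
bool invariantHolds(int X) { return X >= 0; }
#endif
} // namespace

void update(int &Counter) {
  ++Counter;
  assert(invariantHolds(Counter) && "counter must stay non-negative");
}
```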
llvm-svn: 355205
The new addressing mode added for the v8.2A FP16 instructions uses bit 8 of the
immediate to encode the sign of the offset, like the other FP loads/stores, so
it needs to be treated the same way.
Differential revision: https://reviews.llvm.org/D58816
llvm-svn: 355201
This function was not checking for the condition code variants which are
undefined if either input is NaN, so we were missing selection of the VSEL
instruction in some cases when using -fno-honor-nans or -ffast-math.
Differential revision: https://reviews.llvm.org/D58812
llvm-svn: 355199
This is for tweaking SHT_SYMTAB sections.
Their sh_info usually contains the number of symbols + 1.
But for creating invalid inputs for test cases it would be convenient
to allow explicitly overriding this field from YAML.
Differential revision: https://reviews.llvm.org/D58779
llvm-svn: 355193
There was a time when we couldn't dump target-specific flags such as
arm-sbrel, so the tests didn't check for them. We can now be more
specific in our tests.
llvm-svn: 355189
This is a small addition to arithmetic operations that improves
expressiveness of the language.
Differential Revision: https://reviews.llvm.org/D58775
llvm-svn: 355187
This patch allows all forms of values for options to be used at the end
of a group. With the fix, it is possible to follow the way GNU binutils
tools handle grouping options better. For example, the -j option can be
used with objdump in any of the following ways:
$ objdump -d -j .text a.o
$ objdump -d -j.text a.o
$ objdump -dj .text a.o
$ objdump -dj.text a.o
Differential Revision: https://reviews.llvm.org/D58711
llvm-svn: 355185
If an option, which requires a value, has a `cl::Grouping` formatting
modifier, it works well as long as it is used at the end of a group,
or as a separate argument. However, if the option appears accidentally
in the middle of a group, the program just crashes. This patch prints
an error message instead.
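For context, a hypothetical declaration of this shape (the option name and descriptions are illustrative, not taken from a real tool):
```
#include "llvm/Support/CommandLine.h"
using namespace llvm;

// A single-letter option that requires a value and may be grouped with other
// single-letter flags. Using it at the end of a group ("-dj .text") works;
// putting it in the middle of a group now produces an error instead of a crash.
static cl::opt<std::string> SectionName("j", cl::desc("Display the specified section"),
                                        cl::value_desc("section"), cl::Grouping);
```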
Differential Revision: https://reviews.llvm.org/D58499
llvm-svn: 355184
These were not recognized as potential atomics by the memory legalizer.
The test was passing not because the legalizer did the right thing, but
because it skipped all of these instructions. When I fixed the DS
description, the test started to fail because the region address had
changed from 4 to 2 a while ago.
Differential Revision: https://reviews.llvm.org/D58802
llvm-svn: 355179
Unsigned mul high for MIPS32 is selected into two PseudoInstructions,
PseudoMULTu and PseudoMFHI, which use the accumulator register class ACC64 for
some of their operands. Registers in this class have the appropriate hi and lo
registers as subregisters: $lo0 and $hi0 are subregisters of $ac0, etc.
The mul instruction implicitly defines $lo0 and $hi0 according to MipsInstrInfo.td.
In functions where both mul and PseudoMULTu are present, fastRegisterAllocator
will "run out of registers during register allocation" because
'calcSpillCost' for $ac0 returns spillImpossible, since the subregisters
$lo0 and $hi0 of $ac0 are reserved by the mul instruction above. A solution is
to mark the implicit-defs of $lo0 and $hi0 as dead on the mul instruction.
Differential Revision: https://reviews.llvm.org/D58715
llvm-svn: 355178
Summary:
ConstIntInfoVec contains elements extracted from the previous function.
In the new PM, releaseMemory() is not called, and the dangling elements can
cause a segfault in findConstantInsertionPoint.
Rename releaseMemory() to cleanup() to convey that it is
mandatory, and call cleanup() in ConstantHoistingPass::runImpl to fix
this.
Reviewers: ormris, zzheng, dmgreen, wmi
Reviewed By: ormris, wmi
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58589
llvm-svn: 355174
Subtarget features are stored in a std::bitset that has been subclassed. There is a special constructor to allow the tablegen files to provide a list of bits to initialize the std::bitset to. This constructor isn't constexpr and std::bitset doesn't support many constexpr operations either. This results in a static global constructor being used to initialize the feature bitsets in these files at startup.
To fix this I've introduced a new FeatureBitArray class that holds three 64-bit values representing the initial bit values and taught tablegen to emit hex constants for them based on the feature enum values. This makes the tablegen files less readable than they were before. I can add the list of features back as a comment if we think that's important.
I've added a method to convert from this class into the std::bitset subclass we had before. I considered making the new FeatureBitArray class just implement the std::bitset interface we need instead, but thought I'd see how others felt about that first.
I've simplified the interfaces to SetImpliedBits and ClearImpliedBits a little to minimize the number of times we need to convert to the bitset.
This removes about 27K from my local release+asserts build of llc.
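A minimal sketch of the shape of the new class, under the assumptions stated above (three 64-bit words, constexpr-constructible, convertible to the bitset type used elsewhere); the in-tree version differs in detail:
```
#include <array>
#include <bitset>
#include <cstdint>

constexpr unsigned MaxSubtargetFeatures = 192;            // 3 x 64 bits
using FeatureBitset = std::bitset<MaxSubtargetFeatures>;  // stand-in for the real subclass

class FeatureBitArray {
  std::array<uint64_t, 3> Bits;

public:
  // constexpr, so tablegen-emitted tables of these need no static constructor.
  constexpr FeatureBitArray(const std::array<uint64_t, 3> &B) : Bits(B) {}

  // Convert to the std::bitset-based type used by the rest of the code.
  FeatureBitset getAsBitset() const {
    FeatureBitset Result;
    for (unsigned Word = 0; Word != Bits.size(); ++Word)
      for (unsigned Bit = 0; Bit != 64; ++Bit)
        if (Bits[Word] & (uint64_t(1) << Bit))
          Result.set(Word * 64 + Bit);
    return Result;
  }
};

// Tablegen can then emit plain constants such as:
//   FeatureBitArray({{0x0000000000000004ULL, 0x0ULL, 0x0ULL}})
```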
Differential Revision: https://reviews.llvm.org/D58520
llvm-svn: 355167
Summary:
These sorts of blocks often contain calls to noreturn functions, like
longjmp, throw, or trap. If they don't end the program, they are
"interesting" from the perspective of sanitizer coverage, so we should
instrument them. This was discussed in https://reviews.llvm.org/D57982.
Reviewers: kcc, vitalybuka
Subscribers: llvm-commits, craig.topper, efriedma, morehouse, hiraditya
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58740
llvm-svn: 355152