This patch reverts r325440 and r325438 because they trigger an
assertion in SelectionDAGBuilder.cpp. Also, having debug info enabled
may unintentionally affect code-gen. The patches are reverted until
we find a better solution.
llvm-svn: 325825
This allows us to improve vector constant matching in more DAG code (backends, TargetLowering etc.).
Differential Revision: https://reviews.llvm.org/D43466
llvm-svn: 325815
Summary:
If there is no debug info for macros, do not emit labels for empty
macinfo sections.
Reviewers: probinson, echristo
Subscribers: aprantl, llvm-commits, JDevlieghere
Differential Revision: https://reviews.llvm.org/D43589
llvm-svn: 325803
We looked through a BITCAST, but the bitcast might be from a scalar type rather than a vector type.
I don't have a test case. I stumbled onto it while prototyping another change that isn't ready yet.
llvm-svn: 325750
Spilling may cause previously non-empty intervals (both for the spilled vreg
and others) to become empty. Moving the pruning into initializeGraph catches
these cases and fixes PR33038.
llvm-svn: 325632
This is split off from D42948 and includes just the cases that constant fold to true or false. It also includes some refactoring to keep predicate checks together.
This supports things like
(setcc uge X, 0) -> true
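As a rough illustration of the kind of fold this covers, a hand-written sketch (the variable names Cond/N0/N1 follow common SelectionDAG conventions and are not quoted from the patch):

  // An unsigned "X >= 0" comparison is always true, so fold it to a constant.
  if (Cond == ISD::SETUGE && isNullConstant(N1))
    return DAG.getBoolConstant(true, DL, VT, N0.getValueType());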
Differential Revision: https://reviews.llvm.org/D43489
llvm-svn: 325627
DAGCombiner and SimplifySetCC both use getPointerTy for shift amounts pre-legalization. DAGCombiner uses a single helper function to hide this. SimplifySetCC does it in multiple places.
This patch adds a defaulted parameter to getShiftAmountTy that can make it return getPointerTy for scalar types. Use this parameter to simplify SimplifySetCC and DAGCombiner.
Additionally, there were two places in SimplifySetCC that were creating shifts using the target's preferred shift amount type pre-legalization. If the target uses a narrow type and the type is illegal, this can cause SimplifySetCC to create a shift with an amount that can't represent all possible shift values for the type. To fix this we should use the pointer type there too.
Alternatively we could make getScalarShiftAmountTy for each target return a safe value for large types as proposed in D43445. And maybe we should still do that, but fixing the SimplifySetCC code keeps other targets from tripping over this in the future.
Fixes PR36250.
Differential Revision: https://reviews.llvm.org/D43449
llvm-svn: 325602
ExpandUINT_TO_FLOAT can accept vXi32 or vXi64 inputs, so we need to use a uint64_t shift to generate the 2^(BW/2) constant.
No test case unfortunately as no upstream target uses this, but it's affecting a downstream target.
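To illustrate why the 64-bit shift is needed, a small standalone sketch (the names are for illustration only):

  unsigned BW = 64;                                 // element width of a vXi64 input
  uint64_t TwoPowHalfBW = uint64_t(1) << (BW / 2);  // OK: 2^32
  // int Bad = 1 << (BW / 2);                       // undefined: shift count equals the width of int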
llvm-svn: 325578
This is the second part of the recommit of r325224. The previous part was
committed in r325426 and dealt with C++ memory allocation. The solution
for C memory allocation involved functions such as `llvm::malloc`.
This was a fragile solution because it caused ambiguity errors in some
cases. In this commit the new functions have names like `llvm::safe_malloc`.
The relevant part of the original comment is below, updated for the new
function names.
Analysis of failures caused by out-of-memory errors can be tricky on
Windows. Such an error emerges at the point where a memory allocation
function fails, but manifests itself when the null pointer is used. These
two points may be distant from each other. Besides, subsequent runs may not
exhibit the allocation error.
In some cases memory is allocated by a call to one of the C allocation
functions: malloc, calloc, or realloc. They are used for interoperability
with C code, when the allocated object has a variable size, and when it is
necessary to avoid calling constructors. In many calls the result is not
checked for a null pointer. To simplify the checks, new functions are
defined in the namespace 'llvm': `safe_malloc`, `safe_calloc` and
`safe_realloc`. They behave like the corresponding standard functions but
produce a fatal error if allocation fails. This change replaces standard
functions like 'malloc' in the cases where the result of the allocation
function is not checked for a null pointer.
Finally, there is plain C code that uses malloc and similar functions. If
the result is not checked, an assert statement is added.
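For reference, a minimal usage sketch of the new functions (the header name is an assumption, not stated in this commit message):

  #include "llvm/Support/MemAlloc.h"  // assumed location of the safe_* declarations
  #include <cstdlib>

  void fillBuffer(size_t NumBytes) {
    // safe_malloc never returns null; it reports a fatal error on failure,
    // so no null check is needed at the call site.
    char *Buf = static_cast<char *>(llvm::safe_malloc(NumBytes));
    // ... use Buf ...
    std::free(Buf);
  }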
Differential Revision: https://reviews.llvm.org/D43010
llvm-svn: 325551
If we have a clamp pattern, SMIN(SMAX(X, LO),HI) or SMAX(SMIN(X, HI),LO) then we can deduce that the number of signbits (zeros/ones) will be at least the minimum of the LO and HI constants.
ComputeKnownBits equivalent of D43338.
Differential Revision: https://reviews.llvm.org/D43463
llvm-svn: 325521
Summary:
This commit separates the abstract accelerator table data structure
from the code for writing out an on-disk representation of a specific
accelerator table format. The idea is that the former (now called
AccelTable<T>) can be reused for the DWARF v5 accelerator tables
as-is, without any further customizations.
Some bits of the emission code (now living in the EmissionContext class)
can be reused for DWARF v5 as well, but the subtle differences in the
layout of various subtables mean the sharing is not always possible.
(Also, the individual emit*** functions are fairly simple so there's a
tradeoff between making a bigger general-purpose function, and two
smaller targeted functions.)
Another advantage of this setup is that more of the serialization logic
can be hidden in the .cpp file -- I have moved declarations of the
header and all the emission functions there.
Reviewers: JDevlieghere, aprantl, probinson, dblaikie
Subscribers: echristo, clayborg, vleschuk, llvm-commits
Differential Revision: https://reviews.llvm.org/D43285
llvm-svn: 325516
If we have a clamp pattern, SMIN(SMAX(X, LO),HI) or SMAX(SMIN(X, HI),LO) then we can deduce that the number of signbits will be at least the minimum of the LO and HI constants.
I haven't bothered with the UMIN/UMAX equivalent as (1) we don't have any current use cases and (2) I wonder if we'd be better off immediately falling back to ComputeKnownBits for UMIN/UMAX, which already has optimization patterns useful for unsigned cases.
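A worked example of the reasoning, as a hand-written sketch (not code from this patch):

  #include "llvm/ADT/APInt.h"
  #include <algorithm>

  // Clamp an i8 value X to [-16, 15] via SMIN(SMAX(X, -16), 15):
  //   -16 = 0b11110000 -> 4 sign bits;  15 = 0b00001111 -> 4 sign bits.
  unsigned clampedSignBitsLowerBound() {
    unsigned LoBits = llvm::APInt(8, -16, /*isSigned=*/true).getNumSignBits(); // 4
    unsigned HiBits = llvm::APInt(8, 15).getNumSignBits();                     // 4
    return std::min(LoBits, HiBits); // the clamped result has at least 4 sign bits
  }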
Differential Revision: https://reviews.llvm.org/D43338
llvm-svn: 325450
Summary:
https://llvm.org/PR36263 shows that when compiling at -O0 a dbg.value()
instruction (that remains from an original dbg.declare()) is dropped
by FastISel. Since FastISel selects instructions by iterating a basic
block backwards, it drops the dbg.value if one of its operands is not
yet instantiated by a previously selected instruction.
Instead of calling 'lookUpRegForValue()' we can call 'getRegForValue()',
which will insert a placeholder for the operand to be filled in
when continuing the instruction selection.
Reviewers: aprantl, dblaikie, probinson
Reviewed By: aprantl
Subscribers: llvm-commits, dstenb, JDevlieghere
Differential Revision: https://reviews.llvm.org/D43386
llvm-svn: 325438
This makes sure that alloca() function calls properly probe the
stack as needed.
Differential Revision: https://reviews.llvm.org/D42356
llvm-svn: 325433
Summary:
The assert for a DISubrange's CountVarDIE to be available fails
when the dbg.value() has been optimized away for any reason.
Having the assert for that is a little heavy, so instead remove it
in favor of not generating the 'count' expression.
Addresses http://llvm.org/PR36263.
Reviewers: aprantl, dblaikie, probinson
Reviewed By: aprantl
Subscribers: JDevlieghere, llvm-commits, dstenb
Differential Revision: https://reviews.llvm.org/D43387
llvm-svn: 325427
This reverts commit r323991.
This commit breaks targets that don't model all the register constraints
in TableGen. So far the workaround was to set
hasExtraXXXRegAllocReq, but it has proven insufficient to cover all the
cases.
For instance, when mutating an instruction (like in the lowering of
COPYs) the isRenamable flag is not properly updated. The same problem
will happen when attaching machine operand from one instruction to
another.
Geoff Berry is working on a fix in https://reviews.llvm.org/D43042.
llvm-svn: 325421
Sadly, r324359 caused at least PR36312. There is a patch out for review
but it seems to be taking a bit and we've already had these crashers in
tree for too long. We're hitting this PR in real code now and are
blocked on shipping new compilers as a consequence so I'm reverting us
back to green.
Sorry for the churn due to the stacked changes that I had to revert. =/
llvm-svn: 325420
Based on the DemandedElts mask and the UNDEF elements returned from the SimplifyDemandedVectorElts calls on the shuffle operands, we can attempt to simplify the shuffle mask.
I had to be very conservative here, as accepting post-legalized shuffle masks could cause problems for targets that legalize UNDEF mask elements back to in-range values (PowerPC); similarly, combining to identity shuffle masks could cause too much UNDEF information to disappear for later combines.
llvm-svn: 325354
Summary:
Currently when expanding a SETCC node into a SELECT_CC, LLVM uses
an incorrect type for determining BooleanContent of the result. This
patch fixes the issue.
Fixes PR36079.
Reviewers: rogfer01, javed.absar, efriedma
Reviewed By: efriedma
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D43282
llvm-svn: 325325
Same for the sign extend case.
Currently we check for multiple uses on the binop. Then we call ExtendUsesToFormExtLoad to capture SetCCs that use the load. So we only end up finding any setccs when the and has additional uses and the load is used by a setcc. I don't think the and having multiple uses is relevant here. I think we should only be checking for the load having multiple uses.
This changes an NVPTX test because we now find that the load has a second use by a truncate, but ExtendUsesToFormExtLoad only looks at setccs it can extend. All other operations just check isTruncateFree. Maybe we should allow widening of an existing truncate even if it's not free?
Differential Revision: https://reviews.llvm.org/D43063
llvm-svn: 325289
This is mainly a move of simplifyShuffleOperands from DAGCombiner::visitVECTOR_SHUFFLE to create a more general purpose TargetLowering::SimplifyDemandedVectorElts implementation.
Further features can be moved/added in future patches.
Differential Revision: https://reviews.llvm.org/D42896
llvm-svn: 325232
Analysis of failures caused by out-of-memory errors can be tricky on
Windows. Such an error emerges at the point where a memory allocation
function fails, but manifests itself when the null pointer is used. These
two points may be distant from each other. Besides, subsequent runs may not
exhibit the allocation error.
Usual programming practice does not require checking the result of 'operator
new' because it throws 'std::bad_alloc' in the case of an allocation error.
However, LLVM is usually built with exceptions turned off, so 'new' can
return a null pointer. This change installs a custom new handler, which
causes a fatal error in the case of out of memory. The handler is installed
automatically prior to the call to 'main' during construction of a static
object defined in 'lib/Support/ErrorHandling.cpp'. If the application does
not use this file, the handler may be installed manually by a call to
'llvm::install_out_of_memory_new_handler', declared in
'include/llvm/Support/ErrorHandling.h'.
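A minimal sketch of the manual installation described above:

  #include "llvm/Support/ErrorHandling.h"

  int main(int argc, char **argv) {
    // Only needed if lib/Support/ErrorHandling.cpp (and its static
    // initializer) is not linked into the application.
    llvm::install_out_of_memory_new_handler();
    // ... rest of the program ...
    return 0;
  }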
There are calls to the C allocation functions malloc, calloc and realloc.
They are used for interoperability with C code, when the allocated object
has a variable size, and when it is necessary to avoid calling constructors.
In many calls the result is not checked against a null pointer. To simplify
the checks, new functions are defined in the namespace 'llvm' with the
same names as these C functions. These functions produce a fatal error if
allocation fails. Users should use 'llvm::malloc' instead of 'std::malloc'
in order to use the safe variant. This change replaces 'std::malloc'
in the cases where the result of the allocation function is not checked
against a null pointer.
Finally, there is plain C code that uses malloc and similar functions. If
the result is not checked, assert statements are added.
Differential Revision: https://reviews.llvm.org/D43010
llvm-svn: 325224
Summary:
This patch adds templated functions to MachineIRBuilder for some opcodes
and adds pattern matcher support for G_AND and G_OR.
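As a rough illustration of what the new matcher support enables (the names follow the existing MIPatternMatch style and are an assumption here, not quoted from the patch):

  // Hypothetical use in a GlobalISel combine: match DstReg = G_AND Src0, Src1
  // and capture the two source registers.
  unsigned Src0, Src1;
  if (mi_match(DstReg, MRI, m_GAnd(m_Reg(Src0), m_Reg(Src1)))) {
    // ... rewrite using Src0 / Src1 ...
  }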
Reviewers: aditya_nandakumar
Reviewed By: aditya_nandakumar
Subscribers: rovka, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D43309
llvm-svn: 325162
Previously we only invalidated the pressure set limit cached when the TargetRegisterInfo pointer changes. But as reserved regs and callee saved regs are used as part of calculating the limits we should invalidate when those change too.
I encountered this when reverting a patch from the 6.0 branch. One of the x86 test files had a function that used rbp as a frame pointer, making it reserved. It was followed by another function which didn't use rbp but had the same TRI, so the pressure set limit cache was not invalidated. If I removed the function that used rbp as a frame pointer from the file, the remaining function then got a different register pressure limit for the GR16 pressure set. This caused the machine scheduler to change the scheduling for the function. This was an unexpected change from just deleting a function.
I don't have a test case for trunk because the particular x86 test case is different enough from the 6.0 branch to not be affected now.
Differential Revision: https://reviews.llvm.org/D43274
llvm-svn: 325153
The prologue-end line record must be emitted after the last
instruction that is part of the function frame setup code and before
the instruction that marks the beginning of the function body.
Patch by Carlos Alberto Enciso!
Differential Revision: https://reviews.llvm.org/D41762
llvm-svn: 325143
When creating the high MachineMemOperand for MSTORE/MLOAD we supplied
it with the original PointerInfo, while the pointer itself had been incremented.
The patch adds the proper offset to the PointerInfo.
llvm-svn: 325135
Preserve debug info from a dead 'and' instruction with a constant.
Patch by Djordje Todorovic.
Differential Revision: https://reviews.llvm.org/D43163
llvm-svn: 325119
Make the width of the GEP index, which is used for address calculation, one of the pointer properties in the Data Layout.
p[address space]:size:memory_size:alignment:pref_alignment:index_size_in_bits.
The index size parameter is optional; if not specified, it is equal to the pointer size.
Until now, the InstCombiner normalized GEPs and extended the index operand to the pointer width.
That works fine if you can convert a pointer to an integer for address calculation, and all registered targets do this.
But some ISAs have a very restricted instruction set for pointer calculation. During discussions it was decided to retrieve the information for the GEP index from the Data Layout.
http://lists.llvm.org/pipermail/llvm-dev/2018-January/120416.html
I added an interface to the Data Layout and changed the InstCombiner and some other passes to take the index width into account.
This change does not affect any in-tree target. I added tests to cover data layouts with explicitly specified index size.
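A hedged sketch of how a pass might query the new property (the accessor name is an assumption based on the description above, not quoted from the patch):

  // Use the index width, not the full pointer width, when building a GEP index.
  unsigned IdxBits = DL.getIndexSizeInBits(GEP->getPointerAddressSpace());
  Type *IdxTy = IntegerType::get(Ctx, IdxBits);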
Differential Revision: https://reviews.llvm.org/D42123
llvm-svn: 325102
* Document most APIs
* Delete a useless function call
* Fix a discrepancy between the single and multi-opcode variants of
getActionDefinitions().
The multi-opcode variant now requires that more than one opcode is requested.
Previously it acted much like the single-opcode form but unnecessarily
enforced the requirements of the multi-opcode form.
llvm-svn: 325067
Also make a drive-by-fix of a bug in the subregister scan code that
only triggers with an incomplete or otherwise very irregular machine
description.
rdar://problem/37404493
This re-applies r324972 with an early exit in the case of a complete
failure to make this commit NFC again as intended.
llvm-svn: 325041
Summary:
If the and has an additional use we shouldn't invert it. That creates an additional instruction.
While there, add a one-use check to the similar-looking transform above.
Reviewers: spatel, RKSimon
Reviewed By: RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D43225
llvm-svn: 325019
The bug had been lying dormant but apparently was never exposed until
after rL324941, because we didn't return the correct result
for shifts with undef operands.
llvm-svn: 325010
Here are the number of additional debug values salvaged in a stage2
build of clang:
63 SALVAGE: MUL
1250 SALVAGE: SDIV
(No values were salvaged from `srem` instructions in this experiment,
but it's a simple case to handle so we might as well.)
llvm-svn: 324976
Here are the number of additional debug values salvaged in a stage2
build of clang:
1912 SALVAGE: ASHR
405 SALVAGE: LSHR
249 SALVAGE: SHL
llvm-svn: 324975
Also make a drive-by-fix of a bug in the subregister scan code that
only triggers with an incomplete or otherwise very irregular machine
description.
rdar://problem/37404493
llvm-svn: 324972
Summary:
This change is part of step five in the series of changes to remove alignment argument from
memcpy/memmove/memset in favour of alignment attributes. In particular, this changes the
creation of memcpys in the SafeStack pass to set the alignment of the destination object to
its stack alignment while separately setting the source byval argument's alignment to its own
alignment.
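A sketch of the new-style call this step moves to (the argument order shown is an assumption, not copied from the patch):

  // The destination gets the stack object's alignment; the source keeps the
  // byval argument's own alignment.
  IRB.CreateMemCpy(DstPtr, /*DstAlign=*/StackAlign,
                   SrcPtr, /*SrcAlign=*/ByValAlign, CopySize);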
Steps:
Step 1) Remove alignment parameter and create alignment parameter attributes for
memcpy/memmove/memset. ( rL322965, rC322964, rL322963 )
Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with differing
source and dest alignments. ( rL323597 )
Step 3) Update Clang to use the new IRBuilder API. ( rC323617 )
Step 4) Update Polly to use the new IRBuilder API. ( rL323618 )
Step 5) Update LLVM passes that create memcpy/memmove calls to use the new IRBuilder API,
and those that use MemIntrinsicInst::[get|set]Alignment() to use [get|set]DestAlignment()
and [get|set]SourceAlignment() instead. (rL323886, rL323891, rL324148, rL324273, rL324278,
rL324384, rL324395, rL324402, rL324626, rL324642, rL324653, rL324654, rL324773, rL324774,
rL324781, rL324784 )
Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the
MemIntrinsicInst::[get|set]Alignment() methods.
Reference
http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html
Reviewers: eugenis, bollu
Reviewed By: eugenis
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D42710
llvm-svn: 324955
This started by noticing that scalar and vector types were producing different results with div ops in PR36305:
https://bugs.llvm.org/show_bug.cgi?id=36305
...but the problem is bigger. I couldn't keep it straight without a table, so I'm attaching that as a PDF to
the review. The x86 tests in undef-ops.ll correspond to that table.
Green means that instsimplify and the DAG agree on the result for all types.
Red means the DAG was returning undef when IR was not.
Yellow means the DAG was returning a non-undef result when IR returned undef.
This patch assumes that we're currently doing the right thing in IR.
Note: I couldn't find any problems with lowering vector constants as the code comments were warning,
but those comments were written long ago in rL36413.
Differential Revision: https://reviews.llvm.org/D43141
llvm-svn: 324941
If merging them, the dllexport attribute needs to be brought along
to the new GlobalAlias.
Differential Revision: https://reviews.llvm.org/D43192
llvm-svn: 324937
Rather than encoding the absence of a checksum with a Kind variant, put
both the kind and value in a struct and wrap it in an Optional.
Differential Revision: http://reviews.llvm.org/D43043
llvm-svn: 324928
Armv8.1-A added an atomic load-clear instruction (which performs bitwise
and with the complement of its operand), but not a load-and
instruction. Our current code-generation for atomic load-and always
inserts an MVN instruction to invert its argument, even if it could be
folded into a constant or another instruction.
This adds lowering early in selection DAG to convert a load-and
operation into an xor with -1 and a load-clear, allowing the normal DAG
optimisations to work on it.
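Conceptually, the early lowering looks roughly like this (a hand-written sketch, not the actual AArch64 code):

  // atomic_load_and(ptr, val)  ==>  atomic_load_clr(ptr, xor(val, -1))
  SDValue NotVal = DAG.getNode(ISD::XOR, DL, VT, Val,
                               DAG.getAllOnesConstant(DL, VT));
  SDValue Res = DAG.getAtomic(ISD::ATOMIC_LOAD_CLR, DL, MemVT,
                              Chain, Ptr, NotVal, MMO);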
To do this, I've had to add a new ISD opcode, ATOMIC_LOAD_CLR. I don't
see any easy way to do this with an AArch64-specific ISD node, because
the code-generation for atomic operations assumes the SDNodes are of
type AtomicSDNode.
I've left the old tablegen patterns in because they are still needed for
global isel.
Differential revision: https://reviews.llvm.org/D42478
llvm-svn: 324908
Add a common -trap-unreachable option, similar to the target
specific hexagon equivalent, which has been replaced. This
turns unreachable instructions into traps, which is useful for
debugging.
Differential Revision: https://reviews.llvm.org/D42965
llvm-svn: 324880
Instead of reserving 0xF00 bytes for the fixed length portion of the CodeView
symbol name, calculate the actual length of the fixed length portion.
Differential Revision: https://reviews.llvm.org/D42125
llvm-svn: 324850
This reverses instcombine's demanded-bits transform, which always tries to clear bits in constants.
As noted in PR35792 and shown in the test diffs:
https://bugs.llvm.org/show_bug.cgi?id=35792
...we can do better in codegen by trying to form -1. The x86 sub test shows a missed opportunity.
I did investigate changing instcombine's behavior, but it would be more work to change
canonicalization in IR. Clearing bits / shrinking constants can allow killing instructions,
so we'd have to figure out how to not regress those cases.
Differential Revision: https://reviews.llvm.org/D42986
llvm-svn: 324839
This allows us to recognise more saturation patterns and also simplify some MINMAX codegen that was failing to combine CMPGE comparisons to a legal CMPGT.
Differential Revision: https://reviews.llvm.org/D43014
llvm-svn: 324837
SelectionDAG::getBoolConstant was recently introduced. At the time I didn't know getConstTrueVal existed, but I think getBoolConstant is better as it will use the source VT to make sure it can properly detect floating point if it is configured differently.
llvm-svn: 324832
Extend salvageDebugInfo to preserve the debug info from a dead 'or'
with a constant.
Patch by Ismail Badawi!
Differential Revision: https://reviews.llvm.org/D43129
llvm-svn: 324764
* Use uleb128 for code offsets in the LSDA call site table.
* Omit the TTBase offset if the type table is empty.
This change can reduce the size of the DWARF/Itanium LSDA by about half.
Patch by Ryan Prichard!
llvm-svn: 324750
Rely on the assembler to finalize the layout of the DWARF/Itanium
exception-handling LSDA. Rather than calculate the exact size of each
thing in the LSDA, use assembler directives:
To emit the offset to the TTBase label:
.uleb128 .Lttbase0-.Lttbaseref0
.Lttbaseref0:
To emit the size of the call site table:
.uleb128 .Lcst_end0-.Lcst_begin0
.Lcst_begin0:
... call site table entries ...
.Lcst_end0:
To align the type info table:
... action table ...
.balign 4
.long _ZTIi
.long _ZTIl
.Lttbase0:
Using assembler directives simplifies the compiler and allows switching
the encoding of offsets in the call site table from udata4 to uleb128 for
a large code size savings. (This commit does not change the encoding.)
The combination of the uleb128 followed by a balign creates an unfortunate
dependency cycle that the assembler must sometimes resolve either by
padding an LEB or by inserting zero padding before the type table. See
PR35809 or GNU as bug 4029.
Patch by Ryan Prichard!
llvm-svn: 324749
r314974 introduced insertion of DEBUG_VALUEs after
each redefinition of the debug value register in the slot index range.
If the instruction redefining the debug value register
was a terminator, the machine verifier would complain, since it
enforces the rule that no non-terminator instructions may
follow the first terminator.
Differential Revision: https://reviews.llvm.org/D42801
llvm-svn: 324734
When adding operands to machine instructions in case of
RegisterSDNodes, generate a COPY node in case the register class
does not match the one in the instruction definition.
Differential Revision: https://reviews.llvm.org/D35561
llvm-svn: 324733
Summary:
The class contained arrays of two structures (DataArray and HashData).
These structures were in 1:1 correspondence, and one of them contained
pointers to the other (and *both* contained a "Name" field). By merging
these two structures into one, we can save a bit of space without
negatively impacting much of anything.
Reviewers: JDevlieghere, aprantl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D43073
llvm-svn: 324724
Add verification for copies involving generic registers, checking that they
are compatible - i.e. if it is a generic copy, then the types are the
same, and if it is a COPY between a generic and a target virtual register,
then the sizes should be the same. For now this only checks copies where no
subregisters are involved.
https://reviews.llvm.org/D37775
llvm-svn: 324696
This addresses review feedback for D42940. The topological sort is
slightly more expensive but it can now also detect cycles in the
dependencies and actually works correctly.
rdar://problem/37217988
Differential Revision: https://reviews.llvm.org/D43036
llvm-svn: 324677
Many places in SimplifySetCC and FoldSetCC try to create true or false constants. Some of them query getBooleanContents to figure out whether to use all ones or just 1 for true. But many places do not check and just use 1 without ensuring the VT has an i1 scalar type. I'm not sure if those places only trigger before type legalization, so that they only see an i1 type.
To clean up the inconsistency and reduce some duplicated code, this patch adds a getBoolConstant method to SelectionDAG that takes care of querying getBooleanContents and doing the right thing.
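A minimal sketch of how a caller uses the new helper (hand-written; the exact signature is inferred from the description above):

  // Produces "true" in whatever form getBooleanContents(OpVT) prescribes
  // (all-ones or 1), so callers no longer have to check it themselves.
  SDValue TrueVal = DAG.getBoolConstant(true, DL, VT, OpVT);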
Differential Revision: https://reviews.llvm.org/D43037
llvm-svn: 324634
We're passing the binary op that uses the load instead of the load.
Noticed by inspection. Not sure how to test this because this just prevents the introduction of an extend that will later be truncated and will probably be combined out.
llvm-svn: 324568
The truncate is being used to replace other users of the load, but we checked that the load only has one use, so there are no other uses to replace.
llvm-svn: 324567
Instead of:
%bb.1: derived from LLVM BB %for.body
print:
bb.1.for.body:
Also use MIR syntax for MBB attributes like "align", "landing-pad", etc.
llvm-svn: 324563
The truncate is only needed if the load has additional users. It used to get passed to extendSetCCUses so was created early, but that's no longer the case.
llvm-svn: 324562
Traveling all chain paths to the first non-TokenFactor node can be
exponential work. Add a simple redundancy check to avoid this.
Fixes PR36264.
llvm-svn: 324491
This patch is the LLVM part of fixing the issues, described in
https://bugs.llvm.org/show_bug.cgi?id=36168
* The representation of enumerator values in the debug info metadata now
contains a boolean flag isUnsigned, which determines how the bits of
the value are interpreted.
* The DW_TAG_enumeration type DIE now always (for DWARF version >= 3)
includes a DW_AT_type attribute, which refers to the underlying
integer type, as suggested in DWARFv4 (5.7 Enumeration Type Entries).
* The debug info metadata for enumeration type contains (in flags)
indication whether this is a C++11 "fixed enum".
* For C++11 enumeration with a fixed underlying type, the DIE also
includes the DW_AT_enum_class attribute (for DWARF version >= 4).
* Encoding of enumerator constants uses DW_FORM_sdata for signed values
and DW_FORM_udata for unsigned values, as suggested by DWARFv4 (7.5.4
Attribute Encodings).
The changes should be backwards compatible:
* the isUnsigned attribute is optional and defaults to false.
* if the underlying type for the enumeration is not available, the
enumerator values are considered signed.
* the FixedEnum flag defaults to clear.
* the bitcode format for DIEnumerator stores the unsigned flag in bit #1 of
the first record element, so the format does not change and the zero
previously stored there is consistent with the false default for
IsUnsigned.
Differential Revision: https://reviews.llvm.org/D42734
llvm-svn: 324489
With fixes from rL324341.
Original commit message:
[MergeICmps] Enable the MergeICmps Pass by default.
Summary: Now that PR33325 is fixed, this should always improve the generated code.
Reviewers: spatel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D42793
llvm-svn: 324465
X86 currently has a late DAG combine after cttz/ctlz are turned into BSR+BSF+CMOV to detect this and remove the CMOV. But we should be able to do this much earlier and avoid creating the cmov all together.
For the changed AMDGPU test case it appears that previously the i8 cttz was type legalized to i16, which introduced an OR with 256 in order to limit the result to 8 on the widened type. At this point the result is known to never be zero, but nothing checked that. Then operation legalization is told to promote all i16 cttz to i32. This introduces an extend and a truncate and another OR with 65536 to limit the result to 16. With the DAG combiner change we are able to prevent the creation of the second OR, since the opcode will have been changed to cttz_zero_undef after the first OR. I think the lack of the OR caused the instruction to change to v_ffbl_b32_sdwa.
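A rough sketch of the earlier combine described above (hand-written; not the actual DAGCombiner code, and the isKnownNeverZero helper used here is an assumption):

  // If the cttz source is known non-zero, the "input is zero" special case
  // can never happen, so switch to the _ZERO_UNDEF form; the guarding
  // cmov/select can then fold away.
  if (N->getOpcode() == ISD::CTTZ && DAG.isKnownNeverZero(N->getOperand(0)))
    return DAG.getNode(ISD::CTTZ_ZERO_UNDEF, SDLoc(N), N->getValueType(0),
                       N->getOperand(0));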
Differential Revision: https://reviews.llvm.org/D42985
llvm-svn: 324427
In Rust, an enum that carries data in the variants is, essentially, a
discriminated union. Furthermore, the Rust compiler will perform
space optimizations on such enums in some situations. Previously,
DWARF for these constructs was emitted using a hack (a magic field
name); but this approach stopped working when more space optimizations
were added in https://github.com/rust-lang/rust/pull/45225.
This patch changes LLVM to allow discriminated unions to be
represented in DWARF. It adds createDiscriminatedUnionType and
createDiscriminatedMemberType to DIBuilder and then arranges for this
to be emitted using DWARF's DW_TAG_variant_part and DW_TAG_variant.
Note that DWARF requires that a discriminated union be represented as
a structure with a variant part. However, as Rust only needs to emit
pure discriminated unions, this is what I chose to expose on
DIBuilder.
Patch by Tom Tromey!
Differential Revision: https://reviews.llvm.org/D42082
llvm-svn: 324426
See D42509 for the original version of this.
Basically, there are two significant changes to behavior here:
- addLiveOuts always adds all pristine registers (even if a block has
no successors).
- addLiveOuts and addLiveOutsNoPristines always add all callee-saved
registers for return blocks (including conditional return blocks).
I cleaned up the functions a bit to make it clear these properties hold.
Differential Revision: https://reviews.llvm.org/D42655
llvm-svn: 324422