Commit Graph

29602 Commits

Author SHA1 Message Date
Aditya Nandakumar 97203cfd6b [GISel] Add new GISel combiners for G_MUL
https://reviews.llvm.org/D87668

Patch adds two new GICombinerRules, one for G_MUL(X, 1) and another for G_MUL(X, -1).
G_MUL(X, 1) is an identity combine, and G_MUL(X, -1) gets replaced with G_SUB(0, X).
Patch additionally adds new combiner tests for the AArch64 target to test these
new combiner rules, as well as updating AMDGPU GISel tests.

Patch by mkitzan
2020-09-15 16:08:47 -07:00
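In plain integer arithmetic, the two folds correspond to the identities below; a minimal standalone sketch in C++ (illustrative only, not the combiner rules themselves):

  #include <cstdint>

  // G_MUL(X, 1)  -> X          (identity)
  int64_t mulByOne(int64_t x)      { return x; }
  // G_MUL(X, -1) -> G_SUB(0, X) (negate instead of multiply)
  int64_t mulByMinusOne(int64_t x) { return 0 - x; }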
Volkan Keles a4e35cc2ec GlobalISel: Add combines for G_TRUNC
https://reviews.llvm.org/D87050
2020-09-15 15:50:34 -07:00
Guozhi Wei 243ffd0cad [MachineBasicBlock] Fix a typo in function copySuccessor
The condition used to decide whether the probability needs to be copied should be reversed.

Differential Revision: https://reviews.llvm.org/D87417
2020-09-15 09:18:18 -07:00
Qiu Chaofan e1669843f2 Revert "[SelectionDAG] Remove unused FP constant in getNegatedExpression"
2508ef01 doesn't totally fix the issue, since we did not handle the case
where the unused temporary negated result is the same as the result, which
was found by the address sanitizer.
2020-09-15 22:03:50 +08:00
Hans Wennborg a21387c654 Revert "RegAllocFast: Record internal state based on register units"
This seems to have caused incorrect register allocation in some cases,
breaking tests in the Zig standard library (PR47278).

As discussed on the bug, revert back to green for now.

> Record internal state based on register units. This is often more
> efficient as there are typically fewer register units to update
> compared to iterating over all the aliases of a register.
>
> Original patch by Matthias Braun, but I've been rebasing and fixing it
> for almost 2 years and fixed a few bugs causing intermediate failures
> to make this patch independent of the changes in
> https://reviews.llvm.org/D52010.

This reverts commit 66251f7e1d, and
follow-ups 931a68f26b
and 0671a4c508. It also adjusts some
test expectations.
2020-09-15 13:25:41 +02:00
Simon Pilgrim 6c1f2a34fb SpillPlacement.cpp - remove unnecessary includes. NFCI.
These are all directly included in SpillPlacement.h
2020-09-15 12:18:24 +01:00
Simon Pilgrim 1abb4461ea StatepointLowering.cpp - remove unnecessary includes. NFCI.
These are all directly included in StatepointLowering.h
2020-09-15 12:18:23 +01:00
Simon Pilgrim bee79cdcc6 SelectionDAGBuilder.h - remove unnecessary includes. NFCI.
Reduce to forward declarations and move implicit dependencies down to the cpp files.
2020-09-15 12:18:22 +01:00
Qiu Chaofan 2508ef014e [SelectionDAG] Remove unused FP constant in getNegatedExpression
960cbc53 immediately removes nodes that won't be used, to avoid
compilation time explosion. This patch extends that removal to constants to
fix PR47517.

Reviewed By: RKSimon, steven.zhang

Differential Revision: https://reviews.llvm.org/D87614
2020-09-15 17:59:10 +08:00
Petar Avramovic 9b4fa85434 GlobalISel/IRTranslator resetTargetOptions based on function attributes
Update TargetMachine.Options with function attributes before we start
to generate MIR instructions. This allows access to the correct function
attributes via TargetMachine.Options (previously it accessed the attributes
of the first function that was translated).
This affects some existing tests with "no-nans-fp-math" attribute.
Follow-up on D87456.

Differential Revision: https://reviews.llvm.org/D87511
2020-09-15 10:26:09 +02:00
Igor Kudrin a845ebd633 [DebugInfo] Make offsets of dwarf units 64-bit (19/19).
In the case of LTO, several DWARF units can be emitted in one section.
For an extremely large application, they may exceed the limit of 4GiB
for 32-bit offsets. As it is now possible to emit 64-bit debugging info,
the patch enables storing the larger offsets.

Differential Revision: https://reviews.llvm.org/D87026
2020-09-15 12:23:32 +07:00
Igor Kudrin 8c19ac23bd [DebugInfo] Make the offset of string pool entries 64-bit (18/19).
The string pool is shared among several units in the case of LTO,
and it potentially can exceed the limit of 4GiB for an extremely
large application. As it is now possible to emit 64-bit debugging
info, the limitation can be removed.

Differential Revision: https://reviews.llvm.org/D87025
2020-09-15 12:23:32 +07:00
Igor Kudrin 7e1e4e81cb [DebugInfo] Fix emitting DWARF64 .debug_macro[.dwo] sections (17/19).
The patch fixes emitting flags and the debug_line_offset field in
the header, as well as the reference to the macro string for
a pre-standard GNU .debug_macro extension.

Differential Revision: https://reviews.llvm.org/D87024
2020-09-15 12:23:31 +07:00
Igor Kudrin a93dd26d8c [DebugInfo] Fix emitting DWARF64 .debug_names sections (16/19).
The patch fixes emitting the unit length field in the header of
the table and offsets to the entry pool. Note that while the patch
changes the common method to emit offsets, in fact, nothing is changed
for Apple accelerator tables, because we do not yet support DWARF64 for
those targets.

Differential Revision: https://reviews.llvm.org/D87023
2020-09-15 12:23:31 +07:00
Igor Kudrin 00ce54689d [DebugInfo] Fix emitting DWARF64 .debug_addr sections (15/19).
The patch fixes emitting the header of the table. The content is
independent of the DWARF format.

Differential Revision: https://reviews.llvm.org/D87022
2020-09-15 12:23:31 +07:00
Igor Kudrin 3158d3dd4b [DebugInfo] Fix emitting DWARF64 .debug_loclists sections (14/19).
The size of the offsets in the table depends on the DWARF format.

Differential Revision: https://reviews.llvm.org/D87020
2020-09-15 12:23:31 +07:00
Igor Kudrin f9b242fe24 [DebugInfo] Fix emitting DWARF64 .debug_rnglists sections (13/19).
The size of the offsets in the table depends on the DWARF format.

Differential Revision: https://reviews.llvm.org/D87019
2020-09-15 12:23:31 +07:00
Igor Kudrin 03b09c6b68 [DebugInfo] Fix emitting pre-v5 name lookup tables in the DWARF64 format (12/19).
The transition is done by using methods of AsmPrinter which
automatically emit values in compliance with the selected DWARF format.

Differential Revision: https://reviews.llvm.org/D87013
2020-09-15 12:23:30 +07:00
Igor Kudrin b118030f3f [DebugInfo] Fix emitting DWARF64 .debug_aranges sections (11/19).
The patch fixes calculating the size of the table and emitting
the fields which depend on the DWARF format by using methods that
choose appropriate sizes automatically.

Differential Revision: https://reviews.llvm.org/D87012
2020-09-15 12:23:30 +07:00
Igor Kudrin 18f23b3ecc [DebugInfo] Fix emitting DWARF64 type units (10/19).
The patch fixes emitting the offset to the type DIE. All other fields
are already fixed in previous patches.

Differential Revision: https://reviews.llvm.org/D87021
2020-09-15 11:31:07 +07:00
Igor Kudrin 924dc58076 [DebugInfo] Fix emitting DWARF64 DWO compilation units and string offset tables (9/19).
These two fixes are better to go together because llvm-dwarfdump is
unable to dump a table when another one is malformed.

Differential Revision: https://reviews.llvm.org/D87018
2020-09-15 11:31:00 +07:00
Igor Kudrin 383d34c077 [DebugInfo] Fix emitting DWARF64 .debug_str_offsets sections (8/19).
The patch fixes calculating the size of the table and emitting the unit
length field.

Differential Revision: https://reviews.llvm.org/D87017
2020-09-15 11:30:53 +07:00
Igor Kudrin 26f1f18831 [DebugInfo] Fix emitting the DW_AT_location attribute for 64-bit DWARFv3 (7/19).
The patch uses a common method to determine the appropriate form for
the value of the attribute.

Differential Revision: https://reviews.llvm.org/D87016
2020-09-15 11:30:46 +07:00
Igor Kudrin cae7c1eb78 [DebugInfo] Use a common method to determine a suitable form for section offsets (6/19).
This is mostly an NFC patch because the involved methods are used when
emitting DWO files, which is incompatible with DWARFv3, or for platforms
where DWARF64 is not supported yet.

Differential Revision: https://reviews.llvm.org/D87015
2020-09-15 11:30:38 +07:00
Igor Kudrin 5dd1c59188 [DebugInfo] Fix emitting DWARF64 compilation units (5/19).
The patch also adds a method to choose an appropriate DWARF form
to represent section offsets according to the version and the format
of producing debug info.

Differential Revision: https://reviews.llvm.org/D87014
2020-09-15 11:30:30 +07:00
Igor Kudrin 982b31fad2 [DebugInfo] Add the -dwarf64 switch to llc and other internal tools (4/19).
The patch adds a switch to enable emitting debug info in the 64-bit
DWARF format. Most emitters for sections will be updated in the subsequent
patches, whereas for .debug_line and .debug_frame the emitters are in
the MC library, which is already updated.

For now, the switch is enabled only for 64-bit ELF targets.

Differential Revision: https://reviews.llvm.org/D87011
2020-09-15 11:30:18 +07:00
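Assuming the switch is spelled exactly as in the title, a typical invocation might look like this (the other flags are standard llc options):

  llc -dwarf64 -filetype=obj input.ll -o input.o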
Igor Kudrin c3c501f5d7 [DebugInfo] Add new emitting methods for values which depend on the DWARF format (3/19).
These methods are going to be used in subsequent patches.

Differential Revision: https://reviews.llvm.org/D87010
2020-09-15 11:30:10 +07:00
Igor Kudrin a8058c6f8d [DebugInfo] Fix DIE value emitters to be compatible with DWARF64 (2/19).
DW_FORM_sec_offset and DW_FORM_strp imply values of different sizes with
DWARF32 and DWARF64. The patch fixes DIE value classes to use correct
sizes when emitting their values. For DIELocList it ensures that the
requested DWARF form matches the current DWARF format because that class
uses a method that selects the size automatically.

Differential Revision: https://reviews.llvm.org/D87009
2020-09-15 11:30:02 +07:00
Igor Kudrin 380e746bcc [DebugInfo] Fix methods of AsmPrinter to emit values corresponding to the DWARF format (1/19).
These methods are used to emit values which are 32-bit in DWARF32 and
64-bit in DWARF64. The patch fixes them so that they choose the length
automatically, depending on the DWARF format set in the Context.

Differential Revision: https://reviews.llvm.org/D87008
2020-09-15 11:29:48 +07:00
Quentin Colombet b3afad0463 [GlobalISel] Add a `X, Y = G_UNMERGE(G_ZEXT Z)` -> X = G_ZEXT Z; Y = 0 combine
Add a combiner helper to transform unmerge of zext into one zext and
a constant 0

Differential Revision: https://reviews.llvm.org/D87427
2020-09-14 17:27:23 -07:00
Quentin Colombet d2321129bd [GlobalISel] Add `X,Y<dead> = G_UNMERGE Z` -> X = G_TRUNC Z
Add a combiner helper that replaces G_UNMERGE where all the destination lanes
are dead except the first one with a G_TRUNC.

Differential Revision: https://reviews.llvm.org/D87174
2020-09-14 17:27:23 -07:00
Quentin Colombet a36278c2f8 [GlobalISel] Add G_UNMERGE(Cst) -> Cst1, Cst2, ... combine
Add a combiner helper that replaces G_UNMERGE of big constants into direct
use of smaller constants.

Differential Revision: https://reviews.llvm.org/D87166
2020-09-14 16:30:18 -07:00
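Conceptually, unmerging a wide constant just slices it into narrower constants. A standalone sketch of that arithmetic in C++ (not the combiner helper itself; little-endian lane order assumed):

  #include <array>
  #include <cstdint>

  // Split a 64-bit constant into two 32-bit constants, low lane first,
  // mirroring what G_UNMERGE_VALUES of a G_CONSTANT would produce.
  std::array<uint32_t, 2> unmergeConstant(uint64_t cst) {
    return {{ static_cast<uint32_t>(cst),          // lane 0: low 32 bits
              static_cast<uint32_t>(cst >> 32) }}; // lane 1: high 32 bits
  }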
Aditya Nandakumar 46f9137e43 [GISel]: Add combine for G_FABS to G_FABS
https://reviews.llvm.org/D87554

Patch adds one new GICombinerRule for G_FABS. The combine rule folds G_FABS(G_FABS(X)) to G_FABS(X).
Patch additionally adds new combiner tests for the AArch64 target to test this new combiner rule.

Patch by mkitzan.
2020-09-14 15:56:24 -07:00
Quentin Colombet 670c276232 [GlobalISel] Add G_UNMERGE_VALUES(G_MERGE_VALUES) combine
Add the matching and applying function to the combiner helper for
G_UNMERGE_VALUES(G_MERGE_VALUES).

This combine also supports any merge-like input nodes, like G_BUILD_VECTOR,
and is robust against bitcasts in between the unmerge and merge nodes.

When the input type of the merge node and the output type of the unmerge
node are not the same, but the sizes are, the combine still applies but
creates bitcasts between the sources and the destinations instead of
reusing the destinations directly.

Long term, the artifact combiner should probably reuse that helper, but
as of today, it doesn't use any outside helper, so I kept it this way.

Differential Revision: https://reviews.llvm.org/D87117
2020-09-14 15:45:06 -07:00
Craig Topper c193a689b4 [SelectionDAG] Use Align/MaybeAlign in calls to getLoad/getStore/getExtLoad/getTruncStore.
The versions that take 'unsigned' will be removed in the future.

I tried to use getOriginalAlign instead of getAlign in some
places. getAlign factors in the minimum alignment implied by
the offset in the pointer info. Since we're also passing the
pointer info we can use the original alignment.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D87592
2020-09-14 13:54:50 -07:00
Craig Topper 4208ea3e19 [FastISel] Bail out of selectGetElementPtr for vector GEPs.
The code that decomposes the GEP into ADD/MUL doesn't work properly
for vector GEPs. It can create bad COPY instructions or possibly
assert.

For now just bail out to SelectionDAG.

Fixes PR45906
2020-09-14 12:53:06 -07:00
Nikita Popov 53f36f06af [Legalize][ARM][X86] Add float legalization for VECREDUCE
This adds SoftenFloatRes, PromoteFloatRes and SoftPromoteHalfRes
legalizations for VECREDUCE, to fill the remaining hole in the SDAG
legalization. These legalizations simply expand the reduction and
let it be recursively legalized. For the PromoteFloatRes case at
least it is possible to do better than that, but it's pretty tricky
(because we need to consider the interaction of three different
vector legalizations and the type promotion) and probably not
really worthwhile.

I haven't added ExpandFloatRes support, as I am not familiar with
ppc_fp128.

Differential Revision: https://reviews.llvm.org/D87569
2020-09-14 20:42:09 +02:00
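As a rough picture of what "expand the reduction" means, the single reduction node becomes a chain of scalar operations that are then legalized individually; a standalone C++ sketch for an fmax-style reduction (illustrative, not the SDAG code):

  #include <algorithm>
  #include <cstddef>
  #include <vector>

  // Expand a floating-point max reduction into a sequence of scalar max ops.
  // Assumes a non-empty input, like a fixed-width vector of known length.
  float expandVecReduceFMax(const std::vector<float> &v) {
    float acc = v.front();
    for (std::size_t i = 1; i < v.size(); ++i)
      acc = std::max(acc, v[i]); // each scalar op can be legalized recursively
    return acc;
  }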
Nikita Popov 8e69c3cde8 [DAGCombiner] Fold fmin/fmax with INF / FLT_MAX
Similar to D87415, this folds the various float min/max opcodes
with a constant INF or -INF operand, or FLT_MAX / -FLT_MAX operand
if the ninf flag is set. Some of the folds are only possible under
nnan.

The fminnum(X, INF) with nnan and fmaxnum(X, -INF) with nnan cases
are needed to improve the VECREDUCE_FMIN/FMAX lowerings on X86,
the rest is here for the sake of completeness.

Differential Revision: https://reviews.llvm.org/D87571
2020-09-14 19:59:33 +02:00
Rahman Lavaee 7841e21c98 Let -basic-block-sections=labels emit basicblock metadata in a new .bb_addr_map section, instead of emitting special unary-encoded symbols.
This patch introduces the new .bb_addr_map section feature which allows us to emit the bits needed for mapping binary profiles to basic blocks into a separate section.
The format of the emitted data is represented as follows. It includes a header for every function:

|  Address of the function                      |  -> 8 bytes (pointer size)
|  Number of basic blocks in this function (>0) |  -> ULEB128

The header is followed by a BB record for every basic block. These records are ordered in the same order as MachineBasicBlocks are placed in the function. Each BB Info is structured as follows:

|  Offset of the basic block relative to function begin |  -> ULEB128
|  Binary size of the basic block                       |  -> ULEB128
|  BB metadata                                          |  -> ULEB128  [ MBB.isReturn() OR MBB.hasTailCall() << 1  OR  MBB.isEHPad() << 2 ]

The new feature will replace the existing "BB labels" functionality with -basic-block-sections=labels.
The .bb_addr_map section scrubs the specially-encoded BB symbols from the binary and makes it friendly to profilers and debuggers.
Furthermore, the new feature reduces the binary size overhead from 70% bloat to only 12%.

For more information and results please refer to the RFC: https://lists.llvm.org/pipermail/llvm-dev/2020-July/143512.html

Reviewed By: MaskRay, snehasish

Differential Revision: https://reviews.llvm.org/D85408
2020-09-14 10:16:44 -07:00
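A rough standalone sketch of serializing one function's records in the layout described above (plain C++; the encoder is illustrative, assumes a little-endian 64-bit target, and is not the actual AsmPrinter code):

  #include <cstdint>
  #include <vector>

  // Append an unsigned LEB128 value to the output buffer.
  static void emitULEB128(std::vector<uint8_t> &out, uint64_t value) {
    do {
      uint8_t byte = value & 0x7f;
      value >>= 7;
      if (value != 0)
        byte |= 0x80; // more bytes follow
      out.push_back(byte);
    } while (value != 0);
  }

  struct BBEntry {
    uint64_t offset;   // offset of the block relative to function begin
    uint64_t size;     // binary size of the block
    uint64_t metadata; // isReturn | hasTailCall << 1 | isEHPad << 2
  };

  // One function's worth of data: 8-byte address + ULEB128 block count,
  // followed by one ULEB128-encoded record per basic block.
  void emitBBAddrMap(std::vector<uint8_t> &out, uint64_t funcAddress,
                     const std::vector<BBEntry> &blocks) {
    for (int i = 0; i < 8; ++i) // little-endian pointer-sized address
      out.push_back(static_cast<uint8_t>(funcAddress >> (8 * i)));
    emitULEB128(out, blocks.size());
    for (const BBEntry &bb : blocks) {
      emitULEB128(out, bb.offset);
      emitULEB128(out, bb.size);
      emitULEB128(out, bb.metadata);
    }
  }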
David Green 06fb4e9064 [CGP] Limit converting phi types to simple loads and stores
Instcombine limits converting phi types to simple loads and stores. This
does the same in codegenprepare, not processing phis that are not
simple.

Note that for volatile loads/stores, ISel will happily convert between float
and int. Atomics are more likely to always be integer. This just keeps
things simple and doesn't process either.

Differential Revision: https://reviews.llvm.org/D83770
2020-09-14 12:08:34 +01:00
Petar Avramovic 6e2a86ed5a AMDGPU/GlobalISel Check for NoNaNsFPMath in isKnownNeverSNaN
Check for NoNaNsFPMath function attribute in isKnownNeverSNaN.
Function attributes are held in 'TargetMachine.Options'.
Among other things, this allows selection of some patterns imported
in D87351 since G_FCANONICALIZE is not generated when isKnownNeverSNaN
returns true in lowerFMinNumMaxNum.

However, we notice some incorrect results since function attributes are
not correctly written in TargetMachine.Options when the next function is
processed. Take a look at @v_test_no_global_nnans_med3_f32_pat0_srcmod0:
it has "no-nans-fp-math"="false" but TargetMachine.Options still has it
set to true since the first function in the test file had this attribute
set to true. This will be fixed in D87511.

Differential Revision: https://reviews.llvm.org/D87456
2020-09-14 12:11:00 +02:00
Simon Pilgrim 00e5676cf6 [LegalizeDAG] Fix MSVC "result of 32-bit shift implicitly converted to 64 bits" warning. NFCI. 2020-09-14 11:09:43 +01:00
Jeremy Morse d3af441dfe [DebugInstrRef][1/9] Add fields for instr-ref variable locations
Add a DBG_INSTR_REF instruction and a "debug instruction number" field to
MachineInstr. The two allow variable values to be specified by
identifying where the value is computed, rather than the register it lies
in, like so:

  %0 = fooinst, debug-instr-number 1
  [...]
  DBG_INSTR_REF 1, 0

See the original RFC for motivation:
http://lists.llvm.org/pipermail/llvm-dev/2020-February/139440.html

This patch is NFCI; it only adds fields and other boilerplate.

Differential Revision: https://reviews.llvm.org/D85741
2020-09-14 10:06:52 +01:00
David Sherwood 15bff4dec4 [CodeGen] Fix bug in IncrementPointer
In an earlier patch I meant to add the correct flags to the ADD
node when incrementing the pointer, but forgot to pass them to
SelectionDAG::getNode.

Differential Revision: https://reviews.llvm.org/D87496
2020-09-14 08:03:55 +01:00
Yevgeny Rouban 88690a9658 [CodeGenPrepare] Fix zapping dead operands of assume
This patch fixes a problem with commit 52cc97a0.
A test case is created to demonstrate the crash caused by
the instruction iterator being invalidated by the recursive
removal of dead operands of assume. The solution restarts
from the block's first instruction in case CurInstIterator
is invalidated by RecursivelyDeleteTriviallyDeadInstructions().

Reviewed By: bkramer

Differential Revision: https://reviews.llvm.org/D87434
2020-09-14 11:46:34 +07:00
Craig Topper 56b33391d3 [SelectionDAG] Move ISD:PARITY formation from DAGCombine to SimplifyDemandedBits.
Previously, we formed ISD::PARITY by looking for (and (ctpop X), 1)
but the AND might be separated from the ctpop. For example if the
parity result is multiplied by 2, we'll pull the AND through the
shift.

So to handle more cases, move to SimplifyDemandedBits where we
can handle more cases that result in only the LSB of the CTPOP
being used.
2020-09-13 21:04:13 -07:00
Qiu Chaofan a4c5351986 [DAGCombiner] Propagate FMF flags in FMA folding
The DAG combiner folds (fma a 1.0 b) into (fadd a b), but the flag isn't
propagated into the new fadd. This patch fixes that.

Some code in visitFMA is redundant, and support for vector constants
is missing. A follow-up patch is needed to clean this up.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D87037
2020-09-14 00:19:06 +08:00
David Green 9237fde481 [CGP] Prevent optimizePhiType from iterating forever
The recently added optimizePhiType algorithm had no checks to make sure
it didn't continually iterate back and forth between float and int
types. This means that given an input like store(phi(bitcast(load))), we
could convert that back and forth to store(bitcast(phi(load))). This
particular case would usually have been simplified to a different load
type (folding the bitcast into the load) before CGP, but other cases can
occur. The one that came up was phi(bitcast(phi)), where the two phis
of different types had a bitcast between them. That was not helped by a dead
bitcast being kept around, which could make conversion look profitable.

This adds an extra check of the bitcast Uses or Defs, to make sure that
at least one is grounded and will not end up being converted back. It
also makes sure that dead bitcasts are removed, and there is a minor
change to include newly created Phi nodes in the Visited set so that
they do not need to be revisited.

Differential Revision: https://reviews.llvm.org/D82676
2020-09-13 16:11:01 +01:00
Craig Topper 61d29e0dff [LegalizeTypes] Remove a few cases from SplitVectorOperand that should never happen. NFC
CTTZ, CTLZ, CTPOP, and FCANONICALIZE all have the same input and
output types so the operand should have already been legalized when the
result type was legalized.
2020-09-12 20:59:14 -07:00
Craig Topper ad3d6f993d [SelectionDAG][X86][ARM][AArch64] Add ISD opcode for __builtin_parity. Expand it to shifts and xors.
Clang emits (and (ctpop X), 1) for __builtin_parity. If ctpop
isn't natively supported by the target, this leads to poor codegen
due to the expansion of ctpop being more complex than what is needed
for parity.

This adds a DAG combine to convert the pattern to ISD::PARITY
before operation legalization. Type legalization is updated
to handle Expanding and Promoting this operation. If after type
legalization, CTPOP is supported for this type, LegalizeDAG will
turn it back into CTPOP+AND. Otherwise LegalizeDAG will emit a
series of shifts and xors followed by an AND with 1.

I've avoided vectors in this patch to avoid more legalization
complexity for this patch.

X86 previously had a custom DAG combiner for this. This is now
moved to Custom lowering for the new opcode. There is a minor
regression in vector-reduce-xor-bool.ll, but a follow up patch
can easily fix that.

Fixes PR47433

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D87209
2020-09-12 11:42:18 -07:00
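The shift-and-xor fallback mentioned above boils down to a xor-shift reduction followed by an AND with 1; a standalone C++ sketch (illustrative only):

  #include <cstdint>

  // Parity of a 32-bit value without CTPOP: fold the bits together with
  // xors of ever-smaller shifts, then keep the least significant bit.
  uint32_t parity32(uint32_t x) {
    x ^= x >> 16;
    x ^= x >> 8;
    x ^= x >> 4;
    x ^= x >> 2;
    x ^= x >> 1;
    return x & 1;
  }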
Sanjay Patel 3a8ea8609b [Intrinsics] define semantics for experimental fmax/fmin vector reductions
As discussed on llvm-dev:
http://lists.llvm.org/pipermail/llvm-dev/2020-April/140729.html

This is hopefully the final remaining showstopper before we can remove
the 'experimental' from the reduction intrinsics.

No behavior was specified for the FP min/max reductions, so we have a
mess of different interpretations.

There are a few potential options for the semantics of these max/min ops.
I think this is the simplest based on current behavior/implementation:
make the reductions inherit from the existing llvm.maxnum/minnum intrinsics.
These correspond to libm fmax/fmin, and those are similar to the (now
deprecated?) IEEE-754 maxNum/minNum functions (NaNs are treated as missing
data). So the default expansion creates calls to libm functions.

Another option would be to inherit from llvm.maximum/minimum (NaNs propagate),
but most targets just crash in codegen when given those nodes because no
default expansion was ever implemented AFAICT.

We could also just assume 'nnan' semantics by default (we are already
assuming 'nsz' semantics in the maxnum/minnum intrinsics), but some targets
(AArch64, PowerPC) support the more defined behavior, so it doesn't make much
sense to not allow a tighter spec. Fast-math-flags (nnan) can be used to
loosen the semantics.

(Note that D67507 was proposed to update the LangRef to acknowledge the more
recent IEEE-754 2019 standard, but that patch seems to have stalled. If we do
update based on the new standard, the reduction instructions can seamlessly
inherit from whatever updates are made to the max/min intrinsics.)

x86 sees a regression here on 'nnan' tests because we have underlying,
longstanding bugs in FMF creation/propagation. Those need to be fixed apart
from this change (for example: https://llvm.org/PR35538). The expansion
sequence before this patch may not have been correct.

Differential Revision: https://reviews.llvm.org/D87391
2020-09-12 09:10:28 -04:00
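A small standalone illustration of the "NaN as missing data" behavior implied by inheriting the libm fmax/fmin semantics (plain C++):

  #include <cassert>
  #include <cmath>

  int main() {
    double nan = std::nan("");
    // With libm semantics, a NaN operand is treated as missing data:
    // the other operand is returned.
    assert(std::fmax(nan, 1.0) == 1.0);
    assert(std::fmin(2.0, nan) == 2.0);
    // Only when both operands are NaN is the result NaN.
    assert(std::isnan(std::fmax(nan, nan)));
    return 0;
  }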
Yuanfang Chen ad99e34c59 Revert "[NewPM][CodeGen] Introduce CodeGenPassBuilder to help build codegen pipeline"
This reverts commit 31ecf8d29d.
This reverts commit 3fdaa8602a.

There is a layering violation for Target->CodeGen.
2020-09-11 18:52:32 -07:00
Yuanfang Chen 31ecf8d29d [NewPM][CodeGen] Introduce CodeGenPassBuilder to help build codegen pipeline
Following up on D67687.
Please refer to the RFC here http://lists.llvm.org/pipermail/llvm-dev/2020-July/143309.html

`CodeGenPassBuilder` is the NPM counterpart of `TargetPassConfig` with below differences.
- Debugging features (MIR print/verify, disable pass, start/stop-before/after, etc.) living in `TargetPassConfig` are moved to use PassInstrument as much as possible. (Implementation also lives in `TargetPassConfig.cpp`)
- `TargetPassConfig` is a polymorphic base (virtual inheritance) to build the target-dependent pipeline whereas `CodeGenPassBuilder` is the CRTP base/helper to implement the target-dependent pipeline. The motivation is flexibility for targets to customize the pipeline, inlining opportunity, and fits the overall NPM value semantics design.
- `TargetPassConfig` is a legacy immutable pass to declare hooks for targets to customize some target-independent codegen layer behavior. This is partially ported to TargetMachine::options. The rest, such as `createMachineScheduler/createPostMachineScheduler`, are left out for now. They should be implemented in LLVMTargetMachine in the future.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D83608
2020-09-11 16:41:17 -07:00
Matt Arsenault 382b2b1b51 RegAllocFast: Fix typo in comment 2020-09-11 18:06:14 -04:00
Matt Arsenault e21bb31eb6 CodeGen: Require SSA to run PeepholeOptimizer 2020-09-11 18:03:04 -04:00
Jeremy Morse 0caeaff123 [LiveDebugValues][NFC] Re-land 60db26a66d, add instr-ref tests
This was landed but reverted in 5b9c2b1bea due to asan picking up a memory
leak. This is fixed in the change to InstrRefBasedImpl.cpp. Original
commit message follows:

[LiveDebugValues][NFC] Add instr-ref tests, adapt old tests

This patch adds a few tests in DebugInfo/MIR/InstrRef/ of interesting
behaviour that the instruction referencing implementation of
LiveDebugValues has. Mostly, these tests exist to ensure that if you
give the "-experimental-debug-variable-locations" command line switch,
the right implementation runs; and to ensure it behaves the same way as
the VarLoc LiveDebugValues implementation.

I've also touched roughly 30 other tests, purely to make the tests less
rigid about what output to accept. DBG_VALUE instructions are usually
printed with a trailing !debug-location indicating its scope:

  !debug-location !1234

However InstrRefBasedLDV produces new DebugLoc instances on the fly,
meaning there sometimes isn't a numbered node when they're printed,
making the output:

  !debug-location !DILocation(line: 0, blah blah)

Which causes a ton of these tests to fail. This patch removes checks for
that final part of each DBG_VALUE instruction. None of them appear to
be actually checking the scope is correct, just that it's present, so
I don't believe there's any loss in coverage here.

Differential Revision: https://reviews.llvm.org/D83054
2020-09-11 12:14:44 +01:00
Benjamin Kramer 5405ee553a [CodeGenPrepare] Simplify code. NFCI. 2020-09-11 11:24:08 +02:00
Martin Storsjö 46416f0803 [CodeGen] [WinException] Remove a redundant explicit section switch for aarch64
The following EmitWinEHHandlerData() implicitly switches to .xdata, just
like on x86_64.

This became orphaned from the original code requiring it in
0b61d220c9 / https://reviews.llvm.org/D61095.

Differential Revision: https://reviews.llvm.org/D87447
2020-09-11 10:31:04 +03:00
Alok Kumar Sharma e45b0708ae [DebugInfo] Fixing CodeView assert related to lowerBound field of DISubrange.
This fixes the CodeView build failure https://bugs.llvm.org/show_bug.cgi?id=47287
after the DISubrange upgrade in D80197.

The assert condition is now removed, and Count is calculated in case LowerBound
is absent or zero and Count or UpperBound is constant. If Count is unknown,
it is later handled as a VLA (currently Count is set to zero).

Reviewed By: rnk

Differential Revision: https://reviews.llvm.org/D87406
2020-09-11 11:12:49 +05:30
Reid Kleckner 2c73bef7fa Fix wrong comment about enabling optimizations to work around a bug 2020-09-10 16:45:20 -07:00
Reid Kleckner 4e3edef4b8 Use pragmas to work around MSVC x86_32 debug miscompile bug
Halide users reported this here: https://llvm.org/pr46176
I reported the issue to MSVC here:
https://developercommunity.visualstudio.com/content/problem/1179643/msvc-copies-overaligned-non-trivially-copyable-par.html

This codepath is apparently not covered by LLVM's unit tests, so I added
coverage in a unit test.

If we want to support this configuration going forward, it means that it is
in general not safe to pass a SmallVector<T, N> by value if alignof(T)
is greater than 4. This doesn't appear to come up often because passing
a SmallVector by value is inefficient and not idiomatic: it copies the
inline storage. In this case, the SmallVector<LLT,4> is captured by
value by a lambda, and the lambda is passed by value into std::function,
and that's how we hit the bug.

Differential Revision: https://reviews.llvm.org/D87475
2020-09-10 14:50:01 -07:00
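A hypothetical, reduced shape of the problematic pattern (the type and function names below are illustrative, not the actual unit test):

  #include <functional>
  #include <vector>

  // Stand-in for a non-trivially-copyable type whose alignment exceeds 4,
  // analogous to SmallVector<LLT, 4> with 8-byte-aligned inline storage.
  struct alignas(8) OveralignedVec {
    std::vector<int> data;
  };

  std::function<int()> makeCallback(OveralignedVec v) {
    // The lambda captures v by value; wrapping it in std::function forces a
    // copy of the over-aligned capture, the configuration reported as
    // miscompiled by MSVC x86_32 debug builds.
    return [v]() { return static_cast<int>(v.data.size()); };
  }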
Volkan Keles d4bf90271f GlobalISel: Combine fneg(fneg x) to x
https://reviews.llvm.org/D87473
2020-09-10 12:57:38 -07:00
Anna Thomas b1b9806370 [ImplicitNullChecks] NFC: Remove unused PointerReg arg in dep analysis
The PointerReg arg was passed into the dependence function for an
assertion which no longer exists. So, this patch updates the dependence
functions to remove PointerReg from the signature.

Tests-Run: make check
2020-09-10 15:31:57 -04:00
Anna Thomas 46329f6079 [ImplicitNullCheck] Handle instructions that preserve zero value
This is the first in a series of patches to make implicit null checks
more general. This patch identifies instructions that preserve the zero
value of a register and considers that as a valid instruction to hoist
along with the faulting load. See added testcases.

Reviewed-By: reames, dantrushin

Differential Revision: https://reviews.llvm.org/D87108
2020-09-10 13:39:50 -04:00
Simon Pilgrim f42f733af9 SwitchLoweringUtils.h - reduce TargetLowering.h include. NFCI.
Only include the headers we actually need, and move the remaining includes down to implicit dependent files.
2020-09-10 17:42:18 +01:00
Jay Foad 517202c720 [TargetLowering] Fix comments describing XOR -> OR/AND transformations 2020-09-10 13:56:34 +01:00
Kerry McLaughlin cd89f5c91b [SVE][CodeGen] Legalisation of truncate for scalable vectors
Truncating from an illegal SVE type to a legal type, e.g.
`trunc <vscale x 4 x i64> %in to <vscale x 4 x i32>`
fails after PromoteIntOp_CONCAT_VECTORS attempts to
create a BUILD_VECTOR.

This patch changes the promote function to create a sequence of
INSERT_SUBVECTORs if the return type is scalable, and replaces
these with UNPK+UZP1 for AArch64.

Reviewed By: paulwalker-arm

Differential Revision: https://reviews.llvm.org/D86548
2020-09-10 11:35:33 +01:00
Nikita Popov 0a5dc7effb [DAGCombiner] Fold fmin/fmax of NaN
fminnum(X, NaN) is X, fminimum(X, NaN) is NaN. This mirrors the
behavior of existing InstSimplify folds.

This is expected to improve the reduction lowerings in D87391,
which use NaN as a neutral element.

Differential Revision: https://reviews.llvm.org/D87415
2020-09-09 23:53:32 +02:00
Amara Emerson e5784ef8f6 [GlobalISel] Enable usage of BranchProbabilityInfo in IRTranslator.
We weren't using this before, so none of the MachineFunction CFG edges had the
branch probability information added. As a result, block placement later in the
pipeline was flying blind.

This is enabled only when optimizations are enabled, as with SelectionDAG.

Differential Revision: https://reviews.llvm.org/D86824
2020-09-09 14:31:12 -07:00
Amara Emerson 467a071285 [GlobalISel][IRTranslator] Generate better conditional branch lowering.
This is a port of the functionality from SelectionDAG, which tries to find
a tree of conditions from compares that are then combined using OR or AND,
before using that result as the input to a branch. Instead of naively
lowering the code as is, this change converts that into a sequence of
conditional branches on the sub-expressions of the tree.

Like SelectionDAG, we re-use the case block codegen functionality from
the switch lowering utils, which causes us to generate some different code.
The result of which I've tried to mitigate in earlier combine patches.

Differential Revision: https://reviews.llvm.org/D86665
2020-09-09 13:16:11 -07:00
Amara Emerson cc76da7ada [GlobalISel] Rewrite the elide-br-by-swapping-icmp-ops combine to do less.
This combine previously tried to take sequences like:
  %cond = G_ICMP pred, a, b
  G_BRCOND %cond, %truebb
  G_BR %falsebb
%truebb:
  ...
%falsebb:
  ...

and by inverting the compare predicate and swapping branch targets, delete the
G_BR and instead have a single conditional branch to the falsebb. Since in an
earlier patch we have a combine to fold not(icmp) into just an inverted icmp,
we don't need this combine to do as much. This patch instead generalizes the
combine by just looking for:
  G_BRCOND %cond, %truebb
  G_BR %falsebb
%truebb:
  ...
%falsebb:
  ...

and then inverting the condition using a not (xor). The xor can be folded away
in a separate combine. This change also lets us avoid some optimization code
in the IRTranslator.

I also think that deleting G_BRs in the combiner is unnecessary. That's
something that targets can decide to do at selection time and could simplify
generic code in future.

Differential Revision: https://reviews.llvm.org/D86664
2020-09-09 13:08:16 -07:00
Ulrich Weigand 1a25133bcd [DAGCombine] Skip re-visiting EntryToken to avoid compile time explosion
During the main DAGCombine loop, whenever a node gets replaced, the new
node and all its users are pushed onto the worklist.  Omit this if the
new node is the EntryToken (e.g. if a store managed to get optimized
out), because re-visiting the EntryToken and its users will not uncover
any additional opportunities, but there may be a large number of such
users, potentially causing compile time explosion.

This compile time explosion showed up in particular when building the
SingleSource/UnitTests/matrix-types-spec.cpp test-suite case on any
platform without SIMD vector support.

Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D86963
2020-09-09 19:13:46 +02:00
Alon Kom 818cf30b83 [MachinePipeliner] Fix II_setByPragma initialization
II_setByPragma was not reset between two calls of the MachinePipeliner pass.

Reviewed By: bcahoon

Differential Revision: https://reviews.llvm.org/D87088
2020-09-09 13:38:35 +00:00
Denis Antrushin 4358fa782e [Statepoints] Update DAG root after emitting statepoint.
Since we always generate CopyToRegs for statepoint results,
we must update DAG root after emitting statepoint, so that
these copies are scheduled before any possible local uses.
Note: getControlRoot() flushes all PendingExports, not only
those we generate for relocates. If that becomes a problem,
we can change it to flush relocate exports only.

Reviewed By: reames

Differential Revision: https://reviews.llvm.org/D87251
2020-09-09 20:22:10 +07:00
Simon Pilgrim d816499f95 [KnownBits] Move SelectionDAG::computeKnownBits ISD::ABS handling to KnownBits::abs
Move the ISD::ABS handling to a KnownBits::abs handler, to simplify future implementations in ValueTracking/GlobalISel.
2020-09-09 13:22:58 +01:00
Denis Antrushin 2a52c3301a [Statepoints] Properly handle const base pointer.
Current code in InstrEmitter assumes all GC pointers are either
VRegs or stack slots - hence, taking only one operand.
But it is possible to have a constant base, in which case it
occupies two machine operands.

Add a convenience function to StackMaps to get the index of the next
meta argument and use it in InstrEmitter to properly advance to
the next statepoint meta operand.

Reviewed By: reames

Differential Revision: https://reviews.llvm.org/D87252
2020-09-09 14:07:00 +07:00
Craig Topper 844e94a502 [SelectionDAGBuilder] Remove Unnecessary FastMathFlags temporary. Use SDNodeFlags instead. NFCI
This was a missed simplification in D87200.
2020-09-08 15:50:12 -07:00
Craig Topper b1e68f885b [SelectionDAGBuilder] Pass fast math flags to getNode calls rather than trying to set them after the fact.
This removes the after the fact FMF handling from D46854 in favor of passing fast math flags to getNode. This should be a superset of D87130.

This required adding a SDNodeFlags to SelectionDAG::getSetCC.

Now we manage to constant fold some cases with undefs during the
initial getNode that we don't fold in later DAG combines.

Differential Revision: https://reviews.llvm.org/D87200
2020-09-08 15:27:21 -07:00
Volkan Keles 1242dd330d GlobalISel: Combine `op undef, x` to 0
https://reviews.llvm.org/D86611
2020-09-08 09:46:38 -07:00
Simon Pilgrim 3c83b967cf LiveRegUnits.h - reduce MachineRegisterInfo.h include. NFC.
We only need to include MachineInstrBundle.h, but this exposes an implicit dependency in MachineOutliner.h.

Also, remove duplicate includes from LiveRegUnits.cpp + MachineOutliner.cpp.
2020-09-08 17:27:00 +01:00
Jonas Paulsson 6dc3e22b57 [DAGTypeLegalizer] Handle ZERO_EXTEND of promoted type in WidenVecRes_Convert.
On SystemZ, a ZERO_EXTEND of an i1 vector handled by WidenVecRes_Convert()
always ended up being scalarized, because the type action of the input is
promotion, which was previously an unhandled case in this method.

This fixes https://bugs.llvm.org/show_bug.cgi?id=47132.

Differential Revision: https://reviews.llvm.org/D86268

Patch by Eli Friedman.
Review: Ulrich Weigand
2020-09-08 16:49:51 +02:00
Craig Topper da79b1eecc [SelectionDAG][X86][ARM] Teach ExpandIntRes_ABS to use sra+add+xor expansion when ADDCARRY is supported.
Rather than using SELECT instructions, use SRA, UADDO/ADDCARRY and
XORs to expand ABS. This is the multi-part version of the sequence
we use in LegalizeDAG.

It's also the same as the Custom sequence uses for i64 on 32-bit
and i128 on 64-bit. So we can remove the X86 customization.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D87215
2020-09-07 13:15:26 -07:00
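On a single 64-bit value, the sra+add+xor sequence looks like the following standalone C++ sketch (the multi-part version additionally threads a carry through UADDO/ADDCARRY):

  #include <cstdint>

  // abs(x) without a select:
  //   sign = x >> 63 (arithmetic shift: all-ones if negative, zero otherwise)
  //   abs  = (x + sign) ^ sign
  int64_t absViaSraAddXor(int64_t x) {
    uint64_t sign = static_cast<uint64_t>(x >> 63);
    return static_cast<int64_t>((static_cast<uint64_t>(x) + sign) ^ sign);
  }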
Sanjay Patel 7a06b166b1 [DAGCombiner] allow more store merging for non-i8 truncated ops
This is a follow-up suggested in D86420 - if we have a pair of stores
in inverted order for the target endian, we can rotate the source
bits into place.
The "be_i64_to_i16_order" test shows a limitation of the current
function (which might be avoided if we integrate this function with
the other cases in mergeConsecutiveStores). In the earlier
"be_i64_to_i16" test, we skip the first 2 stores because we do not
match the full set as consecutive or rotate-able, but then we reach
the last 2 stores and see that they are an inverted pair of 16-bit
stores. The "be_i64_to_i16_order" test alters the program order of
the stores, so we miss matching the sub-pattern.

Differential Revision: https://reviews.llvm.org/D87112
2020-09-07 14:12:36 -04:00
Momchil Velikov eb482afaf5 Reduce the number of memory allocations when displaying
a warning about clobbering reserved registers (NFC).

Also address some minor inefficiencies and style issues.

Differential Revision: https://reviews.llvm.org/D86088
2020-09-07 17:04:00 +01:00
Simon Pilgrim 6670f5d1e6 MachineStableHash.h - remove MachineInstr.h include. NFC.
Use forward declarations and move the include to MachineStableHash.cpp
2020-09-07 13:33:48 +01:00
Simon Wallis 79ea83e104 [SelectionDAG] memcpy expansion of const volatile struct ignores const zero
In getMemcpyLoadsAndStores(), a memcpy where the source is a zero constant is expanded to a MemOp::Set instead of a MemOp::Copy, even when the memcpy is volatile.
This is incorrect.

The fix is to add a check for volatile, and expand to MemOp::Copy in the volatile case.

Reviewed By: chill

Differential Revision: https://reviews.llvm.org/D87134
2020-09-07 13:22:09 +01:00
Simon Pilgrim e57cbcbdc1 LegalizeTypes.h - remove orphan SplitVSETCC declaration. NFCI.
The implementation no longer exists
2020-09-07 13:11:49 +01:00
Jay Foad 713c2ad60c [GlobalISel] Extend not_cmp_fold to work on conditional expressions
Differential Revision: https://reviews.llvm.org/D86709
2020-09-07 09:31:08 +01:00
Jay Foad 5350e1b509 [KnownBits] Implement accurate unsigned and signed max and min
Use the new implementation in ValueTracking, SelectionDAG and
GlobalISel.

Differential Revision: https://reviews.llvm.org/D87034
2020-09-07 09:09:01 +01:00
Amara Emerson d0abc75749 [GlobalISel] Disable the indexed loads combine completely unless forced. NFC.
The post-index matcher, before it queries the target legality, walks uses
of some instructions which in pathological cases can be massive. Since
no targets actually support indexed loads yet, disable this to stop wasting
compile time on something which is going to fail anyway.
2020-09-05 21:04:03 -07:00
Jonas Paulsson 714ceefad9 [SelectionDAG] Always intersect SDNode flags during getNode() node memoization.
Previously SDNodeFlags::intersectWith(Flags) would do nothing if Flags was
in an undefined state, which is very bad given that this is the default when
getNode() is called without passing an explicit SDNodeFlags argument.

This meant that if an already existing and reused node had a flag which the
second caller to getNode() did not set, that flag would remain uncleared.

This was exposed by https://bugs.llvm.org/show_bug.cgi?id=47092, where an NSW
flag was incorrectly set on an add instruction (which did in fact overflow in
one of the two original contexts), so when SystemZElimCompare removed the
compare with 0 trusting that flag, wrong-code resulted.

There is more that needs to be done in this area as discussed here:

Differential Revision: https://reviews.llvm.org/D86871

Review: Ulrich Weigand, Sanjay Patel
2020-09-05 10:30:38 +02:00
Fangrui Song 398ba37230 [LiveDebugVariables] Delete unneeded doInitialization 2020-09-04 13:27:42 -07:00
Simon Pilgrim 7582c5c023 CallingConvLower.h - remove unnecessary MachineFunction.h include. NFC.
Reduce to forward declaration, add the Register.h include that we still needed, move CCState::ensureMaxAlignment into CallingConvLower.cpp as it was the only function that needed the full definition of MachineFunction.

Fix a few implicit dependencies further down.
2020-09-04 12:16:48 +01:00
David Sherwood 73a3d350a4 [SVE][CodeGen] Fix up warnings in sve-split-insert/extract tests
I have fixed up some more ElementCount/TypeSize related warnings in
the following tests:

  CodeGen/AArch64/sve-split-extract-elt.ll
  CodeGen/AArch64/sve-split-insert-elt.ll

In SelectionDAG::CreateStackTemporary we were relying upon the implicit
cast from TypeSize -> uint64_t when calling MachineFrameInfo::CreateStackObject.
I've fixed this by passing in the known minimum size instead, which I
believe is fine because the associated stack id indicates whether this
is a scalable object or not.

I've also fixed up a case in TargetLowering::SimplifyDemandedBits when
extracting a vector element from a scalable vector. The result is a scalar,
hence it wasn't caught at the start of the function. If the vector is
scalable we just bail out for now.

Differential Revision: https://reviews.llvm.org/D86431
2020-09-04 09:51:31 +01:00
Michael Liao bf41c4d29e [codegen] Ensure target flags are cleared/set properly. NFC.
- When an operand is changed into an immediate value or the like, ensure its
  target flags are cleared or set properly.

Differential Revision: https://reviews.llvm.org/D87109
2020-09-03 18:37:39 -04:00
Puyan Lotfi 7fff1fbd3c [MIRVRegNamer] Experimental MachineInstr stable hashing (Fowler-Noll-Vo)
This hashing scheme has been useful out of tree, and I want to start
experimenting with it. Specifically I want to experiment on the
MIRVRegNamer, MIRCanonicalizer, and eventually the MachineOutliner.

This diff is a first step, that optionally brings stable hashing to the
MIRVRegNamer (and as a result, the MIRCanonicalizer).  We've tested this
hashing scheme on a lot of MachineOperand types that llvm::hash_value
can not handle in a stable manner.

This stable hashing was also the basis for

"Global Machine Outliner for ThinLTO" in EuroLLVM 2020

http://llvm.org/devmtg/2020-04/talks.html#TechTalk_58

Credits: Kyungwoo Lee, Nikolai Tillmann

Differential Revision: https://reviews.llvm.org/D86952
2020-09-03 16:13:09 -04:00
Amy Huang 5fe33f7399 [DebugInfo] Make DWARF ignore sizes on forward declared class types.
Make sure the sizes for forward declared classes aren't emitted in
DWARF.

This comes before https://reviews.llvm.org/D87062, which adds sizes to
all classes with definitions.

Bug: https://bugs.llvm.org/show_bug.cgi?id=47338

Differential Revision: https://reviews.llvm.org/D87070
2020-09-03 11:01:49 -07:00
Simon Pilgrim 1673a08044 SelectionDAG.h - remove unnecessary FunctionLoweringInfo.h include. NFCI.
Use forward declarations and move the include down to dependent files that actually use it.

This also exposes a number of implicit dependencies on KnownBits.h
2020-09-03 18:33:25 +01:00
Simon Pilgrim 46780cc0ee PHIEliminationUtils.cpp - remove unnecessary MachineBasicBlock.h include. NFCI.
This is already included in PHIEliminationUtils.h
2020-09-03 17:43:34 +01:00
Simon Pilgrim 6731eb644a Fix Wdocumentation trailing comments warnings. NFCI. 2020-09-03 17:43:34 +01:00
Simon Pilgrim b196c7192f Fix Wdocumentation warning. NFCI.
Remove \returns tag from a void function
2020-09-03 17:43:34 +01:00
Simon Pilgrim 898e42db93 GlobalISel/Utils.h - remove unused includes. NFCI.
Twine is unused, and TargetLowering can be reduced to a forward declaration and moved to Utils.cpp
2020-09-03 15:59:12 +01:00
Simon Pilgrim 91848b11b4 LowerEmuTLS.cpp - remove unused TargetLowering.h include. NFC.
We only needed llvm/IR/Constants.h.
2020-09-03 14:40:09 +01:00
Amara Emerson 2878ecc90f [StackProtector] Fix crash with vararg due to not checking LocationSize validity.
Differential Revision: https://reviews.llvm.org/D87074
2020-09-03 00:08:48 -07:00
Craig Topper b16e8687ab [CodeGenPrepare][X86] Teach optimizeGatherScatterInst to turn a splat pointer into GEP with scalar base and 0 index
This helps SelectionDAGBuilder recognize the splat can be used as a uniform base.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D86371
2020-09-02 20:44:12 -07:00
Jay Foad 099c089d4b [APInt] New member function setBitVal
Differential Revision: https://reviews.llvm.org/D87033
2020-09-02 21:40:31 +01:00
Anna Thomas 425573a2fa [ImplicitNullChecks] NFC: Refactor dependence safety check
After computing dependence, we check if it is safe to hoist by
identifying if it clobbers any liveIns in the sibling block (NullSucc).
This check is moved to its own function which will be used in the
soon-to-be modified dependence checking algorithm for implicit null
checks pass.

Tests-Run: lit tests on X86/implicit-*
2020-09-02 10:29:44 -04:00
Anna Thomas 6f7737c468 [ImplicitNullChecks] NFC: Separated out checks and added comments
Separated out some checks in isSuitableMemoryOp and added comments
explaining why some of those checks are done.

Tests-Run:X86 implicit null checks tests.
2020-09-02 10:29:44 -04:00
Paul Walker f72121254d [SVE] Don't reorder subvector/binop sequences when the resulting binop is not legal.
When lowering fixed length vector operations for SVE the subvector
operations are used extensively to marshall data between scalable
and fixed-length vectors. This means that sequences like:

  extract_subvec(binop(insert_subvec(a), insert_subvec(b)))

are very common. DAGCombine only checks if the resulting binop is
legal or can be custom lowered when undoing such sequences. When
it's custom lowering that is introducing them, the result is an
infinite legalise->combine->legalise loop.

This patch extends the isOperationLegalOr... functions to include
a "LegalOnly" parameter to restrict the check to legal operations
only. Although isOperationLegal could be used it's common for
the affected code paths to be visited pre and post legalisation,
so the extra parameter keeps the code tidy.

Differential Revision: https://reviews.llvm.org/D86450
2020-09-02 11:01:33 +01:00
Sander de Smalen f13beac51b [AArch64][SVE] Preserve full vector regs over EH edge.
Unwinders may only preserve the lower 64bits of Neon and SVE registers,
as only the registers in the base ABI are guaranteed to be preserved
over the exception edge. The caller will need to preserve additional
registers for when the call throws an exception and the unwinder has
tried to recover state.

For example:

    svint32_t bar(svint32_t);
    svint32_t foo(svint32_t x, bool *err) {
      try { bar(x); } catch (...) { *err = true; }
      return x;
    }

`z0` needs to be spilled before the call to `bar(x)` and reloaded before
returning from foo, as the exception handler may have clobbered z0.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D84737
2020-09-02 10:54:18 +01:00
Igor Kudrin 3445ec9ba7 [DebugInfo] Emit a 1-byte value as a terminator of entries list in the name index.
As stated in section 6.1.1.2, DWARFv5, p. 142,
| The last entry for each name is followed by a zero byte that
| terminates the list. There may be gaps between the lists.

The patch changes emitting a 4-byte zero value to a 1-byte one, which
effectively removes the gap between entry lists, and thus saves
approximately 3 bytes per name; the calculation is not exact because
the total size of the table is aligned to 4.

Differential Revision: https://reviews.llvm.org/D86927
2020-09-02 16:12:39 +07:00
Igor Kudrin 71eed4808f [DebugInfo] Remove Dwarf5AccelTableWriter::Header::UnitLength. NFC.
The member is not in use; the unit length for the table is emitted as
a difference between two labels. Moreover, the type of the member might
be misleading, because for DWARF64 the field should be 64 bit long.

Differential Revision: https://reviews.llvm.org/D86912
2020-09-02 16:11:45 +07:00
Cameron McInally cfe2b81710 [SVE] Update INSERT_SUBVECTOR DAGCombine to use getVectorElementCount().
A small piece of the project to replace getVectorNumElements() with getVectorElementCount().

Differential Revision: https://reviews.llvm.org/D86894
2020-09-01 16:51:44 -05:00
Amara Emerson 520ab710fb Revert "Revert "[GlobalISel] Fold xor(cmp(pred, _, _), 1) -> cmp(inverse(pred), _, _)" (and dependent patch "Optimize away a Not feeding a brcond by using tbz instead of tbnz.")"
This reverts commit 8693ddc743.

Re-committing with the test requiring asserts.
2020-09-01 14:29:04 -07:00
Jordan Rupprecht 8693ddc743 Revert "[GlobalISel] Fold xor(cmp(pred, _, _), 1) -> cmp(inverse(pred), _, _)" (and dependent patch "Optimize away a Not feeding a brcond by using tbz instead of tbnz.")
This reverts commit 8ad8f484b6. It causes crashes when running `ninja check-llvm-codegen-aarch64-globalisel`, e.g.
http://lab.llvm.org:8011/builders/clang-with-thin-lto-ubuntu/builds/24132/steps/test-stage1-compiler/logs/stdio.
Note that the crash does not seem to reproduce in debug builds.

5ded444252 depends on this, so revert that too.
2020-09-01 13:31:57 -07:00
Craig Topper 4783e2c9c6 [MachineCopyPropagation] In isNopCopy, check the destination registers match in addition to the source registers.
Previously, if the sources matched, we asserted that the destinations
matched. But GPR <-> mask register copies on X86 can violate this
since we use the same K-registers for multiple sizes.

Fixes this ISPC issue https://github.com/ispc/ispc/issues/1851

Differential Revision: https://reviews.llvm.org/D86507
2020-09-01 12:44:32 -07:00
Amara Emerson 8ad8f484b6 [GlobalISel] Fold xor(cmp(pred, _, _), 1) -> cmp(inverse(pred), _, _)
This is needed for an upcoming change to how we translate conditional branches
which might generate these.

Differential Revision: https://reviews.llvm.org/D86383
2020-09-01 10:57:17 -07:00
Matt Arsenault 32a8a10b42 GlobalISel: Implement computeNumSignBits for G_SELECT 2020-09-01 12:50:19 -04:00
Matt Arsenault 35c94d3f7e GlobalISel: Port smarter known bits for umin/umax from DAG 2020-09-01 12:50:15 -04:00
Matt Arsenault 759482ddaa GlobalISel: Implement computeKnownBits for G_BSWAP and G_BITREVERSE 2020-09-01 12:49:57 -04:00
Sam Tebbs 15e880a04f [DAGCombiner] Fold an AND of a masked load into a zext_masked_load
This patch folds an AND of a masked load and build vector into a zero
extended masked load.

Differential Revision: https://reviews.llvm.org/D86789
2020-09-01 17:02:07 +01:00
Volkan Keles 061182b7ba GlobalISel: Add combines for extend operations
https://reviews.llvm.org/D86516
2020-09-01 08:50:06 -07:00
Matt Arsenault 9e7e1b2d4b GlobalISel: Implement computeNumSignBits for G_SEXTLOAD/G_ZEXTLOAD 2020-09-01 11:20:02 -04:00
Matt Arsenault 92090e8bd8 GlobalISel: Implement computeKnownBits for G_UNMERGE_VALUES 2020-09-01 11:19:27 -04:00
Sourabh Singh Tomar 03812041d8 [NFCI] Removed an unused declaration that was accidentally introduced in f91d18eaa9 2020-09-01 15:59:04 +05:30
David Sherwood 9fbb113247 [SVE][CodeGen] Fix TypeSize/ElementCount related warnings in sve-split-load.ll
I have fixed up a number of warnings resulting from TypeSize -> uint64_t
casts and calling getVectorNumElements() on scalable vector types. I
think most of the changes are fairly trivial except for those in
DAGTypeLegalizer::SplitVecRes_MLOAD, where I've tried to ensure we create
the MachineMemOperands in a sensible way for scalable vectors.

I have added a CHECK line to the following test:

  CodeGen/AArch64/sve-split-load.ll

that ensures no new warnings are added.

Differential Revision: https://reviews.llvm.org/D86697
2020-09-01 07:47:59 +01:00
Qiu Chaofan 5475154865 [NFC] [DAGCombiner] Refactor bitcast folding within fabs/fneg
fabs and fneg share a common transformation:

(fneg (bitconvert x)) -> (bitconvert (xor x sign))
(fabs (bitconvert x)) -> (bitconvert (and x ~sign))

This patch factors this code into a single method.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D86862
2020-09-01 00:48:12 +08:00
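A standalone C++ sketch of the two bit-level transforms being factored out, using memcpy in place of the bitconverts (illustrative only):

  #include <cstdint>
  #include <cstring>

  static uint64_t bitsOf(double d)     { uint64_t u; std::memcpy(&u, &d, 8); return u; }
  static double   doubleOf(uint64_t u) { double d; std::memcpy(&d, &u, 8); return d; }

  // (fneg (bitconvert x)) -> (bitconvert (xor x sign))
  double fnegViaXor(double x) { return doubleOf(bitsOf(x) ^ 0x8000000000000000ULL); }

  // (fabs (bitconvert x)) -> (bitconvert (and x ~sign))
  double fabsViaAnd(double x) { return doubleOf(bitsOf(x) & 0x7fffffffffffffffULL); }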
Qiu Chaofan eb2a405c18 [NFC] [DAGCombiner] Remove unnecessary negation in visitFNEG
In visitFNEG of DAGCombiner, the folding of (fneg (fsub c, x)) is
redundant since getNegatedExpression already handles it.
2020-09-01 00:35:01 +08:00
Sanjay Patel 1c9a09f42e [DAGCombiner] skip reciprocal divisor optimization for x/sqrt(x), better
I tried to fix this in:
rG716e35a0cf53
...but that patch depends on the order that we encounter the
magic "x/sqrt(x)" expression in the combiner's worklist.

This patch should improve that by waiting until we walk the
user list to decide if there's a use to skip.

The AArch64 test reveals another (existing) ordering problem
though - we may try to create an estimate for plain sqrt(x)
before we see that it is part of a 1/sqrt(x) expression.
2020-08-31 09:35:59 -04:00
Sanjay Patel 716e35a0cf [DAGCombiner] skip reciprocal divisor optimization for x/sqrt(x)
In general, we probably want to try the multi-use reciprocal
transform before sqrt transforms, but x/sqrt(x) is a special-case
because that will always reduce to plain sqrt(x) or an estimate.

The AArch64 tests show that the transform is limited by TLI
hook to patterns where there are 3 or more uses of the divisor.
So this change can result in an extra division compared to
what we had, but that's the intended behavior based on the
current setting of that hook.
2020-08-30 10:55:45 -04:00
Nikita Popov 51d34c0c53 [TargetLowering] Strip trailing whitespace (NFC) 2020-08-29 18:09:08 +02:00
Kai Luo b904324788 [DAGCombiner] Enhance (zext(setcc))
Currently, `v:t = zext(setcc x,y,cc)` will be transformed to `select x, y, 1:t, 0:t, cc`. It misses some opportunities if x's type size is less than `t`'s size. This patch enhances the above transformation.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D86687
2020-08-29 03:37:41 +00:00
Matt Arsenault 1b201914b5 GlobalISel: Combine out redundant sext_inreg
The scalar tests don't work yet, since computeNumSignBits apparently
doesn't handle sextload yet, and sext folds into the load first.
2020-08-28 17:57:31 -04:00
Jon Roelofs b15f2bd3ad [early-ifcvt] Add OptRemarks 2020-08-28 15:51:18 -06:00
Craig Topper aab90384a3 [Attributes] Add a method to check if an Attribute has AttrKind None. Use instead of hasAttribute(Attribute::None)
There's a special case in hasAttribute for None when pImpl is null. If pImpl is not null we dispatch to pImpl->hasAttribute which will always return false for Attribute::None.

So if we just want to check for None, it's sufficient to just check that pImpl is null, which can even be done inline.

This patch adds a helper for that case which I hope will speed up our getSubtargetImpl implementations.

Differential Revision: https://reviews.llvm.org/D86744
2020-08-28 13:23:45 -07:00
Benjamin Kramer 52cc97a0db [CodeGenPrepare] Zap the argument of llvm.assume when deleting it
We know that the argument is most likely dead, so we can purge it
early. Otherwise it would make it to codegen, and can block further
optimizations.
2020-08-28 20:52:22 +02:00
Snehasish Kumar 94faadaca4 [llvm][CodeGen] Machine Function Splitter
We introduce a codegen optimization pass which splits functions into hot and cold
parts. This pass leverages the basic block sections feature recently
introduced in LLVM from the Propeller project. The pass targets
functions with profile coverage, identifies cold blocks and moves them
to a separate section. The linker groups all cold blocks across
functions together, decreasing fragmentation and improving icache and
itlb utilization.

We evaluated the Machine Function Splitter pass on clang bootstrap and
SPECInt 2017.

For clang bootstrap we observe a mean 2.33% runtime improvement with a
~32% reduction in itlb and stlb misses. Additionally, L1 icache misses
reduced by 9.5% while L2 instruction misses reduced by 20%.

For SPECInt we report the change in IntRate for the C/C++
benchmarks. All benchmarks apart from mcf and x264 improve, on average
by 0.6% with the max for deepsjeng at 1.6%.

Benchmark		% Change
500.perlbench_r		 0.78
502.gcc_r		 0.82
505.mcf_r		-0.30
520.omnetpp_r		 0.18
523.xalancbmk_r		 0.37
525.x264_r		-0.46
531.deepsjeng_r		 1.61
541.leela_r		 0.83
557.xz_r		 0.15

Differential Revision: https://reviews.llvm.org/D85368
2020-08-28 11:10:14 -07:00
Denis Antrushin fabd4c1ae1 [Statepoint] Always spill base pointer.
There is a subtle problem with new statepoint lowering scheme
when base and pointers are the same (see PR46917 for more context):

%1 = STATEPOINT ... %0, %0(tied-def 0)...

if, for some reason, the register allocator decides to put two instances
of %0 into two different objects (registers or spill slots), we may
end up with

$reg3 = STATEPOINT ... $reg2, $reg1(tied-def 0)...

and nothing will prevent later passes from sinking uses of $reg2 below the
statepoint, which is incorrect.

As a short term solution, always put base pointers on stack during
lowering.
A longer term solution may be to rework MIR statepoint format to
avoid GC pointer duplication in statepoint argument list.

Reviewed By: reames

Differential Revision: https://reviews.llvm.org/D86712
2020-08-28 23:22:07 +07:00
Yonghong Song 443d352a1c [GlobalISel] fix a compilation error with gcc 6.3.0
With gcc 6.3.0, I hit the following compilation error:
  ../lib/CodeGen/GlobalISel/Combiner.cpp: In member function
      ‘bool llvm::Combiner::combineMachineInstrs(llvm::MachineFunction&,
       llvm::GISelCSEInfo*)’:
  ../lib/CodeGen/GlobalISel/Combiner.cpp:156:54: error: suggest parentheses
       around ‘&&’ within ‘||’ [-Werror=parentheses]
     assert(!CSEInfo || !errorToBool(CSEInfo->verify()) &&
                        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~
                            "CSEInfo is not consistent. Likely missing calls to "
                            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                            "observer on mutations");

Fix the code as suggested by the compiler.
2020-08-28 09:16:52 -07:00
QingShan Zhang deb4b25807 [DAGCombine] Don't delete the node if it has uses immediately
This is the follow-up patch for https://reviews.llvm.org/D86183: we missed the case where NegX == NegY, in which the node still has a use after we create the new node and must not be deleted.
```
    if (NegX && (CostX <= CostY)) {
      Cost = std::min(CostX, CostZ);
      RemoveDeadNode(NegY);
      return DAG.getNode(Opcode, DL, VT, NegX, Y, NegZ, Flags);  #<-- NegY is used here if NegY == NegX.
    }
```

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D86689
2020-08-28 16:13:43 +00:00
David Sherwood f4257c5832 [SVE] Make ElementCount members private
This patch changes ElementCount so that the Min and Scalable
members are now private and can only be accessed via the get
functions getKnownMinValue() and isScalable(). In addition I've
added some other member functions for more commonly used operations.
Hopefully this makes the class more useful and will reduce the
need for calling getKnownMinValue().
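
A minimal sketch of the shape described above (illustrative only; the real
class has more operations and members than shown here):

```
class ElementCount {
  unsigned Min;   // Minimum number of elements.
  bool Scalable;  // If true, the real count is a runtime multiple of Min.

public:
  ElementCount(unsigned Min, bool Scalable) : Min(Min), Scalable(Scalable) {}

  unsigned getKnownMinValue() const { return Min; }
  bool isScalable() const { return Scalable; }
};
```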

Differential Revision: https://reviews.llvm.org/D86065
2020-08-28 14:43:53 +01:00
Denis Antrushin 248a67f144 [Statepoint] Turn assert into check in foldPatchpoint.
Original D81646 had a check for tied regs in foldPatchpoint().
Due to an unfortunate miscommunication with review comments and
addressing some comments post-commit, it turned into an assertion.

We had an offline talk and agreed that with the current implementation
this path is possible, so I'm changing it back to a check.

Note that this is a workaround until the issues described in PR46917 are
resolved.
2020-08-28 20:00:23 +07:00
Sam Parker b30adfb529 [ARM][LowOverheadLoops] Liveouts and reductions
Remove the code that tried to look for reduction patterns, since the
vectorizer and isel can now produce predicated arithmetic instructions
within the loop body. This has required some reorganisation and fixes
around live-out and predication checks, as well as looking for cases
where an input/output is initialised to zero.

Differential Revision: https://reviews.llvm.org/D86613
2020-08-28 13:56:16 +01:00
Matt Arsenault 5feca7c9c3 GlobalISel: Implement computeNumSignBits for G_SEXT_INREG 2020-08-27 19:44:37 -04:00
Matt Arsenault f08bbde83f Correctly revert "GlobalISel: Use & operator on KnownBits"
I mis-resolved the revert through moving the code to another function.
2020-08-27 19:08:31 -04:00
Matt Arsenault 6cf4f25670 Revert "GlobalISel: Use & operator on KnownBits"
This reverts commit e53b799779.

Confusingly, this does not simply AND the two sets of known bits; it
implements known bits for the 'and' operator.
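
A hedged illustration of the difference (intersectKnownBits is an invented
name, not an LLVM API): and-ing the Zero and One sets separately keeps a bit
known only if both inputs know it, whereas operator& models the value (a & b),
where a bit is known zero if either input knows it is zero.

```
#include "llvm/Support/KnownBits.h"
using namespace llvm;

// Combine two facts so that a bit stays known only if both sides agree.
KnownBits intersectKnownBits(const KnownBits &A, const KnownBits &B) {
  KnownBits R(A.getBitWidth());
  R.Zero = A.Zero & B.Zero; // known zero only if both sides know it
  R.One = A.One & B.One;    // known one only if both sides know it
  return R;
}
// By contrast, the known bits of the value (a & b) are:
//   Zero = A.Zero | B.Zero;  One = A.One & B.One;
```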
2020-08-27 18:52:34 -04:00
Brad Smith d870e36326 [SSP] Restore setting the visibility of __guard_local to hidden for better code generation.
Patch by: Philip Guenther
2020-08-27 17:17:38 -04:00
Matt Arsenault abc99ab572 GlobalISel: Implement known bits for min/max 2020-08-27 16:56:17 -04:00
Matt Arsenault ee679638d7 MIR: Infer not-SSA for subregister defs
It's possible to have a single virtual register def with a subreg
index that would pass the previous check, but it's not possible to
have a subregister def in SSA.

This is in preparation for adding stricter checks for SSA MIR.
2020-08-27 16:56:16 -04:00
Eli Friedman 8d21985a75 [RegisterScavenging] Delete dead function unprocess(). 2020-08-27 13:19:32 -07:00
Matt Arsenault e53b799779 GlobalISel: Use & operator on KnownBits
Avoid repeating for zero and one
2020-08-27 14:07:18 -04:00
Matt Arsenault 531f7063ba GlobalISel: Implement known bits for G_MERGE_VALUES 2020-08-27 14:07:18 -04:00
Aditya Nandakumar db464a3dbf [GISel] Add new GISel combiners for G_SELECT
https://reviews.llvm.org/D83833

Patch adds two new GICombinerRules for G_SELECT. The rules include:
combining selects with undef comparisons into their first selectee value,
and combining away selects with constant comparisons. Patch additionally
adds a new combiner test for the AArch64 target to test these new G_SELECT
combiner rules and the existing select_same_val combiner rule.

Patch by mkitzan
2020-08-27 09:40:15 -07:00
Aditya Nandakumar 5c2db1655b [GISel]: Fix one more CSE Non determinism
https://reviews.llvm.org/D86676

Sometimes we can have the following code

 x:gpr(s32) = G_OP

Say we build G_OP2 to the same x and then delete the previous instruction. Using something like

 Register X = ...;
 auto NewMIB = CSEBuilder.buildOp2(X, ... args);

Currently there's a mismatch in how NewMIB is profiled and inserted into the CSEMap (i.e., it doesn't consider the register bank/register class along with the type). Unify the profiling by refactoring and calling the common method.

This was found by turning on CSEInfo::verify at the end of each of our GISel passes, which turns inconsistent state/non-determinism in CSEing into crashes; this usually indicates missing calls to the Observer on mutations (the most common case). Here non-determinism usually means failing to CSE in some cases, and almost never producing incorrect code.
Also this patch adds this verification at the end of the combiners as well.
2020-08-27 09:06:21 -07:00
Lucas Prates 3d943bcd22 [CodeGen] Properly propagating Calling Convention information when lowering vector arguments
When joining the legal parts of vector arguments into their original value
during the lowering of Formal Arguments in SelectionDAGBuilder, the Calling
Convention information was not being propagated for the handling of each
individual part. The same did not happen when lowering calls, causing a
mismatch.

This patch fixes the issue by properly propagating the Calling
Convention details.

This fixes Bugzilla #47001.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D86715
2020-08-27 17:01:10 +01:00
Drew Wock 0ec098e22b [FPEnv] Allow fneg + strict_fadd -> strict_fsub in DAGCombiner
This is the first of a set of DAGCombiner changes enabling strictfp
optimizations. I want to test the waters with this to make sure changes
like these are acceptable for the strictfp case - this particular change
should preserve exception ordering and result precision perfectly, and
many other possible changes appear to be able to as well.

Copied from regular fadd combines but modified to preserve ordering via
the chain, this change allows strict_fadd x, (fneg y) to become
strict_fsub x, y and strict_fadd (fneg x), y to become strict_fsub y, x.
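
A hedged sketch of the shape of this fold (not the exact in-tree code; the
helper name is invented): the strict node's chain operand is carried over to
the new strict_fsub so exception ordering is preserved.

```
#include "llvm/CodeGen/SelectionDAG.h"
using namespace llvm;

static SDValue foldStrictFAddOfFNeg(SDNode *N, SelectionDAG &DAG) {
  SDValue Chain = N->getOperand(0);
  SDValue X = N->getOperand(1);
  SDValue Y = N->getOperand(2);
  if (Y.getOpcode() != ISD::FNEG)
    return SDValue();
  // strict_fadd Chain, X, (fneg Y') -> strict_fsub Chain, X, Y'
  return DAG.getNode(ISD::STRICT_FSUB, SDLoc(N), N->getVTList(),
                     {Chain, X, Y.getOperand(0)});
}
```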

Differential Revision: https://reviews.llvm.org/D85548
2020-08-27 08:17:01 -04:00
OCHyams b6cca0ec05 Revert "[DWARF] Add cuttoff guarding quadratic validThroughout behaviour"
This reverts commit b9d977b0ca.

This cutoff is no longer required. The commit 34ffa7fc501 (D86153) introduces a
performance improvement which was tested against the motivating case for this
patch.

Discussed in differential revision: https://reviews.llvm.org/D86153
2020-08-27 11:52:30 +01:00
OCHyams 57d8acac64 [DwarfDebug] Improve validThroughout performance (4/4)
Almost NFC (see end).

The backwards scan in validThroughout significantly contributed to compile time
for a pathological case, causing the 'X86 Assembly Printer' pass to account for
roughly 70% of the run time. This patch guards the loop against running
unnecessarily, bringing the pass contribution down to 4%.

Almost NFC: There is a hack in validThroughout which promotes single constant
value DBG_VALUEs in the prologue to be live throughout the function. We're more
likely to hit this code path with this patch applied. Similarly to the parent
patches there is a small coverage change reported in the order of 10s of bytes.

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D86153
2020-08-27 11:52:30 +01:00
OCHyams 3c491881d2 [DwarfDebug] Improve multi-BB single location detection in validThroughout (3/4)
With the changes introduced in D86151 we can now check for single locations
which span multiple blocks for inlined scopes and blocks.

D86151 introduced the InstructionOrdering parameter, replacing a scan through
MBB instructions. The functionality to compare instruction positions across
blocks was added there, and this patch just removes the exit checks that were
previously (but no longer) required.

CTMark shows a geomean binary size reduction of 2.2% for RelWithDebInfo builds.
llvm-locstats (using D85636) shows a very small variable location coverage
change in 5 of 10 binaries, but just like in D86151 it is only in the order of
10s of bytes.

Reviewed By: djtodoro

Differential Revision: https://reviews.llvm.org/D86152
2020-08-27 11:52:29 +01:00
OCHyams 0b5a8050ea [DwarfDebug] Improve single location detection in validThroughout (2/4)
With this patch we're now accounting for two more cases which should be
considered 'valid throughout': First, where RangeEnd is ScopeEnd. Second, where
RangeEnd comes before ScopeEnd when including meta instructions, but both are
preceded by the same non-meta instruction.

CTMark shows a geomean binary size reduction of 1.5% for RelWithDebInfo builds.
`llvm-locstats` (using D85636) shows a very small variable location coverage
change in 2 of 10 binaries, but it is in the order of 10s of bytes which lines
up with my expectations.

I've added a test which checks both of these new cases. The first check in the
test isn't strictly necessary for this patch. But I'm not sure that it is
explicitly tested anywhere else, and is useful for the final patch in the
series.

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D86151
2020-08-27 11:52:29 +01:00
OCHyams e048ea7b1a [NFC][DebugInfo] Create InstructionOrdering helper class (1/4)
Group the map and methods used to query instruction ordering for trimVarLocs
(D82129) into a class. This will make it easier to reuse the functionality
in upcoming patches.

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D86150
2020-08-27 11:52:29 +01:00
Sander de Smalen 4e9b66de3f [AArch64][SVE] Add missing debug info for ACLE types.
This patch adds type information for SVE ACLE vector types,
by describing them as vectors, with a lower bound of 0, and
an upper bound described by a DWARF expression using the
AArch64 Vector Granule register (VG), which contains the
runtime multiple of 64-bit granules in an SVE vector.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D86101
2020-08-27 10:56:42 +01:00
Sam Parker a3e41d4581 [ARM] Make MachineVerifier more strict about terminators
Fix the ARM backend's analyzeBranch so it doesn't ignore predicated
return instructions, and make the MachineVerifier rule more strict.

Differential Revision: https://reviews.llvm.org/D40061
2020-08-27 07:10:20 +01:00
Matt Arsenault 5207545a86 GlobalISel: IRTranslate minimum of pointer sizes on memcpy
I forgot to squash this with 0b7f6cc71a
2020-08-26 20:10:00 -04:00
Matt Arsenault 0b7f6cc71a GlobalISel: Add generic instructions for memory intrinsics
AArch64, X86 and Mips currently consume these directly and custom
lower them to produce a libcall, but really these should follow the
normal legalization process through the libcall/lower action.
2020-08-26 20:08:45 -04:00
Alina Sbirlea 0b34226304 Use properlyDominates in RDFLiveness when sorting on dominance.
Summary:
When looking for all reaching definitions, we sort basic blocks on dominance. When sorting, using properlyDominates() handles the case A == B.

Authored by: pranavb

Differential Revision: https://reviews.llvm.org/D86661
2020-08-26 15:16:40 -07:00
Sanjay Patel 54a5dd485c [DAGCombiner] allow store merging non-i8 truncated ops
We have a gap in our store merging capabilities for shift+truncate
patterns as discussed in:
https://llvm.org/PR46662

I generalized the code/comments for this function in earlier commits,
so we only need to ease the type restriction and adjust the address/endian
checking to make this work.

AArch64 lets us switch endian to make sure that patterns are matched
either way.

Differential Revision: https://reviews.llvm.org/D86420
2020-08-26 15:23:08 -04:00
aartbik 72305a08ff [llvm] [DAG] Fix bug in llvm.get.active.lane.mask lowering
Previously, this intrinsic only accepted proper machine vector lengths.
This change fixes that and adds unit tests.

https://bugs.llvm.org/show_bug.cgi?id=47299

Reviewed By: SjoerdMeijer

Differential Revision: https://reviews.llvm.org/D86585
2020-08-26 10:16:31 -07:00
Craig Topper 28bd47fc47 [LegalizeTypes] Remove WidenVecRes_Shift and just use WidenVecRes_Binary
This function seems to allow for the shift amount to have a different type than the result, but I don't think we do that anywhere else for vector shifts. We also don't have any support for legalizing the shift amount alone if the result is legal and the shift amount type isn't. The code coverage report here shows this code as uncovered http://lab.llvm.org:8080/coverage/coverage-reports/coverage/Users/buildslave/jenkins/workspace/coverage/llvm-project/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp.html

Differential Revision: https://reviews.llvm.org/D86475
2020-08-26 09:57:41 -07:00
Jay Foad 75d159f924 [LegalizeTypes] Add ROTL/ROTR to ScalarizeVectorResult.
We can scalarize these just like any other binary operation.

Fixes https://bugs.llvm.org/show_bug.cgi?id=47303 caused by D77152.

Differential Revision: https://reviews.llvm.org/D86601
2020-08-26 14:42:57 +01:00
Matt Arsenault eb074088c9 GlobalISel: Combine G_ADD of G_PTRTOINT to G_PTR_ADD
This produces less work for addressing mode matching. I think this is
safe since I don't think machine IR is supposed to give the same
aliasing properties as getelementptr in the IR.
2020-08-26 08:57:15 -04:00
QingShan Zhang ebf3b188c6 [Scheduling] Implement a new way to cluster loads/stores
Before calling the target hook to determine if two loads/stores are clusterable,
we put them into different groups to avoid false clustering due to dependencies.
For now, we put the loads/stores into the same group if they have the same
predecessor. We assume that, if two loads/stores have the same predecessor,
it is likely that they do not depend on each other.

However, one SUnit might have several predecessors and, for now, we just pick
the first predecessor that has a non-data/non-artificial dependency, which is
too arbitrary, and we have struggled to fix it.

So I am proposing a better implementation:
1. Collect all the loads/stores that have memory info first, to reduce the complexity.
2. Sort these loads/stores so that we can stop the search as early as possible.
3. For each load/store, look for the first non-dependent instruction in the
   sorted order, and check whether the two can be clustered.

Reviewed By: Jay Foad

Differential Revision: https://reviews.llvm.org/D85517
2020-08-26 12:33:59 +00:00
Sam Tebbs 85dd852a0d [RDA] Don't visit the BB of the instruction in getReachingUniqueMIDef
If the basic block of the instruction passed to getUniqueReachingMIDef
is a transitive predecessor of itself and has a definition of the
register, the function will return that definition even if it is after
the instruction given to the function. This patch stops the function
from scanning the instruction's basic block to prevent this.

Differential Revision: https://reviews.llvm.org/D86607
2020-08-26 12:40:39 +01:00
Jay Foad b7e3599a22 [SelectionDAG] Handle non-power-of-2 bitwidths in expandROT
Differential Revision: https://reviews.llvm.org/D86449
2020-08-26 09:20:46 +01:00
Fangrui Song 82d0749749 [TargetLoweringObjectFileImpl] Make .llvmbc and .llvmcmd non-SHF_ALLOC
There are two ways .llvmbc can be produced:

* clang -c -fembed-bitcode=all (which also produces .llvmcmd)
* LTO backend: ld.lld -mllvm -lto-embed-bitcode or -plugin-opt=-lto-embed-bitcode

.llvmbc and .llvmcmd have the SHF_ALLOC flag, so they can be dropped by
--gc-sections.

This patch sets SectionKind::Metadata to drop the SHF_ALLOC flag. This
is conceptually correct: the two sections are not part of the process
image, so SHF_ALLOC is not appropriate.

`test/LTO/X86/embed-bitcode.ll`: changed `llvm-objcopy -O binary --only-section` to
`llvm-objcopy --dump-section`. `-O binary` does not dump non-SHF_ALLOC sections.

Reviewed By: tejohnson

Differential Revision: https://reviews.llvm.org/D86374
2020-08-25 13:37:29 -07:00
Sjoerd Meijer 39522b1e10 [SelectionDAG] Legalize intrinsic get.active.lane.mask
This adapts legalization of intrinsic get.active.lane.mask to the new semantics
as described in D86147. Because the second argument is now the loop tripcount,
we legalize this intrinsic to an 'icmp ULT' instead of the ULE that was used when
the second argument was the backedge-taken count.
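
A hedged scalar model of the new semantics (plain C++, for illustration only):
lane i of llvm.get.active.lane.mask(Base, TripCount) is active iff
Base + i < TripCount under an unsigned compare, which is why the legalization
now uses an unsigned less-than against the trip count.

```
#include <cstdint>
#include <vector>

std::vector<bool> activeLaneMask(uint64_t Base, uint64_t TripCount,
                                 unsigned NumLanes) {
  std::vector<bool> Mask(NumLanes);
  for (unsigned I = 0; I < NumLanes; ++I)
    Mask[I] = (Base + I) < TripCount; // unsigned 'ult' against the trip count
  return Mask;
}
```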

Differential Revision: https://reviews.llvm.org/D86302
2020-08-25 15:00:10 +01:00
Jeremy Morse 121a49d839 [LiveDebugValues] Add switches for using instr-ref variable locations
This patch adds the -Xclang option
"-fexperimental-debug-variable-locations" and same LLVM CodeGen option,
to pick which variable location tracking solution to use.

Right now all the switch does is pick which LiveDebugValues
implementation to use, the normal VarLoc one or the instruction
referencing one in rGae6f78824031. Over time, the aim is to add fragments
of support in aid of the value-tracking RFC:

  http://lists.llvm.org/pipermail/llvm-dev/2020-February/139440.html

also controlled by this command line switch. That will slowly move
variable locations to be defined by an instruction calculating a value,
and a DBG_INSTR_REF instruction referring to that value. Thus, this is
going to grow into a "use the new kind of variable locations" switch,
rather than just "use the new LiveDebugValues implementation".

Differential Revision: https://reviews.llvm.org/D83048
2020-08-25 14:58:48 +01:00
David Green 5b7e27a4db [ARM][CGP] Fix scalar condition selects for MVE
The arm backend does not handle select/select_cc on vectors with scalar
conditions, preferring to expand them in codegenprepare instead. This
usually works except when optimizing for size, where the optsize check
would end up overruling the backend isSelectSupported check.

We could handle the selects in ISel too, but this seems like smaller
code than trying to splat the condition to all lanes.

Differential Revision: https://reviews.llvm.org/D86433
2020-08-25 12:09:06 +01:00
Paul Walker 73ac3c0ede [SVE] Lower scalable vector ISD::FNEG operations.
Also updates isConstOrConstSplatFP to allow the mul(A,-1) -> neg(A)
transformation when -1 is expressed as an ISD::SPLAT_VECTOR.

Differential Revision: https://reviews.llvm.org/D86415
2020-08-25 11:22:28 +01:00
Sam Parker 85a5c65f69 [NFC][RDA] Add explicit def check
Explicitly check that there is a local def prior to the given
instruction in getReachingLocalMIDef instead of just relying on
a nullptr return from getInstFromId.
2020-08-25 08:37:45 +01:00
Venkataramanan Kumar 62e91bf563 [DAGCombine]: Fold X/Sqrt(X) to Sqrt(X)
With FMF ( "nsz" and " reassoc") fold X/Sqrt(X) to Sqrt(X).

This is done after targets have the chance to produce a
reciprocal sqrt estimate sequence because that expansion
is probably more efficient than an expansion of a
non-reciprocal sqrt. That is also why we deferred doing
this transform in IR (D85709).
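
A hedged sketch of the shape of the fold (not the exact in-tree code; the
helper name is invented): with reassoc and nsz present, a divide whose
divisor is fsqrt of the dividend folds to the fsqrt itself.

```
#include "llvm/CodeGen/SelectionDAGNodes.h"
using namespace llvm;

static SDValue foldDivBySqrtOfSelf(SDValue N0, SDValue N1, SDNodeFlags Flags) {
  if (Flags.hasAllowReassociation() && Flags.hasNoSignedZeros() &&
      N1.getOpcode() == ISD::FSQRT && N1.getOperand(0) == N0)
    return N1; // X / sqrt(X) == sqrt(X)
  return SDValue();
}
```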

Differential Revision: https://reviews.llvm.org/D86403
2020-08-24 18:16:13 -04:00
Craig Topper 43465a4375 [LegalizeTypes][X86] Add ROTL/ROTR to WidenVectorResult.
We can widen these just like any other binary operation.

Added test cases for v2i32 for X86 for coverage.

Fixes failures seen after D77152.
2020-08-24 10:10:20 -07:00
Jay Foad a522067692 [SDAG] Convert FSHL <--> FSHR if the target only supports one of them
D77152 tried to do this but got it wrong in the shift-by-zero case.
D86430 reverted the wrong code. Reimplement the optimization with
different code depending on whether the shift amount is known to be
non-zero (modulo bitwidth).

This improves code quality for fshl tests on AMDGPU, which only has an
fshr instruction.

Differential Revision: https://reviews.llvm.org/D86438
2020-08-24 17:47:10 +01:00
Matt Arsenault 517caca359 GlobalISel: Improve dead instruction debug printing
This was printing the "Is dead" on a separate line from the
instruction, which was harder to follow.
2020-08-24 10:12:00 -04:00
Matt Arsenault e1644a3779 GlobalISel: Reduce G_SHL width if source is extension
shl ([sza]ext x), y => zext (shl x, y).

Turns expensive 64-bit shifts into 32-bit shifts if the shift does not
overflow the source type.

This is a port of an AMDGPU DAG combine added in
5fa289f0d8. InstCombine does this
already, but we need to do it again here to apply it to shifts
introduced for lowered getelementptrs. This will help matching
addressing modes that use 32-bit offsets in a future patch.

TableGen annoyingly assumes only a single match data operand, so
introduce a reusable struct. However, this still requires defining a
separate GIMatchData for every combine which is still annoying.

Adds a morally equivalent function to the existing
getShiftAmountTy. Without this, we would have to repeatedly
query the legalizer info and guess at what type to use for the shift.
2020-08-24 09:42:40 -04:00
Bjorn Pettersson 7a4e26adc8 [SelectionDAG] Fix miscompile bug in expandFunnelShift
This is a fixup of commit 0819a6416f (D77152) which could
result in miscompiles. The miscompile could only happen for targets
where isOperationLegalOrCustom could return different values for
FSHL and FSHR.

The commit mentioned above added logic in expandFunnelShift to
convert between FSHL and FSHR by swapping direction of the
funnel shift. However, that transform is only legal if we know
that the shift count (modulo bitwidth) isn't zero.

Basically, since fshr(-1,0,0)==0 and fshl(-1,0,0)==-1 then doing a
rewrite such as fshr(X,Y,Z) => fshl(X,Y,0-Z) would be incorrect if
Z modulo bitwidth, could be zero.

```
$ ./alive-tv /tmp/test.ll

----------------------------------------
define i32 @src(i32 %x, i32 %y, i32 %z) {
%0:
  %t0 = fshl i32 %x, i32 %y, i32 %z
  ret i32 %t0
}
=>
define i32 @tgt(i32 %x, i32 %y, i32 %z) {
%0:
  %t0 = sub i32 32, %z
  %t1 = fshr i32 %x, i32 %y, i32 %t0
  ret i32 %t1
}
Transformation doesn't verify!
ERROR: Value mismatch

Example:
i32 %x = #x00000000 (0)
i32 %y = #x00000400 (1024)
i32 %z = #x00000000 (0)

Source:
i32 %t0 = #x00000000 (0)

Target:
i32 %t0 = #x00000020 (32)
i32 %t1 = #x00000400 (1024)
Source value: #x00000000 (0)
Target value: #x00000400 (1024)
```

It could be possible to add back the transform, given that logic
is added to check that (Z % BW) can't be zero. Since there were
no test cases proving that such a transform actually would be useful
I decided to simply remove the faulty code in this patch.

Reviewed By: foad, lebedev.ri

Differential Revision: https://reviews.llvm.org/D86430
2020-08-24 09:52:11 +02:00
Fangrui Song fd485673da [LiveDebugVariables] Internalize class DbgVariableValue. NFC 2020-08-23 22:53:46 -07:00
Qiu Chaofan 1bc45b2fd8 [PowerPC] Support lowering int-to-fp on ppc_fp128
D70867 introduced support for expanding most ppc_fp128 operations. But
sitofp/uitofp is missing. This patch adds that after D81669.

Reviewed By: uweigand

Differential Revision: https://reviews.llvm.org/D81918
2020-08-24 11:18:16 +08:00
QingShan Zhang 960cbc53ca [DAGCombine] Remove dead node when it is created by getNegatedExpression
We hit the compile-time problem reported by https://bugs.llvm.org/show_bug.cgi?id=46877
and the reason is the same as in D77319. So we need to remove the dead node we created
to avoid increasing the problem size of the DAGCombiner.

Reviewed By: Spatel

Differential Revision: https://reviews.llvm.org/D86183
2020-08-24 02:50:58 +00:00
Sanjay Patel 1d0fa79824 [DAGCombiner] restrict store merge of truncs to early combining
The pattern matching does not account for truncating stores,
so it is unlikely to work at later stages. So we are likely
wasting compile-time with no hope of improvement by running
this later.
2020-08-23 10:44:23 -04:00
Sanjay Patel 79cb289a95 [DAGCombiner] add early exit for store merging of truncs
This should be NFC in terms of output because the endian
check further down would bail out too, but we are wasting
time by waiting to that point to give up. If we generalize
that function to deal with more than i8 types, we should
not have to deal with the degenerate case.
2020-08-22 16:25:16 -04:00
Jeremy Morse 93af37043b Follow-up build fix for rGae6f78824031
One of the bots objects to brace-initializing a tuple:

  http://lab.llvm.org:8011/builders/clang-cmake-x86_64-sde-avx512-linux/builds/43595/steps/build%20stage%201/logs/stdio

The tuple constructor is apparently explicit, so fall back to the (not
as pretty) explicit construction of a tuple. I'd thought this was
permitted behaviour; will investigate why this fails later.
2020-08-22 19:09:30 +01:00
Fangrui Song 60bcec4eea [LiveDebugValues] Delete unneeded copy constructor after D83047
It will suppress the implicitly-declared copy assignment operator in C++20.
2020-08-22 10:55:28 -07:00
Jeremy Morse ae6f788240 [LiveDebugValues] Add instruction-referencing LDV implementation
This patch imports the instruction-referencing implementation of
LiveDebugValues proposed here:

  http://lists.llvm.org/pipermail/llvm-dev/2020-June/142368.html

The new implementation is unreachable in this patch, it's the next patch
that enables it behind a command line switch. Briefly, rather than
tracking variable locations by just their location as the 'VarLoc'
implementation does, this implementation does it by value:
 * Each value defined in a function is numbered, and propagated through
   dataflow,
 * Each DBG_VALUE reads a machine value number from a machine location,
 * Variable _values_ are propagated through dataflow,
 * Variable values are translated back into locations, DBG_VALUEs
   inserted to specify where those locations are.

The ultimate aim of this is to enable referring to variable values
throughout post-isel code, rather than locations. Later patches will
build on top of this new LiveDebugValues implementation
-- it can't be done with the VarLoc implementation as we don't have
value information, only locations.

Differential Revision: https://reviews.llvm.org/D83047
2020-08-22 18:31:08 +01:00
Matt Arsenault 901e3317fe GlobalISel: Merge FewerElements for G_BUILD_VECTOR/G_CONCAT_VECTORS
This switches from using G_EXTRACT in odd cases to widen with undef
and unmerge.
2020-08-22 10:25:53 -04:00
Jeremy Morse 2d9be9e318 Fix some builds after 20bb9fe565
-Wsuggest-override indicates this VarLocBasedLDV method needs the
override keyword.
2020-08-22 15:20:42 +01:00
Jeremy Morse 20bb9fe565 [LiveDebugValues] Install an implementation-picking LiveDebugValues pass
This patch renames the current LiveDebugValues class to "VarLocBasedLDV"
and removes the pass-registration code from it. It creates a separate
LiveDebugValues class that deals with pass registration and management,
that calls through to VarLocBasedLDV::ExtendRanges when
runOnMachineFunction is called. This is done through the "LDVImpl"
abstract class, so that a future patch can install the new
instruction-referencing LiveDebugValues implementation and have it
picked at runtime.

No functional change is intended, just shuffling responsibilities.

Differential Revision: https://reviews.llvm.org/D83046
2020-08-22 14:50:22 +01:00
Sanjay Patel 2fc7c85201 [DAGCombiner] clean up merge of truncated stores; NFC
This code handles the special-case of i8 stores,
but it could be generalized to deal with other types.
2020-08-22 09:23:32 -04:00
Jeremy Morse fba06e3c85 [LiveDebugValues][NFC] Move LiveDebugValues source for refactor
This is a pure file move of LiveDebugValues.cpp ahead of the pass being
refactored, with an experimental new implementation to follow.

The motivation for these changes can be found here:

  http://lists.llvm.org/pipermail/llvm-dev/2020-June/142368.html

And the other related changes can be found in the phabricator stack for
this revision:

Differential Revision: https://reviews.llvm.org/D83304
2020-08-22 12:58:30 +01:00
Sourabh Singh Tomar f91d18eaa9 [DebugInfo][flang]Added support for representing Fortran assumed length strings
This patch adds support for representing Fortran `character(n)`.

The patch is primarily based on D54114, with appropriate modifications.

Test case IR is generated using our downstream classic-flang. We're in the process
of upstreaming flang PRs, but classic-flang has dependencies on llvm, so
this has to get in first.

Patch includes functional test cases for both the IR and the corresponding
DWARF; furthermore, it has been manually tested using GDB.

Source snippet:
```
 program assumedLength
   call sub('Hello')
   call sub('Goodbye')
   contains
   subroutine sub(string)
           implicit none
           character(len=*), intent(in) :: string
           print *, string
   end subroutine sub
 end program assumedLength
```

GDB:
```
(gdb) ptype string
type = character (5)
(gdb) p string
$1 = 'Hello'
```

Reviewed By: aprantl, schweitz

Differential Revision: https://reviews.llvm.org/D86305
2020-08-22 10:13:40 +05:30
Nicolai Hähnle b37db11d95 MachineSSAUpdater: Allow initialization with just a register class
The register class is required for inserting PHIs, but the "current
virtual register" isn't actually used for anything, so let's remove it
while we're at it.

Differential Revision: https://reviews.llvm.org/D85602

Change-Id: I1e647f31570ef21a7ea8e20db3454178e98a6a8b
2020-08-21 23:04:35 +02:00
Jay Foad 0819a6416f [SelectionDAG] Better legalization for FSHL and FSHR
In SelectionDAGBuilder always translate the fshl and fshr intrinsics to
FSHL and FSHR (or ROTL and ROTR) instead of lowering them to shifts and
ORs. Improve the legalization of FSHL and FSHR to avoid code quality
regressions.
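
For reference, a hedged scalar model of the fshl semantics being legalized
(plain C++, 32-bit case only; an illustration, not the legalization code):

```
#include <cstdint>

// fshl concatenates Hi:Lo, shifts left by Amt modulo the bit width, and
// returns the top 32 bits; Amt == 0 returns Hi unchanged.
uint32_t fshl32(uint32_t Hi, uint32_t Lo, uint32_t Amt) {
  Amt %= 32;
  return Amt == 0 ? Hi : (Hi << Amt) | (Lo >> (32 - Amt));
}
```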

Differential Revision: https://reviews.llvm.org/D77152
2020-08-21 10:32:49 +01:00
Yevgeny Rouban 18bc400f97 [NewPM][PassInstrumentation] Add PreservedAnalyses parameter to AfterPass* callbacks
Both AfterPass and AfterPassInvalidated pass instrumentation
callbacks get an additional parameter of type PreservedAnalyses.
This patch was created by @fedor.sergeev. I have just slightly
changed it.

Reviewers: fedor.sergeev

Differential Revision: https://reviews.llvm.org/D81555
2020-08-21 16:10:42 +07:00
Justin Bogner 1283dca007 [GISel] Correct the known bits of G_ANYEXT
Known bits for G_ANYEXT was incorrectly using KnownBits::zext, causing
us to treat the high bits as zero even though they're (by definition)
unknown.
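
A hedged illustration of the difference (the helper name is invented, not an
LLVM API): for an any-extend the new high bits must stay unknown, whereas
KnownBits::zext would mark them as known zero.

```
#include "llvm/Support/KnownBits.h"
#include <cassert>
using namespace llvm;

KnownBits anyExtKnownBits(const KnownBits &Src, unsigned DstBitWidth) {
  assert(DstBitWidth > Src.getBitWidth() && "not an extension");
  KnownBits Result(DstBitWidth);
  // Zero-extending the Zero/One masks leaves the new high bits clear in
  // both, i.e. neither known-zero nor known-one: they remain unknown.
  Result.Zero = Src.Zero.zext(DstBitWidth);
  Result.One = Src.One.zext(DstBitWidth);
  return Result;
}
```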

Differential Revision: https://reviews.llvm.org/D86323
2020-08-20 17:17:04 -07:00
Jon Roelofs 74ca5275e9 Fix a couple of typos. NFC 2020-08-20 14:56:57 -06:00
Matt Arsenault 79ce9bb380 CodeGen: Don't drop AA metadata when splitting MachineMemOperands
Assuming this is used to split a memory access into smaller pieces,
the new access should still have the same aliasing properties as the
original memory access. As far as I can tell, this wasn't
intentionally dropped. It may be necessary to drop this if you are
moving the operand outside of the bounds of the original object in
such a way that it may alias another IR object, but I don't think any
of the existing users are doing this. Some of the uses widen into
unused alignment padding, which I think is OK.
2020-08-20 16:17:30 -04:00
Jay Foad 4aaf772542 [PeepholeOptimizer] Remove dead code
At this point we have already ruled out all def operands, so we can't
possibly see a dead implicit def operand.
2020-08-20 16:48:57 +01:00
Bjorn Pettersson b43235a76c [DebugInfo] Fix DwarfExpression::addConstantFP for float on big-endian
The byte swapping, when dealing with 4 byte (float) FP constants
in DwarfExpression::addConstantFP, added in commit ef8992b9f0
was not correct. It always performed byte swapping using a
uint64_t value. When dealing with 4-byte values the 4 interesting
bytes ended up in the big end of the uint64_t, but later we emitted
the 4 bytes at the little end. So we ended up with zeroes being
emitted and faulty debug information.

This patch simplifies things a bit, IMHO. Using the APInt
representation throughout the function, instead of looking at
the internal representation using getRawBytes and without using
reinterpret_cast etc. And using API.byteSwap() should result in
correct byte swapping independent of APInt being 4 or 8 bytes.
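
A hedged illustration of the point about APInt (not the patch itself):
byte-swapping a 32-bit bit pattern through a 32-bit APInt keeps the four
relevant bytes in place, unlike swapping inside a 64-bit integer.

```
#include "llvm/ADT/APInt.h"
using namespace llvm;

APInt swapFloatBits(uint32_t FloatBits) {
  APInt Bits(32, FloatBits);
  return Bits.byteSwap(); // 0xAABBCCDD -> 0xDDCCBBAA, still 32 bits wide
}
```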

Differential Revision: https://reviews.llvm.org/D86272
2020-08-20 11:48:05 +02:00
Konstantin Schwarz 7497b861f4 [GlobalISel][IRTranslator] Support PHI instructions in landingpad blocks
The check for the landingpad instructions was overly restrictive. In optimized builds PHI nodes can appear
before the landingpad instructions, resulting in a fallback to SelectionDAG.

This change relaxes the check to allow PHI nodes.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D86141
2020-08-20 10:49:31 +02:00
Matt Arsenault 31adc28d24 GlobalISel: Implement fewerElementsVector for G_CONCAT_VECTORS sources
This fixes <6 x s16> = G_CONCAT_VECTORS from <3 x s16> handling.
2020-08-19 18:53:24 -04:00
Sourabh Singh Tomar ef8992b9f0 Re-apply "[DebugInfo] Emit DW_OP_implicit_value for Floating point constants"
This patch was reverted in 7c182663a8 due to some failures
observed on PPC-based machines. The failures were due to an endianness issue and
long double representation issues.

The patch is revised to address the endianness issue. Furthermore, support
for emission of `DW_OP_implicit_value` for `long double` has been removed
(since it was unclean at the moment). Planning to handle this in
a clean way soon!

For more context, please refer to following review link.

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D83560
2020-08-20 01:39:42 +05:30
Sourabh Singh Tomar 9937872c02 Revert "[DebugInfo] Emit DW_OP_implicit_value for Floating point constants"
This reverts commit 15801f1619.
arc's land messed up! It removed the new commit message and took it
from revision.
2020-08-20 01:28:03 +05:30
Sourabh Singh Tomar 15801f1619 [DebugInfo] Emit DW_OP_implicit_value for Floating point constants
llvm is missing support for the DW_OP_implicit_value operation.
The DW_OP_implicit_value op is indispensable for cases such as
optimized-out long double variables.

For intro refer: DWARFv5 Spec Pg: 40 2.6.1.1.4 Implicit Location Descriptions

Consider the following example:
```
int main() {
        long double ld = 3.14;
        printf("dummy\n");
        ld *= ld;
        return 0;
}
```
when compiled with trunk `clang` as
`clang test.c -g -O1`, it produces the following location description
of the variable `ld`:
```
DW_AT_location        (0x00000000:
                     [0x0000000000201691, 0x000000000020169b): DW_OP_constu 0xc8f5c28f5c28f800, DW_OP_stack_value, DW_OP_piece 0x8, DW_OP_constu 0x4000, DW_OP_stack_value, DW_OP_bit_piece 0x10 0x40, DW_OP_stack_value)
                  DW_AT_name    ("ld")
```
Here one may notice that this representation is incorrect (the DWARF4
stack can only hold integers, and only up to the size of an address).
Here the variable size itself is `128` bits.
GDB and LLDB confirm this:
```
(gdb) p ld
$1 = <invalid float value>
(lldb) frame variable ld
(long double) ld = <extracting data from value failed>
```

GCC represents/uses DW_OP_implicit_value in these sorts of situations.
Based on the discussion with Jakub Jelinek regarding GCC's motivation
for using this, I concluded that DW_OP_implicit_value is most appropriate
in this case.

Link: https://gcc.gnu.org/pipermail/gcc/2020-July/233057.html

GDB seems happy after this patch (LLDB doesn't have support
for DW_OP_implicit_value):
```
(gdb) p ld
p ld
$1 = 3.14000000000000012434
```

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D83560
2020-08-20 01:20:40 +05:30
Matt Arsenault adbcc8e733 GlobalISel: Add TargetLowering member to LegalizerHelper 2020-08-19 14:50:35 -04:00
Matt Arsenault d64ad3f051 GlobalISel: Don't check for verifier enforced constraint
Loads are always required to have a single memory operand.
2020-08-19 14:15:38 -04:00
Matt Arsenault e95c08432a GlobalISel: Use Register 2020-08-19 13:45:31 -04:00
Mehdi Amini a407ec9b6d Revert "Revert "[NFC][llvm] Make the contructors of `ElementCount` private.""
Was reverted because MLIR/Flang builds were broken; these APIs have been
fixed in the meantime.
2020-08-19 17:26:36 +00:00
Mehdi Amini 4fc56d70aa Revert "[NFC][llvm] Make the contructors of `ElementCount` private."
This reverts commit 264afb9e6a.
(and dependent 6b742cc48 and fc53bd610f)

MLIR/Flang are broken.
2020-08-19 17:21:37 +00:00
Jessica Paquette d25b12bdc3 [GlobalISel] Add combine for (x & mask) -> x when (x & mask) == x
If we have a mask, and a value x, where (x & mask) == x, we can drop the AND
and just use x.

This is about a 0.4% geomean code size improvement on CTMark at -O3 for AArch64.

In AArch64, this is most useful post-legalization. Patterns like this often
show up when legalizing s1s, which must be extended to larger types.

e.g.

```
%cmp:_(s32) = G_ICMP ...
%and:_(s32) = G_AND %cmp, 1
```

Since G_ICMP only produces a single bit, there's no reason to mask it with the
G_AND.

Differential Revision: https://reviews.llvm.org/D85463
2020-08-19 10:20:57 -07:00
Francesco Petrogalli 264afb9e6a [NFC][llvm] Make the contructors of `ElementCount` private.
Differential Revision: https://reviews.llvm.org/D86120
2020-08-19 16:26:44 +00:00
David Sherwood 3f36561f69 [SVE][CodeGen] Fix scalable vector issues in DAGTypeLegalizer::GenWidenVectorLoads
In DAGTypeLegalizer::GenWidenVectorLoads the algorithm assumes it only
ever deals with fixed width types, hence the offsets for each individual
store never take 'vscale' into account. I've changed the code in that
function to use TypeSize instead of unsigned for tracking the remaining
load amount. In addition, I've changed the load loop to use the new
IncrementPointer helper function for updating the addresses in each
iteration, since this handles scalable vector types.

Also, I've added report_fatal_errors in GenWidenVectorExtLoads,
TargetLowering::scalarizeVectorLoad and TargetLowering::scalarizeVectorStores,
since these functions currently use a sequence of element-by-element
scalar loads/stores. In a similar vein, I've also added a fatal error
report in FindMemType for the case when we decide to return the element
type for a scalable vector type.

I've added new tests in

  CodeGen/AArch64/sve-split-load.ll
  CodeGen/AArch64/sve-ld-addressing-mode-reg-imm.ll

for the changes in GenWidenVectorLoads.

Differential Revision: https://reviews.llvm.org/D85909
2020-08-19 07:54:32 +01:00
Amara Emerson ed35344524 Use std::make_tuple instead of initializer lists to make a bot happy:
http://lab.llvm.org:8011/builders/clang-cmake-x86_64-avx2-linux
2020-08-18 14:55:52 -07:00
David Blaikie 1870b52f0c Recommit "PR44685: DebugInfo: Handle address-use-invalid type units referencing non-type units"
Originally committed as be3ef93bf5.
Reverted by b4bffdbadf due to bot
failures:
http://green.lab.llvm.org/green/job/clang-stage1-cmake-RA-expensive/17380/testReport/junit/LLVM/DebugInfo_X86/addr_tu_to_non_tu_ll/
http://45.33.8.238/win/22216/step_11.txt

MacOS failure due to testing Split DWARF which isn't compatible with
MachO.
Windows failure due to testing type units which aren't enabled on
Windows.

Fix both of these by applying an explicit x86 linux triple to the test.
2020-08-18 13:43:28 -07:00
Jessica Paquette bf36e90295 [GlobalISel][CallLowering] NFC: Unify flag-setting from CallBase + AttributeList
It's annoying to have to maintain multiple, nearly identical chains of if
statements which all set the same attributes.

Add a helper function, `addFlagsUsingAttrFn` which performs the attribute
setting.
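
A hedged sketch of the helper's shape (the exact signature and set of
attributes handled in the patch may differ): it takes a predicate answering
"does this argument carry attribute X?" and sets the corresponding ISD
argument flags, so callers only differ in where the predicate looks.

```
#include "llvm/CodeGen/TargetCallingConv.h"
#include "llvm/IR/Attributes.h"
#include <functional>
using namespace llvm;

static void addFlagsUsingAttrFn(
    ISD::ArgFlagsTy &Flags,
    const std::function<bool(Attribute::AttrKind)> &AttrFn) {
  if (AttrFn(Attribute::SExt))
    Flags.setSExt();
  if (AttrFn(Attribute::ZExt))
    Flags.setZExt();
  if (AttrFn(Attribute::InReg))
    Flags.setInReg();
  if (AttrFn(Attribute::StructRet))
    Flags.setSRet();
  if (AttrFn(Attribute::ByVal))
    Flags.setByVal();
}
```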

Then, use wrappers for that function in `lowerCall` and `setArgFlags`.

(Note that the flag-setting code in `setArgFlags` was missing the returned
attribute. There's no selection for this yet, so no test. It's an example of
the kind of thing this lets us avoid, though.)

Differential Revision: https://reviews.llvm.org/D86159
2020-08-18 11:07:33 -07:00
Jessica Paquette f29e6277ad [GlobalISel][CallLowering] Don't tail call with non-forwarded explicit sret
Similar to this commit:

faf8065a99

Testcase is pretty much the same as

test/CodeGen/AArch64/tailcall-explicit-sret.ll

Except it uses i64 (since we don't handle the i1024 return values yet), and
doesn't have indirect tail call testcases (because we can't translate those
yet).

Differential Revision: https://reviews.llvm.org/D86148
2020-08-18 11:06:57 -07:00
Matt Arsenault 5a15f6628e GlobalISel: Implement fewerElementsVector for G_INSERT_VECTOR_ELT
Add unit tests since AMDGPU will only trigger this for gigantic
vectors, and won't use the annoying odd sized breakdown case.
2020-08-18 13:51:19 -04:00
Amara Emerson 04a6ea5d77 [GlobalISel] Add a combine for sext_inreg(load x), c --> sextload x
This is restricted to single-use loads; if we fold them to sextloads we can
find more optimal addressing modes on AArch64.

This also fixes an overload of the MachineFunction::getMachineMemOperand() method,
which was incorrectly using the MF alignment instead of the MMO alignment.

Differential Revision: https://reviews.llvm.org/D85966
2020-08-18 10:42:15 -07:00
Amara Emerson 40e269ea6d [GlobalISel] Add a combine for ashr(shl x, c), c --> sext_inreg x, c'
By detecting this sign extend pattern early, we can uncover opportunities for
more optimizations.
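
A hedged plain-C++ illustration of the equivalence for 32-bit values and
c = 24 (assumes the usual two's-complement arithmetic right shift):

```
#include <cassert>
#include <cstdint>

// ashr(shl x, 24), 24 sign-extends the low 8 bits of x,
// which is exactly what sext_inreg x, 8 expresses.
int32_t viaShifts(int32_t X) {
  uint32_t Shifted = static_cast<uint32_t>(X) << 24; // shl x, 24
  return static_cast<int32_t>(Shifted) >> 24;        // ashr ..., 24
}

int32_t viaSextInReg(int32_t X) {
  return static_cast<int8_t>(X);                     // sext_inreg x, 8
}

int main() {
  for (int32_t X : {0, 1, 127, 128, 255, -1, 305419896})
    assert(viaShifts(X) == viaSextInReg(X));
  return 0;
}
```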

Differential Revision: https://reviews.llvm.org/D85965
2020-08-18 10:42:15 -07:00
Jessica Paquette 224a8c639e [GlobalISel][CallLowering] Look through call parameters for flags
We weren't looking through the parameters on calls at all.

E.g., say you had

```
declare i32 @zext(i32 zeroext %x)

...
%y = call i32 @zext(i32 %something)
...

```

At the point of the call, we wouldn't know that the %something should have the
zeroext attribute.

This sets flags in about the same way as
TargetLoweringBase::ArgListEntry::setAttributes.

Differential Revision: https://reviews.llvm.org/D86125
2020-08-18 08:48:56 -07:00
Nico Weber b4bffdbadf Revert "PR44685: DebugInfo: Handle address-use-invalid type units referencing non-type units"
This reverts commit be3ef93bf5.
Test fails on macOS and Windows, e.g. http://45.33.8.238/win/22216/step_11.txt
2020-08-18 08:40:36 -04:00
David Blaikie be3ef93bf5 PR44685: DebugInfo: Handle address-use-invalid type units referencing non-type units
Theory was that we should never reach a non-type unit (eg: type in an
anonymous namespace) when we're already in the invalid "encountered an
address-use, so stop emitting types for now, until we throw out the
whole type tree to restart emitting in non-type unit" state. But that's
not the case (prior commit cleaned up one reason this wasn't exposed
sooner - but also makes it easier to test/demonstrate this issue)
2020-08-17 21:42:00 -07:00
David Blaikie 24c3dabef4 DebugInfo: Emit class template parameters first, before members
This reads more like what you'd expect the DWARF to look like (from the
lexical order of C++ - template parameters come before members, etc),
and also happens to make it easier to tickle (& thus test) a bug related
to type units and Split DWARF I'm about to fix.
2020-08-17 21:42:00 -07:00
Matt Arsenault a128292b90 GlobalISel: Make type for lower action more consistently optional
Some of the lower implementations were relying on this; however, the
type was not always set, depending on which .lower* helper form you were
using. For instance, if you used an unconditional lower(), the type was
never set. Most of the lower actions do not benefit from a type
parameter, and just expand in terms of the original operation's types.

However, some lowerings could benefit from an additional type hint to
combine a promotion and an expansion. An example of this is for
add/sub sat. The DAG integer legalization tries to use smarter
expansions directly when promoting the integer type, and doesn't
always produce the same instruction with a wider type.

Treat this as an optional hint argument, that only means something for
specific lower actions. It may be useful to generalize this mechanism
to pass a full list of type indexes and desired types, but I haven't
run into a case like that yet.
2020-08-17 16:24:55 -04:00
Alexandre Ganea 98e01f56b0 Revert "Re-Re-land: [CodeView] Add full repro to LF_BUILDINFO record"
This reverts commit a3036b3863.

As requested in: https://reviews.llvm.org/D80833#2221866
Bug report: https://crbug.com/1117026
2020-08-17 15:49:18 -04:00
Sanjay Patel f925fd3304 [DAGCombiner] give magic number a name in getStoreMergeCandidates; NFC 2020-08-17 15:37:55 -04:00
Sanjay Patel 046b4a550a [DAGCombiner] reduce code duplication in getStoreMergeCandidates; NFC 2020-08-17 15:37:55 -04:00
Sanjay Patel 20c85fd1ab [DAGCombiner] simplify bool return in getStoreMergeCandidates; NFC 2020-08-17 15:37:55 -04:00
Sanjay Patel 52cd8f1ecb [DAGCombiner] clean up getStoreMergeCandidates(); NFC
1. Move bailouts and local var declarations.
2. Convert if-chain to switch on StoreSource with unreachable default.
2020-08-17 15:37:54 -04:00
Sanjay Patel 27708db3e3 [DAGCombiner] convert StoreSource if-chain to switch; NFC
The "isa" checks were less constrained because they allow
target constants, but the later matching code would bail
out on those anyway, so this should be slightly more
efficient.
2020-08-17 15:37:54 -04:00
Matt Arsenault a275acc4a9 GlobalISel: Early continue to reduce loop indentation 2020-08-17 13:51:08 -04:00
Matt Arsenault 5b53b17cd3 DAG: Add missing comment for transform 2020-08-17 10:01:12 -04:00
Matt Arsenault 924f31bc3c GlobalISel: Remove unnecessary check for copy type
COPY isn't allowed to change the type, but can mix no type with type.
2020-08-17 09:19:25 -04:00
Matt Arsenault 04a288f0f0 GlobalISel: Remove unnecessary llvm:: 2020-08-15 12:12:50 -04:00
Philip Reames a96fc4638b Remove deopt and gc transition arguments from gc.statepoint intrinsic
(Forgot to land this a couple of weeks back.)

In a recent series of changes, I've introduced support for using the respective operand bundle kinds on the statepoint. At the moment, code supports either/or, but there's no need to keep the old support around. For the moment, I am simply changing the specification and verifier to require zero length argument sets in the intrinsic.

The intrinsic itself is experimental. Given that, there's no forward serialization needed. The in tree uses and generation have already been updated to use the new operand bundle based forms, the only folks broken by the change will be those with frontends generating statepoints directly and the updates should be easy.

Why not go ahead and just remove the arguments entirely? Well, I plan to. But while working on this I've found that almost all of the arguments to the statepoint can be expressed via operand bundles or attributes. Given that, I'm planning a radical simplification of the arguments and figured I'd do one update not several small ones.

Differential Revision: https://reviews.llvm.org/D80892
2020-08-14 16:07:40 -07:00
Craig Topper c7a0b2684f [X86][MC][Target] Initial backend support a tune CPU to support -mtune
This patch implements initial backend support for a -mtune CPU controlled by a "tune-cpu" function attribute. If the attribute is not present X86 will use the resolved CPU from target-cpu attribute or command line.

This patch adds MC layer support for a tune CPU. Each CPU now has two sets of features stored in its GenSubtargetInfo.inc tables. These feature lists are passed separately to the Processor and ProcessorModel classes in tablegen. The tune list defaults to an empty list to avoid changes to non-X86 targets. This annoyingly increases the size of static tables on all targets as we now store 24 more bytes per CPU. I haven't quantified the overall impact, but I can if we're concerned.

One new test is added to X86 to show a few tuning features with mismatched tune-cpu and target-cpu/target-feature attributes to demonstrate independent control. Another new test is added to demonstrate that the scheduler model follows the tune CPU.

I have not added a -mtune to llc/opt or MC layer command line yet. With no attributes we'll just use the -mcpu for both. MC layer tools will always follow the normal CPU for tuning.

Differential Revision: https://reviews.llvm.org/D85165
2020-08-14 15:31:50 -07:00
Matt Arsenault 5c5e6d951e TableGen/GlobalISel: Partially handle immAllOnesV/immAllZerosV
These should really match either G_BUILD_VECTOR or
G_BUILD_VECTOR_TRUNC, but there doesn't seem to be an existing
mechanism for matching alternative opcodes. There is GIM_SwitchOpcode,
but it seems to assume it's only used for matcher optimization.

I could also omit any opcode check and rely on the matcher directly
checking the opcode, but the table optimizer currently assumes there
has to be an opcode check.

Also doesn't try to handle undef elements like the DAG version.
2020-08-14 13:55:30 -04:00
Jordan Rupprecht fd9187f746 [NFC] Silence variables unused in release builds 2020-08-14 08:35:58 -07:00
Denis Antrushin 1c80a6ce5f [Statepoints] FixupStatepoint: properly set isKill on spilled register.
When spilling a statepoint meta-arg register it is incorrect to blindly
mark it as killed - it may be used in non-meta args (e.g., as a call
parameter).
2020-08-14 22:19:20 +07:00
Denis Antrushin 5f6bee77fa [Statepoints] Spill GC Ptr regs in FixupStatepoints.
Extend the FixupStatepointCallerSaved pass with the ability to spill
statepoint GC pointer arguments (optionally allowing them on CSRs).
Special handling is required for invoke statepoints, because at the MI
level a single landing pad may be shared by multiple statepoints, so
we must ensure we spill the landing pad's live-ins into the same stack
slots.

Full statepoint refactoring change set is available at D81603.

Reviewed By: skatkov

Differential Revision: https://reviews.llvm.org/D81647
2020-08-14 20:21:19 +07:00
Yuanfang Chen a5ed20b549 [NewPM][CodeGen] Add machine code verification callback
D83608 needs this.

Reviewed By: aeubanks

Differential Revision: https://reviews.llvm.org/D85916
2020-08-13 16:13:01 -07:00