Commit Graph

113861 Commits

Sanjay Patel 3cd1aa88f9 [InstCombine] fold another shifty abs pattern to cmp+sel (PR36036)
The bug report:
https://bugs.llvm.org/show_bug.cgi?id=36036

...requests a DAG change for this, but an IR canonicalization
probably handles most cases. If we still want to match this
pattern in the backend, there's a proposal for that too:
D47831

Alive proofs including nsw/nuw cases that were first noted in:
D46988

https://rise4fun.com/Alive/Kmp

This patch is largely copied from the existing code that was
initially added with:
D40984
...but I didn't see much gain from trying to share code.
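
For illustration, a hedged C sketch (my own, not taken from the bug report) of the kind of shift-based abs idiom meant by "shifty abs" and the compare+select form it is canonicalized to; it assumes 32-bit int and arithmetic right shift of negative values:

```c
#include <stdio.h>

/* Shift-based abs: broadcast the sign bit with an arithmetic shift,
   then conditionally negate via xor and subtract.                   */
static int abs_shifty(int x) {
    int sign = x >> 31;          /* 0 when x >= 0, -1 when x < 0 */
    return (x ^ sign) - sign;
}

/* The canonical compare+select form (same INT_MIN caveat as abs()). */
static int abs_select(int x) {
    return x < 0 ? -x : x;
}

int main(void) {
    for (int x = -3; x <= 3; ++x)
        printf("%2d -> %d %d\n", x, abs_shifty(x), abs_select(x));
    return 0;
}
```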

llvm-svn: 334137
2018-06-06 21:58:12 +00:00
Matt Arsenault e9524f1fb3 AMDGPU: Custom lower v2f16 fneg/fabs with illegal f16
Fixes terrible code on targets without f16 support. The
legalization creates a mess that is difficult to recover
from. It should also avoid randomly breaking these tests
multiple times in sequence in future commits.

There are some regressions in cases where it happens to be better
to pull the source modifier after the conversion.

llvm-svn: 334132
2018-06-06 21:28:11 +00:00
Roman Lebedev cbf8446359 [InstCombine] PR37603: low bit mask canonicalization
Summary:
This is [[ https://bugs.llvm.org/show_bug.cgi?id=37603 | PR37603 ]].

https://godbolt.org/g/VCMNpS
https://rise4fun.com/Alive/idM

When doing bit manipulations, it is quite common to calculate some bit mask,
and apply it to some value via `and`.

The typical C code looks like:
```
int mask_signed_add(int nbits) {
    return (1 << nbits) - 1;
}
```
which is translated into (with `-O3`)
```
define dso_local i32 @mask_signed_add(i32) local_unnamed_addr #0 {
  %2 = shl i32 1, %0
  %3 = add nsw i32 %2, -1
  ret i32 %3
}
```

But there is a second, less readable variant:
```
int mask_signed_xor(int nbits) {
    return ~(-(1 << nbits));
}
```
which is translated into (with `-O3`)
```
define dso_local i32 @mask_signed_xor(i32) local_unnamed_addr #0 {
  %2 = shl i32 -1, %0
  %3 = xor i32 %2, -1
  ret i32 %3
}
```

Since we created such a mask, it is quite likely that we will use it in an `and` next.
And then we may get rid of the `not` op by folding it into `andn`.
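
As a sketch of that follow-up fold (my own illustration, assuming a BMI-capable target; `keep_low_bits` is a hypothetical helper, not code from this review):

```c
#include <stdint.h>

/* Build a low-bit mask in the xor form and apply it.  Written this way,
   val & ~(-1 << nbits), the final `not` can fold with the `and` into a
   single ANDN instruction on targets that have it.                      */
uint32_t keep_low_bits(uint32_t val, unsigned nbits) {
    uint32_t discard = (uint32_t)-1 << nbits;   /* assumes 0 < nbits < 32 */
    return val & ~discard;                      /* candidate for ANDN     */
}
```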

But now that I have actually looked:
https://godbolt.org/g/VTUDmU
_some_ backend changes will be needed too.
We clearly lose `bzhi` recognition.

Reviewers: spatel, craig.topper, RKSimon

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D47428

llvm-svn: 334127
2018-06-06 19:38:27 +00:00
Roman Lebedev 488d28d4e5 [X86] Emit BZHI when mask is ~(-1 << nbits))
Summary:
In D47428, I propose to choose `~(-(1 << nbits))` as the canonical form of low-bit-mask formation.
As can be seen from these tests, there is a reason for that.

AArch64 currently handles `~(-(1 << nbits))` better, but not the more traditional `(1 << nbits) - 1` (sic!).
It is the other way around for X86.
It would be much better to canonicalize.
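
For example (my own sketch, assuming BMI2 and the `_bzhi_u32` intrinsic from `<immintrin.h>`), all three functions below compute the same low-bit mask-and-apply, and the last is the single instruction we would like the other two to select:

```c
#include <stdint.h>
#include <immintrin.h>   /* _bzhi_u32; compile with -mbmi2 */

/* Keep the low nbits bits of x (0 < nbits < 32). */
uint32_t low_sub(uint32_t x, unsigned nbits)  { return x & ((1u << nbits) - 1); }
uint32_t low_xor(uint32_t x, unsigned nbits)  { return x & ~(-(1u << nbits)); }
uint32_t low_bzhi(uint32_t x, unsigned nbits) { return _bzhi_u32(x, nbits); }
```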

This patch is completely monkey-typing.
I don't really understand how this works :)
I have based it on the `// x & (-1 >> (32 - y))` pattern.

Also, when we only have `BMI`, I wonder if we could use `BEXTR` with `start=0`?

Related links:
https://bugs.llvm.org/show_bug.cgi?id=36419
https://bugs.llvm.org/show_bug.cgi?id=37603
https://bugs.llvm.org/show_bug.cgi?id=37610
https://rise4fun.com/Alive/idM

Reviewers: craig.topper, spatel, RKSimon, javed.absar

Reviewed By: craig.topper

Subscribers: kristof.beyls, llvm-commits

Differential Revision: https://reviews.llvm.org/D47453

llvm-svn: 334125
2018-06-06 19:38:16 +00:00
Krzysztof Parzyszek c1e712baa5 [Hexagon] Implement vector-pair zero as V6_vsubw_dv
llvm-svn: 334123
2018-06-06 19:34:40 +00:00
Craig Topper ef813a5226 [X86] Properly disassemble gather/scatter instructions where xmm4/ymm4/zmm4 are used as the index.
These encodings correspond to the cases in the normal encoding scheme where there is no index and our modrm reading code initially decodes it as such. The VSIB handling code tried to compensate for this, but failed to add the base needed to make later code do the right thing.

Fixes PR37712.

llvm-svn: 334121
2018-06-06 19:15:15 +00:00
Craig Topper d04cc8e640 [X86] Rename vy512mem->vy512xmem and vz256xmem->vz256mem.
The index size is represented by the letter after the 'v'. The number represents the memory size. If an 'x' appears after the number it means the index register can be from VR128X/VR256X instead of VR128/VR256.

As vy512mem uses a VR256X index it should have an x.
And vz256mem uses a VR512 index so it shouldn't have an x.

I admit these names kind of suck and are confusing.

llvm-svn: 334120
2018-06-06 19:15:12 +00:00
Simon Pilgrim aef5bdbea1 [X86][BtVer2] Add support for all vector instructions that should match the dependency-breaking 'zero-idiom'
As detailed in Agner's Microarchitecture doc (21.8 AMD Bobcat and Jaguar pipeline - Dependency-breaking instructions), all these instructions are dependency breaking and zero the destination register.

llvm-svn: 334119
2018-06-06 19:06:09 +00:00
Evandro Menezes b2c8244715 [AArch64, ARM] Add support for Samsung Exynos M4
Create a separate feature set for Exynos M4 and add test cases.

llvm-svn: 334115
2018-06-06 18:56:00 +00:00
Michael Berg cc1c4b6912 guard fsqrt with fmf sub flags
Summary:
This change uses FMF subflags, in addition to the global unsafe flag, to guard optimizations. These changes originated from D46483.
It contains only the fsqrt context.


Reviewers: spatel, hfinkel, arsenm

Reviewed By: spatel

Subscribers: hfinkel, wdng, andrew.w.kaylor, wristow, efriedma, nemanjai

Differential Revision: https://reviews.llvm.org/D47749

llvm-svn: 334113
2018-06-06 18:47:55 +00:00
Krzysztof Parzyszek 0da1fe3770 [Hexagon] Split CTPOP of vector pairs
llvm-svn: 334109
2018-06-06 18:03:29 +00:00
Petar Jovanovic 8cb6a521be Change the way TII isCopyInstr returns arguments (NFC)
Make TII isCopyInstr() return MachineOperands through a pointer-to-pointer
instead of via reference.

Patch by Nikola Prica.

Differential Revision: https://reviews.llvm.org/D47364

llvm-svn: 334105
2018-06-06 16:36:30 +00:00
David Green 25312b2b6c [GlobalMerge] Set the alignment on merged global structs
If no alignment is set, the ABI/preferred alignment of structs will be
used, which may be higher than required. This can lead to extra padding
and, in the end, an increase in data size.

Differential Revision: https://reviews.llvm.org/D47633

llvm-svn: 334099
2018-06-06 14:48:32 +00:00
Tim Northover 9b80060d7b InstCombine: ignore debug instructions during fence combine
We should never get different CodeGen based on whether the code is being
compiled in debug mode, so we must skip over @llvm.dbg.value (and similar)
calls.

Should fix at least the worst part of PR37690.

llvm-svn: 334090
2018-06-06 12:46:02 +00:00
Simon Pilgrim f06ff16049 Fix MSVC '*/' found outside of comment warning. NFCI.
llvm-svn: 334086
2018-06-06 11:10:11 +00:00
Ilya Biryukov 3c9c10649b Fix compilation of WebAssembly and RISCV after r334078
llvm-svn: 334085
2018-06-06 10:57:50 +00:00
Simon Pilgrim 3d14158891 [X86][BMI][TBM] Only demand bottom 16-bits of the BEXTR control op (PR34042)
Only the bottom 16 bits of BEXTR's control op are required (bits 7:0 = INDEX, bits 15:8 = LENGTH).
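
A hedged reference sketch of those semantics (my own, not code from the patch):

```c
#include <stdint.h>

/* BEXTR control word: bits 7:0 hold the start index, bits 15:8 the length;
   everything above bit 15 is ignored, which is why only the low 16 bits of
   the control operand need to be demanded.                                 */
uint32_t bextr_ref(uint32_t src, uint32_t control) {
    uint32_t start = control & 0xff;
    uint32_t len   = (control >> 8) & 0xff;
    if (start >= 32 || len == 0)
        return 0;
    uint32_t field = src >> start;
    if (len >= 32)
        return field;                   /* take every remaining bit */
    return field & ((1u << len) - 1);
}
```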

Differential Revision: https://reviews.llvm.org/D47690

llvm-svn: 334083
2018-06-06 10:52:10 +00:00
Peter Smith 57f661bd7d [MC] Pass MCSubtargetInfo to fixupNeedsRelaxation and applyFixup
On targets like Arm some relaxations may only be performed when certain
architectural features are available. As functions can be compiled with
differing levels of architectural support we must make a judgement on
whether we can relax based on the MCSubtargetInfo for the function. This
change passes through the MCSubtargetInfo for the function to
fixupNeedsRelaxation so that the decision on whether to relax can be made
per function. In this patch, only the ARM backend makes use of this
information. We must also pass the MCSubtargetInfo to applyFixup because
some fixups skip error checking on the assumption that relaxation has
occurred; to prevent code-generation errors, applyFixup must see the same
MCSubtargetInfo as fixupNeedsRelaxation.

Differential Revision: https://reviews.llvm.org/D44928

llvm-svn: 334078
2018-06-06 09:40:06 +00:00
Petar Jovanovic 326ec32403 [MIPS GlobalISel] Add lowerCall
Add minimal support to lower function calls.
Support only functions with arguments/return that go through registers
and have type i32.

Patch by Petar Avramovic.

Differential Revision: https://reviews.llvm.org/D45627

llvm-svn: 334071
2018-06-06 07:24:52 +00:00
Petr Hosek fc9b29bd61 [Support] Use zx_cache_flush on Fuchsia to flush instruction cache
Fuchsia doesn't use __clear_cache; instead it provides the zx_cache_flush
system call. Use it to flush the instruction cache.
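
A minimal sketch of what that looks like from C (my own illustration, assuming the documented `zx_cache_flush` signature and `ZX_CACHE_FLUSH_*` options; not code from the patch):

```c
#include <stddef.h>
#include <zircon/syscalls.h>   /* zx_cache_flush, ZX_CACHE_FLUSH_* (Fuchsia only) */

/* Make freshly written code visible to the instruction cache. */
static void flush_icache(void *start, size_t len) {
    /* Clean the data cache and invalidate the instruction cache over the range. */
    zx_status_t status =
        zx_cache_flush(start, len, ZX_CACHE_FLUSH_DATA | ZX_CACHE_FLUSH_INSN);
    (void)status;   /* a real caller would check for ZX_OK */
}
```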

Differential Revision: https://reviews.llvm.org/D47753

llvm-svn: 334068
2018-06-06 06:26:18 +00:00
Sanjay Patel 59313be8d3 [CodeGen] assume max/default throughput for unspecified instructions
This is a fix for the problem arising in D47374 (PR37678):
https://bugs.llvm.org/show_bug.cgi?id=37678

We may not have throughput info because it's not specified in the model 
or it's not available with variant scheduling, so assume that those
instructions can execute/complete at max-issue-width.

Differential Revision: https://reviews.llvm.org/D47723

llvm-svn: 334055
2018-06-05 23:34:45 +00:00
Amaury Sechet a79b6b3ef0 [Mips] Remove unneeded variants of ADDC/ADDE lowering
Summary: As it turns out, the lowering for the Mips16* family of targets is exactly the same as what the ops expand to, so the code handling them can be removed and the ops enabled only for the MipsSE* family of targets.

Reviewers: smaksimovic, atanasyan, abeserminji

Subscribers: sdardis, arichardson, llvm-commits

Differential Revision: https://reviews.llvm.org/D47703

llvm-svn: 334052
2018-06-05 22:13:56 +00:00
Guozhi Wei c4c6b548c5 [CodeGenPrepare] Move Extension Instructions Through Logical And Shift Instructions
The CodeGenPrepare pass moves extension instructions close to load instructions in a different BB, so they can be combined later. But in the current implementation the extension instructions can't move through logical and shift instructions. This patch enables this enhancement, so we can eliminate more extension instructions.
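
The identities that make this legal can be sketched in C (my own illustration, not code from the patch): zero-extending before or after an `and` or a logical shift right gives the same result, so the extension can be hoisted next to the load it feeds.

```c
#include <stdint.h>
#include <assert.h>

/* zext(x & c)  == zext(x) & zext(c)
   zext(x >> s) == zext(x) >> s       for logical shifts, 0 <= s < 32 */
int main(void) {
    uint32_t x = 0xdeadbeefu, c = 0x00ffff00u;
    unsigned s = 5;
    assert((uint64_t)(x & c)  == ((uint64_t)x & (uint64_t)c));
    assert((uint64_t)(x >> s) == ((uint64_t)x >> s));
    return 0;
}
```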

Differential Revision: https://reviews.llvm.org/D45537

This is re-commit of r331783, which was reverted by r333305. The performance regression was caused by some unlucky alignment, not a code generation problem.

llvm-svn: 334049
2018-06-05 21:03:52 +00:00
Zachary Turner 8ac1c38a72 [FileSystem] Remove OpenFlags param from several functions.
There was only one place in the entire codebase where a non-default
value was being passed, and that place was already hidden
in an implementation file.  So we can delete the extra parameter
and all existing clients continue to work as they always have,
while making the interface a bit simpler.

Differential Revision: https://reviews.llvm.org/D47789

llvm-svn: 334046
2018-06-05 19:58:26 +00:00
Matt Arsenault 57e541e87e AMDGPU: Preserve metadata when widening loads
Preserves the low bound of the !range. I don't think
it's legal to do anything with the top half since it's
theoretically reading garbage.

llvm-svn: 334045
2018-06-05 19:52:56 +00:00
Matt Arsenault 9224c00d2b AMDGPU: Use more custom insert/extract_vector_elt lowering
Apply to i8 vectors.

llvm-svn: 334044
2018-06-05 19:52:46 +00:00
Krzysztof Parzyszek b984ffcc71 [Hexagon] Add pattern to generate 64-bit neg instruction
llvm-svn: 334043
2018-06-05 19:52:39 +00:00
Krzysztof Parzyszek d8b093efef [Hexagon] Add more patterns for generating abs/absp instructions
llvm-svn: 334038
2018-06-05 19:00:50 +00:00
Michael Berg 96925fe0df guard fneg with fmf sub flags
Summary: This change uses FMF subflags, in addition to the global unsafe flag, to guard optimizations. These changes originated from D46483.

Reviewers: spatel, hfinkel

Reviewed By: spatel

Subscribers: nemanjai

Differential Revision: https://reviews.llvm.org/D47389

llvm-svn: 334037
2018-06-05 18:49:47 +00:00
Simon Dardis 0d95ff03f2 [mips] Fix the predicates for arithmetic operations
Reviewers: smaksimovic, atanasyan, abeserminji

Differential Revision: https://reviews.llvm.org/D47635

llvm-svn: 334031
2018-06-05 17:53:22 +00:00
Simon Pilgrim f2f043acbb [X86][SSE] Use multiplication scale factors for v8i16 SHL on pre-AVX2 targets.
Similar to v4i32 SHL, convert v8i16 shift amounts to scale factors instead to improve performance and reduce instruction count. We were already doing this for constant shifts, this adds variable shift support.
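
The scalar identity behind the scale factors (my own sketch; the packed-multiply instruction named in the comment is an assumption):

```c
#include <stdint.h>

/* For each 16-bit lane, x << s (mod 2^16) equals x * (1 << s) (mod 2^16)
   for 0 <= s < 16, so per-lane shift amounts become per-lane multipliers
   that a single packed multiply (e.g. pmullw) can apply.                 */
uint16_t shl_as_mul(uint16_t x, unsigned s) {
    uint16_t scale = (uint16_t)(1u << s);
    return (uint16_t)(x * scale);      /* same result as (uint16_t)(x << s) */
}
```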

Reduces the serial nature of the codegen, which relies on chains of pblendvb/pand+pandn+por shifts.

This is a step towards adding support for vXi16 vector rotates.

Differential Revision: https://reviews.llvm.org/D47546

llvm-svn: 334023
2018-06-05 15:17:39 +00:00
Nirav Dave 05b589101e [MC][X86] Allow assembler variable assignment to register name.
Summary:
Allow extended parsing of variable assembler assignment syntax and modify X86 to permit
VAR = register assignment. As we emit these as .set directives when possible, we inline
such expressions in output assembly.

Fixes PR37425.

Reviewers: rnk, void, echristo

Reviewed By: rnk

Subscribers: nickdesaulniers, llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D47545

llvm-svn: 334022
2018-06-05 15:13:39 +00:00
Matt Arsenault 191bc71541 DAG: Stop dropping invariant/dereferenceable
When legalizing illegal FP load results, this was
for some reason dropping the invariant and dereferenceable
memory flags. There doesn't seem to be any reason for this,
and the equivalent isn't done for integer loads.

Fixes an issue in a future AMDGPU commit where some identical
loads fail to merge because one of the loads ends up
dropping the flags.

llvm-svn: 334020
2018-06-05 14:52:24 +00:00
John Brawn e4ff0bd401 [InstCombine] Correct the cmp operand type used when canonicalizing abs/nabs
When adjusting a cmp in order to canonicalize an abs/nabs select pattern, we need
to use the type of the existing operand when creating a new operand, not the
type of a select operand, as the two may be different.

This fixes PR37686.

llvm-svn: 334019
2018-06-05 14:10:55 +00:00
Gabor Buella 1181f94ae4 [X86] NFC Fix typo introduced in r328016 HSI->HDI
llvm-svn: 334016
2018-06-05 12:55:12 +00:00
Krzysztof Parzyszek aafb8c204c [Hexagon] Minor cleanups in isel lowering
llvm-svn: 334015
2018-06-05 12:49:19 +00:00
Hiroshi Inoue 955655f558 [PowerPC] reduce rotate in BitPermutationSelector
BitPermutationSelector builds the output value by repeating rotate-and-mask instructions with input registers.
Here, we may avoid one rotate instruction if we start building from an input register that does not require rotation.

For example, in the test case bitfieldinsert.ll, it first rotates left r4 by 8 bits and then inserts some bits from r5 without rotation.
This can be executed by one rlwimi instruction, which rotates r4 by 8 bits and inserts its bits into r5.

This patch adds a check for rotation amounts in the comparator used in sorting to process the input without rotation first.

Differential Revision: https://reviews.llvm.org/D47765

llvm-svn: 334011
2018-06-05 11:58:01 +00:00
Simon Pilgrim fef9b6eea6 [X86][SSE] Add target shuffle support to X86TargetLowering::computeKnownBitsForTargetNode
Ideally we'd use resolveTargetShuffleInputs to handle faux shuffles as well but:
(a) that code path doesn't handle general/pre-legalized ops/types very well.
(b) I'm concerned about the compute time as they recurse to calls to computeKnownBits/ComputeNumSignBits which would need depth limiting somehow.

llvm-svn: 334007
2018-06-05 10:52:29 +00:00
Gabor Buella 349ffcee87 [X86] NFC Refactor some code in InstPrinters
Summary:
Bringing some code duplicated between the AT&T and the Intel printers
into a common parent class.

Reviewers: craig.topper

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D47682

llvm-svn: 334005
2018-06-05 10:41:39 +00:00
Peter Smith ef945b2240 [MC][ARM] Add range checking for Thumb2 resolved fixups.
When the branch target of a Thumb2 unconditional or conditional branch is
resolved at assembly time, no range checking is performed on the result,
leading to incorrect immediates. This change adds a range check:
+- 16 Megabytes for unconditional branches, +- 1 Megabyte for conditional
branches.
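
A rough sketch of the check being added (my own illustration using the ranges quoted above; not code from the patch):

```c
#include <stdbool.h>
#include <stdint.h>

/* The resolved fixup is a byte displacement from the branch.  An
   unconditional Thumb2 branch reaches roughly +-16 MiB, a conditional
   one roughly +-1 MiB; anything outside that must be rejected.         */
static bool branch_in_range(int64_t disp, bool is_conditional) {
    int64_t limit = is_conditional ? (1 << 20) : (1 << 24);  /* 1 MiB vs 16 MiB */
    return disp >= -limit && disp < limit;
}
```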

Differential Revision: https://reviews.llvm.org/D46306

llvm-svn: 333997
2018-06-05 10:00:56 +00:00
Simon Pilgrim 7bbe7a2920 [X86][SSE] Add basic PACKUS support to X86TargetLowering::computeKnownBitsForTargetNode
Helps improve analysis of saturation ops

llvm-svn: 333995
2018-06-05 09:45:03 +00:00
Peter Smith 0aafe0cee5 [MC][ARM] Correct Thumb BL instruction range
The Thumb BL range is + or - either 16 Megabytes or 4 Megabytes depending
on whether the CPU supports Thumb2 or the v8-m baseline ops. The existing
check for BL range is incorrectly set at +- 32 Megabytes. This change
corrects the higher range and uses the lower range if the feature bits
don't have the necessary support for it.

Differential Revision: https://reviews.llvm.org/D46305

llvm-svn: 333991
2018-06-05 09:32:28 +00:00
Alexander Ivchenko 964b27fa21 [X86][CET] Shadow stack fix for setjmp/longjmp
This is the new version of D46181, allowing setjmp/longjmp
to work correctly with the Intel CET shadow stack by storing
SSP on setjmp and fixing it on longjmp. The patch has been
updated to use the cf-protection-return module flag instead
of HasSHSTK, and the bug that caused D46181 to be reverted
has been fixed with the test expanded to track that fix.

patch by mike.dvoretsky

Differential Revision: https://reviews.llvm.org/D47311

llvm-svn: 333990
2018-06-05 09:22:30 +00:00
Craig Topper f17b33d6c6 [X86] Make all instructions that operate on MMX types, but were added after the initial MMX support via one of the SSE feature flags, require the MMX feature as well.
Passing -mattr=-mmx needs to disable these instructions since the MMX register class won't have been set up. But we don't want -mattr=-mmx to disable SSE so we have to do it separately.

llvm-svn: 333984
2018-06-05 06:20:06 +00:00
Nirav Dave e5eb99668c [RegAllocGreedy] Use simpler map class for EvicteeInfo. NFCI.
RegAlloc keeps an insertion-time-ordered map of evictee information,
but we only use membership. Replace the MapVector with a contextually
equivalent DenseMap, which is smaller and faster.

llvm-svn: 333981
2018-06-05 03:16:28 +00:00
Francis Visoiu Mistrih 2c0ef67327 Use MF instead of Fn for MachineFunction references. NFC
llvm-svn: 333973
2018-06-05 00:27:28 +00:00
Francis Visoiu Mistrih ca69b3bf6d [ShrinkWrap] Add optimization remarks to the shrink-wrapping pass
Start by emitting remarks for very basic unsupported cases such as
irreducible CFGs and EHFunclets. The end goal is to be able to cover all
the cases where we give up with an explanation.

llvm-svn: 333972
2018-06-05 00:27:24 +00:00
Amara Emerson d496cc8ffb [MIRParser] Add parser support for 'true' and 'false' i1s.
We already output true and false in the printer, but the parser isn't able to
read them.

Differential Revision: https://reviews.llvm.org/D47424

llvm-svn: 333970
2018-06-05 00:17:13 +00:00
Reid Kleckner adcaddb6da Fix -Wcovered-switch-default warning and clang-format it
llvm-svn: 333967
2018-06-04 23:47:29 +00:00
David Blaikie 10d25ffe7d Move Compiler.h from Demangle back to Support
Code review feedback from r328123 prefers copying the few feature-test
macros used by Demangle into that library, rather than sinking the header
into an odd corner like Demangle.

llvm-svn: 333965
2018-06-04 22:53:38 +00:00