Commit Graph

15821 Commits

Author SHA1 Message Date
Sanjay Patel f68a8c4779 update test to use FileCheck for tighter checking
llvm-svn: 269122
2016-05-10 21:42:09 +00:00
Quentin Colombet 220f7da488 [X86] Properly check that EAX is dead when copying EFLAGS.
This fixes a bug introduced in r267623, where we got smarter and avoided saving
EAX before using it. However, we failed to check whether any of the subregisters
of EAX were alive and thus missed cases where we have to save EAX before using it.

The problem may happen on every X86/i386/... platform.

This fixes llvm.org/PR27624

llvm-svn: 269115
2016-05-10 20:49:46 +00:00
Tim Northover b5ece527a1 ARM: stop emitting blx instructions for most calls on MachO.
I'm really not sure why we were emitting them in the first place; it's the
linker's job to convert between BL/BLX as necessary. Even worse, using BLX left
Thumb calls that could be locally resolved completely unencodable, since all
offsets to BLX are multiples of 4.

rdar://26182344

llvm-svn: 269101
2016-05-10 19:17:47 +00:00
Rafael Espindola 32483a7641 Make "@name =" mandatory for globals in .ll files.
An oddity of the .ll syntax is that the "@var = " in

@var = global i32 42

is optional. Writing just

global i32 42

is equivalent to

@0 = global i32 42

This means that there is a pretty big First set at the top level. The
current implementation maintains it manually. I was trying to refactor
it, but then started wondering why keep it at all. I personally find the
above syntax confusing. It looks like something is missing.

This patch removes the feature and simplifies the parser.

llvm-svn: 269096
2016-05-10 18:22:45 +00:00
Mandeep Singh Grang e5a2f116d6 Fix PR26655: Bail out if all regs of an inst BUNDLE have the correct kill flag
Summary:
While setting kill flags on instructions inside a BUNDLE, we bail out as soon
as we set the kill flag on a register. But we were missing a check for when all
the registers already have the correct kill flag set. We need to bail out in
that case as well.

This patch refactors the old code and simply makes use of the addRegisterKilled
function in MachineInstr.cpp in order to determine whether to set/remove kill
on an instruction.

Reviewers: apazos, t.p.northover, pete, MatzeB

Subscribers: MatzeB, davide, llvm-commits

Differential Revision: http://reviews.llvm.org/D17356

llvm-svn: 269092
2016-05-10 17:57:27 +00:00
Dan Gohman 2e64438ae4 [WebAssembly] Preliminary fast-isel support.
llvm-svn: 269083
2016-05-10 17:39:48 +00:00
Simon Pilgrim d86b2d6b1e [X86][AVX512] Added another masked shuffle combine from load test
llvm-svn: 269077
2016-05-10 16:55:20 +00:00
Krzysztof Parzyszek a356bb7fa4 [ScheduleDAG] Make sure to process all def operands before any use operands
An example from Hexagon where things went wrong:
  %R0<def> = L2_loadrigp <ga:@fp04>      ; load function address
  J2_callr %R0<kill>, ..., %R0<imp-def>  ; call *R0, return value in R0

ScheduleDAGInstrs::buildSchedGraph would visit all instructions going
backwards, and in each instruction it would visit all operands in their
order on the operand list. In the case of this call, it visited the use
of R0 first, then removed it from the set Uses after it visited the def.
This caused the DAG to be missing the data dependence edge on R0 between
the load and the call.
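A rough sketch of the reordering, with the per-operand handling abstracted
behind placeholder helpers (handleDefOperand/handleUseOperand are invented
names, not the actual code):

  // Process every register def of MI before any of its register uses, so the
  // order of operands on the operand list no longer matters.
  for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
    const MachineOperand &MO = MI.getOperand(i);
    if (MO.isReg() && MO.isDef())
      handleDefOperand(SU, i);
  }
  for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
    const MachineOperand &MO = MI.getOperand(i);
    if (MO.isReg() && MO.isUse())
      handleUseOperand(SU, i);
  }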

Differential Revision: http://reviews.llvm.org/D20102

llvm-svn: 269076
2016-05-10 16:50:30 +00:00
Marcin Koscielnicki bbac890b53 [PR27599] [SystemZ] [SelectionDAG] Fix extension of atomic cmpxchg result.
Currently, SelectionDAG assumes an 8/16-bit cmpxchg returns either a
sign-extended or a zero-extended result. SystemZ takes a third
option by returning junk in the high bits (rotated contents of the other
bytes in the memory word). In that case, don't use Assert*ext, and
zero-extend the result ourselves if a comparison is needed.
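A hedged, generic illustration of that last step (variable names invented, not
the actual patch): instead of wrapping the raw result in an AssertZext node,
mask it down to the memory width before feeding it to a comparison.

  // Raw is the register-width value produced by an i8 cmpxchg; on SystemZ its
  // bits above bit 7 may be junk, so mask before comparing.
  SDValue Old = DAG.getZeroExtendInReg(Raw, DL, MVT::i8);
  SDValue Success = DAG.getSetCC(DL, MVT::i1, Old, Expected, ISD::SETEQ);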

Differential Revision: http://reviews.llvm.org/D19800

llvm-svn: 269075
2016-05-10 16:49:04 +00:00
Simon Pilgrim bfa05d169b [X86][AVX] Added some shuffle combine from load tests
As discussed on D19198 - we need to check what happens when we shuffle with a different value type than the load's

llvm-svn: 269068
2016-05-10 16:08:24 +00:00
Simon Pilgrim efc757dceb [X86][AVX512] Added masked version of MOVDDUP test with 16f32
llvm-svn: 269038
2016-05-10 10:30:00 +00:00
Chris Dewhurst 7bb1c04943 [Sparc][LEON] Itineraries unit test.
Added a test to check that LeonItineraries are being applied by the code checked in two weeks ago in r267121.

Phabricator Review: http://reviews.llvm.org/D19359

llvm-svn: 269032
2016-05-10 09:09:20 +00:00
Jonas Paulsson 8e5b0c65cc [foldMemoryOperand()] Pass LiveIntervals to enable liveness check.
SystemZ (and probably other targets as well) can fold a memory operand
by changing the opcode into a new instruction that, as a side effect,
also clobbers the CC reg.

In order to do this, liveness of that reg must first be checked. When
LIS is passed, getRegUnit() can be called on it and the right
LiveRange is computed on demand.
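A minimal sketch of such a check, assuming illustrative names (not the exact
patch): query the live range of each register unit of the CC reg at the
instruction's slot index.

  // Conservatively treat CC as live when no LiveIntervals are available.
  bool isCCLiveAt(const MachineInstr &MI, unsigned CCReg,
                  const TargetRegisterInfo &TRI, LiveIntervals *LIS) {
    if (!LIS)
      return true;
    SlotIndex Idx = LIS->getInstructionIndex(MI);
    for (MCRegUnitIterator Unit(CCReg, &TRI); Unit.isValid(); ++Unit)
      if (LIS->getRegUnit(*Unit).liveAt(Idx))  // LiveRange computed on demand
        return true;
    return false;
  }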

Reviewed by Matthias Braun.
http://reviews.llvm.org/D19861

llvm-svn: 269026
2016-05-10 08:09:37 +00:00
Matthias Braun 11e87cc945 liveness.mir requires asserts to use -debug-only
llvm-svn: 269020
2016-05-10 05:38:47 +00:00
Craig Topper 3fef1de785 [X86] Update X86_INTR calling convention to save ZMM registers instead of YMM registers when AVX512 is enabled.
llvm-svn: 269017
2016-05-10 05:27:56 +00:00
Matthias Braun 8d6e57b216 LiveIntervalAnalysis: Rework constructMainRangeFromSubranges()
We now use LiveRangeCalc::extendToUses() instead of a specially designed
algorithm in constructMainRangeFromSubranges():
- The original motivation for constructMainRangeFromSubranges() was
  differences between the main liverange and subranges because of hidden
  dead definitions. This case, however, cannot happen anymore with the
  DetectDeadLaneMasks pass in place.
- It simplifies the code.
- This fixes a longstanding bug where we did not properly create new SSA
  values on merging control flow (the MachineVerifier missed most of
  these cases).
- Move constructMainRangeFromSubranges() to LiveIntervalAnalysis and
  LiveRangeCalc to better match the implementation/available helper
  functions.

llvm-svn: 269016
2016-05-10 04:51:14 +00:00
Dan Gohman 0cfb5f852d [WebAssembly] Move register stackification and coloring to a late phase.
Move the register stackification and coloring passes to run very late, after
PEI, tail duplication, and most other passes. This means that all code emitted
and expanded by those passes is now exposed to these passes. This also
eliminates the need for prologue/epilogue code to be manually stackified,
which significantly simplifies the code.

This does require running LiveIntervals a second time. It's useful to think
of these late passes not as late optimization passes, but as a domain-specific
compression algorithm based on knowledge of liveness information. It's used to
compress the code after all conventional optimizations are complete, which is
why it uses LiveIntervals at a phase when actual optimization passes don't
typically need it.

Differential Revision: http://reviews.llvm.org/D20075

llvm-svn: 269012
2016-05-10 04:24:02 +00:00
Matthias Braun fb94d8d56a llc: Rework -run-pass option
We now construct a custom pass pipeline instead of injecting
start-before/stop-after into the default pipeline construction. This
allows specifying any pass known to the pass registry. Previously,
indirectly added analysis passes, or passes not added to the pipeline
at all, could not be specified; we would silently do nothing.

This also restricts the -run-pass option to cases with .mir input.

llvm-svn: 269003
2016-05-10 01:32:44 +00:00
Quentin Colombet ee5f36bd54 [X86][AVX512] Use the proper load/store for AVX512 registers.
When loading or storing AVX512 registers we were not using the AVX512
variant of the load and store for VR128- and VR256-like registers.
Thus, we ended up with the wrong encoding and actually were dropping the
high bits of the instruction. The result was that we loaded or stored the
wrong register. The effect is visible only when we emit the object file
directly and disassemble it. Then, the output of the disassembler does
not match the assembly input.

This is related to llvm.org/PR27481.

llvm-svn: 269001
2016-05-10 01:09:14 +00:00
Quentin Colombet 739614839f [X86] Fix the AllRegs AVX calling convention.
We used to list registers that were not in the AVX space. In other
words, we were pushing registers that the ISA cannot encode
(YMM16-YMM31).

This is part of llvm.org/PR27481.

llvm-svn: 268983
2016-05-09 22:37:05 +00:00
Quentin Colombet 86098ab10b Reapply [X86] Add a new LOW32_ADDR_ACCESS_RBP register class.
This reapplies commit r268796, with a fix for the setting of the inline asm
constraints. I.e., "mark" LOW32_ADDR_ACCESS_RBP as a GR variant, so that the
regular processing of the GR operands (setting of the subregisters) happens.

Original commit log:
[X86] Add a new LOW32_ADDR_ACCESS_RBP register class.

ABIs like NaCl use 32-bit addresses but have a 64-bit frame.
The new register class reflects those constraints when choosing a
register class for an address access.

llvm-svn: 268955
2016-05-09 19:01:46 +00:00
Quentin Colombet 52d8e3bd4c [X86] Update a regexp in a test case to resist register allocation
changes.

llvm-svn: 268954
2016-05-09 19:01:42 +00:00
Nemanja Ivanovic 6e29baf7f5 [Power9] Add support for -mcpu=pwr9 in the back end
This patch corresponds to review:
http://reviews.llvm.org/D19683

Simply adds the bits for being able to specify -mcpu=pwr9 to the back end.

llvm-svn: 268950
2016-05-09 18:54:58 +00:00
Krzysztof Parzyszek 7c7bb538cb [Hexagon] Treat all conditional branches as predicted (not-taken by default)
llvm-svn: 268946
2016-05-09 18:22:07 +00:00
Konstantin Zhuravlyov e34ead8269 [AMDGPU] Clean up debugger tests
llvm-svn: 268944
2016-05-09 18:05:42 +00:00
Sanjay Patel c7b91e65d8 [CGP] avoid crashing from weightlessness
It's possible that we have branch weights with 0 values.
In that case, don't try to create an impossible BranchProbability.
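A minimal sketch of the guard, under assumed names (not the exact patch): bail
out when the metadata gives a zero total weight, since a BranchProbability with
a zero denominator cannot be formed.

  // Returns false when the branch carries no usable profile weights.
  static bool hasUsableBranchWeights(const BranchInst *BI) {
    uint64_t TrueWeight, FalseWeight;
    if (!BI->extractProfMetadata(TrueWeight, FalseWeight))
      return false;                        // no profile metadata
    return TrueWeight + FalseWeight != 0;  // 0/0 would be impossible
  }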

llvm-svn: 268935
2016-05-09 17:31:55 +00:00
Matt Arsenault a949dc619c AMDGPU: Fold shift into cvt_f32_ubyteN
llvm-svn: 268930
2016-05-09 16:29:50 +00:00
Daniel Sanders e473dc937f [mips][micromips] Make getPointerRegClass() result depend on the instruction.
Summary:
Previously, it returned the GPR16MMRegClass for all instructions, which was
incorrect for instructions like lwsp/lwgp and unnecessarily restricted the
permitted registers for instructions like lw32.

This fixes quite a few of the -verify-machineinstrs errors reported in PR27458.
I've only added -verify-machineinstrs to one test in this change since I
understand there is a plan to enable the verifier by default.

Reviewers: hvarga, zbuljan, zoran.jovanovic, sdardis

Subscribers: dsanders, llvm-commits, sdardis

Differential Revision: http://reviews.llvm.org/D19873

llvm-svn: 268918
2016-05-09 13:38:25 +00:00
Strahinja Petrovic e682b80b8b [PowerPC] fix register alignment for long double type
This patch fixes register alignment for the long double type in
soft-float mode. Before this patch the alignment was 8; this patch
changes it to 4.
Differential Revision: http://reviews.llvm.org/D18034

llvm-svn: 268909
2016-05-09 12:27:39 +00:00
Silviu Baranga f60be28ed8 [AArch64] Implement lowering of the X constraint on AArch64
Summary:
This implements the lowering of the X constraint on
AArch64.

The default behaviour of the X constraint lowering is to
restrict it to "f". This is a problem because the "f"
constraint is not implemented on AArch64 and would be too
restrictive anyway. Therefore, the AArch64 hook will
lower this to "w" (if the operand is a floating-point or
vector value) or "r" otherwise.

The implementation is similar to the one added for
ARM (r267411).
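A short sketch of what the hook can look like, matching the description above
(not necessarily the committed code):

  const char *AArch64TargetLowering::LowerXConstraint(EVT ConstraintVT) const {
    // Map "X" to a constraint the target actually supports: "w" for
    // floating-point/vector operands, "r" for everything else.
    if (ConstraintVT.isFloatingPoint() || ConstraintVT.isVector())
      return "w";
    return "r";
  }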

This is the AArch64 side of the fix for http://llvm.org/PR26493

Reviewers: rengolin

Subscribers: aemerson, rengolin, llvm-commits, t.p.northover

Differential Revision: http://reviews.llvm.org/D19967

llvm-svn: 268907
2016-05-09 11:10:44 +00:00
Simon Pilgrim bf3a4f552e [X86][AVX512] Added masked version of combine tests
llvm-svn: 268904
2016-05-09 10:43:13 +00:00
Craig Topper a58abd1cc6 [AVX512] Fix up types for arguments of int_x86_avx512_mask_cvtsd2ss_round and int_x86_avx512_mask_cvtss2sd_round. Only the argument being converted should be a different type. The other two arguments should have the same type as the result.
llvm-svn: 268891
2016-05-09 05:34:12 +00:00
Craig Topper 707c89c00d [AVX512] Add non-temporal store patterns for v16i32/v32i16/v64i8.
llvm-svn: 268889
2016-05-08 23:43:17 +00:00
Craig Topper c41320d700 [AVX512] Add missing patterns for non-temporal stores of 128/256-bit vXi8/vXi16/vXi32 when VLX is enabled. The equivalent AVX1/2 patterns are disabled by VLX.
This caused regular stores to be emitted instead.

llvm-svn: 268886
2016-05-08 23:08:45 +00:00
Craig Topper e5ce84a33c [AVX512] Add VLX 128/256-bit SET0 operations that encode to 128/256-bit EVEX encoded VPXORD so all 32 registers can be used.
llvm-svn: 268884
2016-05-08 21:33:53 +00:00
Craig Topper 298b6d7493 [X86] Re-generate tests using update_llc_test_checks.py to prepare for a future commit. NFC
llvm-svn: 268883
2016-05-08 21:33:47 +00:00
Craig Topper 092794b82a Remove Windows line endings in some tests to prepare for a future commit. NFC
llvm-svn: 268882
2016-05-08 21:33:44 +00:00
Craig Topper d681e23336 [X86] Lower 256-bit vector all-zero constants to v8i32 even with AVX1 only. Either way a 256-bit VXORPS will be used.
llvm-svn: 268873
2016-05-08 07:10:54 +00:00
Craig Topper 3d6722910c [X86] Add patterns for 256-bit non-temporal stores when only AVX1 is supported. While there, add a predicate to the SSE2 patterns to avoid an ordering dependency.
llvm-svn: 268872
2016-05-08 07:10:50 +00:00
Craig Topper d788498411 [X86] No need to avoid selecting AVX_SET0 for 256-bit integer types when only AVX1 is supported. AVX_SET0 just expands to 256-bit VXORPS which is legal in AVX1.
llvm-svn: 268871
2016-05-08 07:10:47 +00:00
Weiming Zhao 5b5501e817 [ARM] Fix Scavenger assert due to underestimated stack size
(re-apply of r268810, which was reverted as it exposed an uninitialized variable
 in ARM MFI. Patch r268868 should fix that.)

Summary:
Currently, when checking whether a stack is "BigStack" or not, we don't take spills and arguments into account. Therefore, LLVM won't reserve a spill slot for a stack that actually is big, which may cause a scavenger failure.

Reviewers: rengolin

Subscribers: vitalybuka, aemerson, rengolin, tberghammer, danalbert, srhines, llvm-commits

Differential Revision: http://reviews.llvm.org/D19896

llvm-svn: 268869
2016-05-08 05:11:54 +00:00
Simon Pilgrim b6f82c449a [SelectionDAG] Added bitreverse(bitreverse(v)) --> v
Added bitreverse creation testing
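A plausible shape for the fold in the DAG combiner (a sketch, not necessarily
the committed code):

  SDValue DAGCombiner::visitBITREVERSE(SDNode *N) {
    SDValue N0 = N->getOperand(0);
    // fold (bitreverse (bitreverse x)) -> x
    if (N0.getOpcode() == ISD::BITREVERSE)
      return N0.getOperand(0);
    return SDValue();
  }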

llvm-svn: 268865
2016-05-07 20:12:36 +00:00
Simon Pilgrim 8ef046a8ca [X86] Added BITREVERSE constant folding and identity tests
Identity tests are currently failing - this will be fixed soon

llvm-svn: 268862
2016-05-07 19:04:00 +00:00
Sanjay Patel c2751e7050 [x86, BMI] add TLI hook for 'andn' and use it to simplify comparisons
For the sake of minimalism, this patch is x86 only, but I think that at least
PPC, ARM, AArch64, and Sparc probably want to do this too.

We might want to generalize the hook and pattern recognition for a target like
PPC that has a full assortment of negated logic ops (orc, nand).
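One plausible shape for the x86 side of such a hook (the name hasAndNotCompare
is assumed here, not confirmed by this log):

  bool X86TargetLowering::hasAndNotCompare(SDValue Y) const {
    // With BMI, ANDN both negates and sets flags, so comparing
    // (and (not x), y) against zero needs no separate NOT/TEST.
    if (!Subtarget.hasBMI())
      return false;
    MVT VT = Y.getSimpleValueType();
    return VT == MVT::i32 || VT == MVT::i64;
  }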

Note that http://reviews.llvm.org/D18842 will cause this transform to trigger
more often.

For reference, this relates to:
https://llvm.org/bugs/show_bug.cgi?id=27105
https://llvm.org/bugs/show_bug.cgi?id=27202
https://llvm.org/bugs/show_bug.cgi?id=27203
https://llvm.org/bugs/show_bug.cgi?id=27328

Differential Revision: http://reviews.llvm.org/D19087

llvm-svn: 268858
2016-05-07 15:03:40 +00:00
Vitaly Buka e81d96be6f Revert r268810 because it breaks the msan bot.
16802==WARNING: MemorySanitizer: use-of-uninitialized-value
    lib/Target/ARM/ARMFrameLowering.cpp:1632

llvm-svn: 268833
2016-05-07 01:54:00 +00:00
Ahmed Bougacha 04a8fc2e37 [X86] Teach X86FixupBWInsts to promote MOV8rr/MOV16rr to MOV32rr.
This re-applies r268760, reverted in r268794.
Fixes http://llvm.org/PR27670

The original imp-defs assertion was way overzealous: forward all
implicit operands, except imp-defs of the new super-reg def (r268787
for GR64, but also possible for GR16->GR32), or imp-uses of the new
super-reg use.
While there, mark the source use as Undef, and add an imp-use of the
old source reg: that should cover any case of dead super-regs.

At the stage the pass runs, flags are unlikely to matter anyway;
still, let's be as correct as possible.

Also add MIR tests for the various interesting cases.

Original commit message:
Code size is smaller (for 16-bit copies) or equal (for 8-bit copies), and we
avoid partial register dependencies.

Differential Revision: http://reviews.llvm.org/D19999

llvm-svn: 268831
2016-05-07 01:11:17 +00:00
Matthias Braun 22152acf7b DetectDeadLanes: Increase precision when detecting undef inputs
In the case of a COPY-like instruction we may be able to deduce that a certain
input is unused, based on the used lanes of the register defined by the
instruction.
This even works across otherwise incompatible copies (no need to have
compatible lanemasks; completely unused operands are still completely
unused). It even makes sense to redo the analysis in this case, since we
gained information for a case we previously stopped at because of the
incompatible masks.

llvm-svn: 268815
2016-05-06 22:43:50 +00:00
Weiming Zhao 74f12d31c1 [ARM] Fix Scavenger assert due to underestimated stack size
(this is a resubmit of r268529 with minor refactoring. r268529 was reverted
 in r268536 due to a memory sanitizer failure. I have not been able to
 reproduce that failure and I checked all the variables used in my change
 but I could not spot an issue. I did some refactoring to see if it will
 give a clearer hint)

Summary:
Currently, when checking whether a stack is "BigStack" or not, we don't take spills and arguments into account. Therefore, LLVM won't reserve a spill slot for a stack that actually is big, which may cause a scavenger failure.

Reviewers: rengolin

Subscribers: vitalybuka, aemerson, rengolin, tberghammer, danalbert, srhines, llvm-commits

Differential Revision: http://reviews.llvm.org/D19896

llvm-svn: 268810
2016-05-06 22:20:13 +00:00
Quentin Colombet a09f050dc1 Revert "[X86] Add a new LOW32_ADDR_ACCESS_RBP register class."
This reverts commit r268796.
I believe it breaks test/CodeGen/X86/asm-mismatched-types.ll with:
Cannot emit physreg copy instruction

llvm-svn: 268799
2016-05-06 21:21:50 +00:00
Quentin Colombet 2728074e3c [X86] Add a new LOW32_ADDR_ACCESS_RBP register class.
ABIs like NaCl use 32-bit addresses but have a 64-bit frame.
The new register class reflects those constraints when choosing a
register class for an address access.

llvm-svn: 268796
2016-05-06 21:10:53 +00:00