Commit Graph

23444 Commits

Juergen Ributzka ab6f44efaf Add test case for [Stackmaps][X86TTI] Fix think-o in getIntImmCost calculation (r204738).
llvm-svn: 205464
2014-04-02 21:15:36 +00:00
Saleem Abdulrasool cd1308296e ARM: update subtarget information for Windows on ARM
Update the subtarget information for Windows on ARM.  This enables using the MC
layer to target Windows on ARM.

llvm-svn: 205459
2014-04-02 20:32:05 +00:00
Tom Stellard 36a031870b TargetLibraryInfo: Disable memcpy and memset on R600
There are no implementations of these for R600.

llvm-svn: 205455
2014-04-02 19:53:29 +00:00
Jim Grosbach 4a1a9ce5e6 ARM: cortex-m0 doesn't support unaligned memory access.
Unlike other v6+ processors, cortex-m0 never supports unaligned accesses.
From the v6m ARM ARM:

"A3.2 Alignment support: ARMv6-M always generates a fault when an unaligned
access occurs."
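
As a rough illustration only (not part of the patch, and with made-up names), lowering
code typically gates misaligned accesses on a subtarget flag like this:

  // Hypothetical sketch: fall back to an aligned expansion when the core
  // (e.g. cortex-m0) does not tolerate unaligned loads/stores.
  struct SubtargetSketch {
    bool AllowsUnalignedMem; // false for cortex-m0 in this sketch
  };

  bool allowsMisalignedAccess(const SubtargetSketch &ST,
                              unsigned AlignBytes, unsigned AccessBytes) {
    if (AlignBytes >= AccessBytes)
      return true;                // naturally aligned accesses are always fine
    return ST.AllowsUnalignedMem; // otherwise only if the core supports it
  }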

rdar://16491560

llvm-svn: 205452
2014-04-02 19:28:13 +00:00
Kai Nacke 13673ac704 [mips] Add more Octeon cnMips instructions
Adds the instructions ext/ext32/cins/cins32.
It also changes pop/dpop to accept the two-operand version and
adds a simple pattern to generate baddu.
Tests for the two-operand versions (including baddu/dmul/dpop/pop)
and the code generation pattern for baddu are included.

Reviewed by: Daniel.Sanders@imgtec.com

llvm-svn: 205449
2014-04-02 18:40:43 +00:00
Oliver Stannard b14c625111 ARM: Add support for segmented stacks
Patch by Alex Crichton, ILyoan, Luqman Aden and Svetoslav.

llvm-svn: 205430
2014-04-02 16:10:33 +00:00
Tim Northover 6d69168ffd ARM64: use GOT for weak symbols & PIC.
Weak symbols cannot use the small code model's usual ADRP sequences since the
instruction simply may not be able to encode a value of 0.

This redirects them to use the GOT, which hopefully linkers are able to cope
with even in the static relocation model.
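
A hedged sketch of that classification (the flag names are placeholders, not the
backend's real operand flags):

  #include "llvm/IR/GlobalValue.h"

  enum RefKind { Ref_Direct, Ref_GOT }; // stand-ins for the real MO_* flags

  // Weak symbols may resolve to address 0, which ADRP/ADD cannot materialize,
  // so route them through the GOT; PIC references use the GOT as usual.
  RefKind classifyGlobalRef(const llvm::GlobalValue *GV, bool IsPIC) {
    if (IsPIC || GV->hasExternalWeakLinkage())
      return Ref_GOT;
    return Ref_Direct;
  }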

llvm-svn: 205426
2014-04-02 14:39:11 +00:00
Tim Northover 0d80f70530 ARM64: fix lowering of fp128 fptosi/fptoui
We were creating libcall nodes that returned an MVT::f128, when these
particular operations actually return an int of some stripe.

llvm-svn: 205425
2014-04-02 14:39:07 +00:00
Tim Northover 670df3d937 SLPVectorizer: compare entire intrinsic for SLP compatibility.
Some Intrinsics are overloaded to the extent that return type equality (all
that's been checked up to now) does not guarantee that the arguments are the
same. In these cases the SLP vectorizer should not recurse into the operands, which
can be achieved by comparing them as "Function *" rather than simply the ID.
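
As a rough sketch (not the patch itself), the check amounts to comparing the
called Function pointers, which distinguishes two overloads that happen to
share an intrinsic ID:

  #include "llvm/IR/Instructions.h"

  // Two calls are treated as SLP-compatible here only when they call the very
  // same Function, i.e. the same intrinsic overload, not merely the same ID.
  bool sameIntrinsicOverload(const llvm::CallInst *A, const llvm::CallInst *B) {
    const llvm::Function *FA = A->getCalledFunction();
    const llvm::Function *FB = B->getCalledFunction();
    return FA && FB && FA == FB; // pointer equality implies identical overload
  }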

llvm-svn: 205424
2014-04-02 14:39:02 +00:00
Tim Northover ebd37ab382 ARM64: make sure first argument to INSERT_SUBVECTOR has right type.
Again, coalescing and other optimisations swiftly made the MachineInstrs
consistent, but when compiled at -O0 a bad INSERT_SUBREGISTER was
produced.

llvm-svn: 205423
2014-04-02 14:38:58 +00:00
Tim Northover 5e3a484e3b ARM64: convert fp16 narrowing ISel to pseudo-instruction
The previous attempt was fine with optimisations, but was actually rather
cavalier with its types. When compiled at -O0, it produced invalid COPY
MachineInstrs.

llvm-svn: 205422
2014-04-02 14:38:54 +00:00
Job Noorman f7da105f39 Mark FPB as a reserved register when needed.
llvm-svn: 205421
2014-04-02 13:13:56 +00:00
Rafael Espindola b1b49789d0 Work around gold bug http://sourceware.org/PR16794.
llvm-svn: 205416
2014-04-02 12:15:20 +00:00
Renato Golin d93295ea56 Remove duplicated DMB instructions
An ARM-specific optimization that finds places in ARM machine code where two
DMBs follow one another and eliminates one of them.
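
A toy model of the scan (the real pass works on MachineInstrs and also has to
check that the barrier options match):

  #include <string>
  #include <vector>

  // Model a basic block as a list of opcode names; when two "dmb" barriers are
  // immediately adjacent, the second one is redundant and gets erased.
  void removeAdjacentDMBs(std::vector<std::string> &Block) {
    for (size_t I = 0; I + 1 < Block.size();) {
      if (Block[I] == "dmb" && Block[I + 1] == "dmb")
        Block.erase(Block.begin() + I + 1);
      else
        ++I;
    }
  }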

Patch by Reinoud Elhorst.

llvm-svn: 205409
2014-04-02 09:03:43 +00:00
Hal Finkel b0ebdc0f43 [LoopVectorizer] Count dependencies of consecutive pointers as uniforms
For the purpose of calculating the cost of the loop at various vectorization
factors, we need to count dependencies of consecutive pointers as uniforms
(which means that the VF = 1 cost is used for them at all overall VF values).

For example, the TSVC benchmark function s173 has:
  ...
  %3 = add nsw i64 %indvars.iv, 16000
  %arrayidx8 = getelementptr inbounds %struct.GlobalData* @global_data, i64 0, i32 0, i64 %3
  ...
and we must realize that the add will be a scalar in order to correctly deduce
it to be profitable to vectorize this on PowerPC with VSX enabled. In fact, all
dependencies of a consecutive pointer must be scalar (uniform), and so we
simply need to add all consecutive pointers to the worklist that currently
collects uniforms.
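
A simplified sketch of that worklist seeding, using plain containers; the
surrounding cost-model plumbing and the exact set names are assumptions:

  #include "llvm/IR/Instructions.h"
  #include <unordered_set>
  #include <vector>

  // Seed the "uniform after vectorization" set with every GEP known to produce
  // consecutive pointers, so the instructions feeding it (like the 'add' above)
  // are later pulled in and costed at VF = 1.
  void seedConsecutivePtrs(
      const std::vector<llvm::GetElementPtrInst *> &ConsecutiveGEPs,
      std::unordered_set<llvm::Instruction *> &Uniforms,
      std::vector<llvm::Instruction *> &Worklist) {
    for (llvm::GetElementPtrInst *GEP : ConsecutiveGEPs)
      if (Uniforms.insert(GEP).second) // newly added
        Worklist.push_back(GEP);       // its operands get visited later
  }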

Fixes PR19296.

llvm-svn: 205387
2014-04-02 02:34:49 +00:00
David Blaikie 6fa9966ee6 DebugLocEntry: Actually merge the loc entry when returning true.
Seems we didn't have any test coverage for merging... awesome. So I
added some - but hit an llvm-objdump bug while I was there. I'm choosing
not to shave that yak right now.

Code review feedback/bug catch by Adrian Prantl in r205360.

llvm-svn: 205373
2014-04-01 23:19:23 +00:00
Adrian Prantl 75ce62acef DwarfDebug: Prevent DebugLocEntry merging from coalescing two different
constants into only the first one.
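
A hedged sketch of the guard this implies, modeling entries as (range, constant)
pairs; the real DebugLocEntry carries richer value kinds:

  #include <cstdint>

  struct LocEntrySketch {
    uint64_t Begin, End; // address range covered
    int64_t Constant;    // value described over that range
  };

  // Only fold Next into Prev when the ranges abut *and* the values match;
  // otherwise the second constant would be silently dropped.
  bool tryMerge(LocEntrySketch &Prev, const LocEntrySketch &Next) {
    if (Prev.End != Next.Begin || Prev.Constant != Next.Constant)
      return false;
    Prev.End = Next.End;
    return true;
  }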

rdar://14874886.

llvm-svn: 205357
2014-04-01 21:04:18 +00:00
Hal Finkel 9e0baa6d3a [PowerPC] Add some missing VSX bitcast patterns
llvm-svn: 205352
2014-04-01 19:24:27 +00:00
Hal Finkel 2eed29f3c8 Implement X86TTI::getUnrollingPreferences
This provides an initial implementation of getUnrollingPreferences for x86.
getUnrollingPreferences is used by the generic (concatenation) unroller, which
is distinct from the unrolling done by the loop vectorizer. Many modern x86
cores have some kind of uop cache and loop-stream detector (LSD) used to
efficiently dispatch small loops, and taking full advantage of this requires
unrolling small loops (small here means 10s of uops).

These caches also have limits on the number of taken branches in the loop, and
so we also cap the loop unrolling factor based on the maximum "depth" of the
loop. This is currently calculated with a partial DFS traversal (partial
because it will stop early if the path length grows too much). This is still an
approximation, and one that is both conservative (because it does not account
for branches eliminated via block placement) and optimistic (because it is only
recording the maximum depth over minimum paths). Nevertheless, because the
loops that fit in these uop caches are so small, it is not clear how much the
details matter.
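
Roughly, the capping logic reads like the sketch below; the uop budget and
branch limit are illustrative placeholders, not the values the patch uses:

  #include <algorithm>

  // Pick a partial-unroll count so that LoopUops * Count stays within an
  // assumed uop-cache/LSD budget, and cap it further by the estimated maximum
  // number of taken branches per iteration found by the bounded DFS.
  unsigned pickUnrollCount(unsigned LoopUops, unsigned TakenBranchesPerIter) {
    const unsigned UopBudget = 28;       // assumed capacity
    const unsigned MaxTakenBranches = 4; // assumed hardware limit
    if (LoopUops == 0 || LoopUops > UopBudget)
      return 1;                          // too big to benefit: don't unroll
    unsigned Count = UopBudget / LoopUops;
    if (TakenBranchesPerIter > 0)
      Count = std::min(Count, MaxTakenBranches / TakenBranchesPerIter);
    return std::max(Count, 1u);
  }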

The original set of patches posted for review produced the following test-suite
performance results (from the TSVC benchmark) at that time:
  ControlLoops-dbl - 13% speedup
  ControlLoops-flt - 15% speedup
  Reductions-dbl - 7.5% speedup

llvm-svn: 205348
2014-04-01 18:50:34 +00:00
Kai Nacke af47f60f83 [mips] Add Octeon cnMips instructions mtmX and mtpX
Adds the Octeon cnMips instructions "load multiplier register MPLx" and "load product register Px".
Includes tests.

Reviewed by: Daniel.Sanders@imgtec.com

llvm-svn: 205343
2014-04-01 18:35:26 +00:00
Reid Kleckner 101102711d Support segmented stacks on Win64
Identical to the Win32 method, except that the GS segment register is used for
TLS instead of FS and pvArbitrary is at TEB offset 0x28 instead of 0x14.
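
Schematically (offsets as stated above; the prologue emission itself is omitted):

  #include <cstdint>
  #include <string>

  struct StackLimitSlot {
    std::string SegReg; // TLS segment register
    uint32_t Offset;    // TEB offset of pvArbitrary (used here for the limit)
  };

  // Win32 keeps the slot at fs:0x14, Win64 at gs:0x28; the prologue check then
  // compares the stack pointer against that slot and takes the grow-stack path
  // when it is below.
  StackLimitSlot getWinStackLimitSlot(bool Is64Bit) {
    return Is64Bit ? StackLimitSlot{"gs", 0x28} : StackLimitSlot{"fs", 0x14};
  }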

llvm-svn: 205342
2014-04-01 18:34:21 +00:00
Matt Arsenault 553751b9bc Fix missing RUN line in test
llvm-svn: 205341
2014-04-01 18:34:13 +00:00
Matt Arsenault e407ae9846 Make isSetCCEquivalent respect the TargetBooleanContents
llvm-svn: 205336
2014-04-01 18:13:26 +00:00
Tim Northover 1351030801 ARM: add cyclone CPU with ZeroCycleZeroing feature.
The Cyclone CPU is similar to swift for most LLVM purposes, but does have two
preferred instructions for zeroing a VFP register. This teaches LLVM about
them.

llvm-svn: 205309
2014-04-01 13:22:02 +00:00
Tim Northover 4f1dd58e2e ARM64: add intrinsic for pmull (p64 x p64 = p128) operations.
llvm-svn: 205302
2014-04-01 12:22:37 +00:00
Daniel Sanders ffd8436d6c [mips] Extend ParseJumpTarget to support the full symbol expression syntax.
Summary:
This should fix the issues that D3222 caused in lld. The testcase is based on
the one that failed on the buildbot.

Depends on D3233

Reviewers: matheusalmeida, vmedic

Reviewed By: matheusalmeida

Differential Revision: http://llvm-reviews.chandlerc.com/D3234

llvm-svn: 205298
2014-04-01 10:41:48 +00:00
Tim Northover ff179ba3d3 ARM64: add patterns for more lane-wise ld1/st1 operations.
llvm-svn: 205294
2014-04-01 10:37:09 +00:00
Tim Northover d8d613b979 ARM64: fix bug in ld3r (1d) SelectionDAG.
llvm-svn: 205293
2014-04-01 10:37:03 +00:00
Daniel Sanders b50ccf8e26 [mips] Rewrite MipsAsmParser and MipsOperand.
Summary:
Highlights:
- Registers are resolved much later (by the render method).
  Prior to that point, GPR32's/GPR64's are GPR's regardless of register
  size. Similarly FGR32's/FGR64's/AFGR64's are FGR's regardless of register
  size or FR mode. Numeric registers can be anything.
- All registers are parsed the same way everywhere (even when handling
  symbol aliasing)
  - One consequence is that all registers can be specified numerically
    almost anywhere (e.g. $fccX, $wX). The exception is symbol aliasing
    but that can be easily resolved.
- Removes the need for the hasConsumedDollar hack
- Parenthesis and Bracket suffixes are handled generically
- Micromips instructions are parsed directly instead of going through the
  standard encodings first.
- rdhwr accepts all 32 registers, and the following instructions that previously
  xfailed now work:
    ddiv, ddivu, div, divu, cvt.l.[ds], se[bh], wsbh, floor.w.[ds], c.ngl.d,
    c.sf.s, dsbh, dshd, madd.s, msub.s, nmadd.s, nmsub.s, swxc1
- Diagnostics involving registers point at the correct character (the $)
- There's only one kind of immediate in MipsOperand. LSA immediates are handled
  by the predicate and renderer.

Lowlights:
- Hardcoded '$zero' in the div patterns is handled with a hack (a sketch of it
  follows after this list). MipsOperand::isReg() will return true for a
  k_RegisterIndex token
  with Index == 0 and getReg() will return ZERO for this case. Note that it
  doesn't return ZERO_64 on isGP64() targets.
- I haven't cleaned up all of the now-unused functions.
  Some more of the generic parser could be removed too (integers and relocs
  for example).
- insve.df needed a custom decoder to handle the implicit fourth operand that
  was needed to make it parse correctly. The difficulty was that the matcher
  expected a Token<'0'> but gets an Imm<0>. Adding an implicit zero solved this.
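
A hedged sketch of the '$zero' lowlight, with heavily simplified types; the real
MipsOperand has more kinds and the register values here are stand-ins:

  enum { ZERO = 0 }; // stand-in for the real register enum

  struct MipsOperandSketch {
    enum Kind { k_RegisterIndex, k_Immediate } K;
    unsigned Index;

    // The hack: a register-index token with Index == 0 claims to be a register
    // so the hardcoded '$zero' in the div patterns matches; it always maps to
    // ZERO, never ZERO_64, even on 64-bit targets.
    bool isReg() const { return K == k_RegisterIndex && Index == 0; }
    unsigned getReg() const { return isReg() ? ZERO : ~0u; }
  };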

Reviewers: matheusalmeida, vmedic

Reviewed By: matheusalmeida

Differential Revision: http://llvm-reviews.chandlerc.com/D3222

llvm-svn: 205292
2014-04-01 10:35:28 +00:00
Alexey Volkov 1328b28dc6 [x86] Do not convert to cmp32 for Atom arch by Sergey Okunev
Differential Revision: http://llvm-reviews.chandlerc.com/D2824

llvm-svn: 205288
2014-04-01 08:13:07 +00:00
David Blaikie 3464161070 DebugInfo: Avoid creating unnecessary/empty line tables and remove the special case of '0' in DwarfCompileUnit::initStmtList by just always using a label difference
This moves one case of raw text checking down into the MCStreamer
interfaces in the form of a virtual function. Even if we ultimately end
up consolidating on the one-or-many line tables issue one day, this is
nicer in the interim. This just generally streamlines a bunch of use
cases into a common code path.

llvm-svn: 205287
2014-04-01 08:07:52 +00:00
David Blaikie 8bf66c4f3f DebugInfo: Emit relocation to debug_line section when emitting asm for asm
I don't think this is reachable by any frontend (why would you transform
asm to asm+debug info?), but it helps tidy up some of this code. It avoids
the weird special case of "emit the first CU, store the label, then emit
the rest" in MCDwarfLineTable::Emit by instead having the
DWARF-for-assembly case use the same codepath as DwarfDebug.cpp: it
registers the label of the debug_line section, thus causing it to be
emitted (with a special case in asm output to just emit the label, since
asm output uses the .loc directives, etc., rather than the debug_loc
directly).

llvm-svn: 205286
2014-04-01 07:35:52 +00:00
Hal Finkel 86b3064f2b Move partial/runtime unrolling late in the pipeline
The generic (concatenation) loop unroller is currently placed early in the
standard optimization pipeline. This is a good place to perform full unrolling,
but not the right place to perform partial/runtime unrolling. However, most
targets don't enable partial/runtime unrolling, so this never mattered.

However, even some x86 cores benefit from partial/runtime unrolling of very
small loops, and follow-up commits will enable this. First, we need to move
partial/runtime unrolling late in the optimization pipeline (importantly, this
is after SLP and loop vectorization, as vectorization can drastically change
the size of a loop), while keeping the full unrolling where it is now. This
change does just that.
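
Schematically, the intended ordering looks like this (pass-manager type left
generic; the create* functions are the usual LLVM ones from the 3.5-era headers):

  #include "llvm/Transforms/Scalar.h"    // createLoopUnrollPass (3.5-era)
  #include "llvm/Transforms/Vectorize.h" // createLoopVectorizePass, createSLPVectorizerPass

  // Sketch: run the concatenation unroller only after both vectorizers, since
  // vectorization can drastically change loop sizes.
  template <typename PassManagerT>
  void addLateLoopPasses(PassManagerT &PM, bool EnablePartialUnroll) {
    PM.add(llvm::createLoopVectorizePass());
    PM.add(llvm::createSLPVectorizerPass());
    if (EnablePartialUnroll)
      PM.add(llvm::createLoopUnrollPass()); // partial/runtime unrolling last
  }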

llvm-svn: 205264
2014-03-31 23:23:51 +00:00
Arnold Schwaighofer 15262e6703 Revert "SLPVectorizer: Ignore users that are insertelements we can reschedule them"
This reverts commit r205018.

Conflicts:
	lib/Transforms/Vectorize/SLPVectorizer.cpp
	test/Transforms/SLPVectorizer/X86/insert-element-build-vector.ll

This is breaking libclc build.

llvm-svn: 205260
2014-03-31 23:05:56 +00:00
Juergen Ributzka e117992f00 [Stackmaps] Update the stackmap format to use 64-bit relocations for the function address and properly align all entries.
This commit updates the stackmap format to version 1 to indicate the
reorganization of several fields. This was done in order to align stackmap
entries to their natural alignment and to minimize padding.

Fixes <rdar://problem/16005902>

llvm-svn: 205254
2014-03-31 22:14:04 +00:00
Adam Nemet 10c4ce2584 [X86] Adjust cost of FP_TO_UINT v4f64->v4i32 as well
Pretty obvious follow-on to r205159 to also handle conversion from double
in addition to float.

Fixes <rdar://problem/16373208>

llvm-svn: 205253
2014-03-31 21:54:48 +00:00
Matt Arsenault 378bf9c68b R600: Compute masked bits for min and max
llvm-svn: 205242
2014-03-31 19:35:33 +00:00
Rafael Espindola ee1c342ef9 Don't relocate with sections if there might be a paired relocation.
llvm-svn: 205240
2014-03-31 19:00:23 +00:00
Daniel Sanders e34a120285 Revert '[mips] Rewrite MipsAsmParser and MipsOperand.' due to buildbot errors in lld tests.
It's currently unable to parse 'sym + imm' without surrounding parentheses.

llvm-svn: 205237
2014-03-31 18:51:43 +00:00
Matt Arsenault 4c53717787 R600: Add BFE, BFI, and BFM intrinsics to help with writing tests.
llvm-svn: 205236
2014-03-31 18:21:18 +00:00
Rafael Espindola c627a8750a Now that this test is assembly, make the checks a bit stronger.
This will be used for a followup patch.

llvm-svn: 205232
2014-03-31 18:01:50 +00:00
Hal Finkel b4240ca0f4 [PowerPC] Don't ever expand BUILD_VECTOR of v2i64 with shuffles
If we have two unique values for a v2i64 build vector, this will always result
in two vector loads if we expand using shuffles. Only one is necessary.

llvm-svn: 205231
2014-03-31 17:48:16 +00:00
Daniel Sanders 0c648ba5be [mips] Rewrite MipsAsmParser and MipsOperand.
Summary:
Highlights:
- Registers are resolved much later (by the render method).
  Prior to that point, GPR32's/GPR64's are GPR's regardless of register
  size. Similarly FGR32's/FGR64's/AFGR64's are FGR's regardless of register
  size or FR mode. Numeric registers can be anything.
- All registers are parsed the same way everywhere (even when handling
  symbol aliasing)
  - One consequence is that all registers can be specified numerically
    almost anywhere (e.g. $fccX, $wX). The exception is symbol aliasing
    but that can be easily resolved.
- Removes the need for the hasConsumedDollar hack
- Parenthesis and Bracket suffixes are handled generically
- Micromips instructions are parsed directly instead of going through the
  standard encodings first.
- rdhwr accepts all 32 registers, and the following instructions that previously
  xfailed now work:
    ddiv, ddivu, div, divu, cvt.l.[ds], se[bh], wsbh, floor.w.[ds], c.ngl.d,
    c.sf.s, dsbh, dshd, madd.s, msub.s, nmadd.s, nmsub.s, swxc1
- Diagnostics involving registers point at the correct character (the $)
- There's only one kind of immediate in MipsOperand. LSA immediates are handled
  by the predicate and renderer.

Lowlights:
- Hardcoded '$zero' in the div patterns is handled with a hack.
  MipsOperand::isReg() will return true for a k_RegisterIndex token
  with Index == 0 and getReg() will return ZERO for this case. Note that it
  doesn't return ZERO_64 on isGP64() targets.
- I haven't cleaned up all of the now-unused functions.
  Some more of the generic parser could be removed too (integers and relocs
  for example).
- insve.df needed a custom decoder to handle the implicit fourth operand that
  was needed to make it parse correctly. The difficulty was that the matcher
  expected a Token<'0'> but gets an Imm<0>. Adding an implicit zero solved this.

Reviewers: matheusalmeida, vmedic

Reviewed By: matheusalmeida

Differential Revision: http://llvm-reviews.chandlerc.com/D3222

llvm-svn: 205229
2014-03-31 17:43:46 +00:00
Paul Robinson 7c99ec5b99 Disable each MachineFunctionPass for 'optnone' functions, unless that
pass normally runs at optimization level None, or is part of the
register allocation pipeline.
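
The gist, as a hedged sketch (the stand-in helper below is not the actual
pass-infrastructure hook):

  #include "llvm/IR/Attributes.h"
  #include "llvm/IR/Function.h"

  // An optimization-only MachineFunctionPass bails out up front when the IR
  // function is marked 'optnone', unless the pass also runs at -O0 or belongs
  // to register allocation (those still have to run).
  bool shouldSkipForOptnone(const llvm::Function &F,
                            bool PassRunsAtOptNone, bool PassIsRegAlloc) {
    if (PassRunsAtOptNone || PassIsRegAlloc)
      return false;
    return F.hasFnAttribute(llvm::Attribute::OptimizeNone);
  }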

llvm-svn: 205228
2014-03-31 17:43:35 +00:00
Yaron Keren c6a57ea4e9 Two updated tests for MinGW 32 and 64 exception handling code generation.
llvm-svn: 205227
2014-03-31 17:34:15 +00:00
Tom Roeder ed0e88c31a This patch fixes LTO's RecordStreamer so that it records symbols in the MCExpr
part of an asm .symver directive as being used. This prevents referenced
functions from being internalized and deleted.

Without the patch to LTOModule.cpp, the test case will produce the error:

LLVM ERROR: A @@ version cannot be undefined.

llvm-svn: 205221
2014-03-31 16:59:13 +00:00
Eli Bendersky 264cd4672d Fix for PR19099 - NVPTX produces invalid symbol names.
This is a more thorough fix for the issue than r203483. An IR pass will run
before NVPTX codegen to make sure there are no symbol names that the ptxas
assembler cannot consume.
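
A hedged sketch of the renaming idea (the accepted character set and the
replacement scheme here are assumptions, not the pass's actual rules):

  #include <cctype>
  #include <string>

  // Rewrite characters ptxas will not accept in identifiers; a real pass must
  // also keep the resulting names unique within the module.
  std::string cleanupPTXName(const std::string &Name) {
    std::string Out;
    Out.reserve(Name.size());
    for (char C : Name)
      if (std::isalnum(static_cast<unsigned char>(C)) || C == '_' || C == '$')
        Out += C;
      else
        Out += '_'; // assumed replacement for '.', '@', etc.
    return Out;
  }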

llvm-svn: 205212
2014-03-31 15:56:26 +00:00
Tim Northover 5081cd0f81 ARM64: add extra patterns for scalar shifts
llvm-svn: 205209
2014-03-31 15:46:46 +00:00
Tim Northover e7834c3bbc ARM64: add extra scalar neg pattern & tests.
llvm-svn: 205208
2014-03-31 15:46:42 +00:00
Tim Northover 4468670345 ARM64: add patterns for scalar sqdmlal & sqdmlsl.
llvm-svn: 205207
2014-03-31 15:46:38 +00:00