Commit Graph

9329 Commits

Author SHA1 Message Date
Oliver Stannard d55e115b58 ARM: Correctly align arguments after a byval struct is passed on the stack
llvm-svn: 202985
2014-03-05 15:25:27 +00:00
Andrew Trick fbb278c541 Make stackmap machineinstrs clobber the scratch regs too.
Patchpoints already did this. Doing it for stackmaps is a convenience
for the runtime in the event that it needs a scratch register to
patch or perform a runtime call thunk.

Unlike patchpoints, we just assume the AnyRegCC calling
convention. This is the only language- and target-independent calling
convention specific to stackmaps, so it makes sense, although the calling
convention is not currently used to select the scratch registers.

llvm-svn: 202943
2014-03-05 07:08:16 +00:00
Hans Wennborg acb842d523 Check for dynamic allocas and inline asm that clobbers sp before building
selection dag (PR19012)

In X86SelectionDagInfo::EmitTargetCodeForMemcpy we check with MachineFrameInfo
to make sure that ESI isn't used as a base pointer register before we choose to
emit rep movs (which clobbers esi).

The problem is that MachineFrameInfo wouldn't know about dynamic allocas or
inline asm that clobbers the stack pointer until SelectionDAGBuilder has
encountered them.

This patch fixes the problem by checking for such things when building the
FunctionLoweringInfo.
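
As a rough illustration (hypothetical IR, not from the patch), a function
containing a variable-sized alloca can require esi as a base pointer on
x86-32, so the memcpy lowering must not pick the rep movs sequence there:

  define void @f(i32 %n, i8* %src) {
  entry:
    ; variable-sized alloca: its presence is now noted while building
    ; FunctionLoweringInfo, before the memcpy lowering decides whether
    ; esi is safe to clobber
    %buf = alloca i8, i32 %n, align 16
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* %buf, i8* %src, i32 128, i32 1, i1 false)
    ret void
  }
  declare void @llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)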

Differential Revision: http://llvm-reviews.chandlerc.com/D2954

llvm-svn: 202930
2014-03-05 02:43:26 +00:00
Richard Osborne 1b5fc39710 [XCore] Fix call of absolute address.
Previously for:

tail call void inttoptr (i64 65536 to void ()*)() nounwind

We would emit:

bl 65536

The immediate operand of the bl instruction is a relative offset so it is
wrong to use the absolute address here.

llvm-svn: 202860
2014-03-04 16:50:30 +00:00
Daniel Sanders d920770add [mips][msa] Correct the behaviour of the COPY_FW pseudo on lanes 2 and 3.
Summary:
Previously, attempting to extract lanes 2 and 3 would actually extract lane 1.
The MSA CodeGen tests only covered lanes 0 and 1.
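
A minimal sketch (hypothetical IR) of the kind of extraction that was affected,
i.e. a lane index greater than 1:

  %v = load <4 x float>* %p
  %e = extractelement <4 x float> %v, i32 2   ; previously selected as if the index were 1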

Differential Revision: http://llvm-reviews.chandlerc.com/D2935

llvm-svn: 202848
2014-03-04 13:54:30 +00:00
Chad Rosier 70cb2311ab Revert "[AArch64] This is a work in progress to provide a machine description"
This reverts commit ff717c8fc786a0cfa1602982b91895fa09e514fc.

llvm-svn: 202773
2014-03-04 00:32:07 +00:00
Chad Rosier fe45290566 [AArch64] This is a work in progress to provide a machine description
for the Cortex-A53 subtarget in the AArch64 backend.

This patch lays the groundwork to annotate each AArch64 instruction
(no NEON yet) with a list of SchedReadWrite types. The patch also
provides the Cortex-A53 processor resources, maps those to the default
SchedReadWrites, and provides basic latencies. NEON support will be added
in a subsequent patch with proper forwarding logic.

Verification was done by setting the pre-RA scheduler to linearize to
better gauge the effect of the MIScheduler. Even without modeling the
forwarding logic, the results show a modest improvement for Cortex-A53.

Reviewers: apazos, mcrosier, atrick
Patch by Dave Estes <cestes@codeaurora.org>!

llvm-svn: 202767
2014-03-03 23:32:47 +00:00
Daniel Sanders fa961d76f0 [mips] Prevent %lo relocation being used on MSA loads and stores.
Summary:
Parts of the compiler still believed MSA load/stores have a 16-bit offset when
it is actually 10-bit. Corrected this, and fixed a closely related issue this
uncovered where load/stores with 10-bit and 12-bit offsets (MSA and microMIPS
respectively) could not load/store using offsets from the stack/frame pointer.
They accepted frameindex+offset, but not frameindex by itself.

Reviewers: jacksprat, matheusalmeida

Reviewed By: jacksprat

Differential Revision: http://llvm-reviews.chandlerc.com/D2888

llvm-svn: 202717
2014-03-03 14:31:21 +00:00
Hal Finkel 6aca2373f2 Add a PPC inline asm constraint type for single CR bits
Now that the PowerPC backend can track individual CR bits as first-class
registers, we should also have a way of allocating them for inline asm
statements. Because these registers are only one bit, if an output variable is
implicitly cast to a larger integer size, we'll get an any_extend to that
larger type (this is part of the existing target-independent logic). As a
result, regardless of the size of the output type, only the first bit is
meaningful.

The constraint identifier "wc" has been chosen for this purpose. Although gcc
does not currently support allocating individual CR bits, this identifier
choice has been coordinated with the gcc PowerPC team, and will be marked as
reserved for this purpose in the gcc constraints.md file.
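
A sketch of how the constraint might be used from IR-level inline asm; the
exact spelling at the IR level ("^wc" below) and the choice of instruction are
assumptions made for illustration only:

  ; only the low bit of %r is meaningful; the operands are allocated to CR bits
  %r = call i8 asm "crand $0, $1, $2", "=^wc,^wc,^wc"(i1 %a, i1 %b)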

llvm-svn: 202657
2014-03-02 18:23:39 +00:00
Elena Demikhovsky 9737e3886b AVX-512: Fixed extract_vector_elt for v8i1 vector
llvm-svn: 202624
2014-03-02 09:19:44 +00:00
Matt Arsenault 2430958182 R600: Add failing control flow tests.
Simple cases hit a variety of problems at -O0.

llvm-svn: 202601
2014-03-01 21:45:41 +00:00
Hal Finkel 46043edc56 Remove extra truncs/exts around i32 bit operations on PPC64
This generalizes the code to eliminate extra truncs/exts around i1 bit
operations to also do the same on PPC64 for i32 bit operations. This eliminates
a fairly prevalent code wart:

int foo(int a) {
  return a == 5 ? 7 : 8;
}

On PPC64, because of the extension implied by the ABI, this would generate:

	cmplwi 0, 3, 5
	li 12, 8
	li 4, 7
	isel 3, 4, 12, 2
	rldicl 3, 3, 0, 32
	blr

where the 'rldicl 3, 3, 0, 32', the extension, is completely unnecessary. At
least for the single-BB case (which is all that the DAG combine mechanism can
handle), this unnecessary extension is no longer generated.

llvm-svn: 202600
2014-03-01 21:36:57 +00:00
Venkatraman Govindaraju 6f2e08c8e1 [Sparc] Add support for parsing directives in SparcAsmParser.
llvm-svn: 202564
2014-03-01 02:18:04 +00:00
Venkatraman Govindaraju f7eecf80c4 [Sparc] Emit 'restore' instead of 'restore %g0, %g0, %g0'. This improves the readability of the generated code.
llvm-svn: 202563
2014-03-01 01:04:26 +00:00
Manman Ren 709c951b42 SpillPlacement: fix a bug in iterate.
Inside iterate, we scan backwards then scan forwards in a loop. When iteration
is not zero, the last node was just updated so we can skip it. But when
iteration is zero, we can't skip the last node.

For the testing case, fixing this will save a spill and move register copies
from hot path to cold path.

llvm-svn: 202557
2014-02-28 23:05:31 +00:00
Tom Stellard d61a1c3360 R600/SI: Expand all v16[if]32 operations
llvm-svn: 202543
2014-02-28 21:36:37 +00:00
Justin Bogner 02b958422c CommandLine: Exit successfully for -version and -help
Tools that use the CommandLine library currently exit with an error
when invoked with -version or -help. This is unusual and non-standard,
so we'll fix them to exit successfully instead.

I don't expect that anyone relies on the current behaviour, so this
should be a fairly safe change.

llvm-svn: 202530
2014-02-28 19:08:01 +00:00
Adam Nemet 6586e5d6ac Test commit
llvm-svn: 202528
2014-02-28 18:44:39 +00:00
Zoran Jovanovic 285cc289e8 Fixed operand of SC microMIPS instruction.
llvm-svn: 202526
2014-02-28 18:22:56 +00:00
Hal Finkel b998915ee1 Swap PPC isel operands to allow for 0-folding
The PPC isel instruction can fold 0 into the first operand (thus eliminating
the need to materialize a zero-containing register when the 'true' result of
the isel is 0). When the isel is fed by a bit register operation that we can
invert, do so as part of the bit-register-operation peephole routine.

llvm-svn: 202469
2014-02-28 06:11:16 +00:00
Hal Finkel 940ab934d4 Add CR-bit tracking to the PowerPC backend for i1 values
This change enables tracking i1 values in the PowerPC backend using the
condition register bits. These bits can be treated on PowerPC as separate
registers; individual bit operations (and, or, xor, etc.) are supported.
Tracking booleans in CR bits has several advantages:

 - Reduction in register pressure (because we no longer need GPRs to store
   boolean values).

 - Logical operations on booleans can be handled more efficiently; we used to
   have to move all results from comparisons into GPRs, perform promoted
   logical operations in GPRs, and then move the result back into condition
   register bits to be used by conditional branches. This can be very
   inefficient because these CR <-> GPR moves have high latency and low
   throughput (especially when other associated instructions are
   accounted for).

 - On the POWER7 and similar cores, we can increase total throughput by using
   the CR bits. CR bit operations have a dedicated functional unit.

Most of this is more-or-less mechanical: Adjustments were needed in the
calling-convention code, support was added for spilling/restoring individual
condition-register bits, and conditional branch instruction definitions taking
specific CR bits were added (plus patterns and code for generating bit-level
operations).

This is enabled by default when running at -O2 and higher. For -O0 and -O1,
where the ability to debug is more important, this feature is disabled by
default. Individual CR bits do not have assigned DWARF register numbers,
and storing values in CR bits makes them invisible to the debugger.

It is critical, however, that we don't move i1 values that have been promoted
to larger values (such as those passed as function arguments) into bit
registers only to quickly turn around and move the values back into GPRs (such
as happens when values are returned by functions). A pair of target-specific
DAG combines are added to remove the trunc/extends in:
  trunc(binary-ops(binary-ops(zext(x), zext(y)), ...))
and:
  zext(binary-ops(binary-ops(trunc(x), trunc(y)), ...))
In short, we only want to use CR bits where some of the i1 values come from
comparisons or are used by conditional branches or selects. To put it another
way, if we can do the entire i1 computation in GPRs, then we probably should
(on the POWER7, the GPR-operation throughput is higher, and for all cores, the
CR <-> GPR moves are expensive).

POWER7 test-suite performance results (from 10 runs in each configuration):

SingleSource/Benchmarks/Misc/mandel-2: 35% speedup
MultiSource/Benchmarks/Prolangs-C++/city/city: 21% speedup
MultiSource/Benchmarks/MiBench/automotive-susan: 23% speedup
SingleSource/Benchmarks/CoyoteBench/huffbench: 13% speedup
SingleSource/Benchmarks/Misc-C++/Large/sphereflake: 13% speedup
SingleSource/Benchmarks/Misc-C++/mandel-text: 10% speedup

SingleSource/Benchmarks/Misc-C++-EH/spirit: 10% slowdown
MultiSource/Applications/lemon/lemon: 8% slowdown

llvm-svn: 202451
2014-02-28 00:27:01 +00:00
Roman Divacky 7a9c6549ba Lower FNEG just like FABS to fneg[ds] and fmov[ds], thus avoiding an
expensive libcall. Also, Qp_neg is not implemented on at least
FreeBSD. This is also what gcc is doing.

llvm-svn: 202422
2014-02-27 19:26:29 +00:00
Adrian Prantl 7072073cc9 Debug info: Remove ARMAsmPrinter::EmitDwarfRegOp(). AsmPrinter can now
scan the register file for sub- and super-registers.
No functionality change intended.

(Tests are updated because the comments in the assembler output are
different.)

llvm-svn: 202416
2014-02-27 17:56:08 +00:00
Richard Osborne 521bdf211d [XCore] Support functions returning more than 4 words.
If a function returns a large struct by value, return the first 4 words
in registers and the rest on the stack in a location reserved by the
caller. This is needed to support the xC language which supports
functions returning an arbitrary number of return values. This is
r202397 reapplied with a fix to avoid an uninitialized read of a member.

llvm-svn: 202414
2014-02-27 17:47:54 +00:00
Richard Osborne 527aa5052d Revert r202396, r202397.
These are causing test failures, revert for now.

llvm-svn: 202398
2014-02-27 14:24:13 +00:00
Richard Osborne e82bf0988e [XCore] Support functions returning more than 4 words.
Summary:
If a function returns a large struct by value, return the first 4 words
in registers and the rest on the stack in a location reserved by the
caller. This is needed to support the xC language which supports
functions returning an arbitrary number of return values.

Reviewers: robertlytton

Reviewed By: robertlytton

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D2889

llvm-svn: 202397
2014-02-27 14:00:40 +00:00
Richard Osborne a283d24ad9 [XCore] Target optimized library function __memcpy_4()
Summary:
If the src, dst, and size of a memcpy are known to be 4-byte aligned, we
can call __memcpy_4() instead of memcpy().
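
A sketch (hypothetical IR) of a call that qualifies, with 4-byte-aligned
pointers and a length that is a multiple of 4:

  ; alignment operand is 4 and the length is a multiple of 4, so this
  ; can be emitted as a call to __memcpy_4() rather than memcpy()
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dst, i8* %src, i32 64, i32 4, i1 false)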

Reviewers: robertlytton

Reviewed By: robertlytton

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D2871

llvm-svn: 202395
2014-02-27 13:39:07 +00:00
Richard Osborne d6e85018c5 [XCore] Add dag combines for instructions that ignore some input bits.
These instructions ignore the high bits of one of their input operands -
try to use this to simplify the code.

llvm-svn: 202394
2014-02-27 13:20:11 +00:00
Richard Osborne 2d3a2bee41 [XCore] Provide information about known zero bits of resource instructions.
llvm-svn: 202393
2014-02-27 13:20:06 +00:00
Daniel Sanders 9f088ba322 Stop test/CodeGen/X86/v4i32load-crash.ll targeting non-X86-64 targets.
Summary:
Fixes an issue where a test attempts to use -mcpu=x86-64 on non-X86-64 targets.
This triggers an assertion in the MIPS backend since it doesn't know what ABI to
use by default for unrecognized processors.

CC: llvm-commits, rafael

Differential Revision: http://llvm-reviews.chandlerc.com/D2877

llvm-svn: 202369
2014-02-27 09:24:31 +00:00
Michel Danzer 9e61c4b6cd R600/SI: Optimize SI_KILL for constant operands
If the SI_KILL operand is constant, we can either clear the exec mask if
the operand is negative, or do nothing otherwise.

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 202337
2014-02-27 01:47:09 +00:00
Michel Danzer 6f273c57db R600/SI: Allow SI_KILL for geometry shaders
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 202336
2014-02-27 01:47:02 +00:00
Andrew Trick 9f240f742b Use regnum regex in an XCore test case.
llvm-svn: 202315
2014-02-26 23:22:49 +00:00
Andrew Trick 2560d11e72 Very temporarily XFAILing a test. Will be fixed shortly.
llvm-svn: 202310
2014-02-26 22:39:59 +00:00
Andrew Trick 52a00936b4 Add a limit to the heuristic that register allocates instructions in local order.
This handles pathological cases in which we see a 2x increase in spill
code for large blocks (~50k instructions). I don't have a unit test
for this behavior.

Fixes rdar://16072279.

llvm-svn: 202304
2014-02-26 22:07:26 +00:00
Quentin Colombet 85c9e16291 Lower unsigned vsetcc to psubus in certain cases
The current approach to lower a vsetult is to flip the sign bit of the
operands, swap the operands and then use a (signed) pcmpgt.  psubus (unsigned
saturating subtract) can be used to emulate a vsetult more efficiently:

+    case ISD::SETULT: {
+      // If the comparison is against a constant we can turn this into a
+      // setule.  With psubus, setule does not require a swap.  This is
+      // beneficial because the constant in the register is no longer
+      // destructed as the destination so it can be hoisted out of a loop.

I also enable lowering via psubus in a few other cases where it's clearly
beneficial: setule and setuge if minu/maxu cannot be used.
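
For reference, the identity the psubus lowering relies on is that, for unsigned
a and b, psubus(a, b) is zero exactly when a <= b, so (sketching the DAG):

  setule(a, b)  ->  pcmpeq(psubus(a, b), 0)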
    
rdar://problem/14338765

Patch by Adam Nemet <anemet@apple.com>.

llvm-svn: 202301
2014-02-26 21:39:12 +00:00
Tim Northover ed9c20681d AArch64: simplify tbl/tbx polymorphism
The table argument is always 128-bit (and interpreted as <16 x i8>) so the
extra specifier for it is just clutter.

No user-visible behaviour change, so no tests.

llvm-svn: 202258
2014-02-26 11:55:09 +00:00
Artyom Skrobov 1a6cd1d912 ARMv8 IfConversion must skip narrow instructions that a) define CPSR and b) wouldn't affect CPSR in an IT block
llvm-svn: 202257
2014-02-26 11:27:28 +00:00
Daniel Sanders 91efd1c464 Stop test/CodeGen/ARM/a15.ll targeting non-ARM targets.
Summary:
Fixes an issue where a test attempts to use -mcpu=cortex-a15 on non-ARM targets.
This triggers an assertion on MIPS since it doesn't know what ABI to use by default for
unrecognized processors.

Reviewers: rengolin

Reviewed By: rengolin

CC: llvm-commits, aemerson, rengolin

Differential Revision: http://llvm-reviews.chandlerc.com/D2876

llvm-svn: 202256
2014-02-26 11:26:18 +00:00
Tom Stellard 1f15bff0df R600/SI: Custom select 64-bit ADD
llvm-svn: 202194
2014-02-25 21:36:18 +00:00
Hal Finkel 22304046c7 Account for 128-bit integer operations in PPCCTRLoops
We need to abort the formation of counter-register-based loops where there are
128-bit integer operations that might become function calls.
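
An illustrative (hypothetical) example is a loop body containing a 128-bit
division, which is expanded to a runtime library call:

  %q = udiv i128 %a, %b    ; expands to a libcall (e.g. __udivti3), so the
                           ; enclosing loop must not use the counter register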

llvm-svn: 202192
2014-02-25 20:51:50 +00:00
Rafael Espindola f863ee2949 Store a DataLayout in Module.
Now that DataLayout is not a pass, store one in Module.

Since the C API expects to be able to get a char* to the datalayout description,
we have to keep a std::string somewhere. This patch keeps it in Module and also
uses it to represent modules without a DataLayout.

Once DataLayout is mandatory, we should probably move the string to DataLayout
itself since it won't be necessary anymore to represent the special case of a
module without a DataLayout.

llvm-svn: 202190
2014-02-25 20:01:08 +00:00
Richard Osborne 50e3b7f759 [XCore] Add intrinsic for CLRPT (clear port time) instruction.
llvm-svn: 202172
2014-02-25 17:31:15 +00:00
Richard Osborne 92fdd3491a [XCore] Add intrinsic for EDU (event disable unconditional) instruction.
llvm-svn: 202171
2014-02-25 17:31:06 +00:00
Logan Chien 18583d71e8 Keep the link register for uwtable.
A function with the uwtable attribute might be visited by the
stack unwinder, thus the link register should be considered
clobbered after the execution of the branch-and-link
instruction (i.e. the definition of the machine instruction
can't be ignored) even when the callee function is marked
with noreturn.

llvm-svn: 202165
2014-02-25 16:57:28 +00:00
Richard Osborne 8b7466e886 [XCore] Prefer to word align functions.
The behaviour of the XCore's instruction buffer means that the performance
of the same code sequence can differ depending on whether it starts at a 4
byte aligned address or not. Since we don't model the instruction buffer
in the backend we have no way of knowing for sure if it is beneficial to
word align a specific function. However, in the absence of precise
modelling, it is better on balance to word align functions because:

* It makes a fetch-nop while executing the prologue slightly less likely.
* If we don't word align functions then a small perturbation in one
  function can have a dramatic knock-on effect. If the size of the function
  changes it might change the alignment and therefore the performance of
  all the functions that happen to follow it in the binary. This butterfly
  effect makes it harder to reason about and measure the performance of
  code.

llvm-svn: 202163
2014-02-25 16:37:15 +00:00
Matt Arsenault 41e2f2bacd R600/SI - Add new CI arithmetic instructions.
Does not yet include the larger part required
to match v_mad_i64_i32 / v_mad_u64_u32.

llvm-svn: 202077
2014-02-24 21:01:28 +00:00
Benjamin Kramer facca1f049 SPARC: Implement TRAP lowering. Matches what GCC emits.
llvm-svn: 201994
2014-02-23 21:43:52 +00:00
Elena Demikhovsky 3ebfe11532 AVX-512: Fixed encoding of VPTESTMQ
llvm-svn: 201980
2014-02-23 14:28:35 +00:00
Benjamin Kramer d20d1adfb8 Make test more resilient against scheduling decisions.
Should bring the atom buildbots back to life.

llvm-svn: 201951
2014-02-22 20:14:02 +00:00
NAKAMURA Takumi 0607c15435 llvm/test/CodeGen/X86/shift-pcmp.ll: Tweak to appease FileCheck. CHECK-LABEL doesn't identify labels magically; it matches independently of the surrounding context.
When targeting PE/COFF, ".def foo" appears before ".short 32".

          .def    foo;
  ...
  .LCPI0_0:
          .short  32
  foo:

CHECK-LABEL searches from the top of the input, not from ".short 32".

llvm-svn: 201931
2014-02-22 07:27:04 +00:00
Quentin Colombet 1627a4159e [CodeGenPrepare] Fix the check of the legality of an instruction.
The API expects an ISD opcode, not an IR opcode.
Fixes a regression for R600.

Related to <rdar://problem/15519855>.

llvm-svn: 201923
2014-02-22 01:06:41 +00:00
Quentin Colombet 4db08df18e [DAGCombiner] PCMP* sets its result to all ones or zeros so we can AND with the
shifted mask rather than masking and shifting separately.

The patch adds this transformation to the DAGCombiner:

  (shl (and (setcc:i8v16 ...) N01C) N1C) -> (and (setcc:i8v16 ...) N01C<<N1C)

<rdar://problem/16054492>

Patch by Adam Nemet <anemet@apple.com>

llvm-svn: 201906
2014-02-21 23:42:41 +00:00
Kevin Qin 07334d37de [AArch64] Add register constraints to avoid generating STLXR and STXR with unpredictable behavior.
llvm-svn: 201841
2014-02-21 07:45:48 +00:00
Oliver Stannard 7b2f2fba7f AArch64: __va_list.__stack must be 8-byte aligned
The va_start macro for AArch64 must set va_list.__stack to the address
following the last named argument on the stack, rounded up to an alignment
of 8 bytes.

llvm-svn: 201797
2014-02-20 17:19:26 +00:00
Daniel Sanders 5a1449dab4 [mips] Make it impossible to have UnknownABI in CodeGen and Integrated Assembler.
Summary:
This removes the need to coerce UnknownABI to the default ABI (O32 for
MIPS32, N64 for MIPS64 [*]) in both MipsSubtarget and MipsAsmParser.

Clang has been updated to disable both possible default ABIs before enabling
the ABI it intends to use.

[*] N64 being the default for MIPS64 is not actually correct.
    However N32 is not fully implemented/tested yet.

Depends on: D2830

Reviewers: jacksprat, matheusalmeida

Reviewed By: matheusalmeida

Differential Revision: http://llvm-reviews.chandlerc.com/D2832
Differential Revision: http://llvm-reviews.chandlerc.com/D2846

llvm-svn: 201792
2014-02-20 14:58:19 +00:00
Daniel Sanders e70897f3ea [mips] Make mips64 the default CPU for the mips64 architecture
Summary:
This is consistent with the integrated assembler.
All mips64 codegen tests previously passed -mcpu. Removed -mcpu from
blez_bgez.ll and const-mult.ll to cover the default case.

Ideally, the two implementations of selectMipsCPU() will be merged but it's
proven difficult to find a home for the function that doesn't cause link errors.
For now, we'll hoist the common functionality into a function and mark it with
FIXME's.

Reviewers: jacksprat, matheusalmeida

Reviewed By: matheusalmeida

Differential Revision: http://llvm-reviews.chandlerc.com/D2830

llvm-svn: 201782
2014-02-20 13:13:33 +00:00
Elena Demikhovsky 2efed98b58 AVX-512: added a lit test for truncate operation
llvm-svn: 201763
2014-02-20 07:34:13 +00:00
Roman Divacky 37136c0333 Expand 64bit {SHL,SHR,SRA}_PARTS on sparcv9.
llvm-svn: 201718
2014-02-19 21:35:39 +00:00
Rafael Espindola daeafb4c2a Add back r201608, r201622, r201624 and r201625
r201608 made llvm correctly handle private globals with MachO. r201622 fixed
a bug in it and r201624 and r201625 were changes for using private linkage,
assuming that llvm would do the right thing.

They all got reverted because r201608 introduced a crash in LTO. This patch
includes a fix for that. The issue was that TargetLoweringObjectFile now has
to be initialized before we can mangle names of private globals. This is
trivially true during the normal codegen pipeline (the asm printer does it),
but LTO has to do it manually.

llvm-svn: 201700
2014-02-19 17:23:20 +00:00
Daniel Sanders e15c1bbb69 [mips] Use multiple FileCheck prefixes rather than run the test multiple times
llvm-svn: 201695
2014-02-19 16:27:36 +00:00
Venkatraman Govindaraju 55824b176a [Sparc] Remove spurious checks from a testcase.
llvm-svn: 201690
2014-02-19 15:57:49 +00:00
Cameron McInally 7b544f0297 Fix AVX512 vector sqrt assembly strings.
llvm-svn: 201681
2014-02-19 15:16:09 +00:00
Daniel Jasper 7e198ad862 Revert r201622 and r201608.
This causes the LLVMgold plugin to segfault. More information on the
replies to r201608.

llvm-svn: 201669
2014-02-19 12:26:01 +00:00
Rafael Espindola b9ea63c551 Avoid an infinite cycle with private linkage and -f{data|function}-sections.
When outputting an object we check its section to find its name, but when
looking for the section with -ffunction-sections we look for the symbol name.

Break the loop by requesting a name with the private prefix when constructing
the section name. This matches the behavior before r201608.

llvm-svn: 201622
2014-02-19 01:28:30 +00:00
Rafael Espindola 09dcc6a536 Fix PR18743.
The IR
@foo = private constant i32 42

is valid, but before this patch we would produce an invalid MachO from it. It
was invalid because it would use an L label in a section where the linker needs
the labels in order to atomize it.

One way of fixing it would be to just reject this IR in the backend, but that
would not be very front end friendly.

What this patch does is use an 'l' prefix in sections where we know the linker
requires symbols in order to atomize them. This allows frontends to just use
private and not worry about which sections they go to or how the linker handles
them.

One small issue with this strategy is that now a symbol name depends on the
section, which is not available before codegen. This is not a problem in
practice. The reason is that it only happens with private linkage, which will
be ignored by the non codegen users (llvm-nm and llvm-ar).

llvm-svn: 201608
2014-02-18 22:24:57 +00:00
Ana Pazos 7c27a265dc [AArch64] Expanded sin, cos, pow with FP vector types inputs
llvm-svn: 201601
2014-02-18 20:31:05 +00:00
Robert Lytton 346e808ec6 XCore target: Handle common linkage
llvm-svn: 201563
2014-02-18 11:21:59 +00:00
Robert Lytton af6c256c34 XCore target: Fix llvm.eh.return and EH info register handling
llvm-svn: 201561
2014-02-18 11:21:48 +00:00
Tim Northover f06df5866f X86: use vpsllvd (& friends) for 16-bit shifts on Haswell
llvm-svn: 201558
2014-02-18 11:15:32 +00:00
Jiangning Liu 742c588edc Fix a typo about lowering AArch64 va_copy.
llvm-svn: 201541
2014-02-18 02:37:42 +00:00
Elena Demikhovsky 750498c77b AVX-512: implemented zext from i1 to i16
llvm-svn: 201502
2014-02-17 07:29:33 +00:00
Mark Seaborn be266aa325 Use 16 byte stack alignment for NaCl on ARM
NaCl's ARM ABI uses 16 byte stack alignment, so set that in
ARMSubtarget.cpp.

Using 16 byte alignment exposes an issue in code generation in which a
varargs function leaves a 4 byte gap between the values of r1-r3 saved
to the stack and the following arguments that were passed on the
stack.  (Previously, this code only needed to support 4 byte and 8
byte alignment.)

With this issue, llc generated:

varargs_func:
        sub     sp, sp, #16
        push    {lr}
        sub     sp, sp, #12
        add     r0, sp, #16   // Should be 20
        stm     r0, {r1, r2, r3}
        ldr     r0, .LCPI0_0  // Address of va_list
        add     r1, sp, #16
        str     r1, [r0]
        bl      external_func

Fix the bug by checking for "Align > 4".  Also simplify the code by
using OffsetToAlignment(), and update comments.

Differential Revision: http://llvm-reviews.chandlerc.com/D2677

llvm-svn: 201497
2014-02-16 18:59:48 +00:00
Nico Rieck a0abeb3548 Fix more broken CHECK lines
llvm-svn: 201493
2014-02-16 13:28:39 +00:00
Nico Rieck 5ba5226ab9 Add extra CHECK prefix to tests with explicit prefix
These tests mistakenly assume that CHECK is still available even if an
explicit prefix is specified.

llvm-svn: 201492
2014-02-16 13:28:15 +00:00
Nico Rieck 35a237d4ed Actually call FileCheck in tests
llvm-svn: 201491
2014-02-16 13:27:39 +00:00
Elena Demikhovsky 1fad075974 AVX-512: simplified BUILD_VECTOR for masks; fixed cmp/test sequence
llvm-svn: 201487
2014-02-16 11:34:23 +00:00
Nico Rieck 7647178738 Fix broken CHECK lines
llvm-svn: 201479
2014-02-16 07:31:05 +00:00
Quentin Colombet 867c550947 [CodeGenPrepare][AddressingModeMatcher] Give up on type promotion if the
transformation does not bring any immediate benefits and would introduce an illegal
operation. 

llvm-svn: 201439
2014-02-14 22:23:22 +00:00
Tom Stellard 728d4172df TargetLowering: n * r where n > 2 should be an illegal addressing mode
llvm-svn: 201433
2014-02-14 21:10:34 +00:00
Reed Kotler 4cdaa7d778 This patch has two main functions:
1) Fix a specific bug when certain conversion functions are called in a program compiled as mips16 with hard float and
the program is linked as C++. In that case two libraries are reversed in the link order with gcc/g++ and clang/clang++ for
mips16, so the proper stubs will not be called. These stubs are normally handled in the Mips16HardFloat pass,
but in this case we don't know at that time that we need to generate the stubs. This must all be handled later in code generation,
so we have moved this functionality to MipsAsmPrinter. When linked as C (gcc or clang) the proper stubs are linked in from libc.

2) Set up the infrastructure to handle 90% of what is in the Mips16HardFloat pass in this new area of MipsAsmPrinter. This is a more
logical place to handle it. We have known for some time that we needed to move the code later and not implement it using
inline asm as we do now, but it was not clear exactly where to do this or what mechanism should be used. Now it is clear to us
how to do this, and this patch contains the infrastructure to move most of this to MipsAsmPrinter; the actual moving will be done
in a follow-on patch. The same infrastructure is used to fix the current bug described in #1. This change was requested by the list
during the original putback of the Mips16HardFloat pass but was not practical for us to do at that time.

llvm-svn: 201426
2014-02-14 19:16:39 +00:00
Artyom Skrobov f6830f47b8 Generate the DWARF stack frame decode operations in the function prologue for ARM/Thumb functions.
Patch by Keith Walker!

llvm-svn: 201423
2014-02-14 17:19:07 +00:00
Kevin Qin edc95ee196 [AArch64 NEON] Fix a bug to avoid using floating type as condition type in lowering SELECT_CC.
llvm-svn: 201395
2014-02-14 09:41:15 +00:00
Hao Liu 7146ef8542 [AArch64]Fix the assertion failure caused by "v1i1 SETCC" DAG node.
As v1i1 is illegal, the type legalizer tries to scalarize such a node. But if the operand types of the SETCC are legal, the scalarization algorithm will cause an assertion failure.
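
A hypothetical reduced example of IR that produces such a node; the i64
operands are legal on AArch64, but the <1 x i1> result is not:

  %c = icmp eq <1 x i64> %a, %b           ; a v1i1 SETCC that must be scalarized
  %r = sext <1 x i1> %c to <1 x i64>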

llvm-svn: 201381
2014-02-14 02:21:56 +00:00
Tom Stellard 967bf5813f R600/SI: Expand all v8[if]32 operations
llvm-svn: 201371
2014-02-13 23:34:15 +00:00
Tom Stellard f16d38cbb5 R600/SI: Add a pattern for i32 anyext
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 201370
2014-02-13 23:34:13 +00:00
Tom Stellard 6c7a7e82a7 R600/SI: Completely Disable TypeRewriter on compute
llvm-svn: 201369
2014-02-13 23:34:12 +00:00
Tom Stellard 80be9650e3 R600/SI: Split global vector loads with more than 4 elements
llvm-svn: 201368
2014-02-13 23:34:10 +00:00
Tom Stellard 60b6153044 R600/SI: Add ShaderType attribute to some tests
llvm-svn: 201367
2014-02-13 23:34:07 +00:00
Rafael Espindola 1f3de49f37 Use __literal16. It has been supported by the linker since 2005.
llvm-svn: 201365
2014-02-13 23:16:11 +00:00
Rafael Espindola 8459762c88 Add triples to try to fix the windows bots.
llvm-svn: 201345
2014-02-13 16:49:47 +00:00
Rafael Espindola 98f0cfdd90 .file is only available on ELF, use a triple instead of -march.
llvm-svn: 201337
2014-02-13 15:38:16 +00:00
Rafael Espindola e6f37b908a "foo" is not a ppc instruction, don't try to parse it.
llvm-svn: 201336
2014-02-13 15:33:35 +00:00
Rafael Espindola 432acf5cef Specify a triple. MachO AArch64 support is missing.
llvm-svn: 201335
2014-02-13 15:30:06 +00:00
Daniel Sanders 753e17629d Re-commit: Demote EmitRawText call in AsmPrinter::EmitInlineAsm() and remove hasRawTextSupport() call
Summary:
AsmPrinter::EmitInlineAsm() will no longer use the EmitRawText() call for
targets with mature MC support. Such targets will always parse the inline
assembly (even when emitting assembly). Targets without mature MC support
continue to use EmitRawText() for assembly output.

The hasRawTextSupport() check in AsmPrinter::EmitInlineAsm() has been replaced
with MCAsmInfo::UseIntegratedAs, which, when true, causes the integrated assembler
to parse inline assembly (even when emitting assembly output). UseIntegratedAs
is set to true for targets that consider any failure to parse valid assembly
to be a bug. Target specific subclasses generally enable the integrated
assembler in their constructor. The default value can be overridden with
-no-integrated-as.

All tests that rely on inline assembly supporting invalid assembly (for example,
those that use mnemonics such as 'foo' or 'hello world') have been updated to
disable the integrated assembler.

Changes since review (and last commit attempt):
- Fixed test failures that were missed due to configuration of local build.
  (fixes crash.ll and a couple others).
- Fixed tests that happened to pass because the local build was on X86
  (should fix 2007-12-17-InvokeAsm.ll)
- mature-mc-support.ll's should no longer require all targets to be compiled.
  (should fix ARM and PPC buildbots)
- Object output (-filetype=obj and similar) now forces the integrated assembler
  to be enabled regardless of default setting or -no-integrated-as.
  (should fix SystemZ buildbots)

Reviewers: rafael

Reviewed By: rafael

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D2686

llvm-svn: 201333
2014-02-13 14:44:26 +00:00
NAKAMURA Takumi 6fe94621b9 llvm/test/CodeGen/AArch64/cpus.ll: Tweak to use -mtriple=aarch64-unknown-unknown, or this would crash for targeting pecoff like *-mingw32.
llvm-svn: 201315
2014-02-13 11:06:23 +00:00
Tim Northover 914af6273b ARM: remove floating-point patterns for @llvm.arm.neon.vabs
The front-end is now generating the generic @llvm.fabs for this
operation now, so the extra patterns are no longer needed.

llvm-svn: 201314
2014-02-13 10:44:30 +00:00
Oliver Stannard 5bbb72f37e Add Cortex-A53 and Cortex-A57 cores to the AArch64 backend
llvm-svn: 201305
2014-02-13 09:46:11 +00:00
Hao Liu 7b6dfcf06a [AArch64] Fix the problem that mul/add/sub of v1i8/v1i16/v1i32 types cannot be selected.
As the same problem exists for shl/sra/srl, also add patterns for the shift nodes.

llvm-svn: 201298
2014-02-13 05:42:33 +00:00
Juergen Ributzka 2b97f9b211 [DAG] Fix the recognition of opaque constants in the SelectionDAGBuilder.
This fix checks the original LLVM IR node to identify opaque constants by
looking for the bitcast-constant pattern. Originally we looked at the generated
SDNode, but this might lead to incorrect results. The SDNode could have been
generated by a constant expression that was folded to a constant.

This fixes <rdar://problem/16050719>

llvm-svn: 201291
2014-02-13 04:19:26 +00:00
Hao Liu 4f345f3c03 [AArch64]Add support for spilling FPR8/FPR16.
llvm-svn: 201287
2014-02-13 02:36:58 +00:00
Andrea Di Biagio 386d566395 [X86] Teach the backend how to lower vector shift left into multiply rather than scalarizing it.
Instead of expanding a packed shift into a sequence of scalar shifts,
the backend now tries (when possible) to convert the vector shift into a
vector multiply.

Before this change, a shift of an MVT::v8i16 vector by a
build_vector of constants was always scalarized into a long sequence of "vector
extracts + scalar shifts + vector insert".
With this change, if there is SSE2 support, we emit a single vector multiply.
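
A before/after sketch of the v8i16 case (constants chosen for illustration):

  ; before: scalarized into vector extracts, scalar shifts and vector inserts
  %r = shl <8 x i16> %x, <i16 0, i16 1, i16 2, i16 3, i16 4, i16 5, i16 6, i16 7>
  ; after: a single multiply by the corresponding powers of two (pmullw)
  %r = mul <8 x i16> %x, <i16 1, i16 2, i16 4, i16 8, i16 16, i16 32, i16 64, i16 128>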

This change also affects SSE4.1, AVX, AVX2 shifts:
 - A shift of an MVT::v4i32 vector by a build_vector of non-uniform constants
is now lowered when possible into a single SSE4.1 vector multiply.
 - Packed v16i16 shifts left by a constant build_vector are now expanded when
possible into a single AVX2 vpmullw.
This change also improves the lowering of AVX512f vector shifts.

Added test CodeGen/X86/vec_shift6.ll with some code examples that are affected
by this change.

llvm-svn: 201271
2014-02-12 23:42:28 +00:00
Akira Hatanaka a07ffb5b31 Pass edges weights to MachineBasicBlock::addSuccessor in TailDuplicatePass to
preserve branch probability information.

<rdar://problem/15893208>

llvm-svn: 201245
2014-02-12 18:09:18 +00:00
Daniel Sanders abe212a3b8 Revert r201237+r201238: Demote EmitRawText call in AsmPrinter::EmitInlineAsm() and remove hasRawTextSupport() call
It introduced multiple test failures in the buildbots.

llvm-svn: 201241
2014-02-12 15:39:20 +00:00
Daniel Sanders a7d504cf58 Demote EmitRawText call in AsmPrinter::EmitInlineAsm() and remove hasRawTextSupport() call
Summary:
AsmPrinter::EmitInlineAsm() will no longer use the EmitRawText() call for targets with mature MC support. Such targets will always parse the inline assembly (even when emitting assembly). Targets without mature MC support continue to use EmitRawText() for assembly output.

The hasRawTextSupport() check in AsmPrinter::EmitInlineAsm() has been replaced with MCAsmInfo::UseIntegratedAs, which, when true, causes the integrated assembler to parse inline assembly (even when emitting assembly output). UseIntegratedAs is set to true for targets that consider any failure to parse valid assembly to be a bug. Target-specific subclasses generally enable the integrated assembler in their constructor. The default value can be overridden with -no-integrated-as.

All tests that rely on inline assembly supporting invalid assembly (for example, those that use mnemonics such as 'foo' or 'hello world') have been updated to disable the integrated assembler.

Reviewers: rafael

Reviewed By: rafael

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D2686

llvm-svn: 201237
2014-02-12 14:44:54 +00:00
Evan Cheng 57add3e4ee Tweak ARM fastcc by adopting these two AAPCS rules:
* CPRCs may be allocated to co-processor registers or the stack – they may never be allocated to core registers
* When a CPRC is allocated to the stack, all other VFP registers should be marked as unavailable

The difference is only noticeable in rare cases where there are a large number of floating point arguments (e.g.
7 doubles plus additional float or double arguments), although it's probably still better to avoid vmov as it can cause
stalls in some older ARM cores. The other, more subtle benefit is to minimize the difference between the various
calling conventions.

rdar://16039676

llvm-svn: 201193
2014-02-11 23:49:31 +00:00
David Blaikie 6bd395f3f0 DebugInfo: Remove dependence on file numbering in the line table.
These tests were unnecessarily sensitive to the presence and ordering of
elements in the line table file_names list which will break on a future
change I'm working on.

llvm-svn: 201185
2014-02-11 21:46:46 +00:00
Matt Arsenault 71b71d25eb R600/SI: Fix assertion on infinite loops.
This isn't the most useful case to fix in the real world,
but bugpoint runs into this.

llvm-svn: 201177
2014-02-11 21:12:38 +00:00
Robert Lougher 7d9084ffa1 Teach the DAGCombiner how to fold concat_vector nodes when the input is two
BUILD_VECTOR nodes, e.g.:

(concat_vectors (BUILD_VECTOR a1, a2, a3, a4), (BUILD_VECTOR b1, b2, b3, b4))
->
(BUILD_VECTOR a1, a2, a3, a4, b1, b2, b3, b4)

This fixes an issue with AVX, where a sequence was not recognized as a 256-bit
vbroadcast due to the concat_vectors.

llvm-svn: 201158
2014-02-11 15:42:46 +00:00
Robert Lytton 70b5ba49c3 XCore target: fix const section handling
The XCore target ABI requires const data that is externally visible
to be handled differently if it has C-language linkage rather than
C++ language linkage.

Clang now emits ".cp.rodata" section information.

All other externally visible constant data will be placed in the DP section.

llvm-svn: 201144
2014-02-11 10:36:26 +00:00
Robert Lytton 9b6bb461b1 XCore target: Lower ATOMIC_LOAD & ATOMIC_STORE
llvm-svn: 201143
2014-02-11 10:36:18 +00:00
Elena Demikhovsky 1f32c313f1 AVX: fixed a bug in LowerVECTOR_SHUFFLE
llvm-svn: 201140
2014-02-11 10:21:53 +00:00
Elena Demikhovsky 2aafc22ed9 AVX-512: Optimized BUILD_VECTOR pattern;
fixed encoding of VEXTRACTPS instruction.

llvm-svn: 201134
2014-02-11 07:25:59 +00:00
Quentin Colombet 64ca544d95 [CodeGenPrepare] Test case for the promotions that bypass the
profitability check due to some other checks in the addressing
mode matcher. I.e., test case for commit r201121.

<rdar://problem/16020230>

llvm-svn: 201132
2014-02-11 06:55:43 +00:00
Tom Stellard 5d7aaaed7d R600/SI: Initialize M0 and emit S_WQM_B64 whenever DS instructions are used
DS instructions that access local memory can only use addresses that
are less than or equal to the value of M0.  When M0 is uninitialized,
we get undefined behavior.

This patch also changes the behavior to emit S_WQM_B64 on pixel shaders
no matter what kind of DS instruction is used.

llvm-svn: 201097
2014-02-10 16:58:30 +00:00
Tim Northover b0430415e6 ARM: use natural LLVM IR for vshll instructions
Similarly to the vshrn instructions, these are simple zext/sext + shift
operations. Using normal LLVM IR should allow for better code, and more sharing
with the AArch64 backend.

llvm-svn: 201093
2014-02-10 16:20:29 +00:00
Oliver Stannard 8dcaa761a2 ARM: r12 is callee-saved for interrupt handlers
For A- and R-class processors, r12 is not normally callee-saved, but is for
interrupt handlers. See AAPCS, 5.3.1.1, "Use of IP by the linker".

llvm-svn: 201089
2014-02-10 14:24:23 +00:00
Tim Northover 170daafe01 ARM: use LLVM IR to represent the vshrn operation
vshrn is just the combination of a right shift and a truncate (and the limits
on the immediate value actually mean the signedness of the shift doesn't
matter). Using that representation allows us to get rid of an ARM-specific
intrinsic, share more code with AArch64 and hopefully get better code out of
the mid-end optimisers.
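
A sketch of the generic IR that replaces the intrinsic (shift amount chosen for
illustration):

  %s = lshr <8 x i16> %x, <i16 5, i16 5, i16 5, i16 5, i16 5, i16 5, i16 5, i16 5>
  %n = trunc <8 x i16> %s to <8 x i8>     ; together these match vshrn.i16 by #5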

llvm-svn: 201085
2014-02-10 14:04:07 +00:00
Robert Lougher 48ee75b7e3 Test commit - added a new line to vec_shuf-insert.ll.
llvm-svn: 201083
2014-02-10 12:42:13 +00:00
Matheus Almeida 4b27eb588c [mips][msa] Add DLSA instruction.
llvm-svn: 201081
2014-02-10 12:05:17 +00:00
Matheus Almeida b4133b25e7 [mips][msa] Update FileCheck prefix in preparation for
the addition of Mips64 tests.

No functional changes.

llvm-svn: 201080
2014-02-10 11:30:09 +00:00
Elena Demikhovsky 9f423d6f25 AVX-512: Fixed extract_vector_elt for v16i1 and v8i1 vectors.
llvm-svn: 201066
2014-02-10 07:02:39 +00:00
Hao Liu 6e73761dc8 [AArch64]Implement the copy of two FPR8 registers by using FMOVss of two FPR32 registers in copyPhysReg.
llvm-svn: 201061
2014-02-10 03:16:22 +00:00
Rafael Espindola 5054362920 Always create a temporary symbol to use with the cfi frame.
This is a small simplification and a small step in fixing pr18743 since
private functions on MachO should be using an 'l' prefix.

llvm-svn: 200994
2014-02-07 21:23:18 +00:00
Rafael Espindola e393aab13b Use FileCheck variables to simplify this test.
llvm-svn: 200992
2014-02-07 21:11:33 +00:00
Renato Golin 3dc5ade8bb Fix Darwin bots from EHABI change
llvm-svn: 200990
2014-02-07 20:32:32 +00:00
Matt Arsenault 9ad27e8d76 R600/SI: Add failing test for 3 x i64 vectors.
Stores of <4 x i64> do work (although they do expand to 4 stores
instead of 2), but 3 x i64 vectors fail to select.

llvm-svn: 200989
2014-02-07 20:29:40 +00:00
Renato Golin 78a6eba862 Remove -arm-disable-ehabi option
llvm-svn: 200988
2014-02-07 20:12:49 +00:00
Sasa Stankovic 4c80bdae72 [mips] Forbid the use of registers t6, t7 and t8 if the target is NaCl.
Differential Revision: http://llvm-reviews.chandlerc.com/D2694

llvm-svn: 200978
2014-02-07 17:16:40 +00:00
Rafael Espindola 61acf5d9b0 Fix a bug with .weak_def_can_be_hidden: Mutable variables cannot use it.
Thanks to John McCall for noticing it.

llvm-svn: 200977
2014-02-07 16:21:30 +00:00
Oliver Stannard 1dc1034218 LLVM-1163: AAPCS-VFP violation when CPRC allocated to stack
According to the AAPCS, when a CPRC is allocated to the stack, all other
VFP registers should be marked as unavailable.

I have also modified the rules for allocating non-CPRCs to the stack, to make
it more explicit that all GPRs must be made unavailable. I cannot think of a
case where the old version would produce incorrect answers, so there is no test
for this.

llvm-svn: 200970
2014-02-07 11:19:53 +00:00
Venkatraman Govindaraju fd07500dd1 [Sparc] Emit relocations for Thread Local Storage (TLS) when integrated assembler is used.
llvm-svn: 200962
2014-02-07 05:54:20 +00:00
Venkatraman Govindaraju 104643d0aa [Sparc] Emit correct relocations for PIC code when integrated assembler is used.
llvm-svn: 200961
2014-02-07 04:24:35 +00:00
Manman Ren 37c9267107 PGO branch weight: fix PR18752.
Fix a bug triggered in IfConverterTriangle when CvtBB has multiple predecessors
by getting the weights before removing a successor.

llvm-svn: 200958
2014-02-07 00:38:56 +00:00
Jim Grosbach e9008de652 X86: Resolve a long standing FIXME and properly isel pextr[bw].
Generalize the AArch64 .td nodes for AssertZext and AssertSext. Use
them to match the relevant pextr store instructions.

The test widen_load-2.ll requires a slight change because with the
stores gone, the remaining instructions are scheduled in a different
order.

Add test cases for SSE4 and AVX variants.

Resolves rdar://13414672.

Patch by Adam Nemet <anemet@apple.com>.

llvm-svn: 200957
2014-02-07 00:16:33 +00:00
Rafael Espindola 803fb10874 Convert test to FileCheck.
llvm-svn: 200955
2014-02-06 23:35:22 +00:00
Quentin Colombet 3a4bf0405e [CodeGenPrepare] Move away sign extensions that get in the way of addressing
mode.

Basically the idea is to transform code like this:
%idx = add nsw i32 %a, 1
%sextidx = sext i32 %idx to i64
%gep = gep i8* %myArray, i64 %sextidx
load i8* %gep

Into:
%sexta = sext i32 %a to i64
%idx = add nsw i64 %sexta, 1
%gep = gep i8* %myArray, i64 %idx
load i8* %gep

That way the computation can be folded into the addressing mode.

This transformation is done as part of the addressing mode matcher.
If the matching fails (not profitable, addressing mode not legal, etc.), the
matcher will revert the related promotions.

<rdar://problem/15519855>

llvm-svn: 200947
2014-02-06 21:44:56 +00:00
Tom Stellard e236794578 R600/SI: Add a MUBUF store pattern for Reg+Imm offsets
llvm-svn: 200935
2014-02-06 18:36:41 +00:00
Tom Stellard 2937cbc005 R600/SI: Add a MUBUF store pattern for Imm offsets
llvm-svn: 200934
2014-02-06 18:36:39 +00:00
Tom Stellard 11624bc577 R600/SI: Add a MUBUF load pattern for Reg+Imm offsets
llvm-svn: 200933
2014-02-06 18:36:38 +00:00
Tom Stellard 044e418f15 R600/SI: Use immediates offsets for SMRD instructions whenever possible
There was a problem with the old pattern, so we were copying some
larger immediates into registers when we could have been encoding
them in the instruction.

llvm-svn: 200932
2014-02-06 18:36:34 +00:00
Juergen Ributzka fa0eba6c8b [DAG] Don't pull the binary operation though the shift if the operands have opaque constants.
During DAGCombine visitShiftByConstant assumes that certain binary operations
with only constant operands can always be folded successfully. This is no longer
true when the constant is opaque. This commit fixes visitShiftByConstant by not
performing the optimization for opaque constants. Otherwise we would end up in
an infinite DAGCombine loop.

llvm-svn: 200900
2014-02-06 04:09:06 +00:00
Quentin Colombet 87769713cf [RegAlloc] Add a last chance recoloring mechanism when everything else failed to
find a register.

The idea is to choose a color for the variable that cannot be allocated and
recolor its interferences around. Unlike the current register allocation scheme,
it is allowed to change the color of an already assigned (but maybe not
splittable or spillable) live interval while propagating this change to its
neighbors.
In other words, there are two things that may help find an available color:
- Already assigned variables (RS_Done) can be recolored to a different color.
- The recoloring allows us to catch solutions that need to touch more than just
  the neighbors of the currently allocated variable.

E.g.,
vA can use {R1, R2    }
vB can use {    R2, R3}
vC can use {R1        }
Where vA, vB, and vC cannot be split anymore (they are reloads for instance) and
they all interfere.

vA is assigned R1
vB is assigned R2
vC tries to evict vA but vA is already done.
=> Regular register allocation heuristic fails.

Last chance recoloring kicks in:
vC does as if vA was evicted => vC uses R1.
vC is marked as fixed.
vA needs to find a color.
None are available.
vA cannot evict vC: vC is a fixed virtual register now.
vA does as if vB was evicted => vA uses R2.
vB needs to find a color.
R3 is available.
Recoloring => vC = R1, vA = R2, vB = R3.

<rdar://problem/15947839>

llvm-svn: 200883
2014-02-05 22:13:59 +00:00
Rafael Espindola b4eec1daa1 Remove support for not using .loc directives.
Clang itself was not using this. The only way to access it was via llc.

llvm-svn: 200862
2014-02-05 18:00:21 +00:00
Petar Jovanovic 9725016af3 [mips] Add NaCl target and forbid indexed loads and stores for it
This patch adds NaCl target for Mips. It also forbids indexed loads and
stores if the target is NaCl.

Patch by Sasa Stankovic.

Differential Revision: http://llvm-reviews.chandlerc.com/D2690

llvm-svn: 200855
2014-02-05 17:19:30 +00:00
Elena Demikhovsky 0b79be8ab2 AVX-512: optimized icmp -> sext -> icmp pattern
llvm-svn: 200849
2014-02-05 16:17:36 +00:00
Michel Danzer 5d26fdfcba R600/SI: Add pattern for zero-extending i1 to i32
Fixes opencl-example if_* tests with radeonsi.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=74469

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 200830
2014-02-05 09:48:05 +00:00
Elena Demikhovsky a30e437659 AVX-512: Added intrinsic for cvtph2ps.
Added VPTESTNM instruction.
Added a pattern to vselect (lit tests will follow).

llvm-svn: 200823
2014-02-05 07:05:03 +00:00
David Peixotto b9b7362cdc Fix PR18345: ldr= pseudo instruction produces incorrect code when using in inline assembly
This patch fixes the ldr-pseudo implementation to work when used in
inline assembly.  The fix is to move arm assembler constant pools
from the ARMAsmParser class to the ARMTargetStreamer class.

Previously we kept the assembler generated constant pools in the
ARMAsmParser object. This does not work for inline assembly because
a new parser object is created for each blob of inline assembly.
This patch moves the constant pools to the ARMTargetStreamer class
so that the constant pool will remain alive for the entire code
generation process.

An ARMTargetStreamer class is now required for the arm backend.
There was no existing implementation for MachO, only Asm and ELF.
Instead of creating an empty MachO subclass, we decided to make the
ARMTargetStreamer a non-abstract class and provide default
(llvm_unreachable) implementations for the non constant-pool related
methods.

Differential Revision: http://llvm-reviews.chandlerc.com/D2638

llvm-svn: 200777
2014-02-04 17:22:40 +00:00
Tom Stellard 0ec134f3d6 R600/SI: Custom lower i64 ISD::SELECT
llvm-svn: 200774
2014-02-04 17:18:40 +00:00