Commit Graph

69889 Commits

Author SHA1 Message Date
Arnaud A. de Grandmaison 6a90dc4f30 Factor out comparison of Instruction "special" states.
No functional change.

llvm-svn: 209688
2014-05-27 21:35:46 +00:00
David Blaikie 6900674aaf DebugInfo: partially revert cleanup committed in r209680
I'm not sure exactly where/how we end up with an abstract DbgVariable
with a null DIE, but we do... looking into it & will add a test and/or
fix when I figure it out.

Currently shows up in selfhost or compiler-rt builds.

llvm-svn: 209683
2014-05-27 20:20:43 +00:00
David Blaikie b85f0080e7 DebugInfo: Simplify solution to avoid DW_AT_artificial on inlined parameters.
Originally committed in r207717, I clearly didn't look very closely at
the code to understand how existing things were working...

llvm-svn: 209680
2014-05-27 19:34:32 +00:00
Sasa Stankovic e41db2fe31 [mips] Optimize long branch for MIPS64 by removing %higher and %highest.
%higher and %highest can have non-zero values only for offsets greater
than 2GB, which is highly unlikely, if not impossible, when compiling a
single function. This makes the MIPS64 long branch sequence three
instructions shorter.

Differential Revision: http://llvm-reviews.chandlerc.com/D3281.diff

llvm-svn: 209678
2014-05-27 18:53:06 +00:00
David Blaikie 482097d098 DebugInfo: Create abstract function definitions even when concrete definitions precede inline definitions.
After much puppetry, here's the major piece of the work to ensure that
even when a concrete definition precedes all inline definitions, an
abstract definition is still created and referenced from both concrete
and inline definitions.

Variables are still broken in this case (see comment in
dbg-value-inlined-parameter.ll test case) and will be addressed in
follow up work.

llvm-svn: 209677
2014-05-27 18:37:55 +00:00
David Blaikie 2910f62084 DebugInfo: Avoid an extra map lookup when finding abstract subprogram DIEs.
llvm-svn: 209676
2014-05-27 18:37:51 +00:00
David Blaikie 3c2fff3fe6 DebugInfo: Lazily construct subprogram definition DIEs.
A further step toward correctly emitting concrete out-of-line
definitions preceding inlined instances of the same program.

To do this, emission of subprograms must be delayed until required,
since at the start of the module we don't know which DIEs are needed:
abstract only (if there's no out-of-line definition), concrete only (if
there are no inlined instances), or both.

To reduce the test churn in the following commit that actually fixes the
bug, this commit introduces the lazy DIE construction and cleans up test
cases that are impacted by the changes in the resulting DIE ordering.

llvm-svn: 209675
2014-05-27 18:37:48 +00:00
David Blaikie f7221adb8e DebugInfo: Lazily attach definition attributes to definitions.
This is a precursor to fixing inlined debug info where the concrete,
out-of-line definition may precede any inlined usage. To cope with this,
the attributes that may appear on the concrete definition or the
abstract definition are delayed until the end of the module. Then, if an
abstract definition was created, it is referenced (and no other
attributes are added to the out-of-line definition), otherwise the
attributes are added directly to the out-of-line definition.

In a couple of cases this causes not just reordering of attributes, but
reordering of types. When the creation of the attribute is delayed, if
that creation would create a type (such as for a DW_AT_type attribute)
then other top-level DIEs may have been constructed during the delay,
causing the referenced type to be created and added after those
intervening DIEs. In the extreme case, in cross-cu-inlining.ll, this
actually causes the DW_TAG_basic_type for "int" to move from one CU to
another.

llvm-svn: 209674
2014-05-27 18:37:43 +00:00
David Blaikie 7f91686f07 DebugInfo: Separate out the addition of subprogram attribute additions so that they can be added later depending on whether or not the function is inlined.
llvm-svn: 209673
2014-05-27 18:37:38 +00:00
Jingyue Wu 80a738dc62 Distribute sext/zext to the operands of and/or/xor
This is an enhancement to SeparateConstOffsetFromGEP. With this patch, we can
extract a constant offset from "s/zext and/or/xor A, B".
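
A rough sketch of the kind of pattern this enables (a hand-written
illustration, not the committed @ext_or test):

  define float @ext_or(float* %p, i32 %a) {
    %b = shl i32 %a, 2
    %b.or = or i32 %b, 1           ; low bit of %b is known zero
    %idx = sext i32 %b.or to i64   ; sext(or %b, 1) == sext(%b) + 1
    %gep = getelementptr float* %p, i64 %idx
    %v = load float* %gep
    ret float %v
  }
  ; The pass can now peel the constant 1 out of the extended index and
  ; fold it into the final address computation.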

Added a new test @ext_or to verify this enhancement.

While refactoring the code, I also extracted some common logic into the
function Distributable.

llvm-svn: 209670
2014-05-27 18:00:00 +00:00
Filipe Cabecinhas e8d6a1e82f Post-commit fixes for r209643
Detected by Daniel Jasper, Ilia Filippov, and Andrea Di Biagio
Fixed the argument order to select (the mask semantics to blendv* are the
inverse of select) and fixed the tests
Added parenthesis to the assert condition
Ran clang-format

llvm-svn: 209667
2014-05-27 16:54:33 +00:00
Bill Schmidt 71dddd51d9 [PATCH] Correct type used for VADD_SPLAT optimization on PowerPC
In PPCISelLowering.cpp: PPCTargetLowering::LowerBUILD_VECTOR(), there
is an optimization for certain patterns to generate one or two vector
splats followed by a vector add or subtract.  This operation is
represented by a VADD_SPLAT in the selection DAG.  Prior to this
patch, it was possible for the VADD_SPLAT to be assigned the wrong
data type, causing incorrect code generation.  This patch corrects the
problem.

Specifically, the code previously assigned the value type of the
BUILD_VECTOR node to the newly generated VADD_SPLAT node.  This is
correct much of the time, but not always.  The problem is that the
call to isConstantSplat() may return a SplatBitSize that is not the
same as the number of bits in the original vector's element type.  The
correct type to assign is a vector type with the same element bit size
as SplatBitSize.

The included test case shows an example of this, where the
BUILD_VECTOR node has a type of v16i8.  The vector to be built is {0,
16, 0, 16, 0, 16, 0, 16, 0, 16, 0, 16, 0, 16, 0, 16}.  isConstantSplat
detects that we can generate a splat of 16 for type v8i16, which is
the type we must assign to the VADD_SPLAT node.  If we do not, we
generate a vspltisb of 8 and a vaddubm, which generates the incorrect
result {16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
16}.  The correct code generation is a vspltish of 8 and a vadduhm.

This patch also corrected code generation for
CodeGen/PowerPC/2008-07-10-SplatMiscompile.ll, which had been marked
as an XFAIL, so we can remove the XFAIL from the test case.
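
A sketch of the difference (register operands simplified; my
illustration, not the committed test):

  ; <16 x i8> {0,16, 0,16, ...} is really a v8i16 splat of 16 (0x0010 in
  ; each big-endian halfword):
  ;
  ;   correct:    vspltish 2, 8     ; eight halfwords of 8
  ;               vadduhm  2, 2, 2  ; eight halfwords of 16 = bytes {0,16,...}
  ;
  ;   incorrect:  vspltisb 2, 8     ; sixteen bytes of 8
  ;               vaddubm  2, 2, 2  ; sixteen bytes of 16 -- wrong result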

llvm-svn: 209662
2014-05-27 15:57:51 +00:00
Zoran Jovanovic b355e8f604 [mips][mips64r6] Add Relocations R_MIPS_PCHI16, R_MIPS_PCLO16
Differential Revision: http://reviews.llvm.org/D3860

llvm-svn: 209659
2014-05-27 14:58:51 +00:00
Amara Emerson ceeb1c4830 [ARM] Emit correct build attributes for the relocation models.
Patch by Asiri Rathnayake.

llvm-svn: 209656
2014-05-27 13:30:21 +00:00
Zoran Jovanovic 10e06da031 [mips][mips64r6] Add relocations R_MIPS_PC21_S2, R_MIPS_PC26_S2
Differential Revision: http://reviews.llvm.org/D3824

llvm-svn: 209655
2014-05-27 12:55:40 +00:00
Evgeniy Stepanov 47b1a95f1c [asancov] Emit an initializer passing number of coverage code locations in each module.
llvm-svn: 209654
2014-05-27 12:39:31 +00:00
Tim Northover 1bed9afd30 AArch64: implement copies to/from NZCV as a last ditch effort.
A test in test/Generic creates a DAG where the NZCV output of an ADCS is used
by multiple nodes. This makes LLVM want to save a copy of NZCV for later, which
it couldn't do before.

This should be the last fix required for the aarch64 buildbot.

llvm-svn: 209651
2014-05-27 12:16:02 +00:00
Tim Northover 4f1909f1da ARM: teach AAPCS-VFP to deal with Cortex-M4.
Cortex-M4 only has single-precision floating point support, so any LLVM
"double" type will have been split into 2 i32s by now. Fortunately, the
consecutive-register framework turns out to be precisely what's needed to
reconstruct the double and follow AAPCS-VFP correctly!

rdar://problem/17012966

llvm-svn: 209650
2014-05-27 10:43:38 +00:00
Daniel Jasper 73458c95ac Fix bad assert.
llvm-svn: 209648
2014-05-27 09:55:37 +00:00
Tim Northover 4719041db7 AArch64: support 'c' and 'n' inline asm modifiers.
These are tested by test/CodeGen/Generic, so we should probably know
how to deal with them. Fortunately generic code does it if asked.
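
For instance (a sketch; the asm string is illustrative only):

  define i32 @f(i32 %a) {
    ; ${2:c} prints the immediate operand without target punctuation,
    ; while ${2:n} would print its negated value instead.
    %r = call i32 asm "add $0, $1, #${2:c}", "=r,r,i"(i32 %a, i32 42)
    ret i32 %r
  }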

llvm-svn: 209646
2014-05-27 07:37:21 +00:00
Filipe Cabecinhas 82ac07c283 Convert some X86 blendv* intrinsics into IR.
Summary:
Implemented an InstCombine transformation that takes a blendv* intrinsic
call and translates it into an IR select, if the mask is constant.

This will eventually get lowered into blends with immediates if possible,
or pblendvb (with an option to further optimize if we can transform the
pblendvb into a blend+immediate instruction, depending on the selector).
It will also enable optimizations by IR passes, which otherwise give up
at the sight of the intrinsic.

Both the transformation and the lowering of its result to asm got shiny
new tests.

The transformation is a bit convoluted because of blendvp[sd]'s
definition:

Its mask is a floating point value! This forces us to convert it and get
the highest bit. I suppose this happened because the mask has type
__m128 in Intel's intrinsic and v4sf (for blendps) in gcc's builtin.

I will send an email to llvm-dev to discuss if we want to change this or
not.
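
Roughly, for a constant mask (my sketch -- note the operand order, since
blendv takes elements from the second source where the mask's sign bit
is set, the inverse of select):

  ; before:
  %r = call <4 x float> @llvm.x86.sse41.blendvps(<4 x float> %a,
         <4 x float> %b,
         <4 x float> <float -1.0, float 0.0, float -1.0, float 0.0>)
  ; after:
  %r = select <4 x i1> <i1 true, i1 false, i1 true, i1 false>,
              <4 x float> %b, <4 x float> %a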

Reviewers: grosbach, delena, nadav

Differential Revision: http://reviews.llvm.org/D3859

llvm-svn: 209643
2014-05-27 03:42:20 +00:00
Rafael Espindola 19913ee160 Use existing helper function.
No functionality change.

llvm-svn: 209639
2014-05-26 19:57:55 +00:00
Rafael Espindola ac69cee6a2 [PPC] Use alias symbols in address computation.
This seems to match what gcc does for ppc and what every other llvm
backend does.

llvm-svn: 209638
2014-05-26 19:08:19 +00:00
Tim Northover 68ae503de9 AArch64: force i1 to be zero-extended at an ABI boundary.
This commit is debatable. There are two possible approaches, neither
of which is really satisfactory:

1. Use "@foo(i1 zeroext)" to mean an extension to 32-bits on Darwin,
   and 8 bits otherwise.
2. Redefine "@foo(i1)" to mean that the i1 is extended by the caller
   to 8 bits. This goes against the spirit of "zeroext" I think, but
   it's a bit of a vague construct anyway (by definition you're going
   to extend to the amount required by the ABI, that's why it's the
   ABI!).

This implements option 2. The DAG machinery really isn't set up for the
first (there's a fairly strong assumption that "zeroext" goes to at
least the smallest register size), and even if it were, the resulting
DAG looks like it would be inferior in many cases.

Theoretically we could add AssertZext nodes in the consumers of
ABI-passed values too now, but this actually seems to make the code
worse in practice by making truncation proceed in two steps. The code
produced is equally valid if we continue to assume only the low bit is
defined.
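
A sketch of what option 2 means at a call site (the AArch64 instruction
shown is an assumption, for illustration):

  declare void @foo(i1)

  define void @caller(i32 %x) {
    %b = trunc i32 %x to i1
    ; the caller is now responsible for extending %b to 8 bits first,
    ; e.g. with something like "and w0, w0, #0x1" before the bl
    call void @foo(i1 %b)
    ret void
  }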

Should fix PR19850

llvm-svn: 209637
2014-05-26 17:22:07 +00:00
Tim Northover 47e003c65d AArch64: simplify calling conventions slightly.
We can eliminate the custom C++ code in favour of some TableGen to
check the same things. Functionality should be identical, except for a
buffer overrun that was present in the C++ code and meant WebKit
failed if any small argument needed to be passed on the stack.

llvm-svn: 209636
2014-05-26 17:21:53 +00:00
Michael Zolotukhin 265dfa411c Some cleanup for r209568.
llvm-svn: 209634
2014-05-26 14:49:46 +00:00
Rafael Espindola acef6c776b Convert a few loops to use ranges.
llvm-svn: 209628
2014-05-26 13:38:51 +00:00
Kostya Serebryany 4d237a8503 [asan] decrease asan-instrumentation-with-call-threshold from 10000 to 7000, see PR17409
llvm-svn: 209623
2014-05-26 11:57:16 +00:00
Owen Anderson 115aa160e6 Make the LoopRotate pass's maximum header size configurable both programmatically
and via the command line, mirroring similar functionality in LoopUnroll.  In
situations where clients used custom unrolling thresholds, their intent could
previously be foiled by LoopRotate having a hardcoded threshold.

llvm-svn: 209617
2014-05-26 08:58:51 +00:00
David Blaikie ab53c91010 DwarfUnit: Remove some misleading no-op code introduced in r204162.
Post commit review feedback from Manman called this out, but it looks
like it slipped through the cracks.

llvm-svn: 209611
2014-05-26 05:32:21 +00:00
David Blaikie ea86226774 DebugInfo: Fix inlining with #file directives a little harder
Seems my previous fix was insufficient - we were still not adding the
inlined function to the abstract scope list. Which meant it wasn't
flagged as inline, didn't have nested lexical scopes in the abstract
definition, and didn't have abstract variables - so the inlined variable
didn't reference an abstract variable, instead being described
completely inline.

llvm-svn: 209602
2014-05-25 18:11:35 +00:00
Rafael Espindola 4a04c4b69c Emit data or code export directives based on the type.
Currently we look at the Aliasee to decide what type of export
directive to use. It seems better to use the type of the alias
directly. This is similar to how we handle the alias having the
same address but other attributes (linkage, visibility) from the
aliasee.

With this patch it is now possible to do things like

target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
target triple = "x86_64-pc-windows-msvc"
@foo = global [6 x i8] c"\B8*\00\00\00\C3", section ".text", align 16
@f = dllexport alias i32 (), [6 x i8]* @foo
!llvm.module.flags = !{!0}
!0 = metadata !{i32 6, metadata !"Linker Options", metadata !1}
!1 = metadata !{metadata !2, metadata !3}
!2 = metadata !{metadata !"/DEFAULTLIB:libcmt.lib"}
!3 = metadata !{metadata !"/DEFAULTLIB:oldnames.lib"}

llvm-svn: 209600
2014-05-25 12:49:07 +00:00
Peter Collingbourne 0a4376190f Add an extension point for peephole optimizers.
This extension point allows adding passes that perform peephole optimizations
similar to the instruction combiner. These passes will be inserted after
each instance of the instruction combiner pass.

Differential Revision: http://reviews.llvm.org/D3905

llvm-svn: 209595
2014-05-25 10:27:02 +00:00
Hans Wennborg 12d1e24da2 Fix some misplaced spaces around 'override'
llvm-svn: 209589
2014-05-24 20:19:40 +00:00
Tim Northover 391f93a554 AArch64: disable FastISel for large code model.
The code emitted is what would be expected for the small model, so it
shouldn't be used when objects can be a full 64 bits away.

This fixes MCJIT tests on Linux.

llvm-svn: 209585
2014-05-24 19:45:41 +00:00
Benjamin Kramer 5256ce37ac MachineVerifier: Clean up some syntactic weirdness left behind by find&replace.
No functionality change.

llvm-svn: 209581
2014-05-24 13:31:10 +00:00
Benjamin Kramer 389cec0d3e CodeGen: Make MachineBasicBlock::back skip to the beginning of the last bundle.
This makes front/back symmetric with begin/end, avoiding some confusion.
Added instr_front/instr_back for the old behavior, corresponding to
instr_begin/instr_end. Audited all three in-tree users of back(); all
of them look like they don't want to look inside bundles.

Fixes an assertion (PR19815) when generating debug info on mips, where a
delay slot was bundled at the end of a branch.

llvm-svn: 209580
2014-05-24 13:13:17 +00:00
Tim Northover 3b0846e8f7 AArch64/ARM64: move ARM64 into AArch64's place
This commit starts with a "git mv ARM64 AArch64" and continues out
from there, renaming the C++ classes, intrinsics, and other
target-local objects for consistency.

"ARM64" test directories are also moved, and tests that began their
life in ARM64 use an arm64 triple, while those from AArch64 use an
aarch64 triple. Both should be equivalent, though.

This finishes the AArch64 merge, and everyone should feel free to
continue committing as normal now.

llvm-svn: 209577
2014-05-24 12:50:23 +00:00
Tim Northover cc08e1fe1b AArch64/ARM64: remove AArch64 from tree prior to renaming ARM64.
I'm doing this in two phases for a better "git blame" record. This
commit removes the previous AArch64 backend and redirects all
functionality to ARM64. It also deduplicates test-lines and removes
orphaned AArch64 tests.

The next step will be "git mv ARM64 AArch64" and rewire most of the
tests.

Hopefully LLVM is still functional, though it would be even better if
no-one ever had to care because the rename happens straight
afterwards.

llvm-svn: 209576
2014-05-24 12:42:26 +00:00
Michael Zolotukhin d4c724625a Implement the sext(C1 + C2*X) --> sext(C1) + sext(C2*X) and
sext{C1,+,C2} --> sext(C1) + sext{0,+,C2} transformations in Scalar
Evolution.

This helps the SLP vectorizer recognize consecutive loads/stores.
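
For example (a hand-written sketch, not a committed test), two loads
whose indices differ by a constant:

  define float @pair(float* %p, i32 %i) {
    %i2 = shl nsw i32 %i, 1
    %i3 = add nsw i32 %i2, 1
    %s0 = sext i32 %i2 to i64
    %s1 = sext i32 %i3 to i64   ; sext(%i2 + 1) --> sext(%i2) + 1 (nsw)
    %g0 = getelementptr float* %p, i64 %s0
    %g1 = getelementptr float* %p, i64 %s1
    %a = load float* %g0
    %b = load float* %g1
    %r = fadd float %a, %b
    ret float %r
  }
  ; With the sext distributed, SCEV can see that %g1 is exactly one
  ; element past %g0, which is what the SLP vectorizer needs in order to
  ; pair the two loads.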

<rdar://problem/14860614>

llvm-svn: 209568
2014-05-24 08:09:57 +00:00
Tim Northover e471e43484 ARM64: extract a 32-bit subreg when selecting an inreg extend
After the load/store refactoring, we were sometimes trying to feed a
GPR64 into a 32-bit register offset operand. This failed in
copyPhysReg.

llvm-svn: 209566
2014-05-24 07:05:42 +00:00
Rafael Espindola ef2c4fb25b clang-format function.
llvm-svn: 209550
2014-05-23 20:39:23 +00:00
Rafael Espindola d246759973 Remove a confusing use of a static method.
No functionality change.

llvm-svn: 209548
2014-05-23 20:35:47 +00:00
David Blaikie 169ffe41af DebugInfo: Put concrete definitions referencing abstract definitions in the same scope as the abstract definition.
This seems like a simple cleanup/improved consistency, but also helps
lay the foundation to fix the bug mentioned in the test case: concrete
definitions preceding any inlined usage aren't properly split into
concrete + abstract (because they're not known to need it until it's too
late).

Once we start deferring this choice until later, we won't have the
choice to put concrete definitions for inlined subroutines in a
different scope from concrete definitions for non-inlined subroutines
(since we won't know at time-of-construction which one it'll be). This
change brings those two cases into alignment ahead of that future
change/fix.

llvm-svn: 209547
2014-05-23 20:25:15 +00:00
Andrew Trick 839e30b2c0 Fix and improve SCEV ComputeBackedgeTakenCount.
This is a follow-up to r209358: PR19799: Indvars miscompile due to an
incorrect max backedge taken count from SCEV.

That fix was incomplete as pointed out by Arnold and Michael Z. The
code was also too confusing. It needed a careful rewrite with more
unit tests. This version will also happen to optimize more cases.

<rdar://17005101> PR19799: Indvars miscompile...

llvm-svn: 209545
2014-05-23 19:47:13 +00:00
Rafael Espindola a5bb2f61cf Use alias linkage and visibility to decide tls access mode.
This matches both what we do for the non-thread case and what gcc does.

With this patch clang would match gcc's behaviour in

static __thread int a = 42;
extern __thread int b __attribute__((alias("a")));
int *f(void) { return &a; }
int *g(void) { return &b; }

if not for pr19843. Manually writing the IL does produce the same access modes.

It is also a step in the direction of fixing pr19844.

llvm-svn: 209543
2014-05-23 19:16:56 +00:00
Jingyue Wu bbb6e4a885 Add the extracted constant offset using GEP
Fixed a TODO in r207783.

Add the extracted constant offset using GEP instead of ugly
ptrtoint+add+inttoptr. Using GEP simplifies future optimizations and makes IR
easier to understand. 
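
The difference, roughly (a sketch of the two forms):

  ; before: "uglygep" arithmetic on the raw pointer value
  %tmp = ptrtoint float* %p to i64
  %add = add i64 %tmp, 16
  %q   = inttoptr i64 %add to float*
  ; after: a plain GEP carrying the same 16-byte (4-element) offset
  %q2  = getelementptr float* %p, i64 4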

Updated all affected tests, and added a new test in split-gep.ll to cover a
corner case where emitting uglygep is necessary.

llvm-svn: 209537
2014-05-23 18:39:40 +00:00
Lang Hames 8e30e4b9b7 [RuntimeDyld] Remove relocation bounds check introduced in r208375 (MachO only).
We do all of our address arithmetic in 64-bit, and operations involving
logically negative 32-bit offsets (actually represented as unsigned 64 bit ints)
often overflow into higher bits. The overflow check could be preserved by
casting to uint32 at the callsite for applyRelocationValue, but this would
eliminate the value of the check.

The right way to handle overflow in relocations is to make relocation processing
target specific, and compute the values for RelocationEntry objects in the
appropriate types (32-bit for 32-bit targets, 64-bit for 64-bit targets). This
is coming as part of the cleanup I'm working on.

This fixes another i386 regression test.

<rdar://problem/16889891>

llvm-svn: 209536
2014-05-23 18:35:44 +00:00
David Blaikie 05b8584f16 Add FIXME comment based on code review feedback by Hal Finkel on r209338
llvm-svn: 209529
2014-05-23 16:53:14 +00:00
Rafael Espindola 6314ad41d1 Aliases are always definitions, delete dead code.
While at it, use a range loop.

llvm-svn: 209519
2014-05-23 15:18:06 +00:00
Rafael Espindola a31f3e50dc Delete dead code.
GV is never used past this point. This was probably a copy and paste error.

llvm-svn: 209518
2014-05-23 15:07:51 +00:00
Daniel Sanders 683ed961e1 [mips] Work around inconsistency in llvm-mc's placement of fixup markers
Summary:
Add a second fixup table to MipsAsmBackend::getFixupKindInfo() to correctly
position llvm-mc's fixup placeholders for big-endian.

See PR19836 for full details of the issue. To summarize, the fixup placeholders
do not account for endianness properly and the implementations of
getFixupKindInfo() for each target are measuring MCFixupKindInfo.TargetOffset
from different ends of the instruction encoding to compensate.

Reviewers: jkolek, zoran.jovanovic, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3889

llvm-svn: 209514
2014-05-23 13:35:24 +00:00
Daniel Sanders 8966caab05 [mips][mips64r6] t(eq|ge|lt|ne)i and t(ge|lt)iu are not available in MIPS32r6/MIPS64r6
Summary: Depends on D3872

Reviewers: jkolek, zoran.jovanovic, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3891

llvm-svn: 209513
2014-05-23 13:24:08 +00:00
Daniel Sanders ac27263512 [mips][mips64r6] [ls][dw][lr] are not available in MIPS32r6/MIPS64r6
Summary:
Instead the system is required to provide some means of handling unaligned
load/store without special instructions. Options include full hardware
support, full trap-and-emulate, and hybrids such as hardware support within
a cache line and trap-and-emulate for multi-line accesses.

MipsSETargetLowering::allowsUnalignedMemoryAccesses() has been configured to
assume that unaligned accesses are 'fast' on the basis that I expect few
hardware implementations will opt for pure-software handling of unaligned
accesses. The ones that do handle it purely in software can override this.

mips64-load-store-left-right.ll has been merged into load-store-left-right.ll

The stricter testing revealed a Bits!=Bytes bug in passByValArg(). This has
been fixed and the variables renamed to clarify the units they hold.

Reviewers: zoran.jovanovic, jkolek, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3872

llvm-svn: 209512
2014-05-23 13:18:02 +00:00
Kostya Serebryany c7895a83d2 [asan] properly instrument memory accesses that have small alignment (smaller than min(8,size)) by making two checks instead of one. This may slow down some cases, e.g. long long on 32-bit or wide loads produced after loop unrolling. The benefit is higher sensitivity.
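
A sketch of the kind of access this affects:

  %v = load i64* %p, align 4
  ; An 8-byte access with only 4-byte alignment can straddle two shadow
  ; granules, so it now gets shadow checks for both the first byte (%p)
  ; and the last byte (%p + 7) instead of a single 8-byte check.
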
llvm-svn: 209508
2014-05-23 11:52:07 +00:00
Bradley Smith 63c8b1bcb3 Fix up sys::getHostCPUFeatures crypto names so they don't clash with kernel headers
llvm-svn: 209506
2014-05-23 10:14:13 +00:00
Simon Atanasyan 84242dc774 [YAML] Add an optional argument `EnumMask` to the `yaml::IO::bitSetCase()`.
Some bit-set fields used in ELF file headers in fact contain two parts.
The first one is a regular bit-field. The second one is an enumeration.
For example, the ELF header `e_flags` for the MIPS target might contain
the following values:

Bit-set values:

  EF_MIPS_NOREORDER = 0x00000001
  EF_MIPS_PIC       = 0x00000002
  EF_MIPS_CPIC      = 0x00000004
  EF_MIPS_ABI2      = 0x00000020

Enumeration:

  EF_MIPS_ARCH_32   = 0x50000000
  EF_MIPS_ARCH_64   = 0x60000000
  EF_MIPS_ARCH_32R2 = 0x70000000
  EF_MIPS_ARCH_64R2 = 0x80000000

For printing bit-sets we use `yaml::IO::bitSetCase()`. It does not
support bit-set/enumeration combinations and prints too many flags from
the enumeration part. This patch fixes that problem. The new method
`yaml::IO::maskedBitSetCase()` handles the "enumeration" part of a
bit-set defined by the provided mask.

Patch reviewed by Nick Kledzik and Sean Silva.

llvm-svn: 209504
2014-05-23 08:07:09 +00:00
Jingyue Wu 69a6685c8d Test commit.
The keyword "virtual" is not necessary.

llvm-svn: 209501
2014-05-23 06:30:12 +00:00
David Blaikie 4860225570 Rename a couple of variables to be more accurate.
It's not really a "ScopeDIE", as such - it's the abstract function
definition's DIE. And we usually use "SP" for subprograms, rather than
"Sub".

llvm-svn: 209499
2014-05-23 05:03:23 +00:00
David Blaikie 96fb9024f2 DebugInfo: Fix cross-CU references for scopes (and variables within those scopes) in abstract definitions of cross-CU inlined functions
Found by Adrian Prantl during post-commit review of r209335.

llvm-svn: 209498
2014-05-23 04:23:06 +00:00
Jiangning Liu 4b5b757d65 [ARM64] Fix a bug in shuffle vector lowering to generate correct vext ISD with swapped input vectors.
llvm-svn: 209495
2014-05-23 02:54:50 +00:00
Justin Bogner cbb8438bb3 ScalarEvolution: Fix handling of AddRecs in isKnownPredicate
ScalarEvolution::isKnownPredicate() can wrongly reduce a comparison
when both the LHS and RHS are SCEVAddRecExprs. This patch checks that both
LHS and RHS are guarded in the case when both are SCEVAddRecExprs.

The test case is against indvars because I could not find a way to
directly test SCEV.

Patch by Sanjay Patel!

llvm-svn: 209487
2014-05-23 00:06:56 +00:00
Lang Hames 7f9fc2b339 [RuntimeDyld] Teach RuntimeDyldMachO how to handle scattered VANILLA relocs on
i386.

This fixes two more MCJIT regression tests on i386:

  ExecutionEngine/MCJIT/2003-05-06-LivenessClobber.ll
  ExecutionEngine/MCJIT/2013-04-04-RelocAddend.ll

The implementation of processScatteredVANILLA is tasteless (*ba-dum-ching*),
but I'm working on a substantial tidy-up of RuntimeDyldMachO that should
improve things.

This patch also fixes a typo in RuntimeDyldMachO::processSECTDIFFRelocation,
and teaches that method to skip over the PAIR reloc following the SECTDIFF.

<rdar://problem/16961886>

llvm-svn: 209478
2014-05-22 22:30:13 +00:00
Matt Arsenault 46b51b7f62 R600: Add definition for flat address space ID.
Use 4 since that's probably what it will be for spir.
Move ADDRESS_NONE to the end to keep the constant_buffer_* values
unchanged, since apparently a bunch of r600 tests use those directly.

llvm-svn: 209463
2014-05-22 18:27:07 +00:00
Matt Arsenault 05e96f4444 R600: Try to convert BFE back to standard bit ops when possible.
This allows existing DAG combines to work on them, and then
we can re-match to BFE if necessary during instruction selection.
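
The equivalence being exploited is roughly (my sketch):

  ; bfe_u32(%x, 3, 8) -- extract an 8-bit field starting at bit 3 -- is:
  %s = lshr i32 %x, 3
  %r = and i32 %s, 255
  ; In this form the generic shift/mask DAG combines apply, and the
  ; backend can still re-match the pair to BFE during selection.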

llvm-svn: 209462
2014-05-22 18:09:12 +00:00
Matt Arsenault 5565f65e13 R600: Add dag combine for BFE
llvm-svn: 209461
2014-05-22 18:09:07 +00:00
Matt Arsenault bf8694d36d R600: Implement ComputeNumSignBitsForTargetNode for BFE
llvm-svn: 209460
2014-05-22 18:09:03 +00:00
Matt Arsenault af6df9d943 R600: Implement computeMaskedBitsForTargetNode for BFE
llvm-svn: 209459
2014-05-22 18:09:00 +00:00
Matt Arsenault 493c5f1bc4 R600: Expand mul24 for GPUs without it
llvm-svn: 209458
2014-05-22 18:00:24 +00:00
Matt Arsenault f15a05623e R600: Expand mad24 for GPUs without it
llvm-svn: 209457
2014-05-22 18:00:20 +00:00
Matt Arsenault eb260206c3 R600: Add intrinsics for mad24
llvm-svn: 209456
2014-05-22 18:00:15 +00:00
Eric Christopher 9eff5178f1 Return false if we're not going to do anything.
llvm-svn: 209455
2014-05-22 17:49:33 +00:00
Matt Arsenault f37abc71de R600/SI: Move instruction pattern to instruction definition
llvm-svn: 209454
2014-05-22 17:45:20 +00:00
Diego Novillo 0b761a48cf Remove LLVMContextImpl::optimizationRemarkEnabledFor.
Summary:
This patch moves the handling of -pass-remarks* over to
lib/DiagnosticInfo.cpp. This allows the removal of the
optimizationRemarkEnabledFor functions from LLVMContextImpl, as they're
not needed anymore.

Reviewers: qcolombet

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3878

llvm-svn: 209453
2014-05-22 17:19:01 +00:00
Andrea Di Biagio c8dd1ad85b [X86] Improve the lowering of BITCAST from MVT::f64 to MVT::v4i16/MVT::v8i8.
This patch teaches the x86 backend how to efficiently lower ISD::BITCAST DAG
nodes from MVT::f64 to MVT::v4i16 (and vice versa), and from MVT::f64 to
MVT::v8i8 (and vice versa).

This patch extends the logic from revision 208107 to also handle MVT::v4i16
and MVT::v8i8. Also, this patch correctly propagates Undef values when
performing the widening of a vector (example: when widening from v2i32 to
v4i32, the upper 64 bits of the resulting vector are 'undef').
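
For example (sketch):

  define <4 x i16> @cast(double %d) {
    %v = bitcast double %d to <4 x i16>
    ret <4 x i16> %v
  }
  ; This is now lowered with vector moves, widening to v8i16 and leaving
  ; the upper 64 bits undef, instead of a less efficient lowering.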

llvm-svn: 209451
2014-05-22 16:21:39 +00:00
Tim Northover b2a6fdb11a ARM64: remove '#' from annotation of add/sub immediate
The full string used to be "// =#12" for example, which looks too
busy.

llvm-svn: 209443
2014-05-22 14:20:05 +00:00
Diego Novillo 7f8af8bf91 Add support for missed and analysis optimization remarks.
Summary:
This adds two new diagnostics: -pass-remarks-missed and
-pass-remarks-analysis. They take the same values as -pass-remarks but
are intended to be triggered in different contexts.

-pass-remarks-missed is used by LLVMContext::emitOptimizationRemarkMissed,
which passes call when they tried to apply a transformation but
couldn't.

-pass-remarks-analysis is used by LLVMContext::emitOptimizationRemarkAnalysis,
which passes call when they want to inform the user about analysis
results.

The patch also:

1- Adds support in the inliner for the two new remarks and a
   test case.

2- Moves emitOptimizationRemark* functions to the llvm namespace.

3- Adds an LLVMContext argument instead of making them member functions
   of LLVMContext.

Reviewers: qcolombet

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D3682

llvm-svn: 209442
2014-05-22 14:19:46 +00:00
Tim Northover f9e798ba6a Segmented stacks: omit __morestack call when there's no frame.
Patch by Florian Zeitz

llvm-svn: 209436
2014-05-22 13:03:43 +00:00
Tim Northover 2dce43c26f ARM64: these work too
llvm-svn: 209430
2014-05-22 12:14:49 +00:00
Tim Northover 5949b60550 Yes they do
llvm-svn: 209429
2014-05-22 12:14:02 +00:00
Tim Northover 4a3ab28ac7 ARM64: model pre/post-indexed operations properly.
We should be keeping track of the writeback on these instructions,
otherwise we're relying on LLVM's stupidity for correct code.

Fortunately, the MC layer can now handle all required constraints,
which means we can get rid of the CodeGen only PseudoInsts too.

llvm-svn: 209426
2014-05-22 11:56:20 +00:00
Tim Northover c350acfda5 ARM64: separate load/store operands to simplify assembler
This changes ARM64 to use separate operands for each component of an
address, and look for separate '[', '$Rn, ..., ']' tokens when
parsing.

This allows us to do away with quite a bit of special C++ code to
handle monolithic "addressing modes" in the MC components. The more
incremental matching of the assembler operands also allows for better
diagnostics when LLVM is presented with invalid input.

Most of the complexity here is with the register-offset instructions,
which were extremely dodgy beforehand: even when the instruction used
wM, LLVM's model had xM as an operand. We papered over this
discrepancy before, but that approach doesn't work now so I split them
into separate X and W variants.

llvm-svn: 209425
2014-05-22 11:56:09 +00:00
Bradley Smith 9288b2181f Extend sys::getHostCPUFeatures to work on AArch64 platforms
llvm-svn: 209420
2014-05-22 11:44:34 +00:00
Daniel Sanders 36ff7c2adc [mips][mips64r6] addi is not available on MIPS32r6/MIPS64r6
Summary: Depends on D3787. Tablegen will raise an assertion without it.

Reviewers: zoran.jovanovic, jkolek, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3842

llvm-svn: 209419
2014-05-22 11:42:31 +00:00
Daniel Sanders a3566c633d [mips][mips64r6] Test that paired single instructions are invalid
Summary:
These emit the 'unknown instruction' instead of the correct error
because they have not been implemented in LLVM for any MIPS ISA.

Reviewers: jkolek, zoran.jovanovic, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3841

llvm-svn: 209418
2014-05-22 11:37:38 +00:00
Daniel Sanders 5c582b2f6d [mips][mips64r6] Add b[on]vc
Summary:
This required me to implement the disassembler for MIPS64r6 since the encodings
are ambiguous with other instructions. This in turn revealed a few
assembly/disassembly bugs which I have fixed.

* da[ht]i only take two operands according to the spec, not three.
* DecodeBranchTarget2[16] correctly handles wider immediates than simm16
  * Also made non-functional change to DecodeBranchTarget and
    DecodeBranchTargetMM to keep implementation style consistent between
    them.
* Difficult encodings are handled by a custom decode method on the most
  general encoding in the group. This method will convert the MCInst to a
  different opcode if necessary.

DecodeBranchTarget is not currently the inverse of getBranchTargetOpValue,
so disassembling some branch instructions emits incorrect output. This seems
to affect branches with delay slots on all MIPS ISAs. I've left this bug
for now and temporarily removed the check for the immediate on
bc[12]eqz/bc[12]nez in the MIPS32r6/MIPS64r6 tests.

jialc and jic crash the disassembler for some reason. I've left these
instructions commented out for the moment.

Depends on D3760

Reviewers: jkolek, zoran.jovanovic, vmedic

Reviewed By: vmedic

Differential Revision: http://reviews.llvm.org/D3761

llvm-svn: 209415
2014-05-22 11:23:21 +00:00
Tim Northover 0f6272271e ARM64: assert if we see i64 -> i64 extend in the DAG.
Should be no change in behaviour, but it makes the intended
functionality a bit clearer and means we only have to reason about
real extend operations.

llvm-svn: 209409
2014-05-22 07:41:37 +00:00
Saleem Abdulrasool 9dd60cfb64 MC: initialise MCAsmParser variable
Properly initialise HadError to false during construction.  Detected as
use-of-uninitialised variable by MSan!

llvm-svn: 209393
2014-05-22 06:02:59 +00:00
Eric Christopher 65382d7316 Remove unused variable.
llvm-svn: 209391
2014-05-22 05:33:03 +00:00
Saleem Abdulrasool 2bd1262a32 ARM: introduce llvm.arm.undefined intrinsic
This intrinsic permits the emission of platform specific undefined sequences.
ARM has reserved the 0xde opcode which takes a single integer parameter (ignored
by the CPU).  This permits the operating system to implement custom behaviour on
this trap.  The llvm.arm.undefined intrinsic is meant to provide a means for
generating the target specific behaviour from the frontend.  This is
particularly useful for Windows on ARM which has made use of a series of these
special opcodes.
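
Usage is a single call with the immediate to encode (a sketch; the
value 252 is arbitrary):

  declare void @llvm.arm.undefined(i32)

  define void @os_trap() {
    ; emits the reserved permanently-undefined encoding with the given
    ; immediate, which the OS trap handler can inspect
    call void @llvm.arm.undefined(i32 252)
    ret void
  }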

llvm-svn: 209390
2014-05-22 04:46:46 +00:00
Matt Arsenault c3a73c3087 R600/SI: Match fp_to_uint / uint_to_fp for f64
llvm-svn: 209388
2014-05-22 03:20:30 +00:00
Saleem Abdulrasool 6663f8f2c0 MC: formalise some assertions into proper errors
Now that clang can be used as an assembler via the IAS, invalid assembler inputs
would cause the assertions to trigger.  Although we cannot recover from the
errors here, nor provide caret diagnostics, attempt to handle them slightly more
gracefully by reporting a fatal error.

llvm-svn: 209387
2014-05-22 02:18:10 +00:00
Eric Christopher 0e6e7cf385 Override runOnMachineFunction for ARMISelDAGToDAG so that we can
reset the subtarget on each function.

llvm-svn: 209386
2014-05-22 02:00:27 +00:00
Eric Christopher 4f09c59243 Override runOnMachineFunction for X86ISelDAGToDAG so that we can
reset the subtarget on each function.

llvm-svn: 209384
2014-05-22 01:53:26 +00:00
Eric Christopher 0d5c99eb08 Avoid using subtarget features when adding X86 specific passes to
the pass pipeline.

llvm-svn: 209382
2014-05-22 01:46:02 +00:00
Eric Christopher e0bd2fa927 Remove extra local variable.
llvm-svn: 209381
2014-05-22 01:45:59 +00:00
Eric Christopher 463b84b48b Rename createGlobalBaseRegPass -> createX86GlobalBaseRegPass to make
it obvious that it's a target specific pass.

llvm-svn: 209380
2014-05-22 01:45:57 +00:00
Eric Christopher 89f18805f4 Fix typo.
llvm-svn: 209377
2014-05-22 01:21:44 +00:00
Eric Christopher d71e4441c9 Avoid using subtarget features when initializing the pass pipeline
on PPC.

llvm-svn: 209376
2014-05-22 01:21:35 +00:00
Eric Christopher 1b8e763630 Reset the subtarget for DAGToDAG on every iteration of runOnMachineFunction.
This required updating the generated functions and TD file accordingly
to be pointers rather than const references.

llvm-svn: 209375
2014-05-22 01:07:24 +00:00