Commit Graph

Rafael Espindola 63d2e0ad9a Remove dead calls to addFrameMove.
Without a PROLOG_LABEL present, the cfi instructions are never printed.

llvm-svn: 182016
2013-05-16 15:08:37 +00:00
Ulrich Weigand 7aa76b6a07 [PowerPC] Report true displacement value from getPreIndexedAddressParts
DAGCombiner::CombineToPreIndexedLoadStore calls a target routine to
decompose a memory address into a base/offset pair.  It expects the
offset (if constant) to be the true displacement value in order to
perform optional additional optimizations; in particular, to convert
other uses of the original pointer into uses of the new base pointer
after pre-increment.

The PowerPC implementation of getPreIndexedAddressParts, however,
simply calls SelectAddressRegImm, which returns a TargetConstant.
This value is appropriate for encoding into the instruction, but
it is not always usable as the true displacement value:

- Its type is always MVT::i32, even on 64-bit, where addresses
  ought to be i64 ... this causes the optimization to simply
  always fail on 64-bit due to this line in DAGCombiner:

      // FIXME: In some cases, we can be smarter about this.
      if (Op1.getValueType() != Offset.getValueType()) {

- Its value is truncated to an unsigned 16-bit value if negative.
  This causes the above optimization to generate wrong code.

This patch fixes both problems by simply returning the true
displacement value (in its original type).  This doesn't
affect any other user of the displacement.
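
A standalone sketch of the truncation half of the problem (illustrative C++ only, not
backend code): encoding a negative displacement as an unsigned 16-bit value, as the
TargetConstant path did, no longer matches the true displacement the combiner needs.

  #include <cstdint>
  #include <cstdio>

  int main() {
    int64_t TrueDisp = -8;                               // the displacement DAGCombiner expects
    uint64_t Encoded = static_cast<uint16_t>(TrueDisp);  // TargetConstant-style truncation
    // Prints encoded=65528 true=-8: reusing the encoded value as a displacement is wrong.
    std::printf("encoded=%llu true=%lld\n",
                (unsigned long long)Encoded, (long long)TrueDisp);
    return 0;
  }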

llvm-svn: 182012
2013-05-16 14:53:05 +00:00
Rafael Espindola 1f2584025d Add more addFrameMove test coverage.
llvm-svn: 182011
2013-05-16 14:51:26 +00:00
Rafael Espindola c533b559d0 Extend test to check the .cfi instructions.
I am about to refactor the calls to addFrameMove and some of the ppc
ones were not being tested.

llvm-svn: 182009
2013-05-16 14:30:09 +00:00
Richard Sandiford 7fdd268b68 [SystemZ] Tweak register array comment
llvm-svn: 182007
2013-05-16 13:39:02 +00:00
Benjamin Kramer 41942fb37a Relax CHECK-NEXTs a bit to cope with atom's return nop padding.
llvm-svn: 181999
2013-05-16 11:46:50 +00:00
Evgeniy Stepanov 1e7643243d [msan] Switch TLS globals to initial-exec model.
They are always defined in the main executable.

llvm-svn: 181994
2013-05-16 09:14:05 +00:00
Patrik Hagglund b3391b58f7 Removed unused variable, detected by gcc
-Wunused-but-set-variable. Leftover from r181979.

llvm-svn: 181993
2013-05-16 08:37:22 +00:00
Rafael Espindola 7242186b10 Delete dead code.
llvm-svn: 181982
2013-05-16 04:59:17 +00:00
Rafael Espindola e3d5e5354e Don't call addFrameMove on XCore.
getExceptionHandlingType is not ExceptionHandling::DwarfCFI on xcore, so
etFrameInstructions is never called. There is no point creating cfi
instructions if they are never used.

llvm-svn: 181979
2013-05-16 04:16:25 +00:00
Richard Smith e04f0d34d1 Respect the 'nobuiltin' attribute when determining if a call is to a memory builtin.
llvm-svn: 181978
2013-05-16 04:12:04 +00:00
Rafael Espindola a5c7ceedb5 Extend test for better coverage.
Without this change nothing was covering this addFrameMove:

// For 64-bit SVR4 when we have spilled CRs, the spill location
// is SP+8, not a frame-relative slot.
if (Subtarget.isSVR4ABI()
    && Subtarget.isPPC64()
    && (PPC::CR2 <= Reg && Reg <= PPC::CR4)) {
  MachineLocation CSDst(PPC::X1, 8);
  MachineLocation CSSrc(PPC::CR2);
  MMI.addFrameMove(Label, CSDst, CSSrc);
  continue;
}

llvm-svn: 181976
2013-05-16 03:48:50 +00:00
Rafael Espindola 6e8c0d94f8 Removed dead code.
llvm-svn: 181975
2013-05-16 03:34:58 +00:00
Lang Hames a102fe3f91 Fix PBQP graph iterator typedefs.
llvm-svn: 181973
2013-05-16 02:20:41 +00:00
Reed Kotler 515e937685 Patch number 2 for mips16/32 floating point interoperability stubs.
This creates stubs that help Mips32 functions call Mips16 
functions which have floating point parameters that are normally passed
in floating point registers.

llvm-svn: 181972
2013-05-16 02:17:42 +00:00
Derek Schuff 36f00d9f02 Revert "Support unaligned load/store on more ARM targets"
This reverts r181898.

llvm-svn: 181944
2013-05-15 23:07:43 +00:00
Eli Bendersky b8cd7a0d7f Remove dead code.
This method is not being used/tested anywhere.

llvm-svn: 181943
2013-05-15 22:41:28 +00:00
Arnold Schwaighofer 88e7fddc8c LoopVectorize: Move call of canHoistAllLoads to canVectorizeWithIfConvert
We only want to check this once, not for every conditional block in the loop.

No functionality change (except that we don't perform a check redundantly
anymore).

llvm-svn: 181942
2013-05-15 22:38:14 +00:00
Rafael Espindola 84ee6c40a8 Delete dead code.
llvm-svn: 181941
2013-05-15 22:27:35 +00:00
David Majnemer 37d3825022 Set an explicit triple for this test.
This allows the test to correctly check symbol names.

llvm-svn: 181939
2013-05-15 22:23:21 +00:00
Hal Finkel 80267a0a37 undef setjmp in PPCCTRLoops
Trying to unbreak the VS build by copying some undef code from
Utils/LowerInvoke.cpp.

llvm-svn: 181938
2013-05-15 22:20:24 +00:00
David Majnemer 8f16974273 X86: Remove redundant test instructions
Increase the number of instructions LLVM recognizes as setting the ZF
flag. This allows us to remove test instructions that redundantly
recalculate the flag.

llvm-svn: 181937
2013-05-15 22:03:08 +00:00
Bill Wendling 598123ede4 Use proper syntax.
llvm-svn: 181930
2013-05-15 21:38:12 +00:00
Hal Finkel 25c1992bc7 Implement PPC counter loops as a late IR-level pass
The old PPCCTRLoops pass, like the Hexagon pass version from which it was
derived, could only handle some simple loops in canonical form. We cannot
directly adapt the new Hexagon hardware loops pass, however, because the
Hexagon pass contains a fundamental assumption that non-constant-trip-count
loops will contain a guard, and this is not always true (the result being that
incorrect negative counts can be generated). With this commit, we replace the
pass with a late IR-level pass which makes use of SE to calculate the
backedge-taken counts and safely generate the loop-count expressions (including
any necessary max() parts). This IR level pass inserts custom intrinsics that
are lowered into the desired decrement-and-branch instructions.
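
A minimal sketch of the ScalarEvolution query at the heart of such a pass (hypothetical
helper name; the real pass also builds the max() expression, expands it with
SCEVExpander, and inserts the custom intrinsics):

  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/Analysis/ScalarEvolution.h"
  using namespace llvm;

  static bool isCountableLoop(Loop *L, ScalarEvolution *SE) {
    const SCEV *BECount = SE->getBackedgeTakenCount(L);
    if (isa<SCEVCouldNotCompute>(BECount))
      return false;                 // trip count unknown; leave the loop alone
    // The trip count is BECount + 1; the pass materializes it in the preheader
    // and feeds it to the decrement-and-branch lowering.
    return true;
  }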

The most fragile part of this new implementation is that interfering uses of
the counter register must be detected on the IR level (and, on PPC, this also
includes any indirect branches in addition to function calls). Also, to make
all of this work, we need a variant of the mtctr instruction that is marked
as having side effects. Without this, machine-code level CSE, DCE, etc.
illegally transform the resulting code. Hopefully, this can be improved
in the future.

This new pass is smaller than the original (and much smaller than the new
Hexagon hardware loops pass), and can handle many additional cases correctly.
In addition, the preheader-creation code has been copied from LoopSimplify, and
after we decide on where it belongs, this code will be refactored so that it
can be explicitly shared (making this implementation even smaller).

The new test-case files ctrloop-{le,lt,ne}.ll have been adapted from tests for
the new Hexagon pass. There are a few classes of loops that this pass does not
transform (noted by FIXMEs in the files), but these deficiencies can be
addressed within the SE infrastructure (thus helping many other passes as well).

llvm-svn: 181927
2013-05-15 21:37:41 +00:00
Hal Finkel 1f6a7f53d8 Fix legalization of SETCC with promoted integer intrinsics
If the input operands to SETCC are promoted, we need to make sure that we
either use the promoted form of both operands (or neither); a mixture is not
allowed. This can happen, for example, if a target has a custom promoted
i1-returning intrinsic (where i1 is not a legal type). In this case, we need to
use the promoted form of both operands.

This change only augments the behavior of the existing logic in the case where
the input types (which may or may not have already been legalized) disagree,
and should not affect existing target code because this case would otherwise
cause an assert in the SETCC operand promotion code.

This will be covered by (essentially all of the) tests for the new PPCCTRLoops
infrastructure.

llvm-svn: 181926
2013-05-15 21:37:27 +00:00
Bill Wendling c6e238c676 Add lldb and polly to the projects to tag.
llvm-svn: 181925
2013-05-15 21:36:46 +00:00
Derek Schuff d2c42d766d Fix miscompile due to StackColoring incorrectly merging stack slots (PR15707)
IR optimisation passes can result in a basic block that contains:

  llvm.lifetime.start(%buf)
  ...
  llvm.lifetime.end(%buf)
  ...
  llvm.lifetime.start(%buf)

Before this change, calculateLiveIntervals() was ignoring the second
lifetime.start() and was regarding %buf as being dead from the
lifetime.end() through to the end of the basic block.  This can cause
StackColoring to incorrectly merge %buf with another stack slot.

Fix by removing the incorrect Starts[pos].isValid() and
Finishes[pos].isValid() checks.

Just doing:
      Starts[pos] = Indexes->getMBBStartIdx(MBB);
      Finishes[pos] = Indexes->getMBBEndIdx(MBB);
unconditionally would be enough to fix the bug, but it causes some
test failures due to stack slots not being merged when they were
before.  So, in order to keep the existing tests passing, treat LiveIn
and LiveOut separately rather than approximating the live ranges by
merging LiveIn and LiveOut.

This fixes PR15707.
Patch by Mark Seaborn.

llvm-svn: 181922
2013-05-15 21:15:09 +00:00
Rafael Espindola 0f2a6fe613 Cleanup relocation sorting for ELF.
We want the order to be deterministic on all platforms. NAKAMURA Takumi
fixed that in r181864. This patch is just two small cleanups:

* Move the function to the cpp file. It is only passed to array_pod_sort.
* Remove the ppc implementation which is now redundant

llvm-svn: 181910
2013-05-15 18:22:01 +00:00
NAKAMURA Takumi dc9f013a5d PPCISelLowering.h: Escape \@ in comments. [-Wdocumentation]
llvm-svn: 181907
2013-05-15 18:01:35 +00:00
NAKAMURA Takumi dcc66456cc Whitespace.
llvm-svn: 181906
2013-05-15 18:01:28 +00:00
Michael Gottesman b4e7f4d841 [objc-arc] Fixed a spelling error and made the statistic descriptions be consistent about their usage of periods.
llvm-svn: 181901
2013-05-15 17:43:03 +00:00
Douglas Gregor 5a4cba0ba6 Add missing #include
llvm-svn: 181900
2013-05-15 17:41:02 +00:00
Derek Schuff 72ddaba785 Support unaligned load/store on more ARM targets
This patch matches GCC behavior: the code used to only allow unaligned
load/store on ARM for v6+ Darwin; it will now allow unaligned load/store for
v6+ Darwin as well as for v7+ on other targets.

The distinction is made because v6 doesn't guarantee support (but LLVM assumes
that Apple controls hardware+kernel and therefore has conformant v6 CPUs),
whereas v7 does provide this guarantee (and Linux behaves sanely).

Overall this should slightly improve performance in most cases because of
reduced I$ pressure.
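
The policy above can be summarized as a one-line predicate (a sketch against the usual
ARMSubtarget queries, not the exact lines changed):

  // Allow unaligned accesses on v7+ everywhere, and on v6+ only for Darwin.
  bool AllowsUnalignedMem =
      Subtarget->hasV7Ops() ||
      (Subtarget->hasV6Ops() && Subtarget->isTargetDarwin());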

Patch by JF Bastien

llvm-svn: 181897
2013-05-15 16:08:30 +00:00
Ulrich Weigand 0684076858 Remove MCELFObjectTargetWriter::adjustFixupOffset hack
Now that PowerPC no longer uses adjustFixupOffset, and no other
back-end (ever?) did, we can remove the infrastructure itself
(incidentally addressing a FIXME to that effect).

llvm-svn: 181895
2013-05-15 15:07:42 +00:00
Ulrich Weigand 2fb140ef31 [PowerPC] Remove need for adjustFixupOffset hack
Now that applyFixup understands differently-sized fixups, we can define
fixup_ppc_lo16/fixup_ppc_lo16_ds/fixup_ppc_ha16 to properly be 2-byte
fixups, applied at an offset of 2 relative to the start of the 
instruction text.

This has the benefit that if we actually need to generate a real
relocation record, its address will come out correctly automatically,
without having to fiddle with the offset in adjustFixupOffset.

Tested on both 64-bit and 32-bit PowerPC, using external and
integrated assembler.

llvm-svn: 181894
2013-05-15 15:07:06 +00:00
Richard Sandiford ffd144174d [SystemZ] Make use of SUBTRACT HALFWORD
Thanks to Ulrich Weigand for noticing that this instruction was missing.

llvm-svn: 181893
2013-05-15 15:05:29 +00:00
Ulrich Weigand e7050ad0a1 [PowerPC] Add test case for r181891
llvm-svn: 181892
2013-05-15 15:02:12 +00:00
Ulrich Weigand 56f5b28d2e [PowerPC] Correctly handle fixups of other than 4 byte size
The PPCAsmBackend::applyFixup routine handles the case where a
fixup can be resolved within the same object file.  However,
this routine is currently hard-coded to assume the size of
any fixup is always exactly 4 bytes.

This is sort-of correct for fixups on instruction text, though it
only works because several of what really would be
2-byte fixups are presented as 4-byte fixups instead (requiring
another hack in PPCELFObjectWriter::adjustFixupOffset to clean
it up).

However, this assumption breaks down completely for fixups
on data, which legitimately can be of any size (1, 2, 4, or 8).

This patch makes applyFixup aware of fixups of varying sizes,
introducing a new helper routine getFixupKindNumBytes (along
the lines of what the ARM back end does).  Note that in order
to handle fixups of size 8, we also need to fix the return type
of adjustFixupValue to uint64_t to avoid truncation.
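
A sketch of the new helper, along the lines of the ARM back end (fixup-kind names taken
from this and the previous commit; the in-tree switch covers more cases):

  static unsigned getFixupKindNumBytes(unsigned Kind) {
    switch (Kind) {
    default: llvm_unreachable("Unknown fixup kind!");
    case FK_Data_1:               return 1;
    case FK_Data_2:
    case PPC::fixup_ppc_lo16:
    case PPC::fixup_ppc_lo16_ds:
    case PPC::fixup_ppc_ha16:     return 2;
    case FK_Data_4:               return 4;
    case FK_Data_8:               return 8;
    }
  }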

Tested on both 64-bit and 32-bit PowerPC, using external and
integrated assembler.

llvm-svn: 181891
2013-05-15 15:01:46 +00:00
Arnaud A. de Grandmaison ca08b076f3 Add Jade to the list of external projects using LLVM in the release notes.
Patch by: Antoine Lorence <Antoine.Lorence@insa-rennes.fr>

llvm-svn: 181886
2013-05-15 14:05:01 +00:00
Richard Sandiford 619859f42e [SystemZ] Add more future work items to the README
Based on an analysis by Ulrich Weigand.

llvm-svn: 181882
2013-05-15 12:53:31 +00:00
Richard Sandiford 78a8ef87ca [SystemZ] Consolidate disassembler tests for valid input into 2 big tests
llvm-svn: 181879
2013-05-15 11:00:31 +00:00
Richard Sandiford 364d821ebc [SystemZ] Consolidate assembler tests into 4 big tests
llvm-svn: 181878
2013-05-15 09:58:19 +00:00
Timur Iskhodzhanov 0588513e79 Fix build on Windows
llvm-svn: 181873
2013-05-15 09:00:30 +00:00
David Blaikie 041f1aa3e2 Use only explicit bool conversion operators
BitVector/SmallBitVector::reference::operator bool remain implicit since
they model more exactly a bool, rather than something else that can be
boolean tested.

The most common (non-buggy) case are where such objects are used as
return expressions in bool-returning functions or as boolean function
arguments. In those cases I've used (& added if necessary) a named
function to provide the equivalent (or sometimes negative, depending on
convenient wording) test.

One behavior change (YAMLParser) was made, though no test case is
included as I'm not sure how to reach that code path. Essentially any
comparison of llvm::yaml::document_iterators would be invalid if neither
iterator was at the end.

This helped uncover a couple of bugs in Clang - test cases provided for
those in a separate commit along with similar changes to `operator bool`
instances in Clang.
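
A self-contained illustration of the pattern being adopted (generic example, not LLVM
code): an explicit conversion still works in conditions, while other boolean uses go
through a named query function.

  #include <cassert>

  class Handle {
    void *Ptr;
  public:
    explicit Handle(void *P = 0) : Ptr(P) {}
    explicit operator bool() const { return Ptr != 0; }  // usable in if()/while()
    bool isValid() const { return Ptr != 0; }            // named query for returns, arguments, etc.
  };

  int main() {
    Handle H;
    if (H) {}              // fine: contextual conversion to bool
    // bool B = H;         // no longer compiles: the conversion is explicit
    bool B = H.isValid();  // use the named function instead
    assert(!B);
    return 0;
  }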

llvm-svn: 181868
2013-05-15 07:36:59 +00:00
NAKAMURA Takumi 2006ba945f ELFRelocationEntry::operator<(): Try to stabilize the order. r_offset was insufficient to sort Relocs.
It should fix llvm/test/CodeGen/ARM/ehabi-mc-compact-pr*.ll on some hosts.

  RELOCATION RECORDS FOR [.ARM.exidx]:
  0 R_ARM_PREL31 .text
  0 R_ARM_NONE __aeabi_unwind_cpp_pr0

FIXME: I am not sure of the directions of extra comparators, in Type and Index.
For now, they are different from the direction in r_offset.
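
A hypothetical mirror of the idea (type and field names invented for illustration):
when r_offset ties, fall back to further fields so the order is deterministic across
hosts, whichever direction each key ends up using.

  #include <cstdint>

  struct Reloc {            // stand-in for ELFRelocationEntry
    uint64_t Offset;        // r_offset
    unsigned Type;
    unsigned Index;
  };

  static bool lessThan(const Reloc &A, const Reloc &B) {
    if (A.Offset != B.Offset)
      return A.Offset < B.Offset;  // primary key, as before
    if (A.Type != B.Type)
      return A.Type < B.Type;      // extra keys exist only to break ties deterministically
    return A.Index < B.Index;
  }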

llvm-svn: 181864
2013-05-15 02:16:23 +00:00
Arnold Schwaighofer 09cee97270 LoopVectorize: Fix comments
No functionality change.

llvm-svn: 181862
2013-05-15 02:02:45 +00:00
Arnold Schwaighofer 2d920477a4 LoopVectorize: Hoist conditional loads if possible
InstCombine can work against vectorization by sinking loads into
conditional blocks. This prevents vectorization.

Undo this optimization if there are unconditional memory accesses to the same
addresses in the loop.

radar://13815763

llvm-svn: 181860
2013-05-15 01:44:30 +00:00
Jakob Stoklund Olesen 0925b24d9a Speed up Value::isUsedInBasicBlock() for long use lists.
This is expanding Ben's original heuristic for short basic blocks to
also work for longer basic blocks and huge use lists.

Scan the basic block and the use list in parallel, terminating the
search when the shorter list ends. In almost all cases, either the basic
block or the use list is short, and the function returns quickly.

In one crazy test case with very long use chains, CodeGenPrepare runs
400x faster. When compiling ARMDisassembler.cpp it is 5x faster.
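
A sketch of the simultaneous scan, written against Value/BasicBlock as of this era (the
committed code may differ in detail):

  bool Value::isUsedInBasicBlock(const BasicBlock *BB) const {
    // Walk the block's instructions and this value's use list in lock-step;
    // the loop ends as soon as the shorter of the two lists is exhausted.
    BasicBlock::const_iterator BI = BB->begin(), BE = BB->end();
    const_use_iterator UI = use_begin(), UE = use_end();
    for (; BI != BE && UI != UE; ++BI, ++UI) {
      // Scan the block: does the instruction at BI use this value?
      if (std::find(BI->op_begin(), BI->op_end(), this) != BI->op_end())
        return true;
      // Scan the use list: is the user at UI an instruction inside BB?
      if (const Instruction *User = dyn_cast<Instruction>(*UI))
        if (User->getParent() == BB)
          return true;
    }
    return false;
  }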

<rdar://problem/13840497>

llvm-svn: 181851
2013-05-14 23:45:56 +00:00
Sylvestre Ledru 149e281aa8 Fix two typos
llvm-svn: 181848
2013-05-14 23:36:24 +00:00
NAKAMURA Takumi b734a9dbc6 ExceptionDemo: Corresponding to r181820, SectionMemoryManager should belong to RTDyldMemoryManager.
llvm-svn: 181844
2013-05-14 23:05:00 +00:00
Ahmed Bougacha 9dab0cc6c3 Object: Fix Mach-O relocation printing.
There were two problems that made llvm-objdump -r crash:
- for non-scattered relocations, the symbol/section index is actually in the
  (aptly named) symbolnum field.
- sections are 1-indexed.

llvm-svn: 181843
2013-05-14 22:41:29 +00:00
Arnold Schwaighofer af85f6083a ARM ISel: Don't create illegal types during LowerMUL
The transformation happening here is that we want to turn a
"mul(ext(X), ext(X))" into a "vmull(X, X)", stripping off the extension. We have
to make sure that X still has a valid vector type - possibly recreate an
extension to a smaller type. In the case of an extload of a memory type smaller than
64 bits, we used to create an ext(load()). The problem with doing this - instead of
recreating an extload - is that an illegal type is exposed.

This patch fixes this by creating extloads instead of ext(load()) sequences.

Fixes PR15970.

radar://13871383

llvm-svn: 181842
2013-05-14 22:33:24 +00:00
Manman Ren b3c52fb45b GlobalOpt: fix an issue where CXAAtExitFn points to a deleted function.
CXAAtExitFn was set outside a loop and before optimizations where functions
can be deleted. This patch will set CXAAtExitFn inside the loop and after
optimizations.

This caused a seg fault when running LTO because of accesses to a deleted function.
rdar://problem/13838828

llvm-svn: 181838
2013-05-14 21:52:44 +00:00
Eric Christopher 3c190d7d1c Revert previous patch; it's actually on under -Wall.
llvm-svn: 181837
2013-05-14 21:52:01 +00:00
Eric Christopher 80d2dbe5cf Add -Wreorder to the list of C++ warnings.
This built clean with clang, but if we see false positives on the bots
then we'll revert and turn it into a compiler specific check.

llvm-svn: 181836
2013-05-14 21:49:38 +00:00
Eric Christopher 8fd7ab07ca Make getCompileUnit non-const and return the current DIE if it
happens to be a compile unit. Noticed on inspection and tested
via calling on a newly created compile unit. No functional change.

llvm-svn: 181835
2013-05-14 21:33:10 +00:00
Michael Liao 91a1b2c9eb Add 'CHECK-DAG' support
Refer to 'FileCheck.rst' for details of 'CHECK-DAG'.

llvm-svn: 181827
2013-05-14 20:34:12 +00:00
Michael Liao dcc7d48d55 Refactor string checking. No functionality change.
llvm-svn: 181824
2013-05-14 20:29:52 +00:00
Bill Schmidt a87a7e2620 Implement the PowerPC system call (sc) instruction.
Instruction added at request of Roman Divacky.  Tested via asm-parser.

llvm-svn: 181821
2013-05-14 19:35:45 +00:00
Filip Pizlo 9bc53e8467 SectionMemoryManager shouldn't be a JITMemoryManager. Previously, the
EngineBuilder interface required a JITMemoryManager even if it was being used 
to construct an MCJIT. But the MCJIT actually wants a RTDyldMemoryManager. 
Consequently, the SectionMemoryManager, which is meant for MCJIT, derived 
from the JITMemoryManager and then stubbed out a bunch of JITMemoryManager 
methods that weren't relevant to the MCJIT.

This patch fixes the situation: it teaches the EngineBuilder that 
RTDyldMemoryManager is a supertype of JITMemoryManager, and that it's 
appropriate to pass a RTDyldMemoryManager instead of a JITMemoryManager if 
we're using the MCJIT. This allows us to remove the stub methods from 
SectionMemoryManager, and make SectionMemoryManager a direct subtype of 
RTDyldMemoryManager.
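
The resulting relationship, sketched as bare class shapes (real interfaces elided):

  class RTDyldMemoryManager { /* what MCJIT needs: allocate code/data sections, ... */ };
  class JITMemoryManager     : public RTDyldMemoryManager { /* extra old-JIT-only hooks */ };
  class SectionMemoryManager : public RTDyldMemoryManager { /* MCJIT's simple allocator */ };

EngineBuilder can therefore be handed either kind of manager and pass MCJIT the
RTDyldMemoryManager it actually needs.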

llvm-svn: 181820
2013-05-14 19:29:00 +00:00
Jyotsna Verma 803e506fec Hexagon: Pass to replace transfer/copy instructions with combine instructions
where possible.

llvm-svn: 181817
2013-05-14 18:54:06 +00:00
Eric Christopher b27cd8bea6 Reapply "Subtract isn't commutative, fix this for MMX psub." with
a somewhat randomly chosen cpu that will minimize cpu specific
differences on bots.

llvm-svn: 181814
2013-05-14 18:33:40 +00:00
Eric Christopher 3eee7454cf Temporarily revert "Subtract isn't commutative, fix this for MMX psub."
It's causing failures on the atom bot.

llvm-svn: 181812
2013-05-14 18:20:42 +00:00
Rafael Espindola e16befb5f6 Fix __clear_cache declaration.
This fixes the build with gcc in gnu++98 and gnu++11 mode.

llvm-svn: 181811
2013-05-14 18:06:14 +00:00
Eric Christopher 0344f495f9 Subtract isn't commutative, fix this for MMX psub.
Patch by Andrea DiBiagio.

llvm-svn: 181809
2013-05-14 17:52:05 +00:00
Jakob Stoklund Olesen abc3d23ccb Recognize sparc64 as an alias for sparcv9 triples.
Patch by Brad Smith!

llvm-svn: 181808
2013-05-14 17:47:27 +00:00
Jyotsna Verma 2dca82ad1c Hexagon: Add patterns to generate 'combine' instructions.
llvm-svn: 181805
2013-05-14 17:16:38 +00:00
Jyotsna Verma 11bd54afd6 Hexagon: ArePredicatesComplement should not restrict itself to TFRs.
llvm-svn: 181803
2013-05-14 16:36:34 +00:00
Kai Nacke 9a224ced0f Add bitcast to store of personality function.
The personality function is user defined and may have an arbitrary result type.
The code always assumes i8*. This results in an assertion failure if a different
type is used. A bitcast to i8* is added to prevent this failure.

Reviewed by: Renato Golin, Bob Wilson

llvm-svn: 181802
2013-05-14 16:30:51 +00:00
Derek Schuff bd7c6e5015 Fix ARM FastISel tests, as a first step to enabling ARM FastISel
ARM FastISel is currently only enabled for iOS non-Thumb1, and I'm working on
enabling it for other targets. As a first step I've fixed some of the tests.
Changes to ARM FastISel tests:
- Different triples don't generate the same relocations (especially
  movw/movt versus constant pool loads). Use a regex to allow either.
- Mangling is different. Use a regex to allow either.
- The reserved registers are sometimes different, so registers get
  allocated in a different order. Capture the names only where this
  occurs.
- Add -verify-machineinstrs to some tests where it works. It doesn't
  work everywhere it should yet.
- Add -fast-isel-abort to many tests that didn't have it before.
- Split out the VarArg test from fast-isel-call.ll into its own
  test. This simplifies test setup because of --check-prefix.

Patch by JF Bastien

llvm-svn: 181801
2013-05-14 16:26:38 +00:00
Bill Schmidt ef3d1a24ed PPC32: Fix stack collision between FP and CR save areas.
The changes to CR spill handling missed a case for 32-bit PowerPC.
The code in PPCFrameLowering::processFunctionBeforeFrameFinalized()
checks whether CR spill has occurred using a flag in the function
info.  This flag is only set by storeRegToStackSlot and
loadRegFromStackSlot.  spillCalleeSavedRegisters does not call
storeRegToStackSlot, but instead produces MI directly.  Thus we don't
see the CR is spilled when assigning frame offsets, and the CR spill
ends up colliding with some other location (generally the FP slot).

This patch sets the flag in spillCalleeSavedRegisters for PPC32 so
that the CR spill is properly detected and gets its own slot in the
stack frame.
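
Conceptually the fix amounts to something like the following inside
spillCalleeSavedRegisters (a sketch, assuming the existing PPCFunctionInfo accessor):

  // While emitting the spill code for each callee-saved register:
  if (PPC::CRRCRegClass.contains(Reg))
    FuncInfo->setSpillsCR();  // so frame finalization knows a CR spill exists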

llvm-svn: 181800
2013-05-14 16:08:32 +00:00
Jyotsna Verma 7dcbb96e26 Hexagon: Test case to check if branch probabilities are properly reflected in
the jump instructions in the form of taken/not-taken hints.

llvm-svn: 181799
2013-05-14 15:50:49 +00:00
Jyotsna Verma c61e350a7d Hexagon: Remove dead-code after unconditional return from addPreSched2.
llvm-svn: 181797
2013-05-14 15:33:27 +00:00
Tom Stellard 1e21b53020 R600/SI: Add processor type for Hainan asic
Patch by: Alex Deucher

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>

NOTE: This is a candidate for the 3.3 branch.
llvm-svn: 181792
2013-05-14 14:42:56 +00:00
Duncan Sands b33790d898 Get the unittests compiling when building with cmake and the setting
-DLLVM_ENABLE_THREADS=false.

llvm-svn: 181788
2013-05-14 13:29:16 +00:00
Rafael Espindola 17268dc192 Declare __clear_cache.
GCC declares __clear_cache in the gnu modes (-std=gnu++98,
-std=gnu++11), but not in the strict modes (-std=c++98, -std=c++11). This patch
declares it and therefore fixes the build when using one of the strict modes.

llvm-svn: 181785
2013-05-14 13:02:37 +00:00
Richard Sandiford eb9af29426 [SystemZ] Add disassembler support
llvm-svn: 181777
2013-05-14 10:17:52 +00:00
Michel Danzer 1290369f7b R600/SI: Add lit test coverage for the remaining patterns added recently
Reviewed-by: Christian König <christian.koenig@amd.com>
llvm-svn: 181775
2013-05-14 09:53:30 +00:00
Richard Sandiford 18272f8490 [SystemZ] Add extra testscases for r181773
Forgot to svn add these...

llvm-svn: 181774
2013-05-14 09:49:11 +00:00
Richard Sandiford 1fb5883d77 [SystemZ] Rework handling of constant PC-relative operands
The GNU assembler treats things like:

        brasl   %r14, 100

in the same way as:

        brasl   %r14, .+100

rather than as a branch to absolute address 100.  We implemented this in
LLVM by creating an immediate operand rather than the usual expr operand,
and by handling immediate operands specially in the code emitter.
This was undesirable for (at least) three reasons:

- the specialness of immediate operands was exposed to the backend MC code,
  rather than being limited to the assembler parser.

- in disassembly, an immediate operand really is an absolute address.
  (Note that this means reassembling printed disassembly can't recreate
  the original code.)

- it would interfere with any assembly manipulation that we might
  try in future.  E.g. operations like branch shortening can change
  the relative position of instructions, but any code that updates
  sym+offset addresses wouldn't update an immediate "100" operand
  in the same way as an explicit ".+100" operand.

This patch changes the implementation so that the assembler creates
a "." label for immediate PC-relative operands, so that the operand
to the MCInst is always the absolute address.  The patch also adds
some error checking of the offset.
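
A rough MC-level sketch of the "." trick (hypothetical parser fragment; exact API
spellings vary across LLVM versions):

  // When the parser sees a bare integer where a PC-relative operand is expected:
  MCSymbol *Dot = Ctx.createTempSymbol();  // a label for "here"
  getStreamer().emitLabel(Dot);
  const MCExpr *Expr = MCBinaryExpr::createAdd(
      MCSymbolRefExpr::create(Dot, Ctx),
      MCConstantExpr::create(Offset, Ctx), Ctx);
  // The operand is now always an expression, so the code emitter needs no special case.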

llvm-svn: 181773
2013-05-14 09:47:26 +00:00
Richard Sandiford 6a808f986b [SystemZ] Remove bogus isAsmParserOnly
Marking instructions as isAsmParserOnly stops them from being disassembled.
However, in cases where separate asm and codegen versions exist, we actually
want to disassemble to the asm ones.

No functional change intended.

llvm-svn: 181772
2013-05-14 09:38:07 +00:00
Richard Sandiford 7d37cd26c6 [SystemZ] Match operands to fields by name rather than by order
The SystemZ port currently relies on the order of the instruction operands
matching the order of the instruction field lists.  This isn't desirable
for disassembly, where the two are matched only by name.  E.g. the R1 and R2
fields of an RR instruction should have corresponding R1 and R2 operands.

The main complication is that addresses are compound operands,
and as far as I know there is no mechanism to allow individual
suboperands to be selected by name in "let Inst{...} = ..." assignments.
Luckily it doesn't really matter though.  The SystemZ instruction
encoding groups all address fields together in a predictable order,
so it's just as valid to see the entire compound address operand as
a single field.  That's the approach taken in this patch.

Matching by name in turn means that the operands to COPY SIGN and
CONVERT TO FIXED instructions can be given in natural order.
(It was easier to do this at the same time as the rename,
since otherwise the intermediate step was too confusing.)

No functional change intended.

llvm-svn: 181771
2013-05-14 09:36:44 +00:00
Richard Sandiford d454ec0c31 [SystemZ] Match operands to fields by name rather than by order
The SystemZ port currently relies on the order of the instruction operands
matching the order of the instruction field lists.  This isn't desirable
for disassembly, where the two are matched only by name.  E.g. the R1 and R2
fields of an RR instruction should have corresponding R1 and R2 operands.

The main complication is that addresses are compound operands,
and as far as I know there is no mechanism to allow individual
suboperands to be selected by name in "let Inst{...} = ..." assignments.
Luckily it doesn't really matter though.  The SystemZ instruction
encoding groups all address fields together in a predictable order,
so it's just as valid to see the entire compound address operand as
a single field.  That's the approach taken in this patch.

Matching by name in turn means that the operands to COPY SIGN and
CONVERT TO FIXED instructions can be given in natural order.
(It was easier to do this at the same time as the rename,
since otherwise the intermediate step was too confusing.)

No functional change intended.

llvm-svn: 181769
2013-05-14 09:28:21 +00:00
Michael Gottesman 0c8b562851 Removed trailing whitespace.
llvm-svn: 181760
2013-05-14 06:40:10 +00:00
Reed Kotler 821e86f021 Fix typo.
llvm-svn: 181759
2013-05-14 06:00:01 +00:00
Reed Kotler cad47f0297 Removed an unnamed namespace but forgot to make two of the functions inside it
"static".

llvm-svn: 181754
2013-05-14 02:13:45 +00:00
Reed Kotler 2c4657d9b7 This is the first of three patches which creates stubs used for
Mips16/32 floating point interoperability.

When Mips16 code calls external functions that would normally have some
of its parameters or return values passed in floating point registers,
it needs (Mips32) helper functions to do this because while in Mips16 mode
there is no ability to access the floating point registers.

In Pic mode, this is done with a set of predefined functions in libc.
This case is already handled in llvm for Mips16.

In static relocation mode, for efficiency reasons, the compiler generates
stubs that the linker will use if it turns out that the external function
is a Mips32 function. (If it's Mips16, then it does not need the helper
stubs).

These stubs are identically named and the linker knows about these tricks
and will not create multiple copies and will delete them if they are not
needed.

llvm-svn: 181753
2013-05-14 02:00:24 +00:00
Akira Hatanaka 1f24e6a6a2 StackColoring: don't clear an instruction's mem operand if the underlying
object is a PseudoSourceValue and PseudoSourceValue::isConstant returns true (i.e.,
points to memory that has a constant value).

llvm-svn: 181751
2013-05-14 01:42:44 +00:00
David Blaikie 7b770c6aed Assert that DIEEntries are constructed with non-null DIEs
This just brings a crash a little further forward from DWARF emission to
DIE construction to make errors easier to diagnose.

llvm-svn: 181748
2013-05-14 00:35:19 +00:00
Arnold Schwaighofer 2e7a922a15 LoopVectorize: Handle loops with multiple forward inductions
We used to give up if we saw two integer inductions. After this patch, we base
further induction variables on the chosen one like we do in the reverse
induction and pointer induction case.

Fixes PR15720.

radar://13851975

llvm-svn: 181746
2013-05-14 00:21:18 +00:00
Michael Gottesman f3f9e3b10a [objc-arc-opts] Added debug statements when we set and unset whether a pointer is known positive.
llvm-svn: 181745
2013-05-14 00:08:09 +00:00
Michael Gottesman a76143eeee [objc-arc-opts] In the presence of an alloca, unconditionally remove RR pairs if and only if we are both KnownSafeBU/KnownSafeTD rather than just one or the other.
In the presence of a block being initialized, the frontend will emit the
objc_retain on the original pointer and the release on the pointer loaded from
the alloca. Through provenance analysis, the optimizer will realize that the
two are related (albeit different), but since we only require KnownSafe in one
direction, it will match the inner retain on the original pointer with the guard
release on the original pointer. This is fixed by ensuring that in the presence
of allocas we only unconditionally remove pointers if both our retain and our
release are KnownSafe (i.e. we are KnownSafe in both directions), since we must
deal with the possibility that the frontend will emit what (to the optimizer)
appears to be unbalanced retain/releases.

An example of the miscompile is:

  %A = alloca
  retain(%x)
  retain(%x) <--- Inner Retain
  store %x, %A
  %y = load %A
  ... DO STUFF ...
  release(%y)
  call void @use(%x)
  release(%x) <--- Guarding Release

getting optimized to:

  %A = alloca
  retain(%x)
  store %x, %A
  %y = load %A
  ... DO STUFF ...
  release(%y)
  call void @use(%x)

rdar://13750319

llvm-svn: 181743
2013-05-13 23:49:42 +00:00
Matt Beaumont-Gay e55d9492e3 Move a couple more statistics inside '#ifndef NDEBUG'.
Suppresses an unused-variable warning in -Asserts builds.

llvm-svn: 181733
2013-05-13 21:10:49 +00:00
Jack Carter f5f48d8ff7 Mips assembler: Assembler macro ADDIU $rs,imm
This patch adds an alias for the addiu instruction which enables the following syntax:

    addiu $rs,imm

The macro is translated as:

    addiu $rs,$rs,imm


Contributor: Vladimir Medic
llvm-svn: 181729
2013-05-13 20:26:46 +00:00
Michael Gottesman 993fbf704a [objc-arc-opts] Add comment to BBState making it clear that get{TopDown,BottomUp}PtrState will create a new PtrState object if it does not find a PtrState for Arg.
llvm-svn: 181726
2013-05-13 19:40:39 +00:00
Bill Schmidt 6cda22a3b4 Fix goofy commentary in PPCTargetObjectFile.cpp.
llvm-svn: 181725
2013-05-13 19:40:36 +00:00
Bill Schmidt 22d40dcfe9 PPC64: Constant initializers with dynamic relocations go in .data.rel.ro.
This fixes warning messages observed in the oggenc application test in
projects/test-suite.  Special handling is needed for the 64-bit
PowerPC SVR4 ABI when a constant is initialized with a pointer to a
function in a shared library.  Because a function address is
implemented as the address of a function descriptor, the use of copy
relocations can lead to problems with initialization.  GNU ld
therefore replaces copy relocations with dynamic relocations to be
resolved by the dynamic linker.  This means the constant cannot reside
in the read-only data section, but instead belongs in .data.rel.ro,
which is designed for constants containing dynamic relocations.

The implementation creates a class PPC64LinuxTargetObjectFile
inheriting from TargetLoweringObjectFileELF, which behaves like its
parent except to place constants of this sort into .data.rel.ro.

The test case is reduced from the oggenc application.

llvm-svn: 181723
2013-05-13 19:34:37 +00:00
Bob Wilson c5c0823724 Remove redundant variable introduced by r181682.
llvm-svn: 181721
2013-05-13 19:02:31 +00:00
Michael Gottesman 9fc50b82a4 [objc-arc] Move the before optimization statistics gathering phase out of OptimizeIndividualCalls.
This makes the statistics gathering completely independent of the actual
optimization occurring, preventing any sort of bleeding over from occurring.

Additionally, it simplifies a switch statement in the non-statistic gathering case.

llvm-svn: 181719
2013-05-13 18:29:07 +00:00
Akira Hatanaka 9edae02db8 [mips] Add option -mno-ldc1-sdc1.
This option is used when the user wants to avoid emitting double precision FP
loads and stores. Double precision FP loads and stores are expanded to single
precision instructions after register allocation.
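
Such backend switches are typically surfaced through llvm::cl; a sketch (the real flag
lives in the Mips backend and its exact declaration may differ):

  #include "llvm/Support/CommandLine.h"
  using namespace llvm;

  static cl::opt<bool> NoLdc1Sdc1(
      "mno-ldc1-sdc1",
      cl::desc("Expand double-precision FP loads and stores into pairs of "
               "single-precision instructions after register allocation"),
      cl::init(false));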

llvm-svn: 181718
2013-05-13 18:23:35 +00:00