Commit Graph

Evan Cheng 5c03a6b8f5 When hoisting common code, watch out for uses which are marked "kill". If the
killed registers are needed below the insertion point, then unset the kill
marker.

Sorry, I'm not able to find a reduced test case.

rdar://10660944

llvm-svn: 148043
2012-01-12 20:31:24 +00:00
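
A hedged sketch of the idea in r148043 above, assuming current LLVM CodeGen headers; clearStaleKills and isLiveBelow are hypothetical names for illustration, not code from the patch.

  #include "llvm/ADT/STLExtras.h"
  #include "llvm/CodeGen/MachineInstr.h"

  using namespace llvm;

  // When an instruction is hoisted above other uses of a register, a "kill"
  // flag on one of its register operands can become stale: if the register
  // is still needed below the insertion point, the flag must be cleared so
  // liveness information stays correct.
  void clearStaleKills(MachineInstr &MI,
                       function_ref<bool(Register)> isLiveBelow) {
    for (MachineOperand &MO : MI.operands())
      if (MO.isReg() && MO.isKill() && isLiveBelow(MO.getReg()))
        MO.setIsKill(false);
  }
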
Rafael Espindola 00e861ed57 Support segmented stacks on 64-bit FreeBSD.
This patch uses the tcb_spare field in the tcb structure to store the stack limit.
Patch by Jyun-Yan You.

llvm-svn: 148041
2012-01-12 20:24:30 +00:00
Rafael Espindola 10745d3381 Support segmented stacks on win32.
Uses the pvArbitrary slot of the TIB, which is reserved for applications. We
only support frames with a static size.

llvm-svn: 148040
2012-01-12 20:22:08 +00:00
Evan Cheng 09cc429cb1 Allow targets to select source order pre-RA scheduler.
llvm-svn: 148033
2012-01-12 18:27:52 +00:00
Devang Patel 4a6e778aae Rename X86ATTAsmParser -> X86AsmParser
We are using one parser to parse both AT&T and Intel style syntax.

llvm-svn: 148032
2012-01-12 18:03:40 +00:00
Jakob Stoklund Olesen 994fed689f Make SplitAnalysis::UseSlots private.
llvm-svn: 148031
2012-01-12 17:53:44 +00:00
Benjamin Kramer 9ece950ddb After Jakob's r147938, exception handling on i386 was completely broken.
Restore the (obviously wrong) behavior from before r147938 without relying on
undefined behavior. Add a fat FIXME note.

This should fix nightly tester failures.

llvm-svn: 148030
2012-01-12 17:37:18 +00:00
Nadav Rotem 0a0a829bea Fix a bug in the AVX 256-bit shuffle code in cases where the splat element is on the boundary of two 128-bit vectors.
The attached testcase was stuck in an endless loop.

llvm-svn: 148027
2012-01-12 15:31:55 +00:00
Benjamin Kramer 5b3aa60b44 X86: Generalize the x << (y & const) optimization to also catch masks with more bits set than 31 or 63.
llvm-svn: 148024
2012-01-12 12:41:34 +00:00
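
A self-contained illustration of the identity behind r148024 above (my example, not part of the commit): a 32-bit shift only admits counts below 32, and the x86 SHL instruction itself reads just the low 5 bits of the count, so a mask with more bits set than 31 constrains nothing extra.

  #include <cassert>
  #include <cstdint>

  int main() {
      // 63 has more bits set than 31, yet for every count y < 32 the two
      // masks agree, so the wider mask can be treated exactly like 31 when
      // selecting a 32-bit shift (and likewise 63 for 64-bit shifts).
      for (uint32_t y = 0; y < 32; ++y)
          assert((y & 63) == (y & 31));
      return 0;
  }
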
Devang Patel fc6be102ae Add a predicate method check to match memory operand size, if available.
In AT&T-style asm syntax the memory operand size is derived from the suffix attached to the mnemonic.  In Intel-style asm syntax it is part of the memory operand, hence a predicate method check is required to select the appropriate instruction.

llvm-svn: 148006
2012-01-12 01:51:42 +00:00
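
A toy sketch of the predicate idea in r148006 above; this is not the actual X86Operand class, and the field and method names are purely illustrative.

  // In Intel syntax the size is spelled on the operand ("dword ptr [eax]")
  // rather than as a mnemonic suffix, so instruction matching needs
  // per-size predicates on the memory operand.
  struct IntelMemOperand {
    bool IsMem = false;
    unsigned SizeInBits = 0; // 0 when no ptr size was written

    bool isMem() const { return IsMem; }
    bool isMem32() const { return isMem() && SizeInBits == 32; } // dword ptr
    bool isMem64() const { return isMem() && SizeInBits == 64; } // qword ptr
  };
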
Bill Wendling 58c7569854 A DenseMap of a std::map isn't a very good idea because the "grow()" method will
need to make a deep copy of each of the std::maps. Use a std::map of
std::maps instead. This improves the compile time of sqlite3 by ~2%.

llvm-svn: 148003
2012-01-12 01:41:03 +00:00
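
A sketch of the trade-off described in r148003 above, assuming LLVM's ADT headers are available; the type aliases are illustrative, not from the patch.

  #include "llvm/ADT/DenseMap.h"
  #include <map>

  // DenseMap is open-addressed: grow() allocates a larger table and
  // re-inserts every element, which in the 2012 code base copied each value.
  // With std::map values, every grow deep-copies entire trees.
  using Costly = llvm::DenseMap<unsigned, std::map<unsigned, unsigned>>;

  // std::map is node-based: inserting new keys never relocates existing
  // elements, so the inner maps are never copied wholesale.
  using Cheaper = std::map<unsigned, std::map<unsigned, unsigned>>;
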
Devang Patel 46831de240 Add intel style operand parser skeleton.
This is a work in progress.

llvm-svn: 148002
2012-01-12 01:36:43 +00:00
Chandler Carruth eb21da060b Switch all of the uses of my InsertDAGNode helper to follow the exact
same pattern. We already had this pattern in a few places, but others
tried to make a rough approximation of an actual DAG structure. Since not
every place went to this trouble, nothing could rely on this being done.
In fact, I've checked all references to these node Ids, and the ones
that are using the topo-sort properties are actually satisfied with
a strict weak ordering. The requirement appears to be that Use >= Def.

I've added a big blurb of comments to this bit of the transform to
clarify why the order is so important for the next reader of the code.

I'm starting with this change as it is very small, and trivially
reverted if something breaks or the >= above really does need to be >.
If that proves the case, we can hide the problem by reverting this
patch, but the problem exists elsewhere as well, and so a more
comprehensive solution will be needed.

llvm-svn: 148001
2012-01-12 01:34:44 +00:00
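
A small self-contained illustration of the ordering invariant r148001 above relies on; Node and the checker are toy code, not the SelectionDAG types.

  #include <vector>

  // Consumers of the node Ids only need "Use >= Def": a use may share an Id
  // with its def but must never be numbered before it. That is weaker than
  // a full topological sort of the DAG.
  struct Node {
    int Id = 0;
    std::vector<const Node *> Defs; // the values this node uses
  };

  bool useNotBeforeDef(const Node &Use) {
    for (const Node *Def : Use.Defs)
      if (Use.Id < Def->Id)
        return false;
    return true;
  }
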
Bill Wendling 4ec081a4d2 Revert r147978. A DenseMap's iterators may become invalidated here.
llvm-svn: 147980
2012-01-11 23:43:34 +00:00
Jakob Stoklund Olesen 20f19eb9ab Make data structures private.
llvm-svn: 147979
2012-01-11 23:19:08 +00:00
Bill Wendling f0275df9e3 Use a DenseMap.
This appears to improve sqlite3's compile time by ~2%.

llvm-svn: 147978
2012-01-11 22:57:32 +00:00
Jakob Stoklund Olesen 73edbf1682 Sink spillInterferences into RABasic.
This helper method is too simplistic for RAGreedy.

llvm-svn: 147976
2012-01-11 22:52:14 +00:00
Jakob Stoklund Olesen 06ec420347 Cleanup.
llvm-svn: 147975
2012-01-11 22:52:11 +00:00
Jakob Stoklund Olesen a818d804a1 Move RegAllocBase into its own cpp file separate from RABasic.
No functional change.

llvm-svn: 147972
2012-01-11 22:28:30 +00:00
Eli Friedman b31c627be1 Re-fix the issue Bill fixed in r147899 in a slightly different way, which doesn't abuse the semantics of linker_private. We don't really want to merge any string constant with a weak_odr global.
llvm-svn: 147971
2012-01-11 22:06:46 +00:00
Eric Christopher d284c1d80d Fix assert.
llvm-svn: 147966
2012-01-11 20:55:27 +00:00
Argyrios Kyrtzidis cd8fe08e4d Disable the crash reporter when running lit tests.
llvm-svn: 147965
2012-01-11 20:53:25 +00:00
Nadav Rotem b5ce6ee835 On AVX, we can load v8i32 in a single load. The bug happens when two loads of uneven size are used.
When we load the v12i32 type, the GenWidenVectorLoads method generates two loads, v8i32 and v4i32,
and attempts to use CONCAT_VECTORS to join them. In this fix I concat undef values to widen
the smaller value. The test "widen_load-2.ll" also exposes this bug on AVX.

llvm-svn: 147964
2012-01-11 20:19:17 +00:00
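
A hedged sketch of the shape of the fix in r147964 above; joinWidenedHalves is a hypothetical helper with signatures from recent LLVM, not the actual GenWidenVectorLoads code.

  #include "llvm/CodeGen/SelectionDAG.h"

  using namespace llvm;

  // CONCAT_VECTORS requires operands of the same type, so the narrow v4i32
  // half is first padded to v8i32 with undef and only then joined with the
  // v8i32 half.
  SDValue joinWidenedHalves(SelectionDAG &DAG, const SDLoc &DL,
                            SDValue Lo8, SDValue Hi4) {
    EVT V8 = Lo8.getValueType();                       // v8i32
    SDValue Undef4 = DAG.getUNDEF(Hi4.getValueType()); // v4i32 undef padding
    SDValue Hi8 = DAG.getNode(ISD::CONCAT_VECTORS, DL, V8, Hi4, Undef4);
    EVT V16 = EVT::getVectorVT(*DAG.getContext(), MVT::i32, 16);
    // Only the low 12 lanes of the result carry the original v12i32 value.
    return DAG.getNode(ISD::CONCAT_VECTORS, DL, V16, Lo8, Hi8);
  }
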
Rafael Espindola d90466bcbf Support segmented stacks on mac.
This uses TLS slot 90, which actually belongs to JavaScriptCore. We only support
frames with a static size.
Patch by Brian Anderson.

llvm-svn: 147960
2012-01-11 19:00:37 +00:00
Rafael Espindola 4eecacb9c8 Generate the segmented stack prologue for fastcc too.
Patch by Brian Anderson.

llvm-svn: 147958
2012-01-11 18:41:19 +00:00
Chandler Carruth 3212a34269 Revert r147945, which disabled an addressing mode transformation. I had
hoped this would revive one of the llvm-gcc selfhost build bots, but it
didn't, so it doesn't appear that my transform is the culprit.

If anyone else is seeing failures, please let me know!

llvm-svn: 147957
2012-01-11 18:36:12 +00:00
Rafael Espindola 2b89448d60 Use unsigned comparison in segmented stack prologue.
This is a comparison of two addresses, and GCC does the comparison unsigned.

Patch by Brian Anderson.

llvm-svn: 147954
2012-01-11 18:23:35 +00:00
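
A self-contained demonstration of why r147954 above matters (my example, not the patch): the prologue compares the stack pointer against the stack limit, and both are addresses, i.e. unsigned quantities.

  #include <cstdint>
  #include <cstdio>

  int main() {
      // On a 32-bit target, a stack mapped above 0x80000000 looks negative
      // to a signed compare, producing a bogus "need more stack" result.
      uint32_t sp = 0x90000000u;
      uint32_t limit = 0x10000000u;
      bool signedLess = (int32_t)sp < (int32_t)limit; // true: wrong
      bool unsignedLess = sp < limit;                 // false: correct
      std::printf("signed: %d, unsigned: %d\n", signedLess, unsignedLess);
      return 0;
  }
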
Kostya Serebryany 687d078192 [asan] extend the workaround for http://llvm.org/bugs/show_bug.cgi?id=11395: don't instrument the function at all on x86_32 if it has a large asm blob
llvm-svn: 147953
2012-01-11 18:15:23 +00:00
Rafael Espindola 6635ae1c17 Explicitly set the scale to 1 on some segstack prologue instrs.
Patch by Brian Anderson.

llvm-svn: 147952
2012-01-11 18:14:03 +00:00
Kevin Enderby 6223cf72e6 The error check for using -g with a .s file already containing dwarf .file
directives was in the wrong place and getting triggered incorrectly by a
cpp .file directive.  This change fixes that and adds a test case.

llvm-svn: 147951
2012-01-11 18:04:47 +00:00
Jan Sjödin 21f83d9f36 Add XOP Intrinsics and tests
llvm-svn: 147949
2012-01-11 15:20:20 +00:00
Nadav Rotem baae7e4577 Fix a bug in the lowering of BUILD_VECTOR for AVX. SCALAR_TO_VECTOR does not zero untouched elements. Use INSERT_VECTOR_ELT instead.
llvm-svn: 147948
2012-01-11 14:07:51 +00:00
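
A hedged sketch of the distinction r147948 above draws, using SelectionDAG calls with signatures from recent LLVM (the 2012 sources differed); scalarIntoZeroVector is a hypothetical helper.

  #include "llvm/CodeGen/SelectionDAG.h"

  using namespace llvm;

  // SCALAR_TO_VECTOR defines only lane 0 and leaves the remaining lanes
  // undef, so it must not be used where those lanes are expected to be
  // zero. Inserting into a known-zero vector keeps every untouched lane
  // well-defined.
  SDValue scalarIntoZeroVector(SelectionDAG &DAG, const SDLoc &DL, EVT VT,
                               SDValue Elt) {
    SDValue Zero = DAG.getConstant(0, DL, VT); // all-zeros vector
    return DAG.getNode(ISD::INSERT_VECTOR_ELT, DL, VT, Zero, Elt,
                       DAG.getVectorIdxConstant(0, DL));
  }
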
Duncan Sands 0bf46b5363 Don't try to create a GEP when the pointee type is unsized (such GEPs
are invalid).  Fixes a crash on array1.C from the GCC testsuite when
compiled with dragonegg.

llvm-svn: 147946
2012-01-11 12:20:08 +00:00
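
A hedged sketch of the guard described in r147946 above; tryCreateGEP is a hypothetical helper and the header layout is from recent LLVM, not the actual dragonegg-era change.

  #include "llvm/IR/Instructions.h"

  using namespace llvm;

  // A GEP over an unsized pointee type is invalid IR, so check
  // Type::isSized() first and let the caller bail out instead of crashing.
  Value *tryCreateGEP(Type *PointeeTy, Value *BasePtr,
                      ArrayRef<Value *> Idxs, Instruction *InsertBefore) {
    if (!PointeeTy->isSized())
      return nullptr;
    return GetElementPtrInst::Create(PointeeTy, BasePtr, Idxs, "",
                                     InsertBefore);
  }
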
Chandler Carruth 9bc48e5215 Disable the transformation I added in r147936 to see if it fixes some
strange build bot failures that look like a miscompile into an infinite loop.
I'll investigate this tomorrow, but I'd like both to know whether my
patch is the culprit and to get the bots back to green.

llvm-svn: 147945
2012-01-11 12:17:47 +00:00
Chandler Carruth 3eacfb83fa Hoist a really redundant code pattern into a helper function, and delete
lots of lines of code. No functionality changed.

llvm-svn: 147942
2012-01-11 11:04:36 +00:00
Chandler Carruth b0049f4a43 Simplify the AND-rooted mask+shift checking code to match that of the
SRL-rooted code.

llvm-svn: 147941
2012-01-11 09:35:04 +00:00
Chandler Carruth 3dbcda8478 Unify the interface of the three mask+shift transform helpers, and
factor the differences that were hiding in one of them into its other
caller, the SRL handling code. No change in behavior.

llvm-svn: 147940
2012-01-11 09:35:02 +00:00
Chandler Carruth aa01e6661a Clarify and make explicit some of the requirements for transforming
mask+shift pairs at the beginning of the ISD::AND case block, and then
hoist the final pattern into a helper function, simplifying and
reflowing it appropriately. This should have no observable behavior
change, but several simplifications fell out of this such as directly
computing the new mask constant, etc.

llvm-svn: 147939
2012-01-11 09:35:00 +00:00
Jakob Stoklund Olesen 6039983755 Fix undefined code and reenable test case.
I don't think the compact encoding code is right, but at least it has
defined behavior now.

llvm-svn: 147938
2012-01-11 09:08:04 +00:00
Chandler Carruth 51d3076bbf Hoist the logic to transform shift+mask combinations into sub-register
extracts and scaled addressing modes into its own helper function. No
functionality changed here, just hoisting and layout fixes falling out
of that hoisting.

llvm-svn: 147937
2012-01-11 08:48:20 +00:00
Chandler Carruth 55b2cdee26 Teach the X86 instruction selection to do some heroic transforms to
detect a pattern which can be implemented with a small 'shl' embedded in
the addressing mode scale. This happens in real code as follows:

  unsigned x = my_accelerator_table[input >> 11];

Here we have some lookup table that we look into using the high bits of
'input'. Each entry in the table is 4 bytes, which means this
implicitly gets turned into (once lowered out of a GEP):

  *(unsigned*)((char*)my_accelerator_table + ((input >> 11) << 2));

The shift right followed by a shift left is canonicalized to a smaller
shift right and masking off the low bits. That hides the shift right,
which x86 has an addressing mode designed to support. We now detect
masks of this form, and produce the longer shift right followed by the
proper addressing mode. In addition to saving a (rather large)
instruction, this also reduces stalls in Intel chips on benchmarks I've
measured.

In order for all of this to work, one part of the DAG needs to be
canonicalized *still further* than it currently is. This involves
removing pointless 'trunc' nodes between a zextload and a zext. Without
that, we end up generating spurious masks and hiding the pattern.

llvm-svn: 147936
2012-01-11 08:41:08 +00:00
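
A self-contained check of the canonicalization r147936 above works around (my example, not part of the commit): the shr/shl pair from the table lookup is rewritten into a shorter shr plus a mask, which is what the new ISel code must recognize to rebuild the addressing-mode scale.

  #include <cassert>
  #include <cstdint>

  uint32_t shr_then_shl(uint32_t input) { return (input >> 11) << 2; }
  uint32_t canonical(uint32_t input)    { return (input >> 9) & ~3u; }

  int main() {
      // The two forms agree everywhere; only the second hides the shl that
      // could have folded into the addressing mode as a scale of 4.
      for (uint32_t x = 0; x < (1u << 20); ++x)
          assert(shr_then_shl(x) == canonical(x));
      return 0;
  }
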
Stepan Dyatkovskiy 8216569812 Improved compile time:
1. Size heuristics changed. Now we calculate the number of unswitching
branches only once per loop.
2. Some checks were moved from UnswitchIfProfitable to
processCurrentLoop, since they do not change during the processCurrentLoop
iteration. This allows us to decide to skip some loops at an early stage.
Extended statistics:
- Added the total number of instructions analyzed.

llvm-svn: 147935
2012-01-11 08:40:51 +00:00
Andrew Trick e81211f45c Clarified the SCEV getSmallConstantTripCount interface with in-your-face comments.
This interface is misleading and dangerous, but it is actually what we need for unrolling.

llvm-svn: 147926
2012-01-11 06:52:55 +00:00
Rafael Espindola 647841b181 Add big endian mips support. Based on a patch by Jack Carter.
llvm-svn: 147924
2012-01-11 04:04:14 +00:00
Rafael Espindola 870c4e92b9 Add the skeleton of an asm parser for mips.
llvm-svn: 147923
2012-01-11 03:56:41 +00:00
Andrew Trick 642f0f6a40 ARM Ld/St Optimizer fix.
Allow LDRD to be formed from pairs with different LDR encodings. This was the original intention of the pass. Somewhere along the way, the LDR opcodes were refined which broke the optimization. We really don't care what the original opcodes are as long as they both map to the same LDRD and the immediate still fits.

Fixes rdar://10435045 ARMLoadStoreOptimization cannot handle mixed LDRi8/LDRi12

llvm-svn: 147922
2012-01-11 03:56:08 +00:00
Jakob Stoklund Olesen 8b1d023a4a Detect when a value is undefined on an edge to a landing pad.
Consider this code:

int h() {
  int x;
  try {
    x = f();
    g();
  } catch (...) {
    return x+1;
  }
  return x;
}

The variable x is undefined on the first edge to the landing pad, but it
has the f() return value on the second edge to the landing pad.

SplitAnalysis::getLastSplitPoint() would assume that the return value
from f() was live into the landing pad when f() throws, which is of
course impossible.

Detect these cases, and treat them as if the landing pad wasn't there.
This allows spill code to be inserted after the function call to f().

<rdar://problem/10664933>

llvm-svn: 147912
2012-01-11 02:07:05 +00:00
Jakob Stoklund Olesen 67aec12409 Exclusively use SplitAnalysis::getLastSplitPoint().
Delete the alternative implementation in LiveIntervalAnalysis.

These functions computed the same thing, but SplitAnalysis caches the
result.

llvm-svn: 147911
2012-01-11 02:07:00 +00:00
Evan Cheng d9725a38d6 Avoid CSE of instructions which define physical registers across MBBs unless
the physical registers are not allocatable.

llvm-svn: 147902
2012-01-11 00:38:11 +00:00
Bill Wendling c79155192d If the global variable is removed by the linker, then don't constant merge it
with other symbols.

An object in the __cfstring section is supposed to be filled with CFString
objects, which have a pointer to ___CFConstantStringClassReference followed by a
pointer to a __cstring. If we allow the object in the __cstring section to be
merged with another global, then it could end up in any section. Because the
linker is going to remove these symbols in the final executable, we shouldn't
bother to merge them.
<rdar://problem/10564621>

llvm-svn: 147899
2012-01-11 00:13:08 +00:00