This allows us to generate table lookups for code such as:
  unsigned test(unsigned x) {
    switch (x) {
    case 100: return 0;
    case 101: return 1;
    case 103: return 2;
    case 105: return 3;
    case 107: return 4;
    case 109: return 5;
    case 110: return 6;
    default: return f(x);
    }
  }
Since cases 102, 104, etc. are not covered, the lookup table has holes
in those positions. We therefore guard the table lookup with a bitmask check.
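Schematically, the transformed function behaves like the C sketch below. The
table contents, hole filler value, and mask constant are illustrative, not the
actual generated IR:

  unsigned f(unsigned);  // the default-case callee from the example above

  // Indices 0..10 correspond to x - 100; holes (102, 104, ...) hold a dummy 0.
  static const unsigned table[11] = {0, 1, 0, 2, 0, 3, 0, 4, 0, 5, 6};

  unsigned test(unsigned x) {
    unsigned idx = x - 100;  // underflows to a huge value for x < 100
    // Bit i of the mask is set iff index i has a real entry:
    // indices 0, 1, 3, 5, 7, 9, 10 -> binary 11010101011 -> 0x6AB.
    if (idx <= 10 && ((0x6ABu >> idx) & 1))
      return table[idx];
    return f(x);  // out of range, or landed in a hole
  }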
Patch by Jasper Neumann!
llvm-svn: 203694
The peephole (shift x, (and y, 31)) -> (shift x, y) is repeated for each
integer type and each shift variant.
To improve this, a new multiclass is added that covers all integer types, and
the shift patterns are now instantiated from it. I am planning to add new
instances for rotates as well.
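For context (a hardware fact, not stated in the commit): the fold is legal on
x86 because 32-bit shift instructions already mask the count to 5 bits, so the
explicit 'and' is redundant. In C terms:

  // Illustrative only: the hardware masks a 32-bit shift count to 5 bits,
  // so the '& 31' matches what the instruction does anyway and the
  // (shift x, (and y, 31)) -> (shift x, y) peephole can drop it.
  unsigned shl_masked(unsigned x, unsigned y) {
    return x << (y & 31);
  }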
No functional change intended:
* test/CodeGen/X86/shift-and.ll provides coverage
* Compared the expanded tablegen output and matched up the defs for these
Pat<>s before and after
llvm-svn: 203685
There's a bit of duplicated "magic" code in opt.cpp and Clang's CodeGen that
computes the inliner threshold from opt level and size opt level.
This patch moves the code to a function that lives alongside the inliner itself,
providing a convenient overload for creating the inliner pass.
A separate patch can be committed to Clang to use this once it's committed to
LLVM. Standalone tools that use the inlining pass can also reuse it instead of
duplicating this code and risking that the copies go out of sync.
Note: this patch also restructures the conditional logic of the computation to
be cleaner.
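As a sketch of what the shared helper might look like (the function name is
hypothetical and the thresholds are LLVM's long-standing defaults rather than
values quoted in this patch):

  // Hypothetical sketch; the name and exact values are assumptions.
  static int computeThresholdFromOptLevels(unsigned OptLevel,
                                           unsigned SizeOptLevel) {
    if (OptLevel > 2)       // -O3 and above: inline more aggressively.
      return 275;
    if (SizeOptLevel == 1)  // -Os
      return 75;
    if (SizeOptLevel == 2)  // -Oz
      return 25;
    return 225;             // Default threshold.
  }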
llvm-svn: 203669
Fix Windows build by adding JITLoaderGDB and ProcessElfCore.
RegisterContext: fixes for the Windows build: sizeof(GPR::register) didn't work, so it was switched to sizeof(((GPR*)NULL)->register).
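A hypothetical illustration of the workaround ('rax' stands in for the real
member name, since 'register' above is a reserved keyword and must be a
placeholder):

  #include <cstddef>  // for NULL

  // sizeof(GPR::rax) uses C++11's extended sizeof, which some MSVC versions
  // lacked; the cast form is legal because sizeof never evaluates its
  // operand, so the null pointer is not actually dereferenced.
  struct GPR { unsigned long long rax; };
  enum { kRaxSize = sizeof(((GPR*)NULL)->rax) };  // == sizeof(unsigned long long)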
llvm-svn: 203667
When a struct has bitfields overlapping with other members
(as required by the AAPCS), clang uses a packed struct to
represent this. If such a struct is large enough for clang to
pass it as a byval pointer (>64 bytes), we need to set the
alignment of the argument to match the original type.
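For instance, something along these lines triggers the issue (an assumed
illustration, not the testcase from the patch):

  // Under the AAPCS, 'arr' may occupy the unused bytes of the 32-bit
  // container holding 'b', so the members overlap the bitfield's container
  // and clang lowers the struct as packed. At >64 bytes it is passed byval,
  // and the argument alignment must come from the original type (8, for
  // 'l'), not from the packed representation (1).
  struct S {
    int b : 8;
    char arr[60];
    long long l;
  };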
llvm-svn: 203660
Summary:
This is a white lie to work around a widespread bug in the -mfp64
implementation.
The problem is that none of the 32-bit fpu ops mention the fact that they
clobber the upper 32-bits of the 64-bit FPR. This allows MTHC1 to be
scheduled on the wrong side of most 32-bit FPU ops, particularly MTC1.
Fixing that requires a major overhaul of the FPU implementation which can't
be done right now due to time constraints.
The testcase is SingleSource/Benchmarks/Misc/oourafft.c when given
TARGET_CFLAGS='-mips32r2 -mfp64 -mmsa'.
Also correct the comment added in r203464 to indicate that two
instructions were affected.
Reviewers: matheusalmeida, jacksprat
Reviewed By: matheusalmeida
Differential Revision: http://llvm-reviews.chandlerc.com/D3029
llvm-svn: 203659
Summary:
Correct the match patterns, along with the lowerings whose combined errors had allowed the CodeGen tests to pass despite the mistakes.
The original testcase that discovered the problem was SingleSource/UnitTests/SignlessType/factor.c in test-suite.
During review, we also found that some of the existing CodeGen tests were incorrect and fixed them:
* bitwise.ll: In bsel_v16i8 the IfSet/IfClear were reversed because bsel and bmnz have different operand orders and the test didn't correctly account for this. bmnz goes 'IfClear, IfSet, CondMask', while bsel goes 'CondMask, IfClear, IfSet'.
* vec.ll: In the cases where a bsel is emitted as a bmnz (they are the same operation with a different input tied to the result) the operands were in the wrong order.
* compare.ll and compare_float.ll: The bsel operand order was correct for a greater-than comparison, but a greater-than comparison instruction doesn't exist. Lowering this operation inverts the condition so the IfSet/IfClear need to be swapped to match.
The differences between BSEL, BMNZ, and BMZ and how they map to/from vselect are rather confusing. I've therefore added a note to MSA.txt to explain this in a single place in addition to the comments that explain each case.
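A scalar model of the selection all three instructions implement, using the
operand role names from the bullets above (illustrative only, not MSA code):

  // The result takes IfSet bits where the condition mask is 1 and
  // IfClear bits where it is 0.
  unsigned selectBits(unsigned IfClear, unsigned IfSet, unsigned CondMask) {
    return (IfSet & CondMask) | (IfClear & ~CondMask);
  }
  // Per the bullets above, bmnz reads its operands as
  // (IfClear, IfSet, CondMask) while bsel reads (CondMask, IfClear, IfSet),
  // so swapping instructions without reordering operands silently changes
  // which input plays which role.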
Reviewers: matheusalmeida, jacksprat
Reviewed By: matheusalmeida
Differential Revision: http://llvm-reviews.chandlerc.com/D3028
llvm-svn: 203657
When the list of VFP registers to be saved was non-contiguous (so multiple
vpush/vpop instructions were needed) these were being ordered oddly, as in:
vpush {d8, d9}
vpush {d11}
This led to the layout in memory being [d11, d8, d9], which is ugly and doesn't
match the CFI_INSTRUCTIONs we're generating either (so the DWARF info would be
broken).
This switches the order of vpush/vpop (in both prologue and epilogue,
obviously) so that the DWARF locations are correct again.
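That is, the pushes are now presumably emitted as:
  vpush {d11}
  vpush {d8, d9}
so the registers end up at ascending addresses, [d8, d9, d11].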
rdar://problem/16264856
llvm-svn: 203655