This is already done for shifts. Allow it for rotations as well. E.g.:
(rotl:i32 x, (trunc (and y, 31))) -> (rotl:i32 x, (and (trunc y), 31))
Use the newly factored-out distributeTruncateThroughAnd.
With this patch and some X86.td tweaks we should be able to remove redundant
masking of the rotation amount, as in the example above; the hardware performs
this masking implicitly.
The testcase will be added as part of the X86 patch.
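For illustration, a sketch (not the patch's testcase) of the kind of source
that produces this pattern:

  #include <cstdint>

  // Portable rotate-left whose amount comes from a wider value. The masked,
  // truncated amount gives (rotl:i32 x, (trunc (and y, 31))) in the DAG; the
  // combine turns it into (rotl:i32 x, (and (trunc y), 31)), and on x86 the
  // remaining AND can go away because ROL masks its count to 5 bits anyway.
  uint32_t rotl32(uint32_t x, uint64_t y) {
    uint32_t amt = static_cast<uint32_t>(y & 31);
    return (x << amt) | (x >> ((32 - amt) & 31));
  }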
llvm-svn: 203316
This is the new idiom:
x<<(y&31) | x>>((0-y)&31)
which is recognized as:
x ROTL (y&31)
The change refines matchRotateSub: when checking
Neg & (OpSize - 1) == (OpSize - Pos) & (OpSize - 1), if Pos has the form
Pos' & (OpSize - 1), we can use Pos' in place of Pos.
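For illustration (a sketch, not the patch's testcase), the idiom at the C++
level; here Pos is y & 31, i.e. already of the form Pos' & (OpSize - 1) with
Pos' = y:

  #include <cstdint>

  // x << (y & 31) | x >> ((0 - y) & 31): since ((0 - y) & 31) equals
  // ((32 - (y & 31)) & 31), the expression is a rotate-left of x by (y & 31).
  uint32_t rotl_by_sub(uint32_t x, uint32_t y) {
    return (x << (y & 31)) | (x >> ((0 - y) & 31));
  }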
llvm-svn: 203315
Slightly change the wording in the function comment. Originally, it could be
misread as saying that we turn the input into two subsequent rotates.
Better connect the comment, which talks about Mask, with the code, which used
LoBits, by renaming the variable to MaskLoBits.
llvm-svn: 203314
ISel: Make VSELECT selection terminate in cases where the condition type has to
be split and the result type widened.
When the condition of a vselect has to be split, it makes no sense to widen the
vselect and thereby widen the condition: doing so puts us in an endless loop of
widening (the vselect result type) and splitting (the condition mask type).
Instead, split both the condition and the vselect and widen the result.
I ran this over the test suite with i686 and mattr=+sse and saw no regressions.
Fixes PR18036.
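A rough sketch of the decision this introduces (illustrative code, not the
actual SelectionDAG legalizer):

  enum TypeAction { TypeLegal, TypeSplitVector, TypeWidenVector };

  // If the condition must be split while the result would be widened, widening
  // the vselect would also widen the condition, which then has to be split
  // again -- the endless loop described above. Split both instead; the halves
  // can then be widened safely.
  bool shouldSplitVSelect(TypeAction CondAction, TypeAction ResultAction) {
    return CondAction == TypeSplitVector && ResultAction == TypeWidenVector;
  }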
llvm-svn: 203311
Summary:
COMDAT_SELECT_SAME_SIZE is a COMDAT type that I presume exists only in COFF.
The semantics of the type are that the linker should merge such COMDAT sections
if their sizes are the same; otherwise it's an error.
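For illustration, the rule boils down to something like this (a sketch of the
semantics, not lld's actual code; Section and the error handling are
hypothetical stand-ins):

  #include <cstdint>
  #include <stdexcept>

  struct Section { uint64_t Size; };

  // IMAGE_COMDAT_SELECT_SAME_SIZE semantics: keep one copy when the duplicate
  // sections are the same size, otherwise it is a link error.
  const Section &resolveSameSizeComdat(const Section &Existing,
                                       const Section &Incoming) {
    if (Existing.Size != Incoming.Size)
      throw std::runtime_error("duplicate COMDAT sections differ in size");
    return Existing; // the incoming duplicate is discarded
  }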
Reviewers: Bigcheese, shankarke, kledzik
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D2996
llvm-svn: 203308
First: refactor out the emission of entries into the .debug_loc section
into its own routine.
Second: add a new class ByteStreamer that can be used to either emit
using an AsmPrinter or hash using DIEHash the series of bytes that
would be emitted. Use this in all of the location emission routines
for the .debug_loc section.
No functional change intended outside of a few additional comments
in verbose assembly.
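For a rough picture of the abstraction (names and signatures here are
illustrative, not the exact interface the patch adds):

  #include <cstdint>

  // One interface, two sinks: the .debug_loc emission routines write bytes
  // through this, so the same code path can either print via the AsmPrinter
  // or feed the identical byte sequence into a DIEHash for hashing.
  class ByteStreamer {
  public:
    virtual ~ByteStreamer() = default;
    virtual void emitInt8(uint8_t Byte) = 0;
    virtual void emitULEB128(uint64_t Value) = 0;
    virtual void emitSLEB128(int64_t Value) = 0;
  };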
llvm-svn: 203304
This changes the iterators so that they are no longer implemented in terms of ranges (so it's a very partial revert of the existing rangification efforts).
llvm-svn: 203299
These are sometimes created by the shrink-to-boolean optimization in the
globalopt pass.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 203280
Previously, the assertions in PointerIntPair would try to calculate the value
(1 << NumLowBitsAvailable); the inferred type here is 'int', so if there were
more than 31 bits available we'd get a shift overflow.
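For illustration (a sketch of the failure mode, not the actual PointerIntPair
code; assumes 64-bit pointers):

  #include <cstdint>

  enum { NumLowBitsAvailable = 40 };

  // The literal 1 has type 'int', so the commented-out line shifts an 'int'
  // by more than 31 bits, which is undefined behavior. The shift has to be
  // performed in a sufficiently wide type.
  // uintptr_t Bad = 1 << NumLowBitsAvailable;
  uintptr_t Good = static_cast<uintptr_t>(1) << NumLowBitsAvailable;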
Also, add a rudimentary unit test file for PointerIntPair.
llvm-svn: 203273
The integrated assembler now works for ppc. Since this was the last use of the
bg/p predicate and Hal says that it is now dead, drop the predicate too.
llvm-svn: 203269
After hitting the malloc() breakpoint on FreeBSD, our top frame is actually
the inlined function malloc_init.
* frame #0: 0x0000000800dcba19 libc.so.7`malloc [inlined] malloc_init at malloc.c:5397
frame #1: 0x0000000800dcba19 libc.so.7`malloc(size=1024) + 9 at malloc.c:5949
frame #2: 0x00000000004006e5 test_step_out_of_malloc_into_function_b_with_dwarf`b(val=1) + 37 at main2.cpp:29
Add a heuristic to keep stepping out until we come to a non-malloc caller,
before checking if it is our desired caller from the test code.
llvm.org/pr17944
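The heuristic amounts to something like this (an illustrative C++ sketch; the
real logic lives in the Python test, and the names here are hypothetical):

  #include <string>
  #include <vector>

  // Pop frames (step out) while the innermost frame still looks like part of
  // malloc, e.g. the inlined malloc_init, then check the caller we land in.
  bool stepOutOfMalloc(std::vector<std::string> &Frames,
                       const std::string &ExpectedCaller) {
    while (!Frames.empty() &&
           Frames.front().find("malloc") != std::string::npos)
      Frames.erase(Frames.begin()); // frame #0 is the innermost frame
    return !Frames.empty() && Frames.front() == ExpectedCaller;
  }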
llvm-svn: 203268