Commit Graph

39553 Commits

Author SHA1 Message Date
Matthew Simpson b25e87fca5 [LV] Process pointer IVs with PHINodes in collectLoopUniforms
This patch moves the processing of pointer induction variables in
collectLoopUniforms from the consecutive pointer phase of the analysis to the
phi node phase. Previously, if a pointer induction variable was used by both a
scalarized non-memory instruction and a vectorized memory instruction,
we would incorrectly identify the pointer as uniform. Pointer induction
variables should be treated the same as other phi nodes. That is, they are
uniform if all users of the induction variable and induction variable update
are uniform.
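
For illustration only (not from the patch itself), a minimal IR sketch of the
problematic pattern: the pointer IV %p has both a vectorizable memory user and
a scalarized non-memory user (the ptrtoint), so it must not be treated as
uniform:

  define void @f(i8* %p0, i64 %n) {
  entry:
    br label %loop
  loop:
    %p = phi i8* [ %p0, %entry ], [ %p.next, %loop ] ; pointer induction variable
    %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
    %v = load i8, i8* %p              ; consecutive, vectorizable memory use
    %int = ptrtoint i8* %p to i64     ; non-memory use that must be scalarized
    %t = trunc i64 %int to i8
    %sum = add i8 %v, %t
    store i8 %sum, i8* %p
    %p.next = getelementptr inbounds i8, i8* %p, i64 1
    %i.next = add nuw nsw i64 %i, 1
    %cmp = icmp ult i64 %i.next, %n
    br i1 %cmp, label %loop, label %exit
  exit:
    ret void
  }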

Differential Revision: https://reviews.llvm.org/D24511

llvm-svn: 281485
2016-09-14 14:47:40 +00:00
James Molloy 13065b00ba [ARM] Promote small global constants to constant pools
If a constant is unnamed_addr and is only used within one function, we can save
on the code size and runtime cost of an indirection by changing the global's storage
to inside the constant pool. For example, instead of:

      ldr r0, .CPI0
      bl printf
      bx lr
    .CPI0: &format_string
    format_string: .asciz "hello, world!\n"

We can emit:

      adr r0, .CPI0
      bl printf
      bx lr
    .CPI0: .asciz "hello, world!\n"

This can cause significant code size savings when many small strings are used in one
function (4 bytes per string).

llvm-svn: 281484
2016-09-14 14:47:27 +00:00
Simon Pilgrim 67fdd15cf9 [X86] Added i128 lshr+shl -> mask combine test
llvm-svn: 281480
2016-09-14 14:29:16 +00:00
Nemanja Ivanovic d5deb4896c Fix code-gen crash on Power9 for insert_vector_elt with variable index (PR30189)
This patch corresponds to review:
https://reviews.llvm.org/D24021

In the initial implementation of this instruction, I forgot to account for
variable indices. This patch fixes PR30189 and should probably be merged into
3.9.1 (I'll open a bug according to the new instructions).
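
As a hypothetical illustration (not taken from the patch), the crashing case
involves an insertion index that is a variable rather than a constant:

  define <4 x i32> @insert(<4 x i32> %v, i32 %val, i32 %idx) {
    ; %idx is not a compile-time constant, so the Power9 lowering must
    ; handle a variable vector index here
    %r = insertelement <4 x i32> %v, i32 %val, i32 %idx
    ret <4 x i32> %r
  }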

llvm-svn: 281479
2016-09-14 14:19:09 +00:00
Andrea Di Biagio e8e1af3649 [InstCombine] Merged two test files and regenerated checks using update_test_checks.py. NFC.
llvm-svn: 281478
2016-09-14 14:18:21 +00:00
Simon Pilgrim ba325e3a73 [X86][SSE] Don't blend vector shifts with MOVSS/MOVSD directly, lower from generic shuffle
Shuffle lowering will correctly lower to MOVSS/MOVSD/PBLEND, improving commutation opportunities
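
As a hedged illustration (not from the commit), a shuffle that takes the low
element from one vector and the rest from another is exactly the MOVSS pattern,
so routing the blend through generic shuffle lowering lets the backend pick it:

  define <4 x float> @movss(<4 x float> %a, <4 x float> %b) {
    ; element 0 comes from %b, elements 1-3 from %a; lowers to MOVSS
    %r = shufflevector <4 x float> %a, <4 x float> %b, <4 x i32> <i32 4, i32 1, i32 2, i32 3>
    ret <4 x float> %r
  }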

llvm-svn: 281471
2016-09-14 14:08:18 +00:00
James Molloy 9790d8f81d Revert "[Thumb] Teach ISel how to lower compares of AND bitmasks efficiently"
This reverts commit r281323. It caused chromium test failures and a selfhost failure.

llvm-svn: 281451
2016-09-14 09:45:28 +00:00
Tim Northover 1c7825fd79 GlobalISel: mark pointer stores as legal on AArch64.
llvm-svn: 281448
2016-09-14 08:28:54 +00:00
Sjoerd Meijer 724023a1ec This reapplies r281304. The issue was that I had forgotten
to copy the new isAdd field in the tablegen data structure.

llvm-svn: 281447
2016-09-14 08:20:03 +00:00
Elena Demikhovsky 0569d9d588 AVX-512: Fixed a bug in kortest.z intrinsic
Lowering was wrong: the X86ISD::SETCC node should return the i8 type.

llvm-svn: 281446
2016-09-14 08:06:54 +00:00
Igor Breger 74813fc19c [AVX512BW] Change truncStore action (v16i16->v16i8). It can be legal only with AVX512VL.
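
A hypothetical IR sketch of the truncating-store pattern in question; without
AVX512VL this v16i16->v16i8 truncstore cannot be selected directly:

  define void @truncstore(<16 x i16> %x, <16 x i8>* %p) {
    ; the trunc+store pair combines into a v16i16->v16i8 truncating store
    %t = trunc <16 x i16> %x to <16 x i8>
    store <16 x i8> %t, <16 x i8>* %p
    ret void
  }
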
Differential Revision: http://reviews.llvm.org/D24547

llvm-svn: 281445
2016-09-14 08:04:28 +00:00
Craig Topper 4e2d5a43cf [X86] Remove the VCVTSI2SD32 with rounding intrinsic. It's not used by clang and not needed since 32-bit integer to double is always exact.
llvm-svn: 281442
2016-09-14 06:27:46 +00:00
Wei Mi 24662395df Create a getelementptr instead of sub expr for ValueOffsetPair if the
value is a pointer.

This patch is to fix PR30213. When expanding an expr based on ValueOffsetPair,
if the value is of pointer type, we can only create a getelementptr, not a
sub expr.
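
A minimal sketch, assuming an i8* base value and an i64 offset (names
hypothetical), of the form the expander now produces:

  define i8* @expand(i8* %base, i64 %off) {
    ; instead of round-tripping through integers (ptrtoint/add/inttoptr),
    ; the expander emits pointer arithmetic directly:
    %gep = getelementptr i8, i8* %base, i64 %off
    ret i8* %gep
  }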

Differential Revision: https://reviews.llvm.org/D24088

llvm-svn: 281439
2016-09-14 04:39:50 +00:00
Kostya Serebryany da718e55cf [sanitizer-coverage] add yet another flavour of coverage instrumentation: trace-pc-guard. The intent is to eventually replace all of {bool coverage, 8bit-counters, trace-pc} with just this one. LLVM part
llvm-svn: 281431
2016-09-14 01:39:35 +00:00
Akira Hatanaka dea090e6b2 [ObjCARC] Traverse chain downwards to replace uses of argument passed to
ObjC library call with call return.

ARC contraction tries to replace uses of an argument passed to an
objective-c library call with the call return value. For example, in the
following IR, it replaces uses of argument %9 and uses of the values
discovered traversing the chain upwards (%7 and %8) with the call return
%10, if they are dominated by the call to @objc_autoreleaseReturnValue.
This transformation enables code-gen to tail-call the call to
@objc_autoreleaseReturnValue, which is necessary to enable auto release
return value optimization.

%7 = tail call i8* @objc_loadWeakRetained(i8** %6)
%8 = bitcast i8* %7 to %0*
%9 = bitcast %0* %8 to i8*
%10 = tail call i8* @objc_autoreleaseReturnValue(i8* %9)
ret %0* %8

Since r276727, llvm started removing redundant bitcasts and as a result
started feeding the following IR to ARC contraction:

%7 = tail call i8* @objc_loadWeakRetained(i8** %6)
%8 = bitcast i8* %7 to %0*
%9 = tail call i8* @objc_autoreleaseReturnValue(i8* %7)
ret %0* %8

ARC contraction no longer does the optimization described above since it
only traverses the chain upwards and fails to recognize that the
function return can be replaced by the call return. This commit changes
ARC contraction to traverse the chain downwards too and replace uses of
bitcasts with the call return.

rdar://problem/28011339

Differential Revision: https://reviews.llvm.org/D24523

llvm-svn: 281419
2016-09-13 23:43:11 +00:00
Vedant Kumar 84a280ad6a [llvm-cov] Just emit the version number in the index file
Having the version information in every view is distracting, especially
if there are several sub-views.

llvm-svn: 281414
2016-09-13 23:00:13 +00:00
Ahmed Bougacha 7398b178f1 [AArch64] Simplify patchpoint/stackmap size test (r281301). NFC.
llvm-svn: 281407
2016-09-13 22:16:40 +00:00
Pawel Bylica c397f0b272 [CodeGen] Fix invalid shift in mul expansion
Summary: When expanding mul in type legalization, make sure the type of the shift amount can actually fit the value. This fixes PR30354: https://llvm.org/bugs/show_bug.cgi?id=30354.
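
A hedged sketch of the trigger (hypothetical example): expanding a wide
multiply emits shifts by half the bit width, e.g. 64 for i128, and the chosen
shift-amount type must be wide enough to represent that constant:

  define i128 @widemul(i128 %a, i128 %b) {
    ; on a target where i128 is illegal, type legalization expands this
    ; multiply using shifts by 64; the shift-amount type must be able to
    ; hold the value 64, otherwise the shift is invalid
    %r = mul i128 %a, %b
    ret i128 %r
  }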

Reviewers: hfinkel, majnemer, RKSimon

Subscribers: RKSimon, llvm-commits

Differential Revision: https://reviews.llvm.org/D24478

llvm-svn: 281403
2016-09-13 21:55:41 +00:00
Michael Kuperstein 59f8305305 [DAG] Allow build-to-shuffle combine to combine builds from two wide vectors.
This allows us to, in some cases, create a vector_shuffle out of a build_vector, when
the inputs to the build are extract_elements from two different vectors, at least one
of which is wider than the output. (E.g. a <8 x i16> being constructed out of
elements from a <16 x i16> and a <8 x i16>).
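
At the IR level, the DAG's build_vector corresponds roughly to an
insertelement chain; a hypothetical sketch of the case described above:

  define <8 x i16> @build(<16 x i16> %wide, <8 x i16> %narrow) {
    %e0 = extractelement <16 x i16> %wide, i32 0
    %e1 = extractelement <8 x i16> %narrow, i32 1
    %b0 = insertelement <8 x i16> undef, i16 %e0, i32 0
    %b1 = insertelement <8 x i16> %b0, i16 %e1, i32 1
    ; ... remaining lanes built the same way; the combine can now turn the
    ; whole build_vector into a vector_shuffle
    ret <8 x i16> %b1
  }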

Differential Revision: https://reviews.llvm.org/D24491

llvm-svn: 281402
2016-09-13 21:53:32 +00:00
Kevin Enderby f76b56cb9c Next set of additional error checks for invalid Mach-O files for bad load commands
that use the Mach::dyld_info_command type for the load commands that are
currently used in the MachOObjectFile constructor.

This contains the missing checks for LC_DYLD_INFO and
LC_DYLD_INFO_ONLY load commands and the fields for the
Mach::dyld_info_command type.

llvm-svn: 281400
2016-09-13 21:42:28 +00:00
Krzysztof Parzyszek d19d0507c8 [Hexagon] Better handling of HVX vector lowering
- Expand SELECT_CC and BR_CC for vector types.
- Implement TLI::isShuffleMaskLegal.

llvm-svn: 281397
2016-09-13 21:16:07 +00:00
Sanjay Patel ab40f9d0ef add tests for PR28672
I'm not sure if we actually want to transform all of these in InstCombine yet, 
so I'm not labeling these with FIXME.  

llvm-svn: 281386
2016-09-13 20:36:13 +00:00
Matt Arsenault e2e6cfee61 Reapply "InstCombine: Reduce trunc (shl x, K) width."
This reapplies r272987 with a fix for infinitely looping
when the truncated value is another shift of a constant.
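
A hedged before/after sketch with hypothetical constants:

  define i32 @narrow(i64 %x) {
    ; before: shift in the wide type, then truncate
    %s = shl i64 %x, 8
    %t = trunc i64 %s to i32
    ; after the combine this becomes, in effect:
    ;   %t2 = trunc i64 %x to i32
    ;   %r  = shl i32 %t2, 8
    ret i32 %t
  }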

llvm-svn: 281379
2016-09-13 19:43:57 +00:00
Matthias Braun 1af1414d4d AArch64: Cleanup tailcall CC check, enable swiftcc.
Cleanup/change the code that checks for possible tailcall conventions to
look the same as the one in the X86 target. This makes the distinction
between calling conventions that can guarantee tailcalls and the ones
that may tailcall more obvious.

- Add Swift to the mayTailCall list
- PreserveMost seemed to be incorrectly part of the guaranteed tail call
  list; move it to the mayTailCall list.

llvm-svn: 281376
2016-09-13 19:27:38 +00:00
Matt Arsenault 25dba30017 AMDGPU: Support commuting a FrameIndex operand
llvm-svn: 281369
2016-09-13 19:03:12 +00:00
Davide Italiano 39ccd24126 [LTO] Don't pass SF_Undefined symbols to the IRmover.
This should fix PR30363.

llvm-svn: 281366
2016-09-13 18:45:13 +00:00
Simon Pilgrim 4a8eba3e96 [DAGCombiner] Use APInt directly in (shl (zext (srl x, C)), C) combine range test
To avoid an assertion, we must ensure that the inner shift constant is within range before calling ConstantSDNode::getZExtValue(). We already know that the outer shift constant is in range.

Followup to D23007

llvm-svn: 281362
2016-09-13 18:33:29 +00:00
Nico Weber e204c48d16 Revert r281336 (and r281337), it caused PR30372.
llvm-svn: 281361
2016-09-13 18:17:00 +00:00
Douglas Katzman 8ea02f4e1c [Myriad]: set LeonCASA processor feature
llvm-svn: 281359
2016-09-13 17:51:41 +00:00
Simon Pilgrim 1f9ddf48a6 [X86][SSE] Added AVX512F and additional vector truncate test cases
trunc16i16_16i8 is currently commented out due to PR25684

llvm-svn: 281356
2016-09-13 17:34:56 +00:00
Simon Pilgrim bd28a85d14 [DAGCombiner] Use APInt directly in (shl (ext (shl x, c1)), c2) combine
Fix failure to detect out-of-range shift constants leading to an assert in ConstantSDNode::getZExtValue()

Followup to D23007

llvm-svn: 281354
2016-09-13 17:15:28 +00:00
Andrea Di Biagio 7277afeec1 [ConstantFold] Improve the bitcast folding logic for constant vectors.
The constant folder didn't always know how to fold bitcasts of constant integer
vectors. In particular, it was unable to handle the case where a constant vector
had some undef elements, and the resulting (i.e. bitcasted) vector type had more
elements than the original vector type.

Example:
  %cast = bitcast <2 x i64><i64 undef, i64 2> to <4 x i32>

On a little endian target, %cast could have been folded to:
  <4 x i32><i32 undef, i32 undef, i32 2, i32 0>

This patch improves the folding logic by teaching it how to correctly propagate
undef elements in the folded vector.

Differential Revision: https://reviews.llvm.org/D24301

llvm-svn: 281343
2016-09-13 14:50:47 +00:00
Simon Pilgrim 2e99e0ff63 [X86] Regenerated shift combine tests.
Added x86_64 tests

llvm-svn: 281341
2016-09-13 14:41:39 +00:00
Krzysztof Parzyszek b558ae2125 [Hexagon] Clear the flow queue after visiting a single instruction
llvm-svn: 281339
2016-09-13 14:36:55 +00:00
Nirav Dave 9fa8af2180 Defer asm errors to post-statement failure
Recommitting after fixing AsmParser Initialization.

Allow errors to be deferred and emitted as part of clean up to simplify
and shorten Assembly parser code. This will allow error messages to be
emitted in helper functions and be modified by the caller which has
better context.

As part of this many minor cleanups to the Parser:

* Unify parser cleanup on error
* Add a workaround for incorrect return values in ParseDirective instances
* Tighten checks on error-signifying return values for parser functions
  and fix in-tree TargetParsers to be more consistent with the changes.
* Fix AArch64 test cases checking for spurious error messages that are
  now fixed.

These changes should be backwards compatible with current Target Parsers
so long as error statuses are correctly returned in the appropriate
functions.

Reviewers: rnk, majnemer

Subscribers: aemerson, jyknight, llvm-commits

Differential Revision: https://reviews.llvm.org/D24047

llvm-svn: 281336
2016-09-13 13:55:06 +00:00
Andrea Di Biagio 3647a96a44 [InstSimplify] Add tests to show missed bitcast folding opportunities.
InstSimplify doesn't always know how to fold a bitcast of a constant vector.
In particular, the logic in InstSimplify doesn't know how to handle the case
where the input constant vector contains some undef elements, and the
number of elements is smaller than the number of elements of the bitcast
vector type.
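
A hypothetical example of the missed case, mirroring the constant-folding
example above: the input vector has fewer elements than the destination type
and contains undef:

  %cast = bitcast <2 x i64><i64 undef, i64 2> to <4 x i32>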

llvm-svn: 281332
2016-09-13 13:17:42 +00:00
James Molloy 043d613791 Revert "[ARM] Promote small global constants to constant pools"
This reverts commit r281314. Speculatively revert as it's possible this caused linker errors: http://lab.llvm.org:8011/builders/clang-native-arm-lnt/builds/19656

llvm-svn: 281327
2016-09-13 12:45:51 +00:00
Sam Parker 64781ed4bb Remove InstCombine test file
My previous commit should have removed a test file, but I missed it.

llvm-svn: 281326
2016-09-13 12:33:06 +00:00
Pablo Barrio bb6984d401 [ARM] Add ".code 32" to functions in the ARM instruction set
Before, only Thumb functions were marked as ".code 16". These
".code x" directives are effective until the next directive of its
kind is encountered. Therefore, in code with interleaved ARM and
Thumb functions, it was possible to declare a function as ARM and
end up with a Thumb function after assembly. A test has been added.

An existing test has also been fixed to take this change into
account.

Reviewers: aschwaighofer, t.p.northover, jmolloy, rengolin

Subscribers: aemerson, rengolin, llvm-commits

Differential Revision: https://reviews.llvm.org/D24337

llvm-svn: 281324
2016-09-13 12:18:15 +00:00
James Molloy d246c598de [Thumb] Teach ISel how to lower compares of AND bitmasks efficiently
For the common pattern (CMPZ (AND x, #bitmask), #0), we can do some more efficient instruction selection if the bitmask is one consecutive sequence of set bits (32 - clz(bm) - ctz(bm) == popcount(bm)).

1) If the bitmask touches the LSB, then we can remove all the upper bits and set the flags by doing one LSLS.
2) If the bitmask touches the MSB, then we can remove all the lower bits and set the flags with one LSRS.
3) If the bitmask has popcount == 1 (only one set bit), we can shift that bit into the sign bit with one LSLS and change the condition query from NE/EQ to MI/PL (we could also implement this by shifting into the carry bit and branching on BCC/BCS).
4) Otherwise, we can emit a sequence of LSLS+LSRS to remove the upper and lower zero bits of the mask.

1-3 require only one 16-bit instruction and can elide the CMP. 4 requires two 16-bit instructions but can elide the CMP and doesn't require materializing a complex immediate, so is also a win.
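
A hedged IR sketch of case 1 (hypothetical constant): a compare of an AND with
a mask that touches the LSB can be selected as a single flag-setting shift:

  define i1 @lowmask(i32 %x) {
    ; bitmask 0xFF touches the LSB: selectable as LSLS by 24, which moves
    ; the 8 masked bits into the top of the register and sets Z, eliding
    ; the CMP entirely
    %a = and i32 %x, 255
    %c = icmp eq i32 %a, 0
    ret i1 %c
  }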

llvm-svn: 281323
2016-09-13 12:12:32 +00:00
Sam Parker 214f7bf5cc Enable simplify libcalls for ARM PCS
Teach SimplifyLibcalls that it can treat functions annotated with
apcs, aapcs or aapcs_vfp like normal C functions if they only take
and return integer or pointer values, and the target is not iOS.
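
A hedged sketch (hypothetical declarations): a libcall annotated with an ARM
calling convention, taking and returning only integers and pointers, can now
be simplified like its default-CC counterpart, e.g. the usual printf-to-puts
fold:

  @str = private unnamed_addr constant [7 x i8] c"hello\0A\00"

  declare arm_aapcscc i32 @printf(i8*, ...)

  define void @f() {
    ; SimplifyLibcalls may now fold this into a call to puts even though
    ; printf is declared with the aapcs calling convention
    %s = getelementptr inbounds [7 x i8], [7 x i8]* @str, i32 0, i32 0
    %call = call arm_aapcscc i32 (i8*, ...) @printf(i8* %s)
    ret void
  }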

Differential Revision: https://reviews.llvm.org/D24453

llvm-svn: 281322
2016-09-13 12:10:14 +00:00
Ying Yi 544b1df64f [llvm-cov] - Included footer "Generated by llvm-cov -- llvm version <version number>" in the coverage report.
The llvm-cov version information will be useful to the user when comparing the code coverage across different versions of llvm-cov. This patch provides the llvm-cov version information in the generated coverage report.

Differential Revision: https://reviews.llvm.org/D24457

llvm-svn: 281321
2016-09-13 11:28:31 +00:00
Peter Smith 85bbda191d [ARM] Support ldr.w in pseudo instruction ldr rd,=immediate
The changes made in r269352, r269353 and r269354 to support the
transformation of ldr rd,=immediate to mov introduced a regression
from 3.8: ldr.w rd, =immediate was no longer supported.

This change puts support back in for ldr.w by means of a t2InstAlias for
the .w form. The .w is ignored in ARM state and propagated to the ldr in
Thumb2.

llvm-svn: 281319
2016-09-13 11:15:51 +00:00
James Molloy 3e4bc66134 [ARM] Promote small global constants to constant pools
If a constant is unnamed_addr and is only used within one function, we can save
on the code size and runtime cost of an indirection by changing the global's storage
to inside the constant pool. For example, instead of:

      ldr r0, .CPI0
      bl printf
      bx lr
    .CPI0: &format_string
    format_string: .asciz "hello, world!\n"

We can emit:

      adr r0, .CPI0
      bl printf
      bx lr
    .CPI0: .asciz "hello, world!\n"

This can cause significant code size savings when many small strings are used in one
function (4 bytes per string).

llvm-svn: 281314
2016-09-13 10:28:11 +00:00
Eric Liu 882dc72b38 [WebAssembly] Trying to fix broken tests in CodeGen/WebAssembly caused by r281285.
Reviewers: bkramer, ddcc, dschuff, sunfish

Subscribers: jfb, llvm-commits, dschuff

Differential Revision: https://reviews.llvm.org/D24497

llvm-svn: 281312
2016-09-13 10:05:44 +00:00
Ayman Musa 0c2da88f82 Remove MVT::i1 xor instruction before SELECT. (Performance improvement).
Differential Revision: https://reviews.llvm.org/D23764

llvm-svn: 281308
2016-09-13 09:12:45 +00:00
Sjoerd Meijer 520a18df9c Revert r281304 as it is causing buildbot failures in
the hexagon hwloop regression tests. These tests pass locally; I will be
investigating where these differences come from.

llvm-svn: 281306
2016-09-13 08:51:59 +00:00
Sjoerd Meijer 05453991fe This adds a new field isAdd to MCInstrDesc. The ARM and Hexagon instruction
descriptions now tag add instructions, and the Hexagon backend is using this to
identify loop induction statements.

Patch by Sam Parker and Sjoerd Meijer.

Differential Revision: https://reviews.llvm.org/D23601

llvm-svn: 281304
2016-09-13 08:08:06 +00:00
Elena Demikhovsky b906df9fe5 AVX-512: Fix for PR28175 - Scalar code optimization.
Optimized (truncate (assertzext x) to i1) and anyext i1 to i8/16/32.
Optimizing these patterns is one more step towards i1 optimization on AVX-512.
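
A hedged sketch of IR that can give rise to these patterns (hypothetical):
truncating to i1 and immediately extending, which in the DAG may appear as
(truncate (assertzext x) to i1) followed by an any-extend:

  define i8 @extend(i32 %x) {
    %b = trunc i32 %x to i1   ; may become (truncate (assertzext x) to i1)
    %e = zext i1 %b to i8     ; anyext/zext i1 -> i8
    ret i8 %e
  }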

Differential Revision: https://reviews.llvm.org/D24456

llvm-svn: 281302
2016-09-13 07:57:00 +00:00
Diana Picus 4b97288184 [AArch64] Support stackmap/patchpoint in getInstSizeInBytes
We currently return 4 for stackmaps and patchpoints, which is very optimistic
and can in rare cases cause the branch relaxation pass to fail to relax certain
branches.

This patch causes getInstSizeInBytes to return a pessimistic estimate of the
size as the number of bytes requested in the stackmap/patchpoint. In the future,
we could provide a more accurate estimate by sharing some of the logic in
AArch64::LowerSTACKMAP/PATCHPOINT.

Fixes part of https://llvm.org/bugs/show_bug.cgi?id=28750

Differential Revision: https://reviews.llvm.org/D24073

llvm-svn: 281301
2016-09-13 07:45:17 +00:00