Commit Graph

81571 Commits

Author SHA1 Message Date
Matthias Braun 84e289702a Revert "ARMLoadStoreOpt: Merge subs/adds into LDRD/STRD; Factor out common code"
This reverts commit r241928. This caused http://llvm.org/PR24190

llvm-svn: 242734
2015-07-20 23:17:16 +00:00
Matthias Braun 22f3960759 Revert "ARM: Use SpecificBumpPtrAllocator to fix leak introduced in r241920"
This reverts commit r241951. It caused http://llvm.org/PR24190

llvm-svn: 242733
2015-07-20 23:17:14 +00:00
Matthias Braun c8b67e656b AArch64: Restrict macroop fusion heuristics to cyclone.
Even though this is just some hinting for the scheduler, it doesn't make
sense to do that unless you know the target can perform the fusion.

llvm-svn: 242732
2015-07-20 23:11:42 +00:00
JF Bastien e4d22d59d1 Targets: commonize some stack realignment code
This patch does the following:
* Fix FIXME on `needsStackRealignment`: it is now shared between multiple targets, implemented in `TargetRegisterInfo`, and isn't `virtual` anymore. This will break out-of-tree targets, silently if they used `virtual` and with a build error if they used `override`.
* Factor out `canRealignStack` as a `virtual` function on `TargetRegisterInfo`, which by default only looks for the `no-realign-stack` function attribute (a rough sketch of the combined logic follows below).

Multiple targets duplicated the same `needsStackRealignment` code:
 - Aarch64.
 - ARM.
 - Mips almost: had extra `DEBUG` diagnostic, which the default implementation now has.
 - PowerPC.
 - WebAssembly.
 - x86 almost: has an extra `-force-align-stack` option, which the default implementation now has.

The default implementation of `needsStackRealignment` used to just return `false`. My current patch changes this to use the shared behavior described above. This affects:
 - AMDGPU
 - BPF
 - CppBackend
 - MSP430
 - NVPTX
 - Sparc
 - SystemZ
 - XCore
 - Out-of-tree targets
This is a breaking change! `make check` passes.

The only implementation of the `virtual` function (besides the slight difference in x86) was Hexagon (which did `MF.getFrameInfo()->getMaxAlignment() > 8`), and potentially some out-of-tree targets. Hexagon now uses the default implementation.

`needsStackRealignment` was being overridden in `<Target>GenRegisterInfo.inc` to return `false`, as the default also did. That was odd and is now gone.
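
As a rough, hedged sketch of the shared policy described above (plain parameters; the names are illustrative, not the actual TargetRegisterInfo interface):

  // Default behavior: realignment is allowed unless the function opts out
  // via the `no-realign-stack` attribute.
  static bool canRealignStack(bool HasNoRealignStackAttr) {
    return !HasNoRealignStackAttr;
  }

  // Realign when explicitly forced or when some frame object needs more
  // alignment than the target stack guarantees, but only if realignment is
  // actually possible.
  static bool needsStackRealignment(bool ForceAlignStack, unsigned MaxFrameAlign,
                                    unsigned TargetStackAlign,
                                    bool HasNoRealignStackAttr) {
    bool Requested = ForceAlignStack || MaxFrameAlign > TargetStackAlign;
    return Requested && canRealignStack(HasNoRealignStackAttr);
  }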

Reviewers: sunfish

Subscribers: aemerson, llvm-commits, jfb

Differential Revision: http://reviews.llvm.org/D11160

llvm-svn: 242727
2015-07-20 22:51:32 +00:00
Reid Kleckner 87d03450a5 Don't try to instrument allocas used by outlined SEH funclets
Summary:
Arguments to llvm.localescape must be static allocas. They must be at
some statically known offset from the frame or stack pointer so that
other functions can access them with localrecover.

If we ever want to instrument these, we can use more indirection to
recover the addresses of these local variables. We can do it during
clang irgen or with the asan module pass.

Reviewers: eugenis

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11307

llvm-svn: 242726
2015-07-20 22:49:44 +00:00
Matthias Braun e536f4f681 AArch64: Add additional Cyclone macroop fusion opportunities
Related to rdar://19205407

Differential Revision: http://reviews.llvm.org/D10746

llvm-svn: 242724
2015-07-20 22:34:47 +00:00
Matthias Braun 2bd6dd8d54 MachineScheduler: Restrict macroop fusion to data-dependent instructions.
Before creating a schedule edge to encourage MacroOpFusion check that:
- The predecessor actually writes a register that the branch reads.
- The predecessor has no successors in the ScheduleDAG so we can
  schedule it in front of the branch.

This avoids skewing the scheduling heuristic in cases where macroop
fusion cannot happen.

Differential Revision: http://reviews.llvm.org/D10745

llvm-svn: 242723
2015-07-20 22:34:44 +00:00
Geoff Berry e41c2df0ef Fix comment typo (test commit). NFC
llvm-svn: 242719
2015-07-20 22:03:52 +00:00
Quentin Colombet 71a71485f4 [ARM] Refactor the prologue/epilogue emission to be more robust.
This is the first step toward supporting shrink-wrapping for this target.

The changes could be summarized by these items:
- Expand the tail-call return as part of the expand pseudo pass.
- Get rid of the assumptions that the epilogue is the exit block:
  * Do not assume which registers are free in the epilogue. (This indirectly
    improves the lowering of the code for segmented stacks; see the test
    cases.)
  * Take into account that the basic block can be empty.

Related to <rdar://problem/20821730>

llvm-svn: 242714
2015-07-20 21:42:14 +00:00
Jingyue Wu 48a9bdc6aa [NVPTX] make load on global readonly memory to use ldg
Summary:
As described in [1], ld.global.nc may be used by nvcc to load memory when
__restrict__ is used and the compiler can detect that the read-only data cache
is safe to use.

This patch checks whether ldg is safe to use and uses it to replace
ld.global when possible. This change improves performance by 18-29% on the
affected kernels (ratt*_kernel and rwdot*_kernel) in the S3D benchmark
from SHOC [2].

Patch by Xuetian Weng.

[1] http://docs.nvidia.com/cuda/kepler-tuning-guide/#read-only-data-cache
[2] https://github.com/vetter/shoc

Test Plan: test/CodeGen/NVPTX/load-with-non-coherent-cache.ll

Reviewers: jholewinski, jingyue

Subscribers: jholewinski, llvm-commits

Differential Revision: http://reviews.llvm.org/D11314

llvm-svn: 242713
2015-07-20 21:28:54 +00:00
Krzysztof Parzyszek 921722049d [Hexagon] Generate MUX from conditional transfers when dot-new not possible
llvm-svn: 242711
2015-07-20 21:23:25 +00:00
Alex Lorenz ab98049947 MIR Serialization: Initial serialization of machine constant pools.
This commit implements the initial serialization of machine constant pools and
the constant pool index machine operands. The constant pool is serialized using
a YAML sequence of YAML mappings that represent the constant values.
The target-specific constant pool items aren't serialized by this commit.

Reviewers: Duncan P. N. Exon Smith
llvm-svn: 242707
2015-07-20 20:51:18 +00:00
Sanjoy Das 93d608c3c3 [ImplicitNullChecks] Work with implicit defs.
Summary:
This change generalizes the implicit null checks pass to work with
instructions that don't have any explicit register defs.  This lets us
use X86's `cmp` against memory as faulting load instructions.

Reviewers: reames, JosephTremoulet

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11286

llvm-svn: 242703
2015-07-20 20:31:39 +00:00
Alex Lorenz b29554dab9 MIR Parser: Add support for quoted named global value operands.
This commit extends the machine instruction lexer and implements support for
the quoted global value tokens. With this change the syntax for the global value
identifier tokens becomes identical to the syntax for the global identifier
tokens in LLVM's assembly language.

Reviewers: Duncan P. N. Exon Smith
llvm-svn: 242702
2015-07-20 20:31:01 +00:00
Chad Rosier 3da0ea7f5d [AArch64] Change EON pattern to match more often.
Phabricator: http://reviews.llvm.org/D11359
Patch by Geoff Berry <gberry@codeaurora.org>

llvm-svn: 242694
2015-07-20 18:42:27 +00:00
Tom Stellard 70580f83cc AMDGPU/SI: Add VI patterns to select FLAT instructions for global memory ops
Summary:
The MUBUF addr64 bit has been removed on VI, so we must use FLAT
instructions when the pointer is stored in VGPRs.

Reviewers: arsenm

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11067

llvm-svn: 242673
2015-07-20 14:28:41 +00:00
Vasileios Kalintiris 974d409259 [mips] Added support for the ERETNC instruction.
Summary: This required adding the instruction predicate HasMips32r5.

Patch by Scott Egerton.

Reviewers: dsanders, vkalintiris

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11136

llvm-svn: 242666
2015-07-20 12:28:56 +00:00
Arnold Schwaighofer 764d6de823 Revert "MergeFuncs: Transfer the function parameter attributes to the call site"
It is okay to not transfer parameter attributes.

This reverts commit r242558.

llvm-svn: 242646
2015-07-19 19:30:43 +00:00
Yaron Keren c66c06b899 Narrow Callee scope, suggestion from David Blaikie.
llvm-svn: 242644
2015-07-19 15:48:07 +00:00
Simon Pilgrim e2c244f3b4 [X86][SSE] Reordered cast vectorization costs. NFCI.
Reordered the data tables at the top and placed the lookups after. The first stage in the yak shaving necessary to get more accurate costs for a variety of targets given the recent improvements to SINT_TO_FP/UINT_TO_FP/SIGN_EXTEND vector lowering.

llvm-svn: 242643
2015-07-19 15:36:12 +00:00
Yaron Keren 611f614ee1 De-duplicate CS.getCalledFunction() expression.
Not sure if the optimizer will save the call as getCalledFunction()
is not a trivial access function but the code is clearer this way.

llvm-svn: 242641
2015-07-19 11:52:02 +00:00
Simon Pilgrim 4ef0576c40 [DAGCombiner] Fixed minor typo that was missed in D9097.
We don't bitcast the UNDEFs - that is done in visitVECTOR_SHUFFLE, and the getValueType should come from the operand's SDValue not the SDNode.

llvm-svn: 242640
2015-07-19 11:31:40 +00:00
Michael Kuperstein 69e40a4c85 [X86] Add support for tbyte memory operand size for Intel-syntax x86 assembly
Differential Revision: http://reviews.llvm.org/D11257
Patch by: marina.yatsina@intel.com

llvm-svn: 242639
2015-07-19 11:03:08 +00:00
Simon Pilgrim ba51d116c4 Remove TargetInstrInfo::canFoldMemoryOperand
canFoldMemoryOperand is not actually used anywhere in the codebase - all existing users instead call foldMemoryOperand directly when they wish to fold and can correctly deduce what they need from the return value. 

This patch removes the canFoldMemoryOperand base function and the target implementations; only x86 had a real (bit-rotted) implementation, although AMDGPU had a preparatory stub that had never needed to be completed.

Differential Revision: http://reviews.llvm.org/D11331

llvm-svn: 242638
2015-07-19 10:50:53 +00:00
Elena Demikhovsky 17b906058e AVX-512: Floating point conversions for SKX - DAG Lowering.
SKX supports conversion for all FP types. Integer types include doublewords and quadwords.
I added "Legal" status for these nodes and a bunch of tests.
I added "NoVLX" for AVX DAG selection to force VLX instructions selection when VLX is supported.

Differential Revision: http://reviews.llvm.org/D11255

llvm-svn: 242637
2015-07-19 10:17:33 +00:00
Simon Pilgrim 3aca32ea4a Use SDValue bool check. NFCI.
llvm-svn: 242636
2015-07-19 09:56:36 +00:00
Simon Pilgrim 59764dccfb [X86][SSE] Updated SHL/LSHR i64 vectorization costs.
This was missed in D8416.

llvm-svn: 242621
2015-07-18 20:06:30 +00:00
Benjamin Kramer c9436ad659 [AggressiveAntiDepBreaker] Use range loops for multimap access.
No functionality change intended.

llvm-svn: 242620
2015-07-18 20:05:10 +00:00
Yaron Keren 3d49f6df94 Rangify for loops in GlobalDCE, NFC.
llvm-svn: 242619
2015-07-18 19:57:34 +00:00
Benjamin Kramer 9a5d788948 [Hexagon] Use composition instead of inheritance from STL types
The standard containers are not designed to be inherited from, as
illustrated by the MSVC hacks for NodeOrdering. No functional change
intended.

llvm-svn: 242616
2015-07-18 17:43:23 +00:00
Chandler Carruth 9f2bf1aff5 [PM/AA] Remove the addEscapingUse update API that won't be easy to
directly model in the new PM.

This also was an incredibly brittle and expensive update API that was
never fully utilized by all the passes that claimed to preserve AA, nor
could it reasonably have been extended to all of them. Any number of
places add uses of values. If we ever wanted to reliably instrument
this, we would want a callback hook much like we have with ValueHandles,
but doing this for every use addition seems *extremely* expensive in
terms of compile time.

The only user of this update mechanism is GlobalsModRef. The idea of
using this to keep it up to date doesn't really work anyways as its
analysis requires a symmetric analysis of two different memory
locations. It would be very hard to make updates be sufficiently
rigorous to *guarantee* symmetric analysis in this way, and it pretty
certainly isn't true today.

However, folks have been using GMR with this update for a long time and
seem to not be hitting the issues. The reported issue that the update
hook fixes isn't even a problem any more as other changes to
GetUnderlyingObject worked around it, and that issue stemmed from *many*
years ago. As a consequence, a prior patch provided a flag to control
the unsafe behavior of GMR, and this patch removes the update mechanism
that has questionable compile-time tradeoffs and is causing problems
with moving to the new pass manager. Note the lack of test updates --
not one test in tree actually requires this update, even for a contrived
case.

All of this was extensively discussed on the dev list, this patch will
just enact what that discussion decides on. I'm sending it for review in
part to show what I'm planning, and in part to show the *amazing* amount
of work this avoids. Every call to the AA here is something like three
to six indirect function calls, which in the non-LTO pipeline never do
any work! =[

Differential Revision: http://reviews.llvm.org/D11214

llvm-svn: 242605
2015-07-18 03:26:46 +00:00
Kostya Serebryany 86e4a3e0a3 [libFuzzer] require the files and directories passed to the fuzzer to exist
llvm-svn: 242596
2015-07-18 00:03:37 +00:00
Evgeniy Stepanov 9cb08f823f [asan] Fix shadow mapping on Android/AArch64.
Instrumentation and the runtime library were in disagreement about
ASan shadow offset on Android/AArch64.

This fixes a large number of existing tests on Android/AArch64.

llvm-svn: 242595
2015-07-17 23:51:18 +00:00
Matthias Braun 9e85980658 ARM: Enable MachineScheduler and disable PostRAScheduler for swift.
Reapply r242500 now that the swift schedmodel includes LDRLIT.

This is mostly done to disable the PostRAScheduler, which optimizes for
instruction latencies and isn't a good fit for out-of-order
architectures. This also allows us to leave out the itinerary table in
swift in favor of the SchedModel ones.

This change leads to performance improvements/regressions of as much as
10% in some benchmarks; in fact, we lose 0.4% performance over the
llvm-testsuite for reasons that appear to be unknown or out of the
compiler's control. rdar://20803802 documents the investigation of
these effects.

While it is probably a good idea to perform the same switch for the
other ARM out-of-order CPUs, I limited this change to swift as I cannot
perform the benchmark verification on the other CPUs.

Differential Revision: http://reviews.llvm.org/D10513

llvm-svn: 242588
2015-07-17 23:18:30 +00:00
Matthias Braun 141d1c9d8f ARM: Add scheduling information for LDRLIT instructions to swift scheduling model
These pseudo instructions are only lowered after register allocation and
are therefore still present when the machine scheduler runs.
Add a run: line to a testcase that uses the uncommon flags necessary to
actually produce a LDRLIT instruction on swift.

llvm-svn: 242587
2015-07-17 23:18:26 +00:00
Quentin Colombet 11922946fe [RAGreedy] Add an experimental deferred spilling feature.
The idea of deferred spilling is to delay the insertion of spill code until the
very end of the allocation. A "candidate to spill" variable might not be required
to be spilled because of other evictions that happened after this decision was
taken. The spirit is similar to the optimistic coloring strategy implemented in
Preston and Briggs' graph coloring algorithm.

For now, this feature is highly experimental. Although correct, it would require
much more modification to properly model the effect of spilling.

Anyway, this early patch helps prototyping this feature.

Note: The test case cannot unfortunately be reduced and is probably fragile.
llvm-svn: 242585
2015-07-17 23:04:06 +00:00
Alex Lorenz 484903ecd2 MIR Parser: Allow the dollar characters in all of the identifier tokens.
This commit modifies the machine instruction lexer so that it now accepts the
'$' characters in identifier tokens.

This change makes the syntax for unquoted global value tokens consistent with
the syntax for the global identifier tokens in LLVM's assembly language.

llvm-svn: 242584
2015-07-17 22:48:04 +00:00
Alex Lorenz d225595dcf AsmParser: Add a function to parse a standalone constant value.
This commit extends the interface provided by the AsmParser library by adding a
function that allows the user to parse a standalone constant value.

This change is useful for MIR serialization, as it will allow the MIR Parser to
parse the constant values in a machine constant pool.

Reviewers: Duncan P. N. Exon Smith

Differential Revision: http://reviews.llvm.org/D10280

llvm-svn: 242579
2015-07-17 22:07:03 +00:00
Kuba Brecka 7f54753180 [asan] Add a comment explaining why non-instrumented allocas are moved.
Addition to r242510.

llvm-svn: 242561
2015-07-17 19:20:21 +00:00
Arnold Schwaighofer 690cd87dcd MergeFuncs: Transfer the function parameter attributes to the call site
rdar://21516488

llvm-svn: 242558
2015-07-17 18:59:08 +00:00
Adam Nemet 5a6d5bc17b Revert "ARM: Enable MachineScheduler and disable PostRAScheduler for swift."
This reverts commit r242500.

It broke some internal tests and Matthias asked me to revert it while he
is investigating.

llvm-svn: 242553
2015-07-17 18:14:19 +00:00
Matthias Braun 244a6773c7 Use llvm_unreachable() instead of report_fatal_error() if the machine model is incomplete
This error is for developers only so it makes sense to abort and get a
backtrace.

llvm-svn: 242551
2015-07-17 17:50:11 +00:00
James Molloy a6702e2f14 [ARM] Use [SU]ABSDIFF nodes instead of intrinsics for VABD/VABA
No functional change, but it preps codegen for the future when SABSDIFF
will start getting generated in anger.

llvm-svn: 242546
2015-07-17 17:10:55 +00:00
James Molloy faf4e3c33b [AArch64] Use [SU]ABSDIFF nodes instead of intrinsics for ABD/ABA
No functional change, but it preps codegen for the future when SABSDIFF
will start getting generated in anger.

llvm-svn: 242545
2015-07-17 17:10:45 +00:00
Eli Bendersky b09cfb51ca Use inbounds GEPs for memcpy and memset lowering
Follow-up on discussion in http://reviews.llvm.org/D11220

llvm-svn: 242542
2015-07-17 16:42:33 +00:00
Rafael Espindola 2bec8500ef Add support for producing thin archives in llvm-lib.
I will send an entry in docs/CommandGuide for review today.

llvm-svn: 242533
2015-07-17 16:01:11 +00:00
Alexandros Lamprineas 69718d242a Edited the CPUNames table of TargetParser
- Changed the default FPU of cortex-m4.
- Removed "cortex-m4f" entry. Currently not supported.

Change-Id: I73121e358aa9e7ba68eb001c2143df390ff2352a
Phabricator: http://reviews.llvm.org/D11100
llvm-svn: 242528
2015-07-17 15:49:32 +00:00
John Brawn 9ca9ca2805 Make global aliases have symbol size equal to their type
This is mainly for the benefit of GlobalMerge, so that an alias into a
MergedGlobals variable has the same size as the original non-merged
variable.

Differential Revision: http://reviews.llvm.org/D10837

llvm-svn: 242520
2015-07-17 12:12:03 +00:00
Chandler Carruth f55803f761 [PM/AA] Disable the core unsafe aspect of GlobalsModRef in the face of
basic changes to the IR such as folding pointers through PHIs, Selects,
integer casts, store/load pairs, or outlining.

This leaves the feature available behind a flag. This flag's default
could be flipped if necessary, but the real-world performance impact of
this particular feature of GMR may not be sufficiently significant for
many folks to want to run the risk.

Currently, the risk here is somewhat mitigated by half-hearted attempts
to update GlobalsModRef when the rest of the optimizer changes
something. However, I am currently trying to remove that update
mechanism as it makes migrating the AA infrastructure to a form that can
be readily shared between new and old pass managers very challenging.
Without this update mechanism, it is possible that this still unlikely
failure mode will start to trip people, and so I wanted to try to
proactively avoid that.

There is a lengthy discussion on the mailing list about why the core
approach here is flawed, and likely would need to look totally different
to be both reasonably effective and resilient to basic IR changes
occurring. This patch is essentially the first of two which will enact
the result of that discussion. The next patch will remove the current
update mechanism.

Thanks to lots of folks that helped look at this from different angles.
Special thanks to Michael Zolotukhin for doing some very preliminary
benchmarking of LTO without GlobalsModRef to get a rough idea of the
impact we could be facing here. So far, it looks very small, but there
are some concerns lingering from other benchmarking. The default here
may get flipped if performance results end up pointing at this as a more
significant issue.

Also thanks to Pete and Gerolf for reviewing!

Differential Revision: http://reviews.llvm.org/D11213

llvm-svn: 242512
2015-07-17 06:58:24 +00:00
Kuba Brecka 37a5ffaca0 [asan] Fix invalid debug info for promotable allocas
Since r230724 ("Skip promotable allocas to improve performance at -O0"), there is a regression in the generated debug info for those non-instrumented variables. When inspecting such a variable's value in LLDB, you often get garbage instead of the actual value. ASan instrumentation is inserted before the creation of the non-instrumented alloca. The only allocas that are considered standard stack variables are the ones declared in the first basic-block, but the initial instrumentation setup in the function breaks that invariant.

This patch makes sure uninstrumented allocas stay in the first BB.

Differential Revision: http://reviews.llvm.org/D11179

llvm-svn: 242510
2015-07-17 06:29:57 +00:00
Tim Northover a5740e0874 AArch64: add comment missed out from earlier patch.
Helps explain some of the background behind this bit of code.

llvm-svn: 242503
2015-07-17 03:31:50 +00:00
Matthias Braun 2d8315f806 ARM: Enable MachineScheduler and disable PostRAScheduler for swift.
This is mostly done to disable the PostRAScheduler, which optimizes for
instruction latencies and isn't a good fit for out-of-order
architectures. This also allows us to leave out the itinerary table in
swift in favor of the SchedModel ones.

This change leads to performance improvements/regressions of as much as
10% in some benchmarks; in fact, we lose 0.4% performance over the
llvm-testsuite for reasons that appear to be unknown or out of the
compiler's control. rdar://20803802 documents the investigation of
these effects.

While it is probably a good idea to perform the same switch for the
other ARM out-of-order CPUs, I limited this change to swift as I cannot
perform the benchmark verification on the other CPUs.

Differential Revision: http://reviews.llvm.org/D10513

llvm-svn: 242500
2015-07-17 01:44:31 +00:00
Matt Arsenault cabe02e141 Only do fmul (fadd x, x), c combine if the fadd only has one use
This was increasing the instruction count if the fadd has multiple uses.

llvm-svn: 242498
2015-07-17 01:14:35 +00:00
Rafael Espindola 494a381ede Use small encodings for constants when possible.
llvm-svn: 242493
2015-07-17 00:57:52 +00:00
Alex Lorenz e5a44660dd MIR Serialization: Serialize the frame setup machine instruction flag.
llvm-svn: 242491
2015-07-17 00:24:15 +00:00
Alex Lorenz 7feaf7c60b MIR Serialization: Serialize the frame index machine operands.
Reviewers: Duncan P. N. Exon Smith
llvm-svn: 242487
2015-07-16 23:37:45 +00:00
Cong Hou 9b4f6b2650 Add new constructors for LoopInfo/DominatorTree/BFI/BPI
These new constructors make it more natural to construct an object for a function. For example, previously, to build a LoopInfo for a function, we needed four statements:

DominatorTree DT;
LoopInfo LI;
DT.recalculate(F);
LI.analyze(DT);

Now we only need one statement:

LoopInfo LI(DominatorTree(F));

http://reviews.llvm.org/D11274

llvm-svn: 242486
2015-07-16 23:23:35 +00:00
Matthias Braun da3d0d7342 Arm: Don't define a label twice with two setjmps in a function.
Constructing a name based on the function name didn't give us a unique
symbol if we had more than one setjmp in a function. Using
MCContext::createTempSymbol() always gives us a unique name.

Differential Revision: http://reviews.llvm.org/D9314

llvm-svn: 242482
2015-07-16 22:34:20 +00:00
Matthias Braun 3cd00c1739 Fix __builtin_setjmp in combination with sjlj exception handling.
llvm.eh.sjlj.setjmp was used as part of the SjLj exception handling
style but is also used in clang to implement __builtin_setjmp.  The ARM
backend needs to output additional dispatch tables for the SjLj
exception handling style, these tables however can't be emitted if
llvm.eh.sjlj.setjmp is simply used for __builtin_setjmp and no actual
landing pad blocks exist.

To solve this issue a new llvm.eh.sjlj.setup_dispatch intrinsic is
introduced which is used instead of llvm.eh.sjlj.setjmp in the SjLj
exception handling lowering, so we can differentiate between the case
where we actually need to setup a dispatch table and the case where we
just need the __builtin_setjmp semantic.
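
For reference, a minimal example (hedged, not taken from the patch) of the plain __builtin_setjmp case that needs no dispatch table; clang lowers __builtin_setjmp to the setjmp intrinsic discussed here:

  #include <cstdio>

  // __builtin_setjmp requires a buffer of five pointers.
  static void *Buf[5];

  static void bail() { __builtin_longjmp(Buf, 1); }

  int main() {
    if (__builtin_setjmp(Buf) == 0) {
      bail();      // does not return
      return 1;
    }
    std::puts("returned via __builtin_longjmp");
    return 0;
  }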

Differential Revision: http://reviews.llvm.org/D9313

llvm-svn: 242481
2015-07-16 22:34:16 +00:00
Mehdi Amini 731ad628e0 Fix ffiInvoke() use of DataLayout, broken in 242414
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 242456
2015-07-16 22:23:09 +00:00
Simon Pilgrim 9152528917 Fix spelling. NFCI.
llvm-svn: 242448
2015-07-16 21:44:53 +00:00
Tim Northover ca0ffc3561 AArch64: make inexact signalling on round Darwin-specific
C11 leaves it implementation-defined whether round-to-integer operations set the
inexact flag. Darwin does expect it to be set, but this seems to
be against the intent of the IEEE document and slower to implement anyway. So
it should be opt-in.

llvm-svn: 242446
2015-07-16 21:30:21 +00:00
Bill Schmidt 54cced54a6 [PowerPC] v4i32 is a VSRCRegClass
I was looking at some vector code generation and kept seeing
unnecessary vector copies into the Altivec half of the VSX registers.
I discovered that we overlooked v4i32 when adding the register classes
for VSX; we only added v4f32 and v2f64.  This means that anything that
canonicalizes into v4i32 (which is a LOT of stuff) ends up being
forced into VRRC on its way to VSRC.

The fix is one line.  The rest of the patch is fixing up some test
cases whose code generation has changed as a result.

This seems like it would be a good candidate for backport to 3.7.

llvm-svn: 242442
2015-07-16 21:14:07 +00:00
Eli Bendersky f871e09d2d Streamline the coding style in NVPTXLowerAggrCopies
Make the style consistent with LLVM style throughout and clang-format.

llvm-svn: 242439
2015-07-16 20:42:38 +00:00
Jingyue Wu e7981cee24 [NVPTX] enable SpeculativeExecution in NVPTX
Summary:
SpeculativeExecution enables a series of straight-line optimizations (such
as SLSR and NaryReassociate) on conditional code. For example,

  if (...)
    ... b * s ...
  if (...)
    ... (b + 1) * s ...

speculative execution can hoist b * s and (b + 1) * s from then-blocks,
so that we have

  ... b * s ...
  if (...)
    ...
  ... (b + 1) * s ...
  if (...)
    ...

Then, SLSR can rewrite (b + 1) * s to (b * s + s) because after
speculative execution b * s dominates (b + 1) * s.

The performance impact of this change is significant. It speeds up the
benchmarks running EigenFloatContractionKernelInternal16x16
(ba68f42fa6/unsupported/Eigen/CXX11/src/Tensor/TensorContractionCuda.h?at=default#cl-526)
by roughly 2%. Some internal benchmarks that have the above code pattern
are improved by up to 40%. No significant slowdowns are observed on
Eigen CUDA microbenchmarks.

Reviewers: jholewinski, broune, eliben

Subscribers: llvm-commits, jholewinski

Differential Revision: http://reviews.llvm.org/D11201

llvm-svn: 242437
2015-07-16 20:13:48 +00:00
Matthias Braun af7d7709d6 AArch64: Implement conditional compare sequence matching.
This is a new iteration of the reverted r238793 /
http://reviews.llvm.org/D8232, which wrongly assumed that any and/or
trees can be represented by conditional compare sequences; however, there
are some restrictions to that. This version fixes this and adds comments
that explain exactly which types of and/or trees can actually be
implemented as conditional compare sequences.
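
As a hedged illustration (not code from this patch), an and-tree of comparisons of the kind that such matching can lower to a cmp/ccmp/cset sequence instead of separate compares joined by branches:

  // Two comparisons joined by && form a simple and-tree; on AArch64 this is
  // a candidate for a conditional compare sequence when lowered branchlessly.
  bool inRange(int Lo, int Hi, int X) {
    return Lo <= X && X <= Hi;
  }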

Related to http://llvm.org/PR20927, rdar://18326194

Differential Revision: http://reviews.llvm.org/D10579

llvm-svn: 242436
2015-07-16 20:02:37 +00:00
Tom Stellard 78655fcfdc AMDGPU/SI: Negative offsets aren't allowed in MUBUF's vaddr operand
Reviewers: arsenm

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11226

llvm-svn: 242434
2015-07-16 19:40:09 +00:00
Tom Stellard c98ee20328 AMDGPU/SI: Use AssertZext node to mask high bit for scratch offsets
Summary:
We can safely assume that the high bit of scratch offsets will never
be set, because this would require at least 128 GB of GPU memory.

Reviewers: arsenm

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11225

llvm-svn: 242433
2015-07-16 19:40:07 +00:00
Matthias Braun 0d4cebd434 LiveInterval: Document and enforce rules about empty subranges.
Empty subranges are not allowed in a LiveInterval and must be removed
instead: Check this in the verifiers, put a reminder for this in the
comment of the shrinkToUses variant for a single lane and make it
automatic for the shrinkToUses variant for a LiveInterval.

llvm-svn: 242431
2015-07-16 18:55:35 +00:00
Matthias Braun 7f5ae19e80 Do not duplicate method name in comment, remove duplicate comment
llvm-svn: 242430
2015-07-16 18:55:32 +00:00
Pete Cooper f4ce569deb Revert "Add missing load/store flags to thumb2 instructions."
This reverts commit r242300.

This is causing buildbot failures which we are investigating.
I'll reapply once we know what's going on, but for now I want to
get the bots green.

llvm-svn: 242428
2015-07-16 18:38:13 +00:00
Cong Hou d2c1d91ed0 Rename LoopInfo::Analyze() to LoopInfo::analyze() and turn its parameter type to const&.
The benefit of turning the parameter of LoopInfo::analyze() to const& is that it can now accept an rvalue.

http://reviews.llvm.org/D11250

llvm-svn: 242426
2015-07-16 18:23:57 +00:00
Peter Collingbourne 9b0fe61047 Internalize: internalize comdat members as a group, and drop comdat on such members.
Internalizing an individual comdat group member without also internalizing
the other members of the comdat can break comdat semantics. For example,
if a module contains a reference to an internalized comdat member, and the
linker chooses a comdat group from a different object file, this will break
the reference to the internalized member.

This change causes the internalizer to only internalize comdat members if all
other members of the comdat are not externally visible. Once a comdat group
has been fully internalized, there is no need to apply comdat rules to its
members; later optimization passes (e.g. globaldce) can legally drop individual
members of the comdat. So we drop the comdat attribute from all comdat members.

Differential Revision: http://reviews.llvm.org/D10679

llvm-svn: 242423
2015-07-16 17:42:21 +00:00
Benjamin Kramer 5dfd8b6da0 [NVPTX] Don't leak dead instructions after unlinking them from the BasicBlock
llvm-svn: 242417
2015-07-16 16:51:48 +00:00
Mehdi Amini a3fcefb66e Make ExecutionEngine owning a DataLayout
Summary:
This change is part of a series of commits dedicated to having a single
DataLayout during compilation by always using the one owned by the
module.

The ExecutionEngine will act as an exception and will be unsafe to
reuse across contexts. We don't enforce this rule, but undefined
behavior can occur if the user tries to do it.

Reviewers: lhames

Subscribers: echristo, llvm-commits, rafael, yaron.keren

Differential Revision: http://reviews.llvm.org/D11110

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 242414
2015-07-16 16:34:23 +00:00
Eli Bendersky f14af16219 Correct lowering of memmove in NVPTX
This fixes https://llvm.org/bugs/show_bug.cgi?id=24056

Also a bit of refactoring along the way.

Differential Revision: http://reviews.llvm.org/D11220

llvm-svn: 242413
2015-07-16 16:27:19 +00:00
Tom Stellard ff5efb8c03 AMDGPU/R600: Remove unused variable
This fixes a warning introduced by r242410.

llvm-svn: 242412
2015-07-16 16:13:34 +00:00
Tom Stellard 1d46fb2d2f AMDGPU/R600: Replace llvm_unreachable() call with LLVMContext::emitError()
Summary:
This fixes an issue on MIPS where the infinite-loop-evergreen.ll test
was failing to terminate.

Fixes PR24147.

Reviewers: arsenm, dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11260

llvm-svn: 242410
2015-07-16 15:38:29 +00:00
James Molloy 7395a8182c [Codegen] Add intrinsics 'absdiff' and corresponding SDNodes for absolute difference operation
This adds new intrinsics "*absdiff" for absolute difference ops to facilitate efficient code generation for the "sum of absolute differences" operation.
The patch also contains the introduction of the corresponding SDNodes and basic legalization support. Sanity of the generated code is tested on X86.

This is 1st of the three patches.

Patch by Shahid Asghar-ahmad!

llvm-svn: 242409
2015-07-16 15:22:46 +00:00
Alexandros Lamprineas 0e20b8dc93 - TargetParser does not handle armv7l in parseArchProfile().
- ARM V7L matches the 'A' profile of ARM architecture.

Change-Id: I80c8b973f5c93fb040c177a227644d56b1b83ea8
Phabricator: http://reviews.llvm.org/D11261
llvm-svn: 242406
2015-07-16 14:54:41 +00:00
Silviu Baranga 0e5804a6af Fix memcheck interval ends for pointers with negative strides
Summary:
The checking pointer grouping algorithm assumes that the
starts/ends of the pointers are well formed (start <= end).

The runtime memory checking algorithm also assumes this by doing:

 start0 < end1 && start1 < end0

to detect conflicts. This check only works if start0 <= end0 and
start1 <= end1.

This change correctly orders the interval ends by either checking
the stride (if it is constant) or by using min/max SCEV expressions.
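
A self-contained sketch of the idea, with illustrative names only: the conflict test is sound only when each interval satisfies start <= end, so negative-stride accesses must have their endpoints ordered first.

  #include <algorithm>
  #include <cstdint>

  struct Interval { int64_t Start, End; };

  // Build a well-formed [Start, End) interval even when Stride is negative.
  Interval accessedRange(int64_t Base, int64_t Stride, int64_t TripCount,
                         int64_t AccessSize) {
    int64_t Last = Base + Stride * (TripCount - 1);
    return {std::min(Base, Last), std::max(Base, Last) + AccessSize};
  }

  // The runtime memory check from the summary; it requires well-formed intervals.
  bool mayConflict(const Interval &A, const Interval &B) {
    return A.Start < B.End && B.Start < A.End;
  }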

Reviewers: anemet, rengolin

Subscribers: rengolin, llvm-commits

Differential Revision: http://reviews.llvm.org/D11149

llvm-svn: 242400
2015-07-16 14:02:58 +00:00
Michael Kuperstein dadb847412 [X86] Reapply r240257 : "Allow more call sequences to use push instructions for argument passing"
This allows more call sequences to use pushes instead of movs when optimizing for size.
In particular, calling conventions that pass some parameters in registers (e.g. thiscall) are now supported.

This should no longer cause miscompiles, now that a bug in emitPrologue was fixed in r242395.

llvm-svn: 242398
2015-07-16 13:54:14 +00:00
Michael Kuperstein e1ea4e7d15 [X86] Fix emitPrologue() to make less assumptions about pushes
When X86FrameLowering::emitPrologue() looks for where to insert the %esp subtraction
to allocate stack space for local allocations, it assumes that any sequence of push
instructions that starts at function entry consists purely of spills of callee-save
registers.
This may be false, since from some point forward, the pushes may be pushing arguments
to a subsequent function call.

This caused a miscompile that was exposed by r240257, and is not easily testable
since r240257 was reverted. A test will be committed separately after r240257 is
reapplied.

llvm-svn: 242395
2015-07-16 12:27:59 +00:00
Michael Kuperstein 8c3b4f2e78 Revert "Make ExecutionEngine owning a DataLayout"
Reverting to fix buildbot breakage.

This reverts commit r242387.

llvm-svn: 242394
2015-07-16 12:20:31 +00:00
Benjamin Kramer 7d54fab8f0 [Mips] Make helper function static, NFC.
llvm-svn: 242393
2015-07-16 11:12:05 +00:00
Tobias Grosser 39a7bd182e Add PM extension point EP_VectorizerStart
This extension point allows passes to be executed right before the vectorizer
and other highly target specific optimizations are run.

llvm-svn: 242389
2015-07-16 08:20:37 +00:00
Mehdi Amini e029eae634 Add missing break in switch case in R600ISelLowering
Summary: Caught by Coverity.

Reviewers: arsenm

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11120

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 242388
2015-07-16 06:23:12 +00:00
Mehdi Amini f2643f41b4 Make ExecutionEngine owning a DataLayout
Summary:
This change is part of a series of commits dedicated to having a single
DataLayout during compilation by always using the one owned by the
module.

The ExecutionEngine will act as an exception and will be unsafe to
reuse across contexts. We don't enforce this rule, but undefined
behavior can occur if the user tries to do it.

Reviewers: lhames

Subscribers: echristo, llvm-commits, rafael, yaron.keren

Differential Revision: http://reviews.llvm.org/D11110

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 242387
2015-07-16 06:17:14 +00:00
Mehdi Amini bd7287ebe5 Move most users of TargetMachine::getDataLayout to the Module one
Summary:
This change is part of a series of commits dedicated to having a single
DataLayout during compilation by always using the one owned by the
module.

This patch is quite boring overall, except for some ugliness in
ASMPrinter, which has a getDataLayout function but has some clients
that use it without a Module (llvm-dsymutil, llvm-dwarfdump), so
some methods now take a DataLayout as a parameter.

Reviewers: echristo

Subscribers: yaron.keren, rafael, llvm-commits, jholewinski

Differential Revision: http://reviews.llvm.org/D11090

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 242386
2015-07-16 06:11:10 +00:00
Mehdi Amini 5c0fa58e91 Remove DataLayout from TargetLoweringObjectFile, redirect to Module
Summary:
This change is part of a series of commits dedicated to having a single
DataLayout during compilation by always using the one owned by the
module.

Reviewers: echristo

Subscribers: yaron.keren, rafael, llvm-commits, jholewinski

Differential Revision: http://reviews.llvm.org/D11079

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 242385
2015-07-16 06:04:17 +00:00
Mehdi Amini 1660cab341 Redirect pointerSize query to the TargetMachine in ASMPrinter
Summary:
Because llvm-dsymutil is using ASMPrinter without any MachineFunction
or Module available.

This change is part of a series of commits dedicated to having a single
DataLayout during compilation by always using the one owned by the
module.

Reviewers: echristo

Subscribers: yaron.keren, rafael, llvm-commits

Differential Revision: http://reviews.llvm.org/D11078

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 242384
2015-07-16 05:59:25 +00:00
Adam Nemet 041e6deb2c [LAA] Split out a helper to check the pointer partitions, NFC
This is made a static public member function to allow the transition of
this logic from LAA to LoopDistribution.  (Technically, it could be an
implementation-local static function but then it would not be accessible
from LoopDistribution.)

llvm-svn: 242376
2015-07-16 02:48:05 +00:00
Reid Kleckner 938bd6fc96 Revert "[X86] Allow more call sequences to use push instructions for argument passing"
It miscompiles some code and a reduced test case has been sent to the
author.

This reverts commit r240257.

llvm-svn: 242373
2015-07-16 01:30:00 +00:00
Reid Kleckner ef9828fb47 Revert "Update LLVM bindings after r239940. ..."
Revert the changes to the C API LLVMBuildLandingPad that were part of
the personality function move. We now set the personality on the parent
function when the C API attempts to construct a landingpad with a
personality.

This reverts commit r240010.

llvm-svn: 242372
2015-07-16 01:16:39 +00:00
Akira Hatanaka 024d91a00b [ARM] Define a subtarget feature that is used to avoid using movt/movw
pairs for 32-bit immediates.

This change is needed to avoid emitting movt/movw pairs when doing LTO
and do so on a per-function basis.

Out-of-tree projects currently using cl::opt option -arm-use-movt=0 or
false to avoid emitting movt/movw pairs should make changes to add
subtarget feature "+no-movt" (see the changes made to clang in r242368).

rdar://problem/21529937

Differential Revision: http://reviews.llvm.org/D11026

llvm-svn: 242369
2015-07-16 00:58:23 +00:00
Rafael Espindola 06d6d1905e Fix handling of relative paths in thin archives.
The member has to end up with a path relative to the archive.

llvm-svn: 242362
2015-07-16 00:14:49 +00:00
Pete Cooper e3c8161736 Clear kill flags in ARMLoadStoreOptimizer.
The pass here was clearing kill flags on instructions which had
their sources killed in the instruction being combined.  But
given that the new instruction is inserted after the existing ones,
any existing instructions with kill flags will lead to the verifier
complaining that we are reading an undefined physreg.

For example, what we had prior to this optimization is
	t2STRi12 %R1, %SP, 12
	t2STRi12 %R1<kill>, %SP, 16
	t2STRi12 %R0<kill>, %SP, 8

and prior to this fix that would generate
	t2STRi12 %R1<kill>, %SP, 16
	t2STRDi8 %R0<kill>, %R1, %SP, 8

This is clearly incorrect as it didn't clear the kill flag on R1
used with offset 16 because there was no kill flag on the instruction
with offset 12.

After this change we clear the kill flag on the offset 16 instruction
because we know it will be used afterwards in the new instruction.

I haven't provided a test case.  I have a small test, but even it is
very sensitive to register allocation order which isn't ideal.

llvm-svn: 242359
2015-07-16 00:09:18 +00:00
Alex Lorenz 31d706836c MIR Serialization: Serialize the jump table index operands.
Reviewers: Duncan P. N. Exon Smith
llvm-svn: 242358
2015-07-15 23:38:35 +00:00
Alex Lorenz 6799e9b3e0 MIR Serialization: Serialize the jump table info.
The jump table info is serialized using a YAML mapping that contains its kind
and a YAML sequence of jump table entries. A jump table entry is a YAML mapping
that has an ID and an inline YAML sequence of machine basic block references.

The testcase 'CodeGen/MIR/X86/jump-table-info.mir' doesn't have any instructions
because one of them contains a jump table index operand. The jump table index
operands will be serialized in a follow up patch, and the appropriate
instructions will be added to this testcase.

Reviewers: Duncan P. N. Exon Smith
llvm-svn: 242357
2015-07-15 23:31:07 +00:00
Rafael Espindola 57c0525d2c llvm-ar: Don't write the directory in the string table.
We were already doing the right thing for short file names, but not long
ones.

llvm-svn: 242354
2015-07-15 23:15:33 +00:00
Cong Hou ab23bfbc0e Create a wrapper pass for BranchProbabilityInfo.
This new wrapper pass is useful when we want to do branch probability analysis conditionally (e.g. only in PGO mode) but don't want to add one more pass dependence.

http://reviews.llvm.org/D11241

llvm-svn: 242349
2015-07-15 22:48:29 +00:00
David Majnemer 3f6994b3c7 Silence GCC -Wparentheses warning
llvm-svn: 242348
2015-07-15 22:48:26 +00:00
Rafael Espindola 0a74a60bc4 For new archive member we only need to store the full path.
We were storing both the path and the file name, which was redundant
and easy to get confused by.

llvm-svn: 242347
2015-07-15 22:46:53 +00:00
Chen Li 3f5ed1566e [LoopUnswitch] Add an else clause to IsTrivialUnswitchCondition() when checking HeaderTerm instruction type
Summary:
This is a trivial code change with no functional effect.

When LoopUnswitch determines a trivial unswitch condition, it checks whether the loop header's terminator instruction is a branch instruction or switch instruction, since a trivial unswitch condition can only apply to these two instruction types. The current code does not fail the check directly on other instruction types, but checks the nullness of the LoopExitBB variable instead. The added else clause makes the check fail immediately on other instruction types and makes the code more obvious.

Reviewers: reames

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11239

llvm-svn: 242345
2015-07-15 22:41:13 +00:00
Matthias Braun 5d1f12d1f5 TargetRegisterInfo: Provide a way to check assigned registers in getRegAllocationHints()
Pass a const reference to LiveRegMatrix to getRegAllocationHints()
because some targets can provide better hints if they can test whether a
physreg has been used for register allocation yet.

llvm-svn: 242340
2015-07-15 22:16:00 +00:00
Alex Lorenz 37643a04a4 MIR Serialization: Serialize references from the stack objects to named allocas.
This commit serializes the references to the named LLVM alloca instructions from
the stack objects in the machine frame info. This commit adds a field 'Name' to
the struct 'yaml::MachineStackObject'. This new field is used to store the name
of the alloca instruction when the alloca is present and when it has a name.

llvm-svn: 242339
2015-07-15 22:14:49 +00:00
Paul Robinson b9de106d04 Add a "debugger tuning" concept that allows us to fine-tune how we
emit debug info, according to the preferences of the different
debuggers used on various targets.
Darwin and FreeBSD default to tuning for LLDB; PS4 defaults to tuning for
the SCE (Sony Computer Entertainment) debugger.  All others default to GDB.

Differential Revision: http://reviews.llvm.org/D8506

llvm-svn: 242338
2015-07-15 22:04:54 +00:00
JF Bastien 7289f73b8d Fix mergefunc infinite loop
Self-referential constants containing references to a merged function
no longer cause the MergeFunctions pass to infinite loop. Also adds a
reproduction IR which would otherwise fail, which was isolated from a similar
issue in Chromium.

Author: jrkoenig
Reviewers: nlewycky, jfb
Subscribers: llvm-commits, nlewycky, jfb

Differential Revision: http://reviews.llvm.org/D11208

llvm-svn: 242337
2015-07-15 21:51:33 +00:00
Rafael Espindola f662e00a68 Simplify a few uses of remove_filename by using parent_path instead.
llvm-svn: 242334
2015-07-15 21:24:07 +00:00
Rafael Espindola 449208d95b Handle the error of trying to convert a regular archive to a thin one.
While at it, test that we can add to a thin archive.

llvm-svn: 242330
2015-07-15 20:45:56 +00:00
Cong Hou 5e67b66640 Rename doFunction() in BFI to calculate() and change its parameters from pointers to references.
http://reviews.llvm.org/D11196

llvm-svn: 242322
2015-07-15 19:58:26 +00:00
Tobias Edler von Koch d8ce16b1e6 Analyze recursive PHI nodes in BasicAA
Summary:
This patch allows phi nodes like
  %x = phi [ %incptr, ... ] [ %var, ... ]
  %incptr = getelementptr %x, 1
to be analyzed by BasicAliasAnalysis.

In aliasPHI, we can detect incoming values that are recursive GEPs with a
constant offset. Instead of trying to analyze a recursive GEP (and failing), 
we now ignore it and instead set the size of the memory referenced by
the PHINode to UnknownSize. This represents all the possible memory
locations the pointer represented by the PHINode could be advanced to
by the GEP.

For now, this new behavior is turned off by default to allow debugging of
performance degradations seen with SPEC/x86 and Hexagon benchmarks.
The flag -basicaa-recphi turns it on.


Reviewers: hfinkel, sanjoy

Subscribers: tobiasvk_caf, sanjoy, llvm-commits

Differential Revision: http://reviews.llvm.org/D10368

llvm-svn: 242320
2015-07-15 19:32:22 +00:00
Bruno Cardoso Lopes 9b39693a5d Revert "Refactor optimizeUncoalescable logic"
Likely broke compilation on ARM:

http://lab.llvm.org:8011/builders/clang-native-arm-lnt/builds/13054

This reverts commit 0b7824464fbe3d3f386e2d4aef6a431422709e53.

llvm-svn: 242311
2015-07-15 18:10:46 +00:00
Bruno Cardoso Lopes ad61f34293 Revert "Look through PHIs to find additional register sources"
Likely broke compilation on ARM:

http://lab.llvm.org:8011/builders/clang-native-arm-lnt/builds/13054

This reverts commit 131ce4a838c081516cbfed039fc986b33e3979d6.

llvm-svn: 242310
2015-07-15 18:10:35 +00:00
Cong Hou 0881fc1198 Test commit.
This is a test commit (one blank line deleted).

llvm-svn: 242308
2015-07-15 17:58:15 +00:00
Adrian Prantl ee5feafc0f Debug Info: Add basic support for external types references.
This is a necessary prerequisite for bootstrapping the emission
of debug info inside modules.

- Adds a FlagExternalTypeRef to DICompositeType.
  External types must have a unique identifier.
- External type references are emitted using a forward declaration
  with a DW_AT_signature([DW_FORM_ref_sig8]) based on the UID.

http://reviews.llvm.org/D9612

llvm-svn: 242302
2015-07-15 17:01:41 +00:00
Pete Cooper 21ca199cea Add missing load/store flags to thumb2 instructions.
These were the cause of a verifier error when building 7zip with
-verify-machineinstrs.  Running 'make check' with the verifier
triggered the same error on the test here, so I've updated the test
to run the verifier on one of its runs instead of adding a new one.

While looking at this code, I noticed a stale comment saying that these
instructions were only used for disassembly.  This probably used to
be the case, but they are now used in the 'ARM load / store optimization pass' too.

llvm-svn: 242300
2015-07-15 16:36:38 +00:00
Bill Schmidt 1e77bb12b4 [PPC64LE] Fix vec_sld semantics for little endian
The vec_sld interface provides access to the vsldoi instruction.
Unlike most of the vec_* interfaces, we do not attempt to change the
generated code for vec_sld based on the endian mode.  It is too
difficult to correctly infer the desired semantics because of
different element types, and the corrected instruction sequence is
expensive, involving loading a permute control vector and performing a
generalized permute.

For GCC, this was implemented as "Don't touch the vec_sld"
implementation.  When it came time for the LLVM implementation, I did
the same thing.  However, this was hasty and incorrect.  In LLVM's
version of altivec.h, vec_sld was previously defined in terms of the
vec_perm interface.  Because vec_perm semantics are adjusted for
little endian, this means that leaving vec_sld untouched causes it to
generate something different for LE than for BE.  Not good.

This back-end patch accompanies the changes to altivec.h that change
vec_sld's behavior for little endian.  Those changes mean that we see
slightly different code in the back end when trying to recognize a
VSLDOI instruction in isVSLDOIShuffleMask.  In particular, a
ShuffleKind of 1 (where the two inputs are identical) must now be
treated the same way as a ShuffleKind of 2 (little endian with
different inputs) when little endian mode is in force.  This is
because ShuffleKind of 1 is defined using big-endian numbering.

This has a ripple effect on LowerBUILD_VECTOR, where we create our own
internal VSLDOI instructions.  Because these are a ShuffleKind of 1,
they will now have their shift amounts subtracted from 16 when
recognizing the shuffle mask.  To avoid problems we have to subtract
them from 16 again before creating the VSLDOI instructions.

There are a couple of other uses of BuildVSLDOI, but these do not need
to be modified because the shift amount is 8, which is unchanged when
subtracted from 16.

llvm-svn: 242296
2015-07-15 15:45:30 +00:00
Bruno Cardoso Lopes fadd4fef2a Look through PHIs to find additional register sources
- Teach the ValueTracker in the PeepholeOptimizer to look through PHI
instructions.
- Add a findNextSourceAndRewritePHI method to look up the multiple sources
returned by the ValueTracker and rewrite PHIs with new sources.

With these changes we can find more register sources and rewrite more
copies to allow coalescing of bitcast instructions. Hence, we eliminate
unnecessary VR64 <-> GR64 copies in x86, but this could be extended to
other archs by marking "isBitcast" on target-specific instructions. The
x86 example follows:

A:
  psllq %mm1, %mm0
  movd  %mm0, %r9
  jmp C

B:
  por %mm1, %mm0
  movd  %mm0, %r9
  jmp C

C:
  movd  %r9, %mm0
  pshufw  $238, %mm0, %mm0

Becomes:

A:
  psllq %mm1, %mm0
  jmp C

B:
  por %mm1, %mm0
  jmp C

C:
  pshufw  $238, %mm0, %mm0

Differential Revision: http://reviews.llvm.org/D11197

rdar://problem/20404526

llvm-svn: 242295
2015-07-15 15:35:23 +00:00
Bruno Cardoso Lopes bd68a09591 Refactor optimizeUncoalescable logic
- Create a new CopyRewriter for Uncoalescable copy-like instructions
- Change the ValueTracker to return a ValueTrackerResult

This makes optimizeUncoalescable look more like optimizeCoalescable and
use the CopyRewriter infrastructure.

This is also the preparation for looking into PHI nodes in the
ValueTracker.

Differential Revision: http://reviews.llvm.org/D11195

llvm-svn: 242294
2015-07-15 15:35:09 +00:00
Benjamin Kramer c11fd3e775 [PPC] Disassemble little endian ppc instructions in the right byte order
PR24122. The test is simply a byte swapped version of ppc64-encoding.txt.

llvm-svn: 242288
2015-07-15 12:56:19 +00:00
Alexandros Lamprineas fcd93d539e -Added API for retrieving the default FPU of a CPU from TargetParser.
-Implemented as a table lookup.

Change-Id: Iaad0eaf4b29b06827e6700269496dc1ba20e9018
Phabricator: http://reviews.llvm.org/D11100
llvm-svn: 242284
2015-07-15 10:46:21 +00:00
Chandler Carruth 6af95d0a08 [PM/AA] Fix *numerous* serious bugs in GlobalsModRef found by
inspection.

While we want to handle calls specially in this code because they should
have been modeled by the call graph analysis that precedes it, we should
*not* be re-implementing the predicates for whether an instruction reads
or writes memory. Those are well defined already. Notably, at least the
following issues seem to be clearly missed before:
- Ordered atomic loads can "write" to memory by causing writes from other
  threads to become visible. Similarly for ordered atomic stores.
- AtomicRMW instructions quite obviously both read and write to memory.
- AtomicCmpXchg instructions also read and write to memory.
- Fences read and write to memory.
- Invokes of intrinsics or memory allocation functions.

I don't have any test cases, and I suspect this has never really come up
in the real world. But there is no reason why it wouldn't, and it makes
the code simpler to do this the right way.
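
As a hedged sketch of the point above (assuming the LLVM headers are available), the existing Instruction predicates already cover these cases, so the pass can simply ask:

  #include "llvm/IR/Instruction.h"
  using namespace llvm;

  // mayReadFromMemory/mayWriteToMemory already account for ordered atomic
  // loads and stores, atomicrmw, cmpxchg, and fences, which ad-hoc checks
  // based on the instruction kind tend to miss.
  static void classifyMemoryEffects(const Instruction &I, bool &Reads,
                                    bool &Writes) {
    Reads = I.mayReadFromMemory();
    Writes = I.mayWriteToMemory();
  }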

While here, I've tried to make the loops significantly simpler as well
and added helpful comments as to what is going on.

llvm-svn: 242281
2015-07-15 08:53:29 +00:00
Alexey Bataev b9288601a3 [SDAG] Optimize unordered comparison in soft-float mode (patch by Anton Nadolskiy)
The current implementation handles unordered comparisons poorly in soft-float mode.
Consider (a ULE b), which is "a <= b or unordered". It is lowered (in general) to (__ledf2(a, b) <= 0 || __unorddf2(a, b) != 0). We can do a better job by lowering it to (__gtdf2(a, b) <= 0).
The same replacement applies to the other unordered comparisons (ult, ugt, uge). In general, we just call the same function as for the ordered case but negate the comparison against zero.
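
A quick host-side check (illustrative only) of the identity this lowering relies on, namely that ULE, i.e. less-or-equal-or-unordered, is exactly the negation of an ordered greater-than:

  #include <cassert>
  #include <cmath>

  // For IEEE doubles, (a ULE b) == !(a > b), which is what lowering ULE to
  // (__gtdf2(a, b) <= 0) depends on.
  static bool ule(double A, double B) { return !(A > B); }

  int main() {
    double NaN = std::nan("");
    assert(ule(1.0, 2.0));                    // ordered, A <= B
    assert(!ule(2.0, 1.0));                   // ordered, A > B
    assert(ule(NaN, 1.0) && ule(1.0, NaN));   // unordered operands satisfy ULE
  }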
Differential Revision: http://reviews.llvm.org/D10804

llvm-svn: 242280
2015-07-15 08:39:35 +00:00
Hal Finkel 5d36b230b5 [PowerPC] Use the MachineCombiner to reassociate fadd/fmul
This is a direct port of the code from the X86 backend (r239486/r240361), which
uses the MachineCombiner to reassociate (floating-point) adds/muls to increase
ILP, to the PowerPC backend. The rationale is the same.

There is a lot of copy-and-paste here between the X86 code and the PowerPC
code, and we should extract at least some of this into CodeGen somewhere.
However, I don't want to do that until this code is enhanced to handle FMAs as
well. After that, we'll be in a better position to extract the common parts.

llvm-svn: 242279
2015-07-15 08:23:05 +00:00
Hal Finkel 673b493e98 [PowerPC] Extend physical register live range in PPCVSXFMAMutate
If the source of the copy that defines the addend is a physical register, then
its existing live range may not extend to the FMA being mutated. Make sure we
extend the live range of the register to meet the FMA because it will become
its operand in this case.

I don't have an independent test case, but it will be exposed by change to be
committed shortly enabling the use of the machine combiner to do fadd/fmul
reassociation, and will be covered by one of the associated regression tests.

llvm-svn: 242278
2015-07-15 08:23:03 +00:00
Hal Finkel e0fa8f2c86 [MachineCombiner] Work with itineraries
MachineCombiner predicated its use of scheduling-based metrics on
hasInstrSchedModel(), but useful conclusions can be drawn from pipeline
itineraries as well. Almost all of the logic (except for resource tracking in
preservesResourceLen) can be used if we have an itinerary, so enable it in that
case as well.

This will be used by the PowerPC backend in an upcoming commit.

llvm-svn: 242277
2015-07-15 08:22:23 +00:00
Petr Pavlu 097adfb98c [AArch64] Fix problems in decoding generic MSR instructions
Bitpatterns rejected by the decoder method of `MSR (immediate)` should be
decoded as the `extended MSR (register)` instruction.

Differential Revision: http://reviews.llvm.org/D7174

llvm-svn: 242276
2015-07-15 08:10:30 +00:00
Chandler Carruth a033bbbe96 [PM/AA] Cleanup some loops to be range-based. NFC.
llvm-svn: 242275
2015-07-15 08:09:23 +00:00
Igor Breger 096e8b0995 AVX: Fix ISA disabling in the AVX512VL case; some instructions should be disabled only if AVX512BW is present.
Tests added.

Differential Revision: http://reviews.llvm.org/D11122

llvm-svn: 242270
2015-07-15 07:08:10 +00:00
Rafael Espindola e649258272 Initial support for writing thin archives.
llvm-svn: 242269
2015-07-15 05:47:46 +00:00
Pete Cooper 6923461a16 Use enum instead of unsigned. NFC.
The unsigned opcode argument here was the result of BinaryOperator->getOpcode().
That returns a BinaryOps enum which is more accurate than passing around an
unsigned.

llvm-svn: 242265
2015-07-15 01:31:26 +00:00
Pete Cooper a8127d8c92 Use cast<> instead of dyn_cast to remove llvm_unreachable. NFC.
This code was checking whether we are an ICmpInst or an FCmpInst and then
hitting llvm_unreachable if we are neither.  We must be one or the other, so use a
cast on the FCmpInst case to ensure that we are that case.  Then we can
avoid having an unreachable but still catch an error if we ever had another
subclass of CmpInst.

llvm-svn: 242264
2015-07-15 01:31:23 +00:00
Pete Cooper 20dc71b1f1 Use another foreach loop. NFC
llvm-svn: 242263
2015-07-15 01:31:20 +00:00
Pete Cooper 6a96c61659 Use getAnyExtOrTrunc helper instead of manually doing ext/trunc check. NFC.
The code here was doing exactly what is already in getAnyExtOrTrunc().
Just use that method instead.

llvm-svn: 242261
2015-07-15 00:43:57 +00:00
Pete Cooper 8acd386969 Use getZExtOrTrunc helper instead of manually doing zext/trunc check. NFC.
The code here was doing exactly what is already in getZExtOrTrunc().
Just use that method instead.

llvm-svn: 242260
2015-07-15 00:43:54 +00:00
Michael Zolotukhin 31b3eaaf28 [LoopUnrolling] Handle cast instructions.
During estimation of unrolling effect we should be able to propagate
constants through casts.

Differential Revision: http://reviews.llvm.org/D10207

llvm-svn: 242257
2015-07-15 00:19:51 +00:00
Pete Cooper e46f7ef385 Change conditional to assert. NFC.
This code was breaking from the case statement if the getStoreSizeInBits()
value was not a multiple of 8.  Given that the implementation returns
getStoreSize() * 8, it can only be a multiple of 8.

llvm-svn: 242255
2015-07-15 00:07:57 +00:00
Pete Cooper 7e747d26c5 Use getStoreSize() instead of getStoreSizeInBits()/8. NFC.
The calls here were both to getStoreSizeInBits() which multiplies by 8.
We then immediately divided by 8.  Calling getStoreSize() returns the
values we need without the extra arithmetic.

llvm-svn: 242254
2015-07-15 00:07:55 +00:00
Rafael Espindola 6fce2e4f26 Use a range loop.
llvm-svn: 242250
2015-07-14 23:51:01 +00:00
Pete Cooper 7e64ef06e6 Use more foreach loops in SelectionDAG. NFC
llvm-svn: 242249
2015-07-14 23:43:29 +00:00
Wei Mi deee61e434 Create a wrapper pass for BlockFrequencyInfo.
This is useful when we want to do block frequency analysis
conditionally (e.g. only in PGO mode) but don't want to add
one more pass dependence.

Patch by congh.
Approved by dexonsmith.
Differential Revision: http://reviews.llvm.org/D11196

llvm-svn: 242248
2015-07-14 23:40:50 +00:00
JF Bastien c8f48c19d3 WebAssembly: fix build breakage.
Summary:
processFunctionBeforeCalleeSavedScan was renamed to determineCalleeSaves and now takes a BitVector parameter as of rL242165, reviewed in http://reviews.llvm.org/D10909

WebAssembly is still marked as experimental and therefore doesn't build by default. It does, however, grep by default! I notice that processFunctionBeforeCalleeSavedScan is still mentioned in a few comments and error messages, which I also fixed.

Reviewers: qcolombet, sunfish

Subscribers: jfb, dsanders, hfinkel, MatzeB, llvm-commits

Differential Revision: http://reviews.llvm.org/D11199

llvm-svn: 242242
2015-07-14 23:06:07 +00:00
Hal Finkel 4012024fea [PowerPC] Support symbolic targets in patchpoints
Follow-up to r235483, with the corresponding support in PPC. We use a regular call
for symbolic targets (because they're much cheaper than indirect calls).

llvm-svn: 242239
2015-07-14 22:53:11 +00:00
David Majnemer 33b6f82e72 [InstCombine] Generalize sub of selects optimization to all BinaryOperators
This exposes further optimization opportunities if the selects are
correlated.
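
A scalar illustration (simplified; not the InstCombine code itself) of the shape of the
fold: a BinaryOperator whose operands are two selects on the same condition can be
rewritten as a single select over the combined arms.

  int sub_of_selects(bool c, int a, int b, int d, int e) {
    // Before: (c ? a : b) - (c ? d : e)
    // After the fold:
    return c ? (a - d) : (b - e);
  }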

llvm-svn: 242235
2015-07-14 22:39:23 +00:00
Adam Nemet 9f7dedc376 [LAA] Introduce RuntimePointerChecking::PointerInfo, NFC
Turn this structure-of-arrays (i.e. the various pointer attributes) into
array-of-structures.

llvm-svn: 242219
2015-07-14 22:32:50 +00:00
Adam Nemet 7cdebac0c8 [LAA] Lift RuntimePointerCheck out of LoopAccessInfo, NFC
I am planning to add more nested classes inside RuntimePointerCheck so
all these triple-nesting would be hard to follow.

Also rename it to RuntimePointerChecking (i.e. append 'ing').

llvm-svn: 242218
2015-07-14 22:32:44 +00:00
Hal Finkel 9bbad03b98 [PowerPC] Use the ABI indirect-call protocol for patchpoints
We used to take the address specified as the direct target of the patchpoint
and did no TOC-pointer handling.  This, however, was not all that useful,
because MCJIT tends to create a lot of modules, and they have their own TOC
sections. Thus, to call from the generated code to other generated code, you
really need to switch TOC pointers. Make this work as expected, and under
ELFv1, treat the address as the function descriptor address so that the correct
TOC pointer can be loaded.

llvm-svn: 242217
2015-07-14 22:26:06 +00:00
Rafael Espindola 4b83cb5390 Add support for reading members out of thin archives.
For now the Archive owns the buffers of the thin archive members.
This makes for a simple API, but all the buffers are destructed
only when the archive is destructed. This should be fine since we
close the files after mmap so we should not hit an open file
limit.

llvm-svn: 242215
2015-07-14 22:18:43 +00:00
Pete Cooper 65c69407c8 Add allnodes() iterator range to SelectionDAG. NFC.
SelectionDAG already had begin/end methods for iterating over all
the nodes, but didn't define an iterator_range for us in foreach
loops.

This adds such a method and uses it in some of the eligible places
throughout the backends.
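
A usage sketch (assuming a SelectionDAG &DAG in scope; not code from the patch):

  #include "llvm/CodeGen/SelectionDAG.h"

  static unsigned countNodes(llvm::SelectionDAG &DAG) {
    unsigned Count = 0;
    // Range-based iteration instead of allnodes_begin()/allnodes_end().
    for (llvm::SDNode &Node : DAG.allnodes()) {
      (void)Node;
      ++Count;
    }
    return Count;
  }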

llvm-svn: 242212
2015-07-14 22:10:54 +00:00
Pete Cooper 06e249e713 Constify parameters in SelectionDAG methods. NFC
llvm-svn: 242210
2015-07-14 21:54:52 +00:00
Pete Cooper cf17e18f4e Remove unnecessary .getNode() in SelectionDAG. NFC.
The simplify_type specialisation allows us to cast directly from
SDValue to an SDNode* subclass so we don't need to pass an SDNode*
to cast<>.

llvm-svn: 242209
2015-07-14 21:54:48 +00:00
Pete Cooper e89ba67f72 Use more foreach loops in SelectionDAG. NFC
llvm-svn: 242208
2015-07-14 21:54:45 +00:00
Alex Lorenz 9fab370d79 MIR Serialization: Serialize the machine basic block live in registers.
llvm-svn: 242204
2015-07-14 21:24:41 +00:00
Alex Lorenz 15a00a858a MIR Printer: move the function 'printReg'. NFC.
This commit moves the function 'printReg' towards the start of the file so that
it can be used by the conversion methods in MIRPrinter and not just the printing
methods in MIPrinter.

llvm-svn: 242203
2015-07-14 21:18:25 +00:00
Tim Northover 586b741959 GVN: use a static array instead of regenerating it each time. NFC.
llvm-svn: 242202
2015-07-14 21:14:58 +00:00
JF Bastien d9767a368f WebAssembly: add basic int/fp instruction codegen.
Summary: This patch has the most basic instruction codegen for 32 and 64 bit int/fp.

Reviewers: sunfish

Subscribers: llvm-commits, jfb

Differential Revision: http://reviews.llvm.org/D11193

llvm-svn: 242201
2015-07-14 21:13:29 +00:00
Krzysztof Parzyszek fdfaae42b6 Fix NDEBUG build warning
llvm-svn: 242200
2015-07-14 21:03:24 +00:00
Tim Northover d5fdef016d GVN: tolerate an instruction being replaced without existing in the leaderboard
Sometimes an incidentally created instruction can duplicate a Value used
elsewhere. It then often doesn't end up in the leader table. If it's later
removed, we attempt to remove it from the leader table and segfault.

Instead we should just ignore the removal request, which won't cause any
problems. The reverse situation, where the original instruction is replaced by
the new one (which you might think could leave the leader table empty) cannot
occur, because the incidental instruction will never be found in the first
place.
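
A minimal sketch of the tolerant removal, using a deliberately simplified stand-in for
the leader table (the real GVN table maps value numbers to lists of leaders):

  #include <cstdint>
  #include <unordered_map>

  struct Value;  // stand-in for llvm::Value

  void removeFromLeaderTable(std::unordered_map<uint32_t, Value *> &Leaders,
                             uint32_t Num, Value *V) {
    auto It = Leaders.find(Num);
    // Only erase if the table really points at the value being removed;
    // otherwise just ignore the request instead of crashing.
    if (It != Leaders.end() && It->second == V)
      Leaders.erase(It);
  }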

llvm-svn: 242199
2015-07-14 21:03:18 +00:00
Krzysztof Parzyszek 2e82883620 Fix Windows build: replace __func__ with LLVM_FUNCTION_NAME
llvm-svn: 242192
2015-07-14 20:11:28 +00:00
Bruno Cardoso Lopes 9e6dea1df8 [MMX] Use the appropriate instructions for GR64 <-> VR64 copies.
MOVSDto64rr and MOV64toSDrr are defined to convert between FR64 (%xmm)
<-> GR64 registers, not VR64 (%mm) <-> GR64. This is wrong.

I found this by inspection and could not find a suitable testcase for it
since (1) we don't handle MMX bitcasts in the Peephole optimizer so as to
generate COPYs that (2) could be expanded back to the appropriate x86
instruction in ExpandPostRA.

Switch to use the appropriate instructions: MMX_MOVD64from64rr and
MMX_MOVD64to64rr here.

llvm-svn: 242191
2015-07-14 20:09:34 +00:00
Hal Finkel 8acae5276e [PowerPC] Fix the PPCInstrInfo::getInstrLatency implementation
PowerPC uses itineraries to describe processor pipelines (and dispatch-group
restrictions for P7/P8 cores). Unfortunately, the target-independent
implementation of TII.getInstrLatency calls ItinData->getStageLatency, and that
looks for the largest cycle count in the pipeline for any given instruction.
This, however, yields the wrong answer for the PPC itineraries, because we
don't encode the full pipeline. Because the functional units are fully
pipelined, we only model the initial stages (there are no relevant hazards in
the later stages to model), and so the technique employed by getStageLatency
does not really work. Instead, we should take the maximum output operand
latency, and that's what PPCInstrInfo::getInstrLatency now does.
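
A rough sketch (assumed shape, not the actual PPCInstrInfo code) of taking the maximum
output-operand latency from the itinerary rather than the whole-pipeline stage latency:

  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/MC/MCInstrItineraries.h"
  #include <algorithm>

  static unsigned maxDefOperandLatency(const llvm::InstrItineraryData &Itin,
                                       const llvm::MachineInstr &MI) {
    unsigned Latency = 1;
    unsigned SchedClass = MI.getDesc().getSchedClass();
    for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
      const llvm::MachineOperand &MO = MI.getOperand(i);
      if (!MO.isReg() || !MO.isDef())
        continue;
      int Cycle = Itin.getOperandCycle(SchedClass, i);
      if (Cycle >= 0)
        Latency = std::max(Latency, unsigned(Cycle));
    }
    return Latency;
  }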

This caused some test-case churn, including two unfortunate side effects.
First, the new arrangement of copies we get from function parameters now
sometimes blocks VSX FMA mutation (a FIXME has been added to the code and the
test cases), and we have one significant test-suite regression:

SingleSource/Benchmarks/BenchmarkGame/spectral-norm
	56.4185% +/- 18.9398%

In this benchmark we have a loop with a vectorized FP divide, and it with the
new scheduling both divides end up in the same dispatch group (which in this
case seems to cause a problem, although why is not exactly clear). The grouping
structure is hard to predict from the bottom of the loop, and there may not be
much we can do to fix this.

Very few other test-suite performance effects were really significant, but
almost all weakly favor this change. However, in light of the issues
highlighted above, I've left the old behavior available via a
command-line flag.

llvm-svn: 242188
2015-07-14 20:02:02 +00:00
Krzysztof Parzyszek 758744706a [Hexagon] Generate instructions for operations on predicate registers
Convert logical operations on general-purpose registers to the corresponding
operations on predicate registers.

llvm-svn: 242186
2015-07-14 19:30:21 +00:00
Keno Fischer aff703a2ca [CodeGen] Force emission of personality directive if explicitly specified
Summary:
Before this change, personality directives were not emitted
if there was no invoke left in the function (of course until
recently this also meant that we couldn't know what
the personality actually was). This patch forces personality directives
to still be emitted, unless it is known to be a noop in the absence of
invokes, or the user explicitly specified `nounwind` (and not
`uwtable`) on the function.

Reviewers: majnemer, rnk

Subscribers: rnk, llvm-commits

Differential Revision: http://reviews.llvm.org/D10884

llvm-svn: 242185
2015-07-14 19:22:51 +00:00
Matt Arsenault 24692118ba AMDGPU: Avoid using 64-bit shift for i64 (shl x, 32)
This can be done using only moves, which theoretically
will optimize better later.
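
A scalar illustration (not the backend code) of why no 64-bit shift is needed: the low
half of the result is zero and the high half is just the low half of x, so the two
halves can be produced with plain moves.

  #include <cstdint>

  uint64_t shl_by_32(uint64_t x) {
    uint32_t lo = 0;                            // low 32 bits of the result
    uint32_t hi = static_cast<uint32_t>(x);     // low half of x moves to the high half
    return (static_cast<uint64_t>(hi) << 32) | lo;
  }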

Although this transform increases the instruction count,
it should be code size / cycle count neutral in the worst
VALU case. It also seems to slightly improve a couple
of testcases due to other DAG combines this exposes.

This is probably slightly worse for the SALU case, so
it might be better to handle this during moveToVALU,
although then you lose some simplifications like
the load width reducing in the simple testcase.

llvm-svn: 242177
2015-07-14 18:20:33 +00:00
Matt Arsenault 84db5d97b0 AMDGPU/SI: Fix read2 merging into a super register.
If the read2 produced was supposed to be writing into a
super register, it would use the wrong subregister indices.
Fix this by inserting copies, so we only ever write to a vreg_64.
Run the register coalescer again to clean this up, although this
isn't ideal and often does result in an extra move.

Also remove the assert that offset1 > offset0.

There isn't a real reason to not allow this other than a minor
convenience in the compiler, and it doesn't seem worth the effort
of avoiding it.

llvm-svn: 242174
2015-07-14 17:57:36 +00:00
Matthias Braun 9912bb817c MachineRegisterInfo: Remove UsedPhysReg infrastructure
We have detailed def/use lists for every physical register in
MachineRegisterInfo anyway, so there is little use in maintaining an
additional bitset of which ones are used.

Removing it frees us from extra bookkeeping. This simplifies
VirtRegMap.

Differential Revision: http://reviews.llvm.org/D10911

llvm-svn: 242173
2015-07-14 17:52:07 +00:00
Matthias Braun 953393a72c RAGreedy: Keep track of allocated PhysRegs internally
Do not use MachineRegisterInfo::setPhysRegUsed()/isPhysRegUsed()
anymore. This bitset changes function-global state and is set by the
VirtRegRewriter anyway.
Simply use a bitvector private to RAGreedy.

Differential Revision: http://reviews.llvm.org/D10910

llvm-svn: 242169
2015-07-14 17:38:17 +00:00
Nemanja Ivanovic 984a3613b3 Add missing builtins to the PPC back end for ABI compliance (vol. 4)
This patch corresponds to review:
http://reviews.llvm.org/D11183

Back end portion of the fourth round of additions to altivec.h.

llvm-svn: 242167
2015-07-14 17:25:20 +00:00
Matthias Braun 0256486532 PrologEpilogInserter: Rewrite API to determine callee saved registers.
This changes TargetFrameLowering::processFunctionBeforeCalleeSavedScan():

- Rename the function to determineCalleeSaves()
- Pass a bitset of callee saved registers by reference, thus avoiding
  the function-global PhysRegUsed bitset in MachineRegisterInfo.
- Without PhysRegUsed the implementation is fine-tuned to not save
  physical registers which are only read but never modified.

Related to rdar://21539507

Differential Revision: http://reviews.llvm.org/D10909

llvm-svn: 242165
2015-07-14 17:17:13 +00:00
Tim Northover c962d4f28b AArch64: add rev64 alias for 64-bit rev instruction.
It could be useful to assembly programmers and makes the permitted variants a
little more uniform.

llvm-svn: 242164
2015-07-14 17:07:29 +00:00
Krzysztof Parzyszek a0ecf07c0b [Hexagon] Generate "extract" instructions more aggressively
Generate extract instructions (via intrinsics) before the DAG combiner
folds shifts into unrecognizable forms.

llvm-svn: 242163
2015-07-14 17:07:24 +00:00
Hans Wennborg 61f9efe73b ARMAsmParser: Take MCInst param by const-ref
(Broken out from http://reviews.llvm.org/D11167)

llvm-svn: 242160
2015-07-14 16:39:01 +00:00
Alexandros Lamprineas a04e6b1853 Caused regressions: compile Release+Asserts failed on clang-native-arm-cortex-a9
Revert "-Added API for retrieving the default FPU of a CPU from TargetParser."

This reverts commit 01199ab0c6ff2d5c4f6b2c05a95ec011e41c4669.

llvm-svn: 242147
2015-07-14 14:34:06 +00:00
Tom Stellard e48fe2a27a AMDGPU/SI: Add support for shrinking v_cndmask_b32_e32 instructions
Reviewers: arsenm

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11061

llvm-svn: 242146
2015-07-14 14:15:03 +00:00
Aaron Ballman a927a86c72 Silencing two MSVC warnings; 'argument' : truncation from 'unsigned int' to 'int16_t' and truncation of constant value. NFC intended.
llvm-svn: 242145
2015-07-14 14:14:00 +00:00
Alexandros Lamprineas ab9907a217 -Added API for retrieving the default FPU of a CPU from TargetParser.
-Implemented as a table lookup.

Change-Id: Ibf7217f6bd2769e9c06835a5aede3d072dee6757
Phabricator: http://reviews.llvm.org/D11100
llvm-svn: 242141
2015-07-14 13:20:48 +00:00
Daniel Sanders 03f9c019d2 [mips] Fix li/la differences between IAS and GAS.
Summary:
- Signed 16-bit should have priority over unsigned.
- For la, unsigned 16-bit must use ori+addu rather than directly use ori.
- Correct tests on 32-bit immediates with 64-bit predicates by
  sign-extending the immediate beforehand. For example, isInt<16>(0xffff8000)
  should be true and use addiu.

Also split li/la testing into separate files due to their size.
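
A sketch (hypothetical helper, not the MipsAsmParser code) of the expansion priority
described above for "li $reg, imm" with a 32-bit immediate:

  #include <cstdint>
  #include <string>

  std::string expandLi(int32_t Imm) {
    if (Imm >= -32768 && Imm <= 32767)          // signed 16-bit wins first
      return "addiu $reg, $zero, " + std::to_string(Imm);
    if (Imm >= 0 && Imm <= 65535)               // then unsigned 16-bit
      return "ori $reg, $zero, " + std::to_string(Imm);
    return "lui $reg, " + std::to_string((Imm >> 16) & 0xffff) +   // otherwise lui/ori
           "; ori $reg, $reg, " + std::to_string(Imm & 0xffff);
  }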

Reviewers: vkalintiris

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10967

llvm-svn: 242139
2015-07-14 12:24:22 +00:00
Chandler Carruth 466d7ad42d [PM/AA] Reformat GlobalsModRef so that subsequent patches I make here
don't continually introduce formatting deltas. NFC

llvm-svn: 242129
2015-07-14 08:42:39 +00:00
David Majnemer 62690b1952 [SROA] Don't de-atomic volatile loads and stores
Volatile loads and stores are made visible in global state regardless of
what memory is involved.  It is not correct to disregard the ordering
and synchronization scope because it is possible to synchronize with
memory operations performed by hardware.

This partially addresses PR23737.

llvm-svn: 242126
2015-07-14 06:19:58 +00:00
Yaron Keren d1ba2d9d8b Generate correct asm info for mingw and cygwin ARM targets.
http://reviews.llvm.org/D11075

Patch by Martell Malone
Reviewed by Reid Kleckner

llvm-svn: 242123
2015-07-14 05:51:05 +00:00
NAKAMURA Takumi 0b305db8df Prune trailing whitespaces and CRs.
llvm-svn: 242117
2015-07-14 04:03:49 +00:00
Matthias Braun 75e668ea6e Revert "LegalizeDAG: Fix and improve FCOPYSIGN/FABS legalization"
Accidental commit, needs review first.

This reverts commit r242107.

llvm-svn: 242108
2015-07-14 02:09:57 +00:00
Matthias Braun 4ac4ecdadf LegalizeDAG: Fix and improve FCOPYSIGN/FABS legalization
- Factor out code to query and modify the sign bit of a floating-point
  value as an integer. This also works if none of the target's integer
  types is big enough to hold all bits of the floating-point value.

- Legalize FABS(x) as FCOPYSIGN(x, 0.0) if FCOPYSIGN is available,
  otherwise perform bit manipulation on the sign bit. The previous code
  used "x >u 0 ? x : -x" which is incorrect for x being -0.0! It also
  takes 34 instructions on ARM Cortex-M4. With this patch we only
  require 5:
    vldr d0, LCPI0_0
    vmov r2, r3, d0
    lsrs r2, r3, #31
    bfi r1, r2, #31, #1
    bx lr
  (This could be further improved if the compiler would recognize that
   r2, r3 is zero).

- Only lower FCOPYSIGN(x, y) = sign(x) ? -FABS(x) : FABS(x) if FABS is
  available, otherwise perform bit manipulation on the sign bit.

- Perform the sign(x) test by masking out the sign bit and comparing
  with 0 rather than shifting the sign bit to the highest position and
  testing for "<s 0". For x86 copysignl (on 80bit values) this gets us:
    testl $32768, %eax
  rather than:
    shlq $48, %rax
    sets %al
    testb %al, %al
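
A bit-level illustration (plain C++, not the LegalizeDAG code) of the two rewrites
above: FABS via clearing the sign bit, and the sign(x) test via a mask and compare
rather than a shift:

  #include <cstdint>
  #include <cstring>

  double fabs_bits(double x) {
    uint64_t Bits;
    std::memcpy(&Bits, &x, sizeof(Bits));
    Bits &= ~(1ULL << 63);              // clear the sign bit; handles -0.0 correctly
    std::memcpy(&x, &Bits, sizeof(x));
    return x;
  }

  bool sign_bit_set(double x) {
    uint64_t Bits;
    std::memcpy(&Bits, &x, sizeof(Bits));
    return (Bits & (1ULL << 63)) != 0;  // mask + compare against zero, no shift
  }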

llvm-svn: 242107
2015-07-14 02:08:26 +00:00
Andrew Wilkins 3bdfc1cd0c Add capability to get and set the personality function from the C API
Summary:
The capability was lost with D10429 where the personality function was set at function level rather than landing pad level. Now there is no way to get/set the personality function from the C API. That is a problem.

Note that the whole thing could be avoided by improving the C API testing, as started by D10725

Reviewers: chandlerc, bogner, majnemer, andrew.w.kaylor, rafael, rnk, axw

Subscribers: rafael, llvm-commits

Differential Revision: http://reviews.llvm.org/D10946

llvm-svn: 242104
2015-07-14 01:23:06 +00:00
Rafael Espindola 2b05416be8 Add a helper function. NFC.
llvm-svn: 242100
2015-07-14 01:06:16 +00:00
Alex Lorenz 418f3ec17d MIR Serialization: Serialize the variable sized stack objects.
llvm-svn: 242095
2015-07-14 00:26:26 +00:00
Reid Kleckner 486fa3977a Update enforceKnownAlignment after the isWeakForLinker semantic change
Previously we would refrain from attempting to increase the linkage of
available_externally globals because they were considered weak for the
linker. Now they are treated more like a declaration instead of a weak
definition.

This was causing SSE alignment faults in Chromium, when some code
assumed it could increase the alignment of a dllimported global that it
didn't control.  http://crbug.com/509256

llvm-svn: 242091
2015-07-14 00:11:08 +00:00
Alex Lorenz 2eacca86ef MIR Serialization: Serialize the sub register indices.
This commit serializes the sub register indices from the register machine
operands.

Reviewers: Duncan P. N. Exon Smith
llvm-svn: 242084
2015-07-13 23:24:34 +00:00
Rafael Espindola c60d0d2a15 Fix reading archive members with / in the name.
This is important for thin archives.

llvm-svn: 242082
2015-07-13 23:07:05 +00:00
Bill Schmidt 15deb803b4 [PPC64LE] More improvements to VSX swap optimization
This patch allows VSX swap optimization to succeed more frequently.
Specifically, it is concerned with common code sequences that occur
when copying a scalar floating-point value to a vector register.  This
patch currently handles cases where the floating-point value is
already in a register, but does not yet handle loads (such as via an
LXSDX scalar floating-point VSX load).  That will be dealt with later.

A typical case is when a scalar value comes in as a floating-point
parameter.  The value is copied into a virtual VSFRC register, and
then a sequence of SUBREG_TO_REG and/or COPY operations will convert
it to a full vector register of the class required by the context.  If
this vector register is then used as part of a lane-permuted
computation, the original scalar value will be in the wrong lane.  We
can fix this by adding a swap operation following any widening
SUBREG_TO_REG operation.  Additional COPY operations may be needed
around the swap operation in order to keep register assignment happy,
but these are pro forma operations that will be removed by coalescing.

If a scalar value is otherwise directly referenced in a computation
(such as by one of the many XS* vector-scalar operations), we
currently disable swap optimization.  These operations are
lane-sensitive by definition.  A MentionsPartialVR flag is added for
use in each swap table entry that mentions a scalar floating-point
register without having special handling defined.

A common idiom for PPC64LE is to convert a double-precision scalar to
a vector by performing a splat operation.  This ensures that the value
can be referenced as V[0], as it would be for big endian, whereas just
converting the scalar to a vector with a SUBREG_TO_REG operation
leaves this value only in V[1].  A doubleword splat operation is one
form of an XXPERMDI instruction, which takes one doubleword from a
first operand and another doubleword from a second operand, with a
two-bit selector operand indicating which doublewords are chosen.  In
the general case, an XXPERMDI can be permitted in a lane-swapped
region provided that it is properly transformed to select the
corresponding swapped values.  This transformation is to reverse the
order of the two input operands, and to reverse and complement the
bits of the selector operand (derivation left as an exercise to the
reader ;).
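
A small sketch of that selector rewrite, derived from the description above
(hypothetical helper, not the PPCVSXSwapRemoval code): swap the two inputs and
reverse-and-complement the 2-bit selector.

  #include <cstdint>

  struct XXPermDI {
    unsigned OpA, OpB;   // the two doubleword source operands
    uint8_t Sel;         // 2-bit doubleword selector
  };

  XXPermDI swapLanes(const XXPermDI &Old) {
    XXPermDI New;
    New.OpA = Old.OpB;                                   // reverse the operands
    New.OpB = Old.OpA;
    uint8_t Reversed = ((Old.Sel & 1) << 1) | ((Old.Sel >> 1) & 1);
    New.Sel = static_cast<uint8_t>(~Reversed & 3);       // complement the reversed bits
    return New;
  }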

A new test case that exercises the scalar-to-vector and generalized
XXPERMDI transformations is added as CodeGen/PowerPC/swaps-le-5.ll.
The patch also requires a change to CodeGen/PowerPC/swaps-le-3.ll to
use CHECK-DAG instead of CHECK for two independent instructions that
now appear in reverse order.

There are two small unrelated changes that are added with this patch.
First, the XXSLDWI instruction was incorrectly omitted from the list
of lane-sensitive instructions; this is now fixed.  Second, I observed
that the same webs were being rejected over and over again for
different reasons.  Since it's sufficient to reject a web only once, I
added a check for this to speed up the compilation time slightly.

llvm-svn: 242081
2015-07-13 22:58:19 +00:00
Pete Cooper 90d95edbb4 Loop idiom recognizer was replacing too many uses of popcount.
When spotting that a loop can use ctpop, we were incorrectly replacing all uses of a value with a value derived from ctpop.

The bug here was exposed because we were replacing a use prior to the ctpop with the ctpop value and so we have a use before def, i.e., we changed

 %tobool.5 = icmp ne i32 %num, 0
 store i1 %tobool.5, i1* %ptr
 br i1 %tobool.5, label %for.body.lr.ph, label %for.end

to

 store i1 %1, i1* %ptr
 %0 = call i32 @llvm.ctpop.i32(i32 %num)
 %1 = icmp ne i32 %0, 0
 br i1 %1, label %for.body.lr.ph, label %for.end

Even if we inserted the ctpop so that it dominates the store here, that would still be incorrect.  The store doesn’t want the result of ctpop.

The fix is very simple, and involves replacing only the branch condition with the ctpop instead of all uses.
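
The shape of the fix, sketched with the LLVM IR API (names are illustrative, not the
actual LoopIdiomRecognize code):

  #include "llvm/IR/Instructions.h"

  static void useCtpopOnlyInBranch(llvm::BranchInst *Br, llvm::Value *NewCond) {
    // Previously the equivalent of OldCond->replaceAllUsesWith(NewCond) was done,
    // which also rewrote the unrelated store. Updating only the branch condition
    // leaves every other user of the old value untouched.
    Br->setCondition(NewCond);
  }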

Reviewed by Hal Finkel.

llvm-svn: 242068
2015-07-13 21:25:33 +00:00
Reid Kleckner 9a1a919465 [WinEH] Emit the LSDA even if no lpads remain but outlining occurred
The outlined funclets call intrinsics which reference labels from the
LSDA. This situation can easily arise in small functions with a single
cleanup at -O0, where Clang marks a definition as nounwind, and then
WinEHPrepare "discovers" that the landingpad is dead by accident and
deletes it.

We now need to ask the LLVM IR Function for its personality directly,
rather than going through MachineModuleInfo.

Fixes PR23892.

llvm-svn: 242063
2015-07-13 20:41:46 +00:00
Benjamin Kramer d8861517bf [Hexagon] Move BitTracker into the llvm namespace and remove redundant qualifications
No functional change intended.

llvm-svn: 242062
2015-07-13 20:38:16 +00:00
Rafael Espindola 6a8e86f26e Add support for deterministic output in llvm-ar and make it the default.
llvm-svn: 242061
2015-07-13 20:38:09 +00:00
Matt Arsenault ca95d44110 AMDGPU: Minor cleanups to always inline pass
llvm-svn: 242053
2015-07-13 19:08:36 +00:00
David Majnemer 1305e2c0f5 [MC] Correctly escape .safeseh's symbol
This fixes PR24107.

llvm-svn: 242050
2015-07-13 18:51:15 +00:00
Mark Heffernan 4c8ca53f7e Enable partial and runtime loop unrolling for NVPTX.
Enable partial and runtime loop unrolling for the NVPTX backend via
TTI::UnrollingPreferences with a small threshold. This partially unrolls
small loops which are often unrolled by the PTX-to-SASS compiler anyway;
unrolling earlier can be beneficial.
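
A sketch of the opt-in (field values are illustrative; the actual threshold in the
patch may differ):

  #include "llvm/Analysis/TargetTransformInfo.h"

  static void
  setNVPTXUnrollingPrefs(llvm::TargetTransformInfo::UnrollingPreferences &UP) {
    UP.Partial = true;          // allow partial unrolling
    UP.Runtime = true;          // allow unrolling with a runtime trip count
    UP.PartialThreshold = 16;   // keep the threshold small: only tiny loops
  }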

llvm-svn: 242049
2015-07-13 18:33:21 +00:00
Mark Heffernan d7ebc24112 Enable runtime unrolling with unroll pragma metadata
Enable runtime unrolling for loops with unroll count metadata ("#pragma unroll N")
and a runtime trip count. Also, do not unroll loops with unroll full metadata if the
loop has a runtime loop count. Previously, such loops would be unrolled with a
very large threshold (pragma-unroll-threshold) if runtime unrolling happened to be
enabled, resulting in a very large (and likely unwise) unroll factor.

llvm-svn: 242047
2015-07-13 18:26:27 +00:00
Adrian Prantl 857237ee70 Service the doxygen comments in DwarfUnit and DwarfDebug.
llvm-svn: 242046
2015-07-13 18:25:29 +00:00