Handle lowering of global addresses for PIC mode compilation on Windows. Always
use a movw/movt pair to load the address, as Windows on ARM requires ARMv7+ and
is a pure Thumb environment.
llvm-svn: 208385
Summary:
Also ran clang-format on the function. The code added is the last else
if block.
Reviewers: nadav, craig.topper, delena
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D3518
llvm-svn: 208372
comment of the API.
Relaxes the behavior of TargetInstrInfo::commuteInstruction when
TargetInstrInfo::findCommutedOpIndices returns false.
Previously, TargetInstrInfo triggered a fatal error in that situation, whereas, based
on the comment in the API, it should just return nullptr. Indeed, the only
precondition that must be ensured is that the instruction is commutable.
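A caller-side sketch of the relaxed contract (illustrative only, not taken from an
in-tree pass): a null result now simply means the operands could not be commuted.
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/Target/TargetInstrInfo.h"
// Try to commute MI; with this change a failure is reported through a null
// return value instead of a fatal error.
static llvm::MachineInstr *tryCommute(const llvm::TargetInstrInfo &TII,
                                      llvm::MachineInstr *MI) {
  if (llvm::MachineInstr *Commuted = TII.commuteInstruction(MI))
    return Commuted; // operands swapped (possibly as a new instruction)
  return nullptr;    // not commutable; the caller keeps the original MI
}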
llvm-svn: 208371
(r207876 was reverted in r208131 after seeing some consistent buildbot
failure for MSVC 2012. The original commits were in r207724-r207726)
Takumi was nice enough to dig into this and locate this Microsoft
Connect issue:
http://connect.microsoft.com/VisualStudio/feedback/details/814899/forward-as-tuple-debug-implementation-error
describing a bug in MSVC2012's forward_as_tuple implementation.
Since the parameters in this instance are trivial/small, pass them by
value (using make_tuple) instead of as a perfectly-forwarded tuple of rvalue
references (involving the broken forward_as_tuple). Hopefully this will
satisfy MSVC2012.
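A self-contained sketch of the workaround (the container and value types here are
illustrative, not the actual LexicalScopes code):
#include <tuple>
#include <unordered_map>
#include <utility>
// Illustrative container whose mapped values are constructed piecewise.
std::unordered_map<const void *, std::pair<unsigned, unsigned>> Map;
void addEntry(const void *Key, unsigned A, unsigned B) {
  // std::forward_as_tuple(A, B) would build a tuple of rvalue references,
  // which MSVC 2012's debug implementation mishandles; std::make_tuple
  // copies the small trivial values instead and sidesteps the bug.
  Map.emplace(std::piecewise_construct, std::make_tuple(Key),
              std::make_tuple(A, B));
}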
llvm-svn: 208364
Sorry for the commit spam. My clang-format crashed on me and the vim
plugin did not print an error, but instead just left the formatting
untouched.
llvm-svn: 208358
This patch teaches the backend how to combine packed SSE2/AVX2 arithmetic shift
intrinsics.
The rules are:
- Always fold a packed arithmetic shift by zero to its first operand;
- Convert a packed arithmetic shift intrinsic dag node into an ISD::SRA only if
the shift count is known to be smaller than the vector element size.
This patch also teaches function 'getTargetVShiftByConstNode' how to fold
target-specific vector shifts by zero.
Added two new tests to verify that the DAGCombiner is able to fold
sequences of SSE2/AVX2 packed arithmetic shift calls.
llvm-svn: 208342
The parsing of ADD/SUB shifted immediates needs to be done explicitly so
that better diagnostics can be emitted; as a side effect, this also
removes some of the hacks in the current method of handling this operand
type.
Additionally remove manual CMP aliasing to ADD/SUB and use InstAlias
instead.
llvm-svn: 208329
Summary:
These instructions were added in MIPS-I and MIPS-II but were removed in
MIPS-III. Interestingly, GAS continues to accept them when assembling for
MIPS-III.
For the moment, we will follow GAS and accept these instructions for
MIPS-III and newer, but this will be tightened up when the invalid-*.s
tests are added.
Depends on D3647
Reviewers: vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3648
llvm-svn: 208311
Summary:
A small number of instructions are rejected with the wrong error message.
These have been placed in a separate test for now. There seems to be some
parsing quirk that triggers when these instructions are disabled.
Depends on D3571
Reviewers: vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3647
llvm-svn: 208305
Also removed an inaccurate comment that stated that a DenseMap was used as
storage for the ListInit*'s. It's currently using a FoldingSet.
I expect there's a better way to fix this but I haven't found it yet. FoldingSet
is incompatible with the Pool template and I'm not sure if FoldingSet can be
safely replaced with a DenseMap of computed FoldingSetID's to ListInit*'s.
llvm-svn: 208293
The old method used by X86TTI to determine partial-unrolling thresholds was
messy (because it worked by testing target features), and also would not
correctly identify the target CPU if certain target features were disabled.
After some discussions on IRC with Chandler et al., it was decided that the
processor scheduling models were the right containers for this information
(because it is often tied to special uop dispatch-buffer sizes).
This does represent a small functionality change:
- For generic x86-64 (which uses the SB model and, thus, will get some
unrolling).
- For AMD cores (because they still currently use the SB scheduling model)
- For Haswell (based on benchmarking by Louis Gerbarg, it was decided to bump
the default threshold to 50; we're working on a test case for this).
Otherwise, nothing has changed for any other targets. The logic, however, has
been moved into BasicTTI, so other targets may now also opt-in to this
functionality simply by setting LoopMicroOpBufferSize in their processor
model definitions.
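As a rough sketch of the opt-in mechanism (illustrative, not the exact BasicTTI
code; treat the field access as an assumption about MCSchedModel):
#include "llvm/MC/MCSchedule.h"
// A processor model that sets LoopMicroOpBufferSize (in TableGen, e.g.
// "let LoopMicroOpBufferSize = 28;") gets that value used as its
// partial-unrolling budget.
static void applyPartialUnrollBudget(const llvm::MCSchedModel *SchedModel,
                                     unsigned &PartialThreshold,
                                     bool &AllowPartial) {
  if (SchedModel && SchedModel->LoopMicroOpBufferSize > 0) {
    PartialThreshold = SchedModel->LoopMicroOpBufferSize;
    AllowPartial = true;
  }
}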
llvm-svn: 208289
This adds FK_SecRel_2 relocation support to ARM. This enables the building of
object files for armv7-windows-msvc which enables CodeView line tables for
debugging as opposed to armv7-windows-itanium which currently uses DWARF.
llvm-svn: 208273
Summary:
Vectors built with zeros and elements in the same order as another
(source) vector are optimized to be built using a single insertps
instruction.
Also optimize when we move one element in a vector to a different place
in that vector while zeroing out some of the other elements.
Further optimizations are possible, described in TODO comments.
I will be implementing at least some of them in the near future.
Added some tests for different cases where this optimization triggers.
Reviewers: nadav, delena, craig.topper
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D3521
llvm-svn: 208271
The change to ExtractGV.cpp has no functionality change except to avoid
the asserts. Existing testcases already cover this, so I didn't add a
new one.
llvm-svn: 208264
Visibilities of `hidden` and `protected` are meaningless for symbols
with local linkage.
- Change the assembler to reject non-default visibility on symbols
with local linkage.
- Change the bitcode reader to auto-upgrade `hidden` and `protected`
to `default` when the linkage is local.
- Update LangRef.
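The auto-upgrade reduces to something like this sketch (using the standard
GlobalValue accessors; the actual bitcode-reader change may differ in detail):
#include "llvm/IR/GlobalValue.h"
// Visibility is meaningless on locally linked symbols, so old bitcode carrying
// hidden/protected on such symbols is upgraded to default.
static void upgradeVisibility(llvm::GlobalValue &GV) {
  if (GV.hasLocalLinkage() &&
      GV.getVisibility() != llvm::GlobalValue::DefaultVisibility)
    GV.setVisibility(llvm::GlobalValue::DefaultVisibility);
}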
<rdar://problem/16141113>
llvm-svn: 208263
`ModuleLinker::getLinkageResult()` shouldn't create symbols with local
linkage and non-default visibility -- in fact, symbols with local
linkage shouldn't be merged at all. Assert to that effect.
llvm-svn: 208262
Since visibility is meaningless for symbols with local linkage, check
local linkage before visibility when setting symbol attributes.
When linkage is `internal` and the visibility is `hidden`, the exposed
attribute is now `LTO_SYMBOL_SCOPE_INTERNAL` instead of
`LTO_SYMBOL_SCOPE_HIDDEN`. Although the bitfield allows *both* to be
specified, the combination is nonsense anyway.
Given changes (in progress) to drop visibility when a symbol has local
linkage, this has almost no functional effect: it's mostly a cleanup
to clarify the logic.
The exception is when something has `appending` linkage. Before this
change, such symbols would be advertised as `LTO_SYMBOL_SCOPE_INTERNAL`;
now, they'll be given `LTO_SYMBOL_SCOPE_COMMON`.
Unfortunately this is really awkward to test. This only changes what we
advertise to linkers (before running LTO), not what the final object
looks like. In theory I could add `DEBUG` output to `llvm-lto` (and
test with "REQUIRES: asserts"), but follow-up commits to disallow
`internal hidden` simplify this anyway.
<rdar://problem/16141113>
llvm-svn: 208261
relocation entries it applies.
Prior to this patch, RuntimeDyldImpl::resolveExternalSymbols discarded
relocations for external symbols once they had been applied. This causes issues
if the client calls MCJIT::finalizeLoadedModules more than once, and updates the
location of any symbols in between (e.g. by calling MCJIT::mapSectionAddress).
No test case yet: None of our in-tree memory managers support moving sections
around. I'll have to hack up a dummy memory manager before I can write a unit
test.
Fixes <rdar://problem/16764378>
llvm-svn: 208257
The loop stream detector (LSD) on modern Intel cores, which optimizes the
execution of small loops, has limits on the number of taken branches in
addition to uop-count limits (modern AMD cores have similar limits).
Unfortunately, at the IR level, estimating the number of branches that will be
taken is difficult. For one thing, it strongly depends on later passes (block
placement, etc.). The original implementation took a conservative approach and
limited the maximal BB DFS depth of the loop. However, fairly-extensive
benchmarking by several of us has revealed that this is the wrong approach. In
fact, there are zero known cases where the branch limit prevents a detrimental
unrolling (but plenty of cases where it does prevent beneficial unrolling).
While we could improve the current branch counting logic by incorporating
branch probabilities, this further complication seems unjustified without a
motivating regression. Instead, unless and until a regression appears, the
branch counting will be removed.
llvm-svn: 208255
Given a FMA family (e.g., 213, 231), not all the variants (i.e., register or
memory) are commutable.
E.g., for the 213 family (with the syntax src1, src2, src3):
fmaXXX213 A, B, reg3/mem3 == fmaXXX213 B, A, reg3/mem3
Now consider the 231 family:
fmaXXX231 A, B, reg3 == fmaXXX231 A, reg3, B
But
fmaXXX231 A, B, mem3 != fmaXXX231 A, mem3, B
Indeed, mem3 cannot be the second argument of the memory variant of fmaXXX231.
Working on a reduced test case!
<rdar://problem/16800495>
llvm-svn: 208252
When reducing the bitwidth of a comparison against a constant, the
original setcc's result type was used, which was incorrect.
No test since I don't think any other in-tree targets change the
bitwidth of the setcc type depending on the bitwidth of the compared
type.
llvm-svn: 208236
To compute the dimensions of the array in a unique way, we split the
delinearization analysis in three steps:
- find parametric terms in all memory access functions
- compute the array dimensions from the set of terms
- compute the delinearized access functions for each dimension
The first step is executed on all the memory access functions such that we
gather all the patterns in which an array is accessed. The second step reduces
all this information in a unique description of the sizes of the array. The
third step is delinearizing each memory access function following the common
description of the shape of the array computed in step 2.
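As a rough sketch of how a client drives the three steps (the function names and
signatures below are assumptions about the new interface, not copied from it):
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/ScalarEvolution.h"
#include "llvm/Analysis/ScalarEvolutionExpressions.h"
// Delinearize a single affine access function AccessFn under ScalarEvolution SE.
static void delinearizeOne(llvm::ScalarEvolution &SE,
                           const llvm::SCEVAddRecExpr *AccessFn,
                           llvm::SmallVectorImpl<const llvm::SCEV *> &Subscripts,
                           llvm::SmallVectorImpl<const llvm::SCEV *> &Sizes) {
  llvm::SmallVector<const llvm::SCEV *, 4> Terms;
  AccessFn->collectParametricTerms(SE, Terms);             // step 1: parametric terms
  SE.findArrayDimensions(Terms, Sizes);                    // step 2: array dimensions
  AccessFn->computeAccessFunctions(SE, Subscripts, Sizes); // step 3: access functions
}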
This rewrite of the delinearization pass also solves a problem we had with the
previous implementation: because the previous algorithm was by induction on the
structure of the SCEV, it would not correctly recognize the shape of the array
when the memory access was not following the nesting of the loops: for example,
see polly/test/ScopInfo/multidim_only_ivs_3d_reverse.ll
; void foo(long n, long m, long o, double A[n][m][o]) {
;
; for (long i = 0; i < n; i++)
; for (long j = 0; j < m; j++)
; for (long k = 0; k < o; k++)
; A[i][k][j] = 1.0;
Starting with this patch we no longer delinearize access functions that do not
contain parameters, for example in test/Analysis/DependenceAnalysis/GCD.ll
;; for (long int i = 0; i < 100; i++)
;; for (long int j = 0; j < 100; j++) {
;; A[2*i - 4*j] = i;
;; *B++ = A[6*i + 8*j];
These accesses will not be delinearized, as the upper bounds of the loops are
constants and their access functions do not contain SCEVUnknown parameters.
llvm-svn: 208232
default architecture for reasonable modern x86 processors, actually be
modern. This processor model should essentially be "tuned" for modern
x86 chips as much as possible without undue penalties on any specific
architecture. Previously we weren't even using the nice scheduling
models. There are a few other tweaks needed here, but I have at least
benchmarked this change across a decent swath of chips (Intel's
Clovertown, Westmere, and Sandy Bridge; AMD's Istanbul) and seen no
significant regressions.
If anyone has suggested ways to test this, just let me know. Somewhat
alarmingly, no existing tests failed.
llvm-svn: 208230
this patch disables the dead register elimination pass and the load/store pair
optimization pass at -O0. The ILP optimizations don't require the optimization
level to be checked because the call to addILPOpts is predicated with the
necessary check. The AdvSIMDScalar pass is disabled by default at all
optimization levels. This patch leaves that pass disabled by default.
Also, move command-line options into ARM64TargetMachine.cpp and add a few
additional flags to aid in debugging. This fixes an issue with the
-debug-pass=Structure flag where passes were printed, but not actually run
(i.e., AdvSIMDScalar pass).
llvm-svn: 208223
Summary:
These processors will only be available for the integrated assembler at
first (CodeGen will emit a fatal error saying they are not implemented).
The intention is to work through the existing instructions and correctly
annotate the ISA they were added in so that we have a sufficiently good
base to start MIPS64r6 development. MIPS64r6 removes/re-encodes certain
instructions, and I believe it is best to define ISAs using set-unions
as far as possible rather than using set-subtraction.
Reviewers: vmedic
Subscribers: emaste, llvm-commits
Differential Revision: http://reviews.llvm.org/D3569
llvm-svn: 208221
This is a followup to r208171, where a call to make_unique was
disambiguated for MSVC. Disambiguate two more calls, and remove the
comment about it since this is what we do everywhere.
llvm-svn: 208219
When performing a scalar comparison that feeds into a vector select,
it's actually better to do the comparison on the vector side: the
scalar route would be "CMP -> CSEL -> DUP", while the vector route is
"CM -> DUP", since the vector comparisons are all mask based.
llvm-svn: 208210
Summary:
One small functional change. The recently added PAUSE instruction now has
the HasStdEnc predicate which was accidentally removed by a Requires<>.
Depends on D3640
Reviewers: vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3641
llvm-svn: 208209
Summary:
Move IsGP64bit into GPRPredicates, and IsFP64bit/NotFP64bit into FGRPredicates
No functional change (confirmed by diffing tablegen-erated files).
Depends on D3639
Reviewers: vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3640
llvm-svn: 208201
The AAPCS states that values passed in registers must have a value as though
they had been loaded with "LDR". LDR is equivalent to "LD1.64 vX.1D" - that is,
loading scalars to vector registers and loading 1-element vectors are equivalent.
The logic implemented here is to ensure that at all call boundaries and during
formal argument lowering all vectors are treated as their bitwidth-based floating
point scalar counterpart, which is always one of f64 or f128 (v2i32 -> f64,
v4i32 -> f128 etc). A BITCAST is inserted so that the appropriate REV will be
generated during code generation.
llvm-svn: 208198
Summary:
This makes it easier to prove a more complicated change in the next commit
is non-functional.
Reviewers: vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3639
llvm-svn: 208197
Because we've canonicalised on using LD1/ST1, every time we do a bitcast
between vector types we must do an equivalent lane reversal.
Consider a simple memory load followed by a bitconvert then a store.
v0 = load v2i32
v1 = BITCAST v2i32 v0 to v4i16
store v4i16 v1
In big endian mode every memory access has an implicit byte swap. LDR and
STR do a 64-bit byte swap, whereas LD1/ST1 do a byte swap per lane - that
is, they treat the vector as a sequence of elements to be byte-swapped.
The two pairs of instructions are fundamentally incompatible. We've decided
to use LD1/ST1 only to simplify compiler implementation.
LD1/ST1 perform the equivalent of a sequence of LDR/STR + REV. This makes
the original code sequence:
v0 = load v2i32
v1 = REV v2i32 (implicit)
v2 = BITCAST v2i32 v1 to v4i16
v3 = REV v4i16 v2 (implicit)
store v4i16 v3
But this is now broken - the value stored is different to the value loaded
due to lane reordering. To fix this, on every BITCAST we must perform two
other REVs:
v0 = load v2i32
v1 = REV v2i32 (implicit)
v2 = REV v2i32
v3 = BITCAST v2i32 v2 to v4i16
v4 = REV v4i16
v5 = REV v4i16 v4 (implicit)
store v4i16 v5
This means an extra two instructions, but actually in most cases the two REV
instructions can be combined into one. For example:
(REV64_2s (REV64_4h X)) === (REV32_4h X)
There is also no 128-bit REV instruction. This must be synthesized with an
EXT instruction.
Most bitconverts require some sort of conversion. The only exceptions are:
a) Identity conversions - vNfX <-> vNiX
b) Single-lane-to-scalar - v1fX <-> fX or v1iX <-> iX
Even though there are hundreds of changed lines, I have a fairly high confidence
that they are somewhat correct. The changes to add two REV instructions per
bitcast were pretty mechanical, and once I'd done that I threw the resulting
.td at a script I wrote which combined the two REVs together (and added
an EXT instruction, for f128) based on an instruction description I gave it.
This was much less prone to error than doing it all manually, plus my brain
would not just have melted but would have vapourised.
llvm-svn: 208194
This completes the port of r204814 (cpirker "AArch64_BE function argument
passing for ARM ABI") from AArch64 to ARM64, and fixes a bunch of issues
found during later development along the way. The biggest of these was
that the alignment fixup logic wasn't replicated into all the places it
should have been.
llvm-svn: 208192
This is the third patch of a patch series that improves MergeFunctions
performance from O(N*N) to O(N*log(N)).
This patch description:
When comparing functions we need to compare the values we meet on the left
and right sides.
It's easy to sort things out for external values: it should simply be
the same value on the left and on the right.
But for local values (those introduced inside the function body)
we have to ensure they were introduced at exactly the same place,
and play the same role.
In short, this patch introduces serial numbering for values and a comparison
routine that compares two values by their serial numbers.
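A minimal sketch of the idea (the names are illustrative, not the actual
FunctionComparator members): the first time a local value is seen on each side it
is assigned the next serial number, and two locals compare equal only if their
numbers match.
#include "llvm/ADT/DenseMap.h"
#include "llvm/IR/Value.h"
llvm::DenseMap<const llvm::Value *, unsigned> SerialL, SerialR;
// Returns -1/0/1, mirroring the lexicographical comparison used elsewhere.
static int cmpLocalValues(const llvm::Value *L, const llvm::Value *R) {
  unsigned IdL = SerialL.insert({L, SerialL.size()}).first->second;
  unsigned IdR = SerialR.insert({R, SerialR.size()}).first->second;
  if (IdL != IdR)
    return IdL < IdR ? -1 : 1;
  return 0;
}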
llvm-svn: 208189
Summary:
The overall idea is to chop the Predicates list into subsets that are
usually overridden independently. This allows subclasses to partially
override the predicates of their superclasses without having to re-add all
the existing predicates.
This patch starts the process by moving HasStdEnc into a new
EncodingPredicates list and almost everything else into
AdditionalPredicates.
It has revealed a couple of likely bugs where 'let Predicates' has removed
the HasStdEnc predicate.
No functional change (confirmed by diffing tablegen-erated files).
Depends on D3549, D3506
Reviewers: vmedic
Differential Revision: http://reviews.llvm.org/D3550
llvm-svn: 208184
Summary:
It concatenates two or more lists. In addition to the !strconcat semantics
the lists must have the same element type.
My overall aim is to make it easy to append to Instruction.Predicates
rather than override it. This can be done by concatenating lists passed as
arguments, or by concatenating lists passed in additional fields.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: hfinkel, llvm-commits
Differential Revision: http://reviews.llvm.org/D3506
llvm-svn: 208183
Summary:
This will make it easier to prove that a more complicated change in the
following commit is non-functional.
No functional change.
Depends on D3506
Reviewers: vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3549
llvm-svn: 208179
1) Fix for printing debug locations for absolute paths.
2) Location printing is moved into public method DebugLoc::print() to avoid re-inventing the wheel.
Differential Revision: http://reviews.llvm.org/D3513
llvm-svn: 208177
O(N*log(N)). The idea is to introduce a total ordering among the function set.
This allows us to build a binary tree and perform function look-up in O(log(N)) time.
This patch description:
Introduced a total ordering among constants, implemented in the cmpConstants
method. The method performs a lexicographical comparison between constants
represented as hypothetical numbers of the following format:
<bitcastability-trait><raw-bit-contents>
Please read the cmpConstants declaration comments for more details.
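Conceptually, the ordering looks like the following heavily simplified sketch
(not the real cmpConstants; the trait and the content comparison shown here are
placeholders):
#include "llvm/IR/Constants.h"
static int cmpConstantsSketch(const llvm::Constant *L, const llvm::Constant *R) {
  // Most-significant "digits": a coarse, type-derived trait (here the type ID
  // stands in for the real bitcastability trait).
  unsigned TraitL = L->getType()->getTypeID();
  unsigned TraitR = R->getType()->getTypeID();
  if (TraitL != TraitR)
    return TraitL < TraitR ? -1 : 1;
  // Least-significant "digits": the raw bit contents, shown for integers only.
  if (const auto *LI = llvm::dyn_cast<llvm::ConstantInt>(L))
    if (const auto *RI = llvm::dyn_cast<llvm::ConstantInt>(R)) {
      const llvm::APInt &VL = LI->getValue(), &VR = RI->getValue();
      if (VL.getBitWidth() != VR.getBitWidth())
        return VL.getBitWidth() < VR.getBitWidth() ? -1 : 1;
      if (VL.ult(VR)) return -1;
      return VR.ult(VL) ? 1 : 0;
    }
  return 0; // other constant kinds omitted from this sketch
}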
llvm-svn: 208173
This field is used for a list of variables to ensure they are not lost
during optimization (they're only included when optimizations are
enabled).
llvm-svn: 208159
Mark up additional instructions which are part of the function prologue as
MachineFrameSetup. These instructions are emitted by the PEI pass to set up the
stack for use in the activating frame.
llvm-svn: 208153
The ARM::BLX instruction is an ARM mode instruction. The Windows on ARM target
is limited to Thumb instructions. Correctly use the thumb mode tBLXr
instruction. This would manifest as an errant write into the object file as the
instruction is 4 bytes in length rather than 2. The result would be a corrupted
object file that would eventually result in an executable that would crash at
runtime.
llvm-svn: 208152
If the source files referenced by a gcno file are missing, gcov
outputs a coverage file where every line is simply /*EOF*/. This also
occurs for lines in the coverage that are past the end of a file that
is found.
This change mimics gcov.
llvm-svn: 208149
In gcov, there's a -n/--no-output option, which disables the writing
of any .gcov files, so that it emits only the summary info on stdout.
This implements the same behaviour in llvm-cov.
llvm-svn: 208148
This is similar to the getAlignment patch, but is done just for
completeness. It looks like we never call getSection on an alias. All the
tests still pass if the if is replaced with an assert.
llvm-svn: 208139
Speculatively reverting due to a suspicious failure on a Windows
buildbot.
This reverts commit 10c37a012ea11596d44cd9059fe09c959caf30c8.
llvm-svn: 208131
remove it from the list of unspilled registers. Otherwise the following
attempt to keep the stack aligned by picking an extra GPR register to
spill will not work as it picks up r11.
llvm-svn: 208129
Summary:
When I initially introduced -pass-remarks, I thought it would be a
neat idea to make it additive. So, if one used it as:
$ llc -pass-remarks=inliner --pass-remarks=loop.*
the compiler would build the regular expression '(inliner)|(loop.*)'.
The more I think about it, the more I regret it. This is not how
other flags work. The standard semantics are right-to-left overrides.
This is how clang interprets -Rpass. And I think the two should be
compatible in this respect.
Reviewers: qcolombet
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D3614
llvm-svn: 208122
Before this patch, the backend always emitted a store+load sequence to
bitconvert the input operand from f64 to i64 for an ISD::BITCAST dag node that
performed a bitconvert from type MVT::f64 to type MVT::v2i32. The resulting
i64 node was then used to build a v2i32 vector.
With this patch, the backend now produces a cheaper SCALAR_TO_VECTOR from
MVT::f64 to MVT::v2f64. That SCALAR_TO_VECTOR is then followed by a "free"
bitcast to type MVT::v4i32. The elements of the resulting
v4i32 are then extracted to build a v2i32 vector (which is illegal and
therefore promoted to MVT::v2i64).
This is in general cheaper than emitting a stack store+load sequence
to bitconvert the operand from type f64 to type i64.
llvm-svn: 208107
This patch implements the infrastructure to use named register constructs in
programs that need access to specific registers (bare metal, kernels, etc).
So far, only the stack pointer is supported as a technology preview, but as it
is, the intrinsic can already support all non-allocatable registers from any
architecture.
llvm-svn: 208104
An alias has the address of what it points to, so it also has the same
alignment.
This allows a few optimizations to see past aliases for free.
llvm-svn: 208103
fall back to the normal path without a cpu. While doing this, fix
llc to just exit when we don't have a module to process, instead of
asserting.
llvm-svn: 208102
Also, provide the ability to create temporary and non-temporary
declarations, as not all declarations may be replaced by definitions
later on.
This provides the necessary infrastructure for Clang to fix PR19598,
leaking temporary MDNodes in Clang's debug info generation.
llvm-svn: 208054
The Win64 docs are very clear that anything larger than 8 bytes is
passed by reference, and GCC MinGW64 honors that for __modti3 and
friends.
Patch by Jameson Nash!
llvm-svn: 208029
On x64, windows.h doesn't include intrin.h for intrinsics. It just
declares them in the global namespace and uses them, expecting the
compiler to lower it as a builtin. We basically need to do this in
clang, eventually.
llvm-svn: 208023
The number of tail call to loop conversions remains the same (1618 by my count).
The new algorithm does a local scan over the use-def chains to identify local "alloca-derived" values, as well as points where the alloca could escape. Then, a visit over the CFG marks blocks as being before or after the allocas have escaped, and annotates the calls accordingly.
llvm-svn: 208017
Summary:
Also ran clang-format on the function. The code added is the last else
if block.
Reviewers: nadav, craig.topper
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D3518
llvm-svn: 207992
Visibility is meaningless when the linkage is local. Change
`-internalize` to reset the visibility to `default`.
<rdar://problem/16141113>
llvm-svn: 207979
Tested that the right -target-cpu is set in the clang -cc1 command line
when running "clang -march=native -E -v - </dev/null" on both an FX-8150
and an FX-8350. Both are family 15h; the FX-8150 (Bulldozer processor)
reports a model number of 1, and the FX-8350 (Piledriver processor)
reports a model number of 2.
llvm-svn: 207973
Windows on ARM does not conform to AEABI. However, memset would be emitted
using the AEABI signature, resulting in inverted parameters. Handle this
special case appropriately.
llvm-svn: 207943
Add handling for FK_SecRel_4 (4-byte section relative relocations). These are
used by the generation of DWARF debug information (the abbreviations use section
relative relocations). This will also be used in the generation of CodeView line
tables.
llvm-svn: 207941
Otherwise we use the same threshold as for complete unrolling, which is
way too high. This made us unroll any loop smaller than 150 instructions
by 8 times, but only if someone specified -march=core2 or better,
which happens to be the default on darwin.
llvm-svn: 207940
We can't assume a vectorized tree is rooted in an instruction. The IRBuilder
could have constant folded it. When we rebuild the build_vector (the series of
InsertElement instructions), use the last original InsertElement instruction. The
vectorized tree root is guaranteed to be before it.
Also, we can't assume that the n-th InsertElement inserts the n-th element into
a vector.
This reverts r207746 which reverted the revert of the revert of r205018 or so.
Fixes the test case in PR19621.
llvm-svn: 207939
operations on the call graph. This one forms a cycle, and while not as
complex as removing an internal edge from an SCC, it involves
a reasonable amount of work to find all of the nodes newly connected in
a cycle.
Also somewhat alarming is the worst case complexity here: it might have
to walk roughly the entire SCC inverse DAG to insert a single edge. This
is carefully documented in the API (I hope).
llvm-svn: 207935
Both MinGW and cygwin (i686) construct export directives without the global
leader prefix. This is mostly due to the fact that they use GNU ld which does
not correctly handle the export directive. This apparently has been broken
for a while. However, this was recently reported as being broken by
mingwandroid and diorcety of the msys2 project.
Remove the global leader prefix if targeting MinGW or cygwin, otherwise, retain
the global leader prefix. Add an explicit test for cygwin's behaviour of export
directives.
llvm-svn: 207926
Create a helper function to generate the export directive. This was previously
duplicated inline to handle export directives for variables and functions. This
also enables the use of range-based iterators for the generation of the
directive rather than the traditional loops. NFC.
llvm-svn: 207925
This fix simply ensures that both metadata nodes are path-aware before
performing path-aware alias analysis.
This issue isn't normally triggered in LLVM, because we perform an autoupgrade
of the TBAA metadata to the new format when reading in LL or BC files. This
issue only appears when a client creates the IR manually and mixes old and new
TBAA metadata format.
This fixes <rdar://problem/16760860>.
llvm-svn: 207923
The fix itself is fairly simple: move getAccessVariant to MCValue so that we
replace the old weak expression evaluation with the far more general
EvaluateAsRelocatable.
This then requires that EvaluateAsRelocatable stop when it finds a non-trivial
reference kind. And that in turn requires the ELF writer to look
harder for weak references.
Last but not least, this found a case where we were being bug-for-bug
compatible with gas and accepting an invalid input. I reported pr19647
to track it.
llvm-svn: 207920
There is no point in creating it if we're not going to vectorize
anything. Creating the map is expensive as it creates large values.
No functionality change.
llvm-svn: 207916
which correspond to the current target read from the ELF file.
This fix cannot be tested until obj2yaml supports the ELF format.
llvm-svn: 207905
Committed initially in r207724-r207726 and reverted due to compiler-rt
crashes in r207732.
Instead, fix this harder with unordered_map and store the LexicalScopes
by value in the map. This did necessitate moving the definition of
LexicalScope above the definition of LexicalScopes.
Let's see how the buildbots/compilers tolerate unordered_map::emplace +
std::piecewise_construct + std::forward_as_tuple...
llvm-svn: 207876
There are public functions that mutate various members as well as
another private member already, so make all the members private to
avoid the discontinuity and add accessors for the values. Should
be no functional change.
llvm-svn: 207868
Reading line tables in llvm-cov was pretty broken, but would happen to
work as long as no line in the table was 0. It's not clear to me
whether a line of zero *should* show up in these tables, but deciding
to read a string in the middle of the line table is certainly the
wrong thing to do if it does.
I've also added some comments, as trying to figure out what this block
of code was doing was fairly unpleasant.
llvm-svn: 207866
This moves most of GlobalOpt's constructor optimization
code out of GlobalOpt into Transforms/Utils/CDtorUtils.{h,cpp}. The
public interface is a single function OptimizeGlobalCtorsList() that
takes a predicate returning which constructors to remove.
GlobalOpt calls this with a function that statically evaluates all
constructors, just like it did before. This part of the change is
behavior-preserving.
Also add a call to this from GlobalDCE with a filter that removes global
constructors that contain a "ret" instruction and nothing else – this
fixes PR19590.
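An illustrative use (the utility's exact signature is an assumption based on
this description, not copied from the new header):
#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Module.h"
// Assumed interface: the callback decides which constructors to remove.
bool OptimizeGlobalCtorsList(llvm::Module &M,
                             bool (*ShouldRemove)(llvm::Function *));
// GlobalDCE-style predicate: a constructor whose only block holds a lone 'ret'
// does nothing and can be dropped.
static bool isEmptyCtor(llvm::Function *F) {
  if (F->isDeclaration() || F->size() != 1)
    return false;
  llvm::BasicBlock &Entry = F->getEntryBlock();
  return Entry.size() == 1 && llvm::isa<llvm::ReturnInst>(Entry.getTerminator());
}
// Usage: OptimizeGlobalCtorsList(M, isEmptyCtor);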
llvm-svn: 207856
address to AnalyzeLoadFromClobberingLoad. This fixes a bug in load-PRE where
PRE is applied to a load that is not partially redundant.
<rdar://problem/16638765>.
llvm-svn: 207853
.file records are supposed to have a section identifier of 65534
(IMAGE_SCN_DEBUG) rather than 0. This is spelt out clearly within the PE/COFF
specification. Fix this minor oversight with the implementation for support for
.file records.
llvm-svn: 207851
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Samuel Li <samuel.li@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
llvm-svn: 207846
The register spiller assumes that only one new instruction is created
when spilling and restoring registers, so we need to emit pseudo
instructions for vector register spills and lower them after
register allocation.
v2:
- Fix calculation of lane index
- Extend VGPR liveness to end of program.
v3:
- Use SIMM16 field of S_NOP to specify multiple NOPs.
https://bugs.freedesktop.org/show_bug.cgi?id=75005
llvm-svn: 207843
Previously, LLVM had no knowledge that these instructions actually
modified their address register: fine if they never end up in CodeGen,
but when I'd rather like to write some patterns for them it becomes a
disaster.
The change is mostly straightforward, I think the most significant
design decision was to *always* put the address write-back first. This
allows loads and stores to be accessed more uniformly, for example
permitting the continued sharing of the InstAlias definitions.
I also discovered that the custom Decode logic is no longer needed, so
I removed it.
No tests, because there should be no functionality change.
llvm-svn: 207839
While post-indexed LD1/ST1 instructions do exist for vector loads,
this patch makes use of the more flexible addressing-modes in LDR/STR
instructions.
llvm-svn: 207838
Given the following C code, LLVM currently generates suboptimal code for
x86-64:
__m128 bss4( const __m128 *ptr, size_t i, size_t j )
{
float f = ptr[i][j];
return (__m128) { f, f, f, f };
}
=================================================
define <4 x float> @_Z4bss4PKDv4_fmm(<4 x float>* nocapture readonly %ptr, i64 %i, i64 %j) #0 {
%a1 = getelementptr inbounds <4 x float>* %ptr, i64 %i
%a2 = load <4 x float>* %a1, align 16, !tbaa !1
%a3 = trunc i64 %j to i32
%a4 = extractelement <4 x float> %a2, i32 %a3
%a5 = insertelement <4 x float> undef, float %a4, i32 0
%a6 = insertelement <4 x float> %a5, float %a4, i32 1
%a7 = insertelement <4 x float> %a6, float %a4, i32 2
%a8 = insertelement <4 x float> %a7, float %a4, i32 3
ret <4 x float> %a8
}
=================================================
shlq $4, %rsi
addq %rdi, %rsi
movslq %edx, %rax
vbroadcastss (%rsi,%rax,4), %xmm0
retq
=================================================
The movslq is unneeded, but is present because of the trunc to i32 and then
sext back to i64 that the backend adds for vbroadcastss.
We can't remove it because it changes the meaning. The IR that clang
generates is already suboptimal. What clang really should emit is:
%a4 = extractelement <4 x float> %a2, i64 %j
This patch makes that legal. A separate patch will teach clang to do it.
Differential Revision: http://reviews.llvm.org/D3519
llvm-svn: 207801
This creates a lot of core infrastructure in which to add, with little
effort, quite a bit more to mips fast-isel.
Test Plan: simplestore.ll
Reviewers: dsanders
Reviewed By: dsanders
Differential Revision: http://reviews.llvm.org/D3527
llvm-svn: 207790
This was initialized by llvm-mc (calling setDwarfVersion), but other
clients (such as clang, llc, etc.) aren't necessarily initializing it,
so we were getting garbage DWARF version values in the output.
Initialize it to a reasonable default (the same default used in llvm-mc,
though this is higher than it was (2) previously).
llvm-svn: 207788
This optimization merges the common part of a group of GEPs, so we can compute
each pointer address by adding a simple offset to the common part.
The optimization is currently only enabled for the NVPTX backend, where it has
a large payoff on some benchmarks.
Review: http://reviews.llvm.org/D3462
Patch by Jingyue Wu.
llvm-svn: 207783
This is the LLVM portion that will allow Clang and other frontends to emit
typedefs of void by providing a null type for the typedef's underlying
type.
llvm-svn: 207777
We currently force symbols to be globals in .thumb_set. The intent
seems to be that given
.thumb_set foo, bar
we emit an undefined symbol to bar if it is never defined. The side
effect is that we mark bar as global, even if it is defined, which gas
does not.
Producing an undefined reference to bar is a general difference from MC and gas.
For example, given
a = b
gas will produce an undefined reference to b, MC will not. I would be surprised
if any code depends on this, but if it does, we should fix the general
difference, not special-case .thumb_set.
llvm-svn: 207757
The canonical form of the BFM instruction is always one of the more explicit
extract or insert operations, which makes reading output much easier.
llvm-svn: 207752
just connects an SCC to one of its descendants directly. Not much of an
impact. The last one is the hard one -- connecting an SCC to one of its
ancestors, and thereby forming a cycle such that we have to merge all
the SCCs participating in the cycle.
llvm-svn: 207751
of SCCs in the SCC DAG. Exercise them in the big graph test case. These
will be especially useful for establishing invariants in insertion
logic.
llvm-svn: 207749
=[
Turns out that this was the root cause of PR19621. We found a crasher
only recently (likely due to improvements elsewhere in the SLP
vectorizer) but the reduced test case failed all the way back to here.
I've confirmed that reverting this patch both fixes the reduced test
case in PR19621 and the actual source file that led to it, so it seems
to really be rooted here. I've replied to the commit thread with
discussion of my (feeble) attempts to debug this. Didn't make it very
far, so reverting now that we have a good test case so that things can
get back to healthy while the debugging carries on.
llvm-svn: 207746
Summary:
There are two functional changes:
1) The directive is not expanded for the ASM->ASM code path.
2) If PIC is not set, there's no expansion for the ASM->OBJ code path (same behaviour as GAS).
Reviewers: dsanders
Reviewed By: dsanders
Differential Revision: http://reviews.llvm.org/D3482
llvm-svn: 207741
This fixes the memory leak introduced with the initial addition of support for
WoA stack probing. Now that the pseudo-instruction expansion can handle an
external symbol, use that to generate the load which simplifies the logic as
well as avoids the memory leak.
llvm-svn: 207737
This enhances the expansion of the mov32imm pseudo-instruction to support an
external symbol reference. This is motivated by a simplification of the stack
probe emission for Windows on ARM (and fixing a leak).
llvm-svn: 207736
This makes the coff writer compute the correct symbol value for the test in
pr19147. The section is still incorrect, that will be fixed in a followup patch.
llvm-svn: 207728
Summary:
Get rid of UserVariables set, and turn DbgValues into MapVector
to get a fixed ordering, as suggested in review for http://reviews.llvm.org/D3573.
Test Plan: llvm regression tests
Reviewers: dblaikie
Reviewed By: dblaikie
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D3579
llvm-svn: 207720
Breaks GDB buildbot
(http://lab.llvm.org:8011/builders/clang-x86_64-ubuntu-gdb-75/builds/14517)
GCC emits DW_AT_object_pointer /everywhere/ (declaration, abstract
definition, inlined subroutine), but it looks like GCC relies on it
being somewhere other than the declaration, at least. I'll experiment
further & can hopefully still remove it from the inlined_subroutine.
This reverts commit r207705.
llvm-svn: 207719
They just don't need to be there - they're inherited from the abstract
definition. In theory I would like them to be inherited from the
declaration, but the DWARF standard doesn't quite say that... we can
probably do it anyway but I'm less confident about that so I'll leave it
for a separate commit.
llvm-svn: 207717
Not all address-taken blocks get inlined. The reason is
that a block's new address is known only when it is cloned. But e.g.
a branch instruction in a different block could need that address earlier,
while it is being cloned. The solution is to collect the set of all
blocks that can potentially get inlined and compute a new block address
up front. Then clone and cleanup.
rdar://16427209
llvm-svn: 207713
This effectively reverts r164326, but adds some comments and
justification and ensures we /don't/ emit the DW_AT_object_pointer on
the (abstract and concrete) definitions. (while still preserving it on
standalone definitions involving ObjC Blocks)
This does increase the size of member function declarations from 7 to 11
bytes, unfortunately, but still seems like the Right Thing to do so that
callers that see only the declaration still have the information about
the object pointer. That said, I don't know what, if any, DWARF
consumers don't have a heuristic to guess this in the case of normal
C++ member functions - perhaps we can remove it entirely.
llvm-svn: 207705
For a pattern like ((x >> C1) & Mask) << C2, the DAG combiner may convert it
into (x >> (C1-C2)) & (Mask << C2), which makes pattern matching of ubfx
more difficult.
For example:
Given
%shr = lshr i64 %x, 4
%and = and i64 %shr, 15
%arrayidx = getelementptr inbounds [8 x [64 x i64]]* @arr, i64 0, i64 2, i64 %and
%0 = load i64* %arrayidx
With current shift folding, it takes 3 instrs to compute base address:
lsr x8, x0, #1
and x8, x8, #0x78
add x8, x9, x8
If using ubfx, it only needs 2 instrs:
ubfx x8, x0, #4, #4
add x8, x9, x8, lsl #3
This fixes bug 19589
llvm-svn: 207702
DwarfDebug.h has a SmallVector member containing a unique_ptr of an
incomplete type. MSVC doesn't have key functions, so the vtable and
dtor are emitted in AsmPrinter.cpp, where DwarfDebug's ctor is called.
AsmPrinter.cpp includes DwarfUnit.h and doesn't get a complete definition
of DwarfTypeUnit. We could fix the problem by including DwarfUnit.h in
DwarfDebug.h, but that would increase header bloat. Instead, define
~DwarfDebug out of line.
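The general C++ pattern being applied (illustrative types, not the actual
DwarfDebug code):
#include <memory>
#include <vector>
class Impl; // intentionally incomplete at this point ("header" side)
class Holder {
  std::vector<std::unique_ptr<Impl>> Items; // ok: deleter not instantiated yet
public:
  Holder();
  ~Holder(); // declared here, defined where Impl is complete
};
// "cpp" side: Impl is now complete, so unique_ptr's deleter (and the vector
// destructor) can be instantiated safely.
class Impl { /* full definition */ };
Holder::Holder() = default;
Holder::~Holder() = default;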
llvm-svn: 207701
There is no need to check if we want to hoist the immediate value of a
shift instruction. Simply return TCC_Free right away.
This change is like r206101, but for X86.
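A sketch of the rule (loosely modeled on the cost-model hook; not the exact
X86TTI code):
#include "llvm/ADT/APInt.h"
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/Instruction.h"
// The immediate operand of a shift is never worth hoisting, so report it as
// free before doing any further analysis.
static unsigned intImmCostSketch(unsigned Opcode, unsigned Idx,
                                 const llvm::APInt &Imm) {
  (void)Imm; // the value itself does not matter for shift amounts
  switch (Opcode) {
  case llvm::Instruction::Shl:
  case llvm::Instruction::LShr:
  case llvm::Instruction::AShr:
    if (Idx == 1) // the shift-amount operand
      return llvm::TargetTransformInfo::TCC_Free;
    break;
  }
  return llvm::TargetTransformInfo::TCC_Basic; // placeholder for the real cost
}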
rdar://problem/16190769
llvm-svn: 207692
Summary: negu $reg is equivalent to negu $reg, $reg.
Reviewers: dsanders
Reviewed By: dsanders
Differential Revision: http://reviews.llvm.org/D3510
llvm-svn: 207673
Summary:
The pattern sltu $r1, $r2, $imm, found in handwritten assembly, is
just a shorthand version of sltui $r1, $r2, $imm.
Reviewers: dsanders
Reviewed By: dsanders
Differential Revision: http://reviews.llvm.org/D3508
llvm-svn: 207671
We already do this for shstrtab, so might as well do it for strtab. This
extracts the string table building code into a separate class. The idea
is to use it for other object formats too.
I mostly wanted to do this for the general principle, but it does save a
little bit on object file size. I tried this on a clang bootstrap and
saved 0.54% on the sum of object file sizes (1.14 MB out of 212 MB for
a release build).
Differential Revision: http://reviews.llvm.org/D3533
llvm-svn: 207670
It's been decided that in the future, the floating-point immediate in
instructions like "fcmeq v0.2s, v1.2s, #0.0" will be canonically "0.0", which
has been implemented on AArch64 already but not ARM64.
This fixes that issue.
llvm-svn: 207666
Pretty straightforward: we weren't propagating whether or not an
AllocaInst had 'inalloca' marked on it when it came time to clone it.
The inliner exposed this bug. A reduced testcase is forthcoming.
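A minimal sketch of what has to be preserved (using the real AllocaInst
accessors; the actual fix lives in the instruction-cloning path):
#include "llvm/IR/Instructions.h"
// When duplicating an alloca, carry the 'inalloca' marker across so the clone
// keeps the same argument-passing semantics as the original.
static llvm::AllocaInst *cloneAlloca(const llvm::AllocaInst &AI) {
  auto *Clone = llvm::cast<llvm::AllocaInst>(AI.clone());
  Clone->setUsedWithInAlloca(AI.isUsedWithInAlloca());
  return Clone;
}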
llvm-svn: 207665