As a first step towards real little-endian code generation, this patch
changes the PowerPC MC layer to actually generate little-endian object
files. This involves passing the little-endian flag through the various
layers, including down to createELFObjectWriter so we actually get basic
little-endian ELF objects, emitting instructions in little-endian order,
and handling fixups and relocations as appropriate for little-endian.
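As an illustration of the byte-order handling (a minimal sketch, not the actual
MC emitter code; the helper name is made up):

  #include <cstdint>
  #include <vector>

  // Serialize one 32-bit PowerPC instruction word in the requested byte
  // order before appending it to the object-file contents.
  static void emitInstructionWord(uint32_t Inst, bool IsLittleEndian,
                                  std::vector<uint8_t> &Out) {
    for (unsigned I = 0; I != 4; ++I) {
      unsigned Shift = IsLittleEndian ? I * 8 : (3 - I) * 8;
      Out.push_back(uint8_t((Inst >> Shift) & 0xFF));
    }
  }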
The bulk of the patch is to update most test cases in test/MC/PowerPC
to verify both big- and little-endian encodings. (The only test cases
*not* updated are those that create actual big-endian ABI code, like
the TLS tests.)
Note that while the object files are now little-endian, the generated
code itself is not yet updated; in particular, it still does not adhere
to the ELFv2 ABI.
llvm-svn: 204634
Those patterns are used when the load cannot be folded into the related broadcast
during the select phase.
This happens when the load gets additional uses that were not anticipated during
the previous lowering phases (constant vector to constant load, then constant
load reused) or when the selection DAG is not able to prove that folding the load
will not create a cycle in the DAG.
<rdar://problem/16074331>
llvm-svn: 204631
This can be observed with the old testcase of CodeGen/X86/pr12312.ll:
47c47
< vorps %ymm0, %ymm1, %ymm0
---
> vorps %ymm1, %ymm0, %ymm0
97c97
< vorps %ymm1, %ymm0, %ymm0
---
> vorps %ymm0, %ymm1, %ymm0
The vector VecIns is populated with all the values from VecInMap, and this is
done while iterating VecInMap. VecInMap hashes pointer values, so the
resulting order can vary depending on the memory layout.
The fix is to populate VecIns earlier, as VecInMap is being populated, which
follows DAG traversal order.
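A minimal sketch of the idea (illustrative only, not the actual DAG combiner
code):

  #include <map>
  #include <vector>

  // Record each node in VecIns the first time it is seen during traversal,
  // so the vector's order depends on the traversal and not on the iteration
  // order of the pointer-keyed map.
  template <typename NodeT>
  void recordVecIn(NodeT *N, std::map<NodeT *, unsigned> &VecInMap,
                   std::vector<NodeT *> &VecIns) {
    if (VecInMap.insert({N, unsigned(VecIns.size())}).second)
      VecIns.push_back(N);
  }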
Fixes <rdar://problem/16398806>
llvm-svn: 204623
[PPC64LE] ELFv2 ABI updates for the .opd section
The PPC64 Little Endian (PPC64LE) target supports the ELFv2 ABI, and as
such, does not have a ".opd" section. This is keyed off a _CALL_ELF=2
macro check.
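An illustrative sketch of such a check (not copied from the patch):

  // Decide at compile time whether the target ABI uses function descriptors
  // in an .opd section; ELFv2 (selected by _CALL_ELF == 2) does not.
  static bool abiHasOpdSection() {
  #if defined(_CALL_ELF) && _CALL_ELF == 2
    return false; // ELFv2: no .opd section
  #else
    return true;  // ELFv1: function descriptors live in .opd
  #endif
  }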
The _CALL_ELF check is not clearly documented at this time. The basis
for usage in this patch is from the gcc thread here:
http://gcc.gnu.org/ml/gcc-patches/2013-11/msg01144.html
> Adding comment from Uli:
Looks good to me. I think the old-style JIT doesn't really work
anyway for 64-bit, but at least with this patch LLVM will compile
and link again on a ppc64le host ...
llvm-svn: 204614
I'm under the impression that we used to infer the isCommutable flag from the
instruction-associated pattern. Regardless, we don't seem to do this (at least
by default) any more. I've gone through all of our instruction definitions, and
marked as commutative all of those that should be trivial to commute (by
exchanging the first two operands). There is special code for the RL*
instructions, and that has not changed.
Before this change, we had the following commutative instructions:
RLDIMI
RLDIMIo
RLWIMI
RLWIMI8
RLWIMI8o
RLWIMIo
XSADDDP
XSMULDP
XVADDDP
XVADDSP
XVMULDP
XVMULSP
After:
ADD4
ADD4o
ADD8
ADD8o
ADDC
ADDC8
ADDC8o
ADDCo
ADDE
ADDE8
ADDE8o
ADDEo
AND
AND8
AND8o
ANDo
CRAND
CREQV
CRNAND
CRNOR
CROR
CRXOR
EQV
EQV8
EQV8o
EQVo
FADD
FADDS
FADDSo
FADDo
FMADD
FMADDS
FMADDSo
FMADDo
FMSUB
FMSUBS
FMSUBSo
FMSUBo
FMUL
FMULS
FMULSo
FMULo
FNMADD
FNMADDS
FNMADDSo
FNMADDo
FNMSUB
FNMSUBS
FNMSUBSo
FNMSUBo
MULHD
MULHDU
MULHDUo
MULHDo
MULHW
MULHWU
MULHWUo
MULHWo
MULLD
MULLDo
MULLW
MULLWo
NAND
NAND8
NAND8o
NANDo
NOR
NOR8
NOR8o
NORo
OR
OR8
OR8o
ORo
RLDIMI
RLDIMIo
RLWIMI
RLWIMI8
RLWIMI8o
RLWIMIo
VADDCUW
VADDFP
VADDSBS
VADDSHS
VADDSWS
VADDUBM
VADDUBS
VADDUHM
VADDUHS
VADDUWM
VADDUWS
VAND
VAVGSB
VAVGSH
VAVGSW
VAVGUB
VAVGUH
VAVGUW
VMADDFP
VMAXFP
VMAXSB
VMAXSH
VMAXSW
VMAXUB
VMAXUH
VMAXUW
VMHADDSHS
VMHRADDSHS
VMINFP
VMINSB
VMINSH
VMINSW
VMINUB
VMINUH
VMINUW
VMLADDUHM
VMULESB
VMULESH
VMULEUB
VMULEUH
VMULOSB
VMULOSH
VMULOUB
VMULOUH
VNMSUBFP
VOR
VXOR
XOR
XOR8
XOR8o
XORo
XSADDDP
XSMADDADP
XSMAXDP
XSMINDP
XSMSUBADP
XSMULDP
XSNMADDADP
XSNMSUBADP
XVADDDP
XVADDSP
XVMADDADP
XVMADDASP
XVMAXDP
XVMAXSP
XVMINDP
XVMINSP
XVMSUBADP
XVMSUBASP
XVMULDP
XVMULSP
XVNMADDADP
XVNMADDASP
XVNMSUBADP
XVNMSUBASP
XXLAND
XXLNOR
XXLOR
XXLXOR
This is a by-inspection change, and I'm not sure how to write a reliable test
case. I would like advice on this, however.
llvm-svn: 204609
Summary:
- If only two registers are passed to a three-register operation, then the
first argument is used as both the source and the destination register.
- If a non-register is passed as the last argument, generate the immediate
version of the instruction.
Also mark DADD commutative and add scheduling information (to the generic
scheduler), and implement DSUB.
Patch by David Chisnall
His work was sponsored by: DARPA, AFRL
CC: theraven
Differential Revision: http://llvm-reviews.chandlerc.com/D3148
llvm-svn: 204605
I've done some experimentation with this, and it looks like using the
lower-latency (but lower throughput) copy instruction is essentially always the
right thing to do.
My assumption is that, in order to be relatively sure that the higher-latency
copy will increase throughput, we'd want to have it unlikely to be in-flight
with its use. On the P7, the global completion table (GCT) can hold a maximum
of 120 instructions, shared among all active threads (up to 4), giving 30
instructions per thread. So specifically, I'd require at least that many
instructions between the copy and the use before the high-latency variant is
used.
Trying this, however, over the entire test suite resulted in zero cases where
the high-latency form would be preferable. This may be a consequence of the
fact that the scheduler views copies as free, and so they tend to end up close
to their uses. For this experiment I created a function:
unsigned chooseVSXCopy(MachineBasicBlock &MBB,
                       MachineBasicBlock::iterator I,
                       unsigned DestReg, unsigned SrcReg,
                       unsigned StartDist = 1,
                       unsigned Depth = 3) const;
with an implementation like:
  if (!Depth)
    return PPC::XXLOR;

  const unsigned MaxDist = 30;
  unsigned Dist = StartDist;
  for (auto J = I, JE = MBB.end(); J != JE && Dist <= MaxDist; ++J) {
    if (J->isTransient() && !J->isCopy())
      continue;

    if (J->isCall() || J->isReturn() || J->readsRegister(DestReg, TRI))
      return PPC::XXLOR;

    ++Dist;
  }

  // We've exceeded the required distance for the high-latency form, use it.
  if (Dist > MaxDist)
    return PPC::XVCPSGNDP;

  // If this is only an exit block, use the low-latency form.
  if (MBB.succ_empty())
    return PPC::XXLOR;

  // We've reached the end of the block, check the successor blocks (up to some
  // depth), and use the high-latency form if that is okay with all successors.
  for (auto J = MBB.succ_begin(), JE = MBB.succ_end(); J != JE; ++J) {
    if (chooseVSXCopy(**J, (*J)->begin(), DestReg, SrcReg,
                      Dist, --Depth) == PPC::XXLOR)
      return PPC::XXLOR;
  }

  // All of our successor blocks seem okay with the high-latency variant, so
  // we'll use it.
  return PPC::XVCPSGNDP;
and then changed the copy opcode selection from:
  Opc = PPC::XXLOR;
to:
  Opc = chooseVSXCopy(MBB, std::next(I), DestReg, SrcReg);
In conclusion, I'm removing the FIXME from the comment, because I believe that
there is, at least absent other examples, nothing to fix.
llvm-svn: 204591
This is a pretty straightforward translation for COFF: we just need to
stick the function in a COMDAT section marked as
IMAGE_COMDAT_SELECT_NODUPLICATES.
llvm-svn: 204565
When VSX is available, these instructions should be used in preference to the
older variants that only have access to the scalar floating-point registers.
llvm-svn: 204559
Since the profile can come from 32-bit machines, we need to check the
pointer size. Change the magic number to facilitate this.
Adds tests for reading 32-bit and 64-bit binaries (both big- and
little-endian). The tests write a binary using printf in RUN lines
(like raw-magic-but-no-header.test). Assuming the bots don't complain,
this seems like a better way forward for testing RawInstrProfReader than
committing binary files.
<rdar://problem/16400648>
llvm-svn: 204557
This is similar, but not identical, to what GAS does. The logic in MC is to just
compute the symbol table after parsing the entire file. GAS's behavior is mixed. Given
.type b, @object
a = b
b:
.type b, @function
It will propagate the change and make 'a' a function. Given
.type b, @object
b:
a = b
.type b, @function
the type of 'a' is still object.
Since we do the computation in the end, we produce a function in both cases.
llvm-svn: 204555
When a label is parsed, check if there is type information available for the
label. If so, check if the symbol is a function. If the symbol is a function
and we are in thumb mode and no explicit thumb_func has been emitted, adjust the
symbol data to indicate that the function definition is a thumb function.
The effect of this inference is improved value handling in the object
file (the required thumb bit is set on symbols which are thumb functions). It
also helps improve compatibility with binutils.
The one complication that arises from this handling is the MCAsmStreamer. The
default implementation of getOrCreateSymbolData in MCStreamer does not support
tracking the symbol data. In order to support the semantics of thumb functions,
track symbol data in the assembly streamer. Although this is O(n) in the number
of labels in the TU, it is already done in various other streamers, and as such
the memory overhead is not a practical concern in this scenario.
llvm-svn: 204544
The cleanup code that removes dead cast instructions only removed them from the
basic block, but didn't delete them. This fix erases them now too.
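For reference, the distinction in the IR API (a minimal sketch):

  #include "llvm/IR/Instruction.h"

  // removeFromParent() only unlinks an instruction from its basic block;
  // eraseFromParent() unlinks it and deletes it, which is what a dead cast
  // needs here.
  static void dropDeadCast(llvm::Instruction *Cast) {
    if (Cast->use_empty())
      Cast->eraseFromParent();
  }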
llvm-svn: 204538
A PHI node usually has only one value/basic block pair per incoming basic block.
In the case of a switch statement it is possible that a following PHI node may
have more than one such pair per incoming basic block. E.g.:
%0 = phi i64 [ 123456, %case2 ], [ 654321, %Entry ], [ 654321, %Entry ]
This is valid and the verifier doesn't complain, because both values are the
same.
Constant hoisting materializes the constant for each operand separately; the
value is still the same, but the variable names have changed. As a result the
verifier can no longer recognize that they are the same value and complains.
This fix adds special update code for PHI nodes in constant hoisting to prevent
this corner case.
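A hedged sketch of the PHI-specific update (illustrative, not the committed
code):

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/Instructions.h"

  // When replacing a hoisted constant in a PHI node, visit every incoming
  // slot individually; the same incoming block (and value) may appear more
  // than once, and each slot must be rewritten consistently.
  static void rewritePHIUses(llvm::PHINode *PN, llvm::Constant *Orig,
                             llvm::Value *Mat) {
    for (unsigned I = 0, E = PN->getNumIncomingValues(); I != E; ++I)
      if (PN->getIncomingValue(I) == Orig)
        PN->setIncomingValue(I, Mat);
  }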
This fixes <rdar://problem/16394449>
llvm-svn: 204537
This patch renames method 'isConstantSplat' as 'getConstantSplatValue'
(mainly for consistency reasons), and rewrites its logic to ensure
that we always perform a legal 'cast<ConstantSDNode>'.
Added test shift-combine-crash.ll to verify that DAGCombiner no longer crashes with an assertion failure in the attempt to simplify a vector shift by a vector of all undef counts.
llvm-svn: 204536
We make sure a spill is not hoisted to a hotter outer loop by adding
a condition: hoist a spill to an outer loop only if there are multiple dependents
(hoisting can be beneficial when more than one dependent is hoisted) or
if DepSV (the hoisting source) is hotter than SV (the hoisting destination).
rdar://16268194
llvm-svn: 204522
Type units have no addresses, so there's no need for DW_AT_addr_base.
This removes another relocation from every skeletal type unit and brings
LLVM's skeletal type units in line with GCC's (containing only
GNU_dwo_name (strp), comp_dir (strp), and GNU_pubnames (flag_present)).
Cary's got some ideas about using str_index in the .o file to reduce
those last two relocations (well, replace two relocations with one
relocation (pointing to the string index) and two indices)
llvm-svn: 204506
Previously, only regular AArch64 instructions were annotated with SchedRW lists.
This patch does the same for NEON, enabling these instructions to be scheduled by
the MIScheduler. Additionally, store operations are now modeled and a few
SchedRW lists were updated for bug fixes (e.g. multiple def operands).
Reviewers: apazos, mcrosier, atrick
Patch by Dave Estes <cestes@codeaurora.org>!
llvm-svn: 204505
Read a raw binary profile that corresponds to a memory dump from the
runtime profile.
The test is a binary file generated from
cfe/trunk/test/Profile/c-general.c with the new compiler-rt runtime and
the matching text version of the input. It includes instructions on how
to regenerate.
<rdar://problem/15950346>
llvm-svn: 204496
This isn't a format we'll want to write out in practice, but moving it
to the writer library simplifies llvm-profdata and isolates it from
further changes to the format.
This also allows us to update the tests to not rely on the text output
format.
llvm-svn: 204489
This introduces the ProfileData library and updates llvm-profdata to
use this library for reading profiles. InstrProfReader is an abstract
base class that will be subclassed for both the raw instrprof data
from compiler-rt and the efficient instrprof format that will be used
for PGO.
llvm-svn: 204482
Summary:
VECTOR_SHUFFLE concatenates the vectors in a vectorwise fashion.
<0b00, 0b01> + <0b10, 0b11> -> <0b00, 0b01, 0b10, 0b11>
VSHF concatenates the vectors in a bitwise fashion:
  <0b00, 0b01> + <0b10, 0b11> ->
  0b0100 + 0b1110 -> 0b01001110
  = <0b10, 0b11, 0b00, 0b01>
We must therefore swap the operands to get the correct result.
The test case that discovered the issue was MultiSource/Benchmarks/nbench.
Reviewers: matheusalmeida
Reviewed By: matheusalmeida
Differential Revision: http://llvm-reviews.chandlerc.com/D3142
llvm-svn: 204480
The SReg_(32|64) register classes contain special registers in addition
to the numbered SGPRs. This can lead to machine verifier errors when
these register classes are used as sub-registers for SReg_128, since
SReg_128 only uses the numbered SGPRs.
Replacing SReg_(32|64) with SGPR_(32|64) fixes this problem, since
the SGPR_(32|64) register classes contain only numbered SGPRs.
Test cases for this are coming in a later commit.
llvm-svn: 204474
...instead of a separate Requires for each one. This style was already
used in some places and seems more compact.
No behavioral change intended.
llvm-svn: 204452
Some targets require more than one relocation entry to perform a relocation.
This change allows processRelocationRef to process more than one relocation
entry at a time by passing the relocation iterator itself instead of just
the relocation entry.
Related to <rdar://problem/16199095>
llvm-svn: 204439
Extend the target hook to take also the operand index into account when
calculating the cost of the constant materialization.
Related to <rdar://problem/16381500>
llvm-svn: 204435
Originally the algorithm would search for expensive constants and track their
users, which could be instructions and constant expressions. This change only
tracks the constants for instructions, but constant expressions are indirectly
covered too. If an operand is a constant expression, then we look through the
expression to find any expensive constants.
The algorithm now keeps track of the instruction and the operand index where the
constant is used. This allows more precise hoisting of constant materialization
code for PHI instructions, because we only hoist to the basic block of the
incoming operand. Before we had to find the idom of all PHI operands and hoist
the materialization code there.
This also makes updating of instructions easier. Before we had to keep track of
the original constant, find it in the instructions, and then replace it. Now we
can just simply update the operand.
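A rough sketch of the bookkeeping (the cost check is a placeholder, not the
pass's real cost model):

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/Instruction.h"
  #include <vector>

  struct ConstantUse {
    llvm::Instruction *Inst; // user instruction
    unsigned OpndIdx;        // which operand holds the constant
  };

  // Placeholder cost model: treat wide constants as expensive.
  static bool isExpensive(const llvm::ConstantInt *C) {
    return C->getValue().getActiveBits() > 16;
  }

  // Record (instruction, operand index) pairs, looking through a constant
  // expression to reach the expensive constant it uses (simplified to one
  // level here).
  static void collectUses(llvm::Instruction *I,
                          std::vector<ConstantUse> &Uses) {
    for (unsigned Idx = 0, E = I->getNumOperands(); Idx != E; ++Idx) {
      llvm::Value *Op = I->getOperand(Idx);
      if (auto *CE = llvm::dyn_cast<llvm::ConstantExpr>(Op))
        Op = CE->getOperand(0);
      if (auto *CI = llvm::dyn_cast<llvm::ConstantInt>(Op))
        if (isExpensive(CI))
          Uses.push_back({I, Idx});
    }
  }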
Related to <rdar://problem/16381500>
llvm-svn: 204433
This simplifies working with the constant candidates and removes the tight
coupling between the map and the vector.
Related to <rdar://problem/16381500>
llvm-svn: 204431
.data_region is only used on Darwin, so it shouldn't be generated
for other OSes. Currently AArch64 doesn't support Darwin yet, so
I removed it from AArch64. When Darwin is supported someday, we can
add it back and associate it with Darwin.
llvm-svn: 204424
The NumberOfRelocations field in the COFF section table is only 16 bits wide. If
an object has more than 65535 relocations, the number of relocations is stored
in the VirtualAddress field of the first relocation entry, and a special flag
(IMAGE_SCN_LNK_NRELOC_OVFL) is set in the Characteristics field.
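A sketch of reading the count under that rule (illustrative; the structure and
flag value follow the COFF spec):

  #include <cstdint>

  struct CoffRelocation {
    uint32_t VirtualAddress;
    uint32_t SymbolTableIndex;
    uint16_t Type;
  };

  // When IMAGE_SCN_LNK_NRELOC_OVFL is set, the 16-bit NumberOfRelocations
  // field is not the real count; the count lives in the VirtualAddress field
  // of the first relocation entry.
  static uint32_t relocationCount(uint16_t NumberOfRelocations,
                                  uint32_t Characteristics,
                                  const CoffRelocation *FirstReloc) {
    const uint32_t IMAGE_SCN_LNK_NRELOC_OVFL = 0x01000000;
    if (Characteristics & IMAGE_SCN_LNK_NRELOC_OVFL)
      return FirstReloc->VirtualAddress;
    return NumberOfRelocations;
  }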
In the test we cheated a bit: the test file has the IMAGE_SCN_LNK_NRELOC_OVFL
flag set, but the number of relocations is much smaller than 65535. This is to
avoid checking in a large test file just to test a file with many relocations.
Differential Revision: http://llvm-reviews.chandlerc.com/D3139
llvm-svn: 204418
Since MBB->computeRegisterLiveness() returns Dead for subregs like s0,
d0 is used in vpop instead of updating sp, which clobbers s0 before
its use.
This patch checks the liveness of each subreg to make sure the reg is
actually dead.
llvm-svn: 204411
The function exists to force an expression to be absolute, but it is not
possible to force a symbol reference, since
a = b
.long a
means something else.
This is an alternative fix for pr9951 that uses an assert. It then deletes
the old pr9951 test, which was no longer testing anything.
llvm-svn: 204399
RTDyldMemoryManager, regardless of whether it thinks they're "required for
execution".
Currently, RuntimeDyld only passes sections that are "required for execution"
to the RTDyldMemoryManager, and takes "required for execution" to mean exactly
"contains symbols or relocations". There are two problems with this:
(1) It can drop sections with anonymous data that is referenced by code.
(2) It leaves the JIT client no way to inspect interesting sections that aren't
actually required to run the program (e.g. DWARF sections).
A test case is still in the works.
Future work: We may want to replace this with a generic section filtering
mechanism, but that will require more consideration. For now, this flag at least
allows clients to volunteer to do the filtering themselves.
Fixes <rdar://problem/15177691>.
llvm-svn: 204398
This commit extends the coverage of the constant hoisting pass, adds additional
debug output and updates the function names according to the style guide.
Related to <rdar://problem/16381500>
llvm-svn: 204389
This option caused LowerInvoke to generate code using SJLJ-based
exception handling, but there is no code left that interprets the
jmp_buf stack that the resulting code maintained (llvm.sjljeh.jblist).
This option has been obsolete for a while, and replaced by
SjLjEHPrepare.
This leaves the default behaviour of LowerInvoke, which is to convert
invokes to calls.
Differential Revision: http://llvm-reviews.chandlerc.com/D3136
llvm-svn: 204388
Use the range machinery for DW_AT_ranges and DW_AT_high/lo_pc.
This commit moves us from a single range per subprogram to extending
ranges if we are:
a) In the same section, and
b) In the same enclosing CU.
This means we have more fine grained ranges for compile units, and fewer
ranges overall when we have multiple functions in the same CU
adjacent to each other in the object file.
Also remove all of the earlier hacks around this functionality for
function sections etc. Also update all of the testcases to take into
account the merging functionality.
with a fix for location entries in the debug_loc section:
Make sure that debug loc entries are relative to the low_pc
of the compile unit. This means that when we only have a single
range that the offset should be just relative to the low_pc
of the unit, for multiple ranges for a CU this means that we'll be
relative to 0 which we emit along with DW_AT_ranges.
This mostly shows up with linked binaries, so add a testcase with
multiple CUs so that our location is going to be offset of a CU
with a non-zero low_pc.
llvm-svn: 204377
The Octeon cpu from Cavium Networks is mips64r2 based and has an extended
instruction set. In order to utilize this with LLVM, a new cpu feature "octeon"
and a subtarget feature "cnmips" are added. A small set of new instructions
(baddu, dmul, pop, dpop, seq, sne) is also added. LLVM generates dmul, pop and
dpop instructions with option -mcpu=octeon or -mattr=+cnmips.
llvm-svn: 204337
Given
bar = foo + 4
.long bar
MC would eat the 4. GNU as includes it in the relocation. The rule seems to be
that a variable that defines a symbol is used in the relocation and one that
does not define a symbol is evaluated and the result included in the relocation.
Fixing this unfortunately required some other changes:
* Since the variable is now evaluated, it would prevent the ELF writer from
noticing the weakref marker the elf streamer uses. This patch then replaces
that with a VariantKind in MCSymbolRefExpr.
* Using VariantKind then requires us to look past other VariantKind to see
.weakref bar,foo
call bar@PLT
doing this also fixes
zed = foo +2
call zed@PLT
so that is a good thing.
* Looking past VariantKind means that the relocation selection has to use
the fixup instead of the target.
This is a reboot of the previous fixes for MC. I will watch the sanitizer
buildbot and wait for a build before adding back the previous fixes.
llvm-svn: 204294
This commit moves us from a single range per subprogram to extending
ranges if we are:
a) In the same section, and
b) In the same enclosing CU.
This means we have more fine grained ranges for compile units, and fewer
ranges overall when we have multiple functions in the same CU
adjacent to each other in the object file.
Also remove all of the earlier hacks around this functionality for
function sections etc. Also update all of the testcases to take into
account the merging functionality.
llvm-svn: 204277
It isn't actually used now, and probably never will be, plus it makes
tests less annoying. I also think SC prints GDS instructions as a
separate instruction name.
llvm-svn: 204270
The current state of affairs has auxiliary symbols described as a big
bag of bytes. This is less than satisfying; it detracts from the YAML
file's readability.
Instead, allow for symbols to optionally contain their auxiliary data.
This allows us to have a much higher level way of describing things like
weak symbols, function definitions and section definitions.
This depends on D3105.
Differential Revision: http://llvm-reviews.chandlerc.com/D3092
llvm-svn: 204214
This isn't a complete fix - it falls back to non-comp_dir when multiple
compile units are in play. Adding a map of comp_dir to table is part of
the more general solution, but I gave up (in the short term) when I
realized I'd also have to calculate the size of each type unit so as to
produce correct DW_AT_stmt_list attributes.
llvm-svn: 204202
The use_iterator redesign in r203364 introduced an increment past the
end of a range in -objc-arc-contract. Added an explicit check for the
end of the range.
<rdar://problem/16333235>
llvm-svn: 204195
When deployment target version information is available, emit it to the
target streamer for inclusion in the object file.
rdar://11337778
llvm-svn: 204191
Allow object files to be tagged with a version-min load command for iOS
or MacOSX.
Teach macho-dump to understand the version-min load commands for
testcases.
rdar://11337778
llvm-svn: 204190
noise.
Original commit log:
Replace some dead code with an assert. When I first ported this pass
from a loop pass to a function pass I did so in the naive, recursive
way. It doesn't actually work, we need a worklist instead. When
I switched to the worklist I didn't delete the naive recursion. That
recursion was also buggy because it was dead and never really exercised.
llvm-svn: 204187
pass from a loop pass to a function pass I did so in the naive,
recursive way. It doesn't actually work, we need a worklist instead.
When I switched to the worklist I didn't delete the naive recursion.
That recursion was also buggy because it was dead and never really
exercised.
llvm-svn: 204184
For functions where esi is used as the base pointer, we would previously
refrain from lowering memcpy with "rep movs" because that clobbers esi.
With this patch, we just store esi in another physical register, and restore
it afterwards. This adds a little bit of register pressure, but the more
efficient memcpy should be worth it.
Differential Revision: http://llvm-reviews.chandlerc.com/D2968
llvm-svn: 204174
Summary:
SLP Vectorization of intrinsics (r203707) has exposed cases where the
expansion of vector bswap is failing (PR19151).
Reviewers: hfinkel
CC: chandlerc
Differential Revision: http://llvm-reviews.chandlerc.com/D3104
llvm-svn: 204163
Summary:
X86BaseInfo.h defines an enum for the offset of each operand in a memory operand
sequence. Some code uses it and some does not. This patch replaces (hopefully)
all remaining locations where an integer literal was used instead of this enum.
No functionality change intended.
Reviewers: nadav
CC: llvm-commits, t.p.northover
Differential Revision: http://llvm-reviews.chandlerc.com/D3108
llvm-svn: 204158
When converting a signed 32-bit integer to double-precision floating point on
hardware without a lfiwax instruction, we have to instead use a lfd followed
by fcfid. We were erroneously offsetting the address by 4 bytes in
preparation for either a lfiwax or lfiwzx when generating the lfd. This fixes
that silly error.
This was not caught in the test suite since the conversion tests were run with
-mcpu=pwr7, which implies availability of lfiwax. I've added another test
case for older hardware that checks the code we expect in the absence of
lfiwax and other flavors of fcfid. There are fewer tests in this test case
because we punt to DAG selection in more cases on older hardware. (We must
generate complex fiddly sequences in those cases, and there is marginal
benefit in duplicating that logic in fast-isel.)
llvm-svn: 204155
LLVM part of MSan implementation of advanced origin tracking,
when we record not only creation point, but all locations where
an uninitialized value was stored to memory, too.
llvm-svn: 204151
Summary:
The compiler does not always generate linkage names. If a function
has been inlined and its body elided, its linkage name may not be
generated.
When the binary executes, the profiler will use its unmangled name
when attributing samples. This results in unmangled names in the
input profile.
We are currently failing hard when this happens. However, in this case
all that happens is that we fail to attribute samples to the inlined
function. While this means fewer optimization opportunities, it should
not cause a compilation failure.
This patch accepts all valid function names, regardless of whether
they were mangled or not.
Reviewers: chandlerc
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D3087
llvm-svn: 204142
The revision I'm reverting breaks handling of transitive aliases. This blocks us
and breaks sanitizer bootstrap:
http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-bootstrap/builds/2651
(and checked locally by Alexey).
This revision is the result of:
svn merge -r204059:204058 -r204028:204027 -r203962:203961 .
+ the regression test added to test/MC/ELF/alias.s
Another way to reproduce the regression with clang:
$ cat q.c
void a1();
void a2() __attribute__((alias("a1")));
void a3() __attribute__((alias("a2")));
void a1() {}
$ ~/work/llvm-build/bin/clang-3.5-good -c q.c && mv q.o good.o && \
~/work/llvm-build/bin/clang-3.5-bad -c q.c && mv q.o bad.o && \
objdump -t good.o bad.o
good.o: file format elf64-x86-64
SYMBOL TABLE:
0000000000000000 l df *ABS* 0000000000000000 q.c
0000000000000000 l d .text 0000000000000000 .text
0000000000000000 l d .data 0000000000000000 .data
0000000000000000 l d .bss 0000000000000000 .bss
0000000000000000 l d .comment 0000000000000000 .comment
0000000000000000 l d .note.GNU-stack 0000000000000000 .note.GNU-stack
0000000000000000 l d .eh_frame 0000000000000000 .eh_frame
0000000000000000 g F .text 0000000000000006 a1
0000000000000000 g F .text 0000000000000006 a2
0000000000000000 g F .text 0000000000000006 a3
bad.o: file format elf64-x86-64
SYMBOL TABLE:
0000000000000000 l df *ABS* 0000000000000000 q.c
0000000000000000 l d .text 0000000000000000 .text
0000000000000000 l d .data 0000000000000000 .data
0000000000000000 l d .bss 0000000000000000 .bss
0000000000000000 l d .comment 0000000000000000 .comment
0000000000000000 l d .note.GNU-stack 0000000000000000 .note.GNU-stack
0000000000000000 l d .eh_frame 0000000000000000 .eh_frame
0000000000000000 g F .text 0000000000000006 a1
0000000000000000 g F .text 0000000000000006 a2
0000000000000000 g .text 0000000000000000 a3
llvm-svn: 204137
Add an assertion that a valid section is referenced. The potential NULL pointer
dereference was identified by the clang static analyzer.
llvm-svn: 204114
This allows us to catch more opportunities for ODR-based type uniquing
during LTO.
Paired commit with CFE which updates some testcases to verify the new
DIBuilder behavior.
llvm-svn: 204106
This removes an attribute (and more importantly, a relocation) from
skeleton type units and removes some unnecessary file names from the
debug_line section that remains in the .o (and linked executable) file.
There are still a few places where we could shave off some more space here:
* use compilation dir of the underlying compilation unit (since all the
type units share that compilation dir - though this would be more
complicated in LTO cases where they don't (keep a map of compilation
dir->line table header?))
* Remove some of the unnecessary header fields from the line table since
they're not needed in this situation (about 12 bytes per table).
llvm-svn: 204099
When emitting assembly there's no support for emitting separate line
tables for each compilation unit - so LLVM emits .loc directives
producing a single line table.
Line tables have an implicit directory (index 0) equal to the
compilation directory (DW_AT_comp_dir) of the compilation unit that
references them.
If multiple compilation units (with possibly disparate compilation
directories) reference the same line table, we must avoid relying on
this ambiguous directory.
Achieve this by simply not setting the compilation directory on the line
table when we're in this situation (multiple units while emitting
assembly).
llvm-svn: 204094
We still do a few lookups into the line table mapping in MCContext that
could be factored out into a single lookup (rather than looking it up
once for the table label, once to set the compilation unit, once for
each time we need a file ID, etc... ) but assembly output complicates
that somewhat as we still need a virtual dispatch back to the
MCAsmStreamer in that case.
llvm-svn: 204092
Our handling of compilation directory in DwarfDebug was broken
(incorrectly using the 'last' compilation directory (that of the last
CU in the metadata list) for all function emission in any CU). By moving
this handling down into MCDwarf the issue is fixed as the compilation
dir is tracked correctly per line table.
llvm-svn: 204089
When GlobalOpt has determined that a GlobalVariable only ever has two values,
it would convert the GlobalVariable to a boolean, and introduce SelectInsts
at every load, to choose between the two possible values. These SelectInsts
introduce overhead and other unpleasantness.
This patch makes GlobalOpt just add range metadata to loads from such
GlobalVariables instead. This enables the same main optimization (as seen in
test/Transforms/GlobalOpt/integer-bool.ll), without introducing selects.
The main downside is that it doesn't get the memory savings of shrinking such
GlobalVariables, but this is expected to be negligible.
llvm-svn: 204076
See r204027 for the precursor to this that applied to asm debug info.
This required some non-obvious API changes to handle the case of asm
output (we never go asm->asm so this didn't come up in r204027): the
modification of the file/directory name by MCDwarfLineTableHeader needed
to be reflected in the MCAsmStreamer caller so it could print the
appropriate .file directive, so those StringRef parameters are now
non-const ref (in/out) parameters rather than just const.
llvm-svn: 204069
It is unclear how it would be possible to get M to be NULL in normal scenarios.
Change this to an assert rather than a runtime check as per dblakie's
suggestion.
llvm-svn: 204060
This performs the equivalent of a .set directive in that it creates a symbol
which is an alias for another symbol or value which may possibly be yet
undefined. This directive also has the added property that it marks the
aliased symbol as being a thumb function entry point, in the same way that the
.thumb_func directive does.
The current implementation fails one test due to an unrelated issue. Functions
within .thumb sections are not marked as thumb_func. The result is that
the aliasee function is not valued correctly.
llvm-svn: 204059
Rather than LegalizeAction::Expand, this needs LegalizeAction::Promote to get
promoted to fp_to_sint v8f32->v8i32. This is a legal operation on AVX.
For that to work properly, we also need to teach the legalizer about the
specific promotion required here. The default vector promotion uses
bitcasting to a vector type of the same total size. We want to promote the
vector element type, effectively widening the operation and then truncating
the result. This is analogous to the current logic of how int_to_fp is
promoted.
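Conceptually (a per-lane, scalarized illustration, not the legalizer code), the
promoted conversion looks like this:

  #include <array>
  #include <cstdint>

  // v8f32 -> v8i16 fp_to_sint performed as a legal v8f32 -> v8i32 conversion
  // (the promoted, widened operation) followed by truncating each lane.
  static std::array<int16_t, 8>
  fptosi_v8f32_to_v8i16(const std::array<float, 8> &V) {
    std::array<int16_t, 8> Out;
    for (unsigned I = 0; I != 8; ++I) {
      int32_t Wide = static_cast<int32_t>(V[I]); // widened conversion
      Out[I] = static_cast<int16_t>(Wide);       // truncate the result
    }
    return Out;
  }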
The change also factors out some code from the int_to_fp promotion code to
ValueType::widenIntegerVectorElementType. This is now shared between
int_to_fp and fp_to_int.
There is no longer a need for the custom lowering of fp_to_sint f32->v8i16 in
X86. It can now go through the new target-independent fp_to_*int promotion
logic.
I also checked that no other target uses Promote for these ops yet, so there
shouldn't be any unexpected change in behavior.
Fixes <rdar://problem/16202247>
llvm-svn: 204058
The type of the immediates should not matter as long as the encoding is
equivalent to the encoding of one of the legal inline constants.
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 204056
This instruction writes to a 32-bit SGPR. This change required adding
the 32-bit VCC_LO and VCC_HI registers, because the full VCC register
is 64 bits.
This fixes verifier errors on several of the indirect addressing piglit
tests.
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 204055
The "noduplicate" attribute of call instructions is sometimes queried directly
and sometimes through the cannotDuplicate() predicate. This patch streamlines
all queries to use the cannotDuplicate() predicate. It also adds this predicate
to InvokeInst, to mirror what CallInst has.
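A sketch of the unified query after this change (assuming the new InvokeInst
predicate mirrors the existing CallInst::cannotDuplicate()):

  #include "llvm/IR/Instructions.h"

  // Prefer the predicate over reading the "noduplicate" attribute directly.
  static bool blocksDuplication(const llvm::Instruction &I) {
    if (auto *CI = llvm::dyn_cast<llvm::CallInst>(&I))
      return CI->cannotDuplicate();
    if (auto *II = llvm::dyn_cast<llvm::InvokeInst>(&I))
      return II->cannotDuplicate();
    return false;
  }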
llvm-svn: 204049
This is really a consistency fix. Since given
a = b
we propagate the information, we should propagate it too given
a = b + (1 - 1)
Fixes pr19145.
llvm-svn: 204028
The previous deduping strategy was woefully inadequate - it only
considered the most recent file used and avoided emitting a duplicate in
that case - never considering the a/b/a scenario.
It was also lacking when it came to directory paths as the previous
filename would never match the current if the filename had been split
into file and directory components.
This change builds caching functionality into the line table at the
lowest level in an optional form (a file number of 0 indicates that one
should be chosen and returned) and will eventually be reused by the
normal source level debugging DWARF emission.
llvm-svn: 204027
- Adds support for inserting vzerouppers before tail-calls.
This is enabled implicitly by having MachineInstr::copyImplicitOps preserve
regmask operands, which allows VZeroUpperInserter to see where tail-calls use
vector registers.
- Fixes a bug that caused the previous version of this optimization to miss some
vzeroupper insertion points in loops. (Loops-with-vector-code that followed
loops-without-vector-code were mistakenly overlooked by the previous version).
- New algorithm never revisits instructions.
Fixes <rdar://problem/16228798>
llvm-svn: 204021
Utilize the previous move of MVT to a separate header for all trivial
cases (that don't need any further restructuring).
Reviewed By: Tim Northover
llvm-svn: 204003
Since our error_category is based on the std one, we should have the
same visibility for the constructor. This also allows us to avoid
using the _do_message implementation detail in our own categories.
llvm-svn: 203998
based on the ODR.
This adds an OdrMemberMap to DwarfDebug which is used to unique C++
member function declarations based on the unique identifier of their
containing class and their mangled name.
We can't use the usual DIRef mechanism here because DIScopes are indexed
using their entire MDNode, including decl_file and decl_line, which need
not be unique (see testcase).
Prior to this change multiple redundant member function declarations would
end up in the same uniqued DW_TAG_class_type.
llvm-svn: 203982
For better or worse, this is currently the normal error reporting path
when dealing with backend errors from inline assembly. It's not just
internal compiler issues that come through here, so we shouldn't be
creating a backtrace on this path.
rdar://16329947
llvm-svn: 203979
Summary:
The sample profiler pass emits several error messages. Instead of
just aborting the compiler with report_fatal_error, we can emit
better messages using DiagnosticInfo.
This adds a new sub-class of DiagnosticInfo to handle the sample
profiler.
Reviewers: chandlerc, qcolombet
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D3086
llvm-svn: 203976
any lexical scopes then go ahead and turn on DW_AT_ranges for the
compile unit since we would be claiming to describe in the CU
a range for which we don't have information in the CU otherwise.
llvm-svn: 203969
This change brings getCallPreservedMask()'s logic in line with
getCalleeSavedRegs().
While this changes the control flow slightly, the change is not
currently observable. is64Bit must be false to get to the accidental
fallthrough, but the case that we fall into (coldcc) does nothing unless
is64Bit is true.
llvm-svn: 203943
Commit r181723 introduced code to avoid placing initialized variables
needing relocations into the .rodata section, which avoids copy relocs
that do not work as expected on ppc64 function references.
The same treatment is also needed for *named* .rodata.XXX sections.
This patch changes PPC64LinuxTargetObjectFile::SelectSectionForGlobal
to modify "Kind" *before* calling the default SelectSectionForGlobal
routine, instead of first calling the default routine and then just
checking for the (main) .rodata section afterwards.
llvm-svn: 203921
issue in that the new MachineRegisterInfo bundle iterators didn't
dereference to the START of the bundle, while the old skipBundle()
method did.
llvm-svn: 203890
These linkages were introduced some time ago, but it was never very
clear what exactly their semantics were or what they should be used
for. Some investigation found these uses:
* utf-16 strings in clang.
* non-unnamed_addr strings produced by the sanitizers.
It turns out they were just working around a more fundamental problem.
For some sections a MachO linker needs a symbol in order to split the
section into atoms, and llvm had no idea that was the case. I fixed
that in r201700 and it is now safe to use the private linkage. When
the object ends up in a section that requires symbols, llvm will use a
'l' prefix instead of a 'L' prefix and things just work.
With that, these linkages were already dead, but there was a potential
future user in the objc metadata information. I am still looking at
CGObjcMac.cpp, but at this point I am convinced that linker_private
and linker_private_weak are not what they need.
The objc uses are currently split in
* Regular symbols (no '\01' prefix). LLVM already directly provides
whatever semantics they need.
* Uses of a private name (start with "\01L" or "\01l") and private
linkage. We can drop the "\01L" and "\01l" prefixes as soon as llvm
agrees with clang on L being ok or not for a given section. I have two
patches in code review for this.
* Uses of private name and weak linkage.
The last case is the one that one could think would fit one of these
linkages. That is not the case. The semantics are
* the linker will merge these symbols by *name*.
* the linker will hide them in the final DSO.
Given that the merging is done by name, any of the private (or
internal) linkages would be a bad match. They allow llvm to rename the
symbols, and that is really not what we want. From the llvm point of
view, these objects should really be (linkonce|weak)(_odr)?.
For now, just keeping the "\01l" prefix is probably the best for these
symbols. If we one day want to have a more direct support in llvm,
IMHO what we should add is not a linkage, it is just a hidden_symbol
attribute. It would be applicable to multiple linkages. For example,
on weak it would produce the current behavior we have for objc
metadata. On internal, it would be equivalent to private (and we
should then remove private).
llvm-svn: 203866
operator* on the by-operand iterators to return a MachineOperand& rather than
a MachineInstr&. At this point they almost behave like normal iterators!
Again, this requires making some existing loops more verbose, but should pave
the way for the big range-based for-loop cleanups in the future.
llvm-svn: 203865
There aren't /that/ many files, and we are already using various maps
and other standard containers that don't use MCContext's allocator to
store these values, so this doesn't seem to be critical and simplifies
the design (I'll be moving construction out of MCContext shortly so it'd
be annoying to have to pass the allocator around to allocate these
things... and we'll have non-MCContext users (debug_line.dwo) shortly)
llvm-svn: 203831
This patch fixes the bug in the peephole optimization that folds a load which
defines one vreg into the one and only use of that vreg. With debug info, a
DBG_VALUE that referenced the vreg was considered to be a use, preventing the
optimization. The fix is to ignore DBG_VALUEs during the optimization, and to
undef a DBG_VALUE that references a vreg that gets removed.
Patch by Trevor Smigiel!
llvm-svn: 203829
This changes the implementation of local directional labels to use a dedicated
map. With that it can then just use CreateTempSymbol, which is what the rest
of MC uses.
CreateTempSymbol doesn't do a great job at making sure the names are unique
(or being efficient when the names are not needed), but that should probably
be fixed in a followup patch.
This fixes pr18928.
llvm-svn: 203826
This replaces several "compile unit ID -> thing" mappings in favor of
one mapping from CUID to the whole line table structure (files,
directories, and lines).
This is another step along the way to refactoring out reusable
components of line table handling for use when generating debug_line.dwo
for fission type units.
Also, might be a good basis to fold some of this handling down into
MCStreamers to avoid the special case of "One line table when doing asm
printing, line table per CU otherwise" by building it into the different
MCStreamer implementations.
llvm-svn: 203821
LDS instructions are pseudo instructions which model
the OQAP defs and uses within a single instruction.
This fixes a hang in the opencv MedianFilter tests.
llvm-svn: 203818
This is a follow-up to r203635. Saleem pointed out that since symbolic register
names are much easier to read, it would be good if we could turn them off only
when we really need to because we're using an external assembler.
Differential Revision: http://llvm-reviews.chandlerc.com/D3056
llvm-svn: 203806
Summary:
This adds ObjectFile::section_iterator_range, which allows writing
range-based for-loops running over all sections of a given file.
Several files from lib/ are converted to the new interface. Similar fixes
should be applied to a variety of llvm-* tools.
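Usage sketch (assuming the range is exposed through an accessor named
sections(); the exact spelling may differ):

  #include "llvm/Object/ObjectFile.h"

  // Count the sections of an object file with a range-based for loop.
  static unsigned countSections(const llvm::object::ObjectFile &Obj) {
    unsigned N = 0;
    for (const llvm::object::SectionRef &Sec : Obj.sections()) {
      (void)Sec;
      ++N;
    }
    return N;
  }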
Reviewers: rafael
Reviewed By: rafael
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D3069
llvm-svn: 203799
Summary:
This helps the instruction selector to lower an i64 * i64 -> i128
multiplication into a single instruction on targets which support it.
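The pattern in question corresponds to source like this (a simple illustration
using the compiler's 128-bit integer extension type):

  #include <cstdint>

  // A full 64 x 64 -> 128-bit multiply; with this change the widening
  // multiply can be selected as a single instruction where available.
  static unsigned __int128 mul64x64(uint64_t A, uint64_t B) {
    return static_cast<unsigned __int128>(A) * B;
  }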
This is an update of D2973 which was reverted because of a bug reported
as PR19084.
Reviewers: t.p.northover, chapuni
Reviewed By: t.p.northover
CC: llvm-commits, alex, chapuni
Differential Revision: http://llvm-reviews.chandlerc.com/D3021
llvm-svn: 203797
O(N*log(N)). The idea is to introduce a total ordering over the set of functions.
That allows us to build a binary tree and perform the function look-up procedure
in O(log(N)) time.
This patch description:
Introduced a total ordering among Type instances (a sketch of the first steps
follows the list below). It is effectively an improvement of the existing
isEquivalentType.
0. Coerce pointer of 0 address space to integer.
1. If left and right types are equal (the same Type* value), return 0 (means equal).
2. If the types are of different kinds (different type IDs), return the result of
comparing the type IDs, treating them as numbers.
3. If the types are vectors or integers, return the result of comparing their
pointers (cast to numbers).
4. Check whether type ID belongs to the next group:
* Void
* Float
* Double
* X86_FP80
* FP128
* PPC_FP128
* Label
* Metadata
If so, return 0.
5. If left and right are pointers, return the result of comparing their address
spaces (as numbers).
6. If the types are complex, both LEFT and RIGHT are expanded and their element
types are checked in the same way. If we get Res != 0 at some stage, return it.
Otherwise return 0.
7. For all other cases, llvm_unreachable.
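A hedged sketch of the first few steps above (illustrative only, not the
committed comparison routine):

  #include "llvm/IR/Type.h"

  // Steps 1 and 2: identical Type* values compare equal; otherwise types of
  // different kinds are ordered by their numeric type IDs.
  static int cmpTypes(llvm::Type *L, llvm::Type *R) {
    if (L == R)
      return 0;
    if (L->getTypeID() != R->getTypeID())
      return L->getTypeID() < R->getTypeID() ? -1 : 1;
    // Steps 3-7: compare integers, pointers, and complex types element-wise.
    return 0;
  }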
llvm-svn: 203788
convenient it is to imagine a world where this works, that is not C++ as
was pointed out in review. The standard even goes to some lengths to
preclude any attempt at this, for better or worse. Maybe better. =]
llvm-svn: 203775
Only one instruction pair needed changing: SMULH & UMULH. The previous
code worked, but MC was doing extra work treating Ra as a valid
operand (which then got completely overwritten in MCCodeEmitter).
No behaviour change, so no tests.
llvm-svn: 203772
VSX is an ISA extension supported on the POWER7 and later cores that enhances
floating-point vector and scalar capabilities. Among other things, this adds
<2 x double> support and generally helps to reduce register pressure.
The interesting part of this ISA feature is the register configuration: there
are 64 new 128-bit vector registers, the first 32 of which are super-registers of the
existing 32 scalar floating-point registers, and the second 32 of which overlap
with the 32 Altivec vector registers. This makes things like vector insertion
and extraction tricky: this can be free but only if we force a restriction to
the right register subclass when needed. A new "minipass" PPCVSXCopy takes care
of this (although it could do a more-optimal job of it; see the comment about
unnecessary copies below).
Please note that, currently, VSX is not enabled by default when targeting
anything because it is not yet ready for that. The assembler and disassembler
are fully implemented and tested. However:
- CodeGen support causes miscompiles; test-suite runtime failures:
MultiSource/Benchmarks/FreeBench/distray/distray
MultiSource/Benchmarks/McCat/08-main/main
MultiSource/Benchmarks/Olden/voronoi/voronoi
MultiSource/Benchmarks/mafft/pairlocalalign
MultiSource/Benchmarks/tramp3d-v4/tramp3d-v4
SingleSource/Benchmarks/CoyoteBench/almabench
SingleSource/Benchmarks/Misc/matmul_f64_4x4
- The lowering currently falls back to using Altivec instructions far more
than it should. Worse, there are some things that are scalarized through the
stack that shouldn't be.
- A lot of unnecessary copies make it past the optimizers, and this needs to
be fixed.
- Many more regression tests are needed.
Normally, I'd fix these things prior to committing, but there are some
students and other contributors who would like to work on this, and so it makes
sense to move this development process upstream where it can be subject to the
regular code-review procedures.
llvm-svn: 203768
There are currently two schemes for mapping instruction operands to
instruction-format variables for generating the instruction encoders and
decoders for the assembler and disassembler respectively: a) to map by name and
b) to map by position.
In the long run, we'd like to remove the position-based scheme and use only
name-based mapping. Unfortunately, the name-based scheme currently cannot deal
with complex operands (those with suboperands), and so we currently must use
the position-based scheme for those. On the other hand, the position-based
scheme cannot deal with (register) variables that are split into multiple
ranges. An upcoming commit to the PowerPC backend (adding VSX support) will
require this capability. While we could teach the position-based scheme to
handle that, since we'd like to move away from the position-based mapping
generally, it seems silly to teach it new tricks now. What makes more sense is
to allow for partial transitioning: use the name-based mapping when possible,
and only use the position-based scheme when necessary.
Now the problem is that mixing the two sensibly was not possible: the
position-based mapping would map based on position, but would not skip those
variables that were mapped by name. Instead, the two sets of assignments would
overlap. However, I cannot simply change the current behavior, because there
are some backends that rely on it [I think mistakenly, but I'll send a message
to llvmdev about that]. So I've added a new TableGen bit variable:
noNamedPositionallyEncodedOperands, that can be used to cause the
position-based mapping to skip variables mapped by name.
llvm-svn: 203767
Support was added to the IAS to actually parse and handle the complex SO
expressions. However, the object file lowering was not updated to account
for the fact that the shift operand may be an absolute expression.
When trying to assemble to an object file, the lowering would fail while
succeeding when emitting purely assembly. Add an appropriate test.
The test case is inspired by the test case provided by Jiangning Liu who also
brought the issue to light.
llvm-svn: 203762
for use with C++11 range-based for-loops.
The gist of phase 1 is to remove the skipInstruction() and skipBundle()
methods from these iterators, instead splitting each iterator into a version
that walks operands, a version that walks instructions, and a version that
walks bundles. This has the result of making some "clever" loops in lib/CodeGen
more verbose, but also makes their iterator invalidation characteristics much
more obvious to the casual reader. (Making them concise again in the future is a
good motivating case for a pre-incrementing range adapter!)
Phase 2 of this undertaking will consist of removing the getOperand() method,
and changing operator*() of the operand-walker to return a MachineOperand&. At
that point, it should be possible to add range views for them that work as one
might expect.
llvm-svn: 203757
This makes the mapping consistent with other CU->X mappings in the
MCContext, helping pave the way to refactor all these values into a
single data structure per CU and thus a single map.
I haven't renamed the data structure as that would make the patch churn
even higher (the MCLineSection name no longer makes sense, as this
structure now contains lines for multiple sections covered by a single
CU, rather than lines for a single section in multiple CUs) and further
refactorings will follow that may remove this type entirely.
For convenience, I also gave the MCLineSection value semantics so we
didn't have to do the lazy construction, manual delete, etc.
(& for those playing at home, refactoring the line printing into a
single data structure will eventually allow that data structure to be
reused to own the debug_line.dwo line table used for type unit file name
resolution)
llvm-svn: 203726
Chandler voiced some concern with checking this in without some
discussion first. Reverting for now.
This reverts r203703, r203704, r203708, and 203709.
llvm-svn: 203723
Extend what's currently done for shifts, because the HW performs this masking
implicitly:
(rotl:i32 x, (and y, 31)) -> (rotl:i32 x, y)
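A quick way to see the identity (plain C++, outside the compiler):

  #include <cstdint>

  // For 32-bit rotates the hardware masks the amount to 5 bits, so
  // rotl32(x, y & 31) == rotl32(x, y) for any y, and the explicit AND can be
  // dropped when selecting the rotate instruction.
  static uint32_t rotl32(uint32_t X, uint32_t Amt) {
    Amt &= 31;
    return Amt == 0 ? X : (X << Amt) | (X >> (32 - Amt));
  }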
I use the newly factored out multiclass that was only supporting shifts so
far.
For testing I extended my testcase for the new rotation idiom.
<rdar://problem/15295856>
llvm-svn: 203718