We now no longer need alias analysis - the cases that alias analysis would
handle are now handled as accesses with a large dependence distance.
We can now vectorize loops with simple constant dependence distances.
for (i = 8; i < 256; ++i) {
a[i] = a[i+4] * a[i+8];
}
for (i = 8; i < 256; ++i) {
a[i] = a[i-4] * a[i-8];
}
We can now vectorize about 200 more loops in the test suite (though in many
cases the cost model instructs us not to). Results on x86-64 are a wash.
I have seen one degradation in ammp. Interestingly, the function in which we
now vectorize a loop is never executed so we probably see some instruction
cache effects. There is a 2% improvement in h264ref. One or another TSVC
loop kernel also speeds up.
radar://13681598
llvm-svn: 184685
Until now we detected the vectorizable tree and evaluated the cost of the
entire tree. With this patch we can decide to trim out branches of the tree
that are not profitable to vectorize.
Also, increase the max depth from 6 to 12. In the worst possible case, where
all of the code forms a diamond-shaped graph, this can bring the cost to 2**10,
but diamonds are not very common.
llvm-svn: 184681
This makes it possible to write unit tests that are less susceptible
to minor code motion, particularly copy placement. block-placement.ll
covers this case with -pre-RA-sched=source which will soon be
default. One incorrectly named block is already fixed, but without
this fix, enabling new coalescing and scheduling would cause more
failures.
llvm-svn: 184680
This is an awful implementation of the target hook. But we don't have
abstractions yet for common machine ops, and I don't see any quick way
to make it table-driven.
llvm-svn: 184664
Rewrote the SLP-vectorization as a whole-function vectorization pass. It is now able to vectorize chains across multiple basic blocks.
It still does not vectorize PHIs, but this should be easy to do now that we scan the entire function.
I removed the support for extracting values from trees.
We are now able to vectorize more programs, but there are some serious regressions in many workloads (such as flops-6 and mandel-2).
llvm-svn: 184647
Although in reality the symbol table in ELF resides in a section, the
standard requires that there be no more than one SHT_SYMTAB. To enforce
this constraint, it is cleaner to group all the symbols under a
top-level `Symbols` key on the object file.
llvm-svn: 184627
It wouldn't really test anything that doesn't already have a more
targeted test:
`yaml2obj-elf-section-basic.yaml`:
Already tests that section content is correctly passed through.
`yaml2obj-elf-symbol-basic.yaml` (this file):
Tests that the st_value and st_size attributes of `main` are set
correctly.
Between those two tests, disassembling the file doesn't really add
anything, so just remove mention of disassembling the file.
llvm-svn: 184607
This reverts commit r184602. In an upcoming commit, I will just remove
the disassembler part of the test; it was mostly just a "nifty" thing
marking a milestone but it doesn't test anything that isn't tested
elsewhere.
llvm-svn: 184606
A FastISel optimization was causing us to emit no information for such
parameters & when they go missing we end up emitting a different
function type. By avoiding that shortcut we not only get types correct
(very important) but also location information (handy) - even if it's
only live at the start of a function & may be clobbered later.
Reviewed/discussion by Evan Cheng & Dan Gohman.
llvm-svn: 184604
This was causing buildbot failures when build without X86 support.
Is there a way to conditionalize the test on the X86 target being
present?
llvm-svn: 184597
Zero is used by BlockFrequencyInfo as a special "don't know" value. It also
causes a sink for frequencies as you can't ever get off a zero frequency with
more multiplies.
This recovers a 10% regression on MultiSource/Benchmarks/7zip. A zero frequency
was propagated into an inner loop causing excessive spilling.
PR16402.
llvm-svn: 184584
The GNU assembler supports (as extension to the ABI) use of PC-relative
relocations in half16 fields, which allows writing code like:
li 1, base-.
This patch adds support for those relocation types in the assembler.
llvm-svn: 184552
The current code base only supports the minimum set of tls-related
relocations and @modifiers that are necessary to support compiler-
generated code. This patch extends this to the full set defined
in the ABI (and supported by the GNU assembler) for the benefit
of the assembler parser.
llvm-svn: 184551
This adds necessary infrastructure to support the @h modifier.
Note that all required relocation types were already present
(and unused).
This patch provides support for using @h in the assembler;
it would also be possible to now use this feature in code
generated by the compiler, but this is not done yet.
llvm-svn: 184548
The output can be in different orders, which breaks the test in some
situations. I have not yet found out what the root cause of the order
difference is. This fixes our internal build. If it is not the right
solution, feel free to roll back.
llvm-svn: 184535
Previously we unconditionally enforced that section references in
symbols in the YAML had a name that was a section name present in the
object, and linked the references to that section. Now, permit empty
section names (already the default, if the `Section` key is not
provided) to indicate SHN_UNDEF.
llvm-svn: 184513
Instead, just have 3 sub-lists, one for each of
{STB_LOCAL,STB_GLOBAL,STB_WEAK}.
This allows us to be a lot more explicit w.r.t. the symbol ordering in
the object file, because if we allowed explicitly setting the STB_*
`Binding` key for the symbol, then we might have ended up having to
shuffle STB_LOCAL symbols to the front of the list, which is likely to
cause confusion and potential for error.
Also, this new approach is simpler ;)
llvm-svn: 184506
it at the moment.
This allows forming more paired loads even when the stack coloring pass
destroys the memory operand's value.
<rdar://problem/13978317>
llvm-svn: 184492
This is a bit tricky as the xacquire and xrelease hints use the same bytes,
0xf2 and 0xf3, as the repne and rep prefixes.
Fortunately LLVM has different MCInst opcode enums for rep/xrelease
and repne/xacquire. So to make this work, a boolean was added to the
InternalInstruction struct as part of the prefix state; it is set by the
added logic in readPrefixes() when decoding an instruction to determine
whether these prefix bytes are to be disassembled as xacquire or xrelease.
Then we let the matcher pick the normal prefix instructionID and we change
the opcode after that when it is set into the MCInst being created.
rdar://11019859
llvm-svn: 184490
Also add a v2i32 test to the existing v4i32 test.
Patch by: Aaron Watry
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Aaron Watry <awatry@gmail.com>
llvm-svn: 184482
Also add SI tests to existing file and a v2i32 test for both
R600 and SI.
Patch by: Aaron Watry
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Aaron Watry <awatry@gmail.com>
llvm-svn: 184481
The custom lowering causes llc to crash with a segfault.
Ideally, the custom lowering can be fixed, but this allows
programs which load/store v2i32 to work without crashing.
Patch by: Aaron Watry
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Aaron Watry <awatry@gmail.com>
llvm-svn: 184480
After this patch, the ELF file produced by
`yaml2obj-elf-symbol-basic.yaml`, when linked and executed on x86_64
(under SysV ABI, obviously; I tested on Linux), produces a working
executable that goes into an infinite loop!
llvm-svn: 184469
The cdp2 instruction should have the same restrictions as cdp on the
co-processor registers.
VFP instructions on v8/AArch32 share the same encoding space as cdp2.
llvm-svn: 184445
We collect gather sequences when we vectorize basic blocks. Gather sequences are excellent
hints for vectorization of other basic blocks.
llvm-svn: 184444
The assembler parser common code supports recognizing symbol variants
using the @ modifier. On PowerPC, it should also be possible to use
(some of) those modifiers with directional labels, like "1f@l".
This patch adds support for accepting symbol variants on directional
labels as well.
llvm-svn: 184437
This patch adds support for having the assembler optimize fixups
to constructs like "symbol@ha" or "symbol@l" if "symbol" can be
resolved at assembler time.
This optimization is already present in the PPCMCExpr.cpp code
for handling PPC_HA16/PPC_LO16 target expressions. However,
those target expressions were used only on Darwin targets.
This patch changes the target expression code so that it is
also usable with the GNU assembler (using the @ha / @l syntax
instead of the ha16() / lo16() syntax), and changes the
MCInst lowering code to generate those target expressions
where appropriate.
It also changes the asm parser to generate HA16/LO16 target
expressions when parsing assembler source that uses the
@ha / @l modifiers. The effect is that now the above-
mentioned optimization automatically becomes available
for those situations too.
llvm-svn: 184436
Fix up three tests - one that was relying on abbreviation number,
another relying on a location list in this case (& testing raw asm,
changed that to use dwarfdump on the debug_info now that that's where
the location is), and another which was added in r184368 - exposing a
bug in that fix that is exposed when we emit the location inline rather
than through a location list. Fix that bug while I'm here.
llvm-svn: 184387
We had been papering over a problem with location info for non-trivial
types passed by value by emitting their type as references (this caused
the debugger to interpret the location information correctly, but broke
the type of the function). r183329 corrected the type information but
led to the debugger interpreting the pointer parameter as the value -
the debug info describing the location needed an extra dereference.
Use a new flag in DIVariable to add the extra indirection (either by
promoting an existing DW_OP_reg (parameter passed in a register) to
DW_OP_breg + 0 or by adding DW_OP_deref to an existing DW_OP_breg + n
(parameter passed on the stack)).
llvm-svn: 184368
This is a basic implementation - we still don't have any support (that I
know of) for dumping DWARF expressions in a meaningful way, so the
location information itself is just printed as a sequence of bytes as we
do elsewhere.
llvm-svn: 184361
The compiler occasionally generates multiple .loc directives in a row
(at the same instruction address). These need to be transformed into
multiple actual .debug_line table entries, since they are used to signal
certain information to the debugger (e.g. if the opening brace of a
function body is on the same line as the declaration).
The MCAsmStreamer version of EmitDwarfLocDirective handles this
correctly by emitting a .loc directive every time it is called.
However, the MCObjectStreamer version simply defaults to recording
the information and emitting only a single table entry later,
e.g. when EmitInstruction is called.
This patch introduces an MCObjectStreamer::EmitDwarfLocDirective
version that emits a line table entry for a .loc directive
that may already be pending before recording the new directive.
(This is similar to how this is handled in GNU as.)
With this patch (and the code alignment factor patch) applied,
I'm now getting identical DWARF .debug sections for all test-suite
object files on PowerPC for the internal and the external assembler.
llvm-svn: 184357
Prior to this change, the considered addressing modes could be invalid because
the maximum and minimum offsets were not taken into account.
This was causing an assertion failure.
The added test case exercises that behavior.
<rdar://problem/14199725> Assertion failed: (CurScaleCost >= 0 && "Legal
addressing mode has an illegal cost!")
llvm-svn: 184341
The type <3 x i8> is common in graphics and we want to be able to vectorize it.
This change accelerates bullet by 12% and 471_omnetpp by 5%.
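One plausible source of such vectors is byte-sized pixel data, sketched below
in C (a hypothetical example, not taken from the benchmarks named above):
/* Three 8-bit channels per pixel; loops like this are seen by the
   vectorizers as operations on <3 x i8> values. */
struct rgb { unsigned char r, g, b; };

void darken(struct rgb *p, int n) {
  for (int i = 0; i < n; ++i) {
    p[i].r >>= 1;
    p[i].g >>= 1;
    p[i].b >>= 1;
  }
}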
llvm-svn: 184317
"When assembling to the ARM instruction set, the .N qualifier produces
an assembler error and the .W qualifier has no effect."
In the asm parser's pre-matcher handler, the ".w" (wide) qualifier is now
discarded when in ARM mode, and an error message is now produced when the
".n" (narrow) qualifier is used in ARM mode.
Test cases for these were added.
rdar://14064574
llvm-svn: 184224
value is zero.
This allows optimizations to kick in more easily.
Fix some test cases so that they remain meaningful (i.e., not completely dead
code) when optimizations apply.
<rdar://problem/14096009> superfluous multiply by high part of zero-extended
value.
llvm-svn: 184222
When producing objects that are ABI compliant, we are
marking neither the object file nor the assembly file
correctly and thus generate warnings.
We need to set the EF_CPIC flag in the ELF header when
generating direct object.
Note that the warning is only generated when compiling without PIC.
When compiling with clang the warning will be suppressed by supplying:
-Wa,-mno-shared -Wa,-call_nonpic
Also, the following directive should be added:
.option pic0
when compiling without PIC. This eliminates the need for supplying:
-mno-shared -call_nonpic
on the assembler command line.
Patch by Douglas Gilmore
llvm-svn: 184220
For decoding, keep the current behavior of always decoding these as their REP
versions. In the future, this could be improved to recognize the cases where
these behave as XACQUIRE and XRELEASE and decode them as such.
llvm-svn: 184207
When using a positive offset, literal loads were encoded
as if the offset was negative, because:
- The sign bit was not assigned to an operand
- The addrmode_imm12 operand was not encoding the sign bit correctly
This patch also makes the assembler look at the .w/.n specifier for
loads.
llvm-svn: 184182
The main advantages here are way better heuristics, taking into account not
just loop depth but also __builtin_expect and other static heuristics and will
eventually learn how to use profile info. Most of the work in this patch is
pushing the MachineBlockFrequencyInfo analysis into the right places.
This is good for a 5% speedup on zlib's deflate (x86_64); there were some very
unfortunate spilling decisions in its hottest loop in longest_match(). Other
benchmarks I tried were mostly neutral.
This changes register allocation in subtle ways, update the tests for it.
2012-02-20-MachineCPBug.ll was deleted as it's very fragile and the instruction
it looked for was gone already (but the FileCheck pattern picked up unrelated
stuff).
llvm-svn: 184105
llvm-objdump should provide some way of printing out the addends present in the
.rela sections for debugging purposes if nothing else.
llvm-svn: 184072
Rather than using the full power of target-specific addressing modes in
DBG_VALUEs with Frame Indices, simply use Frame Index + Offset. This
reduces the complexity of debug info handling down to two
representations of values (reg+offset and frame index+offset) rather
than three or four.
Ideally we could ensure that frame indices had been eliminated by the
time we reach assembly or DWARF generation, but I haven't spent the
time to figure out where the FIs are leaking through into that & whether
there's a good place to convert them. Some FI+offset=>reg+offset
conversion is done (see PrologEpilogInserter, for example) which is
necessary for some SelectionDAG assumptions about registers, I believe,
but it might be possible to make this a more thorough conversion &
ensure there are no remaining FIs no matter how instruction selection
is performed.
llvm-svn: 184066
Replace the ill-defined MinLatency and ILPWindow properties with
straightforward buffer sizes:
MCSchedModel::MicroOpBufferSize
MCProcResourceDesc::BufferSize
These can be used to more precisely model instruction execution if desired.
Disabled some misched tests temporarily. They'll be reenabled in a few commits.
llvm-svn: 184032
Also add a separate vector lit test file, since r600 doesn't seem to handle
v2i32 load/store yet, but we can test both for SI.
Patch by: Aaron Watry
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Aaron Watry <awatry@gmail.com>
llvm-svn: 184021
Archive files (.a) can have a symbol table indicating which object
files in them define which symbols. The purpose of this symbol table
is to speed up linking by allowing the linker to read only the .o
files it is actually going to use instead of having to parse every
object's symbol table.
LLVM's archive library currently supports an LLVM-specific format for
such a table. It is hard to see any value in that now that llvm-ld is
gone:
* System linkers don't use it: GNU ar uses the same plugin as the
linker to create archive files with a regular index. The OS X ar
creates no symbol table for IL files, I assume the linker just parses
all IL files.
* It doesn't interact well with archives having both IL and native objects.
* We probably don't want to be responsible for yet another archive
format variant.
This patch then:
* Removes support for creating and reading such index from lib/Archive.
* Remove llvm-ranlib, since there is nothing left for it to do.
We should in the future add support for regular indexes to llvm-ar for
both native and IL objects. When we do that, llvm-ranlib should be
reimplemented as a symlink to llvm-ar, as it is equivalent to "ar s".
llvm-svn: 184019
We were using RAT_INST_STORE_RAW, which seemed to work, but the docs
say this instruction doesn't exist for Cayman, so it's probably safer
to use a documented instruction instead.
Reviewed-by: Vincent Lejeune <vljn at ovi.com>
llvm-svn: 184015
When we're rematerializing into a not-quite-right register we already add the
real definition as an imp-def, but we should also be marking the "official"
register as dead, since nothing else is going to use it as a result of this
remat.
Not doing this can affect pressure tracking.
rdar://problem/14158833
llvm-svn: 184002
Run the test at O1 instead of O0: ARM FastISel keeps frame pointers around and ignores the flag. The test should now pass on ARM and still passes on x86.
See: http://llvm.org/bugs/show_bug.cgi?id=16322
llvm-svn: 183999
(-llc), similarly to the way it was done for clang and llvmc.
This doesn't affect the upstream llvm tests but helps when developing custom
LLVM-based tools and testing them within the LLVM regression framework.
llvm-svn: 183994
in functions which call __builtin_unwind_init()
__builtin_unwind_init() is an undocumented gcc intrinsic which has this effect,
and is used in libgcc_eh.
Goes part of the way toward fixing PR8541.
llvm-svn: 183984
This is a resubmit of r182877, which was reverted because it broke
MCJIT tests on ARM. The patch leaves MCJIT on ARM as it was before: only
enabled for iOS. I've CC'ed people from the original review and revert.
FastISel was only enabled for iOS ARM and Thumb2, this patch enables it
for ARM (not Thumb2) on Linux and NaCl, but not MCJIT.
Thumb2 support needs a bit more work, mainly around register class
restrictions.
The patch punts to SelectionDAG when doing TLS relocation on non-Darwin
targets. I will fix this and other FastISel-to-SelectionDAG failures in
a separate patch.
The patch also forces FastISel to retain frame pointers: iOS always
keeps them for backtracking (so emitted code won't change because of
this), but Linux was getting much worse code that was incorrect when
using big frames (such as test-suite's lencod). I'll also fix this in a
later patch, it will probably require a peephole so that FastISel
doesn't rematerialize frame pointers back-to-back.
The test changes are straightforward, similar to:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130513/174279.html
They also add a vararg test that got dropped in that change.
I ran all of lnt test-suite on A15 hardware with --optimize-option=-O0
and all the tests pass. All the tests also pass on x86 make check-all. I
also re-ran the check-all tests that failed on ARM, and they all seem to
pass.
llvm-svn: 183966
For consistency, change the address in the test case from 0xDEADBEEF to
0xCAFEBABE, since 0xCAFEBABE actually has 2-byte alignment.
llvm-svn: 183962
This is a preliminary patch for fast instruction selection on
PowerPC. Code generation can differ between DAG isel and fast isel.
Existing tests that specify -O0 were written to expect DAG isel. Make
this explicit by adding -fast-isel=false to the tests.
In some cases specifying -fast-isel=false produces different code even
when there isn't a fast instruction selector specified. This is
because TM.Options.EnableFastISel = 1 at -O0 whether or not a FastISel
object exists. Thus disabling fast isel can actually produce less
conservative code. Because of this, some of the expected code
generation in the -O0 tests needs to be adjusted.
In particular, handling of function arguments is less conservative
with -fast-isel=false (see isOnlyUsedInEntryBlock() in
SelectionDAGBuilder.cpp). This results in fewer stack accesses and,
in some cases, reduced stack size as uselessly loaded values are no
longer stored back to spill locations in the stack.
No functional change with this patch; test case adjustments only.
llvm-svn: 183939
This pass was assuming that if hasAddressTaken() returns false for a
function, the function's only uses are call sites. That's not true
because there can be references by BlockAddresses too.
Fix the pass to handle this case. Fix
BlockAddress::replaceUsesOfWithOnConstant() to allow a function's type
to be changed by RAUW'ing the function with a bitcast of the recreated
function.
Patch by Mark Seaborn.
llvm-svn: 183933
These records are mandatory for executables and are used by the loader.
Reviewers: rafael
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D939
llvm-svn: 183852
I've been comparing the object file output of LLVM's integrated
assembler against the external assembler on PowerPC, and one
area where differences still remain are in DWARF sections.
In particular, the GNU assembler generates .debug_frame and
.debug_line sections using a code alignment factor of 4, since
all PowerPC instructions have size 4 and must be aligned to a
multiple of 4. However, current MC code hard-codes a code
alignment factor of 1.
This patch changes this by adding a "minimum instruction alignment"
data element to MCAsmInfo and using this as code alignment factor.
This requires passing a MCContext into MCDwarfLineAddr::Encode
and MCDwarfLineAddr::EncodeAdvanceLoc. Note that one caller,
MCDwarfLineAddr::Write, didn't actually have that information
available. However, it turns out that this routine is in fact
never used in the whole code base, so the patch simply removes
it. If it turns out to be needed again at a later time, it
could be re-added with an updated interface.
llvm-svn: 183834
A couple of old test cases in test/MC/PowerPC were still using
LLVM IR. Now that we have a working assembler, we can move
them to assembler tests instead:
ppc64-initial-cfa.ll
ppc64-relocs-01.ll
ppc64-tls-relocs-01.ll
llvm-svn: 183829
This test case was a "sanity check"/"breathing" test case at first, but
is really fragile, which impairs changes to yaml2obj.
`test/Object/yaml2obj-elf-bits-endian.test` is much more robust and
serves as an adequate sanity check.
llvm-svn: 183811
The pass emits a call to sqrt that has attribute "read-none". This call will be
converted to an ISD::FSQRT node during DAG construction, which will turn into
a mips native sqrt instruction.
llvm-svn: 183802
Instead of a custom implementation of replaceAllUsesWith, we just call
replaceAllUsesWith and recreate llvm.used and llvm.compiler.used.
This change is particularly interesting because it makes llvm see
through what clang is doing with static used functions in extern "C"
contexts. With this change, running clang -O2 in
extern "C" {
__attribute__((used)) static void foo() {}
}
produces
@llvm.used = appending global [1 x i8*] [i8* bitcast (void ()* @foo to
i8*)], section "llvm.metadata"
define internal void @foo() #0 {
entry:
ret void
}
llvm-svn: 183756
Negative zero is returned by the primary expression parser as INT32_MIN, so all that the method needs to do is to accept this value.
Behavior already present for Thumb2.
llvm-svn: 183734
- Don't use assert(0), or tests may pass or fail according to assertions.
- For now, the tests are marked as XFAIL for win32 hosts.
FIXME: Could we avoid XFAIL to specify triple in the RUN lines?
llvm-svn: 183728
Currently, only emitting the ELF header is supported (no sections or
segments).
The ELFYAML code organization is broadly similar to the COFFYAML code.
llvm-svn: 183711
Some ARM CPUs only support ARM mode (ancient v4 ones, for example) and some
only support Thumb mode (M-class ones currently). This makes sure such CPUs
default to the correct mode and makes the AsmParser diagnose an attempt to
switch modes incorrectly.
rdar://14024354
llvm-svn: 183710
Previously LEA64_32r went through virtually the entire backend thinking it was
using 32-bit registers until its blissful illusions were cruelly snatched away
by MCInstLower and 64-bit equivalents were substituted at the last minute.
This patch makes it behave normally, and take 64-bit registers as sources all
the way through. Previous uses (for 32-bit arithmetic) are accommodated via
SUBREG_TO_REG instructions which make the types and classes agree properly.
llvm-svn: 183693
A plain "sc" without argument is supposed to be treated like "sc 0"
by the assembler. This patch adds a corresponding alias.
Problem reported by Joerg Sonnenberger.
llvm-svn: 183687
The extended branch mnemonics are supposed to use an implied CR0
if there is no explicit condition register specified. This patch
adds extra variants of the mnemonics to this effect.
Problem reported by Joerg Sonnenberger.
llvm-svn: 183686
the Mips16 port. A few of the pseudos could take either signed
or unsigned arguments; I did not distinguish the cases and improperly
rejected some valid cases that the assembler had previously accepted
when they were pure pseudos that expanded as assembly instructions.
llvm-svn: 183633
Since we have ARM unwind directive parser and assembler, we
can check the correctness in two stages:
1. From LLVM assembly (.ll) to ARM assembly (.s)
2. From ARM assembly (.s) to ELF object file (.o)
We already have several "*.s to *.o" test cases. This CL adds
some "*.ll to *.s" test cases and removes the redundant "*.ll to *.o"
test cases.
New test cases to check "*.ll to *.s" code generator:
- ehabi.ll: Check the correctness of the generated unwind directives.
- section-name.ll: Check the section name of functions.
Removed test cases:
- ehabi-mc-cantunwind.ll
(Covered by ehabi-cantunwind.ll, and eh-directive-cantunwind.s)
- ehabi-mc-compact-pr0.ll
(Covered by ehabi.ll, eh-compact-pr0.s, eh-directive-save.s, and
eh-directive-setfp.s)
- ehabi-mc-compact-pr1.ll
(Covered by ehabi.ll, eh-compact-pr1.s, eh-directive-save.s, and
eh-directive-setfp.s)
- ehabi-mc.ll
(Covered by ehabi.ll, and eh-directive-integrated-test.s)
- ehabi-mc-section-group.ll
(Covered by section-name.ll, and eh-directive-section-comdat.s)
- ehabi-mc-section.ll
(Covered by section-name.ll, and eh-directive-section.s)
- ehabi-mc-sh_link.ll
(Covered by eh-directive-text-section.s, and eh-directive-section.s)
llvm-svn: 183628
Changes to ARM unwind opcode assembler:
* Fix multiple .save or .vsave directives. Besides, the
order is preserved now.
* For the directives which will generate multiple opcodes,
such as ".save {r0-r11}", the order of the unwind opcode
is fixed now, i.e. the registers with lower encoding values
are popped first.
* Fix the $sp offset calculation. Now, we can use the
.setfp, .pad, .save, and .vsave directives at any order.
Changes to test cases:
* Add test cases to check the order of multiple opcodes
for the .save directive.
* Fix the incorrect $sp offset in the test case. The
stack pointer offset specified in the test case was
incorrect. (Changed test cases: ehabi-mc-section.ll and
ehabi-mc.ll)
* The opcodes to restore $sp are slightly reordered. The
behavior is not changed, and the new output is the same
as the output of GNU as. (Changed test cases:
eh-directive-pad.s and eh-directive-setfp.s)
llvm-svn: 183627
Handle the case when the disassembler table can't tell
the difference between some encodings of QADD and CPS.
Add some necessary safe guards in CPS decoding as well.
llvm-svn: 183610
r183584 tries to derive some info from the code *AFTER* a call and apply
that derived info to the code *BEFORE* the call, which is not always safe,
as the call in question may never return; in that case, the derived
info is invalid.
Thanks to Duncan for pointing out this potential bug.
rdar://14073661
llvm-svn: 183606
instantiation issue with non-standard type.
Add a backend option to warn on a given stack size limit.
Option: -mllvm -warn-stack-size=<limit>
Output (if limit is exceeded):
warning: Stack size limit exceeded (<actual size>) in <functionName>.
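For illustration, a function that should exceed a small limit (a hedged,
hypothetical example; the exact size reported is target-dependent):
/* With -mllvm -warn-stack-size=512, the 4 KB local buffer below should
   trigger the new warning for big_frame. */
void consume(char *p);

void big_frame(void) {
  char buf[4096];
  consume(buf);
}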
The longer term plan is to hook that to a clang warning.
PR:4072
<rdar://problem/13987214>.
llvm-svn: 183595
The MemCpyOpt pass is capable of optimizing:
callee(&S); copy N bytes from S to D.
into:
callee(&D);
subject to some legality constraints.
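In C terms, the transformation looks roughly like this (a minimal sketch with
hypothetical names, assuming the legality constraints are met):
#include <string.h>

struct S { int v[16]; };

void callee(struct S *out);

void caller(struct S *dest) {
  struct S tmp;
  callee(&tmp);                   /* callee writes its result into tmp  */
  memcpy(dest, &tmp, sizeof tmp); /* the caller then copies tmp to dest */
  /* MemCpyOpt can instead pass the final destination directly,
     callee(dest), and drop both tmp and the memcpy. */
}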
The assertion is triggered when the compiler tries to evaluate
"sizeof(typeof(D))" while D is an opaque-typed 'sret' formal argument of the
function being compiled, i.e. the signature of the function being compiled is
something like this:
T caller(...,%opaque* noalias nocapture sret %D, ...)
The fix is that when we come across such a situation, instead of calling some
utility functions to get the size of D's type (which would crash), we simply
assume D has at least N bytes, as implied by the copy instruction.
rdar://14073661
llvm-svn: 183584
On PPC32, [su]div,rem on i64 types are transformed into runtime library
function calls. As a result, they are not allowed in counter-based loops (the
counter-loops verification pass caught this error; this change fixes PR16169).
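A hedged C sketch of the kind of loop affected on 32-bit PowerPC (a
hypothetical example, not the PR16169 test case):
/* On PPC32 the 64-bit division below is lowered to a runtime library call,
   so the loop must not be converted into a hardware counter-based (CTR)
   loop; the verification pass now catches this. */
long long div_all(const long long *a, long long d, int n) {
  long long s = 0;
  for (int i = 0; i < n; ++i)
    s += a[i] / d;
  return s;
}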
llvm-svn: 183581
We weren't computing structure size correctly and we were relying on
the original alloca instruction to compute the offset, which isn't
always reliable.
Reviewed-by: Vincent Lejeune <vljn@ovi.com>
llvm-svn: 183568
Option: -mllvm -warn-stack-size=<limit>
Output (if limit is exceeded):
warning: Stack size limit exceeded (<actual size>) in <functionName>.
The longer term plan is to hook that to a clang warning.
PR:4072
<rdar://problem/13987214>
llvm-svn: 183552
My recent ARM FastISel patch exposed this bug:
http://llvm.org/bugs/show_bug.cgi?id=16178
The root cause is that it can't select integer sext/zext pre-ARMv6 and
asserts out.
The current integer sext/zext code doesn't handle other cases gracefully
either, so this patch makes it handle all sext and zext from i1/i8/i16
to i8/i16/i32, with and without ARMv6, both in Thumb and ARM mode. This
should fix the bug as well as make FastISel faster because it bails to
SelectionDAG less often. See fastisel-ext.patch for this.
fastisel-ext-tests.patch changes current tests to always use reg-imm AND
for 8-bit zext instead of UXTB. This simplifies code since it is
supported on ARMv4t and later, and at least on A15 both should perform
exactly the same (both have exec 1 uop 1, type I).
2013-05-31-char-shift-crash.ll is a bitcode version of the above bug
16178 repro.
fast-isel-ext.ll tests all sext/zext combinations that ARM FastISel
should now handle.
Note that my ARM FastISel enabling patch was reverted due to a separate
failure when dealing with MCJIT, I'll fix this second failure and then
turn FastISel on again for non-iOS ARM targets.
I've tested "make check-all" on my x86 box, and "lnt test-suite" on A15
hardware.
llvm-svn: 183551
Fix an assertion when the compiler encounters big constants whose bit width is
not a multiple of 64-bits.
Although clang would never generate something like this, the backend should be
able to handle any legal IR.
<rdar://problem/13363576>
llvm-svn: 183544
OpenBSD's stack smashing protection differs slightly from other
platforms:
1. The smash handler function is "__stack_smash_handler(const char
*funcname)" instead of "__stack_chk_fail(void)".
2. There's a hidden "long __guard_local" object that gets linked
into each executable and DSO.
Patch by Matthew Dempsky.
llvm-svn: 183533
from the LC_DATA_IN_CODE load command. And when disassembling, print
the data in code formatted for the kind of data it is, and do not disassemble
those bytes.
I added the format specific functionality to the derived class MachOObjectFile
since these tables only appear in Mach-O object files. This is my first
attempt to modify the libObject stuff, so if folks have better suggestions on
how to fit this in or on the implementation, please let me know.
rdar://11791371
llvm-svn: 183424
The first symbol in ELF is a dummy, but it has defined content and readelf
normally displays it. With this change llvm-readobj also displays it and we
can check that llvm-mc output is correct according to the standard.
llvm-svn: 183337
Add earlyclobber constraints to prevent an input register from being allocated
as the output register because, according to the Intel spec [1], "If any pair
of the index, mask, or destination registers are the same, this
instruction results a UD fault."
---
[1] http://software.intel.com/sites/default/files/319433-014.pdf
llvm-svn: 183327
When a function is inlined we lazily construct the variables
representing the function's parameters. After that, we add any remaining
unused parameters.
If the function doesn't use all the parameters, or uses them out of
order, then the DWARF would produce them in that order, producing a
parameter order that doesn't match the source.
This fix causes us to always keep the arg variables at the start of the
variable list & in the original order from the source.
llvm-svn: 183297
In ELF (as in MachO), not all relocations point to symbols. Represent this
properly by using a symbol_iterator instead of a SymbolRef. Update llvm-readobj
ELF's dumper to handle relocations without symbols.
llvm-svn: 183284
IndVarSimplify is willing to move divide instructions outside of their
loop bodies if they are invariant of the loop. However, it may not be
safe to expand them if we do not know if they can trap.
Instead, check to see if it is not safe to expand the instruction and
skip the expansion.
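A hedged C sketch of why hoisting such a division can be unsafe (a
hypothetical example, not the PR16041 test case):
/* The division is loop-invariant, so hoisting it above the loop is
   tempting.  But if n == 0 the loop body never runs, and hoisting the
   divide would introduce a divide-by-zero trap for b == 0 that the
   original program never executes. */
int sum_quotients(int a, int b, int n) {
  int s = 0;
  for (int i = 0; i < n; ++i)
    s += a / b;
  return s;
}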
This fixes PR16041.
Testcase by Rafael Ávila de Espíndola.
llvm-svn: 183239
The ARM backend did not expect LDRBi12 to hold a constant pool operand.
Allow LLVM to deal with the instruction similarly to how it deals with
LDRi12.
This fixes PR16215.
llvm-svn: 183238
The problem this time seems to be a thinko. We were assuming that in the CFG
A
| \
| B
| /
C
speculating the basic block B would cause only the phi value for the B->C edge
to be speculated. That is not true: the PHIs are semantically on the edges, so
if the A->B->C path is taken, any code needed for A->C is not executed, and we
have to consider it too when deciding to speculate B.
llvm-svn: 183226
PR16069 is an interesting case where an incoming value to a PHI is a
trap value while also being a 'ConstantExpr'.
We do not consider this case when performing the 'HoistThenElseCodeToIf'
optimization.
Instead, make our modifications more conservative if we detect that we
cannot transform the PHI to a select.
llvm-svn: 183152
index greater than the size of the vector is invalid. The shuffle may be
shrinking the size of the vector. Fixes a crash!
Also drop the maximum recursion depth of the safety check for this
optimization to five.
llvm-svn: 183080
The MOV64ri64i32 instruction required hacky MCInst lowering because it
was allocated as setting a GR64, but the eventual instruction ("movl")
only set a GR32. This converts it into a so-called "MOV32ri64" which
still accepts a (appropriate) 64-bit immediate but defines a GR32.
This is then converted to the full GR64 by a SUBREG_TO_REG operation,
thus keeping everyone happy.
This fixes a typo in the opcode field of the original patch, which
should make the legacy JIT work again (& adds a test for that problem).
llvm-svn: 183068
Fixes rdar:14036816, PR16130.
There is an opportunity to compute precise trip counts for 'or'
expressions and multi-exit loops.
rdar:14038809: Optimize trip count computation for multi-exit loops.
To do this we need to record the fact that ExitLimit assumes NSW. When
it does not we can safely assume that the loop trip count is the
minimum ExitLimit across all subexpressions and loop exits.
llvm-svn: 183060
We check that instructions in the loop don't have outside users (except if
they are reduction values). Unfortunately, we skipped this check for
if-convertible PHIs.
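A hedged C sketch of the problematic pattern (hypothetical, not the PR16184
test case):
/* 't' becomes an if-converted select/PHI inside the loop, but it is also
   read after the loop (an "outside user"), which the vectorizer must reject
   here because it is not a recognized reduction. */
int last_positive(const int *a, int *b, int n) {
  int last = 0;
  for (int i = 0; i < n; ++i) {
    int t = (a[i] > 0) ? a[i] : 0;
    b[i] = t;
    last = t;   /* escapes the loop */
  }
  return last;
}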
Fixes PR16184.
llvm-svn: 183035
Namely, check whether the target allows folding more than one register into
the addressing mode and, if so, adjust the cost accordingly.
Prior to this commit, reg1 + scale * reg2 accesses were artificially preferred
to reg1 + reg2 accesses. Indeed, the cost model wrongly assumed that reg1 + reg2
needs a temporary register for the computation, whereas it was correctly
estimated for reg1 + scale * reg2.
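A hedged C illustration of the two access shapes being compared (a
hypothetical example; whether either folds into a single addressing mode is
up to the target's cost model):
/* The two loads below exercise reg1 + reg2 and reg1 + scale * reg2 address
   computations, respectively. */
int load_both(const char *base, long r1, long r2) {
  int x = base[r1 + r2];      /* reg1 + reg2     */
  int y = base[r1 + 4 * r2];  /* reg1 + 4 * reg2 */
  return x + y;
}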
<rdar://problem/13973908>
llvm-svn: 183021
These instructions are deprecated oddities, but we still need to be able to
disassemble (and reassemble) them if and when they're encountered.
Patch by Amaury de la Vieuville.
llvm-svn: 183011
The disassembly of VEXT instructions was too lax in the bits checked. This
fixes the case where the instruction affects Q-registers but a misaligned lane
was specified (should be UNDEFINED).
Patch by Amaury de la Vieuville
llvm-svn: 183003
Unlike most -- hopefully "all other", but I'm still checking -- memory
instructions we support, LOAD REVERSED and STORE REVERSED may access
the memory location several times. This means that they are not suitable
for volatile loads and stores.
This patch is a prerequisite for better atomic load and store support.
The same principle applies there: almost all memory instructions we
support are inherently atomic ("block concurrent"), but LOAD REVERSED
and STORE REVERSED are exceptions.
Other instructions continue to allow volatile operands. I will add
positive "allows volatile" tests at the same time as the "allows atomic
load or store" tests.
llvm-svn: 183002
Now that 3.3 is branched, we are re-enabling virtual registers to help
iron out bugs before the next release. Some of the post-RA passes do
not play well with virtual registers, so we disable them for now. The
needed functionality of the PrologEpilogInserter pass is copied to a
new backend-specific NVPTXPrologEpilog pass.
The test for this commit is that the existing tests do not break.
llvm-svn: 182998
Before this change, each module defined a weak_odr global __msan_track_origins
with a value of 1 if origin tracking is enabled, 0 if disabled. If there are
modules with different values, any of them may win. If 0 wins, and there is at
least one module with 1, the program will most likely crash.
With this change, __msan_track_origins is only emitted if origin tracking is
on. Then runtime library detects if there is at least one module with origin
tracking, and enables runtime support for it.
llvm-svn: 182997
The MOV64ri64i32 instruction required hacky MCInst lowering because it was
allocated as setting a GR64, but the eventual instruction ("movl") only set a
GR32. This converts it into a so-called "MOV32ri64" which still accepts a
(appropriate) 64-bit immediate but defines a GR32. This is then converted to
the full GR64 by a SUBREG_TO_REG operation, thus keeping everyone happy.
llvm-svn: 182991
This fixes the test on ARM. Looks like it was broken by r182877. Not
sure if this is a bug on fast isel on ARM, but this should help fix
the ARM bots.
llvm-svn: 182937
The pattern the test originally checked for doesn't occur on other -mcpu
settings. On atom it's still there though slightly differently scheduled.
llvm-svn: 182933
This test was failing on some hosts when an unexpected register was used for a
variable. This just extends the regexp to allow the new x86-64 registers.
llvm-svn: 182929
Instead of having a bunch of separate MOV8r0, MOV16r0, ... pseudo-instructions,
it's better to use a single MOV32r0 (which will expand to "xorl %reg, %reg")
and obtain other sizes with EXTRACT_SUBREG and SUBREG_TO_REG. The encoding is
smaller and partial register updates can sometimes be avoided.
Until recently, this sequence was a barrier to rematerialization though. That
should now be fixed so it's an appropriate time to make the change.
llvm-svn: 182928
The code to distinguish between unaligned and aligned addresses was
already there, so this is mostly just a switch-on-and-test process.
llvm-svn: 182920
For COFF and MachO, sections semantically have relocations that apply to them.
That is not the case on ELF.
In relocatable objects (.o), a section with relocations in ELF has offsets to
another section where the relocations should be applied.
In dynamic objects and executables, relocations don't have an offset, they have
a virtual address. The section sh_info may or may not point to another section,
but that is not actually used for resolving the relocations.
This patch exposes that in the ObjectFile API. It has the following advantages:
* Most (all?) clients can handle this more efficiently. They will normally walk
all relocations, so making an effort to iterate in a particular order doesn't
save time.
* llvm-readobj now prints relocations in the same way the native readelf does.
* probably most important, relocations that don't point to any section are now
visible. This is the case of relocations in the rela.dyn section. See the
updated relocation-executable.test for example.
llvm-svn: 182908
Fixes PR16146: gdb.base__call-ar-st.exp fails after
pre-RA-sched=source fixes.
Patch by Xiaoyi Guo!
This also fixes an unsupported dbg.value test case. Codegen was
previously incorrect but the test was passing by luck.
llvm-svn: 182885
FastISel was only enabled for iOS ARM and Thumb2, this patch enables it
for ARM (not Thumb2) on Linux and NaCl.
Thumb2 support needs a bit more work, mainly around register class
restrictions.
The patch punts to SelectionDAG when doing TLS relocation on non-Darwin
targets. I will fix this and other FastISel-to-SelectionDAG failures in
a separate patch.
The patch also forces FastISel to retain frame pointers: iOS always
keeps them for backtracking (so emitted code won't change because of
this), but Linux was getting much worse code that was incorrect when
using big frames (such as test-suite's lencod). I'll also fix this in a
later patch, it will probably require a peephole so that FastISel
doesn't rematerialize frame pointers back-to-back.
The test changes are straightforward, similar to:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130513/174279.html
They also add a vararg test that got dropped in that change.
I ran all of test-suite on A15 hardware with --optimize-option=-O0 and
all the tests pass.
llvm-svn: 182877
This allows rematerialization during register coalescing to handle
more cases involving operations like SUBREG_TO_REG which might need to
be rematerialized using sub-register indices.
For example, code like:
v1(GPR64):sub_32 = MOVZ something
v2(GPR64) = COPY v1(GPR64)
should be convertible to:
v2(GPR64):sub_32 = MOVZ something
but previously we just gave up in places like this.
llvm-svn: 182872
Since the testing case uses ref_addr, which requires version 3+ to work,
we will solve the dwarf version issue first.
This patch also causes failures in one of the bots. I will update the patch
accordingly in my next attempt.
rdar://13926659
llvm-svn: 182867
This patch solves the problem of numeric register values not being accepted:
../set_alias.s:1:11: error: expected valid expression after comma
.set r4,$4
^
The parsing of the .set directive is changed, as is the handling of symbols in
code, to enable this feature.
A test example is added.
Patch by Vladimir Medic
llvm-svn: 182807
- llvm.loop.parallel metadata has been renamed to llvm.loop to be more generic,
making it the root of additional loop metadata.
- Loop::isAnnotatedParallel now looks for llvm.loop and associated
llvm.mem.parallel_loop_access
- document llvm.loop and update llvm.mem.parallel_loop_access
- add support for llvm.vectorizer.width and llvm.vectorizer.unroll
- document llvm.vectorizer.* metadata
- add utility class LoopVectorizerHints for getting/setting loop metadata
- use llvm.vectorizer.width=1 to indicate already vectorized instead of
already_vectorized
- update existing tests that used llvm.loop.parallel and
llvm.vectorizer.already_vectorized
Reviewed by: Nadav Rotem
llvm-svn: 182802
Previously we would read-modify-write the target bits when processing
relocations for the MCJIT. This had the problem that when relocations
were processed multiple times for the same object file (as they can
be), the result is not idempotent and the values became corrupted.
The solution to this is to take any bits used in the destination from
the pristine object file as LLVM emitted it.
This should fix PR16013 and remote MCJIT on ARM ELF targets.
llvm-svn: 182800
from a different CU.
We used to print out an error message and fail to generate inlined_subroutine.
If we use ref_addr in the generated DWARF, the DWARF version should be 3 or
above.
rdar://13926659
llvm-svn: 182791
Extend LinkModules to pass a ValueMaterializer to RemapInstruction and friends to lazily create Functions for lazily linked globals. This is a big win when linking small modules with large (mostly unused) library modules.
llvm-svn: 182776
This patch adds support for the CRJ and CGRJ instructions. Support for
the immediate forms will be a separate patch.
The architecture has a large number of comparison instructions. I think
it's generally better to concentrate on using the "best" comparison
instruction first and foremost, then only use something like CRJ if
CR really was the natual choice of comparison instruction. The patch
therefore opportunistically converts separate CR and BRC instructions
into a single CRJ while emitting instructions in ISelLowering.
llvm-svn: 182764
When -ffast-math is in effect (on Linux, at least), clang defines
__FINITE_MATH_ONLY__ > 0 when including <math.h>. This causes the
preprocessor to include <bits/math-finite.h>, which renames the sqrt functions.
For instance, "sqrt" is renamed as "__sqrt_finite".
This patch adds the 3 new names in such a way that they will be treated
as equivalent to their respective original names.
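A hedged C sketch of how the renamed symbol shows up (assuming the glibc
<math.h> behavior described above):
/* Compiled with -ffast-math on Linux/glibc, <math.h> redirects sqrt to
   __sqrt_finite, so the generated IR calls __sqrt_finite instead of sqrt;
   the library-call recognizer now treats both names the same way. */
#include <math.h>

double hyp(double a, double b) {
  return sqrt(a * a + b * b);
}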
llvm-svn: 182739
When expanding unaligned Altivec loads, we use the decremented offset trick to
prevent page faults. Unfortunately, if we have a sequence of consecutive
unaligned loads, this leads to suboptimal code generation because the 'extra'
load from the first unaligned load can be combined with the base load from the
second (but only if the decremented offset trick is not used for the first).
Search up and down the chain, through loads and token factors, looking for
consecutive loads, and if one is found, don't use the offset reduction trick.
These duplicate loads are later combined to yield the desired sequence (in the
future, we might want a more-powerful chain search, but that will require some
changes to allow the combiner routines to access the AA object).
This should complete the initial implementation of the optimized unaligned
Altivec load expansion. There is some refactoring that should be done, but
that will happen when the unaligned store expansion is added.
llvm-svn: 182719
The lvsl permutation control instruction is a function only of the alignment of
the pointer operand (relative to the 16-byte natural alignment of Altivec
vectors). As a result, multiple lvsl intrinsics where the operands differ by a
multiple of 16 can be combined.
llvm-svn: 182708
Altivec only directly supports aligned loads, but the loads have a strange
property: If given an unaligned address, they truncate the address to the next
lower aligned address, and load from there. This property, along with an extra
load and some special-purpose permutation-control instructions that generate
the appropriate permutations from the original unaligned address, allow
efficient lowering of unaligned loads. This code uses the trick explained in the
Apple Velocity Engine optimization overview document to prevent the needed
extra load from possibly causing a page fault if the original address happens
to be aligned.
As noted in the FIXMEs, there are several additional optimizations that can be
performed to reduce the cost of these loads even more. These will be
implemented in future commits.
llvm-svn: 182691
Previously, an invalid instruction like:
foo %r1, %r0
would generate the rather odd error message:
....: error: unknown token in expression
foo %r1, %r0
^
We now get the more informative:
....: error: invalid instruction
foo %r1, %r0
^
The same would happen if an address were used where a register was expected.
We now get "invalid operand for instruction" instead.
llvm-svn: 182644
The idea is to make sure that:
(1) "register expected" is restricted to cases where ParseRegister()
is called and the token obviously isn't a register.
(2) "invalid register" is restricted to cases where a register-like "%..."
sequence is found, but the "..." makes no sense.
(3) the generic "invalid operand for instruction" is used in cases where
the wrong register type is used (GPR instead of FPR, etc.).
(4) the new "invalid register pair" is used if the register has the right type,
but is not a valid register pair.
Testing of (1)-(3) is now restricted to regs-bad.s. It uses a representative
instruction for each register class to make sure that only registers from
that class are accepted.
(4) is tested by both regs-bad.s (which checks all invalid register pairs)
and insn-bad.s (which tests one invalid pair for each instruction that
requires a pair).
While there, I changed "Number" to "Num" for consistency with the
operand class.
llvm-svn: 182643
as the BinaryOperator, *not* in the block where the IRBuilder is currently
inserting into. Fixes a bug where scalarizePHI would create instructions
that would not dominate all uses.
llvm-svn: 182639
Other than recognizing the attribute, the patch does little else.
It changes the branch probability analyzer so that edges into
blocks postdominated by a cold function are given low weight.
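A hedged C illustration (hypothetical names, assuming a frontend that maps
__attribute__((cold)) onto the new IR attribute):
/* report_fatal is marked cold, so the edge into the b == 0 block, which ends
   up calling the cold function, is now given a low weight. */
__attribute__((cold, noinline)) static void report_fatal(const char *msg) { (void)msg; }

int checked_div(int a, int b) {
  if (b == 0) {
    report_fatal("division by zero");  /* cold path */
    return 0;
  }
  return a / b;                        /* hot path  */
}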
Added analysis and code generation tests. Added documentation for the
new attribute.
llvm-svn: 182638
This is a basic first step towards symbolization of disassembled
instructions. This used to be done using externally provided (C API)
callbacks. This patch introduces:
- the MCSymbolizer class, that mimics the same functions that were used
in the X86 and ARM disassemblers to symbolize immediate operands and
to annotate loads based off PC (for things like c string literals).
- the MCExternalSymbolizer class, which implements the old C API.
- the MCRelocationInfo class, which provides a way for targets to
translate relocations (either object::RelocationRef, or disassembler
C API VariantKinds) to MCExprs.
- the MCObjectSymbolizer class, which does symbolization using what it
finds in an object::ObjectFile. This makes simple symbolization (with
no fancy relocation stuff) work for all object formats!
- x86-64 Mach-O and ELF MCRelocationInfos.
- A basic ARM Mach-O MCRelocationInfo, that provides just enough to
support the C API VariantKinds.
Most of what works in otool (the only user of the old symbolization API
that I know of) for x86-64 symbolic disassembly (-tvV) works, namely:
- symbol references: call _foo; jmp 15 <_foo+50>
- relocations: call _foo-_bar; call _foo-4
- __cf?string: leaq 193(%rip), %rax ## literal pool for "hello"
Stub support is the main missing part (because libObject doesn't know,
among other things, about mach-o indirect symbols).
As for the MCSymbolizer API, instead of relying on the disassemblers
to call the tryAdding* methods, maybe this could be done automagically
using InstrInfo? For instance, even though PC-relative LEAs are used
to get the address of string literals in a typical Mach-O file, a MOV
would be used in an ELF file. And right now, the explicit symbolization
only recognizes PC-relative LEAs. InstrInfo should already have
most of what is needed to know what to symbolize, so this can
definitely be improved.
I'd also like to remove object::RelocationRef::getValueString (it seems
only used by relocation printing in objdump), as simply printing the
created MCExpr is definitely enough (and cleaner than string concats).
llvm-svn: 182625
This implements the @llvm.readcyclecounter intrinsic as the specific
MRC instruction specified in the ARM manuals for CPUs with the Power
Management extensions.
Older CPUs had slightly different methods which may also have to be
implemented eventually, but this should cover all v7 cases.
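For reference, a hedged C example of how the intrinsic is typically reached
(assuming clang's builtin, which lowers to @llvm.readcyclecounter):
/* On ARMv7 with the Power Management extensions, this now selects the MRC
   read of the cycle counter described above. */
unsigned long long cycles(void) {
  return __builtin_readcyclecounter();
}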
rdar://problem/13939186
llvm-svn: 182603
Now that the LiveDebugVariables pass is running *after* register
coalescing, the ConnectedVNInfoEqClasses class needs to deal with
DBG_VALUE instructions.
This only comes up when rematerialization during coalescing causes the
remaining live range of a virtual register to separate into two
connected components.
llvm-svn: 182592
There were bits & pieces of code lying around that may've given the
impression that debug info metadata supported the possibility that a
subprogram's type could be specified by a non-subroutine type describing
the return type of a void function. This support was incomplete &
unnecessary. Asserts & API have been changed to make the desired usage
more clear.
llvm-svn: 182532
We are not working on a DAG and I ran into a number of problems when I enabled the vectorization of 'diamond-trees' (trees that share leaves).
* Improved the numbering API.
* Changed the placement of new instructions to the last root.
* Fixed a bug with external tree users with non-zero lane.
* Fixed a bug in the placement of in-tree users.
llvm-svn: 182508
The earlier change list introduced the following inst combines:
B * (uitofp i1 C) -> select C, B, 0
A * (1 - uitofp i1 C) -> select C, 0, A
select C, 0, B + select C, A, 0 -> select C, A, B
Together these 3 changes would simplify :
A * (1 - uitofp i1 C) + B * uitofp i1 C
down to :
select C, B, A
In practice we found that the first two substitutions can have a
negative effect on performance, because they reduce opportunities to
use FMA contractions; between the two options FMAs are often the
better choice. This change list amends the previous one to enable
just these inst combines:
select C, B, 0 + select C, 0, A -> select C, B, A
A * (1 - uitofp i1 C) + B * uitofp i1 C -> select C, B, A
llvm-svn: 182499
The Value pointers we store in the induction variable list can be RAUW'ed by a
call to SCEVExpander::expandCodeFor, use a TrackingVH instead. Do the same thing
in some other places where we store pointers that could potentially be RAUW'ed.
Fixes PR16073.
llvm-svn: 182485
This is to fix PR15408 where an undefined symbol Lline_table_start1 is used.
Since we do not generate the debug_line section when .loc is used,
Lline_table_start1 is not emitted and we can't refer to it when calculating
at_stmt_list for a compile unit.
llvm-svn: 182344
pic calls. These need to be there so we don't try to use helper
functions when we call those.
As part of this, make sure that we properly exclude helper functions in pic
mode when indirect calls are involved.
llvm-svn: 182343
This resolves the last of the PR14606 failures in the GDB 7.5 test
suite by implementing an optional name field for
DW_TAG_imported_modules/DIImportedEntities and using that to implement
C++ namespace aliases (eg: "namespace X = Y;").
llvm-svn: 182328
By default, a teq instruction is inserted after integer divide. No divide-by-zero
checks are performed if option "-mnocheck-zero-division" is used.
llvm-svn: 182306