IndVarSimplify is willing to move divide instructions outside of their
loop bodies if they are loop-invariant. However, it may not be
safe to expand them if we do not know whether they can trap.
Instead, check to see if it is not safe to expand the instruction and
skip the expansion.
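For illustration, the guard looks roughly like this (names such as isSafeToExpand are assumptions for the sketch, not necessarily the exact patch):
  // Skip hoisting when expanding the exit value might introduce a trap,
  // e.g. a division whose divisor is not known to be non-zero.
  const SCEV *ExitValue = SE->getSCEVAtScope(Inst, L->getParentLoop());
  if (!isSafeToExpand(ExitValue, *SE))   // assumed helper
    continue;                            // keep the division inside the loop
  Value *V = Rewriter.expandCodeFor(ExitValue, Inst->getType(), InsertPt);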
This fixes PR16041.
Testcase by Rafael Ávila de Espíndola.
llvm-svn: 183239
The ARM backend did not expect LDRBi12 to hold a constant pool operand.
Allow LLVM to deal with the instruction similarly to how it deals with
LDRi12.
This fixes PR16215.
llvm-svn: 183238
The problem this time seems to be a thinko. We were assuming that in the CFG
A
| \
| B
| /
C
speculating the basic block B would cause only the phi value for the B->C edge
to be speculated. That is not true: the phis are semantically on the edges, so
if the A->B->C path is taken, any code needed for the A->C edge is not executed,
and we have to consider it too when deciding to speculate B.
llvm-svn: 183226
(4.58s vs 3.2s on an oldish Mac Tower).
The corresponding source is excerpted below. The loop accounts for about 90% of the execution time.
--------------------
cat -n test-suite/MultiSource/Benchmarks/Olden/em3d/make_graph.c
90
91 for (k=0; k<j; k++)
92 if (other_node == cur_node->to_nodes[k]) break;
The defective layout is sketched below, where the two branches need to swap.
------------------------------------------------------------------------
L:
...
if (cond) goto out-of-loop
goto L
While this code sequence is defective, I don't understand why it incurs 1/3 of the
execution time. CPU-event profiling indicates the poor layout does not increase
branch mispredictions; it doesn't increase stall cycles at all, and it doesn't
prevent the CPU from detecting the loop (i.e., the Loop Stream Detector seems to
be working fine as well)...
The root cause of the problem is that the layout pass calls AnalyzeBranch()
with a basic block that has not been updated to reflect its current layout.
rdar://13966341
llvm-svn: 183174
PR16069 is an interesting case where an incoming value to a PHI is a
trap value while also being a 'ConstantExpr'.
We do not consider this case when performing the 'HoistThenElseCodeToIf'
optimization.
Instead, make our modifications more conservative if we detect that we
cannot transform the PHI into a select.
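A minimal sketch of the conservative check, assuming Constant::canTrap() (the exact placement in SimplifyCFG may differ):
  // If an incoming value is a constant expression that may trap (e.g. a
  // constant division), we cannot blindly turn the PHI into a select.
  if (ConstantExpr *CE = dyn_cast<ConstantExpr>(PN->getIncomingValue(i)))
    if (CE->canTrap())
      return false;   // bail out of the transformation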
llvm-svn: 183152
This was missing from r182908. I didn't notice it at the time because the MCJIT tests were
disabled when building with cmake on ppc64 (which I fixed in r183143).
llvm-svn: 183147
The CopyToReg nodes will sometimes try to copy a value from a VGPR to an
SGPR. This kind of copy is not possible, so we need to detect
VGPR->SGPR copies and do something else. The current strategy is to
replace these copies with VGPR->VGPR copies and hope that all the users
of CopyToReg can accept VGPRs as arguments.
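A sketch of the detection (the predicates and helper here are stand-ins, not the real pass code):
  // If ISel produced an SGPR destination for a VGPR source, retarget the
  // copy to a fresh VGPR, since a direct VGPR->SGPR copy is impossible.
  if (isVGPRClass(SrcRC) && isSGPRClass(DstRC)) {      // assumed predicates
    unsigned NewDst = MRI.createVirtualRegister(SrcRC);
    rewriteUsesOfCopyDest(Copy, NewDst);               // hypothetical helper
  }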
llvm-svn: 183132
This is needed in clang so one can check if the object needs the
destructor called after its memory was freed. This is useful when
creating many APInt/APFloat objects with placement new, where the
overhead of tracking the pointers for cleanup is significant.
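A sketch of the intended usage pattern (the allocator is a stand-in):
  #include "llvm/ADT/APInt.h"
  #include <new>
  // Construct an APInt in bump-allocated memory with placement new.
  void *Mem = Allocator.Allocate(sizeof(APInt), alignof(APInt));
  APInt *V = new (Mem) APInt(128, 42);
  // ... later, at teardown:
  if (V->needsCleanup())   // true only if the value owns heap storage
    V->~APInt();           // otherwise the destructor call can be skipped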
llvm-svn: 183100
index greater than the size of the vector is invalid. The shuffle may be
shrinking the size of the vector. Fixes a crash!
Also drop the maximum recursion depth of the safety check for this
optimization to five.
llvm-svn: 183080
The MOV64ri64i32 instruction required hacky MCInst lowering because it
was allocated as setting a GR64, but the eventual instruction ("movl")
only set a GR32. This converts it into a so-called "MOV32ri64" which
still accepts an (appropriate) 64-bit immediate but defines a GR32.
This is then converted to the full GR64 by a SUBREG_TO_REG operation,
thus keeping everyone happy.
This fixes a typo in the opcode field of the original patch, which
should make the legacy JIT work again (& adds a test for that problem).
llvm-svn: 183068
Fixes rdar:14036816, PR16130.
There is an opportunity to compute precise trip counts for 'or'
expressions and multi-exit loops.
rdar:14038809: Optimize trip count computation for multi-exit loops.
To do this we need to record the fact that ExitLimit assumes NSW. When
it does not, we can safely assume that the loop trip count is the
minimum ExitLimit across all subexpressions and loop exits.
llvm-svn: 183060
Use ScalarEvolution's getBackedgeTakenCount API instead of getExitCount since
that is really what we want to know. Using the more specific getExitCount was
safe because we made sure that there is only one exiting block.
No functionality change.
llvm-svn: 183047
Account for the cost of the scaling factor in Loop Strength Reduction when rating the
formulae. This uses a target hook.
The default implementation of the hook is: if the addressing mode is legal, the
scaling factor is free.
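In code, that default is roughly (a paraphrase, not the exact hook signature):
  // If the target can legally encode the addressing mode, scaling costs
  // nothing; otherwise charge the formula so LSR penalizes it.
  int getScalingFactorCost(const AddrMode &AM, Type *Ty) const {
    return isLegalAddressingMode(AM, Ty) ? 0 : 1;
  }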
<rdar://problem/13806271>
llvm-svn: 183045
We check that instructions in the loop don't have outside users (except if
they are reduction values). Unfortunately, we skipped this check for
if-convertible PHIs.
Fixes PR16184.
llvm-svn: 183035
Namely, check whether the target allows folding more than one register into the
addressing mode and, if so, adjust the cost accordingly.
Prior to this commit, reg1 + scale * reg2 accesses were artificially preferred
to reg1 + reg2 accesses. Indeed, the cost model wrongly assumed that reg1 + reg2
needs a temporary register for the computation, whereas it was correctly
estimated for reg1 + scale * reg2.
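A sketch of the adjusted accounting (illustrative only):
  // Charge a temporary register only when the target cannot fold both
  // registers into the addressing mode; previously reg1 + reg2 was always
  // charged while reg1 + scale * reg2 was not.
  if (!TLI->isLegalAddressingMode(AM, AccessTy))
    Cost += 1;   // the address must be materialized in a temporary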
<rdar://problem/13973908>
llvm-svn: 183021
NOTE: If this broke your out-of-tree backend, in *RegisterInfo.td, change
the instances of SubRegIndex that have a comps template arg to use the
ComposedSubRegIndex class instead.
In TableGen land, this adds Size and Offset attributes to SubRegIndex,
and the ComposedSubRegIndex class, for which the Size and Offset are
computed by TableGen. This also adds an accessor in MCRegisterInfo, and
Size/Offsets for the X86 and ARM subreg indices.
llvm-svn: 183020
These instructions are deprecated oddities, but we still need to be able to
disassemble (and reassemble) them if and when they're encountered.
Patch by Amaury de la Vieuville.
llvm-svn: 183011
The disassembly of VEXT instructions was too lax in the bits checked. This
fixes the case where the instruction affects Q-registers but a misaligned lane
was specified (should be UNDEFINED).
Patch by Amaury de la Vieuville
llvm-svn: 183003
Unlike most -- hopefully "all other", but I'm still checking -- memory
instructions we support, LOAD REVERSED and STORE REVERSED may access
the memory location several times. This means that they are not suitable
for volatile loads and stores.
This patch is a prerequisite for better atomic load and store support.
The same principle applies there: almost all memory instructions we
support are inherently atomic ("block concurrent"), but LOAD REVERSED
and STORE REVERSED are exceptions.
Other instructions continue to allow volatile operands. I will add
positive "allows volatile" tests at the same time as the "allows atomic
load or store" tests.
llvm-svn: 183002
Now that 3.3 is branched, we are re-enabling virtual registers to help
iron out bugs before the next release. Some of the post-RA passes do
not play well with virtual registers, so we disable them for now. The
needed functionality of the PrologEpilogInserter pass is copied to a
new backend-specific NVPTXPrologEpilog pass.
The test for this commit is that it does not break the existing tests.
llvm-svn: 182998
Before this change, each module defined a weak_odr global __msan_track_origins
with a value of 1 if origin tracking is enabled, 0 if disabled. If there are
modules with different values, any of them may win. If 0 wins, and there is at
least one module with 1, the program will most likely crash.
With this change, __msan_track_origins is only emitted if origin tracking is
on. Then runtime library detects if there is at least one module with origin
tracking, and enables runtime support for it.
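Roughly, the instrumentation now does something like this (a simplified sketch using the C++ API):
  // Only modules built with origin tracking define the flag at all; the
  // runtime then checks whether any loaded module provides it.
  if (TrackOrigins)
    new GlobalVariable(M, IRB.getInt32Ty(), /*isConstant=*/true,
                       GlobalValue::WeakODRLinkage, IRB.getInt32(1),
                       "__msan_track_origins");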
llvm-svn: 182997
The MOV64ri64i32 instruction required hacky MCInst lowering because it was
allocated as setting a GR64, but the eventual instruction ("movl") only set a
GR32. This converts it into a so-called "MOV32ri64" which still accepts an
(appropriate) 64-bit immediate but defines a GR32. This is then converted to
the full GR64 by a SUBREG_TO_REG operation, thus keeping everyone happy.
llvm-svn: 182991
Fixes PR16130 - clang produces incorrect code with loop/expression at -O2.
This is a 2+ year old bug that's now holding up the release. It's a
case where we knowingly made aggressive assumptions about undefined
behavior. These assumptions are wrong when SCEV is computing a
subexpression that does not directly control the branch. With this
fix, we avoid making assumptions in those cases but still optimize the
common case. SCEV's trip count computation for exits controlled by
'or' expressions is now analogous to the trip count computation for
loops with multiple exits. I had already fixed the multiple exit case
to be conservative.
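For illustration, a loop whose exit is controlled by an 'or' expression (example only):
  // The latch exits when (i >= n) || (p[i] == 0); SCEV may only use
  // no-wrap assumptions from the subexpression that actually controls
  // the exit, not from every operand of the 'or'.
  int i;
  for (i = 0; i < n && p[i] != 0; ++i)
    sum += p[i];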
llvm-svn: 182989
Instead of having a bunch of separate MOV8r0, MOV16r0, ... pseudo-instructions,
it's better to use a single MOV32r0 (which will expand to "xorl %reg, %reg")
and obtain other sizes with EXTRACT_SUBREG and SUBREG_TO_REG. The encoding is
smaller and partial register updates can sometimes be avoided.
Until recently, this sequence was a barrier to rematerialization though. That
should now be fixed so it's an appropriate time to make the change.
llvm-svn: 182928
r182872 introduced a bug in how the register-coalescer's rematerialization
handled defining a physical register. It relied on the output of the
coalescer's setRegisters method to determine whether the replacement
instruction needed an implicit-def. However, this value isn't necessarily the
same as the CopyMI's actual destination register which is what the rest of the
basic-block expects us to be defining.
The commit changes the rematerializer to use the actual register attached to
CopyMI in its decision.
This will be tested soon by an X86 patch which moves everything to using
MOV32r0 instead of other sizes.
llvm-svn: 182925
32-bit writes on amd64 zero out the high bits of the corresponding 64-bit
register. LLVM makes use of this for zero-extension, but until now relied on
custom MCLowering and other code to fix up instructions. Now that we have
proper handling of sub-registers, this can be done by creating SUBREG_TO_REG
instructions at selection-time.
Should be no change in functionality.
llvm-svn: 182921
The code to distinguish between unaligned and aligned addresses was
already there, so this is mostly just a switch-on-and-test process.
llvm-svn: 182920
For COFF and MachO, sections semantically have relocations that apply to them.
That is not the case on ELF.
In relocatable objects (.o), a section with relocations in ELF has offsets to
another section where the relocations should be applied.
In dynamic objects and executables, relocations don't have an offset, they have
a virtual address. The section sh_info may or may not point to another section,
but that is not actually used for resolving the relocations.
This patch exposes that in the ObjectFile API. It has the following advantages:
* Most (all?) clients can handle this more efficiently. They will normally walk
all relocations, so making an effort to iterate in a particular order doesn't
save time.
* llvm-readobj now prints relocations in the same way the native readelf does.
* Probably most important, relocations that don't point to any section are now
visible. This is the case of relocations in the rela.dyn section. See the
updated relocation-executable.test for example.
llvm-svn: 182908
Fixes PR16146: gdb.base__call-ar-st.exp fails after
pre-RA-sched=source fixes.
Patch by Xiaoyi Guo!
This also fixes an unsupported dbg.value test case. Codegen was
previously incorrect but the test was passing by luck.
llvm-svn: 182885
This corrects a problem where x86 instructions that implicitly define/use both
an A-register (RAX, EAX, ..) and EFLAGS were declared as only defining/using
EFLAGS, because the outer "let Defs/Uses = [EFLAGS]" in the various multiclasses
overrides the "let Defs/Uses = [areg]" in BinOpAI.
The instructions deriving from BinOpAI were moved out of the "let Defs", and a
BinOpAI_FF class was created, for instructions that implicitly define and use
EFLAGS and the A-register (SBC, ADC).
llvm-svn: 182883
FastISel was only enabled for iOS ARM and Thumb2; this patch enables it
for ARM (not Thumb2) on Linux and NaCl.
Thumb2 support needs a bit more work, mainly around register class
restrictions.
The patch punts to SelectionDAG when doing TLS relocation on non-Darwin
targets. I will fix this and other FastISel-to-SelectionDAG failures in
a separate patch.
The patch also forces FastISel to retain frame pointers: iOS always
keeps them for backtracking (so emitted code won't change because of
this), but Linux was getting much worse code that was incorrect when
using big frames (such as test-suite's lencod). I'll also fix this in a
later patch, it will probably require a peephole so that FastISel
doesn't rematerialize frame pointers back-to-back.
The test changes are straightforward, similar to:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130513/174279.html
They also add a vararg test that got dropped in that change.
I ran all of test-suite on A15 hardware with --optimize-option=-O0 and
all the tests pass.
llvm-svn: 182877
This allows rematerialization during register coalescing to handle
more cases involving operations like SUBREG_TO_REG which might need to
be rematerialized using sub-register indices.
For example, code like:
v1(GPR64):sub_32 = MOVZ something
v2(GPR64) = COPY v1(GPR64)
should be convertible to:
v2(GPR64):sub_32 = MOVZ something
but previously we just gave up in places like this.
llvm-svn: 182872
Since the testing case uses ref_addr, which requires version 3+ to work,
we will solve the dwarf version issue first.
This patch also causes failures in one of the bots. I will update the patch
accordingly in my next attempt.
rdar://13926659
llvm-svn: 182867
Tidy up three places where the register class for ARM and Thumb wasn't
restrictive enough:
- No PC dest for reg-reg add/orr/sub.
- No PC dest for shifts.
- No PC or SP for Thumb2 reg-imm add.
I encountered this while combining FastISel with
-verify-machineinstrs. These instructions defined registers whose
classes weren't restrictive enough, and the uses failed
verification. They're also undefined in the ISA, or would produce code
that FastISel wouldn't want. This doesn't fix the register class
narrowing issue (where uses should restrict definitions), and isn't
thorough, but it's a small step in the right direction.
llvm-svn: 182863
This patch solves the problem of numeric register values not being accepted:
../set_alias.s:1:11: error: expected valid expression after comma
.set r4,$4
^
The parsing of the .set directive has been changed, as has the handling of
symbols in code, to enable this feature.
A test example has been added.
Patch by Vladimir Medic
llvm-svn: 182807
- llvm.loop.parallel metadata has been renamed to llvm.loop to be more generic,
making it the root of additional loop metadata.
- Loop::isAnnotatedParallel now looks for llvm.loop and associated
llvm.mem.parallel_loop_access
- document llvm.loop and update llvm.mem.parallel_loop_access
- add support for llvm.vectorizer.width and llvm.vectorizer.unroll
- document llvm.vectorizer.* metadata
- add utility class LoopVectorizerHints for getting/setting loop metadata
- use llvm.vectorizer.width=1 to indicate already vectorized instead of
already_vectorized
- update existing tests that used llvm.loop.parallel and
llvm.vectorizer.already_vectorized
Reviewed by: Nadav Rotem
llvm-svn: 182802
Previously we would read-modify-write the target bits when processing
relocations for the MCJIT. This had the problem that when relocations
were processed multiple times for the same object file (as they can
be), the result is not idempotent and the values became corrupted.
The solution to this is to take any bits used in the destination from
the pristine object file as LLVM emitted it.
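Conceptually, the change is from read-modify-write to a pure function of the pristine bytes (a sketch; the helpers are hypothetical):
  // Before (not idempotent if the relocation is processed twice):
  //   *Target |= encodeBits(Value);
  // After: recombine with the bits LLVM originally emitted.
  uint32_t Pristine = readPristineWord(Obj, Offset);   // hypothetical helper
  *Target = (Pristine & ~Mask) | (encodeBits(Value) & Mask);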
This should fix PR16013 and remote MCJIT on ARM ELF targets.
llvm-svn: 182800
from a different CU.
We used to print out an error message and fail to generate inlined_subroutine.
If we use ref_addr in the generated DWARF, the DWARF version should be 3 or
above.
rdar://13926659
llvm-svn: 182791
Extend LinkModules to pass a ValueMaterializer to RemapInstruction and friends to lazily create Functions for lazily linked globals. This is a big win when linking small modules with large (mostly unused) library modules.
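A sketch of the hook (simplified; based on the ValueMaterializer interface this patch introduces):
  // RemapInstruction consults the materializer when it finds a value with
  // no mapping yet, so function bodies are created only on first use.
  class LazyLinkMaterializer : public ValueMaterializer {
    Value *materializeValueFor(Value *V) override {
      if (Function *F = dyn_cast<Function>(V))
        return createLazyDeclaration(F);   // hypothetical helper
      return 0;                            // fall back to the default mapping
    }
  };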
llvm-svn: 182776
This patch adds support for the CRJ and CGRJ instructions. Support for
the immediate forms will be a separate patch.
The architecture has a large number of comparison instructions. I think
it's generally better to concentrate on using the "best" comparison
instruction first and foremost, then only use something like CRJ if
CR really was the natural choice of comparison instruction. The patch
therefore opportunistically converts separate CR and BRC instructions
into a single CRJ while emitting instructions in ISelLowering.
llvm-svn: 182764
When -ffast-math is in effect (on Linux, at least), clang defines
__FINITE_MATH_ONLY__ > 0 when including <math.h>. This causes the
preprocessor to include <bits/math-finite.h>, which renames the sqrt functions.
For instance, "sqrt" is renamed as "__sqrt_finite".
This patch adds the 3 new names in such a way that they will be treated
as equivalent to their respective original names.
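For example (illustrative):
  /* With -ffast-math on Linux, <math.h> redirects this call: */
  #include <math.h>
  double root(double x) { return sqrt(x); }  /* calls __sqrt_finite */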
llvm-svn: 182739
isConsecutiveLS is a slightly more general form of
SelectionDAG::isConsecutiveLoad. Aside from handling stores, it also does
not assume that equality of the chain operands is necessary. In the case of the PPC
backend, this chain condition is checked in a more general way by the
surrounding code.
Mostly, this is part of the refactoring in preparation for supporting optimized
unaligned stores.
llvm-svn: 182723
When expanding unaligned Altivec loads, we use the decremented offset trick to
prevent page faults. Unfortunately, if we have a sequence of consecutive
unaligned loads, this leads to suboptimal code generation because the 'extra'
load from the first unaligned load can be combined with the base load from the
second (but only if the decremented offset trick is not used for the first).
Search up and down the chain, through loads and token factors, looking for
consecutive loads, and if one is found, don't use the offset reduction trick.
These duplicate loads are later combined to yield the desired sequence (in the
future, we might want a more-powerful chain search, but that will require some
changes to allow the combiner routines to access the AA object).
This should complete the initial implementation of the optimized unaligned
Altivec load expansion. There is some refactoring that should be done, but
that will happen when the unaligned store expansion is added.
llvm-svn: 182719
The lvsl permutation control instruction is a function only of the alignment of
the pointer operand (relative to the 16-byte natural alignment of Altivec
vectors). As a result, multiple lvsl intrinsics where the operands differ by a
multiple of 16 can be combined.
llvm-svn: 182708
Use a field in the SelectionDAGNode object to track its IR ordering.
This adds fields and utility classes without changing existing
interfaces or functionality.
llvm-svn: 182701
Altivec only directly supports aligned loads, but the loads have a strange
property: If given an unaligned address, they truncate the address to the next
lower aligned address, and load from there. This property, along with an extra
load and some special-purpose permutation-control instructions that generate
the appropriate permutations from the original unaligned address, allows
efficient lowering of unaligned loads. This code uses the trick explained in the
Apple Velocity Engine optimization overview document to prevent the needed
extra load from possibly causing a page fault if the original address happens
to be aligned.
As noted in the FIXMEs, there are several additional optimizations that can be
performed to reduce the cost of these loads even more. These will be
implemented in future commits.
llvm-svn: 182691
Previously, an invalid instruction like:
foo %r1, %r0
would generate the rather odd error message:
....: error: unknown token in expression
foo %r1, %r0
^
We now get the more informative:
....: error: invalid instruction
foo %r1, %r0
^
The same would happen if an address were used where a register was expected.
We now get "invalid operand for instruction" instead.
llvm-svn: 182644
The idea is to make sure that:
(1) "register expected" is restricted to cases where ParseRegister()
is called and the token obviously isn't a register.
(2) "invalid register" is restricted to cases where a register-like "%..."
sequence is found, but the "..." makes no sense.
(3) the generic "invalid operand for instruction" is used in cases where
the wrong register type is used (GPR instead of FPR, etc.).
(4) the new "invalid register pair" is used if the register has the right type,
but is not a valid register pair.
Testing of (1)-(3) is now restricted to regs-bad.s. It uses a representative
instruction for each register class to make sure that only registers from
that class are accepted.
(4) is tested by both regs-bad.s (which checks all invalid register pairs)
and insn-bad.s (which tests one invalid pair for each instruction that
requires a pair).
While there, I changed "Number" to "Num" for consistency with the
operand class.
llvm-svn: 182643
as the BinaryOperator, *not* in the block where the IRBuilder is currently
inserting. Fixes a bug where scalarizePHI would create instructions
that would not dominate all uses.
llvm-svn: 182639
Other than recognizing the attribute, the patch does little else.
It changes the branch probability analyzer so that edges into
blocks postdominated by a cold function are given low weight.
Added analysis and code generation tests. Added documentation for the
new attribute.
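At the source level this corresponds to something like (illustrative; shown with the GNU attribute that lowers to the new IR attribute):
  __attribute__((cold)) void fatal_error(const char *msg);
  void check(int broken) {
    if (broken)
      fatal_error("bad state");  /* edges into this block get low weight */
  }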
llvm-svn: 182638
There was exactly one caller using this API right, the others were relying on
specific behavior of the default implementation. Since it's too hard to use it
right, just remove it and standardize on the default behavior.
Defines away PR16132.
llvm-svn: 182636
In these builds, the assert() calls are completely compiled out of the code,
leaving "End" unused. Directly accessing it should not have a performance
impact, as it is just a data member.
llvm-svn: 182634
This patch builds on some existing code to do CFG reconstruction from
a disassembled binary:
- MCModule represents the binary, and has a list of MCAtoms.
- MCAtom represents either disassembled instructions (MCTextAtom), or
contiguous data (MCDataAtom), and covers a specific range of addresses.
- MCBasicBlock and MCFunction form the reconstructed CFG. An MCBB is
backed by an MCTextAtom, and has the usual successors/predecessors.
- MCObjectDisassembler creates a module from an ObjectFile using a
disassembler. It first builds an atom for each section. It can also
construct the CFG, and this splits the text atoms into basic blocks.
MCModule and MCAtom were only sketched out; MCFunction and MCBB were
implemented under the experimental "-cfg" llvm-objdump -macho option.
This cleans them up for further use; llvm-objdump -d -cfg now generates
graphviz files for each function found in the binary.
In the future, MCObjectDisassembler may be the right place to do
"intelligent" disassembly: for example, handling constant islands is just
a matter of splitting the atom, using information that may be available
in the ObjectFile. Also, better initial atom formation than just using
sections is possible using symbols (and things like Mach-O's
function_starts load command).
This brings two minor regressions in llvm-objdump -macho -cfg:
- The printing of a relocation's referenced symbol.
- An annotation on loop BBs, i.e., BBs that are their own successor.
Relocation printing is replaced by the MCSymbolizer; the basic CFG
annotation will be superseded by more related functionality.
llvm-svn: 182628
This is a basic first step towards symbolization of disassembled
instructions. This used to be done using externally provided (C API)
callbacks. This patch introduces:
- the MCSymbolizer class, that mimics the same functions that were used
in the X86 and ARM disassemblers to symbolize immediate operands and
to annotate loads based off PC (for things like c string literals).
- the MCExternalSymbolizer class, which implements the old C API.
- the MCRelocationInfo class, which provides a way for targets to
translate relocations (either object::RelocationRef, or disassembler
C API VariantKinds) to MCExprs.
- the MCObjectSymbolizer class, which does symbolization using what it
finds in an object::ObjectFile. This makes simple symbolization (with
no fancy relocation stuff) work for all object formats!
- x86-64 Mach-O and ELF MCRelocationInfos.
- A basic ARM Mach-O MCRelocationInfo, that provides just enough to
support the C API VariantKinds.
Most of what works in otool (the only user of the old symbolization API
that I know of) for x86-64 symbolic disassembly (-tvV) works, namely:
- symbol references: call _foo; jmp 15 <_foo+50>
- relocations: call _foo-_bar; call _foo-4
- __cf?string: leaq 193(%rip), %rax ## literal pool for "hello"
Stub support is the main missing part (because libObject doesn't know,
among other things, about mach-o indirect symbols).
As for the MCSymbolizer API, instead of relying on the disassemblers
to call the tryAdding* methods, maybe this could be done automagically
using InstrInfo? For instance, even though PC-relative LEAs are used
to get the address of string literals in a typical Mach-O file, a MOV
would be used in an ELF file. And right now, the explicit symbolization
only recognizes PC-relative LEAs. InstrInfo should already have
most of what is needed to know what to symbolize, so this can
definitely be improved.
I'd also like to remove object::RelocationRef::getValueString (it seems
only used by relocation printing in objdump), as simply printing the
created MCExpr is definitely enough (and cleaner than string concats).
llvm-svn: 182625
Now that there is no longer any distinction between symbolLo
and symbolHi operands in either printing, encoding, or parsing,
the operand types can be removed in favor of simply using
s16imm.
This completes the patch series to decouple lo/hi operand part
processing from the particular instruction whose operand it is.
No change in code generation expected from this patch.
llvm-svn: 182618
- move AsmWriter.h from public headers into lib
- marked all AssemblyWriter functions as non-virtual; no need to override them
- DebugIR now "plugs into" AssemblyWriter with an AssemblyAnnotationWriter helper
- exposed flags to control hiding of a) debug metadata b) debug intrinsic calls
C/R: Paul Redmond
llvm-svn: 182617
When targeting the Darwin assembler, we need to generate markers ha16() and
lo16() to designate the high and low parts of a (symbolic) immediate. This
is necessary not just for plain symbols, but also for certain symbolic
expressions, typically along the lines of ha16(A - B). The latter doesn't
work when simply using VariantKind flags on the symbol reference.
This is why the current back-end uses hacks (explicitly called out as such
via multiple FIXMEs) in the symbolLo/symbolHi print methods.
This patch uses target-defined MCExpr codes to represent the Darwin
ha16/lo16 constructs, following along the lines of the equivalent solution
used by the ARM back end to handle their :upper16: / :lower16: markers.
This allows us to get rid of special handling both in the symbolLo/symbolHi
print method and in the common code MCExpr::print routine. Instead, the
ha16 / lo16 markers are printed simply in a custom print routine for the
target MCExpr types. (As a result, the symbolLo/symbolHi print methods
can now be replaced by a single printS16ImmOperand routine that also handles
symbolic operands.)
The patch also provides an EvaluateAsRelocatableImpl routine to handle
ha16/lo16 constructs. This is not actually used at the moment by any
in-tree code, but is provided as it makes merging into David Fang's
out-of-tree Mach-O object writer simpler.
Since there is no longer any need to treat VK_PPC_GAS_HA16 and
VK_PPC_DARWIN_HA16 differently, they are merged into a single
VK_PPC_ADDR16_HA (and likewise for the _LO16 types).
llvm-svn: 182616
This implements the @llvm.readcyclecounter intrinsic as the specific
MRC instruction specified in the ARM manuals for CPUs with the Power
Management extensions.
Older CPUs had slightly different methods which may also have to be
implemented eventually, but this should cover all v7 cases.
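For example, from C (illustrative):
  unsigned long long t0 = __builtin_readcyclecounter();  /* lowers to the MRC */
  do_work();
  unsigned long long t1 = __builtin_readcyclecounter();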
rdar://problem/13939186
llvm-svn: 182603
Performance monitors, including a basic cycle counter, are an official
extension in the ARMv7 specification. This adds support for enabling and
disabling them, orthogonally from CPU selection.
rdar://problem/13939186
llvm-svn: 182602
Now that the LiveDebugVariables pass is running *after* register
coalescing, the ConnectedVNInfoEqClasses class needs to deal with
DBG_VALUE instructions.
This only comes up when rematerialization during coalescing causes the
remaining live range of a virtual register to separate into two
connected components.
llvm-svn: 182592
The error was:
error: non-constant-expression cannot be narrowed from type 'long long' to 'long' in initializer list [-Wc++11-narrowing]
MI.getOperand(6).getImm() & 0x1F,
llvm-svn: 182584
API with my 176880 revision. If a bad Triple is passed in, it can also assert.
In this case too it should just return 0 to indicate failure to create the
disassembler.
rdar://13955214
llvm-svn: 182542
There were bits & pieces of code lying around that may've given the
impression that debug info metadata supported the possibility that a
subprogram's type could be specified by a non-subroutine type describing
the return type of a void function. This support was incomplete &
unnecessary. Asserts & API have been changed to make the desired usage
more clear.
llvm-svn: 182532
Using PatLeaf rather than ImmLeaf when defining immediate predicates
prevents simple patterns using those predicates from being recognized
for fast instruction selection. This patch replaces the immSExt16
PatLeaf predicate with two ImmLeaf predicates, imm32SExt16 and
imm64SExt16, allowing a few more patterns to be recognized (ADDI,
ADDIC, MULLI, ADDI8, and ADDIC8). Using the new predicates does not
help for LI, LI8, SUBFIC, and SUBFIC8 because these are rejected for
other reasons, but I see no reason to retain the PatLeaf predicate.
No functional change intended, and thus no test cases yet. This is
preliminary work for enabling fast-isel support for PowerPC. When
that support is ready, we'll be able to test this function.
llvm-svn: 182510
We are not working on a DAG, and I ran into a number of problems when I enabled the vectorization of 'diamond-trees' (trees that share leaves).
* Improved the numbering API.
* Changed the placement of new instructions to the last root.
* Fixed a bug with external tree users with non-zero lane.
* Fixed a bug in the placement of in-tree users.
llvm-svn: 182508
The earlier change list introduced the following inst combines:
B * (uitofp i1 C) -> select C, B, 0
A * (1 - uitofp i1 C) -> select C, 0, A
select C, 0, B + select C, A, 0 -> select C, A, B
Together these 3 changes would simplify:
A * (1 - uitofp i1 C) + B * uitofp i1 C
down to:
select C, B, A
In practice we found that the first two substitutions can have a
negative effect on performance, because they reduce opportunities to
use FMA contractions; between the two options FMAs are often the
better choice. This change list amends the previous one to enable
just these inst combines:
select C, B, 0 + select C, 0, A -> select C, B, A
A * (1 - uitofp i1 C) + B * uitofp i1 C -> select C, B, A
llvm-svn: 182499
The Value pointers we store in the induction variable list can be RAUW'ed by a
call to SCEVExpander::expandCodeFor, so use a TrackingVH instead. Do the same thing
in some other places where we store pointers that could potentially be RAUW'ed.
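A sketch of the change (illustrative):
  #include "llvm/Support/ValueHandle.h"
  // Before: raw Value* entries went stale once expandCodeFor RAUW'ed them.
  //   SmallVector<Value *, 8> IVs;
  // After: a TrackingVH transparently follows the replacement value.
  SmallVector<TrackingVH<Value>, 8> IVs;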
Fixes PR16073.
llvm-svn: 182485
Addresses a review comment from Ulrich Weigand. No functional change intended.
I'm not sure whether the old TODO that this patch touches still holds,
but that's something we'd get to when adding a targeted scheduling
description.
llvm-svn: 182474
The original version of the pass could underestimate the length of a backward
branch in cases like:
alignment to N bytes or more
...
relaxable branch A
...
foo: (aligned to M<N bytes)
...
bar: (aligned to N bytes)
...
relaxable branch B to foo
We don't add any misalignment gap for "bar" because N bytes of alignment
had already been reached earlier in the function. In this case, assuming
that A is relaxed can push "foo" closer to "bar", and make B appear to be
in range. Similar problems can occur for forward branches.
I don't think it's possible to create blocks with mixed alignments as
things stand, not least because we haven't yet defined getPrefLoopAlignment()
for SystemZ (that would need benchmarking). So I don't think we can test
this yet.
Thanks to Rafael Espíndola for spotting the bug.
llvm-svn: 182460
the C API to provide their own way of allocating JIT memory (both code
and data) and finalizing memory permissions (page protections, cache
flush).
llvm-svn: 182448
libExecutionEngine. Move method implementations that aren't specific to
allocation out of SectionMemoryManager and into RTDyldMemoryManager.
This is in preparation for exposing RTDyldMemoryManager through the C
API.
This is a fixed version of r182407 and r182411. That first revision
broke builds because I forgot to move the conditional includes of
various POSIX headers from SectionMemoryManager into
RTDyldMemoryManager. Those includes are necessary because of how
getPointerToNamedFunction works around the glibc libc_nonshared.a thing.
The latter revision still broke things because I forgot to include
llvm/Config/config.h.
llvm-svn: 182418
libExecutionEngine. Move method implementations that aren't specific to
allocation out of SectionMemoryManager and into RTDyldMemoryManager.
This is in preparation for exposing RTDyldMemoryManager through the C
API.
This is a fixed version of r182407. That revision broke builds because I
forgot to move the conditional includes of various POSIX headers from
SectionMemoryManager into RTDyldMemoryManager. Those includes are
necessary because of how getPointerToNamedFunction works around the
glibc libc_nonshared.a thing.
llvm-svn: 182411
the C API to provide their own way of allocating JIT memory (both code
and data) and finalizing memory permissions (page protections, cache
flush).
llvm-svn: 182408
libExecutionEngine. Move method implementations that aren't specific to
allocation out of SectionMemoryManager and into RTDyldMemoryManager.
This is in preparation for exposing RTDyldMemoryManager through the C
API.
llvm-svn: 182407
Although I had added some support for the BDZ/BDNZ branches into the selector
(in r158204), I had not correctly adjusted the condition at the top of the
loop. As a result, these branches were still essentially unsupported.
This fixes PR16086. Unfortunately, any test case would be very large (because
it would need to force the loop backedge to exceed the range of the 16-bit
immediate).
llvm-svn: 182385
This is to fix PR15408 where an undefined symbol Lline_table_start1 is used.
Since we do not generate the debug_line section when .loc is used,
Lline_table_start1 is not emitted and we can't refer to it when calculating
at_stmt_list for a compile unit.
llvm-svn: 182344
pic calls. These need to be there so we don't try to use helper
functions when we call them.
As part of this, make sure that we properly exclude helper functions in pic
mode when indirect calls are involved.
llvm-svn: 182343
This resolves the last of the PR14606 failures in the GDB 7.5 test
suite by implementing an optional name field for
DW_TAG_imported_modules/DIImportedEntities and using that to implement
C++ namespace aliases (eg: "namespace X = Y;").
llvm-svn: 182328
By default, a teq instruction is inserted after an integer divide. No divide-by-zero
checks are performed if option "-mnocheck-zero-division" is used.
llvm-svn: 182306
Now that the preheader insertion logic in LoopSimplify is externally exposed,
use it, and remove the copy-and-pasted version.
No functionality change intended.
llvm-svn: 182300
Other passes, PPC counter-loop formation for example, also need to add loop
preheaders outside of the regular loop simplification pass. This makes
InsertPreheaderForLoop a global function so that it can be used by other
passes.
No functionality change intended.
llvm-svn: 182299
As the pairing of this instruction form with the bdnz/bdz branches is now
enforced by the verification pass, make it clear from the name that these
are used only for counter-based loops.
No functionality change intended.
llvm-svn: 182296
When asserts are enabled, this adds a verification pass for PPC counter-loop
formation. Unfortunately, without sacrificing code quality, there is no better
way of forming counter-based loops except at the (late) IR level. This means
that we need to recognize, at the IR level, anything which might turn into a
function call (or indirect branch). Because this is currently a finite set of
things, and because SelectionDAG lowering is basic-block local, this can be
done. Nevertheless, it is fragile, and failure results in a miscompile. This
verification pass checks that all (reachable) counter-based branches are
dominated by a loop mtctr instruction, and that no instructions in between
clobber the counter register. If these conditions are not satisfied, then an
ICE will be triggered.
In short, this is to help us sleep better at night.
llvm-svn: 182295
R600TextureIntrinsicsReplacer.cpp:232: warning: the address of ‘ArgsType’ will always evaluate as ‘true’
This doesn't have any effect on the output as a vararg intrinsic behaves the
same way as a non-vararg one.
llvm-svn: 182293
Before this change, the SystemZ backend would use BRCL for all branches
and only consider shortening them to BRC when generating an object file.
E.g. a branch on equal would use the JGE alias of BRCL in assembly output,
but might be shortened to the JE alias of BRC in ELF output. This was
a useful first step, but it had two problems:
(1) The z assembler isn't traditionally supposed to perform branch shortening
or branch relaxation. We followed this rule by not relaxing branches
in assembler input, but that meant that generating assembly code and
then assembling it would not produce the same result as going directly
to object code; the former would give long branches everywhere, whereas
the latter would use short branches where possible.
(2) Other useful branches, like COMPARE AND BRANCH, do not have long forms.
We would need to do something else before supporting them.
(Although COMPARE AND BRANCH does not change the condition codes,
the plan is to model COMPARE AND BRANCH as a CC-clobbering instruction
during codegen, so that we can safely lower it to a separate compare
and long branch where necessary. This is not a valid transformation
for the assembler proper to make.)
This patch therefore moves branch relaxation to a pre-emit pass.
For now, calls are still shortened from BRASL to BRAS by the assembler,
although this too is not really the traditional behaviour.
The first test takes about 1.5s to run, and there are likely to be
more tests in this vein once further branch types are added. The feeling
on IRC was that 1.5s is a bit much for a single test, so I've restricted
it to SystemZ hosts for now.
The patch exposes (and fixes) some typos in the main CodeGen/SystemZ tests.
A later patch will remove the {{g}}s from that directory.
llvm-svn: 182274
This converter currently only handles global variables in address space 0. These
variables are promoted to address space 1 (global memory), and all
uses are updated to point to the result of a cvta.global instruction on the new
variable.
The motivation for this is that address space 0 global variables are illegal, since we
cannot declare variables in the generic address space. Instead, we place the
variables in address space 1 and explicitly convert the pointer to address
space 0. This is primarily intended to help new users who expect to be able to
place global variables in the default address space.
llvm-svn: 182254
Introduction:
When the stack alignment is 8 and the size of the GPR part of a parameter is
not a multiple of 8, we add padding to the GPR part so that the part's last
byte is recovered at address K*8-1.
We need to do this, since the remaining (stack) part of the parameter starts
at address K*8, and we need to "attach" the "GPR head" to it without gaps:
Stack:
|---- 8 bytes block ----| |---- 8 bytes block ----| |---- 8 bytes...
[ [padding] [GPRs head] ] [ ------ Tail passed via stack ------ ...
FIX:
Note that once we have added padding, we need to correct *all* Arg offsets
that come after the padded one. That's why we need this fix: Arg offsets were
never corrected before this patch. See the new test cases included in the patch.
We also don't need to insert padding for byval parameters that are stored in
GPRs only. We need to pad only the last byval parameter, and only when it
spills beyond the GPRs and the stack alignment is 8.
The stack area allocated for recovered byval params must still satisfy the
"Size mod 8 = 0" restriction, though.
This patch reduces stack usage in some cases: we can shrink the
ArgRegsSaveArea, since inner byval params whose size is a multiple of 4 bytes
may be "packed" with alignment 4 in some cases.
llvm-svn: 182237