Add earlyclobber constraints to prevent an input register from being allocated as
the output register because, according to the Intel spec [1], "If any pair
of the index, mask, or destination registers are the same, this
instruction results a UD fault."
---
[1] http://software.intel.com/sites/default/files/319433-014.pdf
llvm-svn: 183327
When a function is inlined we lazily construct the variables
representing the function's parameters. After that, we add any remaining
unused parameters.
If the function doesn't use all the parameters, or uses them out of
order, then the DWARF would list them in that order, producing a
parameter order that doesn't match the source.
This fix causes us to always keep the arg variables at the start of the
variable list & in the original order from the source.
llvm-svn: 183297
In ELF (as in MachO), not all relocations point to symbols. Represent this
properly by using a symbol_iterator instead of a SymbolRef. Update llvm-readobj's
ELF dumper to handle relocations without symbols.
llvm-svn: 183284
IndVarSimplify is willing to move divide instructions outside of their
loop bodies if they are loop-invariant. However, it may not be safe to
expand them if we cannot prove that they do not trap.
Instead, check whether the instruction is safe to expand, and skip the
expansion if it is not.
This fixes PR16041.
Testcase by Rafael Ávila de Espíndola.
llvm-svn: 183239
The ARM backend did not expect LDRBi12 to hold a constant pool operand.
Allow LLVM to handle the instruction similarly to how it handles
LDRi12.
This fixes PR16215.
llvm-svn: 183238
The problem this time seems to be a thinko. We were assuming that in the CFG
A
| \
| B
| /
C
speculating the basic block B would cause only the phi value for the B->C edge
to be speculated. That is not true: the PHIs are semantically on the edges, so
if the A->B->C path is taken, any code needed for the A->C edge is not executed,
and we have to consider it too when deciding whether to speculate B.
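A rough IR sketch of the shape involved (block and value names are made up for
illustration):
  A:
    br i1 %cond, label %B, label %C
  B:
    %b = add i32 %x, 1
    br label %C
  C:
    %p = phi i32 [ %b, %B ], [ %v, %A ]
Speculating B turns %p into a select in A, so the cost of making the A->C value
%v available there has to be counted as well, not just the cost of %b.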
llvm-svn: 183226
PR16069 is an interesting case where an incoming value to a PHI is a
trap value while also being a 'ConstantExpr'.
We do not consider this case when performing the 'HoistThenElseCodeToIf'
optimization.
Instead, make our modifications more conservative if we detect that we
cannot transform the PHI to a select.
llvm-svn: 183152
index greater than the size of the vector is invalid. The shuffle may be
shrinking the size of the vector. Fixes a crash!
Also drop the maximum recursion depth of the safety check for this
optimization to five.
llvm-svn: 183080
The MOV64ri64i32 instruction required hacky MCInst lowering because it
was allocated as setting a GR64, but the eventual instruction ("movl")
only set a GR32. This converts it into a so-called "MOV32ri64" which
still accepts an (appropriate) 64-bit immediate but defines a GR32.
This is then converted to the full GR64 by a SUBREG_TO_REG operation,
thus keeping everyone happy.
This fixes a typo in the opcode field of the original patch, which
should make the legacy JIT work again (& adds a test for that problem).
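Roughly, the new lowering looks like this (an MI-level sketch, not the exact
printed form):
  %lo<def>   = MOV32ri64 <imm>                    ; emitted as "movl", defines a GR32
  %full<def> = SUBREG_TO_REG 0, %lo, sub_32bit    ; widen the result to a GR64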
llvm-svn: 183068
Fixes rdar:14036816, PR16130.
There is an opportunity to compute precise trip counts for 'or'
expressions and multi-exit loops.
rdar:14038809: Optimize trip count computation for multi-exit loops.
To do this we need to record the fact that ExitLimit assumes NSW. When
it does not, we can safely assume that the loop trip count is the
minimum ExitLimit across all subexpressions and loop exits.
llvm-svn: 183060
We check that instructions in the loop don't have outside users (except if
they are reduction values). Unfortunately, we skipped this check for
if-convertible PHIs.
Fixes PR16184.
llvm-svn: 183035
Namely, check whether the target allows folding more than one register into the
addressing mode and, if so, adjust the cost accordingly.
Prior to this commit, reg1 + scale * reg2 accesses were artificially preferred
to reg1 + reg2 accesses. Indeed, the cost model wrongly assumed that reg1 + reg2
needs a temporary register for the computation, whereas it was correctly
estimated for reg1 + scale * reg2.
<rdar://problem/13973908>
llvm-svn: 183021
These instructions are deprecated oddities, but we still need to be able to
disassemble (and reassemble) them if and when they're encountered.
Patch by Amaury de la Vieuville.
llvm-svn: 183011
The disassembly of VEXT instructions was too lax in the bits checked. This
fixes the case where the instruction affects Q-registers but a misaligned lane
was specified (should be UNDEFINED).
Patch by Amaury de la Vieuville
llvm-svn: 183003
Unlike most -- hopefully "all other", but I'm still checking -- memory
instructions we support, LOAD REVERSED and STORE REVERSED may access
the memory location several times. This means that they are not suitable
for volatile loads and stores.
This patch is a prerequisite for better atomic load and store support.
The same principle applies there: almost all memory instructions we
support are inherently atomic ("block concurrent"), but LOAD REVERSED
and STORE REVERSED are exceptions.
Other instructions continue to allow volatile operands. I will add
positive "allows volatile" tests at the same time as the "allows atomic
load or store" tests.
llvm-svn: 183002
Now that 3.3 is branched, we are re-enabling virtual registers to help
iron out bugs before the next release. Some of the post-RA passes do
not play well with virtual registers, so we disable them for now. The
needed functionality of the PrologEpilogInserter pass is copied to a
new backend-specific NVPTXPrologEpilog pass.
The test for this commit is that the existing tests do not break.
llvm-svn: 182998
Before this change, each module defined a weak_odr global __msan_track_origins
with a value of 1 if origin tracking is enabled, 0 if disabled. If there are
modules with different values, any of them may win. If 0 wins, and there is at
least one module with 1, the program will most likely crash.
With this change, __msan_track_origins is only emitted if origin tracking is
on. Then runtime library detects if there is at least one module with origin
tracking, and enables runtime support for it.
llvm-svn: 182997
The MOV64ri64i32 instruction required hacky MCInst lowering because it was
allocated as setting a GR64, but the eventual instruction ("movl") only set a
GR32. This converts it into a so-called "MOV32ri64" which still accepts an
(appropriate) 64-bit immediate but defines a GR32. This is then converted to
the full GR64 by a SUBREG_TO_REG operation, thus keeping everyone happy.
llvm-svn: 182991
This fixes the test on ARM. Looks like it was broken by r182877. Not
sure whether this is a bug in fast isel on ARM, but this should help fix
the ARM bots.
llvm-svn: 182937
The pattern the test originally checked for doesn't occur on other -mcpu
settings. On atom it's still there though slightly differently scheduled.
llvm-svn: 182933
This test was failing on some hosts when an unexpected register was used for a
variable. This just extends the regexp to allow the new x86-64 registers.
llvm-svn: 182929
Instead of having a bunch of separate MOV8r0, MOV16r0, ... pseudo-instructions,
it's better to use a single MOV32r0 (which will expand to "xorl %reg, %reg")
and obtain other sizes with EXTRACT_SUBREG and SUBREG_TO_REG. The encoding is
smaller and partial register updates can sometimes be avoided.
Until recently, this sequence was a barrier to rematerialization though. That
should now be fixed so it's an appropriate time to make the change.
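For example (a rough sketch of the idea rather than the exact nodes in every
case):
  MOV32r0                                  ->  xorl %eax, %eax
  8-bit zero:   EXTRACT_SUBREG(MOV32r0, sub_8bit)
  64-bit zero:  SUBREG_TO_REG(0, MOV32r0, sub_32bit)   ; upper 32 bits are
                                                       ; already zeroed by xorl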
llvm-svn: 182928
The code to distinguish between unaligned and aligned addresses was
already there, so this is mostly just a switch-on-and-test process.
llvm-svn: 182920
For COFF and MachO, sections semantically have relocations that apply to them.
That is not the case on ELF.
In relocatable objects (.o), a section with relocations in ELF has offsets to
another section where the relocations should be applied.
In dynamic objects and executables, relocations don't have an offset, they have
a virtual address. The section sh_info may or may not point to another section,
but that is not actually used for resolving the relocations.
This patch exposes that in the ObjectFile API. It has the following advantages:
* Most (all?) clients can handle this more efficiently. They will normally walk
all relocations, so making an effort to iterate in a particular order doesn't
save time.
* llvm-readobj now prints relocations in the same way the native readelf does.
* probably most important, relocations that don't point to any section are now
visible. This is the case of relocations in the rela.dyn section. See the
updated relocation-executable.test for example.
llvm-svn: 182908
Fixes PR16146: gdb.base__call-ar-st.exp fails after
pre-RA-sched=source fixes.
Patch by Xiaoyi Guo!
This also fixes an unsupported dbg.value test case. Codegen was
previously incorrect but the test was passing by luck.
llvm-svn: 182885
FastISel was only enabled for iOS ARM and Thumb2, this patch enables it
for ARM (not Thumb2) on Linux and NaCl.
Thumb2 support needs a bit more work, mainly around register class
restrictions.
The patch punts to SelectionDAG when doing TLS relocation on non-Darwin
targets. I will fix this and other FastISel-to-SelectionDAG failures in
a separate patch.
The patch also forces FastISel to retain frame pointers: iOS always
keeps them for backtracking (so emitted code won't change because of
this), but Linux was getting much worse code that was incorrect when
using big frames (such as test-suite's lencod). I'll also fix this in a
later patch, it will probably require a peephole so that FastISel
doesn't rematerialize frame pointers back-to-back.
The test changes are straightforward, similar to:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130513/174279.html
They also add a vararg test that got dropped in that change.
I ran all of test-suite on A15 hardware with --optimize-option=-O0 and
all the tests pass.
llvm-svn: 182877
This allows rematerialization during register coalescing to handle
more cases involving operations like SUBREG_TO_REG which might need to
be rematerialized using sub-register indices.
For example, code like:
v1(GPR64):sub_32 = MOVZ something
v2(GPR64) = COPY v1(GPR64)
should be convertible to:
v2(GPR64):sub_32 = MOVZ something
but previously we just gave up in places like this.
llvm-svn: 182872
Since the test case uses ref_addr, which requires DWARF version 3+ to work,
we will solve the DWARF version issue first.
This patch also causes failures in one of the bots. I will update the patch
accordingly in my next attempt.
rdar://13926659
llvm-svn: 182867
This patch solves the problem of numeric register values not being accepted:
../set_alias.s:1:11: error: expected valid expression after comma
.set r4,$4
^
The parsing of the .set directive has been changed, along with the handling of
symbols in code, to enable this feature.
A test example is added.
Patch by Vladimir Medic
llvm-svn: 182807
- llvm.loop.parallel metadata has been renamed to llvm.loop to be more generic
by making it the root of additional loop metadata.
- Loop::isAnnotatedParallel now looks for llvm.loop and associated
llvm.mem.parallel_loop_access
- document llvm.loop and update llvm.mem.parallel_loop_access
- add support for llvm.vectorizer.width and llvm.vectorizer.unroll
- document llvm.vectorizer.* metadata
- add utility class LoopVectorizerHints for getting/setting loop metadata
- use llvm.vectorizer.width=1 to indicate already vectorized instead of
already_vectorized
- update existing tests that used llvm.loop.parallel and
llvm.vectorizer.already_vectorized
Reviewed by: Nadav Rotem
llvm-svn: 182802
Previously we would read-modify-write the target bits when processing
relocations for the MCJIT. This had the problem that when relocations
were processed multiple times for the same object file (as they can
be), the result is not idempotent and the values became corrupted.
The solution to this is to take any bits used in the destination from
the pristine object file as LLVM emitted it.
This should fix PR16013 and remote MCJIT on ARM ELF targets.
llvm-svn: 182800
from a different CU.
We used to print out an error message and fail to generate inlined_subroutine.
If we use ref_addr in the generated DWARF, the DWARF version should be 3 or
above.
rdar://13926659
llvm-svn: 182791
Extend LinkModules to pass a ValueMaterializer to RemapInstruction and friends to lazily create Functions for lazily linked globals. This is a big win when linking small modules with large (mostly unused) library modules.
llvm-svn: 182776
This patch adds support for the CRJ and CGRJ instructions. Support for
the immediate forms will be a separate patch.
The architecture has a large number of comparison instructions. I think
it's generally better to concentrate on using the "best" comparison
instruction first and foremost, then only use something like CRJ if
CR really was the natural choice of comparison instruction. The patch
therefore opportunistically converts separate CR and BRC instructions
into a single CRJ while emitting instructions in ISelLowering.
llvm-svn: 182764
When -ffast-math is in effect (on Linux, at least), clang defines
__FINITE_MATH_ONLY__ > 0 when including <math.h>. This causes the
preprocessor to include <bits/math-finite.h>, which renames the sqrt functions.
For instance, "sqrt" is renamed as "__sqrt_finite".
This patch adds the 3 new names in such a way that they will be treated
as equivalent to their respective original names.
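For illustration (a hedged example; the exact header and symbol names depend on
the glibc version):
  /* clang -ffast-math -O2 -c t.c */
  #include <math.h>
  double f(double x) { return sqrt(x); } /* the call is emitted as __sqrt_finite */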
llvm-svn: 182739
When expanding unaligned Altivec loads, we use the decremented offset trick to
prevent page faults. Unfortunately, if we have a sequence of consecutive
unaligned loads, this leads to suboptimal code generation because the 'extra'
load from the first unaligned load can be combined with the base load from the
second (but only if the decremented offset trick is not used for the first).
Search up and down the chain, through loads and token factors, looking for
consecutive loads, and if one is found, don't use the offset reduction trick.
These duplicate loads are later combined to yield the desired sequence (in the
future, we might want a more-powerful chain search, but that will require some
changes to allow the combiner routines to access the AA object).
This should complete the initial implementation of the optimized unaligned
Altivec load expansion. There is some refactoring that should be done, but
that will happen when the unaligned store expansion is added.
llvm-svn: 182719
The lvsl permutation control instruction is a function only of the alignment of
the pointer operand (relative to the 16-byte natural alignment of Altivec
vectors). As a result, multiple lvsl intrinsics where the operands differ by a
multiple of 16 can be combined.
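For example (a sketch in the IR syntax of the time; names are illustrative):
  %m1 = call <16 x i8> @llvm.ppc.altivec.lvsl(i8* %p)
  %q  = getelementptr i8* %p, i64 16
  %m2 = call <16 x i8> @llvm.ppc.altivec.lvsl(i8* %q)
  ; %m2 computes the same mask as %m1, because lvsl depends only on the low
  ; 4 bits of the address, so the second call can be replaced by %m1.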
llvm-svn: 182708
Altivec only directly supports aligned loads, but the loads have a strange
property: If given an unaligned address, they truncate the address to the next
lower aligned address, and load from there. This property, along with an extra
load and some special-purpose permutation-control instructions that generate
the appropriate permutations from the original unaligned address, allows
efficient lowering of unaligned loads. This code uses the trick explained in the
Apple Velocity Engine optimization overview document to prevent the needed
extra load from possibly causing a page fault if the original address happens
to be aligned.
As noted in the FIXMEs, there are several additional optimizations that can be
performed to reduce the cost of these loads even more. These will be
implemented in future commits.
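The generated sequence is roughly the classic Altivec idiom (a sketch; register
choices are illustrative):
  lvx   v1, 0, r3      # aligned block containing the first byte
  li    r4, 15
  lvx   v2, r4, r3     # next block, using offset 15 rather than 16 so an
                       # already-aligned pointer cannot fault on the next page
  lvsl  v3, 0, r3      # permutation control derived from the address alignment
  vperm v0, v1, v2, v3 # combine the two blocks into the unaligned value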
llvm-svn: 182691
Previously, an invalid instruction like:
foo %r1, %r0
would generate the rather odd error message:
....: error: unknown token in expression
foo %r1, %r0
^
We now get the more informative:
....: error: invalid instruction
foo %r1, %r0
^
The same would happen if an address were used where a register was expected.
We now get "invalid operand for instruction" instead.
llvm-svn: 182644
The idea is to make sure that:
(1) "register expected" is restricted to cases where ParseRegister()
is called and the token obviously isn't a register.
(2) "invalid register" is restricted to cases where a register-like "%..."
sequence is found, but the "..." makes no sense.
(3) the generic "invalid operand for instruction" is used in cases where
the wrong register type is used (GPR instead of FPR, etc.).
(4) the new "invalid register pair" is used if the register has the right type,
but is not a valid register pair.
Testing of (1)-(3) is now restricted to regs-bad.s. It uses a representative
instruction for each register class to make sure that only registers from
that class are accepted.
(4) is tested by both regs-bad.s (which checks all invalid register pairs)
and insn-bad.s (which tests one invalid pair for each instruction that
requires a pair).
While there, I changed "Number" to "Num" for consistency with the
operand class.
llvm-svn: 182643
as the BinaryOperator, *not* in the block the IRBuilder is currently
inserting into. Fixes a bug where scalarizePHI would create instructions
that would not dominate all uses.
llvm-svn: 182639
Other than recognizing the attribute, the patch does little else.
It changes the branch probability analyzer so that edges into
blocks postdominated by a cold function are given low weight.
Added analysis and code generation tests. Added documentation for the
new attribute.
llvm-svn: 182638
This is a basic first step towards symbolization of disassembled
instructions. This used to be done using externally provided (C API)
callbacks. This patch introduces:
- the MCSymbolizer class, that mimics the same functions that were used
in the X86 and ARM disassemblers to symbolize immediate operands and
to annotate loads based off PC (for things like c string literals).
- the MCExternalSymbolizer class, which implements the old C API.
- the MCRelocationInfo class, which provides a way for targets to
translate relocations (either object::RelocationRef, or disassembler
C API VariantKinds) to MCExprs.
- the MCObjectSymbolizer class, which does symbolization using what it
finds in an object::ObjectFile. This makes simple symbolization (with
no fancy relocation stuff) work for all object formats!
- x86-64 Mach-O and ELF MCRelocationInfos.
- A basic ARM Mach-O MCRelocationInfo, that provides just enough to
support the C API VariantKinds.
Most of what works in otool (the only user of the old symbolization API
that I know of) for x86-64 symbolic disassembly (-tvV) works, namely:
- symbol references: call _foo; jmp 15 <_foo+50>
- relocations: call _foo-_bar; call _foo-4
- __cf?string: leaq 193(%rip), %rax ## literal pool for "hello"
Stub support is the main missing part (because libObject doesn't know,
among other things, about mach-o indirect symbols).
As for the MCSymbolizer API, instead of relying on the disassemblers
to call the tryAdding* methods, maybe this could be done automagically
using InstrInfo? For instance, even though PC-relative LEAs are used
to get the address of string literals in a typical Mach-O file, a MOV
would be used in an ELF file. And right now, the explicit symbolization
only recognizes PC-relative LEAs. InstrInfo should already have
most of what is needed to know what to symbolize, so this can
definitely be improved.
I'd also like to remove object::RelocationRef::getValueString (it seems
only used by relocation printing in objdump), as simply printing the
created MCExpr is definitely enough (and cleaner than string concats).
llvm-svn: 182625
This implements the @llvm.readcyclecounter intrinsic as the specific
MRC instruction specified in the ARM manuals for CPUs with the Power
Management extensions.
Older CPUs had slightly different methods which may also have to be
implemented eventually, but this should cover all v7 cases.
rdar://problem/13939186
llvm-svn: 182603
Now that the LiveDebugVariables pass is running *after* register
coalescing, the ConnectedVNInfoEqClasses class needs to deal with
DBG_VALUE instructions.
This only comes up when rematerialization during coalescing causes the
remaining live range of a virtual register to separate into two
connected components.
llvm-svn: 182592
There were bits & pieces of code lying around that may've given the
impression that debug info metadata supported the possibility that a
subprogram's type could be specified by a non-subroutine type describing
the return type of a void function. This support was incomplete &
unnecessary. Asserts & API have been changed to make the desired usage
more clear.
llvm-svn: 182532
We are not working on a DAG and I ran into a number of problems when I enabled the vectorization of 'diamond-trees' (trees that share leaves).
* Improved the numbering API.
* Changed the placement of new instructions to the last root.
* Fixed a bug with external tree users with non-zero lane.
* Fixed a bug in the placement of in-tree users.
llvm-svn: 182508
The earlier change list introduced the following inst combines:
B * (uitofp i1 C) -> select C, B, 0
A * (1 - uitofp i1 C) -> select C, 0, A
select C, 0, B + select C, A, 0 -> select C, A, B
Together these 3 changes would simplify:
A * (1 - uitofp i1 C) + B * uitofp i1 C
down to:
select C, B, A
In practice we found that the first two substitutions can have a
negative effect on performance, because they reduce opportunities to
use FMA contractions; between the two options FMAs are often the
better choice. This change list amends the previous one to enable
just these inst combines:
select C, B, 0 + select C, 0, A -> select C, B, A
A * (1 - uitofp i1 C) + B * uitofp i1 C -> select C, B, A
llvm-svn: 182499
The Value pointers we store in the induction variable list can be RAUW'ed by a
call to SCEVExpander::expandCodeFor; use a TrackingVH instead. Do the same thing
in some other places where we store pointers that could potentially be RAUW'ed.
Fixes PR16073.
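A minimal sketch of the kind of change involved (the container name here is
hypothetical):
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/Support/ValueHandle.h"
  using namespace llvm;
  // Before: a raw Value* is left dangling when SCEVExpander::expandCodeFor
  // RAUWs the value it points to.
  //   SmallVector<Value *, 8> Inductions;
  // After: TrackingVH is updated by replaceAllUsesWith and keeps pointing at
  // the replacement value.
  SmallVector<TrackingVH<Value>, 8> Inductions;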
llvm-svn: 182485
This is to fix PR15408 where an undefined symbol Lline_table_start1 is used.
Since we do not generate the debug_line section when .loc is used,
Lline_table_start1 is not emitted and we can't refer to it when calculating
at_stmt_list for a compile unit.
llvm-svn: 182344
pic calls. These need to be there so we don't try and use helper
functions when we call those.
As part of this, make sure that we properly exclude helper functions in pic
mode when indirect calls are involved.
llvm-svn: 182343
This resolves the last of the PR14606 failures in the GDB 7.5 test
suite by implementing an optional name field for
DW_TAG_imported_modules/DIImportedEntities and using that to implement
C++ namespace aliases (eg: "namespace X = Y;").
llvm-svn: 182328
By default, a teq instruction is inserted after an integer divide. No divide-by-zero
checks are performed if option "-mnocheck-zero-division" is used.
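The emitted sequence looks roughly like this (a sketch; the exact trap code
used for divide-by-zero is an implementation detail):
  div   $zero, $4, $5     # integer divide
  teq   $5, $zero, 7      # trap if the divisor ($5) is zero
  mflo  $2                # read the quotient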
llvm-svn: 182306
Before this change, the SystemZ backend would use BRCL for all branches
and only consider shortening them to BRC when generating an object file.
E.g. a branch on equal would use the JGE alias of BRCL in assembly output,
but might be shortened to the JE alias of BRC in ELF output. This was
a useful first step, but it had two problems:
(1) The z assembler isn't traditionally supposed to perform branch shortening
or branch relaxation. We followed this rule by not relaxing branches
in assembler input, but that meant that generating assembly code and
then assembling it would not produce the same result as going directly
to object code; the former would give long branches everywhere, whereas
the latter would use short branches where possible.
(2) Other useful branches, like COMPARE AND BRANCH, do not have long forms.
We would need to do something else before supporting them.
(Although COMPARE AND BRANCH does not change the condition codes,
the plan is to model COMPARE AND BRANCH as a CC-clobbering instruction
during codegen, so that we can safely lower it to a separate compare
and long branch where necessary. This is not a valid transformation
for the assembler proper to make.)
This patch therefore moves branch relaxation to a pre-emit pass.
For now, calls are still shortened from BRASL to BRAS by the assembler,
although this too is not really the traditional behaviour.
The first test takes about 1.5s to run, and there are likely to be
more tests in this vein once further branch types are added. The feeling
on IRC was that 1.5s is a bit much for a single test, so I've restricted
it to SystemZ hosts for now.
The patch exposes (and fixes) some typos in the main CodeGen/SystemZ tests.
A later patch will remove the {{g}}s from that directory.
llvm-svn: 182274
This converter currently only handles global variables in address space 0. For
these variables, they are promoted to address space 1 (global memory), and all
uses are updated to point to the result of a cvta.global instruction on the new
variable.
The motivation for this is that address space 0 global variables are illegal, since we
cannot declare variables in the generic address space. Instead, we place the
variables in address space 1 and explicitly convert the pointer to address
space 0. This is primarily intended to help new users who expect to be able to
place global variables in the default address space.
llvm-svn: 182254
Introduction:
In the case where the stack alignment is 8 and the size of the GPR part of a
parameter is not a multiple of 8: we add padding to the GPR part, so the part's
last byte must be recovered at address K*8-1.
We need to do this since the remaining (stack) part of the parameter starts at
address K*8, and we need to "attach" the "GPR head" to it without gaps:
Stack:
|---- 8 bytes block ----| |---- 8 bytes block ----| |---- 8 bytes...
[ [padding] [GPRs head] ] [ ------ Tail passed via stack ------ ...
FIX:
Note that once we add padding, we need to correct *all* Arg offsets that come
after the padded one. That's why we need this fix: Arg offsets were never corrected
before this patch. See the new test cases included in the patch.
We also don't need to insert padding for byval parameters that are stored in GPRs
only. We only need to pad the last byval parameter, and only when it extends beyond
the GPRs and the stack alignment is 8.
However, the stack area allocated for recovered byval params must satisfy the
"Size mod 8 = 0" restriction.
This patch reduces stack usage in some cases:
we can reduce the ArgRegsSaveArea, since inner byval params of N*4 bytes may be
"packed" with alignment 4 in some cases.
llvm-svn: 182237
lli's remote MCJIT code calls setExecutable just prior to running
code. In line with Darwin behaviour this seems to be the place to
invalidate any caches needed so that relocations can take effect
properly.
llvm-svn: 182213
We don't need to reject all inline asm as using the counter register (most does
not). Only those that explicitly clobber the counter register need to prevent
the transformation.
llvm-svn: 182191
The peephole tries to reorder MOV32r0 instructions such that they are
before the instruction that modifies EFLAGS.
The problem is that the peephole does not consider the case where the
instruction that modifies EFLAGS also depends on the previous state of
EFLAGS.
Instead, walk backwards until we find an instruction that has a def for
EFLAGS but does not have a use.
If we find such an instruction, insert the MOV32r0 before it.
If we cannot find such an instruction, skip the optimization.
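For example (illustrative; register choices are made up):
  cmpl  %esi, %edi        # defines EFLAGS without using them; a safe point to
                          # place the zeroing xor just before
  ...
  adcl  %ebx, %ecx        # defines EFLAGS but also *uses* them (carry in), so
                          # inserting "xorl %eax, %eax" immediately before this
                          # instruction would clobber the carry it needs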
llvm-svn: 182184
This patch matches GCC behavior: the code used to only allow unaligned
load/store on ARM for v6+ Darwin, it will now allow unaligned load/store
for v6+ Darwin as well as for v7+ on Linux and NaCl.
The distinction is made because v6 doesn't guarantee support (but LLVM
assumes that Apple controls hardware+kernel and therefore has
conformant v6 CPUs), whereas v7 does provide this guarantee (and
Linux/NaCl behave sanely).
The patch keeps the -arm-strict-align command line option, and adds
-arm-no-strict-align. They behave similarly to GCC's -mstrict-align and
-mnostrict-align.
I originally encountered this discrepancy in FastIsel tests which expect
unaligned load/store generation. Overall this should slightly improve
performance in most cases because of reduced I$ pressure.
llvm-svn: 182175
Dot4 now uses 8 scalar operands instead of 2 vector ones, which allows the register
coalescer to remove some unneeded COPYs.
This patch also defines some structures/functions that can be used to handle
every vector instruction (CUBE, Cayman special instructions...) in a similar
fashion.
llvm-svn: 182126
Almost all instructions that take a 128-bit reg as input (fetch, export...)
have the ability to swizzle their argument and output. Instead of printing
the default swizzle for each 128-bit reg, rename T*.XYZW to T* and let instructions
print potentially optimized swizzles themselves.
llvm-svn: 182124
Shuffles that only move an element into position 0 of the vector are common in
the output of the loop vectorizer and often generate suboptimal code when SSSE3
is not available. Lower them to vector shifts if possible.
We still prefer palignr over psrldq because it has higher throughput on
sandybridge.
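For example (a sketch):
  %r = shufflevector <4 x i32> %v, <4 x i32> undef,
                     <4 x i32> <i32 3, i32 undef, i32 undef, i32 undef>
  ; can now be lowered to a shift such as "psrldq $12, %xmm0" rather than a
  ; longer shuffle sequence when SSSE3's pshufb is unavailable.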
llvm-svn: 182102
Now that fixup_ppc_ha16 and fixup_ppc_lo16 are being treated exactly
the same everywhere, it no longer makes sense to have two fixup types.
This patch merges them both into a single type fixup_ppc_half16,
and renames fixup_ppc_lo16_ds to fixup_ppc_half16ds for consistency.
(The half16 and half16ds names are taken from the description of
relocation types in the PowerPC ABI.)
No change in code generation expected.
llvm-svn: 182092
The current PowerPC MC back end distinguishes between fixup_ppc_ha16
and fixup_ppc_lo16, which are determined by the instruction the fixup
applies to, and uses this distinction to decide whether a fixup ought
to resolve to the high or the low part of a symbol address.
This isn't quite correct, however. It is valid -if unusual- assembler
to use, e.g.
li 1, symbol@ha
or
lis 1, symbol@l
Whether the high or the low part of the address is used depends solely
on the @ suffix, not on the instruction.
In addition, both
li 1, symbol
and
lis 1, symbol
are valid, assuming the symbol address fits into 16 bits; again, both
will then refer to the actual symbol value (so li will load the value
itself, while lis will load the value shifted by 16).
To fix this, two places need to be adapted. If the fixup cannot be
resolved at assembler time, a relocation needs to be emitted via
PPCELFObjectWriter::getRelocType. This routine already looks at
the VK_ type to determine the relocation. The only problem is that it
will reject any _LO modifier in a ha16 fixup and vice versa. This
is simply incorrect; any of those modifiers ought to be accepted
for either fixup type.
If the fixup *can* be resolved at assembler time, adjustFixupValue
currently selects the high bits of the symbol value if the fixup
type is ha16. Again, this is incorrect; see the above example
lis 1, symbol
Now, in theory we'd have to respect a VK_ modifier here. However,
in fact common code never even attempts to resolve symbol references
using any nontrivial VK_ modifier at assembler time; it will always
fall back to emitting a reloc and letting the linker handle it.
If this ever changes, presumably there'd have to be a target callback
to resolve VK_ modifiers. We'd then have to handle @ha etc. there.
llvm-svn: 182091
Previously, three instructions were needed:
trunc.w.s $f0, $f2
mfc1 $4, $f0
sw $4, 0($2)
Now we need only two:
trunc.w.s $f0, $f2
swc1 $f0, 0($2)
llvm-svn: 182053
This patch removes the alias definition for addiu $rs,$imm
and instead uses the TwoOperandAliasConstraint field in
the ArithLogicI instruction class.
This way all instructions that inherit from the ArithLogicI class
have the same macro defined.
The usage examples are added to test files.
Patch by Vladimir Medic
llvm-svn: 182048
Some IR-level instructions (such as FP <-> i64 conversions) are not chained
w.r.t. the mtctr intrinsic and yet may become function calls that clobber the
counter register. At the selection-DAG level, these might be reordered with the
mtctr intrinsic causing miscompiles. To avoid this situation, if an existing
preheader has instructions that might use the counter register, create a new
preheader for the mtctr intrinsic. This extra block will be remerged with the
old preheader at the MI level, but will prevent unwanted reordering at the
selection-DAG level.
llvm-svn: 182045
This patch adds bnez and beqz instructions which represent alias definitions for bne and beq instructions as follows:
bnez $rs,$imm => bne $rs,$zero,$imm
beqz $rs,$imm => beq $rs,$zero,$imm
The corresponding test cases are added.
Patch by Vladimir Medic
llvm-svn: 182040
This is the second part of the change to always return "true"
offset values from getPreIndexedAddressParts, tackling the
case of "memrix" type operands.
This is about instructions like LD/STD that only have a 14-bit
field to encode immediate offsets, which are implicitly extended
by two zero bits by the machine, so that in effect we can access
16-bit offsets as long as they are a multiple of 4.
The PowerPC back end currently handles such instructions by
carrying the 14-bit value (as it will get encoded into the
actual machine instructions) in the machine operand fields
for such instructions. This means that those values are
in fact not the true offset, but rather the offset divided
by 4 (and then truncated to an unsigned 14-bit value).
Like in the case fixed in r182012, this makes common code
operations on such offset values not work as expected.
Furthermore, there doesn't really appear to be any strong
reason why we should encode machine operands this way.
This patch therefore changes the encoding of "memrix" type
machine operands to simply contain the "true" offset value
as a signed immediate value, while enforcing the rules that
it must fit in a 16-bit signed value and must also be a
multiple of 4.
This change must be made simultaneously in all places that
access machine operands of this type. However, just about
all those changes make the code simpler; in many cases we
can now just share the same code for memri and memrix
operands.
llvm-svn: 182032
While testing some experimental code to add vector-scalar registers to
PowerPC, I noticed that a couple of independent instructions were
flipped by the scheduler. The new CHECK-DAG support is perfect for
avoiding this problem.
llvm-svn: 182020
DAGCombiner::CombineToPreIndexedLoadStore calls a target routine to
decompose a memory address into a base/offset pair. It expects the
offset (if constant) to be the true displacement value in order to
perform optional additional optimizations; in particular, to convert
other uses of the original pointer into uses of the new base pointer
after pre-increment.
The PowerPC implementation of getPreIndexedAddressParts, however,
simply calls SelectAddressRegImm, which returns a TargetConstant.
This value is appropriate for encoding into the instruction, but
it is not always usable as true displacement value:
- Its type is always MVT::i32, even on 64-bit, where addresses
ought to be i64 ... this causes the optimization to simply
always fail on 64-bit due to this line in DAGCombiner:
// FIXME: In some cases, we can be smarter about this.
if (Op1.getValueType() != Offset.getValueType()) {
- Its value is truncated to an unsigned 16-bit value if negative.
This causes the above optimization to generate wrong code.
This patch fixes both problems by simply returning the true
displacement value (in its original type). This doesn't
affect any other user of the displacement.
llvm-svn: 182012
Without this change nothing was covering this addFrameMove:
  // For 64-bit SVR4 when we have spilled CRs, the spill location
  // is SP+8, not a frame-relative slot.
  if (Subtarget.isSVR4ABI()
      && Subtarget.isPPC64()
      && (PPC::CR2 <= Reg && Reg <= PPC::CR4)) {
    MachineLocation CSDst(PPC::X1, 8);
    MachineLocation CSSrc(PPC::CR2);
    MMI.addFrameMove(Label, CSDst, CSSrc);
    continue;
  }
llvm-svn: 181976
This creates stubs that help Mips32 functions call Mips16
functions which have floating point parameters that are normally passed
in floating point registers.
llvm-svn: 181972
Increase the number of instructions LLVM recognizes as setting the ZF
flag. This allows us to remove test instructions that redundantly
recalculate the flag.
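For example (illustrative):
  andl  $15, %eax         # already sets ZF according to its result
  testl %eax, %eax        # now recognized as redundant and removed
  je    .LBB0_2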
llvm-svn: 181937
The old PPCCTRLoops pass, like the Hexagon pass version from which it was
derived, could only handle some simple loops in canonical form. We cannot
directly adapt the new Hexagon hardware loops pass, however, because the
Hexagon pass contains a fundamental assumption that non-constant-trip-count
loops will contain a guard, and this is not always true (the result being that
incorrect negative counts can be generated). With this commit, we replace the
pass with a late IR-level pass which makes use of SE to calculate the
backedge-taken counts and safely generate the loop-count expressions (including
any necessary max() parts). This IR level pass inserts custom intrinsics that
are lowered into the desired decrement-and-branch instructions.
The most fragile part of this new implementation is that interfering uses of
the counter register must be detected on the IR level (and, on PPC, this also
includes any indirect branches in addition to function calls). Also, to make
all of this work, we need a variant of the mtctr instruction that is marked
as having side effects. Without this, machine-code level CSE, DCE, etc.
illegally transform the resulting code. Hopefully, this can be improved
in the future.
This new pass is smaller than the original (and much smaller than the new
Hexagon hardware loops pass), and can handle many additional cases correctly.
In addition, the preheader-creation code has been copied from LoopSimplify, and
after we decide on where it belongs, this code will be refactored so that it
can be explicitly shared (making this implementation even smaller).
The new test-case files ctrloop-{le,lt,ne}.ll have been adapted from tests for
the new Hexagon pass. There are a few classes of loops that this pass does not
transform (noted by FIXMEs in the files), but these deficiencies can be
addressed within the SE infrastructure (thus helping many other passes as well).
llvm-svn: 181927
IR optimisation passes can result in a basic block that contains:
llvm.lifetime.start(%buf)
...
llvm.lifetime.end(%buf)
...
llvm.lifetime.start(%buf)
Before this change, calculateLiveIntervals() was ignoring the second
lifetime.start() and was regarding %buf as being dead from the
lifetime.end() through to the end of the basic block. This can cause
StackColoring to incorrectly merge %buf with another stack slot.
Fix by removing the incorrect Starts[pos].isValid() and
Finishes[pos].isValid() checks.
Just doing:
Starts[pos] = Indexes->getMBBStartIdx(MBB);
Finishes[pos] = Indexes->getMBBEndIdx(MBB);
unconditionally would be enough to fix the bug, but it causes some
test failures due to stack slots not being merged when they were
before. So, in order to keep the existing tests passing, treat LiveIn
and LiveOut separately rather than approximating the live ranges by
merging LiveIn and LiveOut.
This fixes PR15707.
Patch by Mark Seaborn.
llvm-svn: 181922
Now that applyFixup understands differently-sized fixups, we can define
fixup_ppc_lo16/fixup_ppc_lo16_ds/fixup_ppc_ha16 to properly be 2-byte
fixups, applied at an offset of 2 relative to the start of the
instruction text.
This has the benefit that if we actually need to generate a real
relocation record, its address will come out correctly automatically,
without having to fiddle with the offset in adjustFixupOffset.
Tested on both 64-bit and 32-bit PowerPC, using external and
integrated assembler.
llvm-svn: 181894
InstCombine can be uncooperative to vectorization and sink loads into
conditional blocks. This prevents vectorization.
Undo this optimization if there are unconditional memory accesses to the same
addresses in the loop.
radar://13815763
llvm-svn: 181860
There were two problems that made llvm-objdump -r crash:
- for non-scattered relocations, the symbol/section index is actually in the
(aptly named) symbolnum field.
- sections are 1-indexed.
llvm-svn: 181843
The transformation happening here is that we want to turn a
"mul(ext(X), ext(X))" into a "vmull(X, X)", stripping off the extension. We have
to make sure that X still has a valid vector type - possibly recreate an
extension to a smaller type. In case of a extload of a memory type smaller than
64 bit we used create a ext(load()). The problem with doing this - instead of
recreating an extload - is that an illegal type is exposed.
This patch fixes this by creating extloads instead of ext(load()) sequences.
Fixes PR15970.
radar://13871383
llvm-svn: 181842
CXAAtExitFn was set outside a loop and before optimizations where functions
can be deleted. This patch will set CXAAtExitFn inside the loop and after
optimizations.
This fixes a segfault when running LTO caused by accesses to a deleted function.
rdar://problem/13838828
llvm-svn: 181838
ARM FastISel is currently only enabled for iOS non-Thumb1, and I'm working on
enabling it for other targets. As a first step I've fixed some of the tests.
Changes to ARM FastISel tests:
- Different triples don't generate the same relocations (especially
movw/movt versus constant pool loads). Use a regex to allow either.
- Mangling is different. Use a regex to allow either.
- The reserved registers are sometimes different, so registers get
allocated in a different order. Capture the names only where this
occurs.
- Add -verify-machineinstrs to some tests where it works. It doesn't
work everywhere it should yet.
- Add -fast-isel-abort to many tests that didn't have it before.
- Split out the VarArg test from fast-isel-call.ll into its own
test. This simplifies test setup because of --check-prefix.
Patch by JF Bastien
llvm-svn: 181801
The changes to CR spill handling missed a case for 32-bit PowerPC.
The code in PPCFrameLowering::processFunctionBeforeFrameFinalized()
checks whether CR spill has occurred using a flag in the function
info. This flag is only set by storeRegToStackSlot and
loadRegFromStackSlot. spillCalleeSavedRegisters does not call
storeRegToStackSlot, but instead produces MI directly. Thus we don't
see that the CR is spilled when assigning frame offsets, and the CR spill
ends up colliding with some other location (generally the FP slot).
This patch sets the flag in spillCalleeSavedRegisters for PPC32 so
that the CR spill is properly detected and gets its own slot in the
stack frame.
llvm-svn: 181800
The GNU assembler treats things like:
brasl %r14, 100
in the same way as:
brasl %r14, .+100
rather than as a branch to absolute address 100. We implemented this in
LLVM by creating an immediate operand rather than the usual expr operand,
and by handling immediate operands specially in the code emitter.
This was undesirable for (at least) three reasons:
- the specialness of immediate operands was exposed to the backend MC code,
rather than being limited to the assembler parser.
- in disassembly, an immediate operand really is an absolute address.
(Note that this means reassembling printed disassembly can't recreate
the original code.)
- it would interfere with any assembly manipulation that we might
try in future. E.g. operations like branch shortening can change
the relative position of instructions, but any code that updates
sym+offset addresses wouldn't update an immediate "100" operand
in the same way as an explicit ".+100" operand.
This patch changes the implementation so that the assembler creates
a "." label for immediate PC-relative operands, so that the operand
to the MCInst is always the absolute address. The patch also adds
some error checking of the offset.
llvm-svn: 181773
Mips16/32 floating point interoperability.
When Mips16 code calls external functions that would normally have some
of their parameters or return values passed in floating point registers,
it needs (Mips32) helper functions to do this because while in Mips16 mode
there is no ability to access the floating point registers.
In Pic mode, this is done with a set of predefined functions in libc.
This case is already handled in llvm for Mips16.
In static relocation mode, for efficiency reasons, the compiler generates
stubs that the linker will use if it turns out that the external function
is a Mips32 function. (If it's Mips16, then it does not need the helper
stubs).
These stubs are identically named and the linker knows about these tricks
and will not create multiple copies and will delete them if they are not
needed.
llvm-svn: 181753
We used to give up if we saw two integer inductions. After this patch, we base
further induction variables on the chosen one like we do in the reverse
induction and pointer induction case.
Fixes PR15720.
radar://13851975
llvm-svn: 181746
In the presence of a block being initialized, the frontend will emit the
objc_retain on the original pointer and the release on the pointer loaded from
the alloca. The optimizer will, through provenance analysis, realize that the
two are related (albeit different), but since we only require KnownSafe in one
direction, will match the inner retain on the original pointer with the guard
release on the original pointer. This is fixed by ensuring that in the presence
of allocas we only unconditionally remove pointers if both our retain and our
release are KnownSafe (i.e. we are KnownSafe in both directions), since we must
deal with the possibility that the frontend will emit what (to the optimizer)
appears to be unbalanced retain/releases.
An example of the miscompile is:
%A = alloca
retain(%x)
retain(%x) <--- Inner Retain
store %x, %A
%y = load %A
... DO STUFF ...
release(%y)
call void @use(%x)
release(%x) <--- Guarding Release
getting optimized to:
%A = alloca
retain(%x)
store %x, %A
%y = load %A
... DO STUFF ...
release(%y)
call void @use(%x)
rdar://13750319
llvm-svn: 181743
This patch adds an alias for the addiu instruction which enables the following syntax:
addiu $rs,imm
The macro is translated as:
addiu $rs,$rs,imm
Contributor: Vladimir Medic
llvm-svn: 181729
This fixes warning messages observed in the oggenc application test in
projects/test-suite. Special handling is needed for the 64-bit
PowerPC SVR4 ABI when a constant is initialized with a pointer to a
function in a shared library. Because a function address is
implemented as the address of a function descriptor, the use of copy
relocations can lead to problems with initialization. GNU ld
therefore replaces copy relocations with dynamic relocations to be
resolved by the dynamic linker. This means the constant cannot reside
in the read-only data section, but instead belongs in .data.rel.ro,
which is designed for constants containing dynamic relocations.
The implementation creates a class PPC64LinuxTargetObjectFile
inheriting from TargetLoweringObjectFileELF, which behaves like its
parent except to place constants of this sort into .data.rel.ro.
The test case is reduced from the oggenc application.
llvm-svn: 181723
This option is used when the user wants to avoid emitting double precision FP
loads and stores. Double precision FP loads and stores are expanded to single
precision instructions after register allocation.
llvm-svn: 181718
return values are bitcasts.
The chain was previously being clobbered with the entry node to
the DAG, which sometimes caused other code in the function to be
erroneously deleted when tailcall optimization kicked in.
<rdar://problem/13827621>
llvm-svn: 181696
The external user does not have to be in lane #0. We have to save the lane for each scalar so that we know which vector lane to extract.
llvm-svn: 181674
There are two transforms in visitUrem that conflict with each other.
*) One, if a divisor is a power of two, subtracts one from the divisor
and turns it into a bitwise-and.
*) The other unwraps both operands if they are surrounded by zext
instructions.
Flipping the order allows the subtraction to go beneath the zero
extension.
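A sketch of the interaction (types and names are illustrative):
  %xe = zext i16 %x to i32
  %ye = zext i16 %y to i32          ; %y known to be a power of two
  %r  = urem i32 %xe, %ye
  ; unwrapping the zexts first gives      %r = zext (urem i16 %x, %y)
  ; and the power-of-two fold then gives  %r = zext (and i16 %x, %y - 1)
  ; so the subtraction of one now happens on the narrow value, inside the zext.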
llvm-svn: 181668
Use the widest induction type encountered for the canonical induction variable.
We used to turn the following loop into an empty loop because we used i8 as
induction variable type and truncated 1024 to 0 as trip count.
int a[1024];
void fail() {
  int reverse_induction = 1023;
  unsigned char forward_induction = 0;
  while ((reverse_induction) >= 0) {
    forward_induction++;
    a[reverse_induction] = forward_induction;
    --reverse_induction;
  }
}
radar://13862901
llvm-svn: 181667
mips16/mips32 floating point interoperability.
This patch fixes returns from mips16 functions so that if the function
was in fact called by a mips32 hard float routine, then values
that would have been returned in floating point registers are so returned.
Mips16 mode has no floating point instructions so there is no way to
load values into floating point registers.
This is needed when returning float, double, single complex, double complex
in the Mips ABI.
Helper functions in libc for mips16 are available to do this.
For efficiency purposes, these helper functions have a different calling
convention from normal Mips calls.
Registers v0,v1,a0,a1 are used to pass parameters instead of
a0,a1,a2,a3.
This is because v0,v1,a0,a1 are the natural registers used to return
floating point values in soft float. These values can then be moved
to the appropriate floating point registers with no extra cost.
The only register that is modified is ra in this call.
The helper functions make sure that the return values are in the floating
point registers that they would be in if soft float was not in effect
(which it is for mips16, though the soft float is implemented using a mips32
library that uses hard float).
llvm-svn: 181641
This is only tested for global variables at the moment (& includes tests
for the unnamed parameter case, since apparently this entire function
was completely untested previously).
llvm-svn: 181632
The issue was that the MatchingInlineAsm and VariantID args to the
MatchInstructionImpl function weren't being set properly. Specifically, when
parsing intel syntax, the parser thought it was parsing inline assembly in the
at&t dialect; that will never be the case.
The crash was caused when the emitter tried to emit the instruction, but the
operands weren't set. When parsing inline assembly we only set the opcode, not
the operands, which is used to lookup the instruction descriptor.
rdar://13854391 and PR15945
Also, this commit reverts r176036. Now that we're correctly parsing the intel
syntax the pushad/popad don't match properly. I've reimplemented that fix using
a MnemonicAlias.
llvm-svn: 181620
The shift amount may be larger than the type leading to undefined behavior.
Limit the transform to constant shift amounts. While there update the bits to
clear in the result which may enable additional optimizations.
PR15959.
llvm-svn: 181604
This commit implements the AsmParser for fnstart, fnend,
cantunwind, personality, handlerdata, pad, setfp, save, and
vsave directives.
This commit fixes some minor issues in the ARMELFStreamer:
* The switch back to the corresponding section after the .fnend
directive.
* Emit the unwind opcode while processing .fnend directive
if there is no .handlerdata directive.
* Emit the unwind opcode to .ARM.extab while processing
.handlerdata even if .personality directive does not exist.
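A representative directive sequence that the parser now accepts (illustrative):
  .fnstart
  .save     {r4, lr}
  push      {r4, lr}
  .setfp    fp, sp, #0
  add       fp, sp, #0
  .pad      #8
  sub       sp, sp, #8
  ...
  .personality __gxx_personality_v0
  .handlerdata
  .fnend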
llvm-svn: 181603
The BFE optimization was the only one we were actually using, and it was
emitting an intrinsic that we don't support.
https://bugs.freedesktop.org/show_bug.cgi?id=64201
Reviewed-by: Christian König <christian.koenig@amd.com>
NOTE: This is a candidate for the 3.3 branch.
llvm-svn: 181580
Patch by: Aaron Watry
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Aaron Watry <awatry@gmail.com>
NOTE: This is a candidate for the 3.3 branch.
llvm-svn: 181579
Fixes piglit test for OpenCL builtin mul24, and allows mad24 to run.
Patch by: Aaron Watry
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Aaron Watry <awatry@gmail.com>
NOTE: This is a candidate for the 3.3 branch.
llvm-svn: 181578
v2: Add v4i32 test
Patch by: Aaron Watry
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Aaron Watry <awatry@gmail.com>
NOTE: This is a candidate for the 3.3 branch.
llvm-svn: 181577
v2: Add vselect v4i32 test
Patch by: Aaron Watry
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Aaron Watry <awatry@gmail.com>
NOTE: This is a candidate for the 3.3 branch.
llvm-svn: 181576
We generate a `push' of a random register (%rax) if the stack needs to be
aligned by the size of that register. However, this could mess up compact unwind
generation. In particular, we want to still generate compact unwind in the
presence of this monstrosity.
Check if the push is of the %rax/%eax register. If it is, and it's marked with
the `FrameSetup' flag, then we can generate a compact unwind encoding for the
function only if the push is the last FrameSetup instruction.
llvm-svn: 181540
Previously we only checked whether the LR required saving if the frame size was
nonzero. However, because the caller reserves one word for the callee to use
that doesn't count towards our frame size, it is possible for the LR to need
saving while the frame size is 0.
We didn't hit the case where the LR needed saving because of a function call,
because the one word of stack we must allocate for our callee means the frame
size is always nonzero in that case. However, we can hit this case if the LR is
clobbered in inline asm.
llvm-svn: 181520
That's obviously wrong. Conservatively restrict it to the sign bit, which
matches the original intention of this analysis. Fixes PR15940.
llvm-svn: 181518
A computable loop exit count does not imply the presence of an induction
variable. Scalar evolution can return a value for an infinite loop.
Fixes PR15926.
llvm-svn: 181495
for constructors and destructors since the original declaration given
by the AT_specification both won't and can't.
Patch by Yacine Belkadi, I've cleaned up the testcases.
llvm-svn: 181471
- simple one-function case
- function-calling case
- external function calling case
- exception throwing case
- vector case
Note: these tests are somewhat coupled to the current format of debug metadata.
llvm-svn: 181469
This patch extends test/MC/PowerPC/ppc64-fixups.s to not only check for
the correct fixup type in the --show-encoding output, but also runs the
generated object file through llvm-readobj -r and verifies that the
correct ELF relocation records were generated.
llvm-svn: 181453
The floating-point record forms on PPC don't set the condition register bits
based on a comparison with zero (like the integer record forms do), but rather
based on the exception status bits.
llvm-svn: 181423
The reference encoding is correct, but written in the wrong byte order (these are Thumb tests, while the reference is in ARM byte order).
llvm-svn: 181420
This provides basic functionality for imported declarations. For
subprograms and types some amount of lazy construction is supported (so
the definition of a function can come after the using declaration), but it
still doesn't handle declared-but-not-defined functions (since we don't
generally emit function declarations).
Variable support is really rudimentary at the moment - simply looking up
the existing definition with no support for out of order (declaration,
imported_module, then definition).
llvm-svn: 181392
The two nested loops were confusing and also conservative in identifying
reduction variables. This patch replaces them by a worklist based approach.
llvm-svn: 181369
Apparently we didn't keep an association of Compile Unit metadata nodes
to DIEs so looking up that parental context failed & thus caused no
DW_TAG_imported_modules to be emitted at the CU scope. Fix this by
adding the mapping & shoring up the test case to verify this.
llvm-svn: 181339
We were passing an i32 to ConstantInt::get where an i64 was needed and we must
also pass the sign if we pass negative numbers. The start index passed to
getConsecutiveVector must also be signed.
Should fix PR15882.
llvm-svn: 181286
The alignment is just a byte in the middle of Characteristics, not an
independent flag. Making it an independent field in the yaml
representation makes it more yamlio friendly.
llvm-svn: 181243
Test case by Michele Scandale!
Fixes PR10293: Load not hoisted out of loop with multiple exits.
There are a few regressions with this patch, now tracked by
rdar:13817079, and a roughly equal number of improvements. The
regressions are almost certainly bad luck because LoopRotate has very
little idea of whether rotation is profitable. Doing better requires a
more comprehensive solution.
This checkin is a quick fix that lacks generality (PR10293 has
a counter-example). But it trivially fixes the case in PR10293 without
interfering with other cases, and it does satisfy the criteria that
LoopRotate is a loop canonicalization pass that should avoid
heuristics and special cases.
I can think of two approaches that would probably be better in
the long run. Ultimately they may both make sense.
(1) LoopRotate should check that the current header would make a good
loop guard, and that the loop does not already have a sufficient
guard. The artificial SimplifiedLoopLatch check would be unnecessary,
and the design would be more general and canonical. Two difficulties:
- We need a strong guarantee that we won't endlessly rotate, so the
analysis would need to be precise in order to avoid the
SimplifiedLoopLatch precondition.
- Analyses like this are usually based on SCEV, which we don't want to
rely on.
(2) Rotate on-demand in late loop passes. This could even be done by
shoving the loop back on the queue after the optimization that needs
it. This could work well when we find LICM opportunities in
multi-branch loops. This requires some work, and it doesn't really
solve the problem of SCEV wanting a loop guard before the analysis.
llvm-svn: 181230
As pointed out by Rafael Espindola, we should match the DWARF encodings
produced by GCC in both pic and non-pic modes. This was not the case
for the non-pic case.
This patch changes all DWARF encodings to DW_EH_PE_absptr for the
non-pic case, just like GCC does. The test case is updated to check
for both variants.
llvm-svn: 181222
A * (1 - (uitofp i1 C)) -> select C, 0, A
B * (uitofp i1 C) -> select C, B, 0
select C, 0, A + select C, B, 0 -> select C, B, A
These come up in code that has been hand-optimized from a select to a linear blend,
on platforms where that may have mattered. We want to undo such changes
with the following transform:
A*(1 - uitofp i1 C) + B*(uitofp i1 C) -> select C, A, B
llvm-svn: 181216