derived class MipsSETargetLowering.
We shouldn't be generating madd/msub nodes if the target is Mips16, since
Mips16 doesn't have support for multiply-add/sub instructions.
llvm-svn: 178404
The new instructions have explicit register output operands and use table-gen
patterns instead of C++ code to do instruction selection.
Mips16's instructions are unaffected by this change.
llvm-svn: 178403
clang.arc.used is an interesting call for ARC since ObjCARCContract
needs to run to remove said intrinsic to avoid a linker error (since no
function with that name actually exists).
llvm-svn: 178369
Like nearbyint, rint can be implemented on PPC using the frin instruction. The
complication comes from the fact that rint needs to set the FE_INEXACT flag
when the result does not equal the input value (and frin does not do that). As
a result, we use a custom inserter which, after the rounding, compares the
rounded value with the original, and if they differ, explicitly sets the XX bit
in the FPSCR register (which corresponds to FE_INEXACT).
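Roughly, the inserted sequence looks like this (a sketch; the exact registers,
label, and FPSCR bit numbering are illustrative assumptions):
  frin   f1, f0        # round f0 to an integral value
  fcmpu  cr0, f1, f0   # compare the rounded result with the input
  beq    cr0, .Ldone   # equal: the rounding was exact
  mtfsb1 6             # set FPSCR[XX], i.e. FE_INEXACT
.Ldone: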
Once LLVM has better modeling of the floating-point environment we should be
able to (often) eliminate this extra complexity.
llvm-svn: 178362
These instructions are available on the P5x (and later) and on the A2. They
implement the standard floating-point rounding operations (floor, trunc, etc.).
One caveat: frin (round to nearest) does not implement "ties to even", and so
is only enabled in fast-math mode.
llvm-svn: 178337
This reverts commit 617330909f0c26a3f2ab8601a029b9bdca48aa61.
It broke the bots:
/home/clangbuild2/clang-ppc64-2/llvm.src/unittests/ADT/SmallVectorTest.cpp:150: PushPopTest
/home/clangbuild2/clang-ppc64-2/llvm.src/unittests/ADT/SmallVectorTest.cpp:118: Failure
Value of: v[i].getValue()
Actual: 0
Expected: value
Which is: 2
llvm-svn: 178334
The Mips assembler supports macros that allow the OR instruction
to take an immediate parameter. This patch adds an instruction
alias that converts this macro into a Mips ORI instruction.
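For example (operands chosen for illustration), this allows:
or $4, $5, 17 for ori $4, $5, 17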
Contributor: Vladimir Medic
llvm-svn: 178316
- RDRAND always clears the destination value when a random value is not
available (i.e. CF == 0). This value is truncated or zero-extended as
the false boolean value to be returned. Boolean simplification needs
to skip this 'zext' or 'trunc' node.
llvm-svn: 178312
To enable a load of a call address to be folded with that call, the
load is moved from outside of the callseq into the callseq. Such a move
inserts a non-glued node (that load) into a glued sequence. This non-glued
load is only removed when DAG selection folds it into a memory-form
call instruction. When such instruction selection is disabled, it breaks
DAG scheduling.
To prevent that, the move is disabled when the target favors register
indirect calls.
The previous workaround disabling CALL32m/CALL64m instruction selection is removed.
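For illustration (AT&T syntax; the symbol name is made up), the register
indirect form preferred here is:
  movq  callee_ptr(%rip), %rax   # load the call address
  callq *%rax                    # register indirect call
rather than the folded memory-form call:
  callq *callee_ptr(%rip)        # CALL64m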
llvm-svn: 178308
immediate in a register. I don't believe this should ever fail, but I see no
harm in trying to make this code bulletproof.
I've added an assert to ensure my assumption is correct. If the assertion fires,
something is wrong and we should fix it, rather than just silently falling back
to SelectionDAG isel.
llvm-svn: 178305
The Mips assembler allows the following to be used as aliased instructions:
jal $rs for jalr $rs
jal $rd,$rs for jalr $rd,$rs
This patch provides alias definitions in the .td files and test cases to show the usage.
Contributor: Vladimir Medic
llvm-svn: 178304
Compiling in 32-bit mode on a P7 would assert after 64-bit DAG combines were
added for bswap with load/store. This is because these combines are really only
valid in 64-bit mode, regardless of the CPU (and this was not being checked).
llvm-svn: 178286
Since we handle optimizable objc_retainBlocks through strength reduction
in OptimizeIndividualCalls, we know that all code after that point
will only see non-optimizable objc_retainBlock calls. IsForwarding is
only called by functions after that point, so it is OK to just classify
objc_retainBlock as non-forwarding.
<rdar://problem/13249661>.
llvm-svn: 178285
If an objc_retainBlock has the copy_on_escape metadata attached to it
AND if the block pointer argument only escapes down the stack, we are
allowed to strength reduce the objc_retainBlock to an objc_retain and
thus optimize it.
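A minimal IR sketch of the reduction (the metadata spelling and operand names
here are illustrative):
  %0 = call i8* @objc_retainBlock(i8* %block), !clang.arc.copy_on_escape !0
becomes, when %block only escapes down the stack:
  %0 = call i8* @objc_retain(i8* %block)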
Currently there is logic in the ARC data flow analysis to handle
this case, which is complicated and involves making distinctions
between objc_retainBlock and objc_retain in certain places while
considering them the same in others.
This patch simplifies said code by:
1. Performing the strength reduction in the initial ARC peephole
analysis (ObjCARCOpts::OptimizeIndividualCalls).
2. Changing the ARC dataflow analysis (which runs after the peephole
analysis) to consider all objc_retainBlock calls to not be optimizable
(since if the call was optimizable, we would have strength reduced it
already).
This patch leaves in place the infrastructure in the ARC dataflow analysis to
handle this case, which due to 2 will just be dead code. I am doing this
on purpose to separate the removal of the old code from the testing of
the new code.
<rdar://problem/13249661>.
llvm-svn: 178284
This follows up Ulrich Weigand's work in PPCInstrInfo.td and
PPCInstr64Bit.td by doing the corresponding work for most of the
Altivec patterns. I have not been able to do anything for the
following classes of instructions:
(1) Vector logicals. These don't have corresponding intrinsics and
don't have a single obvious vector type. So far as I can tell I need
to leave these as VRRC. Affected instructions are: VAND, VANDC,
VNOR, VOR, VXOR, V_SET0.
(2) Instructions that make use of vector shuffle. The selection code
promotes all shuffles to v16i8, so any pattern that matches on a
shuffle is constrained. I haven't found any way to make the patterns
match on their natural types, so I plan to leave these as VRRC.
Affected instructions are: VMRG*, VSPLTB, VSPLTH, VSPLTW, VPKUHUM,
VPKUWUM.
No change in behavior is anticipated.
llvm-svn: 178277
These are 64-bit load/store with byte-swap, and available on the P7 and the A2.
Like the similar instructions for 16- and 32-bit words, these are matched in the
target DAG-combine phase against load/store-bswap pairs.
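For example, an IR pair like the following sketch should now be matched to a
single stdbrx rather than a separate byte swap and store:
  %swapped = call i64 @llvm.bswap.i64(i64 %val)
  store i64 %swapped, i64* %ptr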
llvm-svn: 178276
PPC ISA 2.06 (P7, A2, etc.) has a popcntd instruction. Add this instruction and
tell TTI about it so that popcount-loop recognition will know about it.
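For example, the intrinsic call below (a sketch) can now be selected to a
single popcntd rather than a multi-instruction expansion:
  %count = call i64 @llvm.ctpop.i64(i64 %x)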
llvm-svn: 178233
There were a few places where kill flags were not being set correctly, and
where 32-bit instruction variants were being used with 64-bit registers. After
r178180, this code was being triggered causing llc to assert.
llvm-svn: 178220
This reverts commit 342d92c7a0adeabc9ab00f3f0d88d739fe7da4c7.
Turns out we're going with a different schema design to represent
DW_TAG_imported_modules so we won't need this extra field.
llvm-svn: 178215
form of call in preference to memory indirect on Atom.
In this case, the patch applies the optimization to the code for reloading
spilled registers.
The patch also includes changes to sibcall.ll and movgs.ll, which were
failing on the Atom buildbot after the first patch was applied.
This patch by Sriram Murali.
llvm-svn: 178193
These functions should have the same list of load/store instructions. Now that
all load/store forms have been normalized (to single instructions or pseudos)
they can be resynchronized.
Found by inspection, although hopefully this will improve optimization. I've
also added some comments.
llvm-svn: 178180
indirect through a memory address is to load the memory address into
a register and then call indirect through the register.
This patch implements this improvement by modifying SelectionDAG to
force a function address which is a memory reference to be loaded
into a virtual register.
Patch by Sriram Murali.
llvm-svn: 178171
This may be causing a failure on some buildbots:
Referencing function in another module!
tail call fastcc void @_ZL11EvaluateOpstPtRj(i16 zeroext %17, i16* %Vals, i32* %NumVals), !dbg !219
Referencing function in another module!
tail call fastcc void @_ZL11EvaluateOpstPtRj(i16 zeroext %19, i16* %Vals, i32* %NumVals), !dbg !221
Broken module found, compilation aborted!
Stack dump:
0. Running pass 'Function Pass Manager' on module 'ld-temp.o'.
1. Running pass 'Module Verifier' on function '@_ZL11EvaluateOpstPtRj'
clang: error: unable to execute command: Illegal instruction: 4
clang: error: linker command failed due to signal (use -v to see invocation)
<rdar://problem/13516485>
llvm-svn: 178156
This is a follow-up to r178073 (which should actually make target-customized
spilling work again).
I still don't have a regression test for this (but it would be good to have
one; Thumb 1 and Mips16 use this callback as well).
Patch by Richard Sandiford.
llvm-svn: 178137
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 178127
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 178126
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 178125
The register parameter in these instructions becomes the base register in an
r+i ld instruction (and, thus, cannot be r0).
This is not yet testable because we don't yet allocate r0 (and even then any
test would be very fragile).
llvm-svn: 178121
Either operand of these pseudo instructions can be transformed into the first
operand of an isel instruction (and this operand cannot be r0).
This is not yet testable because we don't yet allocate r0 (and even when we do,
any test would be very fragile).
llvm-svn: 178119
Like the addi/addis instructions themselves, these pseudo instructions also
cannot have r0 as their register parameter (because it will be interpreted as
the value 0).
This is not yet testable because we don't yet allocate r0 (and even when we do,
any regression test would be very fragile because it would depend on the
register allocator heuristics).
llvm-svn: 178118
Some implementation detail in the forgotten past required the link
register to be placed in the GPRC and G8RC register classes. This is
just wrong on the face of it, and causes several extra intersection
register classes to be generated. I found this was having evil
effects on instruction scheduling, by causing the wrong register class
to be consulted for register pressure decisions.
No code generation changes are expected, other than some minor changes
in instruction order. Seven tests in the test bucket required minor
tweaks to adjust to the new normal.
llvm-svn: 178114
This is just the basic groundwork for supporting DW_TAG_imported_module but I
wanted to commit this before pushing support further into Clang or LLVM so that
this rather churny change is isolated from the rest of the work. The major
churn here is obviously adding another field (within the common DIScope prefix)
to all DIScopes (files, classes, namespaces, lexical scopes, etc). This should
be the last big churny change needed for DW_TAG_imported_module/using directive
support/PR14606.
llvm-svn: 178099
As Bill Schmidt pointed out to me, only on Darwin do we need to spill/restore
VRSAVE in the SjLj code. For non-Darwin, don't spill/restore VRSAVE (and I've
added some asserts to make sure that we're not).
As it turns out, we're not currently handling the Darwin case correctly (I've
added a FIXME in the test case). I've tried adding various implied register
definitions/uses to force the spill without success, so I'll need to address
this later.
llvm-svn: 178096
if execution failed. ExecuteAndWait returns -1 upon an execution failure, but
checking the return value isn't sufficient because the wait command may
return -1 as well. This new parameter is to be used by the clang driver in a
subsequent commit.
Part of rdar://13362359
llvm-svn: 178087
If we compile a single source program, the `.gcda' file will be generated where
the program was executed. This isn't desirable, because that location may be
unpredictable (the program could call `chdir', for instance).
Instead, we will output the `.gcda' file in the same place we output the `.gcno'
file. I.e., the directory where the executable was generated. This matches GCC's
behavior.
<rdar://problem/13061072> & PR11809
llvm-svn: 178084
All Intel CPUs since Yonah look a lot alike, at least at the granularity
of the scheduling models. We can add more accurate models for
processors that aren't Sandy Bridge if required. Haswell will probably
need its own.
The Atom processor and anything based on NetBurst is completely
different. So are the non-Intel chips.
llvm-svn: 178080
This will be used to factor out some uses of magic number operand offsets
inside Clang where these fields were updated in an effort to resolve forward
declarations/circular references.
llvm-svn: 178078
As suggested by Bill Schmidt (in reviewing r178067), use the real register
number bit lengths (which is self-documenting, and prevents using illegal
numbers), and set only the relevant bits in HWEncoding (which defaults to 0).
No functionality change intended.
llvm-svn: 178077
As pointed out by Richard Sandiford, my recent updates to the register
scavenger broke targets that use custom spilling (because the new code assumed
that if there were no valid spill slots, then spilling would be impossible).
I don't have a test case, but it should be possible to create one for Thumb 1,
Mips 16, etc.
llvm-svn: 178073
As pointed out by Jakob, we don't need to maintain a separate
register-numbering table. Instead we should let TableGen generate the table for
us from the information (already present) in PPCRegisterInfo.td.
TRI->getEncodingValue is now used to access register-encoding values.
No functionality change intended.
llvm-svn: 178067
Now that the register scavenger can support multiple spill slots, and PEI can
use virtual-register-based scavenging for multiple simultaneous registers, we
can use a virtual register for the transfer register in the CR spilling code.
This should eliminate the last place (outside of the prologue/epilogue) where
we depend on the unconditional availability of the r0 register. We will soon be
able to allocate it (in a somewhat restricted sense) as a GPR.
llvm-svn: 178060
PPC's use of PEI's virtual-register-based scavenging functionality had
redefined the virtual registers (it was non-SSA). Now that PEI supports
dealing with instructions with multiple virtual registers, this can be
cleaned up to use multiple virtual registers and keep SSA form.
No functionality change intended.
llvm-svn: 178059
The previous algorithm could not deal properly with scavenging multiple virtual
registers because it kept only one live virtual -> physical mapping (and
iterated through operands in order). Now we don't maintain a current mapping,
but rather use replaceRegWith to completely remove the virtual register as
soon as the mapping is established.
In order to allow the register scavenger to return a physical register killed
by an instruction for definition by that same instruction, we now call
RS->forward(I) prior to eliminating virtual registers defined in I. This
requires a minor update to forward to ignore virtual registers.
These new features will be tested in forthcoming commits.
llvm-svn: 178058
Now all x86 instructions that have itinerary classes also have SchedRW
lists. This is required before the new scheduling models can be used.
There are still unannotated instructions remaining, but they don't have
itinerary classes either.
llvm-svn: 178051
This is a compile-time optimization. Before the patch we would do two traversals
on each call to aliasGEP - one with a set size parameter and one with UnknownSize.
We can do better by first checking the result of the alias query with
UnknownSize.
Only if this one returns MayAlias do we query a second time using size and type.
This recovers an approximately 7% compile-time regression on spec/ammp.
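A minimal sketch of the new control flow (parameter lists abbreviated from
the actual aliasGEP code):
  // Cheap conservative query first: no size or type information.
  AliasResult R = aliasCheck(V1, UnknownSize, 0, V2, UnknownSize, 0);
  if (R != MayAlias)
    return R;
  // Only if that is inconclusive, pay for the precise query.
  return aliasCheck(V1, V1Size, V1TBAAInfo, V2, V2Size, V2TBAAInfo);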
radar://12349960
llvm-svn: 178045
The OptimizeIntToFloatBitCast transformation converts shift-truncate sequences
into extractelement operations. The computation of the element
index to be used in the resulting operation is currently only
correct for little-endian targets.
This commit fixes the element index computation to be correct
for big-endian targets as well. If the target byte order is
unknown, the optimization cannot be performed at all.
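An IR sketch of the transformation on a little-endian target:
  %bc = bitcast <2 x float> %v to i64
  %hi = lshr i64 %bc, 32
  %tr = trunc i64 %hi to i32
  %f  = bitcast i32 %tr to float
becomes
  %f = extractelement <2 x float> %v, i32 1
On a big-endian target the same sequence must select element 0 instead.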
llvm-svn: 178031
This reverts commit r177968. It is causing failures in a local build bot.
"fatal error: error in backend: Expected a variant SchedClass"
Original commit message:
Move the CortexA9 resources into the CortexA9 SchedModel namespace. Define
resource mappings under the CortexA9 SchedModel. Define resources and mappings
for the SwiftModel.
llvm-svn: 178028
Restore the EXEC mask early, otherwise a copy might end up not being executed.
Candidate for the mesa stable branch.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 178018
If PC or SP is the destination, the disassembler erroneously rejected the
encoding as invalid, despite the manual saying that both are fine.
This patch addresses failure to decode encoding T4 of LDR (A8.8.62) which is a
postindexed load, where the offset 0xc is applied to SP after the load occurs.
llvm-svn: 178017
There remain a number of patterns that cannot (and should not)
be handled by the asm parser, in particular all the Pseudo patterns.
This commit marks those patterns as isCodeGenOnly.
No change in generated code.
llvm-svn: 178008
MCTargetDesc/PPCMCCodeEmitter.cpp currently has code like:
if (isSVR4ABI() && is64BitMode())
Fixups.push_back(MCFixup::Create(0, MO.getExpr(),
(MCFixupKind)PPC::fixup_ppc_toc16));
else
Fixups.push_back(MCFixup::Create(0, MO.getExpr(),
(MCFixupKind)PPC::fixup_ppc_lo16));
This is a problem for the asm parser, since it requires knowledge of
the ABI / 64-bit mode to be set up. However, more fundamentally,
at this point we shouldn't make such distinctions anyway; in an assembler
file, it always ought to be possible to e.g. generate TOC relocations even
when the main ABI is one that doesn't use TOC.
Fortunately, this is actually completely unnecessary; that code was added
to decide whether to generate TOC relocations, but that information is in
fact already encoded in the VariantKind of the underlying symbol.
This commit therefore merges those fixup types into one, and then decides
which relocation to use based on the VariantKind.
No changes in generated code.
llvm-svn: 178007
As part of the sequence generated to implement long double -> int
conversions, we need to perform an FADD in round-to-zero mode. This is
problematic, since the FPSCR is not modeled at all at the SelectionDAG
level, and thus there is a risk of getting floating point instructions
generated out of sequence with the instructions to modify FPSCR.
The current code handles this by somewhat "special" patterns that in part
have dummy operands, and/or duplicate existing instructions, making them
awkward to handle in the asm parser.
This commit changes this by leaving the "FADD in round-to-zero mode"
as an atomic operation on the SelectionDAG level, and only split it up into
real instructions at the MI level (via custom inserter). Since at *this*
level the FPSCR *is* modeled (via the "RM" hard register), much of the
"special" stuff can just go away, and the resulting patterns can be used by
the asm parser.
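A sketch of what the custom inserter expands to at the MI level (the register
choices and exact FPSCR bit manipulation here are assumptions):
  mffs   f7          # save the current FPSCR
  mtfsb1 31          # switch the rounding mode to round-to-zero (RN = 01)
  mtfsb0 30
  fadd   f1, f2, f3  # the FADD, now performed in round-to-zero mode
  mtfsf  255, f7     # restore the original FPSCR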
No significant change in generated code expected.
llvm-svn: 178006
The LDrs pattern is a duplicate of LD, except that it accepts memory
addresses where the displacement is a symbolLo64. An operand type
"memrs" is defined for just that purpose.
However, this wouldn't be necessary if the default "memrix" operand
type were to simply accept 64-bit symbolic addresses directly.
The only problem with that is that it uses "symbolLo", which is
hardcoded to 32-bit.
To fix this, this commit changes "memri" and "memrix" to use new
operand types for the memory displacement, which allow iPTR
instead of i32. This will also make address parsing easier to
implement in the asm parser.
No change in generated code.
llvm-svn: 178005
The ADDI/ADDI8 patterns are currently duplicated into ADDIL/ADDI8L,
which describe the same instruction, except that they accept a
symbolLo[64] operand instead of a s16imm[64] operand.
This duplication confuses the asm parser, and it is actually not
needed, since symbolLo[64] already accepts immediate operands anyway.
So this commit removes the duplicate patterns.
No change in generated code.
llvm-svn: 178004
This commit changes the ISEL patterns to use a CCBITRC operand
instead of a "pred" operand. This matches the actual instruction
text more directly, and simplifies use of ISEL with the asm parser.
In addition, this change allows some simplification of handling
the "pred" operand, as this is now only used by BCC.
No change in generated code.
llvm-svn: 178003
The BLR pattern cannot be recognized by the asm parser in its current form.
This complexity is due to an apparent attempt to enable conditional BLR
variants. However, none of those can ever be generated by current code;
the pattern is only ever created using the default "pred" operand.
To simplify the pattern and allow it to be recognized by the parser,
this commit removes those attempts at conditional BLR support.
When we later come back to actually add real conditional BLR, this
should probably be done via a fully generic conditional branch pattern.
No change in generated code.
llvm-svn: 178002
In PPCInstr64Bit.td, some branch patterns appear in a different sequence
than the corresponding 32-bit patterns in PPCInstrInfo.td.
To simplify future changes that affect both files, this commit moves
those patterns to rearrange them into a similar sequence.
No effect on generated code.
llvm-svn: 178001
Use a MapVector on types where the iteration order matters.
Otherwise we don't always produce deterministic output.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 177999
Fixes PR15570: SEGV: SCEV back-edge info invalid after dead code removal.
Indvars creates a SCEV expression for the loop's back edge taken
count, then determines that the comparison is always true and
removes it.
When loop-unroll asks for the expression, it contains a NULL
SCEVUnknown (as a CallbackVH).
forgetMemoizedResults should invalidate the loop back edges expression.
llvm-svn: 177986
its own library. These functions are bridging between the bitcode reader
and the ll parser which are in different libraries. Previously we didn't
have any good library to do this, and instead played fast and loose with
a "header only" set of interfaces in the Support library. This really
doesn't work well as evidenced by the recent attempt to add timing logic
to these routines.
As part of this, make them normal functions rather than weird inline
functions, and sink the implementation into the library. Also clean up
the header to be nice and minimal.
This requires updating lots of build system dependencies to specify that
the IRReader library is needed, and several source files to not
implicitly rely upon the header file to transitively include all manner
of other headers.
If you are using IRReader.h, this commit will break you (the header
moved) and you'll need to also update your library usage to include
'irreader'. I will commit the corresponding change to Clang momentarily.
llvm-svn: 177971
Move the CortexA9 resources into the CortexA9 SchedModel namespace. Define
resource mappings under the CortexA9 SchedModel. Define resources and mappings
for the SwiftModel.
llvm-svn: 177968
This is very much a work in progress. Please send me a note if you start to depend
on the added abstract read/write resources. They are subject to change until
further notice.
The old itinerary is still the default.
llvm-svn: 177967
it's only really useful if you're going to crash anyways. Use it in the pretty stack trace
printer to kill the compiler if we hang while printing the stack trace.
llvm-svn: 177962
This will allow for verification and analysis of the merge function of
the data flow analyses in the ARC optimizer.
The actual implementation of this feature is by introducing calls to
the functions llvm.arc.annotation.{bottomup,topdown}.{bbstart,bbend}
which are only declared. Each such call takes a pointer to a global
whose name matches that of the pointer whose provenance is being tracked,
and a pointer whose name is one of our Sequence states and which points
to a string containing that same name.
To ensure that the optimizer does not consider these annotations in any
way, I made it so that the annotations are considered to be of IC_None
type.
A test case is included for this commit and the previous
ObjCARCAnnotation commit.
llvm-svn: 177952
Previously the inner workings of the data flow analysis in ObjCARCOpts were hard to
get out of the optimizer for analysis of bugs or testing. All of the current ARC
unit tests are based on testing the effect of the data flow
analysis (i.e. what statements are removed or moved, etc.). This creates
weakness in the current unit testing regime, since we are not actually testing
what effects various instructions have on the modeled pointer state.
Additionally, in order to analyze a bug in the optimizer, one would need to track
by hand what the optimizer was actually doing, either through use of DEBUG
statements or through the usage of a debugger, both yielding large losses in
developer productivity.
This patch deals with these two issues by providing ARC annotation
metadata that annotates instructions with the state changes that they cause in
various pointers as well as provides metadata to annotate provenance sources.
Specifically, we introduce the following metadata types:
1. llvm.arc.annotation.bottomup.
2. llvm.arc.annotation.topdown.
3. llvm.arc.annotation.provenancesource.
llvm.arc.annotation.{bottomup,topdown}: These annotations describe a state
change in a pointer when we are visiting instructions bottomup/topdown
respectively. The output format for both is the same:
!1 = metadata !{metadata !"(test,%x)", metadata !"S_Release", metadata !"S_Use"}
The first element is a string tuple with the following format:
(function,variable name)
The next two elements of the metadata show the previous state of the
pointer (in this case S_Release) and the new state of the pointer (S_Use). We
write the metadata in such a manner to ensure that it is easy for outside tools
to parse. This is important since I am currently working on a tool for taking
this information and pretty printing it besides the IR and that can be used for
LIT style testing via the generation of an index.
llvm.arc.annotation.provenancesource: This metadata is used to annotate
instructions which act as provenance sources, i.e. ones that introduce a
new (from the optimizer's perspective) non-argument pointer to track. This
enables cross-referencing in between provenance sources and the state changes
that occur to them.
This is still a work in progress. Additionally I plan on committing
later today additions to the annotations that annotate at the top/bottom
of basic blocks the state of the various pointers being tracked.
*NOTE* The metadata support is conditionally compiled into libObjCARCOpts only
when we are producing a debug build of llvm/clang and even so are
disabled by default. To enable the annotation metadata, pass in
-enable-objc-arc-annotations to opt.
llvm-svn: 177951
- It's still considered aligned when the specified alignment is larger
than the natural alignment;
- The new alignment for the high 128-bit vector should be min(16,
alignment) as the pointer is advanced by 16, a power-of-2 offset.
llvm-svn: 177947
- Handle the case where the result of 'insert_subvect' is bitcasted
before 'extract_subvec'. This removes the redundant insertf128/extractf128
pair on unaligned 256-bit vector loads/stores on vectors of non-64-bit integers.
llvm-svn: 177945
All the instructions tagged with IIC_DEFAULT had nothing in common, and
we already have a NoItineraries class to represent untagged
instructions.
llvm-svn: 177937
For instance, the following transformation will be disabled:
x + x + x => 3.0f * x;
The problem with these transformations is that they introduce an FP constant,
which the following instruction-selection pass cannot handle.
Reviewed by Nadav, thanks a lot!
rdar://13445387
llvm-svn: 177933
The problem is that the code mistakenly took for granted that the following
constructor is able to create an APFloat from a *SIGNED* integer:
APFloat::APFloat(const fltSemantics &ourSemantics, integerPart value)
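Since integerPart is an unsigned 64-bit type, a negative value wraps before
the constructor ever sees it. For example:
  APFloat F(APFloat::IEEEdouble, (integerPart)-1);
  // Yields about 1.8446744e19 (2^64 - 1), not -1.0.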
rdar://13486998
llvm-svn: 177906
This commit updates the PowerPC back-end (PPCInstrInfo.td and
PPCInstr64Bit.td) to use types instead of register classes in
instruction patterns, along the lines of Jakob Stoklund Olesen's
changes in r177835 for Sparc.
llvm-svn: 177890
This commit updates the PowerPC back-end (PPCInstrInfo.td and
PPCInstr64Bit.td) to use types instead of register classes in
Pat patterns, along the lines of Jakob Stoklund Olesen's
changes in r177829 for Sparc.
llvm-svn: 177889
sure the base register and would-be writeback register don't conflict for
stores. This was already being done for loads.
Unfortunately, it is rather difficult to create a test case for this issue. It
was exposed in 450.soplex at LTO and requires unlucky register allocation.
<rdar://13394908>
llvm-svn: 177874
This simplification happens in two places:
- using the nsw attribute when the shl / mul is used by a sign test
- when the shl / mul is compared for (in)equality to zero
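An IR sketch of the second case (with nsw, the shifted value can only be zero
if the original value was zero, so the simplification is safe):
  %s = shl nsw i32 %x, %n
  %c = icmp eq i32 %s, 0
simplifies to
  %c = icmp eq i32 %x, 0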
llvm-svn: 177856
DAG arguments can optionally be named:
(dag node, node:$name)
With this change, the node is also optional:
(dag node, node:$name, $name)
The missing node is treated as an UnsetInit, so the above is equivalent
to:
(dag node, node:$name, ?:$name)
This syntax is useful in output patterns where we currently require the
types of variables to be repeated:
def : Pat<(subc i32:$b, i32:$c), (SUBCCrr i32:$b, i32:$c)>;
This is preferable:
def : Pat<(subc i32:$b, i32:$c), (SUBCCrr $b, $c)>;
llvm-svn: 177843
In order for the new ZERO register to be used with MC, etc. we need to specify
its register number (0).
Thanks to Kai for reporting the problem!
llvm-svn: 177833
In preparation for using the new register scavenger capability for providing
more than one register simultaneously, specifically note functions that have
spilled VRSAVE (currently, this can happen only in functions that use the
setjmp intrinsic). As with CR spilling, such functions will need to provide two
emergency spill slots to the scavenger.
No functionality change intended.
llvm-svn: 177832
I recently added a BCL instruction definition as part of implementing SjLj
support. This can also be used to MCize bcl emission in the asm printer.
No functionality change intended.
llvm-svn: 177830
The SelectionDAG graph has MVT type labels, not register classes, so
this makes it clearer what is happening.
This notation is also robust against adding more types to the IntRegs
register class.
llvm-svn: 177829
These spilling functions will eventually make use of the register scavenger,
however, they'll do so by taking advantage of PEI's virtual-register-based
delayed scavenging mechanism. As a result, these function parameters will not
be used, and can be removed.
No functionality change intended.
llvm-svn: 177827
The LR register is unconditionally reserved, and its spilling and restoration
is handled by the prologue/epilogue code. As a result, it is never explicitly
spilled by the register allocator.
No functionality change intended.
llvm-svn: 177823
Performing this check unilaterally prevented us from generating FMAs when the incoming IR contained illegal vector types which would eventually be legalized to underlying types that *did* support FMA.
For example, an @llvm.fmuladd on an OpenCL float16 should become a sequence of float4 FMAs, not float4 fmul+fadd's.
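In IR terms (a sketch using the v16f32 overload for illustration):
  %r = call <16 x float> @llvm.fmuladd.v16f32(<16 x float> %a,
                                              <16 x float> %b,
                                              <16 x float> %c)
should now be able to legalize to four <4 x float> FMAs on such a target,
rather than being split into fmul+fadd pairs.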
NOTE: Because we still call the target-specific profitability hook, individual targets can reinstate the old behavior, if desired, by simply performing the legality check inside their callback hook. They can also perform more sophisticated legality checks, if, for example, some illegal vector types can be productively implemented as FMAs, but not others.
llvm-svn: 177820
r177774 broke the lld-x86_64-darwin11 builder; error:
error: comparison of integers of different signs: 'int' and 'size_type' (aka 'unsigned long')
for (SI = 0; SI < Scavenged.size(); ++SI)
~~ ^ ~~~~~~~~~~~~~~~~
Fix this by making SI also unsigned.
llvm-svn: 177780
This patch lets the register scavenger make use of multiple spill slots in
order to guarantee that it will be able to provide multiple registers
simultaneously.
To support this, the RS's API has changed slightly: setScavengingFrameIndex /
getScavengingFrameIndex have been replaced by addScavengingFrameIndex /
isScavengingFrameIndex / getScavengingFrameIndices.
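A minimal usage sketch (frame-index creation elided):
  // Before: at most one emergency spill slot.
  RS->setScavengingFrameIndex(FI);
  // After: register one slot per register that may be needed at once.
  RS->addScavengingFrameIndex(FI0);
  RS->addScavengingFrameIndex(FI1);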
In forthcoming commits, the PowerPC backend will use this capability in order
to implement the spilling of condition registers, and some special-purpose
registers, without relying on r0 being reserved. In some cases, spilling these
registers requires two GPRs: one for addressing and one to hold the value being
transferred.
llvm-svn: 177774
Add "evaluate-tbaa" to print alias queries of loads/stores. Alias queries
between pointers do not include TBAA tags.
Add a test case for "placement new". TBAA currently says NoAlias.
llvm-svn: 177772
We currently have a duplicated set of call instruction patterns depending
on the ABI to be followed (Darwin vs. Linux). This is a bit odd; while the
different ABIs will result in different instruction sequences, the actual
instructions themselves ought to be independent of the ABI. And in fact it
turns out that the only nontrivial difference between the two sets of
patterns is that in the PPC64 Linux ABI, the instruction used for indirect
calls is marked to take X11 as extra input register (which is indeed used
only with that ABI to hold an incoming environment pointer for nested
functions). However, this does not need to be hard-coded at the .td
pattern level; instead, the C++ code expanding calls can simply add that
use, just like it adds uses for argument registers anyway.
No change in generated code expected.
llvm-svn: 177735
Currently, the sub-operand of a memrr address that corresponds to what
hardware considers the base register is called "offreg", while the
sub-operand that corresponds to the offset is called "ptrreg".
To avoid confusion, this patch simply swaps the names of those two
sub-operands and updates all uses. No functional change is intended.
llvm-svn: 177734
PPCTargetLowering::getPreIndexedAddressParts currently provides
the base part of a memory address in the offset result, and the
offset part in the base result. That swap is then undone again
when an MI instruction is generated (in PPCDAGToDAGISel::Select
for loads, and using .td Pat patterns for stores).
This patch reverts this double swap, to make common code and
back-end be in sync as to which part of the address is base
and which is offset.
To avoid performance regressions in certain cases, target code
now checks whether the choice of base register would be rejected
for pre-inc accesses by common code, and attempts to swap base
and offset again in such cases. (Overall, this means that now
pre-inc accesses are generated *more* frequently than before.)
llvm-svn: 177733
The iaddroff ComplexPattern is supposed to recognize displacement
expressions that have been processed by a SelectAddressRegImm,
which means it needs to accept TargetConstant and TargetGlobalAddress
nodes. Currently, it erroneously also accepts some other nodes,
in particular Constant and PPCISD::Lo.
While this problem is currently latent, it would cause wrong-code
bugs with a follow-on patch I'm about to commit, so this patch
tightens the ComplexPattern. The equivalent change is made in
PPCDAGToDAGISel::Select, where pre-inc load patterns are handled
(as opposed to store patterns, the loads are handled in C++ code
without making use of the .td ComplexPattern).
llvm-svn: 177732
The xaddroff pattern is currently (mistakenly) used to recognize
the *base* register in pre-inc store patterns. This patch replaces
those uses by ptr_rc_nor0 (as is elsewhere done to match the base
register of an address), and removes the now unused ComplexPattern.
llvm-svn: 177731
Fixes wrong lighting in some corner cases with r600g and radeonsi, e.g.
manifested by failure of two piglit/glean tests and intermittent black
patches in many apps.
Tested on SI and RS880.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=62012 [radeonsi]
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=58150 [r600g]
NOTE: This is a candidate for the Mesa stable branch.
Reviewed-by: Christian König <christian.koenig@amd.com>
llvm-svn: 177730
Before: the function name was stored by the compiler as a constant string
and the run-time was printing it.
Now: the PC is stored instead and the run-time prints the full symbolized frame.
This adds a couple of instructions into every function with non-empty stack frame,
but also reduces the binary size because we store less strings (I saw 2% size reduction).
This change bumps the asan ABI version to v3.
llvm part.
Example of report (now):
==31711==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7fffa77cf1c5 at pc 0x41feb0 bp 0x7fffa77cefb0 sp 0x7fffa77cefa8
READ of size 1 at 0x7fffa77cf1c5 thread T0
#0 0x41feaf in Frame0(int, char*, char*, char*) stack-oob-frames.cc:20
#1 0x41f7ff in Frame1(int, char*, char*) stack-oob-frames.cc:24
#2 0x41f477 in Frame2(int, char*) stack-oob-frames.cc:28
#3 0x41f194 in Frame3(int) stack-oob-frames.cc:32
#4 0x41eee0 in main stack-oob-frames.cc:38
#5 0x7f0c5566f76c (/lib/x86_64-linux-gnu/libc.so.6+0x2176c)
#6 0x41eb1c (/usr/local/google/kcc/llvm_cmake/a.out+0x41eb1c)
Address 0x7fffa77cf1c5 is located in stack of thread T0 at offset 293 in frame
#0 0x41f87f in Frame0(int, char*, char*, char*) stack-oob-frames.cc:12 <<<<<<<<<<<<<< this is new
This frame has 6 object(s):
[32, 36) 'frame.addr'
[96, 104) 'a.addr'
[160, 168) 'b.addr'
[224, 232) 'c.addr'
[288, 292) 's'
[352, 360) 'd'
llvm-svn: 177724
The original code used i32, and i64 if legal. This introduced unneeded
casts when they aren't legal, or when the index variable i has another
type. In order of preference: try to use i's type; use the smallest
fitting legal type (using an added DataLayout method); default to i32.
A testcase checks that this works when the index gep operand is i16.
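A sketch of that preference order (variable names, and my reading of the new
DataLayout method, are assumptions):
  Type *IdxTy = I->getType();                        // 1. try i's own type
  if (!DL->isLegalInteger(IdxTy->getPrimitiveSizeInBits()))
    IdxTy = DL->getSmallestLegalIntType(Ctx, Width); // 2. smallest fitting legal type
  if (!IdxTy)
    IdxTy = Type::getInt32Ty(Ctx);                   // 3. default to i32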
Patch by: Ahmed Bougacha <ahmed.bougacha@gmail.com>
Reviewed by: Duncan
llvm-svn: 177712
-time-ir-parsing flag
This breaks the layering of the Support library. We can't add an
implementation side to IRReader because it refers directly to entities
only accessible as part of the IR, AsmParser, and BitcodeReader
libraries. It can only be used in a context where all of those libraries
will be available.
We'll need to find some other way to get this functionality, and
hopefully solve the long-standing layering problem of IRReader.h...
llvm-svn: 177695
For Mips, a branch target is formed by adding an 18-bit signed offset (the
16-bit offset field shifted left 2 bits) to the address of the instruction
following the branch (i.e. the instruction in the branch delay slot, not
the branch itself), giving a PC-relative effective target address.
Previously, the code generator did not perform the shift of the immediate
branch offset, which resulted in a wrong instruction encoding. This patch
fixes the issue.
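As a formula:
  target = (branch address + 4) + (sign_extend(offset16) << 2)
For example, an encoded offset field of 0x0010 targets the address 0x40 bytes
past the instruction in the delay slot.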
Contributor: Vladimir Medic
llvm-svn: 177687
This patch uses the generated instruction info tables to
identify memory load/store instructions.
After successful matching and based on the operand type
and size, it generates additional instructions to the output.
Contributor: Vladimir Medic
llvm-svn: 177685
As Jakob pointed out in his review of r177423, having a shared ZERO
register between the 32- and 64-bit register classes causes this
odd G8RC_NOX0_and_GPRC_NOR0 class to be created. As recommended,
this adds a ZERO8 register which differentiates the 32- and 64-bit
zeros.
No functionality change intended.
llvm-svn: 177683
How did this ever work?
Basically, if you have a function that's inlined into the caller, it may not
have any 'call' instructions, but any 'resume' instructions it may have should
still be forwarded to the outer (caller's) landing pad. This requires that all
of the 'landingpad' instructions in the callee have their clauses merged with
the caller's outer 'landingpad' instruction (hence the bit of ugly code in the
`forwardResume' method).
Testcase in a follow-up commit to the test-suite repository.
<rdar://problem/13360379> & PR15555
llvm-svn: 177680
Thanks to Jakob for isolating the underlying problem from the
test case in r177423. The original commit had introduced
asymmetric copy operations, but these turned out to be a work-around
to the real problem (the use of == instead of hasSubClassEq in PPCCTRLoops).
llvm-svn: 177679
The DARWIN_USER_TEMP_DIR and DARWIN_USER_CACHE_DIR configuration
settings are more idiomatic for Darwin than the TMPDIR environment
variable.
llvm-svn: 177669
The .set directive in the Mips assembler can be
used to set the value of a symbol to an expression.
This changes the symbol's value and type to conform
to the expression's.
Syntax: .set symbol, expression
This patch implements the parsing of the above syntax
and enables the parser to use defined symbols when
parsing operands.
Contributor: Vladimir Medic
llvm-svn: 177667
This implements SJLJ lowering on PPC, making the Clang functions
__builtin_{setjmp/longjmp} functional on PPC platforms. The implementation
strategy is similar to that on X86, with the exception that a branch-and-link
variant is used to get the right jump address. Credit goes to Bill Schmidt for
suggesting the use of the unconditional bcl form (instead of the regular bl
instruction) to limit return-address-cache pollution.
Benchmarking the speed at -O3 of:
static jmp_buf env_sigill;
void foo() {
__builtin_longjmp(env_sigill,1);
}
main() {
...
for (int i = 0; i < c; ++i) {
if (__builtin_setjmp(env_sigill)) {
goto done;
} else {
foo();
}
done:;
}
...
}
vs. the same code using the libc setjmp/longjmp functions on a P7 shows that
this builtin implementation is ~4x faster with Altivec enabled and ~7.25x
faster with Altivec disabled. This comparison is somewhat unfair because the
libc version must also save/restore the VSX registers which we don't yet
support.
llvm-svn: 177666
Although there is only one Altivec VRSAVE register, it is a member of
a register class, and we need the ability to spill it. Because this
register is normally callee-preserved and handled by special code this
has never before been necessary. However, this capability will be required by
a forthcoming commit adding SjLj support.
llvm-svn: 177654
The old code used to lower FRAMEADDR tried to replicate the logic in the real
frame-lowering code that determines whether or not the frame pointer (r31) will
be used. When it seemed as though the frame pointer would not be used, the
stack pointer (r1) was used instead. Unfortunately, because the stack size is
not yet known, this does not work. Instead, this change introduces new
always-reserved pseudo-registers (FP and FP8) that are replaced during prologue
insertion with the real frame-pointer register (either r1 or r31).
It is important that this intrinsic always return a valid frame address because
it is used by Clang to store the frame address as part of code generation for
__builtin_setjmp.
llvm-svn: 177653
NEON is not IEEE 754 compliant, so we should avoid lowering single-precision
floating point operations with NEON unless unsafe-math is turned on. The
equivalent VFP instructions are IEEE 754 compliant, but in some cores they're
much slower, so some archs/OSs might still request it to be on by default,
such as Swift and Darwin.
llvm-svn: 177651
header.
This method is called in the hot path for *many* passes, SROA is what
caught my interest. A common pattern is that which branch of the switch
should be taken is known in the callsite and so it is a very good
candidate for inlining and simplification. Moving it into the header
allows the optimizer to fold a lot of boring, repetitive code in
callers of this routine.
I'm seeing pretty significant speedups in parts of SROA and I suspect
other passes will see similar speedups if they end up working with type
sizes frequently. I've not seen any significant growth of the binaries
as a consequence, but let me know if you see anything suspicious here.
llvm-svn: 177632
The key part of this is ensuring that name prefixes remain in a Twine
form until we get to a point where we can nuke them under NDEBUG. This
is tricky using the old APIs as they played fast and loose with Twine,
which is prone to serious error. The inserter is much cleaner as it is
actually in the call stack leading to the setName call, and so has
a good opportunity to prepend the prefix.
This matters more than you might imagine because most runs over an
alloca find a single partition, and rewrite 3 or 4 instructions
referring to it. As a consequence doing this lazily and exclusively with
Twine allows the optimizer to delete more of it and shaves another 2% to
3% off of the release build's SROA run time for PR15412. I also think
the APIs are cleaner, and the use of Twine is more reliable, so
I consider it a win-win despite the churn required to reach this state.
llvm-svn: 177631
The simplify-libcalls pass implemented a doInitialization hook to infer
function prototype attributes for well-known functions. Given that the
simplify-libcalls pass is going away *and* that the functionattrs pass
is already in place to deduce function attributes, I am moving this logic
to the functionattrs pass. This approach was discussed during patch
review:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20121126/157465.html.
llvm-svn: 177619
- After moving the logic that recognizes vector shifts by a scalar amount from
DAG combining into DAG lowering, we declare all vector shifts to be custom
lowered, even when a vector shift on AVX is legal. As a result, the cost model
needs special tuning to identify these legal cases.
llvm-svn: 177586
Use the new `llvm_gcov_init' function to register the writeout and flush
functions. The initialization function will also call `atexit' for some cleanups
and final writeout calls. But it does this only once. This is better than
checking for the `main' function, because in a library that function may not
exist.
<rdar://problem/12439551>
llvm-svn: 177579
This makes it possible to report multiple errors in one invocation.
There are already calls to PrintError in CodeGenDAGPatterns.cpp which
previously would not cause TableGen to fail.
<rdar://problem/13463339>
llvm-svn: 177573