This patch will optimize the following cases:
  sub r1, r3 | sub r1, imm
  cmp r3, r1 or cmp r1, r3 | cmp r1, imm
  bge L1
TO
  subs r1, r3
  bge L1 or ble L1
If the branch instruction can use the flags set by the "sub", then we can
replace the "sub" with "subs" and eliminate the "cmp" instruction.
rdar: 10734411
llvm-svn: 156599
The sub-registers explicitly listed in SubRegs in the .td files form a
tree. In a complicated register bank, it is possible to have
sub-register relationships across sub-trees. For example, the ARM NEON
double vector Q0_Q1 is a tree:
Q0_Q1 = [Q0, Q1], Q0 = [D0, D1], Q1 = [D2, D3]
But we also define the DPair register D1_D2 = [D1, D2], which is fully
contained in Q0_Q1.
This patch teaches TableGen to find such sub-register relationships, and
assign sub-register indices to them. In the example, TableGen will
create a dsub_1_dsub_2 sub-register index, and add D1_D2 as a
sub-register of Q0_Q1.
This will eventually enable the coalescer to handle copies of skewed
sub-registers.
llvm-svn: 156587
Prioritize the instruction that comes closest to keeping pressure
under the target's limit. Then prioritize instructions that avoid
increasing the max pressure in the scheduled region. The max pressure
heuristic is a tad aggressive. Later I'll fix it to consider the
unscheduled pressure as well.
WIP: This is mostly functional but untested and not likely to do much good yet.
llvm-svn: 156574
Added getMaxExcessUpward/DownwardPressure. They somewhat abuse the
tracker by speculatively handling an instruction out of order. But it
is convenient for now. In the future, we will cache each instruction's
pressure contribution to make this efficient.
llvm-svn: 156561
The .td files specify a tree of sub-registers. Store that tree as
ExplicitSubRegs lists in CodeGenRegister instead of extracting it from
the Record when needed.
llvm-svn: 156555
This patch will optimize the following cases:
  sub r1, r3 | sub r1, imm
  cmp r3, r1 or cmp r1, r3 | cmp r1, imm
  bge L1
TO
  subs r1, r3
  bge L1 or ble L1
If the branch instruction can use the flags set by the "sub", then we can
replace the "sub" with "subs" and eliminate the "cmp" instruction.
rdar: 10734411
llvm-svn: 156550
Fix the equivalence testing of phi instructions in
Instruction::isIdenticalToWhenDefined.
This manifested itself when inlining two calls to the same function. The
inlined function had a switch statement that returned one of a set of
global variables. Without this modification, the two phi instructions that
chose values from the branches of the switch instruction inlined from the
callee were considered equivalent and jump-threading replaced a load for the
first switch value with a phi selecting from the second switch, thereby
producing incorrect code.
This patch has been tested with "make check-all", "lnt runtest nt", and
llvm self-hosted, and on the original program that had this problem,
wireshark.
<rdar://problem/11025519>
llvm-svn: 156548
This mapping is for internal use by TableGen. It will not be exposed in
the generated files.
Unfortunately, the mapping is not completely well-defined. The X86 xmm
registers appear with multiple sub-register indices in the ymm
registers. This is because of the odd idempotent sub_sd and sub_ss
sub-register indices. I hope to be able to eliminate them entirely, so
we can require the sub-registers to form a tree.
For now, just place the canonical sub_xmm index in the mapping, and
ignore the idempotents.
llvm-svn: 156519
refactor code a bit to enable future changes to support run-time information
add support to compute allocation sizes at run-time if penalty > 1 (e.g., malloc(x), calloc(x, y), and VLAs)
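As a hedged illustration (my example, not from the patch), these are the
allocation kinds whose sizes are only known at run time:

  #include <cstdlib>

  void g(std::size_t x, std::size_t y) {
    void *a = std::malloc(x);     // size is x, a run-time value
    void *b = std::calloc(x, y);  // size is x * y, computed at run time
    std::free(b);
    std::free(a);
  }
  // C99-style VLAs such as "char buf[x];" fall in the same category.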
llvm-svn: 156515
For the Family 6 switch in sys::getHostCPUName, an unrecognized model was
reported as "i686". That's a really bad default since it means that new
CPUs will be treated as if they can only use 32-bit code. This just looks
at the cpuid extended feature flag for 64 bit support, and if that is set,
it uses a default x86-64 cpu. Similar logic is already used for the Family
15 code. <rdar://problem/11314502>
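For reference, a minimal sketch of the check (an assumed shape, not the actual
sys::getHostCPUName code): CPUID leaf 0x80000001 reports the extended feature
flags in EDX, and bit 29 is the long mode (64-bit) flag.

  #include <cpuid.h>  // GCC/Clang helper header; an assumption on my part

  static bool hasX86_64Support() {
    unsigned EAX, EBX, ECX, EDX;
    if (!__get_cpuid(0x80000001, &EAX, &EBX, &ECX, &EDX))
      return false;
    return (EDX >> 29) & 1;  // CPUID.80000001H:EDX bit 29 = long mode
  }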
llvm-svn: 156486
This lets you save the textual representation of the LLVM IR to a file.
Before this patch it could only be printed to STDERR from llvm-c.
Patch by Carlo Kok!
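Assuming the new entry point is LLVMPrintModuleToFile (the message does not
name it, so treat this as a sketch), usage looks roughly like:

  #include <llvm-c/Core.h>
  #include <stdio.h>

  void dumpModule(LLVMModuleRef M) {
    char *Err = 0;
    /* Assumed to return nonzero on failure and fill in Err. */
    if (LLVMPrintModuleToFile(M, "out.ll", &Err)) {
      fprintf(stderr, "write failed: %s\n", Err);
      LLVMDisposeMessage(Err);
    }
  }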
llvm-svn: 156479
My previous change to install llvm-config-host for cross-builds resulted
in that file being installed even when the normal llvm-config was not
installed, e.g., when building the install-clang target. Daniel suggested
this alternative, which solves the immediate problem and also avoids the gunk
in the top-level makefile.
llvm-svn: 156448
When a combine twiddles an extract_vector, care should be taken to preserve
the type of the index operand. No luck extracting a reasonable testcase,
unfortunately.
rdar://11391009
llvm-svn: 156419
output. Peter Collingbourne also reports that it is showing up in
$(llvm-config --cflags).
Revert this for now since I don't know enough cmake to fix it properly.
This reverts commit 18efed7adc79c1970f307bb5b015d199012ba872.
llvm-svn: 156392
- Currently this leaves us with less build system support (e.g., installing man pages) for the docs than is desired. I'm working on fixing this, but it may take a while. If someone finds this particularly egregious let me know and I will prioritize it.
llvm-svn: 156389
r145222 "lit/TestRunner.py: [Win32] Introduce WinWaitReleased(f), to wait for file handles to be released by children."
r145223 "lit/TestRunner.py: Use RemoveForce()."
r145381 "lit/TestRunner.py: Try to catch ERROR_FILE_NOT_FOUND, too."
r152916 "lit/TestRunner.py: [Win32] Check all opened_files[] released, rather than (obsoleted) written_files[]."
r153172 "lit/TestRunner.py: [Win32] Rework WinWaitReleased() again! "win32file" from Python Win32 Extensions."
llvm-svn: 156381
The reassociation pass would replace the operands of expressions that have
only one use with undef, and generate a new expression for the original
without using RAUW to update the original.
Thus any copies of the original expression held in a vector may end up
referring to some bogus value - and using a ValueHandle won't help since there
is no RAUW. There is already a mechanism for getting the effect of recursion
non-recursively: adding the value to be recursed on to RedoInsts. But it wasn't
being used systematically. Have various places where recursion had snuck in at
some point use the RedoInsts mechanism instead. Fixes PR12169.
llvm-svn: 156379
Added new case-range oriented methods for adding/removing cases in
SwitchInst. After this patch, cases are internally represented as
ConstantArrays instead of ConstantInts; externally, cases are wrapped in the
ConstantRangesSet object.
The old methods of SwitchInst still work, but are marked as deprecated. So at
this stage there are no side effects, except that I added support for case
ranges in BitcodeReader/Writer; of course, a test for Bitcode is also added.
The old "switch" format is also supported.
llvm-svn: 156374
At least some of them:
%vreg1:sub_16bit = COPY %vreg2:sub_16bit; GR64:%vreg1, GR32: %vreg2
Previously, we couldn't figure out that the above copy could be
eliminated by coalescing %vreg2 with %vreg1:sub_32bit.
The new getCommonSuperRegClass() hook makes it possible.
This is not very useful yet since the unmodified part of the destination
register usually interferes with the source register. The coalescer
needs to understand sub-register interference checking first.
llvm-svn: 156334
The getPointerRegClass() hook can return register classes that depend on
the calling convention of the current function (ptr_rc_tailcall).
So far, we have been able to infer the calling convention from the
subtarget alone, but as we add support for multiple calling conventions
per target, that no longer works.
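A hedged sketch of the resulting hook shape (the helper and register class
names below are hypothetical, not from the patch):

  // The hook now receives the function, so the calling convention is
  // available when choosing the class.
  const TargetRegisterClass *
  MyTargetRegisterInfo::getPointerRegClass(const MachineFunction &MF,
                                           unsigned Kind) const {
    // ptr_rc_tailcall asks for the class that is legal at a tail-call site,
    // which depends on MF's calling convention, not on the subtarget alone.
    if (Kind == 1)  // hypothetical encoding of ptr_rc_tailcall
      return getTailCallPointerRegClass(MF);  // hypothetical helper
    return &MyTarget::GPRRegClass;            // hypothetical default
  }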
Patch by Yiannis Tsiouris!
llvm-svn: 156328
Add optional library support to the llvm-build tool:
- Add new command line parameter to llvm-build: "--enable-optional-libraries"
- Add handling of the new llvm-build library type "OptionalLibrary"
- Update CMake and automake build systems to pass correct flags to llvm-build
  based on configuration
Patch by Dan Malea!
llvm-svn: 156319
This function is a generalization of getMatchingSuperRegClass() to the
symmetric case where both sides are using a sub-register index. It will
find a super-register class and sub-register indexes that make this
diagram commute:
            PreA
  SuperRC ----------> RCA
     |                 |
     |                 |
 PreB|                 |SubA
     |                 |
     |                 |
     V                 V
    RCB ----------> SubRC
            SubB
This can be used to coalesce copies like:
%vreg1:sub16 = COPY %vreg2:sub16; GR64:%vreg1, GR32: %vreg2
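A hedged sketch of querying the new hook for that copy (the signature is
inferred from the description above, not quoted from the patch):

  unsigned PreA, PreB;
  const TargetRegisterClass *SuperRC =
      TRI->getCommonSuperRegClass(&X86::GR64RegClass, X86::sub_16bit,
                                  &X86::GR32RegClass, X86::sub_16bit,
                                  PreA, PreB);
  // On success, SuperRC plus the PreA/PreB indices make the diagram commute:
  // %vreg1 and %vreg2 can be rewritten as sub-registers of a common SuperRC
  // register.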
llvm-svn: 156317
This patch will optimize -(x != 0) on X86
FROM
  cmpl $0x01, %edi
  sbbl %eax, %eax
  notl %eax
TO
  negl %edi
  sbbl %eax, %eax
In order to generate negl, I added patterns in Target/X86/X86InstrCompiler.td:
def : Pat<(X86sub_flag 0, GR32:$src), (NEG32r GR32:$src)>;
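For reference, a source-level form of the pattern (my example, not from the
patch):

  int negOfNonZero(int x) {
    return -(x != 0);  // yields 0 or -1; now selected as negl + sbbl
  }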
rdar: 10961709
llvm-svn: 156312
Previously, if an instruction definition was missing the mnemonic,
the next line would just assert(). Issue a real diagnostic instead.
llvm-svn: 156263
The primitive conservative heuristic seems to give a slight overall
improvement while not regressing stuff. Make it available to wider
testing. If you notice any speed regressions (or significant code
size regressions) let me know!
llvm-svn: 156258
- Just use sys::Process::GetRandomNumber instead of having two poor
implementations.
- This is ~70 times (!) faster on my OS X machine.
llvm-svn: 156238
This came up when a change in block placement formed a cmov and slowed down a
hot loop by 50%:
ucomisd (%rdi), %xmm0
cmovbel %edx, %esi
cmov is a really bad choice in this context because it doesn't get branch
prediction. If we emit it as a branch, an out-of-order CPU can do a better job
(if the branch is predicted right) and avoid waiting for the slow load+compare
instruction to finish. Of course it won't help if the branch is unpredictable,
but those are really rare in practice.
This patch uses a dumb conservative heuristic, it turns all cmovs that have one
use and a direct memory operand into branches. cmovs usually save some code
size, so we disable the transform in -Os mode. In-Order architectures are
unlikely to benefit as well, those are included in the
"predictableSelectIsExpensive" flag.
It would be better to reuse branch probability info here, but BPI doesn't
support select instructions currently. It would make sense to use the same
heuristics as the if-converter pass, which does the opposite direction of this
transform.
Test suite shows a small improvement here and there on corei7-level machines,
but the actual results depend a lot on the used microarchitecture. The
transformation is currently disabled by default and available by passing the
-enable-cgp-select2branch flag to the code generator.
Thanks to Chandler for the initial test case, and to Evan Cheng for providing
me with comments and test-suite numbers that were more stable than mine :)
llvm-svn: 156234
This will be used to determine whether it's profitable to turn a select into a
branch when the branch is likely to be predicted.
Currently enabled for everything but Atom on X86 and Cortex-A9 devices on ARM.
I'm not entirely happy with the name of this flag, suggestions welcome ;)
llvm-svn: 156233
This is still a topological ordering such that every register class gets
a smaller enum value than its sub-classes.
Placing the smaller spill sizes first makes a difference for the
super-register class bit masks. When looking for a super-register class,
we usually want the smallest possible kind of super-register. That is
now available as the first bit set in the bit mask.
llvm-svn: 156222
We want the representative register class to contain the largest
super-registers available. This makes the function less sensitive to the
register class numbering.
llvm-svn: 156220
In file included from ../lib/Target/NVPTX/VectorElementize.cpp:53:
../lib/Target/NVPTX/NVPTX.h:44:3: warning: default label in switch which covers all enumeration values [-Wcovered-switch-default]
default: assert(0 && "Unknown condition code");
^
1 warning generated.
The prevailing pattern in LLVM is to not use a default label, and instead to
use llvm_unreachable to denote that the switch in fact covers all return paths
from the function.
llvm-svn: 156209
Add a new Region::block_iterator which actually iterates over the basic
blocks of the region.
The old iterator, now called 'block_node_iterator', iterates over
RegionNodes which contain a single basic block. This works well with the
GraphTraits-based iterator design; however, most users actually want an
iterator over the BasicBlocks inside these RegionNodes. Now the
'block_iterator' is a wrapper which exposes exactly this interface.
Internally it uses the block_node_iterator to walk all nodes which are
single basic blocks, but transparently unwraps the basic block to make
user code simpler.
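A hedged usage sketch (the begin/end spellings are assumed to follow the
usual LLVM convention):

  // Visit the basic blocks of region R directly, with no RegionNode
  // unwrapping.
  for (Region::block_iterator I = R->block_begin(), E = R->block_end();
       I != E; ++I) {
    BasicBlock *BB = *I;
    // ... inspect or transform BB ...
  }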
While this patch is a bit of a wash, most of the updates are to internal
users, not external users of the RegionInfo. I have an accompanying
patch to Polly that is a strict simplification of every user of this
interface, and I'm working on a pass that also wants the same simplified
interface.
This patch alone should have no functional impact.
llvm-svn: 156202
The new target machines are:
nvptx (old ptx32) => 32-bit PTX
nvptx64 (old ptx64) => 64-bit PTX
The sources are based on the internal NVIDIA NVPTX back-end, and
contain more functionality than the existing PTX back-end provides.
NV_CONTRIB
llvm-svn: 156196
Factor the computation of input and output sets into a public interface
of the CodeExtractor utility. This allows speculatively computing input
and output sets to measure the likely size impact of the code
extraction.
These sets cannot be reused sadly -- we mutate the function prior to
forming the final sets used by the actual extraction.
The interface has been revamped slightly to make it easier to use
correctly by making the interface const and sinking the computation of
the number of exit blocks into the full extraction function and away
from the rest of this logic which just computed two output parameters.
llvm-svn: 156168
Rework the code extraction logic and expose it as a utility class rather
than as free function wrappers.
The simple free-function interface works well for the bugpoint-specific
pass's uses of code extraction, but in an upcoming patch for more
advanced code extraction, they simply don't expose a rich enough
interface. I need to expose various stages of the process of doing the
code extraction and query information to decide whether or not to
actually complete the extraction or give up.
Rather than build up a new predicate model and pass that into these
functions, just take the class that was actually implementing the
functions and lift it up into a proper interface that can be used to
perform code extraction. The interface is cleaned up and re-documented
to work better in a header. It also is now setup to accept the blocks to
be extracted in the constructor rather than in a method.
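A hedged sketch of driving the revamped interface (method names beyond the
constructor-takes-blocks change are assumed):

  #include "llvm/Transforms/Utils/CodeExtractor.h"

  // Blocks are handed to the constructor; eligibility is a const query.
  CodeExtractor CE(BlocksToExtract, DT);  // DT may be a null DominatorTree*
  if (CE.isEligible()) {
    Function *Outlined = CE.extractCodeRegion();
    // ... use or cost-model the outlined function ...
  }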
In passing this essentially reverts my previous commit here exposing
a block-level query for eligibility of extraction. That is no longer
necessary with the more rich interface as clients can query the
extraction object for eligibility directly. This will reduce the number
of walks of the input basic block sequence by quite a bit which is
useful if this enters the normal optimization pipeline.
llvm-svn: 156163
This moves the logic for selecting a TLS model to a single place,
instead of the previous three (ARM, Mips, and X86, which already uses
this function).
llvm-svn: 156162
This manually enumerated list of super-register classes has been
superseded by the automatically computed super-register class masks
available through SuperRegClassIterator.
llvm-svn: 156151
The masks returned by SuperRegClassIterator are computed automatically
by TableGen. This is better than depending on the manually specified
SuperRegClasses.
llvm-svn: 156147
This iterator class provides a more abstract interface to the (Idx,
Mask) lists of super-registers for a register class. The layout of the
tables shouldn't be exposed to clients.
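A hedged usage sketch (member names assumed from the description):

  // Walk the (Idx, Mask) pairs for register class RC. For each sub-register
  // index Idx, Mask covers the classes whose Idx sub-register lands in RC.
  for (SuperRegClassIterator SRI(RC, TRI); SRI.isValid(); ++SRI) {
    unsigned Idx = SRI.getSubReg();
    const uint32_t *Mask = SRI.getMask();
    // ... match candidate super-register classes against Mask ...
  }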
llvm-svn: 156144
There are some minor behavior changes with this, but nothing I have seen
evidence of in
the wild or expect to be meaningful. The real goal is unifying our logic
and simplifying the interfaces. A summary of the changes follows:
- Make 'callIsSmall' actually accept a callsite so it can handle
intrinsics, and simplify callers appropriately.
- Nuke a completely bogus declaration of 'callIsSmall' that was still
lurking in InlineCost.h... No idea how this got missed.
- Teach the 'isInstructionFree' about the various more intelligent
'free' heuristics that got added to the inline cost analysis during
review and testing. This mostly surrounds int->ptr and ptr->int casts.
- Switch most of the interesting parts of the inline cost analysis that
were essentially computing 'is this instruction free?' to use the code
metrics routine instead. This way we won't keep duplicating logic.
All of this is motivated by the desire to allow other passes to compute
a roughly equivalent 'cost' metric for a particular basic block as the
inline cost analysis. Sadly, re-using the same analysis for both is
really messy because only the actual inline cost analysis is ever going
to go to the contortions required for simplification, SROA analysis,
etc.
llvm-svn: 156140
Add a vector-like container that preserves insertion order, using
a FoldingSet underneath and with a largely compatible
interface to that of FoldingSet. This can be used anywhere a FoldingSet
would be natural, but iteration order is significant. The initial
intended use case is in Clang's template specialization lists to
preserve instantiation order iteration.
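Assuming the class is spelled FoldingSetVector (my assumption; the message
does not name it), a minimal usage sketch:

  #include "llvm/ADT/FoldingSet.h"
  using namespace llvm;

  // A minimal profiled node type (my example).
  struct IntNode : FoldingSetNode {
    int Value;
    explicit IntNode(int V) : Value(V) {}
    void Profile(FoldingSetNodeID &ID) const { ID.AddInteger(Value); }
  };

  void demo() {
    FoldingSetVector<IntNode> Set;  // uniqued like a FoldingSet
    IntNode A(42), B(7);
    Set.GetOrInsertNode(&A);
    Set.GetOrInsertNode(&B);
    // Iterating begin()..end() visits A then B: insertion order, not
    // hash order.
  }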
llvm-svn: 156131
This is a pointer into one of the tables used by
getMatchingSuperRegClass(). It makes it possible to use a shared
implementation of that function.
llvm-svn: 156121
The RC->getSubClassMask() pointer now points to a sequence of register
class bit masks. The first bit mask is the normal sub-class mask. The
following masks are super-reg class masks used by
getMatchingSuperRegClass().
llvm-svn: 156120
for the assembler and disassembler, which were not being set/read correctly
for offsets greater than 22 bits in some cases.
Changes to lib/Target/ARM/ARMAsmBackend.cpp from Gideon Myles!
llvm-svn: 156118
Hoist the landing pad check out of the code extraction into a public
interface. Also clean it up and apply it more
consistently such that we check for landing pads *anywhere* in the
extracted code, not just in single-block extraction.
This will be used to guide decisions in passes that are planning to
eventually perform a round of code extraction.
llvm-svn: 156114