the ones that get or set the frame index for the $gp save slot.
Remove the piece of code in MipsFunctionInfo::getGlobalBaseReg() which returns
GP. This function should always return a virtual register.
llvm-svn: 156695
- Stop creating stack frame objects needed for saving $gp.
- Insert a node that copies the global pointer register to register $gp
before the call node. This will ensure $gp is valid at the entry of the
called function.
llvm-svn: 156692
- Stop emitting instructions needed to initialize the global pointer register.
- Stop emitting .cprestore directive.
- Do not take into account the $gp save slot when computing stack size.
llvm-svn: 156691
- Remove code which lowers pseudo SETGP01.
- Fix LowerSETGP01. The first two of the three instructions that are emitted to
initialize the global pointer register now use register $2.
- Stop emitting .cpload directive.
llvm-svn: 156689
pointer register.
This is the first in a series of patches that clean up the way the global
pointer register is used. The patches will make the following improvements:
- Make $gp an allocatable temporary register rather than reserving it.
- Use a virtual register as the global pointer register and let the register
allocator decide which register to assign to it or whether spill/reloads are
needed.
- Make sure $gp is valid at the entry of a called function, which is necessary
for functions using lazy binding.
- Remove the need for emitting .cprestore and .cpload directives.
llvm-svn: 156671
Don't compute the SuperRegs list until the sub-register graph is
completely finished. This guarantees that the list of super-registers is
properly topologically ordered, and has no duplicates.
llvm-svn: 156629
This patch will optimize the following cases:
sub r1, r3 | sub r1, imm
cmp r3, r1 or cmp r1, r3 | cmp r1, imm
bge L1
TO
subs r1, r3
bge L1 or ble L1
If the branch instruction can use the flags set by "sub", then we can replace
"sub" with "subs" and eliminate the "cmp" instruction, as sketched below.
rdar: 10734411
llvm-svn: 156599
The sub-registers explicitly listed in SubRegs in the .td files form a
tree. In a complicated register bank, it is possible to have
sub-register relationships across sub-trees. For example, the ARM NEON
double vector Q0_Q1 is a tree:
Q0_Q1 = [Q0, Q1], Q0 = [D0, D1], Q1 = [D2, D3]
But we also define the DPair register D1_D2 = [D1, D2] which is fully
contained in Q0_Q1.
This patch teaches TableGen to find such sub-register relationships, and
assign sub-register indices to them. In the example, TableGen will
create a dsub_1_dsub_2 sub-register index, and add D1_D2 as a
sub-register of Q0_Q1.
This will eventually enable the coalescer to handle copies of skewed
sub-registers.
llvm-svn: 156587
Prioritize the instruction that comes closest to keeping pressure
under the target's limit. Then prioritize instructions that avoid
increasing the max pressure in the scheduled region. The max pressure
heuristic is a tad aggressive. Later I'll fix it to consider the
unscheduled pressure as well.
WIP: This is mostly functional but untested and not likely to do much good yet.
llvm-svn: 156574
Added getMaxExcessUpward/DownwardPressure. They somewhat abuse the
tracker by speculatively handling an instruction out of order. But it
is convenient for now. In the future, we will cache each instruction's
pressure contribution to make this efficient.
llvm-svn: 156561
The .td files specify a tree of sub-registers. Store that tree as
ExplicitSubRegs lists in CodeGenRegister instead of extracting it from
the Record when needed.
llvm-svn: 156555
This patch will optimize the following cases:
sub r1, r3 | sub r1, imm
cmp r3, r1 or cmp r1, r3 | cmp r1, imm
bge L1
TO
subs r1, r3
bge L1 or ble L1
If the branch instruction can use the flags set by "sub", then we can replace
"sub" with "subs" and eliminate the "cmp" instruction.
rdar: 10734411
llvm-svn: 156550
Instruction::IsIdenticalToWhenDefined.
This manifested itself when inlining two calls to the same function. The
inlined function had a switch statement that returned one of a set of
global variables. Without this modification, the two phi instructions that
chose values from the branches of the switch instruction inlined from the
callee were considered equivalent and jump-threading replaced a load for the
first switch value with a phi selecting from the second switch, thereby
producing incorrect code.
This patch has been tested with "make check-all", "lnt runtest nt", and an
LLVM self-host, as well as on the original program that exhibited the problem,
wireshark.
<rdar://problem/11025519>
llvm-svn: 156548
This mapping is for internal use by TableGen. It will not be exposed in
the generated files.
Unfortunately, the mapping is not completely well-defined. The X86 xmm
registers appear with multiple sub-register indices in the ymm
registers. This is because of the odd idempotent sub_sd and sub_ss
sub-register indices. I hope to be able to eliminate them entirely, so
we can require the sub-registers to form a tree.
For now, just place the canonical sub_xmm index in the mapping, and
ignore the idempotents.
llvm-svn: 156519
- Refactor the code a bit to enable future changes that support run-time
information.
- Add support for computing allocation sizes at run time if penalty > 1 (e.g.,
malloc(x), calloc(x, y), and VLAs); see the sketch below.
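A hedged sketch of a newly handled case (illustrative only, not from the
patch):
#include <stdlib.h>

// The allocation size is only known at run time; with penalty > 1 the
// pass can now emit a check against the computed size.
void f(size_t n) {
  char *p = (char *)malloc(n);
  p[n] = 0;  // one past the end: the instrumented check catches this
}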
llvm-svn: 156515
For the Family 6 switch in sys::getHostCPUName, an unrecognized model was
reported as "i686". That's a really bad default since it means that new
CPUs will be treated as if they can only use 32-bit code. This just looks
at the CPUID extended feature flag for 64-bit support, and if that is set,
it uses a default x86-64 cpu. Similar logic is already used for the Family
15 code. <rdar://problem/11314502>
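A rough sketch of the check involved (assuming the usual encoding: bit 29 of
EDX from CPUID leaf 0x80000001 indicates 64-bit capability):
unsigned EAX, EBX, ECX, EDX;
__asm__("cpuid"
        : "=a"(EAX), "=b"(EBX), "=c"(ECX), "=d"(EDX)
        : "a"(0x80000001));
// If the long-mode bit is set, fall back to a generic x86-64 CPU
// instead of "i686".
bool Has64Bit = (EDX >> 29) & 1;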
llvm-svn: 156486
This lets you save the textual representation of the LLVM IR to a file.
Before this patch it could only be printed to STDERR from llvm-c.
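A minimal usage sketch, assuming the new entry point is LLVMPrintModuleToFile
and that Mod is an existing LLVMModuleRef:
#include <llvm-c/Core.h>
#include <stdio.h>

char *Err = NULL;
if (LLVMPrintModuleToFile(Mod, "out.ll", &Err)) {  // nonzero on failure
  fprintf(stderr, "error: %s\n", Err);
  LLVMDisposeMessage(Err);
}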
Patch by Carlo Kok!
llvm-svn: 156479
My previous change to install llvm-config-host for cross-builds resulted
in that file being installed even when the normal llvm-config was not
installed, e.g., when building the install-clang target. Daniel suggested
this alternative, which solves the immediate problem and also avoids the gunk
in the top-level makefile.
llvm-svn: 156448
When a combine twiddles an extract_vector, care should be taken to preserve
the type of the index operand. No luck extracting a reasonable testcase,
unfortunately.
rdar://11391009
llvm-svn: 156419
output. Peter Collingbourne also reports that it is showing up in
$(llvm-config --cflags).
Revert this for now since I don't know enough cmake to fix it properly.
This reverts commit 18efed7adc79c1970f307bb5b015d199012ba872.
llvm-svn: 156392
- Currently this leaves us with less build system support (e.g., installing man pages) for the docs than is desired. I'm working on fixing this, but it may take a while. If someone finds this particularly egregious let me know and I will prioritize it.
llvm-svn: 156389
r145222 "lit/TestRunner.py: [Win32] Introduce WinWaitReleased(f), to wait for file handles to be released by children."
r145223 "lit/TestRunner.py: Use RemoveForce()."
r145381 "lit/TestRunner.py: Try to catch ERROR_FILE_NOT_FOUND, too."
r152916 "lit/TestRunner.py: [Win32] Check all opened_files[] released, rather than (obsoleted) written_files[]."
r153172 "lit/TestRunner.py: [Win32] Rework WinWaitReleased() again! "win32file" from Python Win32 Extensions."
llvm-svn: 156381
replace the operands of expressions with only one use with undef and generate
a new expression for the original without using RAUW to update the original.
Thus any copies of the original expression held in a vector may end up
referring to some bogus value - and using a ValueHandle won't help since there
is no RAUW. There is already a mechanism for getting the effect of recursion
non-recursively: adding the value to be recursed on to RedoInsts. But it wasn't
being used systematically. Have various places where recursion had snuck in at
some point use the RedoInsts mechanism instead. Fixes PR12169.
llvm-svn: 156379
Added new case-range oriented methods for adding and removing cases in
SwitchInst. After this patch, cases are internally represented as
ConstantArrays instead of ConstantInts, and externally cases are wrapped in
the ConstantRangesSet object.
The old SwitchInst methods still work but are marked as deprecated. At this
stage there are no functional changes beyond the added support for case ranges
in BitcodeReader/Writer; a Bitcode test is included, and the old "switch"
format is still supported.
llvm-svn: 156374
At least some of them:
%vreg1:sub_16bit = COPY %vreg2:sub_16bit; GR64:%vreg1, GR32: %vreg2
Previously, we couldn't figure out that the above copy could be
eliminated by coalescing %vreg2 with %vreg1:sub_32bit.
The new getCommonSuperRegClass() hook makes it possible.
This is not very useful yet since the unmodified part of the destination
register usually interferes with the source register. The coalescer
needs to understand sub-register interference checking first.
llvm-svn: 156334
The getPointerRegClass() hook can return register classes that depend on
the calling convention of the current function (ptr_rc_tailcall).
So far, we have been able to infer the calling convention from the
subtarget alone, but as we add support for multiple calling conventions
per target, that no longer works.
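A sketch of the resulting hook shape (the exact signature is an assumption
based on the description above):
// Passing the MachineFunction lets the target inspect the current
// calling convention, e.g. to return ptr_rc_tailcall when appropriate.
virtual const TargetRegisterClass *
getPointerRegClass(const MachineFunction &MF, unsigned Kind = 0) const;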
Patch by Yiannis Tsiouris!
llvm-svn: 156328
optional library support to the llvm-build tool:
- Add a new command line parameter to llvm-build: "--enable-optional-libraries"
- Add handling of the new llvm-build library type "OptionalLibrary"
- Update the CMake and automake build systems to pass the correct flags to
llvm-build based on the configuration
Patch by Dan Malea!
llvm-svn: 156319
This function is a generalization of getMatchingSuperRegClass() to the
symmetric case where both sides are using a sub-register index. It will
find a super-register class and sub-register indexes that make this
diagram commute:
            PreA
  SuperRC ----------> RCA
     |                 |
     |                 |
 PreB|                 |SubA
     |                 |
     |                 |
     V                 V
    RCB ----------> SubRC
            SubB
This can be used to coalesce copies like:
%vreg1:sub16 = COPY %vreg2:sub16; GR64:%vreg1, GR32: %vreg2
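A sketch of how the coalescer might query the new hook (names other than
getCommonSuperRegClass are hypothetical):
unsigned PreA, PreB;  // filled in with the Pre* indices from the diagram
if (const TargetRegisterClass *SuperRC =
        TRI->getCommonSuperRegClass(RCA, SubA, RCB, SubB, PreA, PreB)) {
  // Both values fit in a single SuperRC register; the copy can be
  // eliminated by rewriting the two vregs as its PreA/PreB sub-registers.
}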
llvm-svn: 156317
This patch will optimize -(x != 0) on X86
FROM
cmpl $0x01,%edi
sbbl %eax,%eax
notl %eax
TO
negl %edi
sbbl %eax, %eax
In order to generate negl, I added patterns in Target/X86/X86InstrCompiler.td:
def : Pat<(X86sub_flag 0, GR32:$src), (NEG32r GR32:$src)>;
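A source-level sketch of the idiom this targets (hypothetical example):
// -(x != 0) computes 0 or -1 (an all-ones mask):
int mask(int x) { return -(x != 0); }  // now: negl %edi; sbbl %eax, %eax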
rdar: 10961709
llvm-svn: 156312
Previously, if an instruction definition was missing the mnemonic,
the next line would just assert(). Issue a real diagnostic instead.
llvm-svn: 156263
The primitive conservative heuristic seems to give a slight overall
improvement while not regressing stuff. Make it available to wider
testing. If you notice any speed regressions (or significant code
size regressions) let me know!
llvm-svn: 156258
- Just use sys::Process::GetRandomNumber instead of having two poor
implementations.
- This is ~70 times (!) faster on my OS X machine.
llvm-svn: 156238
This came up when a change in block placement formed a cmov and slowed down a
hot loop by 50%:
ucomisd (%rdi), %xmm0
cmovbel %edx, %esi
cmov is a really bad choice in this context because it doesn't get branch
prediction. If we emit it as a branch, an out-of-order CPU can do a better job
(if the branch is predicted right) and avoid waiting for the slow load+compare
instruction to finish. Of course it won't help if the branch is unpredictable,
but those are really rare in practice.
This patch uses a dumb conservative heuristic: it turns all cmovs that have one
use and a direct memory operand into branches. cmovs usually save some code
size, so we disable the transform in -Os mode. In-Order architectures are
unlikely to benefit as well, those are included in the
"predictableSelectIsExpensive" flag.
It would be better to reuse branch probability info here, but BPI doesn't
support select instructions currently. It would make sense to use the same
heuristics as the if-converter pass, which does the opposite direction of this
transform.
Test suite shows a small improvement here and there on corei7-level machines,
but the actual results depend a lot on the used microarchitecture. The
transformation is currently disabled by default and available by passing the
-enable-cgp-select2branch flag to the code generator.
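An illustrative source pattern (hypothetical, not the original test case):
// The compare feeding the select loads directly from memory, as in the
// ucomisd/cmovbe sequence above; the transform turns it into a branch.
int pick(const double *p, double x, int a, int b) {
  return x > *p ? a : b;
}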
Thanks to Chandler for the initial test case, and to Evan Cheng for providing
me with comments and test-suite numbers that were more stable than mine :)
llvm-svn: 156234
This will be used to determine whether it's profitable to turn a select into a
branch when the branch is likely to be predicted.
Currently enabled for everything but Atom on X86 and Cortex-A9 devices on ARM.
I'm not entirely happy with the name of this flag, suggestions welcome ;)
llvm-svn: 156233
This is still a topological ordering such that every register class gets
a smaller enum value than its sub-classes.
Placing the smaller spill sizes first makes a difference for the
super-register class bit masks. When looking for a super-register class,
we usually want the smallest possible kind of super-register. That is
now available as the first bit set in the bit mask.
llvm-svn: 156222
We want the representative register class to contain the largest
super-registers available. This makes the function less sensitive to the
register class numbering.
llvm-svn: 156220
In file included from ../lib/Target/NVPTX/VectorElementize.cpp:53:
../lib/Target/NVPTX/NVPTX.h:44:3: warning: default label in switch which covers all enumeration values [-Wcovered-switch-default]
  default: assert(0 && "Unknown condition code");
  ^
1 warning generated.
The prevailing pattern in LLVM is to not use a default label, and instead to
use llvm_unreachable to denote that the switch in fact covers all return paths
from the function.
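The preferred shape, sketched (the enum and strings here are hypothetical):
switch (CC) {                    // no default label, so the compiler can
case NVPTXCC::EQ: return "eq";   // still warn about newly added enum
case NVPTXCC::NE: return "ne";   // values that are not handled here
}
llvm_unreachable("Unknown condition code");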
llvm-svn: 156209
add a new Region::block_iterator which actually iterates over the basic
blocks of the region.
The old iterator, now called 'block_node_iterator', iterates over
RegionNodes which contain a single basic block. This works well with the
GraphTraits-based iterator design, however most users actually want an
iterator over the BasicBlocks inside these RegionNodes. Now the
'block_iterator' is a wrapper which exposes exactly this interface.
Internally it uses the block_node_iterator to walk all nodes which are
single basic blocks, but transparently unwraps the basic block to make
user code simpler.
While this patch is a bit of a wash, most of the updates are to internal
users, not external users of the RegionInfo. I have an accompanying
patch to Polly that is a strict simplification of every user of this
interface, and I'm working on a pass that also wants the same simplified
interface.
This patch alone should have no functional impact.
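A usage sketch of the new iterator (R and visit are hypothetical):
// Iterate directly over BasicBlocks instead of single-block RegionNodes:
for (Region::block_iterator BI = R->block_begin(), BE = R->block_end();
     BI != BE; ++BI) {
  BasicBlock *BB = *BI;  // no manual unwrapping of the RegionNode needed
  visit(BB);
}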
llvm-svn: 156202
The new target machines are:
nvptx (old ptx32) => 32-bit PTX
nvptx64 (old ptx64) => 64-bit PTX
The sources are based on the internal NVIDIA NVPTX back-end and contain more
functionality than the current PTX back-end provides.
NV_CONTRIB
llvm-svn: 156196