more callee-saved registers and introduce copies. Only allow it if scheduling
a node above calls would end up lessening register pressure.
Call operands also have added ABI restrictions for register allocation, so be
extra careful with hoisting them above calls.
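A minimal standalone sketch of that rule, with every name invented for illustration (this is not the actual scheduler code):

    // Hypothetical illustration of the "may I hoist this node above a call?"
    // check described above.
    struct HoistCandidate {
      bool crosses_call;      // hoisting would move the node above a call site
      bool is_call_operand;   // call operands carry extra ABI register constraints
      int pressure_delta;     // estimated change in register pressure (< 0 is a win)
    };

    bool mayHoistAboveCall(const HoistCandidate &c) {
      if (!c.crosses_call)
        return true;                 // no call in the way, nothing extra to check
      if (c.is_call_operand)
        return false;                // keep ABI-constrained call operands below the call
      return c.pressure_delta < 0;   // otherwise only if pressure actually drops
    }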
rdar://9329627
llvm-svn: 130245
The size of the array may not be a multiple of the alignment of its elements if an alignment attribute is
specified in a typedef. Fixes rdar://8665729 & http://llvm.org/PR5637.
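For illustration only (this is not the test case from the commit), the situation is a typedef whose alignment attribute raises the element alignment without changing sizeof, so the array's total size is not a multiple of that alignment:

    // Hypothetical example: sizeof(PaddedChar) stays 1 but its alignment is 8,
    // so sizeof(buf) == 3 is not a multiple of the element alignment.
    typedef char PaddedChar __attribute__((aligned(8)));
    PaddedChar buf[3];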
llvm-svn: 130242
1. Only run the early (in the module pass pipeline) instcombine/simplifycfg
if the "unit at a time" passes they are cleaning up after actually run.
2. Move the "clean up after the unroller" pass to the very end of the
function-level pass pipeline. Loop unroll uses instsimplify now,
so it doesn't create a ton of trash. Moving instcombine later allows
it to clean up after opportunities are exposed by GVN, DSE, etc.
3. Introduce some phase ordering tests for things that are specifically
intended to be simplified by the full optimizer as a whole; a sketch of
that kind of test appears below.
This resolves PR2338, and is progress towards PR6627, which will be
generating code that looks similar to test2.
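A hypothetical example of such a phase ordering test (not one of the tests added here): the full pipeline should fold the loop below into "return 10", which takes the unroller, instcombine/instsimplify, and GVN working together rather than any single pass:

    // Illustrative only: after the full optimizer this should become "return 10".
    unsigned sum_first_four(void) {
      unsigned a[4] = {1, 2, 3, 4};
      unsigned s = 0;
      for (int i = 0; i < 4; ++i)   // unrolling exposes loads of known values
        s += a[i];                  // later passes fold the loads and adds away
      return s;
    }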
llvm-svn: 130241
member function, i.e. something of the form 'x.f' where 'f' is a non-static
member function. Diagnose this in the general case. Some of the new diagnostics
are probably worse than the old ones, but we now get this right much more
universally, and there's certainly room for improvement in the diagnostics.
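For context, the construct being diagnosed looks like this (my example, not one of the commit's test cases):

    // Hypothetical example: 'x.f' names a bound non-static member function,
    // which is only valid as the left-hand side of a call.
    struct X {
      void f();
    };

    void g(X x) {
      x.f();   // fine: the bound member function is called
      // x.f;  // ill-formed: a bound member function that is not being called
    }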
llvm-svn: 130239
when X has multiple uses. This is useful for exposing secondary optimizations,
but the X86 backend isn't ready for this when X has a single use. For example,
this can disable load folding.
This is inching towards resolving PR6627.
llvm-svn: 130238
This has two effects: 1. We never inflate to a larger register class than what
the sub-target can handle. 2. Completely unconstrained virtual registers get the
largest possible register class.
llvm-svn: 130229
The hook will be used by the register allocator when recomputing register
classes after removing constraints.
Thumb1 code doesn't allow anything larger than tGPR, and x86 needs to ensure
that the spill size doesn't change.
llvm-svn: 130228
This worked until now because the stars are aligned (i.e. the number of complex address elements is always 0 or 2+, and when it is 2+, at least two elements are accessed together).
llvm-svn: 130225
are defined as enumerations. Current bits include:
    eEmulateInstructionOptionAutoAdvancePC
    eEmulateInstructionOptionIgnoreConditions
Modified the EmulateInstruction class to have a few more pure virtuals that
can help clients understand what kinds of instructions the emulator can handle:
    virtual bool
    SupportsEmulatingInstructionsOfType (InstructionType inst_type) = 0;
Where instruction types are defined as:
    //------------------------------------------------------------------
    /// Instruction types
    //------------------------------------------------------------------
    typedef enum InstructionType
    {
        eInstructionTypeAny,              // Support for any instructions at all (at least one)
        eInstructionTypePrologueEpilogue, // All prologue and epilogue instructions that push and pop register values and modify sp/fp
        eInstructionTypePCModifying,      // Any instruction that modifies the program counter/instruction pointer
        eInstructionTypeAll               // All instructions of any kind
    } InstructionType;
This allows us to tell what an emulator can do and also allows us to request
these abilities when we are finding the plug-in interface.
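For example, a client could gate its use of an emulator on one of these queries. This is a hypothetical sketch, not code from this patch, and it assumes the EmulateInstruction declarations are in scope:

    // Hypothetical helper: decide whether a given emulator instance is usable
    // for prologue/epilogue (unwind) work. Only the queried method and the
    // enumerator come from the text above; everything else is invented.
    static bool
    CanUseForUnwindWork (EmulateInstruction &emulator)
    {
        return emulator.SupportsEmulatingInstructionsOfType (eInstructionTypePrologueEpilogue);
    }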
Added the ability for an EmulateInstruction class to get the register names
for any registers that are part of the emulation. This helps with being able
to dump and log effectively.
The UnwindAssembly class now stores the architecture it was created with in
case it is needed later in the unwinding process.
Added a function that can tell us DWARF register names for ARM that goes
along with the source/Utility/ARM_DWARF_Registers.h file:
source/Utility/ARM_DWARF_Registers.c
Took some of the plug-ins out of the lldb_private namespace.
llvm-svn: 130189
Add support for switch and indirectbr edges. This works by densely numbering
all blocks which have such terminators, and then separately numbering the
possible successors. Each predecessor writes down its number; the successor
knows its own number (as a ConstantInt) and passes that, along with a pointer
to the number the predecessor wrote down, to the runtime, which looks up the
counter in a per-function table.
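A rough sketch of the runtime-side lookup that scheme implies (the names and table layout are illustrative, not the actual profiling runtime):

    #include <cstdint>

    // Hypothetical per-function table: one counter per (predecessor, successor)
    // pair for blocks ending in a switch or indirectbr.
    struct EdgeTable {
      unsigned num_succs;   // number of densely numbered successors
      uint64_t *counters;   // num_preds * num_succs counters, one row per predecessor
    };

    // The predecessor stores its dense number into *pred_slot before branching;
    // the successor then reports its own number together with that pointer.
    void edge_taken(EdgeTable *table, const unsigned *pred_slot, unsigned succ_id) {
      table->counters[*pred_slot * table->num_succs + succ_id] += 1;
    }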
Coverage data should now be functional, but I haven't tested it on anything
other than my 2-file synthetic test program for coverage.
llvm-svn: 130186
return it as a clobber. This allows GVN to do smart things.
Enhance GVN to be smart about the case when a small load is clobbered
by a larger overlapping load. In this case, forward the value. This
allows us to compile stuff like this:
int test(void *P) {
  int tmp = *(unsigned int*)P;
  return tmp+*((unsigned char*)P+1);
}
into:
_test:                                  ## @test
        movl    (%rdi), %ecx
        movzbl  %ch, %eax
        addl    %ecx, %eax
        ret
which has one load. We already handled the case where the smaller
load was from a must-aliased base pointer.
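Conceptually (my illustration, not text from the commit), the forwarding reuses the wider load's bits for the overlapping byte, which on a little-endian target like x86 amounts to:

    // Hypothetical source-level view of what the forwarding achieves: the byte
    // at P+1 is already present in 'wide', so no second load is needed.
    int test_forwarded(void *P) {
      unsigned wide = *(unsigned int *)P;    // the one remaining load
      unsigned byte1 = (wide >> 8) & 0xff;   // value of *((unsigned char*)P+1)
      return (int)wide + (int)byte1;
    }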
llvm-svn: 130180