- Overloading operator<< for raw_ostream and pointers is dangerous; it alters
the behavior of code that includes the header.
- Remove unused ID.
- Use LLVM's byte swapping helpers instead of a hand-coded version.
- Make ReadProfilingData work directly on a pointer.
No functionality change.
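As a hedged illustration (the helper and caller names here are made up; the
real helpers live in LLVM's Support headers), this is the kind of hand-coded
swap being replaced, plus a pointer-based reader in the spirit of the change:

    #include <cstdint>

    // The kind of hand-rolled swap the LLVM helpers replace.
    static uint32_t byteSwap32(uint32_t V) {
      return (V << 24) | ((V & 0xFF00u) << 8) |
             ((V >> 8) & 0xFF00u) | (V >> 24);
    }

    // Hypothetical reader working directly on a pointer, endian-corrected.
    static uint32_t readEntry(const uint32_t *P, bool SwapBytes) {
      uint32_t V = *P;
      return SwapBytes ? byteSwap32(V) : V;
    }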
llvm-svn: 162992
The assembly string for the VMOVPQIto64rr instruction incorrectly lacked the 'v'
prefix, resulting in mis-assembly of the vanilla movd instruction.
llvm-svn: 162963
X86 has no CMOV of vectors, so vector selects cannot be lowered that way. To
implement this efficiently, we broadcast the condition bit and use a NAND-OR
sequence to select between the two operands. This is the same sequence we use
for targets that don't have vector BLENDs (like SSE2).
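For illustration only (select128 is a hypothetical helper, not the backend's
lowering code), the sequence is the classic branchless bit-select, shown here
with SSE2 intrinsics:

    #include <emmintrin.h> // SSE2

    // Cond lanes must be all-ones or all-zeros (the broadcast condition
    // bit); then (Cond AND A) OR (NOT Cond AND B) selects per lane
    // without any vector CMOV or BLEND instruction.
    static __m128i select128(__m128i Cond, __m128i A, __m128i B) {
      return _mm_or_si128(_mm_and_si128(Cond, A),
                          _mm_andnot_si128(Cond, B));
    }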
rdar://12201387
llvm-svn: 162926
- Add 'UseSSEx' to keep legacy SSE insns from being selected when AVX is
enabled.
Because inter-mixing SSE and AVX instructions carries a penalty, we need to
prevent legacy SSE insns from being generated except when explicitly
requested through certain intrinsics. For patterns supported by both SSE
and AVX, we have so far forced the AVX insn to be tried first, relying on
AddedComplexity or position in the .td file. That is error-prone and
accidentally introduces bugs.
'UseSSEx' is false when AVX is turned on. For SSE insns inherited by AVX,
we need this predicate to force either VEX encoding or legacy SSE encoding
exclusively (a sketch follows the list below).
For insns not inherited by AVX, we still use the previous predicates,
i.e. 'HasSSEx'. So far, these insns fall into the following
categories:
* SSE insns with MMX operands
* SSE insns with GPR/MEM operands only (xFENCE, PREFETCH, CLFLUSH,
CRC, etc.)
* SSE4A insns.
* MMX insns.
* x87 insns added by SSE.
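As a rough C++ rendering of the predicate conditions (the real definitions
are TableGen predicates in the X86 .td files; X86Features is a made-up
stand-in for the subtarget):

    // Made-up stand-in for the subtarget feature bits.
    struct X86Features {
      bool SSE2, AVX;
      // HasSSE2-style predicate: true whenever SSE2 is available, even
      // under AVX, so a legacy-encoded pattern can win by accident.
      bool hasSSE2() const { return SSE2; }
      // UseSSE2-style predicate: true only when AVX is off, making
      // legacy SSE patterns unselectable once AVX is enabled.
      bool useSSE2() const { return SSE2 && !AVX; }
    };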
Two test cases are modified:
- test/CodeGen/X86/fast-isel-x86-64.ll
AVX code generation differs from SSE. 'vcvtsi2sdq' cannot be selected by
fast-isel due to its complicated pattern, so fast-isel falls back to
materializing the value from the constant pool.
- test/CodeGen/X86/widen_load-1.ll
AVX code generation differs from SSE after fixing the SSE/AVX
inter-mixing. Exec-domain fixing prefers 'vmovapd' over 'vmovaps'.
llvm-svn: 162919
[Tobias von Koch] What's happening here is that the CR6SET/CR6UNSET is breaking the chain of register copies glued to the function call (BL_SVR4 node). The scheduler then moves other instructions in between those and the function call, which isn't good!
Right. That's the case where there is no chain of register copies before the call, so InFlag == 0... Attached is a new revision of the patch which should fix this for good.
llvm-svn: 162916
The old PHI-updating code in loop-rotate was replaced with SSAUpdater a while
ago; it has no problems with complex PHIs. What had to be fixed was detecting
whether a loop was already rotated and updating dominators when multiple exits
were present.
This change increases overall code size a bit, mostly due to additional loop
unrolling opportunities. Passes test-suite and selfhost with -verify-dom-info.
Fixes PR7447.
Thanks to Andy for the input on the domtree updating code.
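For context, a sketch of the transformation itself (plain C++ with a
placeholder body; not LLVM API):

    // Before rotation: top-tested loop, the latch does not exit.
    void topTested(int N) {
      for (int I = 0; I != N; ++I) {
        // body
      }
    }

    // After rotation: an entry guard plus a bottom-tested loop, which
    // gives passes like unrolling a single exiting latch to work with.
    void rotated(int N) {
      int I = 0;
      if (I != N) {
        do {
          // body
          ++I;
        } while (I != N);
      }
    }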
llvm-svn: 162912
When a MachineInstr is constructed, its implicit operands are added
first; the explicit operands are then inserted before the implicits.
MCInstrDesc has operand flags, like early clobber and operand ties, that
apply to the explicit operands.
Don't look at those flags while the implicit operands are still occupying
the explicit operands' positions.
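A hedged sketch of the rule (types and names are hypothetical, not the
MachineInstr API):

    // Descriptor flags describe the explicit slots 0..NumExplicit-1.
    struct DescView {
      unsigned NumExplicit; // number of explicit operand descriptions
    };

    // An implicit operand temporarily sitting in an explicit slot during
    // construction must not pick up early-clobber or tied flags.
    static bool descFlagsApply(const DescView &Desc, unsigned Slot,
                               bool OperandIsImplicit) {
      return !OperandIsImplicit && Slot < Desc.NumExplicit;
    }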
llvm-svn: 162910
- The root cause is that target constant materialization in X86 fast-isel
creates PC-relative addressing, which may overflow the 32-bit range in
non-Small code models if the .rodata section is allocated too far from the
code segment in MCJIT (which uses the Large code model so far).
- Following similar logic, fix this in fast-isel by bailing out for
non-Small code models.
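The overflow in question is the reach of a signed 32-bit PC-relative
displacement; a small self-contained check (fitsInPCRel32 is illustrative,
not a JIT API):

    #include <cstdint>
    #include <limits>

    // A RIP-relative displacement is a signed 32-bit offset from the end
    // of the instruction, so it can only reach targets within +/-2 GiB.
    static bool fitsInPCRel32(uint64_t InsnEnd, uint64_t Target) {
      int64_t Delta = static_cast<int64_t>(Target - InsnEnd);
      return Delta >= std::numeric_limits<int32_t>::min() &&
             Delta <= std::numeric_limits<int32_t>::max();
    }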
llvm-svn: 162881
When there are multiple tied use-def pairs on an inline asm instruction,
the tied uses must appear in the same order as the defs.
It is possible to write an LLVM IR inline asm instruction that breaks
this constraint, but there is no reason for a front end to emit the
operands out of order.
The GNU inline asm syntax specifies tied operands as a single read/write
constraint, "+r", so out-of-order operands are not possible.
llvm-svn: 162878
Tombstones and full hash collisions are rare, so mark the "empty"
and "no collision" paths as likely. The bug in simplifycfg
that prevented the hints from being picked up during selfhost
was fixed recently :)
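A sketch of the shape of those hints using __builtin_expect, which is what
such likelihood macros typically expand to (the table layout here is
hypothetical):

    #define LIKELY(x) __builtin_expect(!!(x), 1)

    // Hypothetical open-addressed probe loop; EmptyKey/TombstoneKey are
    // sentinel values, Mask is NumBuckets-1 for a power-of-two table,
    // and at least one empty bucket is assumed so the probe terminates.
    static int *find(int *Buckets, unsigned Mask, unsigned Hash, int Key,
                     int EmptyKey, int TombstoneKey) {
      for (unsigned I = Hash;; ++I) {
        int *B = &Buckets[I & Mask];
        if (LIKELY(*B != TombstoneKey)) {  // tombstones are rare
          if (*B == Key)
            return B;                      // found it
          if (LIKELY(*B == EmptyKey))      // empty bucket ends the probe
            return nullptr;
        }
        // Full collision or tombstone: rare, keep probing.
      }
    }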
llvm-svn: 162874
For normal instructions, isTied() is set automatically by addOperand(),
based on MCInstrDesc, but inline asm has tied operands outside the
descriptor.
llvm-svn: 162869
Ordered memory operations are more constrained than volatile loads and
stores because they must be ordered with respect to all other memory
operations.
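In C++11 terms (an illustrative spin-wait, not code from the patch):

    #include <atomic>

    std::atomic<int> Ready;
    int Payload;

    // A monotonic (relaxed) load is ordered only against operations on
    // the same location; an acquire load orders all later memory
    // operations, so the Payload read below cannot be hoisted above it.
    int consume() {
      while (Ready.load(std::memory_order_acquire) == 0) {
        // spin
      }
      return Payload;
    }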
llvm-svn: 162861
It is technically allowed to move a normal load across a volatile load,
but probably not a good idea.
It is not allowed to move a load across an atomic load with
Ordering > Monotonic, and we model those with MOVolatile as well.
I recently removed the mayStore flag from atomic load instructions, so
they don't need a pseudo-opcode. This patch makes up for the difference.
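A small C++ illustration of the distinction (the globals and function are
hypothetical):

    #include <atomic>

    int Plain;
    volatile int Vol;
    std::atomic<int> Flag;

    int sample() {
      int A = Plain; // technically movable across the volatile load below
      int V = Vol;   // volatile: ordered against other volatiles only
      // Not movable across an atomic load ordered stronger than monotonic:
      int F = Flag.load(std::memory_order_acquire);
      int B = Plain; // must not float above the acquire load
      return A + V + F + B;
    }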
llvm-svn: 162857
This lets the user run the program from a different directory and still have the
.gcda files show up in the correct place.
<rdar://problem/12179524>
llvm-svn: 162855
We need to reserve space for the mandatory traceback fields,
though leaving them as zero is appropriate for now.
Although the ABI calls for these fields to be filled in fully, no
compiler on Linux currently does this, and GDB does not read these
fields. GDB uses the first word of zeroes during exception handling to
find the end of the function and the size field, allowing it to compute
the beginning of the function. DWARF information is used for everything
else. We need the extra 8 bytes of pad so the size field is found in
the right place.
As a comparison, GCC fills in a few of the fields -- language, number
of saved registers -- but ignores the rest. IBM's proprietary OSes do
make use of the full traceback table facility.
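A loosely hedged sketch of the two words GDB relies on (the field names and
exact layout are assumptions, not the ABI definition):

    #include <cstdint>

    // Emitted after the function's last instruction. GDB scans for the
    // zero word to find the end of the function, then reads the size
    // field to compute where the function begins.
    struct TracebackStub {
      uint32_t ZeroWord; // mandatory word of zeroes: end-of-function mark
      uint32_t FuncSize; // assumed position of the size field
      // Remaining mandatory fields are reserved and left zeroed for now.
    };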
Patch by Bill Schmidt.
llvm-svn: 162854
The operands on an INLINEASM machine instruction are divided into groups
headed by immediate flag operands. Verify this structure.
Extract verifyTiedOperands(), and only call it for non-inlineasm
instructions.
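A hedged sketch of walking that structure (the real flag encoding lives in
LLVM's InlineAsm support; the masks below are illustrative):

    // Each group is headed by an immediate flag word that encodes, among
    // other things, how many operands belong to the group.
    static unsigned countGroupedOperands(const unsigned *Flags,
                                         unsigned NumWords) {
      unsigned Total = 0;
      for (unsigned I = 0; I < NumWords;) {
        unsigned NumOps = (Flags[I] & 0xFFFF) >> 3; // illustrative mask
        Total += NumOps;
        I += 1 + NumOps; // skip the flag word plus its group's operands
      }
      return Total;
    }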
llvm-svn: 162849
This disables malloc-specific optimizations when -fno-builtin (or
-ffreestanding) is specified. This has been a problem for a long time but
became more severe with the recent memory builtin improvements.
Since the memory builtin functions are used everywhere, this required passing
TLI in many places. This means that functions that now have an optional TLI
argument, like RecursivelyDeleteTriviallyDeadInstructions, won't remove dead
mallocs anymore if the TLI argument is missing. I've updated most passes to do
the right thing.
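A conceptual sketch of the gating this introduces (LibraryInfo is a made-up
stand-in for TargetLibraryInfo):

    // Made-up stand-in for TargetLibraryInfo.
    struct LibraryInfo {
      bool BuiltinsEnabled; // false under -fno-builtin / -ffreestanding
      bool hasMalloc() const { return BuiltinsEnabled; }
    };

    // Without TLI, or with builtins disabled, a call must not be treated
    // as the builtin malloc, so dead mallocs are conservatively kept.
    static bool mayTreatAsMalloc(const LibraryInfo *TLI) {
      return TLI && TLI->hasMalloc();
    }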
Fixes PR13694 and probably others.
llvm-svn: 162841