Commit Graph

200 Commits

Author SHA1 Message Date
Ulrich Weigand ad0cb91ed9 [PowerPC] Simplify and improve loading into TOC register
During an indirect function call sequence on the 64-bit SVR4 ABI,
generated code must load and then restore the TOC register.

This does not use a regular LOAD instruction since the TOC
register r2 is marked as reserved.  Instead, there are two
special instruction patterns:

 let RST = 2, DS = 2 in
 def LDinto_toc: DSForm_1a<58, 0, (outs), (ins g8rc:$reg),
                     "ld 2, 8($reg)", IIC_LdStLD,
                     [(PPCload_toc i64:$reg)]>, isPPC64;
 
 let RST = 2, DS = 10, RA = 1 in
 def LDtoc_restore : DSForm_1a<58, 0, (outs), (ins),
                     "ld 2, 40(1)", IIC_LdStLD,
                     [(PPCtoc_restore)]>, isPPC64;

Note that these not only restrict the destination of the
load to r2, but they also restrict the *source* of the
load to particular address combinations.  The latter is
a problem when we want to support the ELFv2 ABI, since
there the TOC save slot is no longer at 40(1).

This patch replaces those two instructions with a single
instruction pattern that only hard-codes r2 as destination,
but supports generic addresses as source.  This will allow
supporting the ELFv2 ABI, and also helps generate more
efficient code for calls to absolute addresses (allowing
simplification of the ppc64-calls.ll test case).
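
For reference, a minimal sketch of the ELFv1 indirect-call sequence these
patterns participate in (register numbers are illustrative; the function
descriptor pointer is assumed to be in r11):

 std 2, 40(1)      # save the caller's TOC pointer in its stack slot
 ld 0, 0(11)       # entry point from the callee's function descriptor
 mtctr 0
 ld 2, 8(11)       # load the callee's TOC pointer (the load generalized here)
 bctrl
 ld 2, 40(1)       # restore the caller's TOC pointer after the call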

llvm-svn: 211193
2014-06-18 17:52:49 +00:00
Hal Finkel e01d32107c [PowerPC] Mark many instructions as commutative
I'm under the impression that we used to infer the isCommutable flag from the
instruction-associated pattern. Regardless, we don't seem to do this (at least
by default) any more. I've gone through all of our instruction definitions, and
marked as commutative all of those that should be trivial to commute (by
exchanging the first two operands). There has been special code for the RL*
instructions, and that's not changed.
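
As a rough illustration of what the flag conveys (register numbers arbitrary):

 add  5, 3, 4      # r5 = r3 + r4; swapping r3 and r4 gives the same result
 add  5, 4, 3      # so ADD4 can safely be marked isCommutable
 subf 5, 3, 4      # r5 = r4 - r3; not commutative, so left unmarked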

Before this change, we had the following commutative instructions:

 RLDIMI
 RLDIMIo
 RLWIMI
 RLWIMI8
 RLWIMI8o
 RLWIMIo
 XSADDDP
 XSMULDP
 XVADDDP
 XVADDSP
 XVMULDP
 XVMULSP

After:

 ADD4
 ADD4o
 ADD8
 ADD8o
 ADDC
 ADDC8
 ADDC8o
 ADDCo
 ADDE
 ADDE8
 ADDE8o
 ADDEo
 AND
 AND8
 AND8o
 ANDo
 CRAND
 CREQV
 CRNAND
 CRNOR
 CROR
 CRXOR
 EQV
 EQV8
 EQV8o
 EQVo
 FADD
 FADDS
 FADDSo
 FADDo
 FMADD
 FMADDS
 FMADDSo
 FMADDo
 FMSUB
 FMSUBS
 FMSUBSo
 FMSUBo
 FMUL
 FMULS
 FMULSo
 FMULo
 FNMADD
 FNMADDS
 FNMADDSo
 FNMADDo
 FNMSUB
 FNMSUBS
 FNMSUBSo
 FNMSUBo
 MULHD
 MULHDU
 MULHDUo
 MULHDo
 MULHW
 MULHWU
 MULHWUo
 MULHWo
 MULLD
 MULLDo
 MULLW
 MULLWo
 NAND
 NAND8
 NAND8o
 NANDo
 NOR
 NOR8
 NOR8o
 NORo
 OR
 OR8
 OR8o
 ORo
 RLDIMI
 RLDIMIo
 RLWIMI
 RLWIMI8
 RLWIMI8o
 RLWIMIo
 VADDCUW
 VADDFP
 VADDSBS
 VADDSHS
 VADDSWS
 VADDUBM
 VADDUBS
 VADDUHM
 VADDUHS
 VADDUWM
 VADDUWS
 VAND
 VAVGSB
 VAVGSH
 VAVGSW
 VAVGUB
 VAVGUH
 VAVGUW
 VMADDFP
 VMAXFP
 VMAXSB
 VMAXSH
 VMAXSW
 VMAXUB
 VMAXUH
 VMAXUW
 VMHADDSHS
 VMHRADDSHS
 VMINFP
 VMINSB
 VMINSH
 VMINSW
 VMINUB
 VMINUH
 VMINUW
 VMLADDUHM
 VMULESB
 VMULESH
 VMULEUB
 VMULEUH
 VMULOSB
 VMULOSH
 VMULOUB
 VMULOUH
 VNMSUBFP
 VOR
 VXOR
 XOR
 XOR8
 XOR8o
 XORo
 XSADDDP
 XSMADDADP
 XSMAXDP
 XSMINDP
 XSMSUBADP
 XSMULDP
 XSNMADDADP
 XSNMSUBADP
 XVADDDP
 XVADDSP
 XVMADDADP
 XVMADDASP
 XVMAXDP
 XVMAXSP
 XVMINDP
 XVMINSP
 XVMSUBADP
 XVMSUBASP
 XVMULDP
 XVMULSP
 XVNMADDADP
 XVNMADDASP
 XVNMSUBADP
 XVNMSUBASP
 XXLAND
 XXLNOR
 XXLOR
 XXLXOR

This is a by-inspection change, and I'm not sure how to write a reliable test
case. I would like advice on this, however.

llvm-svn: 204609
2014-03-24 15:07:28 +00:00
Hal Finkel 940ab934d4 Add CR-bit tracking to the PowerPC backend for i1 values
This change enables tracking i1 values in the PowerPC backend using the
condition register bits. These bits can be treated on PowerPC as separate
registers; individual bit operations (and, or, xor, etc.) are supported.
Tracking booleans in CR bits has several advantages:

 - Reduction in register pressure (because we no longer need GPRs to store
   boolean values).

 - Logical operations on booleans can be handled more efficiently; we used to
   have to move all results from comparisons into GPRs, perform promoted
   logical operations in GPRs, and then move the result back into condition
   register bits to be used by conditional branches. This can be very
   inefficient, because these CR <-> GPR moves have high latency and low
   throughput (especially when other associated instructions
   are accounted for).

 - On the POWER7 and similar cores, we can increase total throughput by using
   the CR bits. CR bit operations have a dedicated functional unit.

Most of this is more-or-less mechanical: Adjustments were needed in the
calling-convention code, support was added for spilling/restoring individual
condition-register bits, and conditional branch instruction definitions taking
specific CR bits were added (plus patterns and code for generating bit-level
operations).
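
For illustration, a conjunction of two comparisons can now stay entirely in
CR bits (register numbers and the label are illustrative):

 cmpw 0, 3, 4       # cr0 <- compare r3, r4
 cmpw 1, 5, 6       # cr1 <- compare r5, r6
 crand 2, 2, 6      # cr0.eq <- cr0.eq AND cr1.eq (bits 2 and 6 are the eq bits)
 beq 0, .Ltaken     # branch if both comparisons were equal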

This is enabled by default when running at -O2 and higher. For -O0 and -O1,
where the ability to debug is more important, this feature is disabled by
default. Individual CR bits do not have assigned DWARF register numbers,
and storing values in CR bits makes them invisible to the debugger.

It is critical, however, that we don't move i1 values that have been promoted
to larger values (such as those passed as function arguments) into bit
registers only to quickly turn around and move the values back into GPRs (such
as happens when values are returned by functions). A pair of target-specific
DAG combines are added to remove the trunc/extends in:
  trunc(binary-ops(binary-ops(zext(x), zext(y)), ...))
and:
  zext(binary-ops(binary-ops(trunc(x), trunc(y)), ...))
In short, we only want to use CR bits where some of the i1 values come from
comparisons or are used by conditional branches or selects. To put it another
way, if we can do the entire i1 computation in GPRs, then we probably should
(on the POWER7, the GPR-operation throughput is higher, and for all cores, the
CR <-> GPR moves are expensive).

POWER7 test-suite performance results (from 10 runs in each configuration):

SingleSource/Benchmarks/Misc/mandel-2: 35% speedup
MultiSource/Benchmarks/Prolangs-C++/city/city: 21% speedup
MultiSource/Benchmarks/MiBench/automotive-susan: 23% speedup
SingleSource/Benchmarks/CoyoteBench/huffbench: 13% speedup
SingleSource/Benchmarks/Misc-C++/Large/sphereflake: 13% speedup
SingleSource/Benchmarks/Misc-C++/mandel-text: 10% speedup

SingleSource/Benchmarks/Misc-C++-EH/spirit: 10% slowdown
MultiSource/Applications/lemon/lemon: 8% slowdown

llvm-svn: 202451
2014-02-28 00:27:01 +00:00
Hal Finkel 77c8dc1da3 [PPC] Use the correct immediate operands on 64-bit instructions
Several of the 64-bit fixed-point instructions with immediate operands were
using the 32-bit (i32) operand nodes instead of the corresponding 64-bit (i64)
operand definitions (u16imm instead of u16imm64, for example).

This error has had no effect so far, but would have caused type-checking
violations with an upcoming change.

llvm-svn: 198356
2014-01-02 21:26:59 +00:00
Roman Divacky 32143e2bda Implement initial-exec TLS for PPC32.
llvm-svn: 197824
2013-12-20 18:08:54 +00:00
Hal Finkel 2345347eb9 Add a disassembler to the PowerPC backend
The tests for the disassembler were adapted from the encoder tests, and for the
most part, the output from the disassembler matches the encoder-test inputs.
There are some places where more-informative mnemonics could be produced
(notably for the branch instructions), and those cases are noted in the tests
with FIXMEs.

Future work includes:

 - Generating more-informative mnemonics when possible (this may also be done
   in the printer).

 - Removing the dependence on positional "numbered" operand-to-variable mapping
   (for both encoding and decoding).

 - Internally using 64-bit instruction variants in 64-bit mode (if this turns
   out to matter).

llvm-svn: 197693
2013-12-19 16:13:01 +00:00
Hal Finkel b4b99e545b Eliminate PPC instruction decoding ambiguities
The instruction definitions in the PPC backend have a number of variants
defined for the same instruction to represent differences between 64-bit and
32-bit semantics. In order to generate a disassembler for the PPC backend, we
need to mark all but one of these as CodeGen only.

No functionality change intended; this is prep work for PPC disassembly
support.

llvm-svn: 197535
2013-12-17 23:05:18 +00:00
Hal Finkel 46402a4211 Split some PPC itinerary classes
In preparation for adding scheduling definitions for the POWER7, split some PPC
itinerary classes so that the P7's latencies and hazards can be better
described. For the most part, this means differentiating indexed from non-indexed
pre-increment loads and stores. Also, differentiate single from
double-precision sqrt.

No functionality change intended (except for a more-specific latency for
single-precision sqrt on the A2).

llvm-svn: 195980
2013-11-30 20:41:13 +00:00
Hal Finkel 3e5a360ba3 Add IIC_ prefix to PPC instruction-class names
This adds the IIC_ prefix to the instruction itinerary class names, giving the
PPC backend a naming convention for itinerary classes that is more consistent
with that used by the X86 and ARM backends.

Instruction scheduling in the PPC backend needs a bunch of cleanup and
improvement (especially for the ooo cores). This is just a preliminary step.

No functionality change intended.

llvm-svn: 195890
2013-11-27 23:26:09 +00:00
Hal Finkel 884bde3031 PPC popcnt[dw] do not have record forms
The instruction definitions incorrectly specified that popcntd and popcntw have
record forms; they do not. This mistake was causing invalid code generation.
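
For illustration (registers illustrative), a separate compare is required when
a condition is needed, since the dot-suffixed form does not exist:

 popcntd 4, 3       # population count of r3 into r4; there is no popcntd. form
 cmpldi 0, 4, 0     # an explicit compare must set cr0 instead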

llvm-svn: 195272
2013-11-20 20:54:55 +00:00
David Majnemer 08249a31b2 PPC: Do not introduce ISD nodes for fctid and fctiw
llvm-svn: 191421
2013-09-26 05:22:11 +00:00
David Majnemer 6ad26d3364 PPC: Add support for fctid and fctiw
Encodings were checked against the Power ISA documents and double
checked against binutils.

This fixes PR17350.

llvm-svn: 191419
2013-09-26 04:11:24 +00:00
Hal Finkel 7fe6a5390f PPC: Enable aggressive anti-dependency breaking
Aggressive anti-dependency breaking is enabled by default for all PPC cores.
This provides a general speedup on the P7 and other platforms (among other
factors, the instruction group formation for the non-embedded PPC cores is done
during post-RA scheduling). In order to do this safely, the incompatibility
between uses of the MFOCRF instruction and anti-dependency breaking is
resolved by marking MFOCRF with hasExtraSrcRegAllocReq. As noted in the removed
FIXME, the problem was that MFOCRF's output is sensitive to the identity of the
source register, and is always paired with a shift to undo this effect. Because
anti-dependency breaking is unaware of this hidden dependency of the shift
amount on the source register of the MFOCRF instruction, changing that register
must be inhibited.
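
For illustration, the pairing in question looks roughly like this (register
numbers illustrative); the rotate amount is tied to which CR field was read,
so renaming the source register without also changing the rotate would give a
wrong result:

 mfocrf 3, 128            # place cr0 into its nibble of the CR image in r3
 rlwinm 3, 3, 4, 28, 31   # rotate so the cr0 bits land in the low 4 bits of r3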

Two test cases were adjusted: The SjLj test was made more insensitive to
register choices and scheduling; the saveCR test disabled anti-dependency
breaking because part of what it is testing is proper register reuse.

llvm-svn: 190587
2013-09-12 05:24:49 +00:00
Bill Schmidt ccecf26157 [PowerPC] Add loads, stores, and related things to fast-isel.
This is the next big chunk of fast-isel code.  The primary purpose is
to implement selection of loads and stores, but there is a lot of
drag-along to support this.  The common code to analyze addresses for
both loads and stores is substantial.  It's also necessary to add the
materialization code for global values.

Related to load-store processing is the code to fold loads into
integer extends, since otherwise we generate lots of redundant
instructions.  We also need to add some overrides to some FastEmit
routines to ensure we don't assign GPR 0 to a virtual register when
this would change the meaning of an instruction.
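
The GPR-0 issue arises because a base-register operand of 0 is read as the
literal value zero rather than as r0, for example:

 lwz 4, 8(5)        # loads from r5 + 8
 lwz 4, 8(0)        # "0" means the constant zero, not r0: loads from address 8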

I added handling selection of a few binary arithmetic instructions, to
enable committing some test cases I wrote a while back.

Finally, a couple of miscellaneous changes:
 * I cleaned up some poor style from a previous patch in
   PPCISelLowering.cpp, pointed out by David Blaikie.
 * I enlarged the Addr.Offset field to avoid sign problems with 32-bit
   offsets. 

llvm-svn: 189636
2013-08-30 02:29:45 +00:00
Bill Schmidt d89f678cfd [PowerPC] More fast-isel chunks (returns and integer extends)
Incremental improvement to fast-isel for PPC64.  This allows us to
select on ret, sext, and zext.  Filling in sext/zext improves some of
the existing logic in handling compare-immediates that needed extends.

A simplified return convention for fast-isel is also added to the
PPC64 calling conventions.  All call/return processing for DAG
selection is handled with custom code, so there isn't an existing CC
to rely on here.  The include of PPCGenCallingConv.inc causes compiler
warnings due to the 32-bit calling conventions that are not used, so
the dummy function "usePPC32CCs()" is added here to silence those.

Test cases for the return and extend logic are added.

llvm-svn: 189266
2013-08-26 19:42:51 +00:00
Hal Finkel 11b9e452f6 Add PPC64 mulli pattern
The PPC backend had been missing a pattern to generate mulli for 64-bit
multiplies. We had been generating it only for 32-bit multiplies. Unfortunately,
generating li + mulld unnecessarily increases register pressure.
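
A small sketch of the difference (the constant and registers are illustrative):

 li 4, 100          # without the pattern: materialize the constant ...
 mulld 3, 3, 4      # ... and multiply, tying up an extra register
 mulli 3, 3, 100    # with the pattern: a single multiply-immediate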

llvm-svn: 187807
2013-08-06 17:03:03 +00:00
Hal Finkel 40f76d5830 PPC: Add CTR-register clobber to builtin setjmp
Because the builtin longjmp implementation uses a CTR-based indirect jump, when
the control flow arrives back at the builtin setjmp call, the CTR register has
necessarily been clobbered. Correspondingly, this adds CTR to the list of
implicit definitions of the builtin setjmp pseudo instruction.

We don't need to add CTR to the implicit definitions of builtin longjmp
because, even though it does clobber the CTR register, the control flow cannot
return to inside the loop unless there is also a builtin setjmp call.

llvm-svn: 186488
2013-07-17 05:35:44 +00:00
Ulrich Weigand 5b427591d6 [PowerPC] Support @tls in the asm parser
This adds support for the last missing construct to parse TLS-related
assembler code:
   add 3, 4, symbol@tls

The ADD8TLS currently hard-codes the @tls into the assembler string.
This cannot be handled by the asm parser, since @tls is parsed as
a symbol variant.  This patch changes ADD8TLS to have the @tls suffix
printed as symbol variant on output too, which allows us to remove
the isCodeGenOnly marker from ADD8TLS.  This in turn means that we
can add an AsmOperand to accept @tls marked symbols on input.

As a side effect, this means that the fixup_ppc_tlsreg fixup type
is no longer necessary and can be merged into fixup_ppc_nofixup.

llvm-svn: 185692
2013-07-05 12:22:36 +00:00
Ulrich Weigand 49f487e6cd [PowerPC] Use mtocrf when available
Just as with mfocrf, it is also preferable to use mtocrf instead of
mtcrf when only a single CR register is to be written.
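
For example, writing only cr0 from a GPR (mask and register values illustrative):

 mtcrf 0x80, 5      # general form: updates the fields selected by the mask
 mtocrf 0x80, 5     # single-field form, preferable when the CPU supports it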

Current code however always emits mtcrf.  This probably does not matter
when using an external assembler, since the GNU assembler will in fact
automatically replace mtcrf with mtocrf when possible.  It does create
inefficient code with the integrated assembler, however.

To fix this, this patch adds MTOCRF/MTOCRF8 instruction patterns and
uses those instead of MTCRF/MTCRF8 everywhere.  Just as done in the
MFOCRF patch committed as 185556, these patterns will be converted
back to MTCRF if MTOCRF is not available on the machine.

As a side effect, this allows us to modify the MTCRF pattern to accept
the full range of mask operands for the benefit of the asm parser.

llvm-svn: 185561
2013-07-03 17:59:07 +00:00
Ulrich Weigand d5ebc626d5 [PowerPC] Always use mfocrf if available
When accessing just a single CR register, it is always preferable to
use mfocrf instead of mfcr, if the former is available on the CPU.

Current code makes that distinction in many, but not all places
where a single CR register value is retrieved.  One missing
location is PPCRegisterInfo::lowerCRSpilling.

To fix this and make this simpler in the future, this patch changes
the bulk of the back-end to always assume mfocrf is available and
simply generate it when needed.

On machines that actually do not support mfocrf, the instruction
is replaced by mfcr at the very end, in EmitInstruction.

This has the additional benefit that we no longer need the
MFCRpseud hack, since before EmitInstruction we always have
a MFOCRF instruction pattern, which already models data flow
as required.

The patch also adds the MFOCRF8 version of the instruction,
which was missing so far.

Except for the PPCRegisterInfo::lowerCRSpilling case, no change
in generated code intended.

llvm-svn: 185556
2013-07-03 17:05:42 +00:00
Ulrich Weigand ae9cf5828c [PowerPC] Support mtspr/mfspr in the asm parser
This adds support for the generic forms of mtspr/mfspr
for the asm parser.  The compiler will continue to use
the specialized patterns for mtlr etc. since those are
needed to correctly describe data flow.
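
For example, both of the following are accepted and encode the same operation
(SPR 8 is the link register):

 mfspr 3, 8         # generic form handled by the asm parser
 mflr 3             # specialized mnemonic the compiler keeps using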

llvm-svn: 185532
2013-07-03 12:32:41 +00:00
Ulrich Weigand 42a09dc12f [PowerPC] PR16512 - Support TLS call sequences in the asm parser
This patch now adds support for recognizing TLS call sequences in
the asm parser.  This needs a new pattern BL8_TLS, which is like
BL8_NOP_TLS except without nop.  That pattern is used for the
asm parser only.

llvm-svn: 185478
2013-07-02 21:31:59 +00:00
Ulrich Weigand 5143bab2f9 [PowerPC] Rework TLS call operand processing
As part of the global-dynamic and local-dynamic TLS sequences, we need
to use a special form of the call instruction:

 bl __tls_get_addr(sym@tlsld)
 bl __tls_get_addr(sym@tlsgd)

which generates two fixups.  The current implementation of this causes
problems with recognizing this form in the asm parser.  To fix this,
this patch reworks operand processing for this special form by using
a single operand to hold both __tls_get_addr and sym@tlsld and defining
a print method to output the above form, and an encoding method to
generate the two fixups.

As a side simplification, the patch replaces the two instruction
patterns BL8_NOP_TLSGD and BL8_NOP_TLSLD by a single BL8_NOP_TLS,
since the patterns already operate in an identical fashion (whether
we have a local-dynamic or global-dynamic symbol is already encoded
in the symbol modifier).

No change in code generation intended.

llvm-svn: 185477
2013-07-02 21:31:04 +00:00
Ulrich Weigand 5a02a02b41 [PowerPC] Accept 17-bit signed immediates for addis
The assembler currently strictly verifies that immediates for
s16imm operands are in range (-32768 ... 32767).  This matches
the behaviour of the GNU assembler, with one exception: gas
allows, as a special case, operands in an extended range
(-65536 .. 65535) for the addis instruction only (and its
extended mnemonic lis).

The main reason for this seems to be to allow using unsigned
16-bit operands for lis, e.g. lis %r1, 0xfedc.

Since this has been supported by gas for a long time, and
assembler source code seen "in the wild" actually exploits
this feature, this patch adds equivalent support to LLVM
for compatibility reasons.

llvm-svn: 184946
2013-06-26 13:49:53 +00:00
Ulrich Weigand fd3ad693e8 [PowerPC] Support symbolic u16imm operands
Currently, all instructions taking s16imm operands support symbolic
operands.  However, for u16imm operands, we only support actual
immediate integers.  This causes the assembler to reject code like

  ori %r5, %r5, symbol@l

This patch changes the u16imm operand definition to likewise
accept symbolic operands.  In fact, s16imm and u16imm can
share the same encoding routine, now renamed to getImm16Encoding.

llvm-svn: 184944
2013-06-26 13:49:15 +00:00
Ulrich Weigand 6c31c4aae8 [PowerPC] Add rldcr/rldic instructions
This adds patterns for the rldcr and rldic instructions (the last instructions
from the rotate/shift family that were missing).  They are currently used
only by the asm parser.

llvm-svn: 184833
2013-06-25 13:17:10 +00:00
Ulrich Weigand 86247b6e27 [PowerPC] Add predicted forms of branches
This adds support for the predicted forms of branches (+/-).
There are three cases to consider:
- Branches using a PPC::Predicate code
  For these, I've added new PPC::Predicate codes corresponding
  to the BO values for predicted branch forms, and updated insn
  printing to print them correctly.  I've also added new aliases
  for the asm parser matching the new forms.
- bt/bf
  I've added new aliases matching to gBC etc.
- bd(n)z variants
  I've added new instruction patterns for the predicted forms.

In all cases, the new patterns are used for the asm parser only.
(The new infrastructure ought to be sufficient to allow use by
the compiler too at some point.)
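
For illustration, the accepted predicted forms look like this (CR field and
labels illustrative):

 beq+ 0, .Ltaken    # '+' hints that the branch is usually taken
 bne- 0, .Lskip     # '-' hints that the branch is usually not taken
 bdnz+ .Lloop       # predicted form of the counter-decrementing branch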

llvm-svn: 184754
2013-06-24 16:52:04 +00:00
Ulrich Weigand b6a30d159e [PowerPC] Support absolute branches
There is currently only limited support for the "absolute" variants
of branch instructions.  This patch adds support for the absolute
variants of all branches that are currently otherwise supported.

This requires adding new fixup types so that the correct variant
of relocation type can be selected by the object writer.

While the compiler will continue to usually choose the relative
branch variants, this will allow the asm parser to fully support
the absolute branches, with either immediate (numerical) or
symbolic target addresses.
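
For illustration (target addresses illustrative):

 b   .Lnext         # relative: target encoded as an offset from the branch
 ba  0x2000         # absolute: target encoded as the address itself
 bla 0x2000         # absolute call variant, for fixed target addresses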

No change in code generation intended.

llvm-svn: 184721
2013-06-24 11:03:33 +00:00
Ulrich Weigand 9948546923 [PowerPC] Remove symbolLo/symbolHi instruction operand types
Now that there is no longer any distinction between symbolLo
and symbolHi operands in either printing, encoding, or parsing,
the operand types can be removed in favor of simply using
s16imm.

This completes the patch series to decouple lo/hi operand part
processing from the particular instruction whose operand it is.

No change in code generation expected from this patch.

llvm-svn: 182618
2013-05-23 22:48:06 +00:00
Ulrich Weigand 41789de165 [PowerPC] Clean up generation of ha16() / lo16() markers
When targeting the Darwin assembler, we need to generate markers ha16() and
lo16() to designate the high and low parts of a (symbolic) immediate.  This
is necessary not just for plain symbols, but also for certain symbolic
expression, typically along the lines of ha16(A - B).  The latter doesn't
work when simply using VariantKind flags on the symbol reference.
This is why the current back-end uses hacks (explicitly called out as such
via multiple FIXMEs) in the symbolLo/symbolHi print methods.
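
For illustration, the Darwin syntax in question looks roughly like this, with
the lis materializing the high-adjusted half and the addi adding the low half
(symbol names illustrative):

 lis r2, ha16(_a - _b)
 addi r2, r2, lo16(_a - _b)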

This patch uses target-defined MCExpr codes to represent the Darwin
ha16/lo16 constructs, following along the lines of the equivalent solution
used by the ARM back end to handle their :upper16: / :lower16: markers.
This allows us to get rid of special handling both in the symbolLo/symbolHi
print method and in the common code MCExpr::print routine.  Instead, the
ha16 / lo16 markers are printed simply in a custom print routine for the
target MCExpr types.  (As a result, the symbolLo/symbolHi print methods
can now be replaced by a single printS16ImmOperand routine that also handles
symbolic operands.)

The patch also provides an EvaluateAsRelocatableImpl routine to handle
ha16/lo16 constructs.  This is not actually used at the moment by any
in-tree code, but is provided as it makes merging into David Fang's
out-of-tree Mach-O object writer simpler.

Since there is no longer any need to treat VK_PPC_GAS_HA16 and
VK_PPC_DARWIN_HA16 differently, they are merged into a single
VK_PPC_ADDR16_HA (and likewise for the _LO16 types).

llvm-svn: 182616
2013-05-23 22:26:41 +00:00
Bill Schmidt f88571e027 Change some PowerPC PatLeaf definitions to ImmLeaf for fast-isel.
Using PatLeaf rather than ImmLeaf when defining immediate predicates
prevents simple patterns using those predicates from being recognized
for fast instruction selection.  This patch replaces the immSExt16
PatLeaf predicate with two ImmLeaf predicates, imm32SExt16 and
imm64SExt16, allowing a few more patterns to be recognized (ADDI,
ADDIC, MULLI, ADDI8, and ADDIC8).  Using the new predicates does not
help for LI, LI8, SUBFIC, and SUBFIC8 because these are rejected for
other reasons, but I see no reason to retain the PatLeaf predicate.

No functional change intended, and thus no test cases yet.  This is
preliminary work for enabling fast-isel support for PowerPC.  When
that support is ready, we'll be able to test this function.

llvm-svn: 182510
2013-05-22 20:09:24 +00:00
Hal Finkel 0859ef29d5 Rename PPC MTCTRse to MTCTRloop
As the pairing of this instruction form with the bdnz/bdz branches is now
enforced by the verification pass, make it clear from the name that these
are used only for counter-based loops.

No functionality change intended.

llvm-svn: 182296
2013-05-20 16:08:37 +00:00
Ulrich Weigand 2dbe06a987 [PowerPC] Fix hi/lo encoding in old-style code emitter
This patch implements the equivalent change to r182091/r182092
in the old-style code emitter.  Instead of having two separate
16-bit immediate encoding routines depending on the instruction,
this patch introduces a single encoder that checks the machine
operand flags to decide whether the low or high half of a
symbol address is required.

Since now both encoders make no further distinction between
"symbolLo" and "symbolHi", the .td operand can now use a
single getS16ImmEncoding method.

Tested by running the old-style JIT tests on 32-bit Linux.

llvm-svn: 182097
2013-05-17 14:14:12 +00:00
Hal Finkel 25c1992bc7 Implement PPC counter loops as a late IR-level pass
The old PPCCTRLoops pass, like the Hexagon pass version from which it was
derived, could only handle some simple loops in canonical form. We cannot
directly adapt the new Hexagon hardware loops pass, however, because the
Hexagon pass contains a fundamental assumption that non-constant-trip-count
loops will contain a guard, and this is not always true (the result being that
incorrect negative counts can be generated). With this commit, we replace the
pass with a late IR-level pass which makes use of SE to calculate the
backedge-taken counts and safely generate the loop-count expressions (including
any necessary max() parts). This IR level pass inserts custom intrinsics that
are lowered into the desired decrement-and-branch instructions.
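
For reference, the decrement-and-branch form being targeted is the usual CTR
loop (register and label illustrative):

 mtctr 5            # move the computed trip count into the CTR register
.Lloop:
 # ... loop body ...
 bdnz .Lloop        # decrement CTR and branch back while it is non-zero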

The most fragile part of this new implementation is that interfering uses of
the counter register must be detected on the IR level (and, on PPC, this also
includes any indirect branches in addition to function calls). Also, to make
all of this work, we need a variant of the mtctr instruction that is marked
as having side effects. Without this, machine-code level CSE, DCE, etc.
illegally transform the resulting code. Hopefully, this can be improved
in the future.

This new pass is smaller than the original (and much smaller than the new
Hexagon hardware loops pass), and can handle many additional cases correctly.
In addition, the preheader-creation code has been copied from LoopSimplify, and
after we decide on where it belongs, this code will be refactored so that it
can be explicitly shared (making this implementation even smaller).

The new test-case files ctrloop-{le,lt,ne}.ll have been adapted from tests for
the new Hexagon pass. There are a few classes of loops that this pass does not
transform (noted by FIXMEs in the files), but these deficiencies can be
addressed within the SE infrastructure (thus helping many other passes as well).

llvm-svn: 181927
2013-05-15 21:37:41 +00:00
Ulrich Weigand 640192daa8 [PowerPC] Add assembler parser
This adds assembler parser support to the PowerPC back end.

The parser will run for any powerpc-*-* and powerpc64-*-* triples,
but was tested only on 64-bit Linux.  The supported syntax is
intended to be compatible with the GNU assembler.

The parser does not yet support all PowerPC instructions, but
it does support anything that is generated by LLVM itself.
There is no support for testing restricted instruction sets yet,
i.e. the parser will always accept any instructions it knows,
no matter what feature flags are given.

Instruction operands will be checked for validity and errors
generated.  (Error handling in general could still be improved.)

The patch adds a number of test cases to verify instruction
and operand encodings.  The tests currently cover all instructions
from the following PowerPC ISA v2.06 Book I facilities:
Branch, Fixed-point, Floating-Point, and Vector. 
Note that a number of these instructions are not yet supported
by the back end; they are marked with FIXME.

A number of follow-on check-ins will add extra features.  When
they are all included, LLVM passes all tests (including bootstrap)
when using clang -cc1as as the system assembler.

llvm-svn: 181050
2013-05-03 19:49:39 +00:00
Ulrich Weigand 136ac22eaa PowerPC: Use RegisterOperand instead of RegisterClass operands
In the default PowerPC assembler syntax, registers are specified simply
by number, so they cannot be distinguished from immediate values (without
looking at the opcode).  This means that the default operand matching logic
for the asm parser does not work, and we need to specify custom matchers.
Since those can only be specified with RegisterOperand classes and not
directly on the RegisterClass, all instructions patterns used by the asm
parser need to use a RegisterOperand (instead of a RegisterClass) for
all their register operands.
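
A short example of the ambiguity (register numbers illustrative):

 addi 3, 4, 5       # "4" names a register, "5" is an immediate
 add  3, 4, 5       # identical-looking operands, but here "5" names a register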

This patch adds one RegisterOperand for each RegisterClass, using the
same name as the class, just in lower case, and updates all instruction
patterns to use RegisterOperand instead of RegisterClass operands.

llvm-svn: 180611
2013-04-26 16:53:15 +00:00
Ulrich Weigand fa451ba1b9 PowerPC: Fix encoding of rldimi and rldcl instructions
When testing the asm parser, I noticed wrong encodings for the
above instructions (wrong operand name in rldimi, wrong form
and sub-opcode for rldcl).

Tests will be added together with the asm parser.

llvm-svn: 180606
2013-04-26 15:39:12 +00:00
Ulrich Weigand d0585d8686 PowerPC: Mark some more patterns as isCodeGenOnly.
A couple of recently introduced conditional branch patterns
also need to be marked as isCodeGenOnly since they cannot
be handled by the asm parser.

No change in generated code.

llvm-svn: 179690
2013-04-17 17:19:05 +00:00
Hal Finkel 95e6ea69be Mark all PPC comparison instructions as not having side effects
Now that the CR spilling issues have been resolved, we can remove the
unmodeled-side-effect attributes from the comparison instructions (and also
mark them as isCompare). By allowing these, by default, to have unmodeled side
effects, we were hiding problems with CR spilling; but everything seems much
happier now.

llvm-svn: 179502
2013-04-15 02:37:46 +00:00
Hal Finkel 2f29391504 Mark all PPC CR registers to be spilled as live-in and tag MFCR appropriately
Leaving MFCR as having unmodeled side effects is not enough to prevent
unwanted instruction reordering post-RA. We could probably apply a stronger
barrier attribute, but there is a better way: Add all (not just the first) CR
to be spilled as live-in to the entry block, and add all CRs to the MFCR
instruction as implicitly killed.

Unfortunately, I don't have a small test case.

llvm-svn: 179465
2013-04-13 23:06:15 +00:00
Hal Finkel 1b58f335ca PPC: Remove (broken) nested implicit definition lists
TableGen will not combine nested list 'let' bindings into a single list, and
instead uses only the inner scope. As a result, several instruction definitions
were missing implicit register defs that were in outer scopes. This de-nests
these scopes and makes all instructions have only one let binding which sets
implicit register definitions.

llvm-svn: 179392
2013-04-12 18:17:57 +00:00
Hal Finkel 654d43b41a Add PPC instruction record forms and associated query functions
This is prep. work for the implementation of optimizeCompare. Many PPC
instructions have 'record' forms (in almost all cases, this means that the RC
bit is set) that cause the result of the instruction to be compared with zero,
and the result of that comparison saved in a predefined condition register. In
order to add the record forms of the instructions without too much
copy-and-paste, the relevant functions have been refactored into multiclasses
which define both the record and normal forms.
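
For illustration, a record form also writes cr0 with the result of comparing
the computed value against zero (registers and label illustrative):

 and  5, 3, 4       # plain form: cr0 is untouched
 and. 5, 3, 4       # record form (RC bit set): cr0 reflects the result vs. zero
 beq  0, .Lzero     # the recorded compare can feed a conditional branch directly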

Also, two TableGen-generated mapping functions have been added which allow
querying the instruction code for the record form given the normal form (and
vice versa).

No functionality change intended.

llvm-svn: 179356
2013-04-12 02:18:09 +00:00
Hal Finkel 500b004566 PPC: Prep for if conversion of bctr[l]
This adds in-principle support for if-converting the bctr[l] instructions.
These instructions are used for indirect branching. It seems, however, that the
current if converter will never actually predicate these. To do so, it would
need the ability to hoist a few setup insts. out of the conditionally-executed
block. For example, code like this:
  void foo(int a, int (*bar)()) { if (a != 0) bar(); }
becomes:
        ...
        beq 0, .LBB0_2
        std 2, 40(1)
        mr 12, 4
        ld 3, 0(4)
        ld 11, 16(4)
        ld 2, 8(4)
        mtctr 3
        bctrl
        ld 2, 40(1)
.LBB0_2:
        ...
and it would be safe to do all of this unconditionally with a predicated
beqctrl instruction.

llvm-svn: 179156
2013-04-10 06:42:34 +00:00
Hal Finkel 5711eca19c Allow PPC B and BLR to be if-converted into some predicated forms
This enables us to form predicated branches (which are the same conditional
branches we had before) and also a larger set of predicated returns (including
instructions like bdnzlr which is a conditional return and loop-counter
decrement all in one).
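
For illustration, the kinds of predicated returns involved (CR field illustrative):

 beqlr 0            # return if cr0 indicates "equal"
 bdnzlr             # decrement CTR and return if it is still non-zero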

At the moment, if conversion does not capture all possible opportunities. A
simple example is provided in early-ret2.ll, where if conversion forms one
predicated return, and then the PPCEarlyReturn pass picks up the other one. So,
at least for now, we'll keep both mechanisms.

llvm-svn: 179134
2013-04-09 22:58:37 +00:00
Hal Finkel 7795e47b5e PPC rotate instructions don't have unmodeled side effects
llvm-svn: 178982
2013-04-07 15:06:53 +00:00
Hal Finkel b47a69acde Most PPC M[TF]CR instructions do not have side effects
llvm-svn: 178978
2013-04-07 14:33:13 +00:00
Hal Finkel d71cc3a7f3 PPC pre-increment load instructions do not have side effects
A few were missed in r178972.

llvm-svn: 178973
2013-04-07 06:30:47 +00:00
Hal Finkel 6efd45e902 PPC pre-increment load instructions do not have side effects
llvm-svn: 178972
2013-04-07 05:46:58 +00:00
Hal Finkel 8fc33e5d95 PPC ISEL is a select and never has side effects
llvm-svn: 178960
2013-04-06 19:30:28 +00:00
Hal Finkel f6d45f2379 Add more PPC floating-point conversion instructions
The P7 and A2 have additional floating-point conversion instructions which
allow a direct two-instruction sequence (plus load/store) to convert from all
combinations (signed/unsigned i32/i64) <--> (float/double) (on previous cores,
only some combinations were directly available).
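
As a sketch of such a sequence, using one of the conversions added on these
cores (registers and addressing illustrative):

 lfd 0, 0(3)        # load the 64-bit integer bit pattern into an FPR
 fcfidus 1, 0       # unsigned doubleword -> single precision, new on P7/A2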

llvm-svn: 178480
2013-04-01 17:52:07 +00:00