1. MachineJumpTableInfo is now created lazily for a function the first time
it actually makes a jump table instead of for every function.
2. The encoding of jump table entries is now described by the
MachineJumpTableInfo::JTEntryKind enum. This enum is determined by the
TLI::getJumpTableEncoding() hook, instead of by lots of code scattered
throughout the compiler that "knows" that jump table entries are always
32 bits in PIC mode (for example).
3. The size and alignment of jump table entries are now calculated based on
their kind, instead of being fixed at MachineFunction creation time.
Future work includes using the EntryKind in more places in the compiler,
eliminating other logic that "knows" the layout of jump tables in various
situations.
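As a rough standalone sketch of the kind-driven layout (hypothetical code,
not the in-tree implementation, though the kind names echo the real enum):

  // The entry kind, chosen by a target hook such as
  // TLI::getJumpTableEncoding(), determines entry size and alignment,
  // so callers no longer hard-code "4 bytes in PIC mode".
  enum JTEntryKind {
    EK_BlockAddress,      // a full pointer to the target block
    EK_LabelDifference32, // a 32-bit label difference (typical for PIC)
    EK_Custom32           // a target-defined 32-bit entry
  };

  struct JumpTableLayout {
    unsigned EntrySize;      // in bytes
    unsigned EntryAlignment; // in bytes
  };

  JumpTableLayout getLayout(JTEntryKind Kind, unsigned PointerSize) {
    switch (Kind) {
    case EK_BlockAddress:
      return JumpTableLayout{PointerSize, PointerSize};
    case EK_LabelDifference32:
    case EK_Custom32:
      return JumpTableLayout{4, 4};
    }
    return JumpTableLayout{0, 0}; // unreachable for valid kinds
  }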
llvm-svn: 94470
the '-pre-RA-sched' flag. It actually makes more sense to do it this way. Also,
keep track of the SDNode ordering by default. Eventually, we would like to make
this ordering a way to break a "tie" in the scheduler. However, doing that now
breaks the "CodeGen/X86/abi-isel.ll" test for 32-bit Linux.
llvm-svn: 94308
(X != null) | (Y != null) --> (X|Y) != 0
(X == null) & (Y == null) --> (X|Y) == 0
so that instcombine can stop doing this for pointers. This is part of PR3351,
a case where instcombine performing this transform on pointers (by inserting
ptrtoint) pessimizes the code.
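For reference, a self-contained check (hypothetical helper names) that the
two forms agree on the pointers' bit patterns:

  #include <cassert>
  #include <cstdint>

  // Naive form: two compares joined by an 'or'.
  bool eitherNonNull(void *X, void *Y) {
    return (X != nullptr) | (Y != nullptr);
  }

  // Folded form: 'or' the bit patterns first, then a single compare
  // against zero -- the shape the transform produces.
  bool eitherNonNullFolded(void *X, void *Y) {
    return (reinterpret_cast<std::uintptr_t>(X) |
            reinterpret_cast<std::uintptr_t>(Y)) != 0;
  }

  int main() {
    int Dummy;
    void *Cases[] = { nullptr, &Dummy };
    for (void *X : Cases)
      for (void *Y : Cases)
        assert(eitherNonNull(X, Y) == eitherNonNullFolded(X, Y));
  }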
llvm-svn: 92406
multiply sequence when the power is a constant integer. Before, our codegen
always turned std::pow(.., int) into a libcall, which was really inefficient.
This should also make many gfortran programs happier, I'd imagine.
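The expansion amounts to binary exponentiation; a minimal standalone sketch
of the idea (the actual lowering emits a straight-line multiply sequence for
the specific constant power):

  // pow(X, N) for a constant integer N via repeated squaring:
  // O(log N) multiplies instead of a libcall.
  double powConstInt(double X, unsigned N) {
    double Result = 1.0;
    while (N) {
      if (N & 1)
        Result *= X; // fold in the current bit of the exponent
      X *= X;        // square for the next bit
      N >>= 1;
    }
    return Result;
  }

Emitted as straight-line code, pow(X, 11) needs roughly five multiplies
(X^2, X^4, X^8, then two combines) rather than a call into libm.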
llvm-svn: 92388
I asked Devang to do back on Sep 27. Instead of going through the
MetadataContext class with methods like getMD() and getMDs(), just
ask the instruction directly for its metadata with getMetadata()
and getAllMetadata().
This includes a variety of other fixes and improvements: previously all
Value*'s were bloated because the HasMetadata bit was thrown into Value,
adding a 9th bit to a byte. Now this is properly sunk down to the
Instruction class (the only place where it makes sense), and it will be
folded away somewhere soon.
This also fixes some confusion in getMDs and its clients about whether
the returned list is indexed by the MDID or densely packed. The list is
now returned sorted and densely packed, and the comments make this clear.
This introduces a number of FIXMEs, which I'll follow up on.
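A sketch of the new call sites (the wrapper function is hypothetical; the
header paths are those of this era of the tree):

  #include "llvm/Instruction.h"
  #include "llvm/Metadata.h"
  #include "llvm/ADT/SmallVector.h"
  #include <utility>
  using namespace llvm;

  void dumpInstMetadata(Instruction *I, unsigned KindID) {
    // Point query: returns null if I has no metadata of this kind.
    if (MDNode *MD = I->getMetadata(KindID))
      MD->dump();

    // Bulk query: the result comes back densely packed and sorted by
    // kind ID, not indexed by MDID -- the ambiguity fixed above.
    SmallVector<std::pair<unsigned, MDNode *>, 4> MDs;
    I->getAllMetadata(MDs);
    for (unsigned i = 0, e = MDs.size(); i != e; ++i)
      MDs[i].second->dump(); // MDs[i].first is the metadata kind ID
  }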
llvm-svn: 92235
compare. On other targets we end up with a call to memcmp because we don't
want 16 individual byte loads. We should be able to use movups as well, but
we're failing to select the generated icmp.
llvm-svn: 92107
SDISel. This optimization was causing simplifylibcalls to
introduce type-unsafe nastiness. This is the first step; I'll be
expanding the memcmp optimizations shortly, covering things that
we really, really wouldn't want simplifylibcalls to do.
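A standalone sketch of the flavor of expansion being moved (not the SDISel
code itself): a small constant-size memcmp-for-equality becomes one wide
load per side plus a single compare:

  #include <cstdint>
  #include <cstring>

  // memcmp(A, B, 2) == 0 rewritten as a 16-bit load per side and one
  // compare; std::memcpy models an unaligned load without the
  // type-unsafe pointer casts we don't want simplifylibcalls creating.
  bool equal2Bytes(const void *A, const void *B) {
    std::uint16_t LHS, RHS;
    std::memcpy(&LHS, A, sizeof LHS);
    std::memcpy(&RHS, B, sizeof RHS);
    return LHS == RHS;
  }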
llvm-svn: 92098
return partial registers. This affected the back-end lowering code somewhat.
Also patch up some places I missed before in the "get" functions.
llvm-svn: 91880
- Move the DisableScheduling flag into TargetOptions.h.
- Move SDNodeOrdering into its own header file. Give it a minimal interface that
doesn't conflate construction with storage.
- Move assigning the ordering into the SelectionDAGBuilder.
This isn't used yet, so there should be no functional changes.
llvm-svn: 91727
The change in SelectionDAGBuilder is needed to allow using bitcasts to convert
between f64 (the default type for ARM "d" registers) and 64-bit Neon vector
types. Radar 7457110.
llvm-svn: 91649
stuff isn't used just yet.
We want to model the GCC `-fno-schedule-insns' and `-fno-schedule-insns2'
flags. The hypothesis is that the people who use these flags know what they are
doing, and have hand-optimized the C code to reduce latencies and other
conflicts.
The idea behind our scheme to turn off scheduling is to create a map "on the
side" during DAG generation that orders the nodes by the order in which they
appear in the source code. This map is then used during scheduling to get the
ordering.
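A minimal standalone sketch of such a side map (hypothetical names; in-tree
this is the SDNodeOrdering class mentioned above, keyed on SDNode*):

  #include <map>

  // Record each DAG node's position in source order during DAG
  // construction; the scheduler then reads the recorded order back
  // instead of inventing its own.
  class SourceOrderMap {
    std::map<const void *, unsigned> Order; // node -> appearance index
  public:
    void noteNode(const void *N, unsigned Idx) { Order[N] = Idx; }
    unsigned getOrder(const void *N) const {
      std::map<const void *, unsigned>::const_iterator I = Order.find(N);
      return I == Order.end() ? 0 : I->second; // 0 == "no recorded order"
    }
  };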
llvm-svn: 91392