This is the second part of the change to always return "true"
offset values from getPreIndexedAddressParts, tackling the
case of "memrix" type operands.
This is about instructions like LD/STD that have only a 14-bit
field to encode immediate offsets; the machine implicitly extends
this field by two zero bits, so that in effect we can access
16-bit signed offsets as long as they are a multiple of 4.
The PowerPC back end currently handles such instructions by
carrying, in the machine operand fields, the 14-bit value
exactly as it will be encoded into the actual machine
instruction. This means that those values are in fact not
the true offset, but rather the offset divided by 4 (and
then truncated to an unsigned 14-bit value).
Like in the case fixed in r182012, this makes common code
operations on such offset values not work as expected.
Furthermore, there doesn't really appear to be any strong
reason why we should encode machine operands this way.
This patch therefore changes the encoding of "memrix" type
machine operands to simply contain the "true" offset value
as a signed immediate value, while enforcing the rules that
it must fit in a 16-bit signed value and must also be a
multiple of 4.
This change must be made simultaneously in all places that
access machine operands of this type. However, just about
all those changes make the code simpler; in many cases we
can now just share the same code for memri and memrix
operands.
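To make the new rule concrete, here is a small standalone sketch
(mine, not code from the patch) of how a DS-form ("memrix")
displacement relates to the 14-bit instruction field, and of the
constraint the new operand encoding enforces:

  #include <cassert>
  #include <cstdint>

  // New representation: the true offset, which must fit in a signed
  // 16-bit value and be a multiple of 4.
  bool isValidMemrixOffset(int64_t Offset) {
    return Offset >= -32768 && Offset <= 32767 && (Offset & 3) == 0;
  }

  // Old representation: the 14-bit field exactly as encoded in the
  // instruction (offset divided by 4, truncated to unsigned 14 bits).
  uint16_t encodeDSField(int64_t Offset) {
    assert(isValidMemrixOffset(Offset));
    return static_cast<uint16_t>((Offset / 4) & 0x3FFF);
  }

  // The machine implicitly appends two zero bits, so the effective
  // offset is the sign-extended 14-bit field times 4.
  int64_t effectiveOffset(uint16_t Field) {
    int64_t S = (Field & 0x2000) ? int64_t(Field) - 0x4000 : int64_t(Field);
    return S * 4; // e.g. field 0x3FFE decodes back to offset -8
  }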
llvm-svn: 182032
Added a cache of memory regions that aren't actually
allocated in the process. This cache is used by the
expression parser if the underlying process doesn't
support memory allocation, to avoid needless repeated
searches for unused address ranges.
Also fixed a silly bug in IRMemoryMap where it
would continue searching even after it found a
valid region.
<rdar://problem/13866629>
llvm-svn: 182028
This ensures that we format:

  void longFunctionName {
  } // long comment here

And not:

  void longFunctionName {}
  // long comment here

As requested in post-commit review.
llvm-svn: 182024
While testing some experimental code to add vector-scalar registers to
PowerPC, I noticed that a couple of independent instructions were
flipped by the scheduler. The new CHECK-DAG support is perfect for
avoiding this problem.
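For illustration (this is not the actual test from the patch, and the
instructions and operands are made up), CHECK-DAG lets two independent
instructions match in either order, where plain CHECK lines would break
whenever the scheduler flips them:

  ; CHECK-DAG: addi 3, 3, 8
  ; CHECK-DAG: addi 4, 4, 16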
llvm-svn: 182020
The recent improvement to the Use Nullptr Transform for macro arg
expansions wasn't testing that only allowed NULL macros used in macro
args can be transformed. This revision replaces a TODO with a test to
that effect.
Fixes PR15955.
llvm-svn: 182014
DAGCombiner::CombineToPreIndexedLoadStore calls a target routine to
decompose a memory address into a base/offset pair. It expects the
offset (if constant) to be the true displacement value in order to
perform optional additional optimizations; in particular, to convert
other uses of the original pointer into uses of the new base pointer
after pre-increment.
The PowerPC implementation of getPreIndexedAddressParts, however,
simply calls SelectAddressRegImm, which returns a TargetConstant.
This value is appropriate for encoding into the instruction, but
it is not always usable as the true displacement value:
- Its type is always MVT::i32, even on 64-bit, where addresses
ought to be i64 ... this causes the optimization to simply
always fail on 64-bit due to this line in DAGCombiner:
  // FIXME: In some cases, we can be smarter about this.
  if (Op1.getValueType() != Offset.getValueType()) {
- Its value is truncated to an unsigned 16-bit value if negative.
This causes the above optimization to generate wrong code.
This patch fixes both problems by simply returning the true
displacement value (in its original type). This doesn't
affect any other user of the displacement.
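A small standalone illustration (assumptions mine, not code from the
patch) of the second problem, the truncation of negative displacements:

  #include <cstdint>
  #include <cstdio>

  int main() {
    int64_t TrueDisp = -8; // a negative pre-increment displacement
    // What the TargetConstant effectively carried: the displacement
    // truncated to an unsigned 16-bit encoding value.
    uint16_t Truncated = static_cast<uint16_t>(TrueDisp);
    printf("true: %lld, truncated: %u\n",
           (long long)TrueDisp, (unsigned)Truncated);
    // Prints "true: -8, truncated: 65528"; any common-code arithmetic
    // performed on 65528 instead of -8 produces wrong addresses.
    return 0;
  }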
llvm-svn: 182012
It turns out that several implementations go through the trouble of
setting up a SourceManager and Lexer, so abstracting this into a
function makes usage easier.
Also abstracts SourceManager-independent ranges out of
tooling::Refactoring and provides a convenience function to create them
from line ranges.
llvm-svn: 181997
Clang has an issue with cyclic #include_next between mingw/include/float.h and clang/Headers/float.h.
For now, it should work to suppress the #include_next in clang's float.h for this explicit target.
(It might also work with -U__MINGW32__.)
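A hedged sketch of the shape of such a workaround (not necessarily the
exact guard used in the patch):

  /* In clang/Headers/float.h: skip the cyclic #include_next on mingw. */
  #if !defined(__MINGW32__)
  # include_next <float.h>
  #endif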
llvm-svn: 181988
When the Polly code generation was written, we did not correctly update the
LoopInfo data, yet we still claimed that the loop information was preserved.
This not only leads to missed optimizations, but can also cause
miscompilations if passes such as LoopSimplify are run after Polly.
Reported-by: Sergei Larin <slarin@codeaurora.org>
llvm-svn: 181987
Polly now generates loops with the following structure:

      BeforeBB
         |
         v
      GuardBB
      /      \
  __ PreHeaderBB \
 /  \    /        |
latch HeaderBB    |
 \   /    \      /
  <        \    /
            \  /
           ExitBB
This not only removes the need for an explicit loop rotate pass, but also
gives us the possibility to skip the construction of the guard condition in
case the loop is known to be executed at least once. We do not yet exploit
this, but by implementing this analysis in the isl code generator we should
be able to remove more guards than the generic loop rotate pass can. Another
point is that loop rotation can introduce additional PHI nodes, which may
hide that a loop can be executed in parallel. This change avoids that
complication and will make it easier to move the OpenMP code generation
into a separate pass.
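In C terms, the generated structure corresponds to a guarded do-while
loop, roughly as follows (an illustrative sketch, not actual Polly
output; 'body' and 'N' are placeholders):

  if (N > 0) {        /* GuardBB: omittable when N > 0 is provable */
    long i = 0;       /* PreHeaderBB */
    do {              /* HeaderBB */
      body(i);
      ++i;
    } while (i < N);  /* latch: the back edge to the header */
  }
  /* ExitBB */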
llvm-svn: 181986
a FieldDecl from it, and propagate both into the closure type and the
LambdaExpr.
You can't do much useful with them yet -- you can't use them within the body
of the lambda, because we don't have a representation for "the this of the
lambda, not the this of the enclosing context". We also don't have support or a
representation for a nested capture of an init-capture yet, which was intended
to work despite not being allowed by the current standard wording.
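For reference, the construct being modelled is a C++1y init-capture;
a minimal (illustrative) example:

  auto l = [n = 42] {};  // 'n' is represented as a variable declaration,
                         // with a FieldDecl synthesized in the closure type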
llvm-svn: 181985
In the case of inline functions, we have to special-case local types
when they are used as template arguments, to make sure the template
instantiations are still uniqued in case the function itself is inlined.
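A minimal example (mine, not taken from the test case) of the pattern
being special-cased:

  template <typename T> void use(T) {}

  inline void f() {
    struct Local {};  // local type used as a template argument
    use(Local());     // the instantiation use<Local> must stay uniqued
  }                   // even when f itself is inlined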
llvm-svn: 181981