Objective-C literals conceptually always create new objects, but the
compiler or runtime may optimize them (constant folding, singletons, etc.).
Comparing the addresses of these objects relies on that optimization
behavior, which is really an implementation detail.
In the case of == and !=, offer a fixit that rewrites the comparison as a
call to -isEqual:, if that method is available. The fixit is attached
directly to the error so that it is applied automatically.
Most of the time this is a newbie mistake, hence the fixit.
llvm-svn: 158230
This occurs when there are two insertions and the first one is so long that
the second fixit's column falls before the first fixit ends. The edits
themselves don't actually overlap, but their command-line previews do.
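As an illustrative sketch (the names are mine, not the diagnostic code's),
the collision is a property of the printed line only:

  // Two insertion fixits at columns C1 and C2 (C1 < C2) have disjoint
  // edits, but the rendered text of the first, of length Len1, can run
  // past the column where the second one starts.
  bool previewColumnsCollide(unsigned C1, unsigned Len1, unsigned C2) {
    return C1 + Len1 > C2;   // overlap on the preview line, not in the edits
  }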
llvm-svn: 158229
constexpr until we get to the end of the class definition. When that happens,
be sure to remember that the class actually does have a constexpr constructor.
This is a stopgap solution, which still doesn't cover the case of a class with
multiple copy constructors (only some of which are constexpr). We should be
performing constructor lookup when implicitly defining a constructor in order
to determine whether all constructors it invokes are constexpr.
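A minimal illustration (my own example, not from the patch) of why the
determination must wait for the end of the class:

  struct A {
    constexpr A() {}
  };

  struct B {
    B() = default;   // whether this is constexpr is unknowable here...
    A a;             // ...because it depends on members seen later
  };

  constexpr B b;     // valid only if B's defaulted ctor is remembered as constexpr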
llvm-svn: 158228
problem was that by moving instructions around inside the function, the pass
could accidentally move the iterator being used to advance over the function
too. Fix this by processing only the instruction the iterator currently
points to, and deferring instructions that might not be equal to the
iterator until later (later = after traversing the basic block; it could
also wait until after traversing the entire function, but that might make
the sets quite big).
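A hedged sketch of that iteration discipline (walkBlock and optimizeInst
are illustrative names, not the pass's actual code):

  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instruction.h"
  using namespace llvm;

  void optimizeInst(Instruction *I);   // hypothetical per-instruction rewrite

  // Only the instruction the iterator currently points to is processed in
  // place; any other flagged instruction stays in Pending and is handled
  // once the basic-block traversal is finished.
  void walkBlock(BasicBlock &BB, SmallPtrSetImpl<Instruction *> &Pending) {
    for (BasicBlock::iterator II = BB.begin(), E = BB.end(); II != E;) {
      Instruction *I = &*II++;   // advance before any rewriting happens
      if (Pending.erase(I))
        optimizeInst(I);         // must not touch instructions still ahead of II
    }
    // Anything left in Pending is processed here, after the walk, where
    // moving instructions around can no longer break the iterator.
  }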
Original commit message:
Grab-bag of reassociate tweaks. Unify handling of dead instructions and
instructions to reoptimize. Exploit this to more systematically eliminate
dead instructions (this isn't very useful in practice but is convenient for
analysing some testcase I am working on). No need for WeakVH any more: use
an AssertingVH instead.
llvm-svn: 158226
Thanks to Jakob's help, this now causes no new test suite failures!
Over the entire test suite, this gives an average 1% speedup. The largest speedups are:
SingleSource/Benchmarks/Misc/pi - 108%
SingleSource/Benchmarks/CoyoteBench/lpbench - 54%
MultiSource/Benchmarks/Prolangs-C/unix-smail/unix-smail - 50%
SingleSource/Benchmarks/Shootout/ary3 - 32%
SingleSource/Benchmarks/Shootout-C++/matrix - 30%
The largest slowdowns are:
MultiSource/Benchmarks/mediabench/gsm/toast/toast - -30%
MultiSource/Benchmarks/Prolangs-C/bison/mybison - -25%
MultiSource/Benchmarks/BitBench/uuencode/uuencode - -22%
MultiSource/Applications/d/make_dparser - -14%
SingleSource/Benchmarks/Shootout-C++/ary - -13%
In light of these slowdowns, additional profiling work is obviously needed!
llvm-svn: 158223
Marking these classes as non-allocatable allows CTR loop generation to
work correctly with the block placement passes, etc. These register
classes are currently used only by some unused TCRETURN patterns.
In future cleanup, these will be removed.
Thanks again to Jakob for suggesting this fix to the CTR loop problem!
llvm-svn: 158221
to addition.
We should not warn when the malloc size argument is an addition
containing a 'sizeof' operator - it is a common pattern for packing
values of different sizes into a buffer.
Ex:
uint8_t *buffer = (uint8_t*)malloc(dataSize + sizeof(length));
llvm-svn: 158219
The preprocessor's handling of diagnostic pushes and pops is stateful, so
encountering pragmas during a re-parse causes problems. HTMLRewrite
already filters out normal # directives, including #pragma, so it is
clearly not expected to interpret pragmas in this mode.
This fix adds a flag to Preprocessor to explicitly disable pragmas.
The "right" fix might be to separate pragma lexing from pragma
parsing so that we can throw away pragmas like we do preprocessor
directives, but right now it's important to get the fix in.
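A minimal sketch of how such a flag is used, assuming the accessors are
spelled getPragmasEnabled/setPragmasEnabled (check Preprocessor.h for the
exact names):

  #include "clang/Lex/Preprocessor.h"

  // Pragmas are still lexed and discarded, but never interpreted, while
  // the rewriter re-lexes the file; the previous state is then restored.
  void relexWithoutPragmas(clang::Preprocessor &PP) {
    bool SavedPragmas = PP.getPragmasEnabled();   // assumed accessor
    PP.setPragmasEnabled(false);
    // ... re-lex the input for syntax highlighting, as HTMLRewrite does ...
    PP.setPragmasEnabled(SavedPragmas);
  }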
Note that this has nothing to do with the "hack" of re-using the
input preprocessor in HTMLRewrite. Even if we someday copy the
preprocessor instead of re-using it, the copy would (and should) include
the diagnostic level tables and have the same problems.
llvm-svn: 158214
Bulk move of TargetInstrInfo implementation into
TargetInstrInfoImpl. This is dirty because the code isn't part of the
TargetInstrInfoImpl class, nor should it be, because the methods are
not target hooks. However, it's the current mechanism for keeping
libTarget useful outside the backend. You'll get a not-so-nice link
error if you invoke a TargetInstrInfo method that depends on CodeGen.
The TargetInstrInfoImpl class should probably be removed since it
doesn't really solve this problem.
To really fix this, we probably need separate interfaces for the
CodeGen and non-CodeGen sides of TargetInstrInfo.
llvm-svn: 158212
The pass itself works well, but something in the Machine* infrastructure
does not understand terminators which define registers. Without the ability
to use the block-placement pass, etc., this causes performance regressions
(and so it is turned off by default). Turning off the analysis also turns
off the problems with the Machine* infrastructure.
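A hedged sketch of the off-by-default mechanism (the actual flag name in
the patch may differ):

  #include "llvm/Support/CommandLine.h"
  using namespace llvm;

  // The pass stays compiled in, but only runs when explicitly requested.
  static cl::opt<bool> EnableCTRLoops(
      "enable-ppc-ctrloops",                       // assumed flag spelling
      cl::desc("Enable the PowerPC CTR loops analysis"),
      cl::init(false), cl::Hidden);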
llvm-svn: 158206
The code which tests for an induction operation cannot assume that any
ADDI instruction will have a register operand because the operand could
also be a frame index; for example:
%vreg16<def> = ADDI8 <fi#0>, 0; G8RC:%vreg16
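The defensive check this implies, in sketch form (isInductionReg is a
hypothetical helper standing in for the rest of the test):

  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/CodeGen/MachineOperand.h"
  using namespace llvm;

  bool isInductionReg(unsigned Reg);   // hypothetical rest of the test

  // An ADDI's first source operand may be a frame index rather than a
  // register, so check isReg() before calling getReg() on it.
  bool isInductionUpdate(const MachineInstr &MI) {
    const MachineOperand &MO = MI.getOperand(1);
    if (!MO.isReg())
      return false;                    // e.g. ADDI8 <fi#0>, 0
    return isInductionReg(MO.getReg());
  }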
llvm-svn: 158205
This pass is derived from the Hexagon HardwareLoops pass. The only
significant enhancement over the Hexagon pass is that PPCCTRLoops will also
attempt to delete the replaced add and compare operations if they are no
longer otherwise used. Also, an invalid preheader DebugLoc is not used.
llvm-svn: 158204
can move instructions within the instruction list. If the instruction just
happens to be the one the basic block iterator is pointing to, and it is
moved to a different basic block, then we get into an infinite loop due to
the iterator running off the end of the basic block (for some reason this
doesn't fire any assertions). Original commit message:
Grab-bag of reassociate tweaks. Unify handling of dead instructions and
instructions to reoptimize. Exploit this to more systematically eliminate
dead instructions (this isn't very useful in practice but is convenient for
analysing some testcase I am working on). No need for WeakVH any more: use
an AssertingVH instead.
llvm-svn: 158199
This was previously only done for executables and shared libraries, but not
for modules. As modules are essentially shared libraries (that need to be
dlopened explicitly), treating them the same as shared libraries seems
reasonable. This fixes the LLVM_BUILD_32_BITS build of Polly.
Contributed by: Ondra Hosek <ondra.hosek@gmail.com>
llvm-svn: 158195
AST: Give auto-synthesized ivars the location of the related
property (previously they had no source location). This allows them
to be indexed by libclang.
libclang: Make sure synthesized ivars are indexed before the methods that
may reference them.
Fixes rdar://11607001.
llvm-svn: 158189
a cache of address ranges for child sections,
accelerating lookups. This cache is built during
object file loading, and is then set in stone once
the object files are done loading. (In Debug builds,
we ensure that the cache is never invalidated after
that.)
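A hedged sketch of the cache's shape (illustrative types, not lldb's
actual ones): ranges are gathered while the object file loads, sorted
once, and then served by binary search.

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  struct Section;                       // stand-in for the real section type

  struct RangeEntry {
    uint64_t Base, Size;                // child section's address range
    Section *Child;
  };

  // Built during object-file loading, sorted by Base, then never mutated.
  std::vector<RangeEntry> Cache;

  Section *findChildSection(uint64_t Addr) {
    auto It = std::upper_bound(
        Cache.begin(), Cache.end(), Addr,
        [](uint64_t A, const RangeEntry &E) { return A < E.Base; });
    if (It == Cache.begin())
      return nullptr;                   // Addr precedes every cached range
    --It;                               // last entry with Base <= Addr
    return (Addr - It->Base < It->Size) ? It->Child : nullptr;
  }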
llvm-svn: 158188
CmpRuns.py can be used to compare issues from different analyzer runs.
Since it uses the issue line number to decide whether two issues are the
same, adding a new line at the beginning of a file causes all issues in
the file to be reported as new.
The hash will be an opaque value which could be used (along with the
function name) by CmpRuns to identify the same issues. This way, we only
fail to identify the same issue from two runs if the function it appears
in changes (not perfect, but much better than nothing).
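CmpRuns.py itself is Python; as a language-neutral sketch of the keying
change (rendered in C++ here for consistency, with names of my choosing):

  #include <functional>
  #include <string>
  #include <utility>

  // Issues are matched across runs by (function name, content hash) rather
  // than by (file, line), so inserting a line no longer renames every issue.
  using IssueKey = std::pair<std::string, std::size_t>;

  IssueKey makeIssueKey(const std::string &Function,
                        const std::string &IssueText) {
    return {Function, std::hash<std::string>{}(IssueText)};  // stand-in hash
  }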
llvm-svn: 158180
This patch will generate the following for integer ABS:
movl %edi, %eax
negl %eax
cmovll %edi, %eax
instead of:
movl %edi, %ecx
sarl $31, %ecx
leal (%rdi,%rcx), %eax
xorl %ecx, %eax
There exists a target-independent DAG combine for integer ABS, which converts
integer ABS to sar+add+xor. For X86, we match this pattern back to neg+cmov.
This is implemented in PerformXorCombine.
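For reference, the sar+add+xor identity the generic combine produces,
written out in C++ (my illustration, not the patch's code):

  #include <cstdint>

  // abs(x) via the shift/add/xor identity: m is 0 for x >= 0 and -1 for
  // x < 0, so (x + m) ^ m negates x exactly when it is negative.
  int32_t abs_sar_add_xor(int32_t x) {
    int32_t m = x >> 31;   // arithmetic shift assumed (true on x86)
    return (x + m) ^ m;    // same INT32_MIN caveat as plain -x
  }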
rdar://10695237
llvm-svn: 158175