when using Python < 2.7.0. This is the case on Snow Leopard, where the tools are always
installed in /Developer. This isn't a proper fix that actually figures out where Xcode
is installed, but it should be enough to fix the obvious problem on Snow Leopard.
llvm-svn: 160321
This code is very sensitive to the difference between "columns" as printed
and "bytes" (SourceManager columns). All variables are now named explicitly
and our assumptions are (hopefully) documented as both comment and assertion.
Whether parseable fixits should use byte offsets or Unicode character counts
is pending discussion on the mailing list; currently the implementation uses
bytes (and has no problems on lines containing multibyte characters).
This has been added to the user manual.
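To make the distinction concrete, here is a minimal sketch (not clang code; the source line and counts are invented for the example), assuming a UTF-8 encoded source line:
#include <stdio.h>
#include <string.h>
int main(void)
{
  /* "int café = 0;": the 'é' occupies two bytes in UTF-8, so this line is
     14 bytes long but only 13 printed columns wide, and a fixit offset
     expressed in bytes differs from one expressed in columns for anything
     to the right of the multibyte character. */
  const char *line = "int caf\xc3\xa9 = 0;";
  printf("bytes:   %zu\n", strlen(line)); /* prints 14 */
  printf("columns: %d\n", 13);
  return 0;
}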
<rdar://problem/11877454>
llvm-svn: 160319
(lldb) script import lldb.macosx.crashlog
(lldb) crashlog -i /tmp/*.crash
% symbolicate --crashed-only
This will symbolicate all of the crash logs, but only the crashed thread in each.
Also print out the crash log index number in the output of the interactive "image" command:
(lldb) script import lldb.macosx.crashlog
(lldb) crashlog -i /tmp/*.crash
% image LLDB.framework
...
This then allows you to accurately symbolicate a crash log by index when you have looked up an image of a specific version.
llvm-svn: 160316
CmpRuns can be used for static analyzer bug report comparison. However,
we want to make sure external users do not rely on the way bugs are
represented (plist files). Make sure that we have a user-friendly,
documented API for the CmpRuns script.
llvm-svn: 160314
#include <stdint.h>

uint32_t hi(uint64_t res)
{
    uint32_t hi = res >> 32;
    return !hi;
}
The LLVM IR looks like this:
define i32 @hi(i64 %res) nounwind uwtable ssp {
entry:
%lnot = icmp ult i64 %res, 4294967296
%lnot.ext = zext i1 %lnot to i32
ret i32 %lnot.ext
}
The optimizer has optimized away the right shift and the truncate, but the resulting
constant is too large to fit in the 32-bit immediate field. The x86 code is worse
as a result:
movabsq $4294967296, %rax ## imm = 0x100000000
cmpq %rax, %rdi
sbbl %eax, %eax
andl $1, %eax
This patch teaches the x86 lowering code to handle ult against a large immediate
with trailing zeros. It will issue a right shift and a truncate followed by
a comparison against a shifted immediate.
shrq $32, %rdi
testl %edi, %edi
sete %al
movzbl %al, %eax
It also handles a ugt comparison against a large immediate with its trailing bits
set, i.e. X > 0x0ffffffff -> (X >> 32) >= 1.
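As a rough illustration (not code from the patch; the function name is made up), the ugt case corresponds to C source along these lines:
#include <stdint.h>
/* Comparing against a constant whose low 32 bits are all ones: X > 0xffffffff
   is true exactly when the high 32 bits of X are nonzero, so it can be lowered
   as (X >> 32) >= 1, i.e. a shift plus a test of the high word instead of a
   movabsq/cmpq sequence. */
uint32_t above_u32_max(uint64_t x)
{
  return x > 0xffffffffULL;
}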
rdar://11866926
llvm-svn: 160312
The first variant accepts an immediate number as the second argument.
The second variant accepts a register operand as the second argument.
llvm-svn: 160307
In the added testcase the constant 55 was behind an AssertZext of type i1, and ComputeDemandedBits
reported that some of the bits were both known to be one and known to be zero, which is inconsistent.
Together with Michael Kuperstein <michael.m.kuperstein@intel.com>
llvm-svn: 160305
Mips shift instructions DSLL, DSRL and DSRA are transformed into
DSLL32, DSRL32 and DSRA32 respectively if the shift amount is between
32 and 63.
Here is a description of DSLL:
Purpose: Doubleword Shift Left Logical Plus 32
To execute a left-shift of a doubleword by a fixed amount--32 to 63 bits
Description: GPR[rd] <- GPR[rt] << (sa+32)
The 64-bit doubleword contents of GPR rt are shifted left, inserting
zeros into the emptied bits; the result is placed in
GPR rd. The bit-shift amount in the range 0 to 31 is specified by sa.
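For illustration (a sketch of the selection rule, not the backend code itself), a 64-bit shift whose amount falls in the 32 to 63 range is emitted as the 32-variant with the amount reduced by 32:
#include <stdint.h>
/* The shift amount 40 lies in [32, 63], so per the rule above this would be
   emitted as dsll32 $rd, $rt, 8 (8 == 40 - 32) rather than a plain dsll. */
uint64_t shl40(uint64_t x)
{
  return x << 40;
}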
This patch implements the direct object output of these instructions.
llvm-svn: 160277
1. FileCheck-ize epilogue.ll and allow another asm instruction to restore %rsp.
2. Remove a check in widen_arith-3.ll that was matching an instruction in the epilogue instead of
the vector add.
llvm-svn: 160274