When an extend more than doubles the size of the elements (e.g., a zext
from v16i8 to v16i32), the normal legalization method of splitting the
vectors runs into problems: by the time the destination vector type is
legal, the source vector type is illegal. The end result is that the
operation often becomes scalarized, with the typical horrible performance. For
example, on x86_64, the simple input of:
define void @bar(<16 x i8> %a, <16 x i32>* %p) nounwind {
%tmp = zext <16 x i8> %a to <16 x i32>
store <16 x i32> %tmp, <16 x i32>*%p
ret void
}
Generates:
.section __TEXT,__text,regular,pure_instructions
.section __TEXT,__const
.align 5
LCPI0_0:
.long 255 ## 0xff
.long 255 ## 0xff
.long 255 ## 0xff
.long 255 ## 0xff
.long 255 ## 0xff
.long 255 ## 0xff
.long 255 ## 0xff
.long 255 ## 0xff
.section __TEXT,__text,regular,pure_instructions
.globl _bar
.align 4, 0x90
_bar:
vpunpckhbw %xmm0, %xmm0, %xmm1
vpunpckhwd %xmm0, %xmm1, %xmm2
vpmovzxwd %xmm1, %xmm1
vinsertf128 $1, %xmm2, %ymm1, %ymm1
vmovaps LCPI0_0(%rip), %ymm2
vandps %ymm2, %ymm1, %ymm1
vpmovzxbw %xmm0, %xmm3
vpunpckhwd %xmm0, %xmm3, %xmm3
vpmovzxbd %xmm0, %xmm0
vinsertf128 $1, %xmm3, %ymm0, %ymm0
vandps %ymm2, %ymm0, %ymm0
vmovaps %ymm0, (%rdi)
vmovaps %ymm1, 32(%rdi)
vzeroupper
ret
So instead, when the input vector is already legal, we can check whether
there are legal types that let us split more cleverly, so that we never
turn the source into an illegal type. If the extend more than doubles the
element size of the input, we check whether:
- the number of vector elements is even,
- the source type is legal,
- the type of a split source is illegal,
- the type of an extended (by doubling element size) source is legal, and
- the type of that extended source when split is legal.
If the conditions are met, instead of just splitting both the
destination and the source types, we create an extend that only goes up
one "step" (doubling the element width), and then continue legalizing the
rest of the operation normally. The result is that this acts as a new,
more efficient termination condition for the loop of "split the
operation until the destination type is legal."
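The check maps fairly directly onto code. Below is a simplified,
self-contained sketch of that condition, using a toy value-type struct and
a caller-supplied legality predicate rather than the actual
EVT/TargetLowering APIs:

#include <cassert>
#include <functional>

struct VT { unsigned NumElts, EltBits; };

// Decide whether an extend from Src to Dest should be legalized by first
// extending one "step" (doubling the element width) instead of splitting
// both the source and the destination in half.
bool shouldExtendOneStep(VT Src, VT Dest,
                         const std::function<bool(VT)> &isLegal) {
  if (Dest.EltBits <= 2 * Src.EltBits)
    return false;                                  // not "more than doubling"
  VT SplitSrc     = {Src.NumElts / 2, Src.EltBits};
  VT WidenedSrc   = {Src.NumElts,     Src.EltBits * 2};
  VT SplitWidened = {Src.NumElts / 2, Src.EltBits * 2};
  return Src.NumElts % 2 == 0 &&   // even number of elements
         isLegal(Src) &&           // the source is already legal
         !isLegal(SplitSrc) &&     // a naive split would go illegal
         isLegal(WidenedSrc) &&    // the one-step extend is legal
         isLegal(SplitWidened);    // and its halves are legal too
}

int main() {
  // Toy legality rule standing in for a target with 128- and 256-bit vectors.
  auto isLegal = [](VT T) {
    unsigned Bits = T.NumElts * T.EltBits;
    return Bits == 128 || Bits == 256;
  };
  VT Src  = {16, 8};   // v16i8
  VT Dest = {16, 32};  // v16i32
  assert(shouldExtendOneStep(Src, Dest, isLegal)); // extend to v16i16 first
  return 0;
}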
With this change, the above example now compiles to:
_bar:
vpxor %xmm1, %xmm1, %xmm1
vpunpcklbw %xmm1, %xmm0, %xmm2
vpunpckhwd %xmm1, %xmm2, %xmm3
vpunpcklwd %xmm1, %xmm2, %xmm2
vinsertf128 $1, %xmm3, %ymm2, %ymm2
vpunpckhbw %xmm1, %xmm0, %xmm0
vpunpckhwd %xmm1, %xmm0, %xmm3
vpunpcklwd %xmm1, %xmm0, %xmm0
vinsertf128 $1, %xmm3, %ymm0, %ymm0
vmovaps %ymm0, 32(%rdi)
vmovaps %ymm2, (%rdi)
vzeroupper
ret
This generalizes a custom lowering that was added a while back to the
ARM backend. That lowering is no longer necessary, and is removed. The
testcases for it, however, provide excellent ARM tests for this change
and so remain.
rdar://14735100
llvm-svn: 193727
User-vended by-type formatters would still prevail over these hardcoded ones.
For the time being, while the infrastructure is there, no such formatters exist.
This can be useful for cases such as expanding vtables for C++ class pointers, where there is no clear-cut notion of a matching typename and the feature is low-level enough that it makes sense for the debugger core to be vending it.
llvm-svn: 193724
CodeGenTypes.h instantiates llvm::FoldingSet<> with CGFunctionInfo,
and VC++ doesn't like the static_cast from FoldingSetImpl::Node* to
CGFunctionInfo*, since it hasn't seen the definition of CGFunctionInfo
and so can't tell that it inherits from FoldingSetImpl::Node.
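As a standalone illustration of the underlying C++ rule (hypothetical
types, not the actual LLVM classes), a static_cast from a base-class
pointer to a derived-class pointer is only accepted once the derived
class's definition, and hence its inheritance, is visible:

struct Node {};

// If Info were only forward-declared here ("struct Info;"), the static_cast
// in downcast() would be rejected, because the compiler could not see that
// Info derives from Node. Making the full definition visible fixes it.
struct Info : Node {
  int Value;
};

Info *downcast(Node *N) { return static_cast<Info *>(N); }

int main() {
  Info I;
  I.Value = 42;
  return downcast(&I)->Value == 42 ? 0 : 1;
}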
llvm-svn: 193722
The function verifyFunction() in lib/IR/Verifier.cpp misses some
calls. It creates a temporary FunctionPassManager that will run a
single Verifier pass. Unfortunately, FunctionPassManager is not a
PassManager and does not call doInitialization() and doFinalization()
by itself. Verifier does important tasks in doInitialization() such as
collecting type information used to check DebugInfo metadata and
doFinalization() does some additional checks. Therefore these checks
were missed and debug info couldn't be verified at all; it just
crashed if the function had any.
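The shape of the fix, sketched below assuming the legacy
FunctionPassManager and createVerifierPass() APIs of the time (not the
verbatim patch):

#include "llvm/Analysis/Verifier.h"
#include "llvm/IR/Function.h"
#include "llvm/PassManager.h"

// Run the Verifier over a single function, making sure the pass's
// doInitialization()/doFinalization() hooks actually fire; a bare
// FunctionPassManager will not call them on its own.
void verifyFunctionWithInitAndFini(llvm::Function &F) {
  llvm::FunctionPassManager FPM(F.getParent());
  FPM.add(llvm::createVerifierPass(llvm::PrintMessageAction));
  FPM.doInitialization();  // lets the Verifier collect module-level type info
  FPM.run(F);
  FPM.doFinalization();    // runs the Verifier's final cross-checks
}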
verifyFunction() is currently not used in llvm except when the -debug
option is enabled, and in unittests/IR/VerifierTest.cpp.
VerifierTest had to be changed to create the function in a module from
which the type debug info can be collected.
Patch by Michael Kruse.
llvm-svn: 193719
With this patch llvm produces a weak_def_can_be_hidden for linkonce_odr
symbols if they are also unnamed_addr or don't have their address taken.
There is not a lot of documentation about .weak_def_can_be_hidden, but
from the old discussion about linkonce_odr_auto_hide and the name of
the directive this looks correct: these symbols can be hidden.
Testing this with the ld64 in Xcode 5 linking clang reduces the number of
exported symbols from 21053 to 19049.
llvm-svn: 193718
CodeGenABITypes is a wrapper built on top of CodeGenModule that exposes
some of the functionality of CodeGenTypes (held by CodeGenModule),
specifically methods that determine the LLVM types appropriate for
function argument and return values.
In addition to CodeGenABITypes.h, CGFunctionInfo.h is introduced, and the
definitions of ABIArgInfo, RequiredArgs, and CGFunctionInfo are moved
into this new header from the private headers ABIInfo.h and CGCall.h.
Exposing this functionality is one part of making it possible for LLDB
to determine the actual ABI locations of function arguments and return
values, making it possible for it to determine this for any supported
target without hard-coding ABI knowledge in the LLDB code.
llvm-svn: 193717
Fixed the expression parser to be able to iterate across all of the function name matches it finds when it is looking up the address of a function that the IR refers to. Also taught it to deal with reexported symbols.
llvm-svn: 193716
Inlined a copy of cxa_demangle.cpp from:
http://llvm.org/svn/llvm-project/libcxxabi/trunk/src/cxa_demangle.cpp
This is for systems that don't have demangling built in, and for systems that don't want to use the installed version. Defining LLDB_USE_BUILTIN_DEMANGLER in your build system allows you to use the built-in demangler. This setting is currently enabled automatically for Windows builds.
llvm-svn: 193708
startswith_lower is occasionally useful and I think worth adding.
endswith_lower is added for completeness.
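A brief usage sketch of the new StringRef helpers (illustrative, not part
of the commit):

#include "llvm/ADT/StringRef.h"
#include <cassert>

int main() {
  llvm::StringRef S("Content-Length: 42");
  // Case-insensitive prefix/suffix checks.
  assert(S.startswith_lower("content-length"));
  assert(S.endswith_lower(": 42"));
  return 0;
}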
Differential Revision: http://llvm-reviews.chandlerc.com/D2041
llvm-svn: 193706
Fixing a problem where ValueObject::GetPointeeData() would not accept "partial" valid reads (i.e. asking for 10 items and getting only 5 back)
While suboptimal, this situation is not a flat-out failure and could well be caused by legit scenarios, such as hitting a page boundary
Among others, this allows data formatters to print char* buffers allocated under libgmalloc
llvm-svn: 193704
Currently, instead of showing up as a link, it is rendered as
...of FunctionPass <writing-an-llvm-pass-FunctionPass>. The...
PR17733. Patch by Tay Ray Chuan!
llvm-svn: 193698
Fixing this Windows build error:
..\lib\Target\Mips\MipsSEISelLowering.cpp(997) : error C2027: use of undefined type 'llvm::raw_ostream'
llvm-svn: 193696
Also corrected the definition of the intrinsics for these instructions (the
result register is also the first operand), and added intrinsics for bsel and
bseli to clang (they already existed in the backend).
These four operations are mostly equivalent to bsel and bseli (the difference
is which operand is tied to the result). As a result, some of the tests changed
as described below.
bitwise.ll:
- bsel.v test adapted so that the mask is unknown at compile-time. This stops
it emitting bmnzi.b instead of the intended bsel.v.
- The bseli.b test now tests the right thing, namely the case where one of the
values is a uimm8, rather than the case where the condition is a uimm8 (which
is covered by bmnzi.b).
compare.ll:
- bsel.v tests now (correctly) emit bmnz.v instead of bsel.v because this
is the same operation (see MSA.txt).
i8.ll:
- CHECK-DAG-ized test.
- bmzi.b test now (correctly) emits equivalent bmnzi.b with swapped operands
because this is the same operation (see MSA.txt).
- bseli.b still emits bseli.b though because the immediate makes it
distinguishable from bmnzi.b.
vec.ll:
- CHECK-DAG-ized test.
- bmz.v tests now (correctly) emit bmnz.v with swapped operands (see
MSA.txt).
- bsel.v tests now (correctly) emit bmnz.v with swapped operands (see
MSA.txt).
llvm-svn: 193693
This problem was found and fixed by José Fonseca in March 2011 for
SmallPtrSet, committed r128566. But as far as I can tell, all other
llvm hash tables retain the same problem: the bucket count can grow
without bound while size() remains near constant by repeated
insert/erase cycles that tend to fill the container with tombstones.
Here is a demo that has been reduced to a trivial case:
#include "llvm/ADT/DenseSet.h"

int main() {
  llvm::DenseSet<unsigned> d;
  for (unsigned i = 0; i < 0xFFFFFFF; ++i) {
    d.insert(i);
    d.erase(i);
  }
}
While the container size() never grows above 1, the bucket count grows
like this:
nb = 64
nb = 128
nb = 256
nb = 512
nb = 1024
nb = 2048
nb = 4096
nb = 8192
nb = 16384
nb = 32768
nb = 65536
nb = 131072
nb = 262144
nb = 524288
nb = 1048576
nb = 2097152
nb = 4194304
nb = 8388608
nb = 16777216
nb = 33554432
nb = 67108864
nb = 134217728
nb = 268435456
The above program currently consumes a few GB of RAM. This patch brings
the memory consumption down by several orders of magnitude, and keeps
the bucket count at 64 for the above test.
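A schematic sketch of the kind of policy involved (illustrative only, not
the actual DenseMap code): when an insertion finds the table out of free
slots, grow only if live entries genuinely crowd it; otherwise rehash at
the current size, which clears out the tombstones.

#include <cassert>

// Pick the bucket count to rehash to when a table runs out of free slots
// (live entries plus tombstones fill it up).
unsigned nextBucketCount(unsigned NumLiveEntries, unsigned NumBuckets) {
  if (NumLiveEntries * 4 >= NumBuckets * 3)
    return NumBuckets * 2;   // genuinely full of live entries: grow
  return NumBuckets;         // mostly tombstones: rehash at the same size
}

int main() {
  assert(nextBucketCount(1, 64) == 64);    // the churn case from the demo
  assert(nextBucketCount(48, 64) == 128);  // actually full: grow
  return 0;
}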
llvm-svn: 193689
This required correcting the definition of the bins[lr]i intrinsics because
the result is also the first operand.
It also required removing the (arbitrary) check for 32-bit immediates in
MipsSEDAGToDAGISel::selectVSplat().
Currently using binsli.d with 2 bits set in the mask doesn't select binsli.d
because the constant is legalized into a ConstantPool. Similar things can
happen with binsri.d with more than 10 bits set in the mask. The resulting
code when this happens is correct but not optimal.
llvm-svn: 193687
It's possible to embed the frontend in applications that haven't initialized
backend targets so we need to handle this condition gracefully.
llvm-svn: 193685
(or (and $a, $mask), (and $b, $inverse_mask)) => (vselect $mask, $a, $b).
where $mask is a constant splat. This allows bitwise operations to make use
of bsel.
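The equivalence being matched, shown on scalars for clarity (illustrative
only, not backend code): with a constant mask m, (a & m) | (b & ~m) takes
bits from a where m is set and from b where it is clear, i.e. a
per-element select.

#include <cassert>
#include <cstdint>

// Bitwise select: bits of a where m is 1, bits of b where m is 0.
uint32_t bitwiseSelect(uint32_t m, uint32_t a, uint32_t b) {
  return (a & m) | (b & ~m);
}

int main() {
  assert(bitwiseSelect(0x00FF00FFu, 0xAAAAAAAAu, 0x55555555u) == 0x55AA55AAu);
  return 0;
}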
It's also a stepping stone towards matching bins[lr], and bins[lr]i from
normal IR.
Two sets of similar tests have been added in this commit. The bsel_* functions
test the case where binsri cannot be used. The binsr_*_i functions will
start to use the binsri instruction in the next commit.
llvm-svn: 193682
splat.d is implemented but this subtest is currently disabled. This is because
it is difficult to match the appropriate IR on MIPS32. There is a patch under
review that should help with this so I hope to enable the subtest soon.
llvm-svn: 193680