The output of llvm-dwarfdump and llvm-objdump now reports the endianness
used when the object files were generated.
Patch by Charlie Turner.
llvm-svn: 219110
These will make it easier to test further changes to the
code generation and optimization pipelines as those are
moved to subtargets initialized with target features and a target CPU.
llvm-svn: 219106
It was just calling a bunch of DwarfUnit functions anyway, as can be
seen by the simplification of removing "TheCU" from all the function
calls in the implementation.
llvm-svn: 219103
By leaving these members out of the member list, we avoid emitting them
into type unit definitions, while still allowing the
definition/declaration to be injected into the compile unit as expected.
llvm-svn: 219101
By leaving these members out of the member list, we avoid emitting them
into type unit definitions, while still allowing the
definition/declaration to be injected into the compile unit as expected.
llvm-svn: 219100
group's interface to all of the implementations of that analysis group.
The groups themselves can and do manage this anyway; the pass registry
needn't involve itself.
llvm-svn: 219097
pass registry.
This style of registry is somewhat questionable, but it being
non-monotonic is crazy. No one is (or should be) unloading DSOs with
passes and unregistering them here. I've checked with a few folks and
I don't know of anyone using this functionality or any important use
case where it is necessary.
llvm-svn: 219096
Previously, we would not check the target machine type against the module
(object) machine type. Add a check to ensure that we do not attempt to use
an object file built for a different target architecture.
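A hedged sketch of the kind of guard this adds (names hypothetical, not
the actual code):

  #include <cstdint>

  // Returns true when the object file was built for the same machine
  // the target was configured for; callers should reject a mismatch.
  bool machineTypesMatch(uint16_t ObjMachineType, uint16_t TargetMachineType) {
    return ObjMachineType == TargetMachineType;
  }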
This change identified a couple of tests which were incorrectly mixing up
architecture types, using x86 input for an x64 target. Adjust the tests
appropriately. The renaming of the input and the architectures covers the
changes to the existing tests.
One significant change to the existing tests is that the newly added test input
for x64 uses the correct user label prefix for X64.
llvm-svn: 219093
In particular, it addresses cases where Reassociate breaks Subtracts but then fails to optimize combinations like I1 + -I2, where I1 and I2 have the same rank and are identical.
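A hedged source-level illustration of the pattern (not from the patch):
once Reassociate breaks the subtracts, the two operands are negations of
each other, leaving I1 + -I1, which should fold to zero.

  int f(int a, int b) {
    // (a - b) and (b - a) negate each other, so the sum is 0.
    return (a - b) + (b - a);
  }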
Patch by Dmitri Shtilman.
llvm-svn: 219092
This trades a (register-renamer-friendly) movaps for a floating point
/ integer domain cross. That is a very bad trade, even on architectures
where domain crossing is relatively fast. On any chip where there is
even a cycle stall, this is a Very Bad Idea. It doesn't even seem likely
to cause a spill to be introduced because the reason for the copy is to
destructively shuffle in place.
Thanks to Ben Kramer for fixing a bug in this code that my new shuffle
lowering exposed and highlighting that perhaps it should just go away.
=]
llvm-svn: 219090
Rather than a series of cascading ifs, use a switch statement to convert the
error code to a string. This has the benefit of allowing the compiler to inform
us if we ever add a new error code but fail to update the string representation.
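A minimal sketch of the technique with a hypothetical error enum (not
the real InputGraphError definition): a covered switch with no default
case lets the compiler's -Wswitch warning flag any enumerator that is
missing a string.

  enum class DemoError { Success, BadMagic, TruncatedFile };

  const char *toString(DemoError E) {
    switch (E) {
    case DemoError::Success:       return "success";
    case DemoError::BadMagic:      return "bad magic";
    case DemoError::TruncatedFile: return "truncated file";
    }
    return "unknown"; // unreachable while the switch stays exhaustive
  }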
Add in stringified versions for a couple of missing InputGraphErrors.
llvm-svn: 219089
In order to support more than x86/x86_64, we need to change the behaviour to use
the actual machine type rather than checking the bitness and assuming that we
are on X86. This replaces the use of is64bit in applyAllRelocations with a
check on the machine type. This will enable adding support for handling ARM
relocations.
Rename the existing applyRelocation methods along the same lines, making
clear which types of relocations they process.
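A hedged sketch of the resulting dispatch (handler names hypothetical;
the constants are LLVM's COFF machine types):

  #include "llvm/Support/COFF.h"
  #include <cstdint>

  void applyAllRelocations(uint16_t MachineType) {
    switch (MachineType) {
    case llvm::COFF::IMAGE_FILE_MACHINE_I386:
      // applyRelocationsX86(...);    // 32-bit x86 relocations
      break;
    case llvm::COFF::IMAGE_FILE_MACHINE_AMD64:
      // applyRelocationsX86_64(...); // x86-64 relocations
      break;
    case llvm::COFF::IMAGE_FILE_MACHINE_ARMNT:
      // a case like this is where ARM relocation support slots in
      break;
    }
  }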
llvm-svn: 219088
that are unused.
This allows the combiner to delete math feeding shuffles where the math
isn't actually necessary. This improves some of the vperm2x128 tests
that regressed when the vector shuffle lowering started actually
generating vperm instructions rather than forcibly decomposing them.
Sadly, this isn't enough to get this *really* right because we still
form a completely unnecessary permutation. To fix that, we also need to
fold shuffles which just rearrange concatenated or inserted subvectors.
llvm-svn: 219086
new vector shuffle lowering.
This is loosely based on a patch by Marius Wachtler to the PR (thanks!).
I refactored it a bit to use std::count_if and a mutable array ref but
the core idea was exactly right. I also added some direct testing of
this case.
I believe PR21137 is now the only remaining regression.
llvm-svn: 219081
shuffles using AVX and AVX2 instructions. This fixes PR21138, one of the
few remaining regressions impacting benchmarks from the new vector
shuffle lowering.
You may note that it "regresses" many of the vperm2x128 test cases --
these were actually "improved" by the naive lowering that the new
shuffle lowering previously did. This regression gave me fits. I had
this patch ready-to-go about an hour after flipping the switch but
wasn't sure how to have the best of both worlds here and thought the
correct solution might be a completely different approach to lowering
these vector shuffles.
I'm now convinced this is the correct lowering and the missed
optimizations shown in vperm2x128 are actually due to missing
target-independent DAG combines. I've even written most of the needed
DAG combine and will submit it shortly, but this part is ready and
should help some real-world benchmarks out.
llvm-svn: 219079
This resolves the issues with delinearized accesses that might alias, so
delinearization no longer deactivates runtime alias checks.
Differential Revision: http://reviews.llvm.org/D5614
llvm-svn: 219078
This class stores information about the arrays in the SCoP. For each
base pointer in the SCoP, one object is created that records the type
and dimension sizes of the array. The objects can be obtained via
the SCoP, a MemoryAccess or the isl_id associated with the output
dimension of a MemoryAccess (the description of what is accessed).
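A minimal sketch of what such an object records (field names
hypothetical, not Polly's actual interface):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/Analysis/ScalarEvolution.h"
  #include "llvm/IR/Type.h"
  #include "llvm/IR/Value.h"

  // One descriptor per base pointer in the SCoP.
  struct ArrayInfoSketch {
    const llvm::Value *BasePtr;  // base pointer identifying the array
    llvm::Type *ElementType;     // type of the accessed elements
    llvm::SmallVector<const llvm::SCEV *, 4> DimSizes; // per-dimension sizes
  };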
So far we use the information in the IslExprBuilder to create the
right base type before indexing into the base array. This fixes the
bug http://llvm.org/bugs/show_bug.cgi?id=21113 (both test cases are
included). On top of that we can now build runtime alias checks for
delinearized arrays as the dimension sizes are also part of the
ScopArrayInfo objects.
Differential Revision: http://reviews.llvm.org/D5613
llvm-svn: 219077
This changes the scope discriminator to start at '1' instead of '0'.
Symbol table diffing, used for ABI compatibility testing, kept flagging
these as false positives.
llvm-svn: 219075
Summary:
This adds support for the C++11 feature, thread_local global variables.
The ABI Clang implements is an improvement over the MSVC ABI. Sadly,
further improvements could be made, but not without sacrificing ABI
compatibility. A small usage example follows the implementation notes
below.
The feature is implemented as follows:
- All thread_local initialization routines are pointed to from the
.CRT$XDU section.
- All non-weak thread_local variables have their initialization routines
called from a single function instead of getting their own .CRT$XDU
section entry. This is done to open up optimization opportunities to
the compiler.
- All weak thread_local variables have their own .CRT$XDU section entry.
This entry is in a COMDAT with the global variable it is initializing;
this ensures that we will initialize the global exactly once.
- Destructors are registered in the initialization function using
__tlregdtor.
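A minimal usage example of the feature being implemented (not from the
patch):

  #include <string>

  // Each thread that touches tls_name runs the initializer once; its
  // destructor is registered through __tlregdtor as described above.
  thread_local std::string tls_name = "per-thread value";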
Differential Revision: http://reviews.llvm.org/D5597
llvm-svn: 219074
Joerg suggested on IRC that I look at generalizing the logic from r219067 to
handle more general redundancies (like removing an assume(x > 3) dominated by
an assume(x > 5)). The way to do this would be to ask ValueTracking to
determine the value of the i1 argument. It turns out that ValueTracking is not
very good at this right now (although it does get the trivial redundancy case)
because it does not understand ICmps. Nevertheless, the resulting code in
InstCombine is simpler than r219067, so we might as well do it now.
llvm-svn: 219070
For any @llvm.assume intrinsic, if there is another which dominates it and uses
the same condition, then it is redundant and can be removed. While this does
not alter the semantics of the @llvm.assume intrinsics, it makes subsequent
handling more efficient (and the resulting IR easier to read).
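A hedged source-level illustration using Clang's __builtin_assume, which
lowers to @llvm.assume (this assumes earlier passes have commoned the two
identical comparisons into a single i1 condition value):

  void f(int x) {
    __builtin_assume(x > 5);
    // ... code that does not change x ...
    __builtin_assume(x > 5); // same dominated condition: now removable
  }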
llvm-svn: 219067
Fix http://llvm.org/PR21158 by adding a cast to unsigned long long, so
that the comparison is between two unsigned long longs instead of
between a bool and an unsigned long long.
  if (getAsUnsignedInteger(*this, Radix, ULLVal) ||
      static_cast<unsigned long long>(static_cast<T>(ULLVal)) != ULLVal)
llvm-svn: 219065