When using the VFS, we want the virtual location of the header when searching
for a framework module, since that is the path with the correct directory
structure for the module.
I'll add a regression test once I finish reducing the larger one I have.
llvm-svn: 208901
Teach the CMake build for the sanitizers to pass the C++ compilation and
executable linking flags through from the host CMake configuration. We pass
the target flags afterward, allowing them to override the host flags as
needed. This is particularly important when the flags direct Clang, even the
just-built Clang, toward the standard library, linker, and other tools to use.
llvm-svn: 208896
Added target-specific combine rules to fold blend intrinsics according
to the following rules:
1) fold(blend A, A, Mask) -> A;
2) fold(blend A, B, <allZeros>) -> A;
3) fold(blend A, B, <allOnes>) -> B.
Added two new tests to verify that the new folding rules work for all
the optimized blend intrinsics.
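For illustration only (not part of the patch), the same identities can be seen
at the C level; a minimal sketch, assuming SSE4.1 and using _mm_blendv_epi8 as
a representative blend intrinsic:
  #include <smmintrin.h>  // SSE4.1 blend intrinsics

  // Rule 1: both inputs are the same value, so the mask is irrelevant.
  __m128i blend_same(__m128i a, __m128i mask) {
    return _mm_blendv_epi8(a, a, mask);                 // always a
  }
  // Rule 2: an all-zeros mask always selects the first operand.
  __m128i blend_zeros(__m128i a, __m128i b) {
    return _mm_blendv_epi8(a, b, _mm_setzero_si128());  // always a
  }
  // Rule 3: an all-ones mask always selects the second operand.
  __m128i blend_ones(__m128i a, __m128i b) {
    return _mm_blendv_epi8(a, b, _mm_set1_epi8(-1));    // always b
  }
Compile with, e.g., clang -O2 -msse4.1; whether a given form actually gets
folded depends on which blend intrinsics the combine covers.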
llvm-svn: 208895
We now use SReg_* for integer types and VReg_* for floating-point types.
This should help simplify the SIFixSGPRCopies pass and no longer causes
ISel to insert a COPY after terminator instructions that output a value.
This change is covered by existing tests.
llvm-svn: 208888
Summary:
Make check filtering more intuitive and easier to use. Remove
-disable-checks and change the format of -checks= to a comma-separated list of
globs with an optional '-' prefix to denote exclusion. The -checks= option is
now cumulative, so it modifies the defaults rather than overriding them. Each
glob adds checks to or removes them from the current set, so the filter can be
refined or overridden by adding more globs.
Example:
The default value for -checks= is
'*,-clang-analyzer-alpha*,-llvm-include-order,-llvm-namespace-comment,-google-*',
which allows all checks except for the ones matching clang-analyzer-alpha* and
the other patterns specified with a leading '-'. To allow all google-* checks,
one can write:
clang-tidy -checks=google-* ...
If one needs only the google-* checks, everything else must first be removed (-*):
clang-tidy -checks=-*,google-*
etc.
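Similarly, relying on the cumulative behaviour described above, the alpha
analyzer checks can be re-enabled on top of the defaults with:
clang-tidy -checks=clang-analyzer-alpha* ...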
I'm not sure whether anything needs to change in the docs, so I haven't
touched them yet.
Reviewers: klimek, alexfh
Reviewed By: alexfh
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D3770
llvm-svn: 208883
Previously, TableGen assumed that every aliased operand consumed precisely 1
MachineInstr slot (this was reasonable because until a couple of days ago,
nothing more complicated was eligible for printing).
This allows a couple more ARM64 aliases to print, so we can remove the special
code.
On the X86 side, I've gone for explicit AT&T size specifiers as the default, so
I turned off a few of the aliases that would otherwise have just started
printing.
llvm-svn: 208880
In all cases, if a "mov" alias exists, it is the canonical form of the
instruction. Now that TableGen can support aliases containing syntax variants,
we can enable them and improve the quality of the asm output.
llvm-svn: 208874
Previously, we ignored the difference between V64 and V128 when parsing
assembly: they both got mapped to registers in the FPR128 class. This is
basically harmless at the moment because they both print and encode the same
way. However, it will affect the printing of aliases.
llvm-svn: 208866
Summary:
No support yet for symbols in place of the immediate, since that requires new
relocations.
Depends on D3671
Reviewers: jkolek, zoran.jovanovic, vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3689
llvm-svn: 208858
Teach constant folding to look through bitcasts much more effectively when
trying to constant fold a load of a constant. Previously, we only handled
bitcasts by trying to find a totally generic byte representation of the
constant and using that. Now, we look through the bitcast to see what constant
we might fold the load into, and then try to form a constant expression cast
of the found value that would be equivalent to loading the value.
You might wonder why on earth this actually matters. Well, it turns out
that the Itanium ABI causes us to create a single array for a vtable
where the first elements are virtual base offsets, followed by the
virtual function pointers. Because the array is homogeneous, the element
type is consistently i8*, and we inttoptr the virtual base offsets into
the initial elements.
Then constructors bitcast these pointers to i64 pointers prior to
loading them. Boom, no more constant folding of virtual base offsets.
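As a rough C++ illustration (not taken from the patch) of the kind of source
that produces this IR shape:
  // Any class with a virtual base gets vbase offsets in its vtable array.
  struct A { virtual ~A() {} long n; };
  struct B : virtual A { void touch(); };

  // Reaching the virtual base's member through a B* loads A's vbase offset
  // from B's vtable, i.e. one of those inttoptr'd initial i8* elements.
  void B::touch() { n = 42; }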
This is the first fix to LLVM to address the *insane* performance problem Eric
Niebler discovered with Clang on his range comprehensions[1]. There is
more to come, though; this doesn't *really* fix the problem fully.
[1]: http://ericniebler.com/2014/04/27/range-comprehensions/
llvm-svn: 208856
Summary:
They aren't implemented for any ISA at the moment.
Depends on D3670
Reviewers: jkolek, zoran.jovanovic, vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3671
llvm-svn: 208855