We build a NestedNameSpecifier that records the CXXRecordDecl in which
__super appeared. Name lookup is performed in all base classes of the
recorded CXXRecordDecl. Use of __super is allowed only inside class and
member function scope.
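As a minimal illustration (class and member names here are hypothetical, not from the patch):

  struct Base1 { void foo(int); };
  struct Base2 { void foo(float); };
  struct Derived : Base1, Base2 {
    void foo(int x) {
      __super::foo(x);  // lookup runs in all bases of Derived: Base1 and Base2
    }
  };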
llvm-svn: 218484
Reviewed at http://reviews.llvm.org/D4527
This patch is part of an effort to implement a more generic debugging API, as proposed in http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-July/074656.html, with the first part reviewed at http://reviews.llvm.org/D4466.

It adds several new APIs: __asan_report_present, __asan_get_report_{pc,bp,sp,address,type,size,description}, and __asan_locate_address. The first of these returns whether an asan report has happened yet; the __asan_get_report_* functions return that report's PC, BP, SP, address, access type (read/write), access size, and bug description (e.g. "heap-use-after-free").

__asan_locate_address takes a pointer and tries to locate it, i.e. to say whether it is a heap pointer, a global, a stack address, or a pointer into the shadow memory. If it is a global or a stack address, it also tries to return the variable's name, address, and size; if it is a heap pointer, the chunk address and size.

Generally these should serve as an alternative to "asan_describe_address", which only returns all of this data in text form. Having an API to get these data makes it possible to write debugging scripts/extensions that show additional information about a variable/expression/pointer.

Test cases are in test/asan/TestCases/debug_locate.cc and test/asan/TestCases/debug_report.cc.
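A hedged sketch of how a debugging extension might use these (illustrative usage only; see sanitizer/asan_interface.h for the authoritative declarations):

  #include <sanitizer/asan_interface.h>

  void describe(void *ptr) {
    if (__asan_report_present()) {
      void *pc = __asan_get_report_pc();
      void *addr = __asan_get_report_address();
      const char *bug = __asan_get_report_description();  // e.g. "heap-use-after-free"
      // ... surface pc/addr/bug in the debugger UI ...
    }
    char name[128];
    void *region;
    size_t size;
    const char *kind =
        __asan_locate_address(ptr, name, sizeof(name), &region, &size);
    // kind categorizes ptr (heap/global/stack/shadow); name, region and size
    // are filled in when the address can be attributed to a variable or chunk.
  }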
llvm-svn: 218481
No functional change.
I initially thought that pulling the Pat<> into the instruction pattern was
not possible because it was doing a transform on the index in order to convert
it from a per-element (extract_subvector) index into a per-chunk (vextract*x4)
index.
It turns out this also works inside the pattern because the vextract_extract PatFrag has an OperandTransform, EXTRACT_get_vextract{128,256}_imm, so the index in $idx goes through the same conversion.
The existing test CodeGen/X86/avx512-insert-extract.ll, extended in the previous commit, provides coverage for this change.
llvm-svn: 218480
No functional change.
These are now implemented as two levels of multiclasses, heavily relying on the new X86VectorVTInfo class. The first-level multiclass, instantiated with float or int, provides the 128- or 256-bit subvector extracts. The second level provides the register and memory variants and some more Pat<>s.
I've compared the td.expanded files before and after. One change is that
ExeDomain for 64x4 is SSEPackedDouble now. I think this is correct, i.e. a
bugfix.
(BTW, this is the change that was blocked on the recent tablegen fix. The
class-instance values X86VectorVTInfo inside vextract_for_type weren't
properly evaluated.)
Part of <rdar://problem/17688758>
llvm-svn: 218478
Add SelectionDAG TableGen definitions for BR_CC so that targets can instruction-select
BR_CC using TableGen pattern matching.
Patch by deadal nix.
llvm-svn: 218476
Machine Sink uses loop depth information to select between successor BBs to sink machine instructions into, where BBs at smaller loop depths are preferable. This patch adds support for choosing between successors using profile information from BlockFrequencyInfo instead, whenever that information is available.
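A rough sketch of the selection criterion (illustrative only, not the actual MachineSink code):

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/MachineBlockFrequencyInfo.h"
  using namespace llvm;

  // Prefer the candidate successor executed least often per the profile.
  static MachineBasicBlock *
  pickSinkTarget(ArrayRef<MachineBasicBlock *> Candidates,
                 const MachineBlockFrequencyInfo &MBFI) {
    MachineBasicBlock *Best = nullptr;
    BlockFrequency BestFreq;
    for (MachineBasicBlock *MBB : Candidates) {
      BlockFrequency Freq = MBFI.getBlockFreq(MBB);
      if (!Best || Freq < BestFreq) {
        Best = MBB;
        BestFreq = Freq;
      }
    }
    return Best;
  }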
Tested it under SPEC2006 train (average of 30 runs for each program); ~1.5% execution speedup on average on x86-64 Darwin.
<rdar://problem/18021659>
llvm-svn: 218472
This change does the following:
* Removes the gtest/Makefile recursive-make-based calling strategy
for gtest execution.
* Adds the gtest/do-gtest.py call script.
- This handles finding and calling the Makefiles that really
run tests.
- This script also transforms the test output into something
that Xcode can place a failure marker on when a test fails.
* Modifies the Xcode external build command target for gtest.
It now calls the gtest/do-gtest.py script.
There is still room for improvement in the Xcode integration of do-gtest.py. Essentially, the next several lines of error reporting
from the gtest output should be coalesced into a single line so that
Xcode can tell more about the error directly in the editor. Right now
it just puts a red mark and says "failure" but doesn't give any
details.
llvm-svn: 218470
Summary: This finishes support for ASAN on MSVC2012.
Test Plan: |ninja check-asan| passes locally with this on MSVC2012.
Reviewers: timurrrr
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D5494
llvm-svn: 218465
Summary:
The extra macro definition needs to go into COMPILER_RT_GTEST_CFLAGS
in order to be used for gtests on MSVC2012.
Test Plan: This is part of the fixes necessary for the ASAN tests to pass with MSVC2012.
Reviewers: timurrrr
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D5493
llvm-svn: 218464
llvm::format() is somewhat unsafe. The compiler does not check that the integer parameter size matches the %x or %d size, and it does not complain when a StringRef is passed for a %s. Correctly using a StringRef with format() is also ugly because you have to convert it to a std::string and then call c_str().
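For example, the pattern being replaced looks something like this (illustrative):

  // No compile-time check that Str matches %s, and a StringRef needs an
  // owning std::string just to get a NUL-terminated c_str():
  OS << format("%-10s", Str.str().c_str());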
The cases where llvm::format() is useful are controlling how numbers and strings are printed, especially when you want fixed-width output. This patch adds some new formatting functions to raw_streams to format numbers and StringRefs in a type-safe manner. Some examples:
OS << format_hex(255, 6) => "0x00ff"
OS << format_hex(255, 4) => "0xff"
OS << format_decimal(0, 5) => " 0"
OS << format_decimal(255, 5) => " 255"
OS << right_justify(Str, 5) => " foo"
OS << left_justify(Str, 5) => "foo "
llvm-svn: 218463
This patch replaces the back reference StringMap from the MS mangler
with a SmallVector of strings. My previous patches reduced the number of
hashes involved in back reference lookups, this one removes them
completely. The back reference map contains at most 10 entries, which are likely to be of varying sizes with different initial subsequences, and which can easily become huge (due to templates and namespaces).
The solution presented is the simplest possible one. Nevertheless, it's
enough to reduce compilation times for a particular test case from 11.1s
to 9s, versus 8.58s for the Itanium ABI. Possible further improvements include using a sorted vector (taking care not to introduce an extra comparison), storing the string contents in a common arena, and/or keeping the string storage in the context for reuse.
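Sketched, the idea is simply this (illustrative code, not the actual mangler):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/ADT/StringRef.h"
  #include <string>

  // At most 10 back references exist, so a linear scan over a small vector
  // beats hashing long, similar strings.
  static llvm::SmallVector<std::string, 10> BackRefs;

  static int findBackRef(llvm::StringRef Name) {
    for (size_t I = 0, E = BackRefs.size(); I != E; ++I)
      if (Name == BackRefs[I])
        return static_cast<int>(I);  // existing back reference index
    return -1;                       // not seen yet; caller may append
  }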
Patch by Agustín Bergé.
llvm-svn: 218461
This change does the following:
* Remove test/c++/...
* Add gtest.
* Add gtest/unittest directory for unittesting individual classes.
* Add an initial Plugins/Process/Linux/ThreadStateCoordinatorTest.cpp.
- currently failing a test (intentional).
- added a bare-bones ThreadStateCoordinator.cpp to Plugins/Process/Linux,
more soon. Just enough to prove out running gtest on Ubuntu and MacOSX.
* Added recursive make machinery so that doing a 'make' in gtest/ is
sufficient to kick off the existing test several directories down.
- Caveat - I currently short circuit from gtest/unittest/Makefile directly to
the one and only gtest/unittest/Plugins/Process/Linux directory. We'll need
to add the intervening layers. I haven't done this yet since, to fix the Xcode test failure correspondence, I may need to add a Python layer that might just handle the directory crawling.
* Added an Xcode project to the lldb workspace for gtest.
- Runs the recursive make system in gtest/Makefile.
- Default target is 'test'. test and clean are supported.
- Currently does not support test failure file/line correspondence.
Requires a bit of text transformation to hook that up.
llvm-svn: 218460
The InstrEmitter will skip the check of MI.hasPostISelHook()
before calling AdjustInstrPostInstrSelection() when NDEBUG
is not defined.
This was added in r140228, and I'm not sure whether it was intentional, but it is a likely source of bugs, because it means that with Release+Asserts builds you can forget to set the hasPostISelHook flag on TableGen definitions and AdjustInstrPostInstrSelection() will still be called.
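The pattern in question looks roughly like this (simplified; the exact code may differ):

  // In Release builds (NDEBUG defined) the guard is compiled in; in
  // asserts-enabled builds it is compiled out, so the hook is called
  // unconditionally:
  #ifdef NDEBUG
    if (MI.hasPostISelHook())
  #endif
      TLI->AdjustInstrPostInstrSelection(MI, Node);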
llvm-svn: 218458
Summary:
I originally tried doing this specifically for X86 in the backend in D5091,
but it was rather brittle and generally running too late to be general.
Furthermore, other targets may want to implement similar optimizations.
So I reimplemented it at the IR-level, fitting it into AtomicExpandPass
as it interacts with that pass (which could not be cleanly done before
at the backend level).
This optimization relies on a new target hook, which is only used by X86
for now, as the correctness of the optimization on other targets remains
an open question. If it is found correct on other targets, it should be
trivial to enable for them.
Details of the optimization are discussed in D5091.
Test Plan: make check-all + a new test
Reviewers: jfb
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D5422
llvm-svn: 218455
These instructions do not indicate they are extendable or the
number of bits in the extendable operand. Rename to match
architected names. Add a testcase for the intrinsics.
llvm-svn: 218453
Summary:
The N32/N64 ABIs require that structs passed in registers are laid out
such that spilling the register with 'sd' places the struct at the lowest
address. For little endian this is trivial but for big-endian it requires
that structs are shifted into the upper bits of the register.
We also require that structs passed in registers have the 'inreg'
attribute for big-endian N32/N64 to work correctly. This is because the
tablegen-erated calling convention implementation only has access to the
lowered form of struct arguments (one or more integers of up to 64 bits
each) and is unable to determine the original type.
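As a hedged illustration (not taken from the patch):

  // A small struct passed in a register is lowered to a single integer.
  // On big-endian N32/N64 it must occupy the upper bits of the register so
  // that spilling with 'sd' places the struct at the lowest address; the
  // 'inreg' attribute on the lowered argument is what lets the calling
  // convention code know to perform that shift.
  struct S { short a; short b; short c; };   // 6 bytes, fits in one register
  void callee(struct S s);                   // lowered roughly to:
                                             //   void @callee(i48 inreg)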
Reviewers: vmedic
Reviewed By: vmedic
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D5286
llvm-svn: 218451
On ARM NEON, VAND with an immediate (16/32 bits) is an alias of VBIC with the complemented immediate (~imm) of the same type size. This adds that logic to the parser, generating VBIC instructions from VAND in assembly files.
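For example (illustrative encoding):

  vand.i32 d0, #0xffffff00   is assembled as   vbic.i32 d0, #0x000000ff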
This patch also fixes the validation routines for NEON splat immediates, which were wrong.
Fixes PR20702.
llvm-svn: 218450
This teaches the new vector shuffle lowering to handle v4f64 and v8f32 shuffles when they are lane-crossing. We have fully general lane-crossing permutation functions in AVX2 that make this easy. Part of this also changes exactly when and how these vectors are split up when we don't have AVX2. This isn't always a win, but it usually is, so on balance I think it's better. The primary regressions are all things that just need to be fixed anyway, such as modeling when a blend can be completely accomplished via VINSERTF128, etc.
Also, this highlights one of the few remaining big features: we do
a really poor job of inserting elements into AVX registers efficiently.
This completes almost all of the big tricks I have in mind for AVX2. The
only things left that I plan to add:
1) element insertion smarts
2) palignr and other fairly specialized lowerings when they happen to
apply
llvm-svn: 218449
This improves the lowering strategy for 256-bit vectors with lane-crossing shuffles.
Rather than immediately decomposing to 128-bit vectors, try flipping the
256-bit vector lanes, shuffling them and blending them together. This
reduces our worst case shuffle by a pretty significant margin across the
board.
llvm-svn: 218446
The Thumb2 BXJ instruction (Branch and Exchange Jazelle) is not
defined for v7M or v8A. It is defined for all other Thumb2-supporting
architectures (v6T2, v7A and v7R).
llvm-svn: 218445
This fixes the v8i32 shuffle lowering, which previously used only the mask of the low 128-bit lane rather than the entire mask.
This allows the new lowering to correctly match the unpack patterns for
v8i32 vectors.
For reference, the reason that we check for the entire mask rather than the repeated mask is that the repeated masks don't abide by all of the invariants of normal masks. As a consequence, it is safer to use the full mask with functions like the generic equivalence test.
llvm-svn: 218442
This restructures the shuffle lowering to reduce the amount of checking we do here.
The first realization is that only non-crossing cases between 128-bit
lanes are handled by almost the entire function. It makes more sense to
handle the crossing cases first.
The second is that until we are actually going to generate fancy shared lowering strategies that use the repeated semantics of the v8i16 lowering, we shouldn't waste time checking for repeated masks. It is simplest to directly test for the entire unpck masks anyway, so we gained nothing from this.
This also matches the structure of v32i8 more closely.
No functionality changed here.
llvm-svn: 218441
This completes the basic AVX2 feature support for the new vector shuffle lowering, but there are still some improvements I'd like to make to really get the last mile of performance here.
llvm-svn: 218440
I made a mistake in the previous commit and produced the wrong pattern.
Fix that. Also make one more shuffle pattern byte-based rather than
word-based, and add two more blend patterns.
llvm-svn: 218439
These test cases now actually test byte shuffles rather than word shuffles.
As you might guess, these were built starting from the word shuffle test
cases and I failed to properly port a bunch of them and left them as
widened word shuffle test cases. We still have a couple of tests that
check our ability to widen shuffles, but now we test the actual byte shuffles quite a bit better.
llvm-svn: 218438
Nico Rieck added support for this 32-bit COFF relocation some time ago
for Win64 stuff. It appears that as an oversight, the assembly output
used "foo"@IMGREL32 instead of "foo"@IMGREL, which is what we can parse.
Sadly, there were actually tests that took in IMGREL and put out
IMGREL32, and we didn't notice the inconsistency. Oh well. Now LLVM can assemble its own output with slightly more fidelity.
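For example (illustrative):

  .long foo@IMGREL    # 32-bit image-relative relocation; formerly printed as foo@IMGREL32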
llvm-svn: 218437
for this now.
This should prevent folks from running afoul of this and not knowing why their code won't instruction-select the way I just did...
llvm-svn: 218436