checking whether the ArrayRef is equal to an explicit list of arguments.
This is particularly easy to implement even without variadic templates
because ArrayRef happens to be homogeneously typed. As a consequence, we
can use a "clever" wrapper type and default arguments to capture, in a
single method, both the arguments themselves and *how many* arguments the
user specified.
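For illustration only, here is a minimal sketch of the idea (the names
MaybeArg and equalsList are hypothetical, not the actual LLVM helper, and
the sketch arbitrarily caps the list at four values): a wrapper whose
default-constructed state records "not provided", so a single method with
default arguments can recover both the values and how many of them the
caller passed.

#include <cassert>
#include <cstddef>
#include <vector>

// Wrapper that remembers whether the caller actually supplied a value.
template <typename T> struct MaybeArg {
  T Value{};
  bool Provided = false;
  MaybeArg() = default;
  MaybeArg(const T &V) : Value(V), Provided(true) {}
};

// Compare a homogeneously typed sequence against an explicit list of values.
// Default arguments stand in for "nothing passed"; the Provided flag tells
// us how many values the caller really gave.
template <typename T>
bool equalsList(const std::vector<T> &A, MaybeArg<T> A0 = {},
                MaybeArg<T> A1 = {}, MaybeArg<T> A2 = {},
                MaybeArg<T> A3 = {}) {
  const MaybeArg<T> Args[] = {A0, A1, A2, A3};
  std::size_t N = 0;
  while (N < 4 && Args[N].Provided)
    ++N;
  if (A.size() != N)
    return false;
  for (std::size_t I = 0; I < N; ++I)
    if (A[I] != Args[I].Value)
      return false;
  return true;
}

int main() {
  std::vector<int> V = {1, 2, 3};
  assert(equalsList(V, 1, 2, 3)); // same length, same contents
  assert(!equalsList(V, 1, 2));   // caller passed only two values
}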
Thanks to Dave Blaikie for helping me pull together this little helper.
Suggestions for how to improve or generalize it are of course welcome.
I'll be using it immediately in my follow-up patch. =D
llvm-svn: 214041
The architecture-specific implementation of routines would be built and included
along with the generic implementation. This would result in multiple
definitions of those symbols.
The linker is free to select either of the two. Most of the time this
shouldn't be too terrible, as the linker's forward iteration over inputs should
pick up the architecture-specific version due to the ordering. Rather than
relying on the linker and the build infrastructure ordering things in a
specific manner, only provide the architecture-specific version when it is
available.
This reduces the size of compiler-rt, simplifies inspection of the library
implementations, and guarantees that the desired version is selected, at the
cost of a slightly more complex build system.
llvm-svn: 214040
This will detect if you are building libcxx in-tree and libcxxabi is
available. If so, it will default to using the in-tree libcxxabi by
setting LIBCXX_CXX_ABI to "libcxxabi" and LIBCXX_LIBCXXABI_INCLUDE_PATHS to
"${CMAKE_SOURCE_DIR}/projects/libcxxabi/include", and by adding "cxxabi"
as a proper dependency.
Patch by Russell Harmon.
llvm-svn: 214037
Place the floating point constants into the read-only data section. This was
already being done for x86_64; this change simply mirrors that behaviour for i686.
llvm-svn: 214034
MMX/SSE instructions expect 128-bit alignment (16-byte) for constants that they
reference. Correct the alignment on the constant values. Although it is quite
possible for the data to end up aligned, there is no guarantee that this will
occur unless it is explicitly aligned to the desired location. If the data ends
up being unaligned, the resultant binary would fault at runtime due to the
unaligned access.
As an example, the following would fault previously:
cc -c lib/builtins/x86_64/floatundidf.S -o floatundidf.o
cc -c test/builtins/Unit/floatundidf_test.c -o floatundidf_test.o
ld -m elf_x86_64 floatundidf.o floatundidf_test.o -lc -o floatundidf
However, if the object files were reversed, the data would end up aligned and
the problem would go unnoticed.
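For comparison, a hedged C++ analogue of the same constraint (not the actual
assembly change; twop52 is just a placeholder constant name): aligned SSE
loads such as _mm_load_pd fault if their operand is not 16-byte aligned, so
the alignment has to be stated explicitly rather than left to chance.

#include <emmintrin.h>

// Explicitly request the 16-byte alignment that the aligned load relies on.
alignas(16) static const double twop52[2] = {4503599627370496.0,  // 2^52
                                             4503599627370496.0};

__m128d loadConstant() {
  // movapd-style load: faults at runtime if twop52 happens to be misaligned.
  return _mm_load_pd(twop52);
}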
llvm-svn: 214033
16M regions can waste almost 1G for nothing.
Since region size is used only during initial heap growth,
it's unclear why we even need such huge regions.
llvm-svn: 214027
* Add abbreviations for CXXMethodDecl and for FunctionProtoType. These come up
a *lot* in C++ modules.
* Allow typedef declarations to use the abbreviation if they're class members,
or if they're used.
In passing, add more record name records for Clang AST node kinds.
The downside is that we had already used up our allotment of 12 abbreviations,
so this pushes us to an extra bit on each record to support the extra abbrev
kinds, which increases file size by ~1%. This patch *barely* pays for that
through the other improvements, but we've got room for another 18 abbrevs,
so we should be able to make it much more profitable with future changes.
llvm-svn: 214024
over each node in the worklist prior to combining.
This allows the combiner to produce new nodes which need to go back
through legalization. This is particularly useful when generating
operands to target-specific nodes in a post-legalize DAG combine where
the operands are significantly easier to express as pre-legalized
operations. My immediate use case will be PSHUFB formation where we need
to build a constant shuffle mask with a build_vector node.
This also refactors the relevant functionality in the legalizer to
support this, and updates relevant tests. I've spoken to the R600 folks
and these changes look like improvements to them. The avx512 change
needs to be investigated; I suspect there is a disagreement between the
legalizer and the DAG combiner there, but it seems a minor issue, so I'm
leaving it to be re-evaluated after this patch.
Differential Revision: http://reviews.llvm.org/D4564
llvm-svn: 214020
Parser::ParseDeclarationSpecifiers eagerly updates the source range of
the DeclSpec with the current token position. However, it might not
consume any more tokens.
Fix this by only setting the start of the range, not the end. This way
the SourceRange will be invalid if we don't consume any more tokens.
This fixes PR20413.
Differential Revision: http://reviews.llvm.org/D4646
llvm-svn: 214018
Re-apply SVN r213684, which was reverted in SVN r213724 since it broke the
build bots. Add a tweak to enable inclusion of the assembly sources in
standalone builds as well.
Original commit message:
This patch addresses PR20360. The CMake build system ignores the .S
assembly files in the builtins library build. This patch fixes the issue.
llvm-svn: 214013
The .rodata directive was added on the IA-64 (Itanium) platform. The LLVM IAS
supports .rodata on i386 and x86_64 as well. There is no real reason to
restrict compilation of the builtins to just clang. By explicitly indicating
that the data is meant to be placed into the .rodata section via the .section
.rodata directive, the assembly is made compatible with both clang and gcc (with GAS).
This will enable building these routines on the Linux buildbots via CMake.
llvm-svn: 214012
The tale starts with r212808 which attempted to fix inversion of the low
and high bits when lowering MUL_LOHI. Sadly, that commit did not include
any positive test cases, and just removed some operations from a test
case where the actual logic being changed isn't fully visible from the
test.
That commit did two things. First, it reversed the low and high
results in the formation of the MERGE_VALUES node for the multiple
results. This is entirely correct.
Second it changed the shuffles for extracting the low and high
components from the i64 results of the multiplies to extract them
assuming a big-endian-style encoding of the multiply results. This
second change is wrong. There is no big-endian encoding in x86, the
results of the multiplies are normal v2i64s: when cast to v4i32, the low
i32s are at offsets 0 and 2, and the high i32s are at offsets 1 and 3.
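A small standalone check of that layout claim (illustrative only; it assumes
a little-endian host such as x86):

#include <cassert>
#include <cstdint>
#include <cstring>

int main() {
  // Two 64-bit lanes, each with distinct low and high 32-bit halves.
  std::uint64_t V2I64[2] = {0x1111111100000000ULL, 0x3333333322222222ULL};
  std::uint32_t V4I32[4];
  std::memcpy(V4I32, V2I64, sizeof(V2I64)); // reinterpret v2i64 as v4i32
  assert(V4I32[0] == 0x00000000u && V4I32[2] == 0x22222222u); // low i32s at 0 and 2
  assert(V4I32[1] == 0x11111111u && V4I32[3] == 0x33333333u); // high i32s at 1 and 3
}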
However, the first change wasn't enough to actually fix the bug, which
is (I assume) why the second change was also made. There was another bug
in the MERGE_VALUES formation: we weren't using a VTList, and so were
getting a single result node! When grabbing the *second* result from the
node, we got... well... it could be anything. I think this *appeared* to
invert things, but it had to be causing other problems as well.
Fortunately, I fixed the MERGE_VALUES issue in r213931, so we should
have been fine, right? NOOOPE! Because the core bug was never addressed,
the test in vector-idiv failed when I fixed the MERGE_VALUES node.
Because there are essentially no docs for this node, I had to guess at
how to fix it and tried swapping the operands, restoring the order of
the original code before r212808. While this "fixed" the test case (in
that we produced the right instructions), we were still extracting the
wrong elements of the i64s, and thus PR20355 was still broken.
This commit essentially reverts the big-endian-style extraction part of
r212808 and goes back to the original masks which were correct. Now that
the MERGE_VALUES node formation is also correct, everything works. I've
also included a more detailed test from PR20355 to make sure this stays
fixed.
llvm-svn: 214011
The clever way to implement signed multiplication with unsigned *is
already implemented* and tested and working correctly. The bug is
somewhere else. Re-investigating.
This will teach me to not scroll far enough to read the code that did
what I thought needed to be done.
llvm-svn: 214009
signed multiplication is requested. While there is no difference in the
*low* half of the result, the *high* half (used specifically to
implement the signed division by these constants) certainly differs. The
test case I've nuked was actively asserting wrong code.
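A quick standalone illustration of that distinction (not part of the patch):
widening a 32-bit multiply to 64 bits shows the low halves of the signed and
unsigned products agreeing while the high halves differ.

#include <cassert>
#include <cstdint>

int main() {
  std::int32_t A = -7, B = 3;
  std::uint32_t UA = static_cast<std::uint32_t>(A);
  std::uint32_t UB = static_cast<std::uint32_t>(B);
  std::int64_t S = static_cast<std::int64_t>(A) * B;      // signed widening multiply
  std::uint64_t U = static_cast<std::uint64_t>(UA) * UB;  // unsigned widening multiply
  // Low 32 bits are identical...
  assert(static_cast<std::uint32_t>(S) == static_cast<std::uint32_t>(U));
  // ...but the high 32 bits are not (-1 for the signed product, 2 for the unsigned one).
  assert(static_cast<std::uint32_t>(S >> 32) != static_cast<std::uint32_t>(U >> 32));
}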
There is a delightful solution to doing signed multiplication even when
we don't have it that Richard Smith has crafted, but I'll add the
machinery back and implement that in a follow-up patch. This at least
restores correctness.
llvm-svn: 214007
We used to initialize the symbolizer lazily, but this doesn't work in
various sandboxed environments. Instead, let's be consistent with
the rest of the sanitizers.
llvm-svn: 214006
Get rid of Symbolizer::Init(path_to_external) in favor of
thread-safe Symbolizer::GetOrInit(), and use the latter version
everywhere. Implicitly depend on the value of external_symbolizer_path
runtime flag instead of passing it around manually.
No functionality change.
llvm-svn: 214005
This moves some memptr-specific code into the generic thunk emission
codepath.
Fixes PR20053.
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D4613
llvm-svn: 214004
This isn't a real diagnostic kind, only a location note, and this form isn't even
used if the SourceManager can provide a valid presumed location. But still.
llvm-svn: 214002
llvm revision 210639 renamed the -global-merge backend option to
-enable-global-merge. This change simply updates clang to match that.
Patch by Steven Wu!
llvm-svn: 213993