Instruction prefetch is not implemented for AArch64; it is incorrectly
translated into a data prefetch instruction.
Differential Revision: http://reviews.llvm.org/D4777
llvm-svn: 214860
Some types, such as 128-bit vector types on AArch64, don't have any callee-saved registers. So if a value needs to stay live over a callsite, it must be spilled and refilled. This cost is now taken into account.
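A minimal sketch of the situation, using made-up names (not from the patch or its tests):

  // On AArch64 only the low 64 bits of v8-v15 are callee-saved, so a full
  // 128-bit vector value that is live across a call has to be spilled
  // before the call and refilled afterwards.
  typedef float v4f32 __attribute__((vector_size(16)));

  void opaque_callee(void);   // assumed external function

  v4f32 keep_live_over_call(v4f32 v) {
    opaque_callee();          // 'v' does not survive the call in a register
    return v + v;             // 'v' is still needed after the call
  }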
llvm-svn: 214859
This is required for GNU coding style, among others.
Also update the configuration documentation.
Modified from an original patch by Jarkko Hietaniemi, thank you!
llvm-svn: 214858
Embedded systems seem to have inherited Darwin's choice of "unsigned long" for
size_t (via a bunch of headers), so we should respect that.
rdar://problem/17872787
llvm-svn: 214854
The CMake build system should meet everyone's requirements.
Enhanced CMake Build System Commit
* Supports Linux, Mac, Windows, and Intel® Xeon Phi builds
* Supports building with gcc, icc, clang, and Visual Studio compilers
* Supports building "fat" libraries on OS X with clang
* Details and documentation on how to use build system
are in Build_With_CMake.txt
* To use the old CMake build system (corresponds to
CMakeLists.txt.old), just rename CMakeLists.txt to
CMakeLists.txt.other and rename CMakeLists.txt.old to
CMakeLists.txt
llvm-svn: 214850
found by a single test reduced out of a failure on llvm-stress.
The start of the problem (and the crash) came when we tried to use
a find of a non-used slot in the move-to half of the move-mask as the
target for two bad-half inputs. While, if we're lucky, this will be the first of
a pair of slots into which we can place the bad-half inputs, it isn't
actually guaranteed. This really isn't surprising; I'm not sure what I was
thinking. The correct way to find the two unused slots is to look for
one of the *used* slots. We know it isn't that pair, and we can use some
modular arithmetic to find the other pair by masking off the odd bit and
adding 2 modulo 4. With this, we reliably found a viable pair of slots
for the bad-half inputs.
Sadly, that wasn't enough. We also had a wrong code bug that surfaced
when I reduced the test case for this where we would use the same slot
twice for the two bad inputs. This is because both of the bad inputs
could be in odd slots originally and thus the mod-2 mapping would
actually be the same. The whole point of the weird indexing into the
pair of empty slots was to take advantage of cases where the end result needed
the two bad-half inputs to be paired in a dword, and to pre-pair them in the
correct orientation. This is less important with the powerful combining
we're now doing, and it is also easier and more reliable to achieve by noting
that we add the bad-half inputs in order. Thus, if they are in a dword
pair, the low part of that will be the first input in the sequence.
Always putting that in the low element will just do the right thing in
addition to computing the correct result.
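A minimal sketch of the slot arithmetic described above (the helper name is illustrative, not the actual lowering code):

  // Given the index of a *used* slot in a 4-element half, the free dword
  // pair is found by masking off the odd bit and adding 2 modulo 4.
  static int firstFreeSlot(int usedSlot) {
    return ((usedSlot & ~1) + 2) % 4;
  }
  // The two bad-half inputs go to firstFreeSlot(S) and firstFreeSlot(S) + 1,
  // in the order they are added, so the low element of a dword pair is
  // always the first input in the sequence.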
Test case added. =]
llvm-svn: 214849
The original code would fail for unsupported value types like i1, i8, and i16.
This fix changes the code to create a sub-register copy only for i64 value
types; all other types (i1/i8/i16/i32) just use the source register without
any modifications.
getRegClassFor() is now guarded by the i64 value type check, which guarantees
that we always request a register for a valid value type.
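A rough sketch of the resulting logic, with hypothetical names standing in for the actual FastISel helpers:

  // Hypothetical sketch: 'emitSubRegCopy' stands in for the real
  // sub-register copy emission, and BitWidth for the MVT value type.
  unsigned emitSubRegCopy(unsigned SrcReg);

  unsigned materializeSrcReg(unsigned BitWidth, unsigned SrcReg) {
    if (BitWidth == 64)
      return emitSubRegCopy(SrcReg);  // only i64 takes the sub-register path
    // i1/i8/i16/i32: use the source register without modification.
    return SrcReg;
  }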
llvm-svn: 214848
This implements basic argument lowering for AArch64 in FastISel. It only
handles a small subset of the C calling convention. It supports simple
arguments that can be passed in GPR and FPR registers.
This should cover most of the trivial cases without falling back to
SelectionDAG.
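For example, a signature like the following made-up one, where every argument is passed in a GPR or FPR, should now be handled entirely by FastISel:

  // Illustrative only: all arguments fit in integer (GPR) or floating-point
  // (FPR) registers; no aggregates, varargs, or stack-passed arguments.
  double combine(int a, long b, float c, double d) {
    return a + b + c + d;
  }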
This fixes <rdar://problem/17890986>.
llvm-svn: 214846
shorter/easier and have the DAG use that to do the same lookup. This
can be used in the future for TargetMachine-based caching lookups from
the MachineFunction easily.
Update the MIPS subtarget switching machinery to update this pointer
at the same time it runs.
llvm-svn: 214838
sequence on AArch64
Re-commit of r214669 without changes to test cases
LLVM::CodeGen/AArch64/arm64-neon-mul-div.ll and
LLVM::CodeGen/AArch64/dp-3source.ll
This resolves the compile failures reported against the original commit.
llvm-svn: 214832
The suppression context might be used by multiple sanitizers running
simultaneously (e.g. LSan and UBSan) that don't know about each other.
llvm-svn: 214831
/INCLUDE arguments passed as command line options are handled in the
same way as Unix -u. All option values are converted to an undefined
symbol and added to a dummy input file, so that the specified symbols
are resolved.
One tricky thing on Windows is that the option is also allowed to
appear in the object file's directive section. At the time when
it's being read, all (regular) command line options have already
been processed. We cannot add undefined atoms to the dummy file
anymore.
Previously, we added such /INCLUDE symbols to a set that had already been
processed. As a result, the options were ignored.
This patch fixes the issue. Now, /INCLUDE symbols in the directive
section are handled as real undefined symbols in the COFF file.
We create an undefined symbol for each /INCLUDE argument and add
it to the file being parsed.
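For reference, a made-up example of how such an option typically ends up in an object file's directive section (when built with MSVC or clang-cl):

  // The pragma below emits "/include:_keep_me" into the object file's
  // .drectve section; with this patch the linker treats the named symbol
  // like a real undefined symbol of the file being parsed.
  #pragma comment(linker, "/include:_keep_me")

  // On 32-bit x86 this C symbol is decorated as '_keep_me'.
  extern "C" int keep_me = 42;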
llvm-svn: 214824
Those registers are VFP/NEON and vector instructions should be used instead,
but old cores rely on those co-processors to enable VFP unwinding. This change
was prompted by libc++abi's unwinding routine, and such usage is also present
in a lot of legacy low-level bare-metal code that we ought to compile/assemble.
This fixes PR20025 and allows PR20529 to proceed with a fix in libc++abi.
llvm-svn: 214802
My original LE implementation of the vsldoi instruction, with its
altivec.h interfaces vec_sld and vec_vsldoi, produces incorrect
shufflevector operations in the LLVM IR. Correct code is generated
because the back end handles the incorrect shufflevector in a
consistent manner.
This patch and a companion patch for LLVM correct this problem by
removing the fixup from altivec.h and the corresponding fixup from the
PowerPC back end. Several test cases are also modified to reflect the
now-correct LLVM IR.
The vec_sums and vec_vsumsws interfaces in altivec.h are also fixed,
because they used vec_perm calls intended to be recognized as vsldoi
instructions. These vec_perm calls are now replaced with code that
more clearly shows the intent of the transformation.
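As an illustration of the affected interface (a made-up use, not one of the modified tests):

  #include <altivec.h>

  // vec_sld concatenates a and b and shifts the pair left by a constant
  // number of bytes (0-15); with the fixup removed, the IR emitted for this
  // is the natural shufflevector.
  vector signed int shift_left_by_four(vector signed int a,
                                       vector signed int b) {
    return vec_sld(a, b, 4);
  }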
llvm-svn: 214801
My original LE implementation of the vsldoi instruction, with its
altivec.h interfaces vec_sld and vec_vsldoi, produces incorrect
shufflevector operations in the LLVM IR. Correct code is generated
because the back end handles the incorrect shufflevector in a
consistent manner.
This patch and a companion patch for Clang correct this problem by
removing the fixup from altivec.h and the corresponding fixup from the
PowerPC back end. Several test cases are also modified to reflect the
now-correct LLVM IR.
llvm-svn: 214800
Duplicate the vararg tests for Linux and add a test which mixes vararg
arguments with Darwin positional parameters.
Patch by: Janne Grunau <j@jannau.net>
llvm-svn: 214799
Also, revert a couple of suppressions.
r214298, "Suppress clang/test/Sema/struct-packed-align.c for targeting LLP64."
r214301, "Suppress clang/test/Sema/struct-packed-align.c also on msvc for investigating."
llvm-svn: 214794