POP on armv4t cannot be used to change Thumb state (unlike later non-M-class
architectures), therefore we need a different return sequence that uses 'bx'
instead:
POP {r3}
ADD sp, #offset
BX r3
This patch also fixes an issue where the return value in r3 would get clobbered
for functions that return 128 bits of data. In that case, we generate this
sequence instead:
MOV ip, r3
POP {r3}
ADD sp, #offset
MOV lr, r3
MOV r3, ip
BX lr
http://reviews.llvm.org/D4748
llvm-svn: 214881
It's a bit of a tradeoff, since llvm-dwarfdump doesn't print the name of
the global symbol being used as an address in the addressing mode, but
this avoids the dependence on hardcoded set labels that keep changing
(5+ commits over the last few years that each update the set label as it
changes due to other, unrelated differences in output). This could,
instead, have been changed to match the set name and then match that name
in the string pool, but that would present other issues (needing to skip
over the sets that weren't of interest, etc.), and checking that the
addresses (granted, without relocations applied, so it's not the whole
story) match in the two variable location descriptions seems sufficient
and fairly stable here.
There are a few other tests with similar label dependence that I'll
update soonish.
llvm-svn: 214878
Instruction prefetch is not implemented for AArch64; it is incorrectly
translated into a data prefetch instruction.
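For context, a hedged C++ illustration (not taken from the patch itself):
__builtin_prefetch always requests a *data* prefetch -- the cache-type
operand of the underlying @llvm.prefetch intrinsic is fixed at 1 (data),
while a value of 0 selects an *instruction* prefetch, which after this
fix should lower to the PLI forms of AArch64 PRFM rather than the
PLD/PST data-prefetch forms.

// Data prefetch only: the builtin cannot express cache-type 0.
void warm_data(const char *p) {
  __builtin_prefetch(p, /*rw=*/0, /*locality=*/3);
}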
Differential Revision: http://reviews.llvm.org/D4777
llvm-svn: 214860
Some types, such as 128-bit vector types on AArch64, don't have any
callee-saved registers. So if a value needs to stay live over a callsite,
it must be spilled and refilled. This cost is now taken into account.
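A hedged illustration of the situation, assuming the AAPCS64 rule that
only the low 64 bits of v8-v15 are callee-saved, so no full 128-bit
vector register survives a call:

#include <arm_neon.h>

// v is live across the call to fn; since no 128-bit vector register is
// callee-saved on AArch64, v must be spilled before the call and
// reloaded after it -- the cost this patch now models.
int32x4_t live_across_call(int32x4_t v, void (*fn)(void)) {
  fn();
  return v;
}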
llvm-svn: 214859
This is required for GNU coding style, among others.
Also update the configuration documentation.
Modified from an original patch by Jarkko Hietaniemi, thank you!
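For illustration, GNU style wants the return type of a function
definition on its own line; with the new option enabled, clang-format
can produce output like this sketch (the exact option name is omitted
here; see the updated configuration documentation):

/* Illustrative formatted output, GNU style: break after the
   definition's return type, space before the parameter list. */
static int
gnu_style_example (int x)
{
  return x + 1;
}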
llvm-svn: 214858
Embedded systems seem to have inherited Darwin's choice of "unsigned long" for
size_t (via a bunch of headers), so we should respect that.
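A minimal sketch of the property being respected (a hypothetical check,
not part of the patch; it only holds on the targets in question):

#include <cstddef>
#include <type_traits>

// On these targets size_t is a typedef of unsigned long, not
// unsigned int, so type matching has to accept the former spelling.
static_assert(std::is_same<std::size_t, unsigned long>::value,
              "size_t is unsigned long here");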
rdar://problem/17872787
llvm-svn: 214854
The CMake build system should meet everyone's requirements.
Enhanced CMake Build System Commit
* Supports Linux, Mac, Windows, and Intel® Xeon Phi builds
* Supports building with gcc, icc, clang, and Visual Studio compilers
* Supports building "fat" libraries on OS X with clang
* Details and documentation on how to use the build system
are in Build_With_CMake.txt
* To use the old CMake build system (corresponds to
CMakeLists.txt.old), just rename CMakeLists.txt to
CMakeLists.txt.other and rename CMakeLists.txt.old to
CMakeLists.txt
llvm-svn: 214850
Found by a single test reduced out of a failure on llvm-stress.
The start of the problem (and the crash) came when we tried to use
the first unused slot found in the move-to half of the move-mask as the
target for two bad-half inputs. While, if lucky, this will be the first of
a pair of slots into which we can place the bad-half inputs, it isn't
actually guaranteed. This really isn't surprising; I'm not sure what I was
thinking. The correct way to find the two unused slots is to look for
one of the *used* slots. We know it isn't that pair, and we can use some
modular arithmetic to find the other pair by masking off the odd bit and
adding 2 modulo 4. With this, we reliably found a viable pair of slots
for the bad-half inputs.
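Concretely, the arithmetic described above can be sketched like this
(illustrative only, not the actual lowering code):

// Given any *used* slot u in {0,1,2,3}, the other, unused dword pair
// starts at ((u & ~1) + 2) & 3: mask off the odd bit to get the start
// of u's pair, then add 2 modulo 4 to land on the opposite pair.
int unused_pair_start(int u) {
  return ((u & ~1) + 2) & 3;  // u=0,1 -> 2 (pair {2,3}); u=2,3 -> 0
}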
Sadly, that wasn't enough. We also had a wrong-code bug that surfaced
when I reduced the test case for this: we would use the same slot
twice for the two bad inputs. This is because both of the bad inputs
could be in odd slots originally and thus the mod-2 mapping would
actually be the same. The whole point of the weird indexing into the
pair of empty slots was to try to leverage when the end result needed
the two bad-half inputs to be paired in a dword and pre-pair them in the
correct orientation. This is less important with the powerful combining
we're now doing, and also easier and more reliable to achieve by noting
that we add the bad-half inputs in order. Thus, if they are in a dword
pair, the low part of that will be the first input in the sequence.
Always putting that in the low element will just do the right thing in
addition to computing the correct result.
Test case added. =]
llvm-svn: 214849
The original code would fail for unsupported value types like i1, i8, and i16.
This fix changes the code to only create a sub-register copy for i64 value
types; all other types (i1/i8/i16/i32) just use the source register without
any modifications.
getRegClassFor() is now guarded by the i64 value type check, which guarantees
that we always request a register class for a valid value type.
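A hedged sketch of the resulting control flow (simplified; the names are
illustrative, not the actual LLVM FastISel code):

enum SimpleVT { i1, i8, i16, i32, i64 };

// Only the i64 path performs a sub-register copy, so the only
// register-class query happens for the always-valid i64 case; all
// other types hand back the source register untouched.
unsigned selectSourceReg(SimpleVT VT, unsigned SrcReg,
                         unsigned (*emitSubRegCopy)(unsigned)) {
  if (VT == i64)
    return emitSubRegCopy(SrcReg);
  return SrcReg;
}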
llvm-svn: 214848
This implements basic argument lowering for AArch64 in FastISel. It only
handles a small subset of the C calling convention. It supports simple
arguments that can be passed in GPR and FPR registers.
This should cover most of the trivial cases without falling back to
SelectionDAG.
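For a sense of scope, the supported cases look like this hedged C++
example: a few scalar arguments, each passed in a single GPR or FPR,
with no aggregates, varargs, or stack-passed arguments:

// Every parameter here fits in one GPR (a, b) or one FPR (c, d), so
// the whole argument sequence stays on the FastISel fast path.
double covered(int a, long b, float c, double d) {
  return a + b + c + d;
}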
This fixes <rdar://problem/17890986>.
llvm-svn: 214846
Have MachineFunction cache a pointer to the subtarget to make lookups
shorter/easier and have the DAG use that to do the same lookup. This
can be used in the future for TargetMachine based caching lookups from
the MachineFunction easily.
Update the MIPS subtarget switching machinery to update this pointer
at the same time it runs.
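A hedged sketch of the caching pattern (names assumed for illustration;
not the exact LLVM API of the time):

class TargetSubtargetInfo;

class MachineFunction {
  const TargetSubtargetInfo *STI;  // cached subtarget pointer
public:
  const TargetSubtargetInfo *getSubtargetInfo() const { return STI; }
  // The MIPS subtarget-switching machinery would call something like
  // this whenever it swaps subtargets mid-function:
  void setSubtargetInfo(const TargetSubtargetInfo *S) { STI = S; }
};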
llvm-svn: 214838
MachineCombiner Pass for selecting faster instruction
sequence on AArch64
Re-commit of r214669 without changes to test cases
LLVM::CodeGen/AArch64/arm64-neon-mul-div.ll and
LLVM::CodeGen/AArch64/dp-3source.ll
This resolves the compile failures reported against the original commit.
llvm-svn: 214832
Suppression context might be used by multiple sanitizers running
simultaneously (e.g. LSan and UBSan) that do not know about each other.
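A minimal sketch of the idea, assuming a process-wide context with
idempotent initialization (illustrative only, not the sanitizer
runtime's actual code):

#include <mutex>

struct SuppressionContext { /* parsed suppression rules */ };

// LSan and UBSan may each ask for the context without knowing the
// other exists; std::call_once lets whichever arrives first perform
// the one-time parse, and both then share the same instance.
SuppressionContext &suppressionContext() {
  static SuppressionContext ctx;
  static std::once_flag once;
  std::call_once(once, [] { /* read and parse suppression files */ });
  return ctx;
}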
llvm-svn: 214831