r211898 introduced a regression where a large struct, which would
normally be passed ByVal, was causing padding to be inserted to
prevent the backend from using some GPRs, in order to follow the
AAPCS. However, the type of the argument was not being set correctly,
so the backend could not align 8-byte-aligned struct types on the stack.
The fix is to not insert the padding arguments when the argument is
being passed ByVal.
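For illustration, here is a hypothetical C++ sketch of the kind of argument
affected (this is not the original reproducer): a large, 8-byte-aligned
aggregate that gets passed ByVal and whose stack alignment the backend must
preserve under the AAPCS.

// Hypothetical example, not the original test case: an aggregate far too
// large to be passed entirely in GPRs, so it is passed ByVal, with 8-byte
// alignment that must be kept on the stack.
struct BigAligned {
  long long first;   // gives the whole struct 8-byte alignment
  int rest[32];      // makes it far too large for the argument registers
};

long long consume(BigAligned arg) { // 'arg' is passed ByVal
  return arg.first;
}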
llvm-svn: 213359
Unfortunately, we don't seem to have a direct truncation, but the
extension can be legally split into two operations, so we should
support that.
llvm-svn: 213357
Clang may well start emitting these soon, and while they may not be
directly relevant to OpenCL or GLSL, the instructions were just
sitting there waiting to be used.
llvm-svn: 213356
1. Revert "Add default feature for CPUs on AArch64 target in Clang"
at r210625. Then, all enabled feature will by passed explicitly by
-target-feature in -cc1 option.
2. Get "-mfpu" deprecated.
3. Implement support of "-march". Usage is:
-march=armv8-a+[no]feature
For instance, "-march=armv8-a+neon+crc+nocrypto". Here "armv8-a" is
necessary, and CPU names are not acceptable. Candidate features are
fp, neon, crc and crypto. Where conflicting feature modifiers are
specified, the right-most feature is used.
4. Implement support for "-mtune". Usage is:
-mtune=CPU_NAME
For instance, "-mtune=cortex-a57". This option ONLY enables the
micro-architectural features specific to the target CPU,
like "+zcm" and "+zcz" for cyclone. Architectural features
WON'T be modified.
5. Change the usage of "-mcpu" to "-mcpu=CPU_NAME+[no]feature", which is
an alias for "-march={features of CPU_NAME}+[no]feature" combined with
"-mtune=CPU_NAME". Where this option is used in conjunction with -march
or -mtune, those options take precedence over the corresponding part of
this option. For example, with "-mcpu=cortex-a57+crypto" and
"-march=armv8-a+nocrypto" given together, -march governs the
architectural features, so crypto ends up disabled.
llvm-svn: 213353
Currently the only kind of integer IR attributes that we have are alignment
attributes, and so the attribute kind that takes an integer parameter is called
AlignAttr, but that will change (we'll soon be adding a dereferenceable
attribute that also takes an integer value). Accordingly, rename AlignAttribute
to IntAttribute (class names, enums, etc.).
No functionality change intended.
llvm-svn: 213352
* A submodule of module A is imported into module B
* Another submodule of module A that is not imported into B exports a macro
* Some submodule of module B also exports a definition of the macro, and
happens to be the first submodule of B that imports module A.
In this case, we would incorrectly determine that A's macro redefines B's
macro, and so conclude that we don't need to re-export B's macro at all.
This happens with the 'assert' macro in an LLVM self-host. =(
llvm-svn: 213348
I don't think other implicit members like copy assignment and move
assignment require this treatment, because they should already be
operating on a constructed object.
Fixes PR20351.
llvm-svn: 213346
99% of this CL is simply moving "import pexpect" statements to a
narrower scope - i.e. the function that actually runs a particular
test. This way the test suite can run on Windows, which doesn't have
pexpect, and the individual tests that use pexpect can be disabled on
a platform-specific basis.
Additionally, this CL fixes a few other cases of non-portability.
Notably, using "ps" to get the command line, and os.uname() to
determine the architecture don't work on Windows. Finally, this
also adds a stubbed out builder_win32 module.
The full test suite runs correctly on Windows after this CL, although
there is still some work remaining on the C++ side to fix one-shot
script commands from LLDB (e.g. script print "foo"), which currently
deadlock.
Reviewed by: Todd Fiala
Differential Revision: http://reviews.llvm.org/D4573
llvm-svn: 213343
Since the result of a SETCC for X86 is 0 or -1 in each lane, we can
move unary operations, in this case [su]int_to_fp, through the mask
operation and constant fold the operation away. Generally speaking:
UNARYOP(AND(VECTOR_CMP(x,y), constant))
--> AND(VECTOR_CMP(x,y), constant2)
where constant2 is UNARYOP(constant).
This implements the transform where UNARYOP is [su]int_to_fp.
For example, consider the simple function:
define <4 x float> @foo(<4 x float> %val, <4 x float> %test) nounwind {
%cmp = fcmp oeq <4 x float> %val, %test
%ext = zext <4 x i1> %cmp to <4 x i32>
%result = sitofp <4 x i32> %ext to <4 x float>
ret <4 x float> %result
}
Before this change, the SSE code is generated as:
LCPI0_0:
.long 1 ## 0x1
.long 1 ## 0x1
.long 1 ## 0x1
.long 1 ## 0x1
.section __TEXT,__text,regular,pure_instructions
.globl _foo
.align 4, 0x90
_foo: ## @foo
cmpeqps %xmm1, %xmm0
andps LCPI0_0(%rip), %xmm0
cvtdq2ps %xmm0, %xmm0
retq
After, the code is improved to:
LCPI0_0:
.long 1065353216 ## float 1.000000e+00
.long 1065353216 ## float 1.000000e+00
.long 1065353216 ## float 1.000000e+00
.long 1065353216 ## float 1.000000e+00
.section __TEXT,__text,regular,pure_instructions
.globl _foo
.align 4, 0x90
_foo: ## @foo
cmpeqps %xmm1, %xmm0
andps LCPI0_0(%rip), %xmm0
retq
The cvtdq2ps has been constant folded away and the floating point 1.0f
vector lanes are materialized directly via the ModRM operand of andps.
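To make the equivalence concrete, here is a small standalone C++ sketch
(illustrative only, not LLVM code) of why the fold is safe per lane: a false
lane is all-zero bits, and sitofp(0) yields +0.0, whose bit pattern is also
all zeros, so AND-ing the mask with the pre-converted constant produces the
same bits as converting after the mask.

#include <cstdint>
#include <cstdio>
#include <cstring>

// Reinterpret float bits as a 32-bit integer and back without UB.
static uint32_t bitsOf(float f) { uint32_t u; std::memcpy(&u, &f, sizeof u); return u; }
static float floatOf(uint32_t u) { float f; std::memcpy(&f, &u, sizeof f); return f; }

int main() {
  const uint32_t lanes[2] = {0xFFFFFFFFu, 0x00000000u}; // SETCC results: true, false
  const uint32_t one = 1;                   // the original integer constant
  const uint32_t oneAsFloat = bitsOf(1.0f); // UNARYOP(constant): sitofp(1) == 1.0f

  for (uint32_t mask : lanes) {
    // Original sequence: AND with the integer constant, then convert.
    float before = static_cast<float>(static_cast<int32_t>(mask & one));
    // Folded sequence: AND with the bit pattern of the converted constant.
    float after = floatOf(mask & oneAsFloat);
    std::printf("mask %08x: before=%f after=%f\n", mask, before, after);
  }
  return 0;
}

Both sequences give 1.0 for the true lane and 0.0 for the false lane; the
same reasoning applies to the AArch64 version of this change below.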
llvm-svn: 213342
Since the result of a SETCC for AArch64 is 0 or -1 in each lane, we can
move unary operations, in this case [su]int_to_fp, through the mask
operation and constant fold the operation away. Generally speaking:
UNARYOP(AND(VECTOR_CMP(x,y), constant))
--> AND(VECTOR_CMP(x,y), constant2)
where constant2 is UNARYOP(constant).
This implements the transform where UNARYOP is [su]int_to_fp.
For example, consider the simple function:
define <4 x float> @foo(<4 x float> %val, <4 x float> %test) nounwind {
%cmp = fcmp oeq <4 x float> %val, %test
%ext = zext <4 x i1> %cmp to <4 x i32>
%result = sitofp <4 x i32> %ext to <4 x float>
ret <4 x float> %result
}
Before this change, the code is generated as:
fcmeq.4s v0, v0, v1
movi.4s v1, #0x1 // Integer splat value.
and.16b v0, v0, v1 // Mask lanes based on the comparison.
scvtf.4s v0, v0 // Convert each lane to f32.
ret
After, the code is improved to:
fcmeq.4s v0, v0, v1
fmov.4s v1, #1.00000000 // f32 splat value.
and.16b v0, v0, v1 // Mask lanes based on the comparison.
ret
The scvtf.4s has been constant folded away and the floating point 1.0f
vector lanes are materialized directly via fmov.4s.
Rather than do the folding manually in the target code, teach getNode()
in the generic SelectionDAG to handle folding constant operands of
vector [su]int_to_fp nodes. It is reasonable (as noted in a FIXME) to do
additional constant folding there as well, but I don't have test cases
for those operations, so I'm leaving them for another time when it becomes
appropriate.
rdar://17693791
llvm-svn: 213341
We were crashing on the relevant test case inputs. Also, refactor this
code a bit so we can report failure and slurp the pragma tokens without
returning a diagnostic id. This is more consistent with the rest of the
parser and sema code.
llvm-svn: 213337
Options struct and move the comment to inMips16HardFloat. Use the
fact that we now know whether or not we care about soft float to
set the libcalls.
Accordingly, rename mipsSEUsesSoftFloat to abiUsesSoftFloat and
propagate it, since it's no longer CPU specific.
llvm-svn: 213335
Add support for adding section relocations in -r mode. Enhance the test
cases which validate the parsing of .o files to also round trip. They now
write out the .o file and then parse that, verifying all relocations survived
the round trip.
llvm-svn: 213333
If, during the initial parse of a template, we perform aggregate initialization
and form an implicit value initialization for an array type, then when we come
to instantiate the template and redo the initialization step, we would try to
match the implicit value initialization up against an array *element*, not to
the complete array.
Remarkably, we've had this bug since ~the dawn of time, but only noticed it
recently.
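A hypothetical reduced example of the pattern described (not the actual
regression test): the array member is omitted from the braced initializer, so
the initial parse forms an implicit value initialization for the whole array
type, and the initialization is redone when the template is instantiated.

// Hypothetical illustration only, not the original test case.
struct S {
  int n;
  int arr[3];   // omitted from the initializer below
};

template <typename T>
void f() {
  // 'arr' is not mentioned, so an implicit value initialization covering
  // the complete int[3] member is formed during the initial parse.
  S s = {1};
  (void)s;
}

void g() {
  f<int>();     // instantiation redoes the initialization step and must match
                // that implicit value initialization against the whole array
}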
llvm-svn: 213332
relaxed in the big RuntimeDyldMachO cleanup of r213293.
No test case yet - this was found via inspection and there's no easy way to test
GOT alignment in RuntimeDyldChecker at the moment. I'm working on adding support
for this now, and hope to have a test case for this soon.
llvm-svn: 213331
This fixes all of the hidden ivar test cases and any case where we try to find the full definition of an Objective-C class.
This also means hidden ivars show up again.
<rdar://problem/15458957>
llvm.org/pr20270
llvm.org/pr20269
llvm.org/pr20272
llvm-svn: 213328
This reverts commit r213307.
Reverting to have some on-list discussion/confirmation about the ongoing
direction of smart pointer usage in the LLVM project.
llvm-svn: 213325
This reverts commit r213308.
Reverting to have some on-list discussion/confirmation about the ongoing
direction of smart pointer usage in the LLVM project.
llvm-svn: 213324
The code to manage resolvable symbols is now separated from
ExportedSymbolRenameFile so that other classes can reuse it.
I'm planning to use it to find the entry function symbol
based on resolvable symbols.
llvm-svn: 213322
These were present in CL 1.0, just not implemented yet.
v2: Use hex values and fix commit message
Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Jeroen Ketema <j.ketema@imperial.ac.uk>
CC: Matt Arsenault <Matthew.Arsenault@amd.com>
llvm-svn: 213321
Vector true is -1, not 1, which means we need to use the relational unary
macro instead of the normal unary builtin one.
Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 213316
relational.h includes relational macros for defining functions that need to
return 1 for scalar true and -1 for vector true.
I believe that this is the only place where this behavior is required, so the
macro is placed at its lowest useful level (the same directory in which it is
used).
This also creates reusable unary/binary declaration and floatn includes, which
should simplify relational builtin declarations.
Mostly patterned off of include/math/[binary_decl|unary_decl|floatn].inc,
but with the required changes for relational functions.
Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 213315
The udivmodsi4/modsi3/umodsi3 code computes jump targets based on ARM encodings
(if CLZ is present and IDIV is not present).
Reverts parts of r211032 and r211035.
llvm-svn: 213309