This adds support for lowering sibling calls with outgoing arguments.
e.g.
```
define void @foo(i32 %a)
```
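For a case that actually has outgoing stack arguments, the situation looks roughly like the following (a hypothetical C-level example, not taken from the patch; `caller` and `callee` are made-up names):

```
// On AArch64, the first eight integer arguments are passed in x0-x7, so the
// ninth is passed on the stack. Because caller and callee share a signature,
// the callee's stack argument fits in the caller's incoming stack argument
// area, and the call in tail position can be lowered as a sibling call
// (a plain branch to the callee, with no extra stack adjustment).
void callee(long a, long b, long c, long d, long e, long f, long g, long h,
            long i);

void caller(long a, long b, long c, long d, long e, long f, long g, long h,
            long i) {
  callee(a, b, c, d, e, f, g, h, i);
}
```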
Support is ported from AArch64ISelLowering's `isEligibleForTailCallOptimization`.
The only thing that is missing is a full port of
`TargetLowering::parametersInCSRMatch`. So, if we're using swiftself,
we'll never tail call.
- Rename `analyzeCallResult` to `analyzeArgInfo`, since the function is now used
for both outgoing and incoming arguments
- Teach `OutgoingArgHandler` about tail calls. Tail calls use frame indices for
stack arguments.
- Teach `lowerFormalArguments` to record the size, in bytes, of the caller's
stack argument area. This is used later to check whether the tail call's stack
arguments will fit in the caller's stack argument area.
- Add `areCalleeOutgoingArgsTailCallable` to perform the eligibility check on
the callee's outgoing arguments.
For testing:
- Update call-translator-tail-call to verify that we can now tail call with
outgoing arguments, use G_FRAME_INDEX for stack arguments, and respect the
size of the caller's stack
- Remove GISel-specific check lines from speculation-hardening.ll, since GISel
now tail calls like the other selectors
- Add a GISel test line to tailcall-string-rvo.ll since we can tail call in that
test now
- Add a GISel test line to tailcall_misched_graph.ll since we tail call there
now. Add specific check lines for GISel, since the debug output from the
machine-scheduler differs with GlobalISel. The dependency still holds, but
the output comes out in a different order.
Differential Revision: https://reviews.llvm.org/D67471
llvm-svn: 371780
Clean up/change the code that checks for possible tailcall conventions to
look the same as the one in the X86 target. This makes the distinction
between calling conventions that can guarantee tailcalls and the ones
that may tailcall more obvious (a rough sketch follows the list below).
- Add Swift to the mayTailCall list
- PreserveMost seemed to be incorrectly part of the guaranteed tail call
list; move it to the mayTailCall list.
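A rough sketch of the resulting shape (modeled on the X86 helpers this mirrors; the helper names and the exact set of conventions here are illustrative, not a copy of the AArch64 code):

```
#include "llvm/IR/CallingConv.h"
using namespace llvm;

// Illustrative sketch only. Conventions in the first helper guarantee a tail
// call when one is requested; the ones in the second may tail call, but only
// when the usual eligibility checks pass.
static bool canGuaranteeTCO(CallingConv::ID CC) {
  return CC == CallingConv::Fast;
}

static bool mayTailCallThisCC(CallingConv::ID CC) {
  switch (CC) {
  case CallingConv::C:
  case CallingConv::PreserveMost: // moved out of the guaranteed list
  case CallingConv::Swift:        // newly added
    return true;
  default:
    return canGuaranteeTCO(CC);
  }
}
```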
llvm-svn: 281376
This commit makes LLVM not estimate branch probabilities when doing
single-bit bitmask tests.
The code that originally made me discover this is:
  if ((a & 0x1) == 0x1) {
    ..
  }
In this case we don't actually have any branch probability information
and should not assume to have any. LLVM transforms this into:
  %and = and i32 %a, 1
  %tobool = icmp eq i32 %and, 0
So, in this case, the result of a bitwise and is compared against 0,
but nevertheless, we should not assume to have probability
information.
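As an illustration, a recognizer for this shape of comparison could look something like the sketch below (a hypothetical helper with a made-up name, not the actual BranchProbabilityInfo change):

```
#include "llvm/IR/Constants.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Hypothetical helper: true for equality compares such as "(a & 0x1) == 0x1"
// or the inverted form "(a & 0x1) == 0", where testing a single bit gives no
// basis for a static branch-probability estimate.
static bool isSingleBitMaskTest(const ICmpInst *Cmp) {
  if (!Cmp->isEquality())
    return false;
  auto *And = dyn_cast<BinaryOperator>(Cmp->getOperand(0));
  auto *RHS = dyn_cast<ConstantInt>(Cmp->getOperand(1));
  if (!And || !RHS || And->getOpcode() != Instruction::And)
    return false;
  auto *Mask = dyn_cast<ConstantInt>(And->getOperand(1));
  return Mask && Mask->getValue().isPowerOf2() &&
         (RHS->isZero() || RHS->getValue() == Mask->getValue());
}
```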
CodeGen/ARM/2013-10-11-select-stalls.ll started failing because the
changed probabilities changed the results of
ARMBaseInstrInfo::isProfitableToIfCvt() and led to an Ifcvt of the
diamond in the test. AFAICT, the test was never meant to test this and
thus changing the test input slightly to not change the probabilities
seems like the best way to preserve the meaning of the test.
llvm-svn: 234979
This commit makes LLVM not estimate branch probabilities when doing
single-bit bitmask tests. The code that originally made me discover this is:
  if ((a & 0x1) == 0x1) {
    ..
  }
In this case we don't actually have any branch probability information and
should not assume to have any. LLVM transforms this into:
  %and = and i32 %a, 1
  %tobool = icmp eq i32 %and, 0
So, in this case, the result of a bitwise and is compared against 0,
but nevertheless, we should not assume to have probability
information.
llvm-svn: 234898
Essentially the same as the GEP change in r230786.
A similar migration script can be used to update test cases, though a few more
test case improvements/changes were required this time around: (r229269-r229278)
import fileinput
import sys
import re
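# Match the old "load <ty>* <ptr>" syntax (in plain IR and in FileCheck
# patterns) and capture the pointee type, so the replacement below can insert
# the now-explicit result type, e.g. turning
#   %x = load i32* %p
# into
#   %x = load i32, i32* %p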
pat = re.compile(r"((?:=|:|^)\s*load (?:atomic )?(?:volatile )?(.*?))(| addrspace\(\d+\) *)\*($| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$)")
for line in sys.stdin:
  sys.stdout.write(re.sub(pat, r"\1, \2\3*\4", line))
Reviewers: rafael, dexonsmith, grosser
Differential Revision: http://reviews.llvm.org/D7649
llvm-svn: 230794
This commit starts with a "git mv ARM64 AArch64" and continues out
from there, renaming the C++ classes, intrinsics, and other
target-local objects for consistency.
"ARM64" test directories are also moved, and tests that began their
life in ARM64 use an arm64 triple, those from AArch64 use an aarch64
triple. Both should be equivalent though.
This finishes the AArch64 merge, and everyone should feel free to
continue committing as normal now.
llvm-svn: 209577