Summary: Rename some methods to make Statepoint look more like CallSite.
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10756
llvm-svn: 241235
Summary:
(I don't think this change needs review; this was uploaded to
Phabricator to provide context for later dependent changes.)
Differential Revision: http://reviews.llvm.org/D10630
llvm-svn: 241233
Summary:
This allows the "if (Statepoint SP = Statepoint(I))" idiom.
(I don't think this change needs review; this was uploaded to
Phabricator to provide context for later dependent changes.)
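A minimal sketch of the resulting idiom (caller code hypothetical; this
assumes Statepoint's conversion to bool tests for a valid statepoint):

  #include "llvm/IR/Statepoint.h"
  using namespace llvm;

  void visit(Instruction *I) {
    if (Statepoint SP = Statepoint(I)) {
      // I is a gc.statepoint call site; SP's accessors are safe here.
    }
  }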
Differential Revision: http://reviews.llvm.org/D10629
llvm-svn: 241232
Summary:
Introduce a simple accessor to get the gc_result hanging off of a
statepoint.
(I don't think this change needs review; this was uploaded to
Phabricator to provide context for later dependent changes.)
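Hypothetical usage of the new accessor (assuming it is spelled
getGCResult and returns null when no gc.result is attached):

  Statepoint SP(Call);
  if (Instruction *GCResult = SP.getGCResult()) {
    // GCResult is the gc.result call extracting the statepoint's
    // return value; inspect or rewrite it here.
  }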
Differential Revision: http://reviews.llvm.org/D10627
llvm-svn: 241231
Previously, we used SymbolTable::rename to resolve AlternateName symbols.
This patch merges that mechanism with weak aliases, so that we can
remove that function.
llvm-svn: 241230
Summary:
r240039 adds a test case to check that CallGraph does the right thing
with respect to non-leaf intrinsics like statepoint and patchpoint.
This ports the same test case to LazyCallGraph. LazyCallGraph already
does the right thing with respect to escaping function pointers, so there
is no need to change any code.
Reviewers: chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10582
llvm-svn: 241226
On Windows the user may invoke the linker directly, so we might not have an
opportunity to add runtime library flags to the linker command line. Instead,
instruct the code generator to embed linker directives in the object file
that cause the required runtime libraries to be linked.
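For illustration, the effect is the same as an MSVC-style pragma in the
source (library name hypothetical), which embeds a /DEFAULTLIB directive
in the object file's .drectve section:

  #pragma comment(lib, "clang_rt.profile-x86_64.lib")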
We might also want to do something similar for ASan, but it seems to have
its own special complexities which may make this infeasible.
Differential Revision: http://reviews.llvm.org/D10862
llvm-svn: 241225
Summary: The long name causes problems with some shells.
Reviewers: sivachandra
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D10888
llvm-svn: 241222
This checks subtarget feature compatibility for inlining by verifying
that the callee's features are a strict subset of the caller's. This
includes the CPU as part of the subtarget, which we can get via the
incoming functions, since the backend takes CPUs as feature sets.
This allows us to inline things like:
  int foo() { return baz(); }
  int __attribute__((target("sse4.2"))) bar() {
    return foo();
  }
so that generic code can be inlined into specialized functions.
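A sketch of the subset test (helper name hypothetical); FeatureBitset
supports bitwise operations, so the check is a simple mask comparison:

  #include "llvm/MC/SubtargetFeature.h"
  using namespace llvm;

  // Inlining is compatible when every callee feature is also set in
  // the caller.
  static bool featuresCompatible(const FeatureBitset &CallerBits,
                                 const FeatureBitset &CalleeBits) {
    return (CallerBits & CalleeBits) == CalleeBits;
  }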
llvm-svn: 241221
I think Undefined symbols are a bit more convenient than StringRefs,
since SymbolBodies are handles for symbols. You can get the resolved
symbol for an undefined symbol just by calling getReplacement, without
looking up the symbol table.
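For example (a sketch; Undef is an Undefined* that has already been
resolved):

  // The replacement is the symbol Undef was bound to during
  // resolution; no symbol table lookup is needed.
  SymbolBody *Repl = Undef->getReplacement();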
llvm-svn: 241214
Occasionally we have to resolve an undefined symbol to its
mangled symbol. Previously, we did that on the calling side of
findMangle by explicitly updating the SymbolBody.
In this patch, mangled symbols are handled as weak aliases
for undefined symbols.
llvm-svn: 241213
Summary:
* Add 64-bit address space feature.
* Rename SIMD feature to SIMD128.
* Handle the single-thread model with an IR pass (the same way ARM does).
* Rename the generic processor to MVP, to follow the design's lead.
* Add bleeding-edge processors, with all features included.
* Fix a few DEBUG_TYPEs to match other backends.
Test Plan: ninja check
Reviewers: sunfish
Subscribers: jfb, llvm-commits
Differential Revision: http://reviews.llvm.org/D10880
llvm-svn: 241211
TwoAddressInstructionPass stops after a successful commute, but a 3-addr
conversion might still be beneficial in some cases.
Consider:
  int foo(int a, int b) {
    return a + b;
  }
Before this commit, we emit:
  addl %esi, %edi
  movl %edi, %eax
  ret
After this commit, we try the 3-addr conversion:
  leal (%rsi,%rdi), %eax
  ret
Patch by Volkan Keles <vkeles@apple.com>!
Differential Revision: http://reviews.llvm.org/D10851
llvm-svn: 241206
Summary:
This patch changes the way an APInt is compared with a value of type uint64_t.
Before, the uint64_t value was truncated to the size of the APInt before the
comparison. Now the comparison takes the full 64-bit precision into account.
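A small illustration of the difference (values hypothetical):

  #include "llvm/ADT/APInt.h"
  using namespace llvm;

  void example() {
    APInt Narrow(16, 0x1234);              // 16-bit APInt
    uint64_t Wide = 0x9999000000001234ULL;
    // Old behavior: Wide was truncated to 16 bits (0x1234) first, so
    // the comparison returned true. Now all 64 bits participate and
    // the result is false.
    bool Equal = (Narrow == Wide);
    (void)Equal;
  }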
Test Plan: Unit tests added. No regressions. Self-hosted check-all done as well.
Reviewers: chandlerc, dexonsmith
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10655
llvm-svn: 241204
Summary:
If a test is decorated with expectedFlakey*, run teardown and setup before
retrying. Don't retry if the test is already decorated with xfail or skip.
Test Plan:
Mark TestMultithreaded.test_sb_api_listener_event_process_state as expectedFlakey
Run ./dotest.py -p TestMultithreaded.py -A x86_64 -C gcc4.9.2
Reviewers: vharron, tberghammer
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D10830
llvm-svn: 241202
The 'fork'+'exec' combination made scan-build and ccc-analyzer hang under Windows. The patch replaces 'fork'+'exec' with the more reliable 'system' (in ccc-analyzer) and a piped 'open' (in scan-build). See http://reviews.llvm.org/D8774 and http://reviews.llvm.org/D9357 for more details.
llvm-svn: 241201
This is mostly an NFC change that increases code readability (instead of
saving the old terminator, generating a new one in front of the old, and
deleting the old one, we just call a function). However, it additionally
copies the debug location from the old instruction to the replacement,
which should help PR23837.
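Assuming the helper in question is ReplaceInstWithInst from
BasicBlockUtils (a sketch, with OldTerm and NewTerm standing in for the
saved and replacement terminators):

  #include "llvm/Transforms/Utils/BasicBlockUtils.h"

  // Splices NewTerm in place of OldTerm, replacing the manual
  // create/insert/erase dance at each call site.
  ReplaceInstWithInst(OldTerm, NewTerm);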
llvm-svn: 241197
All file formats only need 16 bits right now, which is enough to fit
into the padding alongside the other fields.
This reduces the size of MCSymbol to 24 bytes on a 64-bit system. The
layout is now:
0 | class llvm::MCSymbol
0 | class llvm::PointerIntPair SectionOrFragmentAndHasName
0 | intptr_t Value
| [sizeof=8, dsize=8, align=8
| nvsize=8, nvalign=8]
8 | unsigned int IsTemporary
8 | unsigned int IsRedefinable
8 | unsigned int IsUsed
8 | _Bool IsRegistered
8 | unsigned int IsExternal
8 | unsigned int IsPrivateExtern
8 | unsigned int Kind
9 | unsigned int IsUsedInReloc
9 | unsigned int SymbolContents
9 | unsigned int CommonAlignLog2
10 | uint32_t Flags
12 | uint32_t Index
16 | union
16 | uint64_t Offset
16 | uint64_t CommonSize
16 | const class llvm::MCExpr * Value
| [sizeof=8, dsize=8, align=8
| nvsize=8, nvalign=8]
| [sizeof=24, dsize=24, align=8
| nvsize=24, nvalign=8]
llvm-svn: 241196
This patch adds initial general-dynamic TLS support for AArch64. Currently
no relaxation is done to the more performant models (initial-exec or
local-exec). This patch also only correctly handles executable generation,
although preliminary DSO support through PLT-specific creation is added as
well.
With this change a clang/llvm bootstrap with lld is possible in a static
configuration (some DSO creation fails due to missing linker script support,
not AArch64-specific), although make check also shows some issues.
llvm-svn: 241192
Summary:
According to PTX ISA:
For convenience, ld, st, and cvt instructions permit source and destination data operands to be wider than the instruction-type size, so that narrow values may be loaded, stored, and converted using regular-width registers. For example, 8-bit or 16-bit values may be held directly in 32-bit or 64-bit registers when being loaded, stored, or converted to other types and sizes. The operand type checking rules are relaxed for bit-size and integer (signed and unsigned) instruction types; floating-point instruction types still require that the operand type-size matches exactly, unless the operand is of bit-size type.
So, the ISA does not support extending loads or truncating stores for
floating-point values. This is reflected in the code by setting the
loadext/truncstore actions to Expand for floating-point types, but vectors
of floating-point values are not taken care of.
As a result, loading a vector of floats followed by an fp_extend may be
combined by DAGCombiner into an extload, and the extload may be lowered to
an NVPTXISD::LoadV2 carrying extension information. However,
NVPTXISD::LoadV2 does not perform the extension, and no extending
instructions are inserted. In the end, PTX instructions with mismatched
types are generated, like
  ld.v2.f32 {%fd3, %fd4}, [%rd2]
This patch adds the correct actions for vectors of floats, so DAGCombiner
will not create extending loads, and correct code is generated.
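A sketch of the kind of action settings involved (value types
illustrative, not verbatim from the patch), inside NVPTXTargetLowering's
constructor:

  // No extending loads or truncating stores for FP vectors: keep
  // DAGCombiner from folding the fp_extend into the load.
  setLoadExtAction(ISD::EXTLOAD, MVT::v2f64, MVT::v2f32, Expand);
  setTruncStoreAction(MVT::v2f64, MVT::v2f32, Expand);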
Patch by Gang Hu.
Test Plan: Test case attached.
Reviewers: jingyue
Reviewed By: jingyue
Subscribers: llvm-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D10876
llvm-svn: 241191
Given that alignments are always powers of 2, just encode the log2 of
the alignment. This matches how we encode alignment on IR GlobalValues,
for example. This compresses the CommonAlign member down to 5 bits, which
allows it to pack better with the surrounding fields.
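A sketch of the encoding (accessor names hypothetical; CommonAlignLog2
is the 5-bit field shown in the MCSymbol layout earlier in this log):

  #include "llvm/Support/MathExtras.h"

  // Align must be a power of 2; 5 bits of log2 cover alignments up
  // to 2^31.
  void setCommonAlignment(unsigned Align) {
    CommonAlignLog2 = llvm::Log2_32(Align);
  }
  unsigned getCommonAlignment() const {
    return 1u << CommonAlignLog2;
  }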
Reviewed by Duncan Exon Smith.
llvm-svn: 241189
32-bit finally funclets are intended to be called both directly from the
parent function and indirectly from the EH runtime. Because we aren't
contorting LLVM's X86 prologue to match MSVC's, calling the finally
block directly passes in a different value of EBP than the one that the
runtime provides. We need an adapter thunk to adjust EBP to the expected
value. However, WinEHPrepare already has to solve this problem when
cleanups are not pre-outlined, so we can go ahead and rely on it rather
than duplicating work.
Now we only do the llvm.x86.seh.recoverfp dance for 32-bit SEH filter
functions.
llvm-svn: 241187
Don't pattern match for frontend-outlined finally calls on non-x64
platforms. The 32-bit runtime uses a different funclet prototype. Now,
the frontend is pre-outlining the finally bodies so that it ends up
doing most of the heavy lifting for variable capturing. We're just
outlining the call site and adapting the frameaddress(0) call to line up
the frame pointer recovery.
llvm-svn: 241186