That causes references to them to be weak references which can collapse
to null if no definition is provided. We call these functions
unconditionally, so a definition *must* be provided. Make the
definitions provided in the .cpp file weak by re-declaring them as weak
just prior to defining them. This should keep compilers which cannot
attach the weak attribute to the definition happy while actually
resolving the symbols correctly during the link.
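A minimal sketch of the pattern, with an illustrative function name rather
than the actual hooks involved:
// In the header: an ordinary strong declaration.
void hookFunction();
// In the .cpp file: re-declare as weak immediately before the definition,
// for compilers that reject the attribute directly on a definition.
void hookFunction() __attribute__((weak));
void hookFunction() { /* default behaviour */ }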
You might ask yourself upon reading this commit log: how did *any* of
this work before? Well, fun story. It turns out we have some code in
Support (BumpPtrAllocator) which both uses virtual dispatch and has
out-of-line vtables used by that virtual dispatch. If you move the
virtual dispatch into its header in *just* the right way, the optimizer
gets to devirtualize, and remove all references to the vtable. Then the
sad part: the references to this one vtable were the only strong symbol
uses in the support library for llvm-tblgen AFAICT. At least, after
doing something just like this, these symbols stopped getting their weak
definition and random calls to them would segfault instead.
Yay software.
llvm-svn: 205137
The ARM64 backend uses it only as a container to keep an MCLOHType and
Arguments around, so give it its own little copy. The other functionality
isn't used, and we had a crazy method-specialization hack in place to
keep it working. Unfortunately that was incompatible with MSVC.
Also range-ify a couple of loops while we're at it.
llvm-svn: 205114
This adds a second implementation of the AArch64 architecture to LLVM,
accessible in parallel via the "arm64" triple. The plan over the
coming weeks & months is to merge the two into a single backend,
during which time thorough code review should naturally occur.
Everything will be easier with the target in-tree though, hence this
commit.
llvm-svn: 205090
ARM64 has compact-unwind information, but doesn't necessarily want to
emit .eh_frame directives as well. This teaches MC about such a
situation so that it will skip .eh_frame info when compact unwind has
been successfully produced.
For functions incompatible with compact unwind, the normal information
is still written.
llvm-svn: 205087
Given IR like:
%bit = and %val, #imm-with-1-bit-set
%tst = icmp eq %bit, 0
br i1 %tst, label %true, label %false
some targets can emit just a single instruction (tbz/tbnz in the
AArch64 case). However, with ISel acting at the basic-block level, all
three instructions need to be together for this to be possible.
This adds another transformation to CodeGenPrep to expose these
opportunities, if targets opt in via the hook.
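As a rough sketch of what such a transformation can look like, here is a
hedged C++ fragment using generic LLVM IR APIs; the function name and the
exact legality checks are illustrative, not the actual pass or hook code:
#include "llvm/IR/Constants.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;
// Duplicate a single-bit mask test next to its conditional branch so that
// instruction selection sees the and/icmp/br together in one block.
static bool sinkMaskTestToBranch(BranchInst *BI) {
  if (!BI->isConditional())
    return false;
  auto *Cmp = dyn_cast<ICmpInst>(BI->getCondition());
  if (!Cmp || Cmp->getParent() == BI->getParent() || !Cmp->isEquality())
    return false;
  auto *Zero = dyn_cast<ConstantInt>(Cmp->getOperand(1));
  if (!Zero || !Zero->isZero())
    return false;
  auto *And = dyn_cast<BinaryOperator>(Cmp->getOperand(0));
  if (!And || And->getOpcode() != Instruction::And)
    return false;
  auto *Mask = dyn_cast<ConstantInt>(And->getOperand(1));
  if (!Mask || !Mask->getValue().isPowerOf2())
    return false;
  // Clone the and/icmp pair into the branch's block and retarget the branch.
  Instruction *NewAnd = And->clone();
  Instruction *NewCmp = Cmp->clone();
  NewAnd->insertBefore(BI);
  NewCmp->insertBefore(BI);
  NewCmp->setOperand(0, NewAnd);
  BI->setCondition(NewCmp);
  return true;
}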
llvm-svn: 205086
This is principally to allow neater mapping of fixups to relocations
in ARM64 ELF. Without this, there isn't enough information available
to GetRelocType, leading to many more fixup_arm64_... enumerators.
llvm-svn: 205085
Another part of the ARM64 backend (so tests will be following soon).
This is currently used by the linker to relax adrp/ldr pairs into nops
where possible, though could well be more broadly applicable.
llvm-svn: 205084
The upcoming ARM64 backend doesn't have section-relative relocations,
so we give each section its own symbol to provide this functionality.
Of course, it doesn't need to appear in the final executable, so
linker-private is the best kind for this purpose.
llvm-svn: 205081
ARM64 for iOS is going to want to emit these symbols in a
linker-private style for efficiency, but other targets probably don't
want that behaviour.
llvm-svn: 205080
This is like LLVMMatchType, except that the verifier checks that the
second argument is a vector with the same base type and half the
number of elements.
This will be used by the ARM64 backend.
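The shape of the constraint, as a hedged C++ sketch of the verifier-side
check (simplified stand-in, not the actual verifier code):
#include "llvm/IR/DerivedTypes.h"
using namespace llvm;
// The matched argument must share the reference vector's element type and
// have exactly half as many elements.
static bool isHalfElementsVector(VectorType *Arg, VectorType *Ref) {
  return Arg->getElementType() == Ref->getElementType() &&
         Arg->getNumElements() * 2 == Ref->getNumElements();
}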
llvm-svn: 205079
I started trying to fix a small issue, but this code has seen a small fix too
many.
The old code was fairly convoluted. Some of the issues it had:
* It failed to check if a symbol difference was in the same section when
converting a relocation to pcrel.
* It failed to check if the relocation was already pcrel.
* The pcrel value computation was wrong in some cases (relocation-pc.s).
* It was missing quite a few cases where it should not convert symbol
relocations to section relocations, leaving the backends to patch it up.
* It would not propagate the fact that it had changed a relocation to pcrel,
requiring quite a nasty workaround in ARM.
* It was missing comments.
llvm-svn: 205076
These are used in the ARM backends to aid type-checking on patterns involving
intrinsics, by making sure one argument is an extended/truncated version of
another.
However, there's no reason to limit them to just vector types. For example,
AArch64 has the instruction "uqshrn sD, dN, #imm" which would naturally use an
intrinsic taking an i64 and returning an i32.
llvm-svn: 205003
Make the growth of the slab size in BumpPtrAllocator significantly less
strange by making it a simple function of the number of slabs allocated
rather than a recurrence. I *think* the previous behavior was essentially
that the
size of the slabs would be doubled after the first 128 were allocated,
and then doubled again each time 64 more were allocated, but only if
every allocation packed perfectly into the slab size. If not, the wasted
space wouldn't be counted toward increasing the size, but allocations
over the size threshold *would*. And since the allocations over the size
threshold might be much larger than the slab size, this could have
somewhat surprising consequences where we rapidly grow the slab size.
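For illustration, the new scheme boils down to something like the following,
where the base size and the doubling period are stand-in constants for the
example rather than necessarily the exact values used:
#include <algorithm>
#include <cstddef>
// Slab size as a pure function of the slab index: double the size once per
// 128 slabs, with a cap on the shift to avoid overflow.
static size_t slabSizeForIndex(size_t SlabIdx, size_t BaseSize = 4096) {
  return BaseSize * ((size_t)1 << std::min<size_t>(30, SlabIdx / 128));
}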
This currently requires adding state to the allocator to track the
number of slabs currently allocated, but that isn't too bad. I'm
planning further changes to the allocator that will make this state fall
out even more naturally.
It still doesn't fully decouple the growth rate from the allocations
which are over the size threshold. That fix is coming later.
This specific fix will allow making the entire thing into a more
stateless device and lifting the parameters into template parameters
rather than runtime parameters.
llvm-svn: 204993
Construct a uniform Windows target triple nomenclature which is congruent to the
Linux counterpart. The old triples are normalised to the new canonical form.
This cleans up the long-standing issue of odd naming for various Windows
environments.
There are four different environments on Windows:
MSVC: The MS ABI, MSVCRT environment as defined by Microsoft
GNU: The MinGW32/MinGW32-W64 environment which uses MSVCRT and auxiliary libraries
Itanium: The MSVCRT environment + libc++ built with Itanium ABI
Cygnus: The Cygwin environment which uses custom libraries for everything
The following spellings are now written as:
i686-pc-win32 => i686-pc-windows-msvc
i686-pc-mingw32 => i686-pc-windows-gnu
i686-pc-cygwin => i686-pc-windows-cygnus
This should be sufficiently flexible to allow us to target other Windows
environments in the future as necessary.
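A small usage sketch of the normalization, with the environment enum name
assumed to follow the spellings above:
#include "llvm/ADT/Triple.h"
#include <cassert>
using namespace llvm;
void example() {
  // The old spelling is rewritten to the new canonical form.
  Triple T(Triple::normalize("i686-pc-win32"));
  assert(T.str() == "i686-pc-windows-msvc");
  assert(T.getEnvironment() == Triple::MSVC);
}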
llvm-svn: 204977
1) When creating a .debug_* section, create a .zdebug_* section
instead.
2) When creating a fragment in a .zdebug_* section, make it a compressed
fragment.
3) When computing the size of a compressed section, compress the data
and use the size of the compressed data.
4) Emit the compressed bytes.
Also, check that if a section has a compressed fragment, it is
the only fragment in the section.
Assert-fail if the fragment's data is modified after it is compressed.
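For steps 3 and 4, a self-contained sketch of producing the payload,
assuming the GNU .zdebug convention of a "ZLIB" magic followed by the
big-endian 64-bit uncompressed size and then the zlib stream; this is an
illustration of the format, not MC's actual code:
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>
#include <zlib.h>
static std::vector<uint8_t> makeZDebugPayload(const std::string &Raw) {
  uLongf CompressedSize = compressBound(Raw.size());
  std::vector<uint8_t> Out(12 + CompressedSize);
  std::memcpy(Out.data(), "ZLIB", 4);          // magic
  uint64_t Size = Raw.size();
  for (int I = 0; I < 8; ++I)                  // big-endian uncompressed size
    Out[4 + I] = uint8_t(Size >> (8 * (7 - I)));
  compress(Out.data() + 12, &CompressedSize,
           reinterpret_cast<const Bytef *>(Raw.data()), Raw.size());
  Out.resize(12 + CompressedSize);             // trim to the actual size
  return Out;
}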
Initial review on llvm-commits by Eric Christopher and Rafael Espindola.
llvm-svn: 204958
This adds back r204781.
Original message:
Aliases are just another name for a position in a file. As such, the
regular symbol resolutions are not applied. For example, given
define void @my_func() {
ret void
}
@my_alias = alias weak void ()* @my_func
@my_alias2 = alias void ()* @my_alias
We produce without this patch:
.weak my_alias
my_alias = my_func
.globl my_alias2
my_alias2 = my_alias
That is, in the resulting ELF file my_alias, my_alias2 and my_func are
just 3 names pointing to offset 0 of .text. That is *not* the
semantics of IR linking. For example, linking in a
@my_alias = alias void ()* @other_func
would require the strong my_alias to override the weak one and
my_alias2 would end up pointing to other_func.
There is no way to represent that with aliases being just another
name, so the best solution seems to be to just disallow it, converting
a miscompile into an error.
llvm-svn: 204934
Fix up a bunch of comments in our allocators and rewrite some of them
to be more clear.
The terminology being used in our allocators is making me really sad. We
call things slab allocators that aren't at all slab allocators. It is
quite confusing.
llvm-svn: 204907
It seems that gcov, when faced with a string that is apparently zero
length, just keeps reading words until it finds a length it likes
better. I'm not really sure why this is, but it's simple enough to
make llvm-cov follow suit.
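A hedged sketch of the resulting reader logic, with a simplified stand-in
for llvm-cov's buffer type: a gcov string is a length word (counted in
4-byte units) followed by that many words of characters, and a zero length
word is simply skipped:
#include <cstdint>
#include <string>
#include <vector>
static bool readGCOVString(const std::vector<uint32_t> &Words, size_t &Cursor,
                           std::string &Str) {
  uint32_t Len = 0;
  while (Len == 0) {              // keep reading until a non-zero length
    if (Cursor >= Words.size())
      return false;
    Len = Words[Cursor++];
  }
  if (Cursor + Len > Words.size())
    return false;
  // Characters are packed into words; trailing NUL padding is included.
  Str.assign(reinterpret_cast<const char *>(&Words[Cursor]), Len * 4);
  Cursor += Len;
  return true;
}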
llvm-svn: 204881
In CallInst, op_end() points at the callee, which we don't want to iterate
over when just iterating over arguments. Now take this into account when
returning an iterator_range from arg_operands. Similar reasoning applies to
InvokeInst.
Also adds a unit test to verify this actually works as expected.
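A quick usage sketch, assuming a CallInst *CI is in hand:
#include "llvm/IR/Instructions.h"
using namespace llvm;
void visitArgs(CallInst *CI) {
  // Visits only the actual arguments; the callee at op_end()-1 is excluded.
  for (Value *Arg : CI->arg_operands())
    Arg->dump();
}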
llvm-svn: 204851
The edge data structure (EdgeEntry) now holds the indices of its entries in the
adjacency lists of the nodes it connects. This trades a little ugliness for
faster insertion/removal, which is now O(1) with a cheap constant factor. All
of this is implementation detail within the PBQP graph, the external API remains
unchanged.
Individual register allocations are likely to change, since the adjacency lists
will now be ordered differently (or rather, will now be unordered). This
shouldn't affect the average quality of allocations, however.
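A simplified sketch of the bookkeeping, with stand-in types rather than the
actual PBQP classes: removal swaps the back of the adjacency list into the
vacated slot and repairs the stored index on the moved edge:
#include <vector>
struct EdgeEntry {
  unsigned N1, N2;               // the two connected nodes
  unsigned N1AdjIdx, N2AdjIdx;   // this edge's slots in their adjacency lists
};
struct NodeEntry {
  std::vector<unsigned> AdjEdgeIds;
};
static void removeAdjEdge(std::vector<EdgeEntry> &Edges, NodeEntry &N,
                          unsigned NodeId, unsigned AdjIdx) {
  unsigned MovedEId = N.AdjEdgeIds.back();
  N.AdjEdgeIds[AdjIdx] = MovedEId;   // swap the last entry into the hole
  N.AdjEdgeIds.pop_back();
  EdgeEntry &Moved = Edges[MovedEId];
  if (Moved.N1 == NodeId)            // fix the moved edge's stored position
    Moved.N1AdjIdx = AdjIdx;
  else
    Moved.N2AdjIdx = AdjIdx;
}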
llvm-svn: 204841
This patch is in a similar vein to what was done earlier for
Module::globals/aliases etc. It allows iterating over function arguments
like this:
for (Argument &Arg : F.args()) {
  ...
}
llvm-svn: 204835
We've already got versions without the barriers, so this just adds IR-level
support for generating the new v8 ones.
rdar://problem/16227836
llvm-svn: 204813