a) x86-64 TLS has been documented, and
b) the code path should use movq so that the correct relocation is
generated.
I've also added a FIXME to the test case noting that we should improve the
generated code; it should look something like what is documented in the
TLS ABI document.
llvm-svn: 192631
Clobbering is exclusive, not inclusive, on register units.
For liveness, we need to consider all the preserved registers.
For example, a regmask that clobbers YMM0 may still preserve XMM0.
Units are only clobbered when all of their super-registers are clobbered.
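A hedged sketch of the corrected check, assuming the usual MC register
iterators (illustrative, not the exact code from this commit):
    #include "llvm/MC/MCRegisterInfo.h"
    using namespace llvm;

    // A unit is clobbered only if every register containing it is
    // clobbered. In a regmask, a set bit means the register is preserved.
    bool unitIsClobbered(unsigned Unit, const uint32_t *RegMask,
                         const MCRegisterInfo &MCRI) {
      for (MCRegUnitRootIterator Root(Unit, &MCRI); Root.isValid(); ++Root)
        for (MCSuperRegIterator S(*Root, &MCRI, /*IncludeSelf=*/true);
             S.isValid(); ++S)
          if (RegMask[*S / 32] & (1u << (*S % 32)))
            return false; // preserved via a containing reg (e.g. XMM0)
      return true;        // every containing register is clobbered
    }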
llvm-svn: 192623
Some clients may add block live-ins and may track liveness over a large
scope. This guarantees an efficient implementation in all cases, with no
memory allocation/deallocation, independent of the number of target
registers. It could be slightly less convenient, but is fine in the
expected case.
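A hypothetical sketch of the kind of fixed-footprint tracker this
describes (the class and its members are illustrative, not the interface
actually added here):
    #include "llvm/ADT/BitVector.h"

    // One bit per physical register, sized once at init; adding, removing,
    // and querying registers while scanning never allocates, however many
    // registers the target has.
    class LiveRegTracker {
      llvm::BitVector Live;
    public:
      void init(unsigned NumTargetRegs) {
        Live.clear();
        Live.resize(NumTargetRegs);
      }
      void addReg(unsigned Reg) { Live.set(Reg); }
      void removeReg(unsigned Reg) { Live.reset(Reg); }
      bool contains(unsigned Reg) const { return Live.test(Reg); }
    };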
llvm-svn: 192622
Clean up creation of static member DIEs. We can create static member DIEs
from two places, so we call getOrCreateStaticMemberDIE from both.
getOrCreateStaticMemberDIE will get or create the context DIE first; then
it checks whether the DIE already exists, and if not, creates the static
member DIE and adds it to the context.
Creation of static member DIEs is handled in a similar way to subprogram
DIEs.
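In sketch form (simplified, with names following today's DWARF emitter
rather than the exact patch):
    // Construct the context DIE before checking for an existing DIE, since
    // creating the context can itself create this static member's DIE.
    DIE *DwarfUnit::getOrCreateStaticMemberDIE(const DIDerivedType *DT) {
      DIE *ContextDIE = getOrCreateContextDIE(DT->getScope());
      if (DIE *StaticMemberDIE = getDIE(DT))
        return StaticMemberDIE;
      DIE &StaticMemberDIE = createAndAddDIE(DT->getTag(), *ContextDIE, DT);
      // ... add name, type, declaration flag, constant value, etc. ...
      return &StaticMemberDIE;
    }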
llvm-svn: 192618
Per original comment, the intention of this loop
is to go ahead and break the critical edge
(in order to sink this instruction) if there's
reason to believe doing so might "unblock" the
sinking of additional instructions that define
registers used by this one. The idea is that if
we have a few instructions to sink "together"
breaking the edge might be worthwhile.
This commit makes a few small changes
to help better realize this goal:
First, modify the loop to ignore registers
defined by this instruction. We don't
sink definitions of physical registers,
and sinking an SSA definition isn't
going to unblock an upstream instruction.
Second, ignore uses of physical registers.
Instructions that define physical registers are
rejected for sinking, and so moving this one
won't enable moving any defining instructions.
As an added bonus, while virtual register
use-def chains are generally small due
to SSA goodness, iteration over the uses
and definitions (used by hasOneNonDBGUse)
for physical registers like EFLAGS
can be rather expensive in practice.
(This was the original reason for looking at this.)
Finally, to keep things simple, continue
to only consider this trick for registers that
have a single use (via hasOneNonDBGUse),
but to avoid spuriously breaking critical edges,
only do so if the definition resides
in the same MBB, since then this instruction
directly blocks it from being sunk as well.
If sinking them together is meant to be,
let the iterative nature of this pass
sink the definition into this block first.
Update tests to accommodate this change, and
add a new testcase where sinking avoids pipeline stalls.
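In sketch form (hedged: helper structure and names are illustrative,
simplified from the real MachineSinking logic):
    #include "llvm/CodeGen/MachineInstr.h"
    #include "llvm/CodeGen/MachineRegisterInfo.h"
    using namespace llvm;

    // Decide whether breaking the critical edge might also unblock the
    // sinking of an instruction that defines one of MI's operands.
    static bool mightUnblockDefiningInstr(const MachineInstr &MI,
                                          const MachineRegisterInfo &MRI) {
      for (const MachineOperand &MO : MI.operands()) {
        if (!MO.isReg() || MO.isDef())
          continue; // ignore this instruction's own definitions
        Register Reg = MO.getReg();
        if (!Reg.isVirtual())
          continue; // ignore physreg uses; walking e.g. EFLAGS is costly
        if (!MRI.hasOneNonDBGUse(Reg))
          continue; // keep it simple: single-use chains only
        MachineInstr *DefMI = MRI.getVRegDef(Reg);
        if (DefMI && DefMI->getParent() == MI.getParent())
          return true; // def blocked in this MBB; edge split may pay off
      }
      return false;
    }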
llvm-svn: 192608
Currently MSan checks that the arguments of *cvt* intrinsics are fully
initialized. That's too much to ask: some of them operate only on the
lower half, or even quarter, of the input register.
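For instance (an illustrative case, not taken from the patch), cvtsd2si
reads only the low double of its 128-bit operand, so an undefined upper
lane should not trip MSan:
    #include <immintrin.h>

    int cvt_low_half(double d) {
      __m128d v = _mm_undefined_pd(); // upper lane intentionally undefined
      v = _mm_loadl_pd(v, &d);        // initialize only the low double
      return _mm_cvtsd_si32(v);       // cvtsd2si converts the low element only
    }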
llvm-svn: 192599
INSERT is the first type of MSA instruction that requires a change to the
way MSA registers are parsed. This happens because MSA registers may be
suffixed by an index in the form of an immediate or a general-purpose
register. The changes to parseMSARegs reflect that requirement.
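A hypothetical sketch of the shape of that change (the Parser type and
its helpers are illustrative stand-ins for the real MipsAsmParser, which
is considerably more involved):
    // Illustrative stand-ins; each helper returns true on success.
    struct Parser {
      bool parseMSAReg();            // e.g. $w2
      bool tryConsume(const char *); // consume a literal token if present
      bool parseImmediate();         // e.g. the 1 in $w2[1]
      bool parseGPR();               // e.g. the $5 in $w3[$5]
    };

    // After an MSA register, accept an optional bracketed index: either an
    // immediate (insert.w $w2[1], $4) or a GPR (sld.w $w2, $w3[$5]).
    bool parseMSARegWithOptionalIndex(Parser &P) {
      if (!P.parseMSAReg())
        return false;
      if (!P.tryConsume("["))
        return true;      // plain register, no index suffix
      if (!P.parseImmediate() && !P.parseGPR())
        return false;     // index is neither an immediate nor a GPR
      return P.tryConsume("]");
    }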
llvm-svn: 192582
This can happen when processing command-line arguments, which are often
stored as std::strings and later turned into StringRefs via
std::string::data(). Unfortunately, before C++11, data() is not guaranteed
to return a null-terminated string, causing breakage on platforms where it
doesn't.
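Concretely, the pitfall looks like this (illustrative, not the exact call
site):
    #include <string>
    #include "llvm/ADT/StringRef.h"

    std::string Arg = "foo";
    // StringRef(const char *) calls strlen(), so it needs a NUL-terminated
    // pointer -- something data() only guarantees as of C++11.
    llvm::StringRef Bad(Arg.data());
    // Safe everywhere: pass the length explicitly.
    llvm::StringRef Good(Arg.data(), Arg.size());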
llvm-svn: 192558
We were using an anti-pattern of:
- LoadLibrary
- GetProcAddress
- FreeLibrary
This is problematic for several reasons:
- We are holding on to pointers into a library we just unloaded.
- Calling LoadLibrary increases the reference count of the library in
question and of any libraries it depends on, and so on and so forth.
This is none too quick.
Instead, use GetModuleHandleEx with GET_MODULE_HANDLE_EX_FLAG_PIN. We pin
the module because we didn't bring its reference into existence ourselves
and therefore shouldn't count on it being around later.
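For illustration (the module and symbol names here are hypothetical), the
replacement pattern looks like:
    #include <windows.h>

    // Look up a symbol in a DLL the process has already loaded, pinning
    // the module so the returned pointer cannot dangle.
    FARPROC lookupPinned() {
      HMODULE Mod;
      if (!GetModuleHandleExW(GET_MODULE_HANDLE_EX_FLAG_PIN,
                              L"somedll.dll", &Mod))
        return NULL;
      // Pinned: the module stays loaded until the process exits, so there
      // is no FreeLibrary call and no dangling function pointer.
      return GetProcAddress(Mod, "SomeFunction");
    }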
llvm-svn: 192550
Before this patch we relied on the order of phi nodes when we looked for
phi nodes of the same type. This could prevent vectorization in cases
where a phi node of a second type sat in between the phi nodes of the type
we were looking for.
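As an illustration (a made-up example, not the internal kernel), a loop
like the following can produce an i32 induction phi sitting between two
float recurrence phis in the IR:
    // The i32 phi for 'i' can be emitted between the float phis for 'x'
    // and 'y'; with the old order-dependent scan, 'x' and 'y' would then
    // not be paired for vectorization.
    void accumulate(const float *A, int n, float *Out) {
      float x = 0.0f, y = 0.0f;
      for (int i = 0; i < n; ++i) {
        x += A[2 * i];
        y += A[2 * i + 1];
      }
      Out[0] = x;
      Out[1] = y;
    }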
This is important for vectorization of an internal graphics kernel. On
the test suite plus externals on x86_64 (and on a run on armv7s) it showed
no impact on either performance or compile time.
radar://15024459
llvm-svn: 192537