Summary:
Refactor some code to improve clarity and style based on review
comments from http://reviews.llvm.org/D18093.
Reviewers: MatzeB, mcrosier
Subscribers: MatzeB, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D19128
llvm-svn: 266372
On s390, the return address is in %r14, which is saved 14 words from
the frame pointer.
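As a rough illustration of that layout, here is a minimal sketch of reading
that slot (not the actual sanitizer code; it assumes the machine word size
matches std::uintptr_t):
  #include <cstdint>

  // Read the saved return address of a frame, given that frame's pointer:
  // on s390, the slot sits 14 machine words past the frame pointer.
  std::uintptr_t ReturnAddressOf(std::uintptr_t frame_pointer) {
    return reinterpret_cast<const std::uintptr_t *>(frame_pointer)[14];
  }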
Unfortunately, there's no way to do a proper fast backtrace on SystemZ
with current LLVM - the saved %r15 in the fixed-layout register save
area points to the containing frame itself, not to the next one.
Likewise for %r11 - it's identical to %r15, unless alloca is used
(and even if it is, it's still useless). There's just no way to
determine the frame size / next frame pointer. -mbackchain would fix that
(and make the current code just work), but that's not yet supported
in LLVM. We will thus need to XFAIL some asan tests
(Linux/stack-trace-dlclose.cc, deep_stack_uaf.cc).
Differential Revision: http://reviews.llvm.org/D18895
llvm-svn: 266371
This is the first part of upcoming asan support for s390 and s390x.
Note that there are bits for 31-bit support in this and subsequent
patches - while LLVM itself doesn't support it, gcc should be able
to make use of it just fine.
Differential Revision: http://reviews.llvm.org/D18888
llvm-svn: 266370
Summary:
The check detects multi-statement macros that are used in unbraced conditionals.
Only the first statement of the macro will be part of the conditional; the
rest will fall outside of it and be executed unconditionally.
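For illustration, a minimal example of the pattern the check flags (the
macro name and code are made up):
  #define INC_BOTH(a, b) (a)++; (b)++

  void f(bool cond, int &x, int &y) {
    if (cond)
      INC_BOTH(x, y);  // only "(x)++" is guarded; "(y)++" runs unconditionally
  }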
Reviewers: alexfh
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D18766
llvm-svn: 266369
The _gp_disp symbol designates the offset between the start of a function
and the 'gp' pointer into the GOT. The following code is a typical MIPS
function preamble used to set up the $gp register:
lui $gp, %hi(_gp_disp)
addi $gp, $gp, %lo(_gp_disp)
To calculate the R_MIPS_HI16 / R_MIPS_LO16 relocation results we use
the following formulas:
%hi(_gp - P + A)
%lo(_gp - P + A + 4),
where _gp is the value of the _gp symbol, A is the addend, and P is the
current address.
The R_MIPS_LO16 relocation that references the _gp_disp symbol is always
in the second instruction. That is why we need the four-byte adjustment.
The patch assigns the R_PC type to the R_MIPS_LO16 relocation and adjusts
its addend by 4. That fixes the R_MIPS_LO16 calculation.
For details see p. 4-19 at ftp://www.linux-mips.org/pub/linux/mips/doc/ABI/mipsabi.pdf
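For illustration, a minimal sketch of the %hi/%lo split used in the
formulas above (standard MIPS ABI definitions, not the actual lld code):
  #include <cstdint>

  // %hi carries the sign of the later %lo half, hence the 0x8000 rounding.
  std::uint32_t mipsHi16(std::uint64_t V) { return ((V + 0x8000) >> 16) & 0xffff; }
  std::uint32_t mipsLo16(std::uint64_t V) { return V & 0xffff; }

  // R_MIPS_HI16 result: mipsHi16(GP - P + A)
  // R_MIPS_LO16 result: mipsLo16(GP - P + A + 4)  // +4 because _gp_disp is
  //                                               // referenced by the second
  //                                               // instruction of the pair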
Differential Revision: http://reviews.llvm.org/D19115
llvm-svn: 266368
We never need to iterate over the K,V pairs, so we can avoid copying the
key as MapVector does.
This is a small speedup on most benchmarks.
llvm-svn: 266364
Some SIMD implementations are not IEEE-754 compliant, for example ARM's NEON.
This patch teaches the loop vectorizer to only allow transformations of loops
that either contain no floating-point operations or whose FP operations carry
flags that allow reduced precision (e.g. -ffast-math, Darwin).
For that, the target description now has a method which tells us if the
vectorizer is allowed to handle FP math without falling into unsafe
representations, plus a check on every FP instruction in the candidate loop
for the required safety flags.
This commit makes LLVM behave like GCC with respect to ARM NEON support, but
it stops short of fixing the underlying problem: sub-normals. Neither GCC
nor LLVM has a flag for allowing sub-normal operations. Before this patch,
GCC only allows it using unsafe-math flags and LLVM allows it by default with
no way to turn it off (short of not using NEON at all).
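To make the term concrete, here is a small standalone illustration of what a
sub-normal is; it only shows the magnitudes involved and does not exercise
NEON itself:
  #include <cfloat>
  #include <cstdio>

  int main() {
    volatile float x = FLT_MIN;      // smallest normal single-precision value
    volatile float tiny = x / 2.0f;  // sub-normal under IEEE-754 arithmetic;
                                     // flush-to-zero SIMD units (e.g. NEON)
                                     // produce 0.0 here instead
    std::printf("%g\n", static_cast<float>(tiny));
    return 0;
  }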
As a first step, we push this change to make it safe and in sync with GCC.
The second step is to discuss a new sub-normals flag with both communities
and come up with a common solution. The third step is to improve the FastMath
flags in LLVM to encode sub-normals and use those flags to restrict NEON FP.
Fixes PR16275.
llvm-svn: 266363
https://llvm.org/bugs/show_bug.cgi?id=27105
We can check if all bits outside of a constant mask are set with a
single constant.
As noted in the bug report, although this form should be considered the
canonical IR, backends may want to transform this into an 'andn' / 'andc'
comparison against zero because that could be a single machine instruction.
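For illustration, a C-level statement of the equivalence involved (the mask
value is just an example; this is not meant to show the exact IR forms
touched by the patch):
  // "All bits of x outside MASK are set" can be written in equivalent ways:
  bool allBitsOutsideMaskSet(unsigned x) {
    const unsigned MASK = 0xFFu;
    return (x & ~MASK) == ~MASK;   // single-constant form: compare against ~MASK
    // equivalently: (x | MASK) == ~0u
    // 'andn'-style form against zero a backend might prefer: (~x & ~MASK) == 0
  }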
Differential Revision: http://reviews.llvm.org/D18842
llvm-svn: 266362
The DenseMap doesn't store hash results. This means that when it is
resized, it has to recompute them.
This patch is a small hack that wraps the StringRef in a struct that
remembers the hash value. That way we can be sure it is only hashed
once.
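For illustration, a minimal standard-C++ sketch of the idea (the actual
change wraps a StringRef for use in a DenseMap; the names here are made up):
  #include <cstddef>
  #include <functional>
  #include <string_view>

  struct HashedStringRef {
    std::string_view Str;
    std::size_t Hash;              // computed once; reused when the map rehashes
    explicit HashedStringRef(std::string_view S)
        : Str(S), Hash(std::hash<std::string_view>{}(S)) {}
  };
  // The map's hash function then simply returns the cached Hash member.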
llvm-svn: 266357
Summary:
This adds the necessary target code to be able to run the IR translator.
Lowering function arguments and returns is a nop and there is no support
for RegBankSelect.
Reviewers: arsenm, qcolombet
Subscribers: arsenm, joker.eph, vkalintiris, llvm-commits
Differential Revision: http://reviews.llvm.org/D19077
llvm-svn: 266356
MachineInstr.h and MachineInstrBuilder.h are very popular headers,
widely included across all LLVM backends. It turns out that there are only a
handful of TUs that actually care about DI operands on MachineInstrs.
After this change, touching DebugInfoMetadata.h and rebuilding llc only
needs 112 actions instead of 542.
llvm-svn: 266351
Summary:
If a PHI has an incoming undef, we can pretend that it is equal to one
non-undef, non-self incoming value.
This is particularly relevant in combination with the StructurizeCFG
pass, which introduces PHI nodes with undefs. Previously, this led to
branch conditions that were uniform before StructurizeCFG becoming
non-uniform afterwards, which confused the SIAnnotateControlFlow
pass.
This fixes a crash when Mesa radeonsi compiles a shader from
dEQP-GLES3.functional.shaders.switch.switch_in_for_loop_dynamic_vertex
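For illustration, a hedged sketch of the incoming-value rule from the first
paragraph (a standalone helper, not the actual divergence-analysis change):
  #include "llvm/IR/Constants.h"
  #include "llvm/IR/Instructions.h"

  // Returns the single non-undef, non-self incoming value of PN, or null if
  // the PHI has two genuinely different incoming values.
  static llvm::Value *getUniqueNonUndefIncomingValue(llvm::PHINode &PN) {
    llvm::Value *Unique = nullptr;
    for (llvm::Value *V : PN.incoming_values()) {
      if (llvm::isa<llvm::UndefValue>(V) || V == &PN)
        continue;                  // pretend undef / self matches anything
      if (!Unique)
        Unique = V;
      else if (Unique != V)
        return nullptr;
    }
    return Unique;                 // non-null => the PHI can be treated as uniform
  }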
Reviewers: arsenm, tstellarAMD, jingyue
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D19013
llvm-svn: 266347
Summary:
This fully solves the problem where the StructurizeCFG pass does not
consider the same branches as uniform as the SIAnnotateControlFlow pass.
The patch in D19013 helps with this problem, but is not sufficient
(and, interestingly, causes a "regression" with one of the existing
test cases).
No tests included here, because tests in D19013 already cover this.
Reviewers: arsenm, tstellarAMD
Subscribers: arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D19018
llvm-svn: 266346
Summary:
This pass is unnecessary and overly conservative. It was motivated by
situations like
def %vreg0:SGPR_32
...
if-block:
..
def %vreg1:SGPR_32
...
else-block:
...
use %vreg0:SGPR_32
...
and similar situations with uses after the non-uniform control flow, where
we are not allowed to assign %vreg0 and %vreg1 to the same physical register,
even though in the original, thread/workitem-based CFG, it looks like the
live ranges of these registers do not overlap.
However, by the time register allocation runs, we have moved to a wave-based
CFG that accurately represents the fact that the wave may run through both
the if- and the else-block. So the live ranges of %vreg0 and %vreg1 already
overlap even without the SIFixSGPRLiveRanges pass.
In addition to proving this change correct, I have tested it with Piglit
and a small number of other tests.
Reviewers: arsenm, tstellarAMD
Subscribers: MatzeB, arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D19041
llvm-svn: 266345
Summary:
I've been carrying this change around with me for a while, because the if ()
managed to confuse me while following the code. All callers ensure that the
assertion holds.
Reviewers: arsenm, tstellarAMD
Subscribers: arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D19042
llvm-svn: 266344
FastRegAlloc works only at the basic-block level and spills all live-out
registers. Unfortunately, for a stack-based cmpxchg near the spill slots, this
can perpetually clear the exclusive monitor, which means the cmpxchg will never
succeed.
I believe the only way to handle this within LLVM is by expanding the loop
post-regalloc. We don't want this in general because it severely limits the
optimisations that can be done, so we limit this to -O0 compilations.
It's an ugly hack, and about the only good point in the whole mess is that we
can treat all cmpxchg operations in the most naive way possible (seq_cst, no
clrex faff) without affecting correctness.
Should fix PR25526.
llvm-svn: 266339
Summary:
For GL_ARB_compute_shader we need to support workgroup sizes of at least 1024. However, if we want to allow large workgroup sizes, we may need to use fewer registers, as we have to run more waves per SIMD.
This patch adds an attribute to specify the maximum work group size the compiled program needs to support. It defaults to 256, as that has no wave restrictions.
Reducing the number of registers available is done similarly to how the registers were reserved for chips with the sgpr init bug.
Reviewers: mareko, arsenm, tstellarAMD, nhaehnle
Subscribers: FireBurn, kerberizer, llvm-commits, arsenm
Differential Revision: http://reviews.llvm.org/D18340
Patch By: Bas Nieuwenhuizen
llvm-svn: 266337
Summary:
The code previously always used s1, as it was using the user + system SGPR
information for compute kernels. This is incorrect for Mesa shaders, though;
the register should be the next SGPR after all user and system SGPRs.
We rely on the fact that Mesa adds arguments for all input and system SGPRs
and take the next available SGPR for the scratch wave offset register.
Signed-off-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reviewers: mareko, arsenm, nhaehnle, tstellarAMD
Subscribers: qcolombet, arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D18941
Patch By: Bas Nieuwenhuizen
llvm-svn: 266336
Summary:
Add a print method to Predicated Scalar Evolution which prints all interesting
transformations done by PSE.
Loop Access Analysis will now print this as part of the analysis output.
We now use this to check the exact expression transformations that were done
by PSE in LAA.
The additional checking also acts as white-box testing for the getAsAddRec method.
Reviewers: anemet, sanjoy
Subscribers: sanjoy, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D18792
llvm-svn: 266334
Summary: The patch fixes the generation of the clang-tidy documentation.
Reviewers: alexfh
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D19121
llvm-svn: 266333
ittnotify fix for barrier imbalance time when tasks exist. In the current
implementation, task execution time is included in the aggregated time at a
barrier. This fix calculates the task execution time and corrects the arrive
time by subtracting the task execution time.
Since __kmp_invoke_task() is not only called at a barrier, the field
th.th_bar_arrive_time is used to check whether the function was called at the
barrier (th.th_bar_arrive_time != 0). To make this check work,
th_bar_arrive_time is reset to zero right after its value is used at the barrier.
Differential Revision: http://reviews.llvm.org/D19030
llvm-svn: 266332
This change adds backoff logic to the test-and-set lock for better contended
lock performance. It uses a simple truncated binary exponential backoff
function. The default backoff parameters are tuned for x86.
The main backoff logic has a two-loop structure where each loop is controlled
by a user-level parameter:
max_backoff - limits the number of iterations of the outer loop.
This parameter should be a power of 2.
min_ticks - the number of "ticks" for the inner spin-wait loop, which is
system dependent and should be tuned for your system if you so choose.
The "ticks" on x86 correspond to the time stamp counter,
but on other architectures ticks are a timestamp derived
from gettimeofday().
The user can modify these via the environment variable:
KMP_SPIN_BACKOFF_PARAMS=max_backoff[,min_ticks]
Currently, since the default user lock is a queuing lock,
one would have to also specify KMP_LOCK_KIND=tas to use the test-and-set locks.
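For illustration, a hedged sketch of the two-loop, truncated binary
exponential backoff shape described above, written against std::atomic_flag
and using nanoseconds in place of platform "ticks"; the parameter names
mirror the text, but this is not the actual runtime code:
  #include <atomic>
  #include <chrono>
  #include <cstdint>

  void backoff_lock(std::atomic_flag &lck, std::uint32_t max_backoff,
                    std::uint32_t min_ticks) {
    std::uint32_t boff = 1;                                // current backoff factor
    while (lck.test_and_set(std::memory_order_acquire)) {  // contended: back off
      auto start = std::chrono::steady_clock::now();
      auto wait = std::chrono::nanoseconds(std::uint64_t(min_ticks) * boff);
      while (std::chrono::steady_clock::now() - start < wait)
        ;                                                  // inner spin-wait loop
      if (boff < max_backoff)                              // outer loop, truncated
        boff <<= 1;                                        // at max_backoff (a power of 2)
    }
  }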
Differential Revision: http://reviews.llvm.org/D19020
llvm-svn: 266329
The android dirty stderr problem has uncovered an issue where lldbutil.expect_state_changes was
reading events other than state change events, which resulted in general confusion. Make it
stricter so that it accepts *only* state change events.
llvm-svn: 266327
Summary:
On some android targets, a binary can produce additional garbage (e.g. warning messages from the
dynamic linker) on the standard error, which confuses some tests. This relaxes the stderr
expectations for targets known for their chattiness.
Reviewers: tfiala, ovyalov
Subscribers: tberghammer, danalbert, srhines, lldb-commits
Differential Revision: http://reviews.llvm.org/D19114
llvm-svn: 266326