Inside iterate, we scan backwards then scan forwards in a loop. When iteration
is not zero, the last node was just updated so we can skip it. But when
iteration is zero, we can't skip the last node.
For the testing case, fixing this will save a spill and move register copies
from hot path to cold path.
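A minimal sketch of the loop structure described above (hypothetical Node,
update() and converged() stand-ins, not the actual code):

  #include <vector>

  struct Node { bool Dirty = true; };
  static void update(Node &N) { N.Dirty = false; } // stand-in for real work
  static bool converged(const std::vector<Node> &Nodes) {
    for (const Node &N : Nodes)
      if (N.Dirty)
        return false;
    return true;
  }

  static void iterate(std::vector<Node> &Nodes) {
    for (unsigned Iteration = 0; !converged(Nodes); ++Iteration) {
      int Last = (int)Nodes.size() - 1;
      // Backward scan: after the first iteration the last node was just
      // updated, so start one node earlier; on iteration zero it must
      // still be visited.
      for (int I = (Iteration == 0 ? Last : Last - 1); I >= 0; --I)
        update(Nodes[I]);
      // Forward scan.
      for (int I = 0; I <= Last; ++I)
        update(Nodes[I]);
    }
  }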
llvm-svn: 202557
The previous PBQP solver was very robust but consumed a lot of memory,
performed a lot of redundant computation, and contained some unnecessarily tight
coupling that prevented experimentation with novel solution techniques. This new
solver is an attempt to address these shortcomings.
Important/interesting changes:
1) The domain-independent PBQP solver class, HeuristicSolverImpl, is gone.
It is replaced by a register allocation specific solver, PBQP::RegAlloc::Solver
(see RegAllocSolver.h).
The optimal reduction rules and the backpropagation algorithm have been extracted
into stand-alone functions (see ReductionRules.h), which can be used to build
domain specific PBQP solvers. This provides many more opportunities for
domain-specific knowledge to inform the PBQP solvers' decisions. In theory this
should allow us to generate better solutions. In practice, we can at least test
out ideas now.
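For illustration, here is a hedged sketch of the R1 rule using plain
containers in place of the real cost types: a degree-one node x is
eliminated by folding its vector and edge costs into its single neighbor y,
and x's choice is recovered later during backpropagation.

  #include <algorithm>
  #include <cstddef>
  #include <limits>
  #include <vector>

  using Vector = std::vector<double>;
  using Matrix = std::vector<std::vector<double>>; // [x option][y option]

  // Fold degree-1 node x into neighbor y: for each option j of y, add
  // the cheapest combined cost of picking any option i for x.
  Vector foldR1(const Vector &XCosts, const Matrix &EdgeCosts,
                Vector YCosts) {
    for (std::size_t J = 0; J != YCosts.size(); ++J) {
      double Best = std::numeric_limits<double>::infinity();
      for (std::size_t I = 0; I != XCosts.size(); ++I)
        Best = std::min(Best, XCosts[I] + EdgeCosts[I][J]);
      YCosts[J] += Best;
    }
    return YCosts;
  }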
As a side benefit, I believe the new solver is more readable than the old one.
2) The solver type is now a template parameter of the PBQP graph.
This allows the graph to notify the solver of any modifications made (e.g. by
domain independent rules) without the overhead of a virtual call. It also allows
the solver to supply policy information to the graph (see below).
3) Significantly reduced memory overhead.
Memory management policy is now an explicit property of the PBQP graph (via
the CostAllocator typedef on the graph's solver template argument). Because PBQP
graphs for register allocation tend to contain many redundant instances of
single values (E.g. the value representing an interference constraint between
GPRs), the new RASolver class uses a uniquing scheme. This massively reduces
memory consumption for large register allocation problems. For example, looking
at the largest interference graph in each of the SPEC2006 benchmarks (the
largest graph will always set the memory consumption high-water mark for PBQP),
the average memory reduction for the PBQP costs was 400x. That's times, not
percent. The highest was 1400x. Yikes. So - this is fixed.
"PBQP: No longer feasting upon every last byte of your RAM".
Minor details:
- Fully C++11'd. Never copy-construct another vector/matrix!
- Cute tricks with cost metadata: Metadata that is derived solely from cost
matrices/vectors is attached directly to the cost instances themselves. That way
if you unique the costs you never have to recompute the metadata. 400x less
memory means 400x less cost metadata (re)computation.
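A toy sketch of the shape of that trick, with a "minimum entry" standing in
for the real metadata:

  #include <algorithm>
  #include <memory>
  #include <vector>

  // Metadata that is a pure function of the costs is cached on the cost
  // object itself, so every user of a uniqued instance shares the
  // one-time computation.
  class CostsWithMetadata {
    std::vector<double> Costs; // assumed non-empty
    mutable std::unique_ptr<double> MinEntry; // derived solely from Costs

  public:
    explicit CostsWithMetadata(std::vector<double> C)
        : Costs(std::move(C)) {}
    double getMinEntry() const {
      if (!MinEntry) // computed at most once per uniqued instance
        MinEntry.reset(
            new double(*std::min_element(Costs.begin(), Costs.end())));
      return *MinEntry;
    }
  };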
Special thanks to Arnaud de Grandmaison, who has been the source of much
encouragement, and of many very useful test cases.
This new solver forms the basis for future work, of which there's plenty to do.
I will be adding TODO notes shortly.
- Lang.
llvm-svn: 202551
bots when using the standard library facilities. The missing pieces here
aren't always in useful discrete chunks.
Fortunately, the missing pieces are few and far between, and we can
emulate most of them in our headers as needed.
Based on feedback from Lang and Dave.
llvm-svn: 202548
systems have the default as C++11, but retain the ability to build with
C++98.
Again, please restrain your enthusiasm a bit in case this needs to be
reverted. =]
llvm-svn: 202546
Now, please don't get too excited. I've just toggled the default to suss
out the last remaining bot problems. This does *not* mean we can all go
write lots of C++11 code yet. I at least want to let the dust settle
from the bots first.
llvm-svn: 202542
during the finalization for CGDebugInfo in clang we would RAUW
a type and it would result in a corrupted MDNode for an
imported declaration.
Testcase pending as reducing has been difficult.
llvm-svn: 202540
Tools that use the CommandLine library currently exit with an error
when invoked with -version or -help. This is unusual and non-standard,
so we'll fix them to exit successfully instead.
I don't expect that anyone relies on the current behaviour, so this
should be a fairly safe change.
llvm-svn: 202530
This centralizes the Makefile handling of -install_name and -rpath. It also
moves the cmake build to using @rpath. The reason is that libclang needs it,
and it works for everything else.
A followup patch will move clang to using this and then there will be a single
point to edit to support other systems.
llvm-svn: 202499
A lot of this is writing down common knowledge and things often
communicated on mailing lists and in discussions. It could live in the
Programmer's Manual alternatively, but that felt slightly less
well-fitting.
It also includes (and was motivated by) the section on the relevant
language standards for LLVM and the specific features that will be
enabled with the switch to C++11.
With this, all of the documentation for the C++11 switch is, I think, in
place. I plan to flip the switch RSN. =]
llvm-svn: 202497
X86Operand is extracted into an individual header because this allows creating
an arbitrary memory operand and appending it to an MCInst. It will be reused in
X86 inline assembly instrumentation.
Patch by Yuri Gorshenin.
llvm-svn: 202496
standards.
It claims the document intentionally doesn't give fixed standards for
brace placement or spacing, and then the document goes on to do
precisely that in several places. Instead, try to highlight that even
these rules are simply *guidance* which may be trumped by some other
circumstance or the local conventions of code.
I'm not trying to change the thrust of this part of the document, and if
folks think this does so, I'm happy to re-wordsmith it. I just don't
want it to be so self-contradicting.
llvm-svn: 202495
a more modern host C++ toolchain for Linux distros where folks sometimes
don't have a good option to get one as part of their system.
This is a first cut, so feedback, testing, and suggestions are very,
very welcome. This is one of the last real documentation changes that was
specifically requested prior to switching LLVM and Clang to build in
C++11 mode by default.
llvm-svn: 202486
* Align targets of indirect jumps to instruction bundle boundaries (in MI layer).
* Add masking instructions before indirect jumps (in MC layer).
Differential Revision: http://llvm-reviews.chandlerc.com/D2847
llvm-svn: 202479
A 'remark' is information that is not an error or a warning, but rather some
additional information provided to the user. In contrast to a 'note' a 'remark'
is an independent diagnostic, whereas a 'note' always depends on another
diagnostic.
A typical use case for remarks is optimization information reported to the
user, e.g. the vectorizer reporting which loops have been vectorized.
llvm-svn: 202474
The PPC isel instruction can fold 0 into the first operand (thus eliminating
the need to materialize a zero-containing register when the 'true' result of
the isel is 0). When the isel is fed by a bit register operation that we can
invert, do so as part of the bit-register-operation peephole routine.
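A scalar model of the rewrite (the real peephole works on machine
instructions; these helpers are just for illustration):

  // The zero is foldable only in the first ("true") operand slot, so an
  // invertible condition lets us move it there.
  static int sel(bool Cond, int TrueVal, int FalseVal) {
    return Cond ? TrueVal : FalseVal;
  }
  static int before(bool C, int A) { return sel(C, A, 0); }
  static int after(bool C, int A) { return sel(!C, 0, A); } // same result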
llvm-svn: 202469
The current COFF unwind printer tries to print SEH handler function names,
assuming that it can always find function names in the string table. It crashes
if the file being read has no symbol table (e.g. an executable).
With this patch, llvm-objdump prints SEH handler's RVA if there's no symbol
table entry for that RVA.
llvm-svn: 202466
The CR bit tracking code broke PPC/Darwin; trying to get it working again...
(the darwin11 builder, which defaults to the darwin ABI when running PPC tests,
asserted when running test/CodeGen/PowerPC/inverted-bool-compares.ll)
llvm-svn: 202459
This change enables tracking i1 values in the PowerPC backend using the
condition register bits. These bits can be treated on PowerPC as separate
registers; individual bit operations (and, or, xor, etc.) are supported.
Tracking booleans in CR bits has several advantages:
- Reduction in register pressure (because we no longer need GPRs to store
boolean values).
- Logical operations on booleans can be handled more efficiently; we used to
have to move all results from comparisons into GPRs, perform promoted
logical operations in GPRs, and then move the result back into condition
register bits to be used by conditional branches. This can be very
inefficient, because these CR <-> GPR moves have high latency and low
throughput (especially when other associated instructions are accounted
for).
- On the POWER7 and similar cores, we can increase total throughput by using
the CR bits. CR bit operations have a dedicated functional unit.
Most of this is more-or-less mechanical: Adjustments were needed in the
calling-convention code, support was added for spilling/restoring individual
condition-register bits, and conditional branch instruction definitions taking
specific CR bits were added (plus patterns and code for generating bit-level
operations).
This is enabled by default when running at -O2 and higher. For -O0 and -O1,
where the ability to debug is more important, this feature is disabled by
default. Individual CR bits do not have assigned DWARF register numbers,
and storing values in CR bits makes them invisible to the debugger.
It is critical, however, that we don't move i1 values that have been promoted
to larger values (such as those passed as function arguments) into bit
registers only to quickly turn around and move the values back into GPRs (such
as happens when values are returned by functions). A pair of target-specific
DAG combines are added to remove the trunc/extends in:
trunc(binary-ops(binary-ops(zext(x), zext(y)), ...))
and:
zext(binary-ops(binary-ops(trunc(x), trunc(y)), ...))
In short, we only want to use CR bits where some of the i1 values come from
comparisons or are used by conditional branches or selects. To put it another
way, if we can do the entire i1 computation in GPRs, then we probably should
(on the POWER7, the GPR-operation throughput is higher, and for all cores, the
CR <-> GPR moves are expensive).
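A plain C++ model of why removing the trunc/extends is safe (an assumed
simplification, not the actual DAG combine code): doing the i1 logic in the
wide type and masking once at the end matches truncating each input first,
so the whole chain can stay in GPRs.

  #include <cassert>
  #include <cstdint>

  static uint32_t viaNarrow(uint32_t A, uint32_t B) {
    bool TA = A & 1, TB = B & 1;           // trunc each input to i1
    return static_cast<uint32_t>(TA & TB); // zext the i1 result
  }
  static uint32_t viaWide(uint32_t A, uint32_t B) {
    return (A & B) & 1; // stay in 32-bit GPRs, mask once at the end
  }
  int main() {
    for (uint32_t A = 0; A < 8; ++A)
      for (uint32_t B = 0; B < 8; ++B)
        assert(viaNarrow(A, B) == viaWide(A, B));
  }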
POWER7 test-suite performance results (from 10 runs in each configuration):
SingleSource/Benchmarks/Misc/mandel-2: 35% speedup
MultiSource/Benchmarks/Prolangs-C++/city/city: 21% speedup
MultiSource/Benchmarks/MiBench/automotive-susan: 23% speedup
SingleSource/Benchmarks/CoyoteBench/huffbench: 13% speedup
SingleSource/Benchmarks/Misc-C++/Large/sphereflake: 13% speedup
SingleSource/Benchmarks/Misc-C++/mandel-text: 10% speedup
SingleSource/Benchmarks/Misc-C++-EH/spirit: 10% slowdown
MultiSource/Applications/lemon/lemon: 8% slowdown
llvm-svn: 202451
Unfortunately, it is currently impossible to use a PatFrag as part of an output
pattern (the part of the pattern that has instructions in it) in TableGen.
Looking at the current implementation, this was clearly intended to work (there
is already code in place to expand patterns in the output DAG), but is
currently broken by the baked-in type-checking assumption and the order in which
the pattern fragments are processed (output pattern fragments need to be
processed after the instruction definitions are processed).
Fixing this is fairly simple, but requires some way of differentiating output
patterns from the existing input patterns. The simplest way to handle this
seems to be to create a subclass of PatFrag, and so that's what I've done here.
As a simple example, this allows us to write:
def crnot : OutPatFrag<(ops node:$in),
                       (CRNOR $in, $in)>;
def : Pat<(not i1:$in),
          (crnot $in)>;
which captures the core use case: handling of repeated subexpressions inside
of complicated output patterns.
This will be used by an upcoming commit to the PowerPC backend.
llvm-svn: 202450
This extract-and-trunc vector optimization cannot work for i1 values as
currently implemented, and so I'm disabling this for now for i1 values. In the
future, this can be fixed properly.
Soon I'll commit support for i1 CR bit tracking in the PowerPC backend, and
this will be covered by one of the existing regression tests.
llvm-svn: 202449
This is the data structure listed on Microsoft PE/COFF Spec Revision 8.3, p. 80.
The name of the struct is not mentioned in the Microsoft PE/COFF spec, so I made
it up.
llvm-svn: 202438
This is a temporary workaround for native arm linux builds:
PR18996: Changing regalloc order breaks "lencod" on native arm linux builds.
llvm-svn: 202433
seems unlikely to be added. It also doesn't seem like it should be part
of the build system at all (consider out-of-tree builds).
We should probably add a nice, easy tool for this that works both for svn
client trees and git-svn client trees, but it probably won't be spelled
"make update".
llvm-svn: 202430
Some MC components like Target Streamers or Assembly Parsers
may need to access the relocation model in order to expand
some directives and/or assembly macros.
llvm-svn: 202418
scan the register file for sub- and super-registers.
No functionality change intended.
(Tests are updated because the comments in the assembler output are
different.)
llvm-svn: 202416
If a function returns a large struct by value, return the first 4 words
in registers and the rest on the stack in a location reserved by the
caller. This is needed to support the xC language which supports
functions returning an arbitrary number of return values. This is
r202397 reapplied with a fix to avoid an uninitialized read of a member.
llvm-svn: 202414
Summary:
If a function returns a large struct by value, return the first 4 words
in registers and the rest on the stack in a location reserved by the
caller. This is needed to support the xC language which supports
functions returning an arbitrary number of return values.
Reviewers: robertlytton
Reviewed By: robertlytton
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D2889
llvm-svn: 202397
Summary:
If the src, dst and size of a memcpy are known to be 4-byte aligned, we
can call __memcpy_4() instead of memcpy().
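Roughly, as a source-level model (the backend makes this choice statically
from known alignments; __memcpy_4's prototype is assumed here for
illustration):

  #include <cstddef>
  #include <cstdint>
  #include <cstring>

  extern "C" void *__memcpy_4(void *Dst, const void *Src, std::size_t N);

  void *copy4(void *Dst, const void *Src, std::size_t N) {
    bool WordAligned = reinterpret_cast<std::uintptr_t>(Dst) % 4 == 0 &&
                       reinterpret_cast<std::uintptr_t>(Src) % 4 == 0 &&
                       N % 4 == 0;
    // The compiler proves WordAligned at compile time; the runtime test
    // here only models the condition.
    return WordAligned ? __memcpy_4(Dst, Src, N)
                       : std::memcpy(Dst, Src, N);
  }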
Reviewers: robertlytton
Reviewed By: robertlytton
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D2871
llvm-svn: 202395
toolchain of LLVM. These are already being enforced by the build system
and have been discussed quite a few times on the lists, but
documentation is important. =]
Also, garbage collect the majority of the information about broken host
GCC toolchains. These aren't really relevant any more as they're all
older than the minimum requirement. I've left a few notes about
compilers one step older than the current requirement as these compilers
are at least conceivable to use, and it's better to preserve this kind
of hard-won institutional knowledge.
The next step will be some specific docs on how to set up a sufficiently
modern host toolchain if your system doesn't come with one. But that'll
be tomorrow. =]
llvm-svn: 202375
bits of software and to use a modern GCC version.
The Subversion bit was weird anyways -- it has nothing to do with
compiling LLVM. Also, there are many other ways to get at the trunk
source (git, git-svn, etc).
The TeXinfo thing... I have no idea about. But you can get a working
LLVM w/o it pretty easily. If man pages or something are missing, that
hardly seems like a problem. If folks really want this back, let me
know, but it seems mostly like a distraction.
I'd still like to separate this into:
- Required software to compile.
- Optional software to compile.
- Required software for certain *contributor* activities (like
regenerating configure scripts).
Also we need to mention that there are multiple options for build
systems, and the differences.
Also we should mention Windows.
Also probably other stuff I'm forgetting.
I'm wondering if this whole thing needs to be shot in the head and we
should just start a new, simpler getting started that doesn't have so
many years of accumulated stuff that is no longer relevant.
llvm-svn: 202373
getting started guide.
Some highlights:
- I heard there was this Clang compiler that you could use for your
host compiler. Not sure though.
- We no longer have a GCC frontend with weird build restrictions.
- Windows is doing a bit better than partially supported.
- We nuked everything to do with itanium.
- SPUs? Really?
- Xcode 2.5 and gcc 4.0.1 are really not a concern -- they don't work.
- OMG, we actually tried building LLVM on Alpha? Really?
- PowerPC works pretty well these days.
There is still a lot of stuff here I'm pretty dubious about, but I nuked
most of what was actively misleading, out of date, or patently wrong.
Some of it (mingw stuff especially) isn't really lacking, it's just that
the comments here were actively wrong. Hopefully folks that know those
platforms can add back correct / modern information.
llvm-svn: 202370
Summary:
Fixes an issue where a test attempts to use -mcpu=x86-64 on non-X86-64 targets.
This triggers an assertion in the MIPS backend since it doesn't know what ABI to
use by default for unrecognized processors.
CC: llvm-commits, rafael
Differential Revision: http://llvm-reviews.chandlerc.com/D2877
llvm-svn: 202369
any ranges - this includes CU ranges where we were previously emitting an
end list marker even if we didn't have a list.
Testcase includes a test for line table only code emission as the problem
was noticed while writing this test.
llvm-svn: 202357
If the SI_KILL operand is constant, we can either clear the exec mask if
the operand is negative, or do nothing otherwise.
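A per-lane scalar model of the fold (hypothetical helper, not the actual SI
code):

  #include <cstdint>

  // With a constant operand there is nothing to compare at run time: a
  // negative value kills the lane by clearing its exec bit, and a
  // non-negative value is a no-op.
  uint64_t foldConstantKill(uint64_t Exec, unsigned Lane, double Imm) {
    if (Imm < 0.0)
      return Exec & ~(1ull << Lane); // clear the lane's exec bit
    return Exec; // otherwise nothing to do
  }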
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 202337
any ranges to the list of ranges for the CU as we don't want to emit
them anyway. This ensures that we will still emit ranges if we have
a compile unit compiled with only line tables and one compiled with
full debug info requested (we'll emit for the one with full debug info).
Update testcase metadata accordingly to continue emitting ranges.
llvm-svn: 202333
and update everything accordingly. This can be used to conditionalize
the amount of output in the backend based on the amount of debug
requested/metadata emission scheme by a front end (e.g. clang).
Paired with a commit to clang.
llvm-svn: 202332
This handles pathological cases in which we see 2x increase in spill
code for large blocks (~50k instructions). I don't have a unit test
for this behavior.
Fixes rdar://16072279.
llvm-svn: 202304
The current approach to lower a vsetult is to flip the sign bit of the
operands, swap the operands and then use a (signed) pcmpgt. psubus (unsigned
saturating subtract) can be used to emulate a vsetult more efficiently:
+ case ISD::SETULT: {
+ // If the comparison is against a constant we can turn this into a
+ // setule. With psubus, setule does not require a swap. This is
+ // beneficial because the constant in the register is no longer
+ // destructed as the destination so it can be hoisted out of a loop.
I also enable lowering via psubus in a few other cases where it's clearly
beneficial: setule and setuge if minu/maxu cannot be used.
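A scalar model of the underlying identity (one vector lane shown; the
helper names are hypothetical):

  #include <cstdint>

  // psubus(a, b) == 0 exactly when a <=u b, so setule needs no operand
  // swap, and x <u C for a nonzero constant C is just x <=u C-1.
  static uint8_t psubus(uint8_t A, uint8_t B) { return A > B ? A - B : 0; }
  static bool setule(uint8_t X, uint8_t Y) { return psubus(X, Y) == 0; }
  static bool setultConst(uint8_t X, uint8_t C) { // requires C != 0
    return setule(X, C - 1); // C-1 can stay hoisted in a register
  }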
rdar://problem/14338765
Patch by Adam Nemet <anemet@apple.com>.
llvm-svn: 202301
The aggressive anti-dependency breaker scans instructions, bottom-up, within the
scheduling region in order to find opportunities where register renaming can
be used to break anti-dependencies.
Unfortunately, the aggressive anti-dep breaker was treating a register definition
as defining all of that register's aliases (including super registers). This behavior
is incorrect when the super register is live and there are other definitions of
subregisters of the super register.
For example, given the following sequence:
%CR2EQ<def> = CROR %CR3UN, %CR3UN<kill>
%CR2GT<def> = IMPLICIT_DEF
%X4<def> = MFOCRF8 %CR2
the analysis of the first subregister definition would work as expected:
Anti: %CR2GT<def> = IMPLICIT_DEF
Def Groups: CR2GT=g194->g0(via CR2)
Antidep reg: CR2GT (zero group)
Use Groups:
but the analysis of the second one would not:
Anti: %CR2EQ<def> = CROR %CR3UN, %CR3UN<kill>
Def Groups: CR2EQ=g195
Antidep reg: CR2EQ
Rename Candidates for Group g195: ...
because, when processing the %CR2GT<def>, we'd mark all super registers of
%CR2GT (%CR2 in this case) as defined. As a result, when processing
%CR2EQ<def>, %CR2 no longer appears to be live, and %CR2EQ<def>'s group is not
unioned with the %CR2 group.
I don't have an in-tree test case for this yet (and even if I did, I don't have
a small one).
llvm-svn: 202294
We should apply fastcc whenever profitable. We can expand this list,
but there are lots of conventions with performance implications that we
don't want to change.
Differential Revision: http://llvm-reviews.chandlerc.com/D2705
llvm-svn: 202293
COFF object files with 0 as the string table size are currently rejected. This
prevents us from reading object files written by tools like cvtres that
violate the PECOFF spec and write 0 instead of 4 for the size of an empty
string table.
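A hypothetical sketch of the relaxation:

  #include <cstdint>

  // An empty string table legally has size 4 (the size field itself),
  // but some writers emit 0, so treat 0 as "empty" rather than rejecting
  // the file.
  uint32_t effectiveStringTableSize(uint32_t DeclaredSize) {
    return DeclaredSize == 0 ? 4 : DeclaredSize;
  }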
llvm-svn: 202292
We don't have any test with more than 6 address spaces, so a DenseMap is
probably not the correct answer.
An unsorted array would also be OK, but we have to sort it for printing anyway.
llvm-svn: 202275
This includes instructions with aggregate operands (insert/extract), instructions with vector operands (insert/extract/shuffle), binary arithmetic and bitwise instructions, conversion instructions and terminators.
Work was done by lama.saba@intel.com.
llvm-svn: 202262
The table argument is always 128-bit (and interpreted as <16 x i8>) so the
extra specifier for it is just clutter.
No user-visible behaviour change, so no tests.
llvm-svn: 202258
Summary:
Fixes an issue where a test attempts to use -mcpu=cortex-a15 on non-ARM targets.
This triggers an assertion on MIPS since it doesn't know what ABI to use by default for
unrecognized processors.
Reviewers: rengolin
Reviewed By: rengolin
CC: llvm-commits, aemerson, rengolin
Differential Revision: http://llvm-reviews.chandlerc.com/D2876
llvm-svn: 202256
Summary:
This should fix the MCJIT unit tests that were broken by r201792 on the MIPS buildbot.
MIPS currently uses the default implementation of sys::getHostCPUName() which
always returns "generic". For now, we will accept "generic" and coerce it to
"mips32" or "mips64" depending on the target architecture like we do for empty
CPU names.
Reviewers: jacksprat, matheusalmeida
Reviewed By: jacksprat
Differential Revision: http://llvm-reviews.chandlerc.com/D2878
llvm-svn: 202253
the default.
Based on the patch by Matt Arsenault, D1764!
I switched one place to use the more direct pointer type to compute the
desired address space, and I reworked the memcpy rewriting section to
reflect significant refactorings that this patch helped inspire.
Thanks to several of the folks who helped review and improve the patch
as well.
llvm-svn: 202247
to work independently for the slice side and the other side.
This allows us to only compute the minimum of the two when we actually
rewrite to a memcpy that needs to take the minimum, and preserve higher
alignment for one side or the other when rewriting to loads and stores.
This fix was inspired by seeing the result of some refactoring that
makes addrspace handling better.
llvm-svn: 202242
target_link_libraries(INTERFACE) doesn't introduce inter-target dependencies
in add_library, although final targets still depend on all of their dependent
libraries. This lets most libraries be built in parallel.
target_link_libraries(PRIVATE) is used for shared libraries. Each dependent
library is linked into the target .so, and its users will not see its
grandchildren. For example,
- libclang.so links in all of the libclang*.a libraries it needs.
- c-index-test requires only libclang.so.
FIXME: lld is tweaked minimally. Adding INTERFACE to each library would be
better.
llvm-svn: 202241
For now, use both keywords, INTERFACE and PRIVATE, via the variables
- ${cmake_2_8_12_INTERFACE}
- ${cmake_2_8_12_PRIVATE}
They could be cleaned up when we move to CMake 2.8.12.
llvm-svn: 202239
D1764, which in turn set off the other refactorings to make
'getSliceAlign()' a sensible thing.
There are two possible inputs to the required alignment of a memory
transfer intrinsic: the alignment constraints of the source and the
destination. If we are *only* introducing a (potentially new) offset
onto one side of the transfer, we don't need to consider the alignment
constraints of the other side. Use this to simplify the logic feeding
into alignment computation for unsplit transfers.
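For the single-sided case, the alignment that survives introducing the new
offset is the usual MinAlign-style computation, sketched stand-alone here:

  #include <cstdint>

  // Alignment that survives adding Offset to a pointer aligned to Align:
  // the largest power of two dividing both.
  uint64_t alignAfterOffset(uint64_t Align, uint64_t Offset) {
    uint64_t Bits = Align | Offset;
    return Bits & (~Bits + 1); // isolate the lowest set bit
  }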
Also, hoist the clamping of the magical zero alignment for these intrinsics
to the more customary alignment of one so that it happens early. This lets
several other conditions melt away.
No functionality changed. There is a further improvement this exposes
which *will* change functionality, but that's arriving in a separate
patch.
llvm-svn: 202232
rewriting logic: don't pass custom offsets for the adjusted pointer to
the new alloca.
We always passed NewBeginOffset here. Sometimes we spelled it
BeginOffset, but only when they were in fact equal. What's worse, the API
is set up so that you can't reasonably call it with anything else -- it
assumes that you're passing it an offset relative to the *original*
alloca that happens to fall within the new one. That's the whole point
of NewBeginOffset, it's the clamped beginning offset.
No functionality changed.
llvm-svn: 202231
alignment of the slice being rewritten, not any arbitrary offset.
Every caller is really just trying to compute the alignment for the
whole slice, never for some arbitrary alignment. They are also just
passing a type when they have one to see if we can skip an explicit
alignment in the IR by using the type's alignment. This makes for a much
simpler interface.
Another refactoring inspired by the addrspace patch for SROA, although
only loosely related.
llvm-svn: 202230
consistency with memcpy rewriting, and fix a latent bug in the alignment
management for memset.
The alignment issue is that getAdjustedAllocaPtr computes the *relative*
offset into the new alloca, but the alignment wasn't being set from the
relative offset; it was using the absolute offset, which is into the old
alloca.
I don't think it's possible to write a test case that actually reaches
this code where the resulting alignment would be observably different,
but the intent was clearly to use the relative offset within the new
alloca.
llvm-svn: 202229
rather than passing them as arguments.
While I generally prefer actual arguments, in this case the readability
loss is substantial. By using members we avoid repeatedly calculating
the offsets, and once we're using members it is useful to ensure that
those names *always* refer to the original-alloca-relative new offset
for a rewritten slice.
No functionality changed. Follow-up refactoring, all toward getting the
address space patch merged.
llvm-svn: 202228
slice being rewritten.
We had the same code scattered across most of the visits. Instead,
compute the new offsets and the slice size once when we start to visit
a particular slice, and use the member variables from then on. This
reduces quite a bit of code duplication.
No functionality changed. Refactoring inspired to make it easier to
apply the address space patch to SROA.
llvm-svn: 202227
checking in SROA.
The primary change is to just rely on uge for checking that the offset
is within the allocation size. This removes the explicit checks against
isNegative which were terribly error prone (including the reversed logic
that led to PR18615) and prevented us from supporting stack allocations
larger than half the address space.... Ok, so maybe the latter isn't
*common* but it's a silly restriction to have.
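A sketch of the simplified check (hypothetical helper):

  #include <cstdint>

  // One unsigned comparison rejects both a "negative" offset (which
  // wraps to a huge unsigned value) and an offset past the end, with no
  // isNegative() special cases to get backwards.
  bool offsetInBounds(uint64_t RelOffset, uint64_t AllocSize) {
    return RelOffset < AllocSize; // i.e. !uge(RelOffset, AllocSize)
  }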
Also, we used to try to support a PHI node which loaded from before the
start of the allocation if any of the loaded bytes were within the
allocation. This doesn't make any sense, we have never really supported
loading or storing *before* the allocation starts. The simplified logic
just doesn't care.
We continue to allow loading past the end of the allocation in part to
support cases where there is a PHI and some loads are larger than others
and the larger ones reach past the end of the allocation. We could solve
this a different and more conservative way, but I'm still somewhat
paranoid about this.
llvm-svn: 202224