The previous PBQP solver was very robust but consumed a lot of memory,
performed a lot of redundant computation, and contained some unnecessarily tight
coupling that prevented experimentation with novel solution techniques. This new
solver is an attempt to address these shortcomings.
Important/interesting changes:
1) The domain-independent PBQP solver class, HeuristicSolverImpl, is gone.
It is replaced by a register allocation specific solver, PBQP::RegAlloc::Solver
(see RegAllocSolver.h).
The optimal reduction rules and the backpropagation algorithm have been extracted
into stand-alone functions (see ReductionRules.h), which can be used to build
domain specific PBQP solvers. This provides many more opportunities for
domain-specific knowledge to inform the PBQP solvers' decisions. In theory this
should allow us to generate better solutions. In practice, we can at least test
out ideas now.
As a side benefit, I believe the new solver is more readable than the old one.
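As a rough illustration of the kind of composition this enables (the graph
type, rule names, and dispatch below are toys, not the actual ReductionRules.h
API):

  #include <cstdio>
  #include <vector>

  struct ToyGraph {
    std::vector<unsigned> Degree{1, 2, 3}; // degree of each remaining node
  };

  // Stand-alone "rules": each one knows how to eliminate a node of a given
  // degree. A solver decides when, and on which node, to apply them.
  void applyDegreeOneRule(ToyGraph &, unsigned N) { std::printf("R1 on %u\n", N); }
  void applyDegreeTwoRule(ToyGraph &, unsigned N) { std::printf("R2 on %u\n", N); }
  void applyHeuristic(ToyGraph &, unsigned N)     { std::printf("heuristic on %u\n", N); }

  // A domain-specific solver is now just a driver loop that injects its own
  // knowledge between the rules (here, a trivial dispatch on node degree).
  void solve(ToyGraph &G) {
    for (unsigned N = 0; N != G.Degree.size(); ++N) {
      switch (G.Degree[N]) {
      case 1:  applyDegreeOneRule(G, N); break;
      case 2:  applyDegreeTwoRule(G, N); break;
      default: applyHeuristic(G, N);     break;
      }
    }
  }

  int main() { ToyGraph G; solve(G); }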
2) The solver type is now a template parameter of the PBQP graph.
This allows the graph to notify the solver of any modifications made (e.g. by
domain independent rules) without the overhead of a virtual call. It also allows
the solver to supply policy information to the graph (see below).
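A minimal, hypothetical sketch of the shape of this coupling (the names below
are illustrative, not the actual Graph/Solver interfaces):

  #include <cstdio>

  template <typename SolverT>
  class Graph {
  public:
    // Policy supplied by the solver (e.g. how costs are allocated/uniqued).
    using CostAllocator = typename SolverT::CostAllocator;
    void setSolver(SolverT &S) { Solver = &S; }
    void addEdge(unsigned N1, unsigned N2) {
      // ...update adjacency data...
      if (Solver)
        Solver->handleAddEdge(N1, N2); // direct, non-virtual notification
    }
  private:
    SolverT *Solver = nullptr;
  };

  struct ExampleRASolver {
    struct CostAllocator { /* a uniquing cost allocator would live here */ };
    void handleAddEdge(unsigned N1, unsigned N2) {
      std::printf("solver notified of new edge (%u, %u)\n", N1, N2);
    }
  };

  int main() {
    ExampleRASolver S;
    Graph<ExampleRASolver> G;
    G.setSolver(S);
    G.addEdge(0, 1);
  }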
3) Significantly reduced memory overhead.
Memory management policy is now an explicit property of the PBQP graph (via
the CostAllocator typedef on the graph's solver template argument). Because PBQP
graphs for register allocation tend to contain many redundant instances of
single values (e.g. the value representing an interference constraint between
GPRs), the new RASolver class uses a uniquing scheme. This massively reduces
memory consumption for large register allocation problems. For example, looking
at the largest interference graph in each of the SPEC2006 benchmarks (the
largest graph will always set the memory consumption high-water mark for PBQP),
the average memory reduction for the PBQP costs was 400x. That's times, not
percent. The highest was 1400x. Yikes. So - this is fixed.
"PBQP: No longer feasting upon every last byte of your RAM".
Minor details:
- Fully C++11'd. Never copy-construct another vector/matrix!
- Cute tricks with cost metadata: Metadata that is derived solely from cost
matrices/vectors is attached directly to the cost instances themselves. That way
if you unique the costs you never have to recompute the metadata. 400x less
memory means 400x less cost metadata (re)computation.
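A sketch of the metadata trick, under the same toy assumptions as above: the
metadata is derived once when the cost is constructed, so uniqued costs carry
it with them for free.

  #include <cstdio>
  #include <limits>
  #include <vector>

  // A cost matrix that derives its metadata at construction time.
  class MatrixWithMetadata {
  public:
    explicit MatrixWithMetadata(std::vector<std::vector<double>> Rows)
        : Rows(std::move(Rows)) {
      for (const auto &Row : this->Rows)
        for (double V : Row)
          HasInfinity |= (V == std::numeric_limits<double>::infinity());
    }
    bool hasInfiniteEntry() const { return HasInfinity; } // never recomputed
  private:
    std::vector<std::vector<double>> Rows;
    bool HasInfinity = false;
  };

  int main() {
    const double Inf = std::numeric_limits<double>::infinity();
    MatrixWithMetadata M({{0.0, Inf}, {Inf, 0.0}});
    std::printf("has infinite entry: %d\n", M.hasInfiniteEntry());
  }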
Special thanks to Arnaud de Grandmaison, who has been the source of much
encouragement, and of many very useful test cases.
This new solver forms the basis for future work, of which there's plenty to do.
I will be adding TODO notes shortly.
- Lang.
llvm-svn: 202551
During finalization of CGDebugInfo in clang we would RAUW
a type, and it would result in a corrupted MDNode for an
imported declaration.
Testcase pending as reducing has been difficult.
llvm-svn: 202540
Tools that use the CommandLine library currently exit with an error
when invoked with -version or -help. This is unusual and non-standard,
so we'll fix them to exit successfully instead.
I don't expect that anyone relies on the current behaviour, so this
should be a fairly safe change.
llvm-svn: 202530
X86Operand is extracted into its own header, because this allows creating an
arbitrary memory operand and appending it to an MCInst. It will be reused in
the X86 inline assembly instrumentation.
Patch by Yuri Gorshenin.
llvm-svn: 202496
* Align targets of indirect jumps to instruction bundle boundaries (in MI layer).
* Add masking instructions before indirect jumps (in MC layer).
Differential Revision: http://llvm-reviews.chandlerc.com/D2847
llvm-svn: 202479
A 'remark' is information that is not an error or a warning, but rather some
additional information provided to the user. In contrast to a 'note', a 'remark'
is an independent diagnostic, whereas a 'note' always depends on another
diagnostic.
A typical use case for remarks is information provided to the user, e.g.
information from the vectorizer about loops that have been vectorized.
llvm-svn: 202474
The PPC isel instruction can fold 0 into the first operand (thus eliminating
the need to materialize a zero-containing register when the 'true' result of
the isel is 0). When the isel is fed by a bit register operation that we can
invert, do so as part of the bit-register-operation peephole routine.
llvm-svn: 202469
The CR bit tracking code broke PPC/Darwin; trying to get it working again...
(the darwin11 builder, which defaults to the darwin ABI when running PPC tests,
asserted when running test/CodeGen/PowerPC/inverted-bool-compares.ll)
llvm-svn: 202459
This change enables tracking i1 values in the PowerPC backend using the
condition register bits. These bits can be treated on PowerPC as separate
registers; individual bit operations (and, or, xor, etc.) are supported.
Tracking booleans in CR bits has several advantages:
- Reduction in register pressure (because we no longer need GPRs to store
boolean values).
- Logical operations on booleans can be handled more efficiently; we used to
have to move all results from comparisons into GPRs, perform promoted
logical operations in GPRs, and then move the result back into condition
register bits to be used by conditional branches. This can be very
inefficient, because these CR <-> GPR moves have high latency and low
throughput (especially when other associated instructions are accounted
for).
- On the POWER7 and similar cores, we can increase total throughput by using
the CR bits. CR bit operations have a dedicated functional unit.
Most of this is more-or-less mechanical: Adjustments were needed in the
calling-convention code, support was added for spilling/restoring individual
condition-register bits, and conditional branch instruction definitions taking
specific CR bits were added (plus patterns and code for generating bit-level
operations).
This is enabled by default when running at -O2 and higher. For -O0 and -O1,
where the ability to debug is more important, this feature is disabled by
default. Individual CR bits do not have assigned DWARF register numbers,
and storing values in CR bits makes them invisible to the debugger.
It is critical, however, that we don't move i1 values that have been promoted
to larger values (such as those passed as function arguments) into bit
registers only to quickly turn around and move the values back into GPRs (such
as happens when values are returned by functions). A pair of target-specific
DAG combines are added to remove the trunc/extends in:
trunc(binary-ops(binary-ops(zext(x), zext(y)), ...))
and:
zext(binary-ops(binary-ops(trunc(x), trunc(y)), ...))
In short, we only want to use CR bits where some of the i1 values come from
comparisons or are used by conditional branches or selects. To put it another
way, if we can do the entire i1 computation in GPRs, then we probably should
(on the POWER7, the GPR-operation throughput is higher, and for all cores, the
CR <-> GPR moves are expensive).
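As a tiny standalone sanity check of why removing the extend/truncate pair is
sound (ordinary C++, not LLVM code): for 1-bit values, widening, performing the
bitwise binary ops, and truncating back to one bit gives the same result as
doing the ops on the 1-bit values directly.

  #include <cassert>
  #include <cstdint>

  int main() {
    for (unsigned x = 0; x <= 1; ++x) {
      for (unsigned y = 0; y <= 1; ++y) {
        // trunc(binary-ops(binary-ops(zext(x), zext(y)), ...))
        uint32_t Wide = ((uint32_t)x & (uint32_t)y) | (uint32_t)x;
        bool ViaWide = (Wide & 1u) != 0;
        // ...equals the same binary-ops done directly on the i1 values.
        bool Direct = ((x != 0) & (y != 0)) | (x != 0);
        assert(ViaWide == Direct);
      }
    }
    return 0;
  }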
POWER7 test-suite performance results (from 10 runs in each configuration):
SingleSource/Benchmarks/Misc/mandel-2: 35% speedup
MultiSource/Benchmarks/Prolangs-C++/city/city: 21% speedup
MultiSource/Benchmarks/MiBench/automotive-susan: 23% speedup
SingleSource/Benchmarks/CoyoteBench/huffbench: 13% speedup
SingleSource/Benchmarks/Misc-C++/Large/sphereflake: 13% speedup
SingleSource/Benchmarks/Misc-C++/mandel-text: 10% speedup
SingleSource/Benchmarks/Misc-C++-EH/spirit: 10% slowdown
MultiSource/Applications/lemon/lemon: 8% slowdown
llvm-svn: 202451
This extract-and-trunc vector optimization cannot work for i1 values as
currently implemented, so I'm disabling it for i1 values for now. In the
future, this can be fixed properly.
Soon I'll commit support for i1 CR bit tracking in the PowerPC backend, and
this will be covered by one of the existing regression tests.
llvm-svn: 202449
This is a temporary workaround for native ARM Linux builds:
PR18996: Changing regalloc order breaks "lencod" on native ARM Linux builds.
llvm-svn: 202433
scan the register file for sub- and super-registers.
No functionality change intended.
(Tests are updated because the comments in the assembler output are
different.)
llvm-svn: 202416
If a function returns a large struct by value, return the first 4 words
in registers and the rest on the stack in a location reserved by the
caller. This is needed to support the xC language, which supports
functions returning an arbitrary number of return values. This is
r202397 reapplied with a fix to avoid an uninitialized read of a member.
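Illustrative only: a large by-value struct return of the kind this change
handles. Under the scheme described, the first 4 words of 'Large' would be
returned in registers and the remaining words in the caller-reserved stack
location.

  struct Large {
    int Words[6]; // more than 4 words, so registers alone can't hold it
  };

  Large makeLarge() {
    return Large{{1, 2, 3, 4, 5, 6}};
  }

  int main() { return makeLarge().Words[0]; }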
llvm-svn: 202414
Summary:
If a function returns a large struct by value, return the first 4 words
in registers and the rest on the stack in a location reserved by the
caller. This is needed to support the xC language, which supports
functions returning an arbitrary number of return values.
Reviewers: robertlytton
Reviewed By: robertlytton
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D2889
llvm-svn: 202397
Summary:
If the src, dst, and size of a memcpy are known to be 4-byte aligned, we
can call __memcpy_4() instead of memcpy().
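Illustrative only: a call whose operands satisfy these alignment constraints
and could therefore be lowered to __memcpy_4() rather than plain memcpy().

  #include <cstdint>
  #include <cstring>

  void copyWords(uint32_t *Dst, const uint32_t *Src, unsigned NumWords) {
    // Dst and Src are 4-byte aligned by construction, and the size is a
    // multiple of 4, so this call is a candidate for __memcpy_4().
    std::memcpy(Dst, Src, NumWords * sizeof(uint32_t));
  }

  int main() {
    uint32_t A[4] = {1, 2, 3, 4}, B[4] = {};
    copyWords(B, A, 4);
    return (int)B[3];
  }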
Reviewers: robertlytton
Reviewed By: robertlytton
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D2871
llvm-svn: 202395
any ranges - this includes CU ranges where we were previously emitting an
end list marker even if we didn't have a list.
The testcase includes a test for line-table-only code emission, as the
problem was noticed while writing this test.
llvm-svn: 202357
If the SI_KILL operand is constant, we can clear the exec mask when the
operand is negative, and do nothing otherwise.
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 202337
any ranges to the list of ranges for the CU as we don't want to emit
them anyway. This ensures that we will still emit ranges if we have
a compile unit compiled with only line tables and one compiled with
full debug info requested (we'll emit for the one with full debug info).
Update testcase metadata accordingly to continue emitting ranges.
llvm-svn: 202333
and update everything accordingly. This can be used to conditionalize
the amount of output in the backend based on the amount of debug info
requested and the metadata emission scheme used by a front end (e.g. clang).
Paired with a commit to clang.
llvm-svn: 202332