This patch adds two new boolean fields:
- Field `ReadState::IndependentFromDef`.
- Field `WriteState::WritesZero`.
Field `IndependentFromDef` is set for ReadState objects associated with
dependency-breaking instructions. It is used by the simulator when updating data
dependencies between registers.
Field `WritesZero` is set for WriteState objects associated with
dependency-breaking zero-idiom instructions. It helps the PRF identify which
writes don't consume any physical registers.
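A minimal sketch of the two flags (only the field names come from this patch;
the accessors and class layout below are illustrative assumptions, not the
actual llvm-mca declarations):
```cpp
// Illustrative sketch only; not the actual llvm-mca class layout.
class ReadState {
  // True if this read belongs to a dependency-breaking instruction, i.e. it
  // does not really depend on the register definition.
  bool IndependentFromDef = false;

public:
  void setIndependentFromDef() { IndependentFromDef = true; }
  bool isIndependentFromDef() const { return IndependentFromDef; }
};

class WriteState {
  // True if this write comes from a zero-idiom instruction; such writes do
  // not need to consume a physical register in the PRF.
  bool WritesZero = false;

public:
  void setWriteZero() { WritesZero = true; }
  bool isWriteZero() const { return WritesZero; }
};
```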
llvm-svn: 342483
Summary:
This patch removes the storing of accumulated floating point data
within the llvm-mca library.
This patch splits up the two quantities: cycles and number of resource units.
By splitting up these two quantities, we delay the calculation of "cycles per
resource unit" until that value is read, reducing the chance of accumulating
floating point error.
I considered using APFloat, but after measuring performance on a large
(many-iteration) sample, I decided to go with this faster solution.
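A minimal sketch of the idea, with assumed type and member names (not the
actual llvm-mca code): keep the two integer quantities separate and only divide
when the value is read.
```cpp
#include <cstdint>

// Illustrative sketch: accumulate integers, divide only on read.
struct ResourceCycles {
  uint64_t Cycles = 0;   // Accumulated cycles for this resource.
  uint64_t NumUnits = 1; // Number of resource units (known statically).

  void addCycles(uint64_t C) { Cycles += C; }

  // "Cycles per resource unit" is computed lazily, so floating point error is
  // never accumulated across iterations.
  double getCyclesPerUnit() const {
    return static_cast<double>(Cycles) / static_cast<double>(NumUnits);
  }
};
```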
Reviewers: andreadb, courbet, RKSimon
Reviewed By: andreadb
Subscribers: llvm-commits, javed.absar, tschuett, gbedwell
Differential Revision: https://reviews.llvm.org/D51903
llvm-svn: 341980
This patch introduces the following changes to the DispatchStatistics view:
* DispatchStatistics now reports the number of dispatched opcodes instead of
the number of dispatched instructions.
* The "Dynamic Dispatch Stall Cycles" table now also reports the percentage of
stall cycles against the total simulated cycles.
This change allows users to easily compare dispatch group sizes with the
processor DispatchWidth.
Before this change, it was difficult to correlate the two numbers, since the
DispatchStatistics view reported numbers of instructions (instead of opcodes).
DispatchWidth defines the maximum size of a dispatch group in terms of number of
micro opcodes.
The other change introduced by this patch is related to how DispatchStage
generates "instruction dispatch" events.
In particular:
* There can be multiple dispatch events associated with the same instruction.
* Each dispatch event now encapsulates the number of dispatched micro opcodes.
The number of micro opcodes declared by an instruction may exceed the processor
DispatchWidth. Therefore, we cannot assume that instructions are always fully
dispatched in a single cycle.
DispatchStage already knows how to handle instructions declaring a number of
opcodes bigger than DispatchWidth. However, DispatchStage always emitted a
single instruction dispatch event (during the first simulated dispatch cycle)
for each dispatched instruction.
With this patch, DispatchStage now correctly notifies multiple dispatch events
for instructions that cannot be dispatched in a single cycle.
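A simplified sketch of the per-cycle behavior (the event type and helper below
are illustrative, not the real DispatchStage code):
```cpp
#include <algorithm>
#include <vector>

// Illustrative sketch: an instruction's micro opcodes are dispatched over one
// or more cycles, and one dispatch event is emitted per cycle with the count
// of micro opcodes dispatched in that cycle.
struct DispatchEventStub {
  unsigned DispatchedUOps;
};

void dispatchOneCycle(unsigned &RemainingUOps, unsigned DispatchWidth,
                      std::vector<DispatchEventStub> &Events) {
  unsigned UOps = std::min(RemainingUOps, DispatchWidth);
  RemainingUOps -= UOps;
  Events.push_back({UOps}); // One event per cycle, not one per instruction.
}
```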
A few views had to be modified. Views can no longer assume that there can only
be one dispatch event per instruction.
Tests (and docs) have been updated.
Differential Revision: https://reviews.llvm.org/D51430
llvm-svn: 341055
This patch adds two new fields to the perf report generated by the SummaryView.
Fields are now logically organized into two small groups; only the second group
contains throughput indicators.
Example:
```
Iterations: 100
Instructions: 300
Total Cycles: 414
Total uOps: 700
Dispatch Width: 4
uOps Per Cycle: 1.69
IPC: 0.72
Block RThroughput: 4.0
```
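The two throughput indicators in the second group follow directly from the
totals above:
```
uOps Per Cycle = Total uOps / Total Cycles   = 700 / 414 ≈ 1.69
IPC            = Instructions / Total Cycles = 300 / 414 ≈ 0.72
```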
This patch also updates the docs for llvm-mca.
Due to the nature of this change, several tests in the tools/llvm-mca directory
were affected, and had to be updated using script `update_mca_test_checks.py`.
llvm-svn: 340946
This patch also uses colors to highlight problematic wait-time entries.
A problematic entry is an entry with a high wait time that tends to match (or
exceed) the size of the scheduler's buffer.
Color RED is used if an instruction had to wait, on average, a number of cycles
greater than (or equal to) the size of the underlying scheduler's buffer.
Color YELLOW is used if the time (in cycles) spent waiting for operands or
pipeline resources is greater than half the size of the underlying scheduler's
buffer.
Color MAGENTA is used if an instruction does not consume buffer resources
according to the scheduling model.
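A small sketch of the selection rule described above (enum and function names
are illustrative, not the actual view code):
```cpp
// Illustrative sketch of the highlighting thresholds.
enum class WaitTimeColor { Default, Red, Yellow, Magenta };

WaitTimeColor selectColor(double AvgWaitCycles, unsigned BufferSize,
                          bool ConsumesBufferResources) {
  if (!ConsumesBufferResources)
    return WaitTimeColor::Magenta; // No buffer consumed per the sched model.
  if (AvgWaitCycles >= BufferSize)
    return WaitTimeColor::Red;     // Wait time matches or exceeds buffer size.
  if (AvgWaitCycles > BufferSize / 2.0)
    return WaitTimeColor::Yellow;  // Wait time above half the buffer size.
  return WaitTimeColor::Default;
}
```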
llvm-svn: 340825
Summary:
This patch introduces llvm-mca as a library. The driver (llvm-mca.cpp), views,
and stats are not part of the library.
Those are separate components that are not required for the llvm-mca library to
function.
The directory has been organized as follows:
All library source files now reside in:
- `lib/HardwareUnits/` - All subclasses of HardwareUnit (these represent the simulated hardware components of a backend).
(LSUnit does not inherit from HardwareUnit, but the Scheduler, which uses LSUnit, does.)
- `lib/Stages/` - All subclasses of the pipeline Stage class.
- `lib/` - This is the root of the library and contains library code that does not fit into the Stages or HardwareUnit subdirs.
All library header files now reside in the `include` directory and mimic the same layout as the `lib` directory mentioned above.
In the (near) future we would like to move the library (include and lib) contents out of tools and into the core of llvm somewhere.
That change would allow various analysis and optimization passes to make use of MCA functionality for things like cost modeling.
I left all of the non-library code just where it has always been, in the root of the llvm-mca directory.
The include directives for the non-library source files have been updated to
refer to the llvm-mca library headers.
I updated the llvm-mca/CMakeLists.txt file to include the library headers, but
I made the non-library code explicitly reference the library's 'include'
directory. Once we eventually (hopefully) migrate the MCA library components
into llvm, the include directives used by the non-library source files will be
updated to point to the proper location in llvm.
Reviewers: andreadb, courbet, RKSimon
Reviewed By: andreadb
Subscribers: mgorny, javed.absar, tschuett, gbedwell, llvm-commits
Differential Revision: https://reviews.llvm.org/D50929
llvm-svn: 340755
Before this patch, the SchedulerStatistics only printed the maximum number of
buffer entries consumed in each scheduler's queue at a given point of the
simulation.
This patch restructures the reported table, and adds an extra field named
"Average number of used buffer entries" to it.
This patch also uses different colors to help identify bottlenecks caused by
high scheduler buffer pressure.
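A minimal sketch of the kind of bookkeeping behind the new field (names are
illustrative; the actual SchedulerStatistics implementation may differ):
```cpp
#include <algorithm>
#include <cstdint>

// Illustrative sketch: track the peak and the running average of used buffer
// entries for a single scheduler queue.
struct BufferUsageStats {
  unsigned MaxUsedEntries = 0;
  uint64_t SumUsedEntries = 0;
  uint64_t NumCycles = 0;

  void onCycleEnd(unsigned UsedEntries) {
    MaxUsedEntries = std::max(MaxUsedEntries, UsedEntries);
    SumUsedEntries += UsedEntries;
    ++NumCycles;
  }

  double getAverageUsedEntries() const {
    return NumCycles ? double(SumUsedEntries) / NumCycles : 0.0;
  }
};
```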
llvm-svn: 340746
Choosing to revert the change and do it again, hopefully preserving the history
of the changes by using svn copy instead of simply creating a new file from the
contents within Scheduler.
llvm-svn: 340661
Thanks to @waltl for reporting this issue.
I have also added an assert to check for invalid null strategy objects, and I
have reworded a couple of code comments in Scheduler.h.
llvm-svn: 340545
With this patch, users can now customize the pipeline selection strategy for
scheduler resources. The resource selection strategy can be defined at processor
resource granularity. This enables the definition of different strategies for
different hardware schedulers.
To override the strategy associated with a processor resource, users can call
method ResourceManager::setCustomStrategy() and pass in a 'ResourceStrategy'
object.
Class ResourceStrategy is an abstract class which declares the virtual method
`ResourceStrategy::select()`. Method select() is meant to implement the actual
strategy; it is responsible for picking the next best resource from a set of
available pipeline resources. Custom strategies simply override that method.
By default, processor resources are associated with instances of
'DefaultResourceStrategy'. A 'DefaultResourceStrategy' internally implements a
simple round-robin selector. For more details, please refer to the code comments
in Scheduler.h.
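A minimal sketch of the customization point (the exact signature of select()
and the mask-based encoding are assumptions here; Scheduler.h has the
authoritative declarations):
```cpp
#include <cstdint>

// Illustrative sketch of overriding the resource selection strategy.
class ResourceStrategyStub {
public:
  virtual ~ResourceStrategyStub() = default;
  // Pick the next resource unit from the set of available units, encoded
  // here as a bit mask (assumed representation).
  virtual uint64_t select(uint64_t ReadyMask) = 0;
};

// A custom strategy that always prefers the lowest-numbered available unit,
// instead of the default round-robin behavior.
class PickLowestUnitStrategy : public ResourceStrategyStub {
public:
  uint64_t select(uint64_t ReadyMask) override {
    return ReadyMask & (~ReadyMask + 1ULL); // Isolate the lowest set bit.
  }
};
```
Such a custom object would then be registered for a specific processor resource
via ResourceManager::setCustomStrategy(), as described above.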
llvm-svn: 340536
The constructor of Scheduler now accepts a SchedulerStrategy object, which is
used internally by method Scheduler::select() to drive the instruction selection
process.
The goal of this patch is to enable the definition of custom selection
strategies while reusing the same algorithms implemented by class Scheduler.
The motivation is that, on some targets, the default strategy may not closely
approximate the selection logic in the hardware schedulers.
This patch also adds the ability to pass a ResourceManager object to the
constructor of Scheduler. This gives a bit more flexibility to the design, and
it potentially allows exposing processor resources to SchedulerStrategy
objects.
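A rough sketch of the constructor injection described above, using stand-in
types (the real Scheduler constructor parameters are not reproduced here):
```cpp
#include <memory>
#include <utility>

// Minimal stand-ins so the sketch is self-contained; the real classes differ.
class SchedulerStrategyStub {
public:
  virtual ~SchedulerStrategyStub() = default;
};
class ResourceManagerStub {};

class SchedulerStub {
  std::unique_ptr<SchedulerStrategyStub> Strategy; // Drives select().
  std::unique_ptr<ResourceManagerStub> Resources;  // Optionally injected.

public:
  SchedulerStub(std::unique_ptr<SchedulerStrategyStub> S,
                std::unique_ptr<ResourceManagerStub> RM)
      : Strategy(std::move(S)), Resources(std::move(RM)) {}
};
```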
Differential Revision: https://reviews.llvm.org/D51051
llvm-svn: 340314
The goal of this patch is to simplify the Scheduler's interface in preparation
for D50929.
Some methods in the Scheduler's interface should not be exposed to external
users, since their presence makes it hard to both understand and extend the
Scheduler's interface.
This patch removes the following two methods from the public Scheduler's API:
- reclaimSimulatedResources()
- updatePendingQueue()
Their logic has been migrated to a new method named 'cycleEvent()'.
Methods 'updateIssuedSet()' and 'promoteToReadySet()' still exist. However,
they are now private members of class Scheduler.
This simplifies the interaction with the Scheduler from the ExecuteStage.
llvm-svn: 340273
The LSUnit is now a HardwareUnit, and it is owned by the mca::Context.
Derived classes can now implement a different consistency model by overriding
method `LSUnit::isReady()`.
This patch also slightly refactors the Scheduler interface in an attempt to
simplify the interaction between ExecuteStage and the underlying Scheduler.
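A rough sketch of the extension point, using stand-in types (the real LSUnit
interface and the exact isReady() signature are not reproduced here):
```cpp
// Stand-in types so the sketch is self-contained; the real llvm-mca classes
// are richer than this.
struct InstRefStub {
  bool IsLoad = false;
};

class LSUnitStub {
public:
  virtual ~LSUnitStub() = default;
  // Base readiness check (memory hazards, queue occupancy, ...).
  virtual bool isReady(const InstRefStub &IR) const { return true; }
};

// A derived unit modeling a stricter consistency model (hypothetical policy):
// loads are not ready while older stores are still in flight.
class StrictLSUnitStub : public LSUnitStub {
  unsigned PendingStores = 0;

public:
  bool isReady(const InstRefStub &IR) const override {
    if (IR.IsLoad && PendingStores != 0)
      return false;
    return LSUnitStub::isReady(IR);
  }
};
```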
llvm-svn: 340176
Class Scheduler should not know anything about hardware event listeners and
hardware stall events (HWStallEvent). HWStallEvent objects should only be
constructed by pipeline stages to notify listeners of hardware events.
No functional change intended.
llvm-svn: 340036
This patch changes how instruction execution is orchestrated by the Pipeline.
In particular, this patch makes it more explicit how instructions transition
through the various pipeline stages during execution.
The main goal is to simplify both the stage API and the Pipeline execution. At
the same time, this patch fixes some design issues which are currently latent,
but that are likely to cause problems in future if people start defining custom
pipelines.
The new design assumes that each pipeline stage knows the "next-in-sequence".
The Stage API has gained three new methods:
- isAvailable(IR)
- checkNextStage(IR)
- moveToTheNextStage(IR).
An instruction IR can be executed by a Stage if method `Stage::isAvailable(IR)`
returns true.
Instructions can move to the next stage using method moveToTheNextStage(IR).
An instruction cannot be moved to the next stage if method checkNextStage(IR)
(called on the current stage) returns false.
Stages are now responsible for moving instructions to the next stage in sequence
if necessary.
Instructions are allowed to transition through multiple stages during a single
cycle (as long as stages are available, and as long as all the calls to
`checkNextStage(IR)` return true).
Methods `Stage::preExecute()` and `Stage::postExecute()` have now become
redundant, and those are removed by this patch.
Method Pipeline::runCycle() is now simpler, and it correctly visits stages
on every begin/end of cycle.
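A condensed sketch of the stage-to-stage flow described above, using stand-in
types (the real Stage class is richer, and the signatures here are
assumptions):
```cpp
#include "llvm/Support/Error.h"

// Illustrative sketch of how an instruction can advance through multiple
// stages in a single cycle, as long as each next stage accepts it.
struct InstRefStub {};

class StageStub {
  StageStub *NextInSequence = nullptr;

public:
  virtual ~StageStub() = default;

  // True if this stage can process IR during the current cycle.
  virtual bool isAvailable(const InstRefStub &IR) const { return true; }

  // True if the next stage in sequence can accept IR.
  bool checkNextStage(const InstRefStub &IR) const {
    return NextInSequence && NextInSequence->isAvailable(IR);
  }

  // Failures are reported through llvm::Error instead of a Stage::Status.
  virtual llvm::Error execute(InstRefStub &IR) { return llvm::Error::success(); }

  // Hand IR over to the next stage in sequence.
  llvm::Error moveToTheNextStage(InstRefStub &IR) {
    if (!checkNextStage(IR))
      return llvm::createStringError(llvm::inconvertibleErrorCode(),
                                     "next stage is not available");
    return NextInSequence->execute(IR);
  }
};
```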
Other changes:
- DispatchStage no longer requires a reference to the Scheduler.
- ExecuteStage no longer needs to directly interact with the
RetireControlUnit. Instead, executed instructions are now directly moved to the
next stage (i.e. the retire stage).
- RetireStage gained an execute method. This allowed us to remove the
dependency on the RCU from ExecuteStage.
- FetchStage now updates the "program counter" during cycleBegin() (i.e.
before we start executing new instructions).
- We no longer need Stage::Status to be returned by method execute(). It has
been dropped in favor of a more lightweight llvm::Error.
Overall, I measured a ~11% performance gain w.r.t. the previous design. I also
think that the Stage interface is probably easier to read now. That being said,
code comments have to be improved, and I plan to do that in a follow-up patch.
Differential revision: https://reviews.llvm.org/D50849
llvm-svn: 339923
The main difference is that now `cycleStart()` and `cycleEnd()` return an
llvm::Error.
This patch implements a few minor style changes, and adds missing 'const' to
some methods.
llvm-svn: 339885
This patch fixes a regression introduced at revision 338702.
A processor resource mask was incorrectly implicitly truncated to an unsigned
quantity. Later on, the truncated mask was used to initialize an element of a
vector of processor resource descriptors.
On targets with more than 32 processor resources, some elements of the vector
were left uninitialized. As a consequence, this bug could eventually cause a
crash due to a null dereference in the Scheduler.
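The failure mode is the classic narrowing problem sketched below (variable
names are illustrative):
```cpp
#include <cstdint>

// A resource mask with a bit above position 31 set.
uint64_t ResourceMask = 1ULL << 40;

// Implicit truncation to a 32-bit unsigned: the value silently becomes 0, so
// the truncated mask no longer identifies the intended resource and its
// descriptor is left uninitialized.
unsigned Truncated = ResourceMask;
```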
This patch fixes PR38575, and adds a test for it.
llvm-svn: 339768
Summary:
This patch introduces error handling to propagate errors from llvm-mca library
classes (or what will become library classes) up to the driver. This patch also
introduces an enum to make the intention of the return value of Stage::execute
clearer.
This supports PR38101.
Reviewers: andreadb, courbet, RKSimon
Reviewed By: andreadb
Subscribers: llvm-commits, tschuett, gbedwell
Differential Revision: https://reviews.llvm.org/D50561
llvm-svn: 339594
LLVM triple normalization handles "unknown" and empty components
differently; for example, given "x86_64-unknown-linux-gnu" and
"x86_64-linux-gnu", which should be equivalent, triple normalization
returns "x86_64-unknown-linux-gnu" and "x86_64--linux-gnu". autoconf's
config.sub returns "x86_64-unknown-linux-gnu" for both
"x86_64-linux-gnu" and "x86_64-unknown-linux-gnu". This changes the
triple normalization to behave the same way, replacing empty triple
components with "unknown".
This addresses PR37129.
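A small illustration of the new behavior using llvm::Triple (the header
location may differ across LLVM versions):
```cpp
#include "llvm/ADT/Triple.h"
#include <string>

// After this change, the missing vendor component is filled in with "unknown"
// instead of being left empty, so both spellings normalize to the same triple.
std::string A = llvm::Triple::normalize("x86_64-linux-gnu");
std::string B = llvm::Triple::normalize("x86_64-unknown-linux-gnu");
// A == B == "x86_64-unknown-linux-gnu"
```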
Differential Revision: https://reviews.llvm.org/D50219
llvm-svn: 339294
This patch is a follow-up to r338702.
We don't need to use a map to model the wait/ready/issued sets. It is much more
efficient to use a vector instead.
This patch gives us an average 7.5% speedup (on top of the ~12% speedup obtained
after r338702).
llvm-svn: 338883
We don't need to use a map to store ResourceState objects. The number of
processor resources is known statically from the scheduling model. We can
therefore use a vector, and reserve a slot for each processor resource that we
want to simulate.
Every time the ResourceManager queries the ResourceState vector, the index
into that vector can easily be computed from the processor resource mask.
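A sketch of the mask-to-index mapping (this assumes each simulated resource
owns a distinct bit of the mask; the actual encoding in llvm-mca may differ):
```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch: with one bit per processor resource, the index into
// the ResourceState vector is just the position of that bit, so lookups are
// O(1) instead of a map search.
unsigned getResourceStateIndex(uint64_t ResourceMask) {
  assert(ResourceMask != 0 && "Invalid resource mask!");
  unsigned Index = 0;
  while (!(ResourceMask & 1)) {
    ResourceMask >>= 1;
    ++Index;
  }
  return Index; // Number of trailing zeros in the mask.
}
```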
This drastically reduces the time complexity of method ResourceManager::use() and
method ResourceManager::release(). This patch gives an average speedup of 12%.
llvm-svn: 338702