Commit Graph

78 Commits

Author SHA1 Message Date
Andrew Savonichev bba25a9cd8 [MCA] Support carry-over instructions for in-order processors
Instructions that have more uops than the processor's IssueWidth are
issued in multiple cycles.

The patch fixes PR49712.

Differential Revision: https://reviews.llvm.org/D99339
2021-03-26 00:06:19 +03:00
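A minimal sketch of the carry-over idea described above, using hypothetical names rather than the actual llvm-mca classes: when an instruction's micro-op count exceeds the IssueWidth, the remaining uops carry over to the following cycle(s).

```
#include <cstdio>

struct InstrDesc {
  unsigned NumMicroOps;
};

// Number of cycles needed to issue a single instruction when at most
// IssueWidth micro-ops can be issued per cycle (ceiling division).
unsigned cyclesToIssue(const InstrDesc &D, unsigned IssueWidth) {
  return (D.NumMicroOps + IssueWidth - 1) / IssueWidth;
}

int main() {
  InstrDesc D{5};                            // a 5-uop instruction
  std::printf("%u\n", cyclesToIssue(D, 2));  // IssueWidth = 2 -> 3 cycles
  return 0;
}
```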
Andrea Di Biagio 97a00b7b20 [MCA] Fix for uninitialised member in constructor. NFC 2021-03-24 11:21:59 +00:00
Andrew Savonichev 292da93d59 [MCA] Disable RCU for InOrderIssueStage
This is a follow-up for:
D98604 [MCA] Ensure that writes occur in-order

When instructions are aligned by the order of writes, they retire
in-order naturally. There is no need for an RCU, so it is disabled.

Differential Revision: https://reviews.llvm.org/D98628
2021-03-24 13:54:04 +03:00
Andrea Di Biagio f5bdc88e4d [MCA] Improved handling of negative read-advance cycles.
Before this patch, register writes were always invalidated by the
RegisterFile at the instruction commit stage, so the RegisterFile often
lost track of the `execute cycle` of writes that had already committed.
While this was not a problem for non-delayed reads, it sometimes led to
inaccurate read latency computations in the presence of negative
read-advance cycles.

This patch fixes the issue by changing how the RegisterFile component
internally keeps track of the `execute cycle` of each write. Whenever an
instruction is executed, the RetireStage notifies the RegisterFile, which
records the execute cycle of each of its writes.
The `execute cycle` information is stored within WriteRef itself, and it
is no longer invalidated when the write is committed.
2021-03-23 14:47:23 +00:00
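A worked illustration of the scenario described above (simplified; the names below are hypothetical, not the RegisterFile API): a negative read-advance delays a dependent read, and computing its ready cycle requires the execute cycle of the producing write, even after that write has committed.

```
#include <algorithm>
#include <cstdio>

// A write that reached its execute point at ExecuteCycle and produces its
// result WriteLatency cycles later.
struct WriteInfo {
  unsigned ExecuteCycle;
  unsigned WriteLatency;
};

// ReadAdvance > 0 lets a reader see the value earlier than the nominal
// latency; ReadAdvance < 0 (a negative read-advance) delays it further.
unsigned readReadyCycle(const WriteInfo &W, int ReadAdvance) {
  int Ready = int(W.ExecuteCycle) + int(W.WriteLatency) - ReadAdvance;
  return unsigned(std::max(Ready, int(W.ExecuteCycle)));
}

int main() {
  WriteInfo W{10, 3};
  std::printf("%u\n", readReadyCycle(W, -2)); // 15: two extra cycles of latency
  return 0;
}
```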
Andrew Savonichev e6ce0db378 [MCA] Ensure that writes occur in-order
Delay the issue of a new instruction if issuing it would lead to
out-of-order commits of writes.

This patch fixes the problem described in:
https://bugs.llvm.org/show_bug.cgi?id=41796#c3

Differential Revision: https://reviews.llvm.org/D98604
2021-03-18 17:10:20 +03:00
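A rough sketch of the constraint (hypothetical helper, not the actual stage code): pick the earliest issue cycle that keeps write-backs in program order with respect to the previously issued instruction.

```
#include <algorithm>

// LastWriteBackCycle: write-back cycle of the previously issued instruction.
// Latency: latency of the instruction we are about to issue.
unsigned earliestIssueCycle(unsigned CurrentCycle, unsigned LastWriteBackCycle,
                            unsigned Latency) {
  // Issuing at cycle C writes back at C + Latency; choose the smallest
  // C >= CurrentCycle such that C + Latency > LastWriteBackCycle.
  unsigned MinCycle =
      LastWriteBackCycle >= Latency ? LastWriteBackCycle - Latency + 1 : 0;
  return std::max(CurrentCycle, MinCycle);
}
```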
Jay Foad 7340fd6886 [MCA] Support in-order CPUs with MicroOpBufferSize=1
Differential Revision: https://reviews.llvm.org/D98356
2021-03-11 10:12:54 +00:00
Andrew Savonichev d791695cb5 [MCA] Add support for in-order CPUs
This patch adds a pipeline to support in-order CPUs such as ARM
Cortex-A55.

The in-order pipeline implements simplified versions of the Dispatch,
Scheduler and Execute stages as a single stage. The Entry and Retire
stages are common to both the in-order and out-of-order pipelines.

Differential Revision: https://reviews.llvm.org/D94928
2021-03-04 14:08:19 +03:00
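A purely conceptual sketch of the stage layout described above; the classes below are illustrative stand-ins, not the mca API.

```
#include <memory>
#include <vector>

struct Stage {
  virtual ~Stage() = default;
  virtual void cycle() = 0;
};
struct EntryStage : Stage { void cycle() override {} };
// Combines the roles of Dispatch, Scheduler and Execute for in-order cores.
struct InOrderIssueStage : Stage { void cycle() override {} };
struct RetireStage : Stage { void cycle() override {} };

struct Pipeline {
  std::vector<std::unique_ptr<Stage>> Stages;
  void appendStage(std::unique_ptr<Stage> S) { Stages.push_back(std::move(S)); }
  void runCycle() { for (auto &S : Stages) S->cycle(); }
};

Pipeline buildInOrderPipeline() {
  Pipeline P;
  P.appendStage(std::make_unique<EntryStage>());
  P.appendStage(std::make_unique<InOrderIssueStage>());
  P.appendStage(std::make_unique<RetireStage>());
  return P;
}
```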
Kazu Hirata 8857202489 [llvm] Use llvm::find (NFC) 2021-01-19 20:19:14 -08:00
Kazu Hirata 1d0bc05551 [llvm] Use llvm::append_range (NFC) 2021-01-06 18:27:33 -08:00
Evgeny Leviant 9c3b68dc6f [llvm-mca] Fix processing thumb instruction set
Differential revision: https://reviews.llvm.org/D91704
2020-11-24 18:27:59 +03:00
serge-sans-paille 9218ff50f9 llvmbuildectomy - replace llvm-build by plain cmake
No longer rely on an external tool to build the llvm component layout.

Instead, leverage the existing `add_llvm_component_library` cmake function and
introduce `add_llvm_component_group` to accurately describe component behavior.

These functions store extra properties in the created targets. These properties
are processed once all components are defined to resolve library dependencies
and produce the header expected by llvm-config.

Differential Revision: https://reviews.llvm.org/D90848
2020-11-13 10:35:24 +01:00
Andrea Di Biagio 0e20666db3 [MCA][LSUnit] Correctly update the internal group flags on store barrier execution. Fixes PR48024.
This is likely a regression introduced by my last refactoring of the
LSUnit (commit 5578ec32f9). Before this patch, the
"CurrentStoreBarrierGroupID" index was not correctly reset on store barrier
executions. This led to unexpected crashes like the one reported as
PR48024.
2020-10-31 11:57:27 +00:00
Evgeny Leviant 8a7ca143f8 [ARM][SchedModels] Convert IsPredicatedPred to MCSchedPredicate
Differential revision: https://reviews.llvm.org/D89553
2020-10-19 11:37:54 +03:00
Jay Foad 099c089d4b [APInt] New member function setBitVal
Differential Revision: https://reviews.llvm.org/D87033
2020-09-02 21:40:31 +01:00
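A small usage example of the new member (D87033 adds `setBitVal(BitPosition, BitValue)`, which sets or clears a bit depending on the boolean value); the helper below is illustrative.

```
#include "llvm/ADT/APInt.h"

// Build an 8-bit mask from an array of flags, one call per bit instead of
// branching between setBit() and clearBit().
llvm::APInt maskFromFlags(const bool (&Flags)[8]) {
  llvm::APInt Mask(8, 0);
  for (unsigned I = 0; I < 8; ++I)
    Mask.setBitVal(I, Flags[I]);
  return Mask;
}
```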
Andrea Di Biagio 47b95d7cf4 [MCA][InstrBuilder] Correctly mark reserved resources in initializeUsedResources.
This fixes a bug reported by Alex Renda on LLVMDev where mca did not correctly
mark a resource group as "reserved".
(See http://lists.llvm.org/pipermail/llvm-dev/2020-May/141485.html).

The issue was caused by a wrong check in function `initializeUsedResources`.
As a consequence of this, a resource group was left unreserved, and its field
`NumUnits` incorrectly reported an unrealistic number of consumed resource
units.

This patch fixes the issue with the handling of reserved resources in the
InstrBuilder class, and adds a simple test for it.  Ideally, as suggested by
Andy Trick, most of these problems will disappear if in the future we will
introduce a (optional) DelayCycles vector for SchedWriteRes.
2020-05-10 19:25:54 +01:00
Andrea Di Biagio 5578ec32f9 [MCA] Fixed a bug where loads and stores were sometimes incorrectly marked as dependent. Fixes PR45793.
This fixes a regression introduced by a very old commit 280ac1fd1d (was
llvm-svn 361950).

Commit 280ac1fd1d redesigned the logic in the LSUnit with the goal of
speeding up isReady() queries, and stabilising the LSUnit API (while also making
the load store unit more customisable).

The concept of MemoryGroup (effectively an alias set) was added by that commit
to better describe and track dependencies between memory operations.  However,
that concept was not just used for alias dependencies, but it was also used for
describing memory "order" dependencies (enforced by the memory consistency
model).

Instructions of the same memory group were considered "equivalent", i.e.
independent operations that can potentially execute in parallel.  The problem
was that the cost of a dependency (in terms of number of cycles) should have
been different for "order" dependencies. Instructions in an order dependency
simply have to wait until their predecessors are "issued" to an underlying
pipeline (rather than having to wait until their predecessors have been fully
executed). For simple "order" dependencies, this was effectively introducing
an artificial delay on the "issue" of independent loads and stores.

This patch fixes the issue and adds a new test named 'independent-load-stores.s'
to a bunch of x86 targets. That test contains the reproducer posted by Fabian
Ritter on PR45793.

I had to rerun the update-mca-tests script on several files. To avoid expected
regressions on some Exynos tests, I have added a -noalias=false flag (to match
the old strict behavior on latencies).

Some tests for processor Barcelona are improved/fixed by this change and they
now show better results.  In a few tests we were incorrectly counting the time
spent by instructions in a scheduler queue.  In one case in particular we now
correctly see a store executed out of order.  That test was affected by the same
underlying issue reported as PR45793.

Reviewers: mattd

Differential Revision: https://reviews.llvm.org/D79351
2020-05-05 10:25:36 +01:00
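To make the cost difference concrete, here is a small illustrative helper (not the LSUnit implementation): an "order" dependency only waits for the predecessor's issue cycle, whereas an alias dependency waits for the predecessor to finish executing.

```
struct MemOp {
  unsigned IssueCycle;       // cycle the operation was issued to a pipeline
  unsigned ExecuteDoneCycle; // cycle the operation finished executing
};

unsigned earliestIssueAfter(const MemOp &Pred, bool OrderDependencyOnly) {
  // Order dependency: issue as soon as the predecessor has been issued.
  // Alias dependency: wait until the predecessor has fully executed.
  return OrderDependencyOnly ? Pred.IssueCycle + 1 : Pred.ExecuteDoneCycle + 1;
}
```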
Shengchen Kan 8bb059ab63 [MC][Bugfix] Remove redundant parameter for relaxInstruction
Summary:
Before this patch, `relaxInstruction` took three arguments: the first
argument referred to the instruction before relaxation and the third
argument was the output instruction after relaxation. Two things about
this were quite strange:
  1) The first argument's type was `const MCInst &` and the third
  argument's type was `MCInst &`, but they could be aliased to the same
  variable.
  2) The ARM, AMDGPU, RISC-V and Hexagon backends assumed that the third
  argument was a fresh, uninitialized `MCInst`, even though
  `relaxInstruction` could be called like
  `relaxInstruction(Relaxed, STI, Relaxed)` in a loop.

In this patch, we drop the third argument and let `relaxInstruction`
directly modify the given instruction. This also fixes
https://bugs.llvm.org/show_bug.cgi?id=45580, which was introduced by D77851
and broke the assumption made by the ARM, AMDGPU, RISC-V and Hexagon
backends.

Reviewers: Razer6, MaskRay, jyknight, asb, luismarques, enderby, rtaylor, colinl, bcain

Reviewed By: Razer6, MaskRay, bcain

Subscribers: bcain, nickdesaulniers, nathanchance, wuzish, annita.zhang, arsenm, dschuff, jyknight, dylanmckay, sdardis, nemanjai, jvesely, nhaehnle, tpr, sbc100, jgravelle-google, kristof.beyls, hiraditya, aheejin, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, Jim, lenary, s.egerton, pzheng, sameer.abuasal, apazos, luismarques, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D78364
2020-04-21 11:06:55 +08:00
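The aliasing hazard described in point 2 can be reproduced with plain C++ (this mock uses stand-in types, not the MC classes): when the input and output refer to the same object, a callee that assumes a fresh output silently operates on stale state.

```
#include <cassert>
#include <vector>

struct Inst { std::vector<int> Operands; };

// Old-style shape of the API: separate input and output, which callers may
// alias (as in relaxInstruction(Relaxed, STI, Relaxed)).
void relaxOldStyle(const Inst &In, Inst &Out) {
  // A backend assuming Out is freshly constructed just appends operands...
  const size_t N = In.Operands.size();
  for (size_t I = 0; I != N; ++I)
    Out.Operands.push_back(In.Operands[I]); // ...duplicating them if &In == &Out.
}

// New-style shape of the API: relax in place, no aliasing ambiguity.
void relaxInPlace(Inst &I) { (void)I; /* modify I directly */ }

int main() {
  Inst I{{1, 2}};
  relaxOldStyle(I, I);            // aliased call
  assert(I.Operands.size() == 4); // operands duplicated: the class of bug fixed here
  return 0;
}
```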
Bill Wendling c55cf4afa9 Revert "Remove redundant "std::move"s in return statements"
The build failed with

  error: call to deleted constructor of 'llvm::Error'

errors.

This reverts commit 1c2241a793.
2020-02-10 07:07:40 -08:00
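An illustrative reproduction of the failure mode (simplified): `llvm::Error` is move-only, and before C++20 a return of a local whose type differs from the function's return type does not get the implicit-move treatment, so dropping the seemingly redundant `std::move` selects the deleted copy constructor.

```
#include "llvm/Support/Error.h"
#include <utility>

llvm::Expected<int> parseValue(bool Fail) {
  if (Fail) {
    llvm::Error E = llvm::createStringError(llvm::inconvertibleErrorCode(),
                                            "parse failed");
    // Without std::move this is "call to deleted constructor of 'llvm::Error'",
    // because E's type (Error) differs from the return type (Expected<int>).
    return std::move(E);
  }
  return 42;
}
```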
Bill Wendling 1c2241a793 Remove redundant "std::move"s in return statements 2020-02-10 06:39:44 -08:00
Andrea Di Biagio aaaeac6166 [MCA] Remove verification check on MayLoad and MayStore. NFCI
Field NumMicroOpcodes is currently used by mca to model the number of uOPs
dispatched from the uOp-Queue to the out-of-order backend.  From a 'dispatch'
point of view, an instruction with zero uOPs is still valid; it simply
doesn't consume any dispatch group slots.

However, mca doesn't expect an instruction with zero uOPs to consume pipeline
resources, because that is seen as a contradiction.  In practice, it only makes
sense if such an instruction is eliminated and never really executed. It may be
that mca is being too conservative here. However, I believe that mca is right,
and we should probably check for that inconsistency in CodeGenSchedule.cpp (when
we also verify scheduling classes in general).

This patch removes the check for MayLoad and MayStore in mca.  That check is
probably too conservative: we are already checking whether a zero-uops
instruction consumes any processor resources. Note also that instructions with
unmodelled side effects tend to set the MayLoad/MayStore flags even if,
theoretically speaking, they might not consume any hardware resources in
practice.

In future we may want to implement different checks (possibly outside of mca)
and potentially revisit the logic in mca that verifies instructions.
For that reason I have raised PR44797.
2020-02-05 13:50:01 +00:00
Benjamin Kramer adcd026838 Make llvm::StringRef to std::string conversions explicit.
This is how it should've been and brings it more in line with
std::string_view. There should be no functional change here.

This is mostly mechanical from a custom clang-tidy check, with a lot of
manual fixups. It uncovers a lot of minor inefficiencies.

This doesn't actually modify StringRef yet, I'll do that in a follow-up.
2020-01-28 23:25:25 +01:00
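A small example of what caller code looks like after this change (illustrative): the implicit conversion is gone, so the copy is spelled out, typically via `StringRef::str()`.

```
#include "llvm/ADT/StringRef.h"
#include <string>

std::string copyName(llvm::StringRef Name) {
  // std::string S = Name;    // no longer compiles once the conversion is explicit
  std::string S = Name.str(); // the copy is now visible at the call site
  return S;
}
```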
Mark de Wever 8dc7b982b4 [NFC] Fixes -Wrange-loop-analysis warnings
This avoids new warnings due to D68912 adds -Wrange-loop-analysis to -Wall.

Differential Revision: https://reviews.llvm.org/D71857
2020-01-01 20:01:37 +01:00
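The typical shape of such a fix (illustrative example, not taken from the patch): take the loop variable by const reference instead of by value so the range-for does not copy every element.

```
#include <string>
#include <vector>

unsigned totalLength(const std::vector<std::string> &Names) {
  unsigned Total = 0;
  // `for (const auto N : Names)` copies each string and triggers
  // -Wrange-loop-analysis; a const reference avoids the copy.
  for (const auto &N : Names)
    Total += N.size();
  return Total;
}
```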
Tom Stellard ab411801b8 [cmake] Explicitly mark libraries defined in lib/ as "Component Libraries"
Summary:
Most libraries are defined in the lib/ directory, but there are also a
few libraries defined in tools/, e.g. libLLVM and libLTO.  I'm defining
"Component Libraries" as libraries defined in lib/ that may be included in
libLLVM.so.  Explicitly marking the libraries in lib/ as component
libraries allows us to remove some fragile checks that attempt to
differentiate between lib/ libraries and tools/ libraries:

1. In tools/llvm-shlib, because
llvm_map_components_to_libnames(LIB_NAMES "all") returned a list of
all libraries defined in the whole project, there was custom code
needed to filter out libraries defined in tools/, none of which should
be included in libLLVM.so.  This code assumed that any library
defined as static was from lib/ and everything else should be
excluded.

With this change, llvm_map_components_to_libnames(LIB_NAMES, "all")
only returns libraries that have been added to the LLVM_COMPONENT_LIBS
global cmake property, so this custom filtering logic can be removed.
Doing this also fixes the build with BUILD_SHARED_LIBS=ON
and LLVM_BUILD_LLVM_DYLIB=ON.

2. There was some code in llvm_add_library that assumed that
libraries defined in lib/ would not have LLVM_LINK_COMPONENTS or
ARG_LINK_COMPONENTS set.  This is only true because libraries
defined in lib/ use LLVMBuild.txt and don't set these values.
This code has now been fixed to check if the library has been
explicitly marked as a component library, which should make it
easier to remove LLVMBuild at some point in the future.

I have tested this patch on Windows, MacOS and Linux with release builds
and the following combinations of CMake options:

- "" (No options)
- -DLLVM_BUILD_LLVM_DYLIB=ON
- -DLLVM_LINK_LLVM_DYLIB=ON
- -DBUILD_SHARED_LIBS=ON
- -DBUILD_SHARED_LIBS=ON -DLLVM_BUILD_LLVM_DYLIB=ON
- -DBUILD_SHARED_LIBS=ON -DLLVM_LINK_LLVM_DYLIB=ON

Reviewers: beanz, smeenai, compnerd, phosek

Reviewed By: beanz

Subscribers: wuzish, jholewinski, arsenm, dschuff, jyknight, dylanmckay, sdardis, nemanjai, jvesely, nhaehnle, mgorny, mehdi_amini, sbc100, jgravelle-google, hiraditya, aheejin, fedor.sergeev, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, steven_wu, rogfer01, MartinMosbeck, brucehoult, the_o, dexonsmith, PkmX, jocewei, jsji, dang, Jim, lenary, s.egerton, pzheng, sameer.abuasal, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70179
2019-11-21 10:48:08 -08:00
Greg Bedwell 4640223ebd [MCA] Fix a spelling mistake in a comment. NFC 2019-10-27 10:06:22 +00:00
Andrea Di Biagio 8d6651f7b1 [MCA][LSUnit] Track loads and stores until retirement.
Before this patch, loads and stores were only tracked by their corresponding
queues in the LSUnit from dispatch until the execute stage. In practice, we
should be more conservative and assume that memory opcodes leave their queues
at the retirement stage.

Basically, loads should leave the load queue only when they have completed and
delivered their data. We conservatively assume that a load is completed when it
is retired. Stores should be tracked by the store queue from dispatch until
retirement. In practice, stores can only leave the store queue if their data can
be written to the data cache.

This is mostly a mechanical change. With this patch, the retire stage notifies
the LSUnit when a memory instruction is retired. That triggers the release
of LDQ/STQ entries.  The only visible change is in memory tests for the bdver2
model. That is because bdver2 is the only model that defines the load/store
queue size.

This patch partially addresses PR39830.

Differential Revision: https://reviews.llvm.org/D68266

llvm-svn: 374034
2019-10-08 10:46:01 +00:00
Andrea Di Biagio 2730df2e16 [MCA] Use references to LSUnitBase in class Scheduler and add helper methods to acquire/release LS queue entries. NFCI
llvm-svn: 373236
2019-09-30 17:24:25 +00:00
Andrea Di Biagio 2f51a43f8c [Tblgen][MCA] Add the ability to mark groups as LoadQueue and StoreQueue. NFCI
Before this patch, users were not allowed to optionally mark processor resource
groups as load/store queues. That is because tablegen class MemoryQueue was
originally declared as expecting a ProcResource template argument (instead of a
more generic ProcResourceKind).

That was an oversight, since the original intention from D54957 was to let users
mark any processor resource as either a load or a store queue.  This patch adds
the ability to use processor resource groups in MemoryQueue definitions. This is
not a user-visible change.

Differential Revision: https://reviews.llvm.org/D66810

llvm-svn: 370091
2019-08-27 18:20:34 +00:00
Andrea Di Biagio 589cb004de [MCA] consistently use MCPhysReg instead of unsigned as register type. NFCI
llvm-svn: 369648
2019-08-22 13:32:17 +00:00
Jonas Devlieghere 0eaee545ee [llvm] Migrate llvm::make_unique to std::make_unique
Now that we've moved to C++14, we no longer need the llvm::make_unique
implementation from STLExtras.h. This patch is a mechanical replacement
of (hopefully) all the llvm::make_unique instances across the monorepo.

llvm-svn: 369013
2019-08-15 15:54:37 +00:00
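The mechanical shape of the replacement, as an illustrative example:

```
#include <memory>

struct Node {
  int Value;
  explicit Node(int V) : Value(V) {}
};

std::unique_ptr<Node> makeNode(int V) {
  // was: return llvm::make_unique<Node>(V);
  return std::make_unique<Node>(V);
}
```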
Andrea Di Biagio 3de2f0330f [MCA] Slightly refactor class RetireControlUnit, and add the ability to override the mask of used buffered resources in class mca::Instruction. NFCI
This patch teaches the RCU how to peek 'next' RCUTokens. A new method has been
added to the RetireControlUnit class with the goal of minimizing the complexity
of follow-up patches that will enable macro-fusion support in mca.

This patch also adds method Instruction::getNumMicroOpcodes() to simplify common
interactions with the instruction descriptor (a pattern quite common in some
pipeline stages).

Added the ability to override the default set of consumed scheduler resources
(this, again, is to simplify future patches that add support for macro-op fusion).

No functional change intended.

llvm-svn: 369010
2019-08-15 15:27:40 +00:00
Andrea Di Biagio 7aa0dbb664 [MCA] Slightly refactor the logic in ResourceManager. NFCI
This patch slightly changes the API in the attempt to simplify resource buffer
queries. It is done in preparation for a patch that will enable support for
macro fusion.

llvm-svn: 368994
2019-08-15 12:39:55 +00:00
Andrea Di Biagio cbec9af6bf [MCA] Add flag -show-encoding to llvm-mca.
Flag -show-encoding enables the printing of instruction encodings as part of
the instruction info view.

Example (with flags -mtriple=x86_64--  -mcpu=btver2):

Instruction Info:
[1]: #uOps
[2]: Latency
[3]: RThroughput
[4]: MayLoad
[5]: MayStore
[6]: HasSideEffects (U)
[7]: Encoding Size

[1]    [2]    [3]    [4]    [5]    [6]    [7]    Encodings:     Instructions:
 1      2     1.00                         4     c5 f0 59 d0    vmulps   %xmm0, %xmm1, %xmm2
 1      4     1.00                         4     c5 eb 7c da    vhaddps  %xmm2, %xmm2, %xmm3
 1      4     1.00                         4     c5 e3 7c e3    vhaddps  %xmm3, %xmm3, %xmm4

In this example, column Encoding Size is the size in bytes of the instruction
encoding. Column Encodings reports the actual instruction encodings as byte
sequences in hex (objdump style).

The computation of encodings is done by a utility class named mca::CodeEmitter.

In future, I plan to expose the CodeEmitter to the instruction builder, so that
information about instruction encoding sizes can be used by the simulator. That
would be a first step towards simulating the throughput from the decoders in the
hardware frontend.

Differential Revision: https://reviews.llvm.org/D65948

llvm-svn: 368432
2019-08-09 11:26:27 +00:00
Andrea Di Biagio 987331671f [MCA] Remove dependency from InstrBuilder in mca::Context. NFC
InstrBuilder is not required to construct the default pipeline.

llvm-svn: 368275
2019-08-08 10:30:58 +00:00
Andrea Di Biagio 6b78e4d0a4 [MCA] Ignore invalid processor resource writes of zero cycles. NFCI
In debug mode, the tool also raises a warning and prints out a message which
helps identify the problematic MCWriteProcResEntry from the scheduling class.
This message would have been useful to have when triaging PR42282.

llvm-svn: 363387
2019-06-14 13:31:21 +00:00
Andrea Di Biagio 47db08dbb1 [MCA] Further refactor the bottleneck analysis view. NFCI.
llvm-svn: 362933
2019-06-10 12:50:08 +00:00
Andrea Di Biagio 6a989c358c [MCA][Scheduler] Change how memory instructions are dispatched to the pending set. NFCI
llvm-svn: 362302
2019-06-01 15:22:37 +00:00
Andrea Di Biagio 280ac1fd1d [MCA] Refactor class LSUnit. NFCI
This should be the last bit of refactoring in preparation for a patch that would
finally fix PR37494.

This patch introduces the concept of memory dependency groups (class
MemoryGroup) and "Load/Store Unit token" (LSUToken) to track the status of a
memory operation.

A MemoryGroup is a node of a memory dependency graph. It is used internally to
classify memory operations based on the memory operations they depend on.  Let I
and J be two memory operations; we say that I and J are equivalent (for the
purpose of mapping instructions to memory dependency groups) if the set of
memory operations they depend on is identical.

MemoryGroups are identified by a so-called LSUToken (a unique group identifier
assigned by the LSUnit to every group). When an instruction I is dispatched to
the LSUnit, the LSUnit maps I to a group and then returns an LSUToken.
LSUTokens are used by class Scheduler to track memory dependencies.

This patch simplifies the LSUnit interface and moves most of the implementation
details to its base class (LSUnitBase). There is no user visible change to the
output.

llvm-svn: 361950
2019-05-29 11:38:27 +00:00
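A conceptual sketch of the grouping idea with stand-in types (not the real LSUnit/MemoryGroup classes): operations whose sets of predecessor groups are identical share a group, and each group is identified by an LSUToken.

```
#include <map>
#include <set>
#include <vector>

using LSUToken = unsigned;

struct MemoryGroup {
  std::set<LSUToken> DependsOn;  // groups this group must wait for
  std::vector<unsigned> Members; // instruction ids mapped to this group
};

class SimpleLSU {
  std::map<LSUToken, MemoryGroup> Groups;
  std::map<std::set<LSUToken>, LSUToken> GroupByDeps;
  LSUToken NextToken = 0;

public:
  // Map an instruction to a group and return the group's token.
  LSUToken dispatch(unsigned InstrId, const std::set<LSUToken> &Deps) {
    auto It = GroupByDeps.find(Deps);
    if (It == GroupByDeps.end()) {
      LSUToken Tok = NextToken++;
      Groups.emplace(Tok, MemoryGroup{Deps, {}});
      It = GroupByDeps.emplace(Deps, Tok).first;
    }
    Groups[It->second].Members.push_back(InstrId);
    return It->second;
  }
};
```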
Andrea Di Biagio c2493ce4a4 [MCA][Scheduler] Improved critical memory dependency computation.
This fixes a problem where back-pressure increases caused by register
dependencies were not correctly notified if execution was also delayed by memory
dependencies.

llvm-svn: 361740
2019-05-26 19:50:31 +00:00
Andrea Di Biagio a549dd2560 [MCA] Refactor the logic that computes the critical memory dependency info. NFCI
CriticalRegDep has been renamed CriticalDependency, and it is now used by class
Instruction to store information about the critical register dependency and the
critical memory dependency. No functional change intended.

llvm-svn: 361737
2019-05-26 18:41:35 +00:00
Andrea Di Biagio 27b3b5d952 [MCA] Add the ability to compute critical register dependency of an instruction.
This patch adds the methods `getCriticalRegDep()` and `computeCriticalRegDep()` to
class InstructionBase.
The goal is to allow users to obtain information about the critical register
dependency that most affects the latency of an instruction.

These methods are currently unused. However, the long term plan is to use them
in order to allow the computation of a critical-path as part of the bottleneck
analysis. So, this is yet another step towards fixing PR37494.

llvm-svn: 361509
2019-05-23 16:32:19 +00:00
Andrea Di Biagio dd0d9e01ee [MCA] Introduce class LSUnitBase and let LSUnit derive from it.
Class LSUnitBase provides an abstract interface for all the concrete LS units in
llvm-mca.

Methods exposed by the public abstract LSUnitBase interface are:
 - Status isAvailable(const InstRef&);
 - void dispatch(const InstRef &);
 - const InstRef &isReady(const InstRef &);

LSUnitBase standardises the API, but not the data structures internally used by
LS units. This allows for more flexibility.
Previously, only method `isReady()` was declared virtual by class LSUnit.
Also, derived classes had to inherit all the internal data members of LSUnit.

No functional change intended.

llvm-svn: 361496
2019-05-23 13:42:47 +00:00
Andrea Di Biagio 28afd8dc71 [MCA] Make the bool conversion operator in class InstRef explicit. NFCI
This patch makes the bool conversion operator in InstRef explicit.
It also adds an operator< to help compare InstRef objects in sets.

llvm-svn: 361482
2019-05-23 10:50:01 +00:00
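A minimal illustration with a stand-in type: an explicit bool conversion keeps the object usable in conditions while blocking accidental implicit conversions, and operator< allows use as a key in ordered containers.

```
#include <set>

struct Ref {
  unsigned Index = ~0u;
  explicit operator bool() const { return Index != ~0u; }
  bool operator<(const Ref &Other) const { return Index < Other.Index; }
};

bool track(std::set<Ref> &Live, Ref R) {
  if (!R) // contextual conversion: allowed even though the operator is explicit
    return false;
  return Live.insert(R).second;
}
```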
Andrea Di Biagio 69b8b17945 [MCA] Remove dead assignment. NFC
llvm-svn: 360237
2019-05-08 10:28:56 +00:00
Andrea Di Biagio 0460a3629b [MCA] Notify event listeners when instructions transition to the Pending state. NFCI
llvm-svn: 359983
2019-05-05 16:07:27 +00:00
Andrea Di Biagio d77dc9ada2 [MCA] Add field `IsEliminated` to class Instruction. NFCI
llvm-svn: 359377
2019-04-27 11:59:11 +00:00
Andrea Di Biagio e074ac60b4 [MCA] Add an experimental MicroOpQueue stage.
This patch adds an experimental stage named MicroOpQueueStage.
MicroOpQueueStage can be used to simulate a hardware micro-op queue (basically,
a decoupling queue between 'decode' and 'dispatch').  Users can specify a queue
size, as well as an optional MaxIPC (which, in the absence of a "Decoders"
stage, can be used to simulate a different throughput from the decoders).

This stage is added to the default pipeline between the EntryStage and the
DispatchStage only if PipelineOption::MicroOpQueue is different from zero. By
default, llvm-mca sets PipelineOption::MicroOpQueue to the value of hidden flag
-micro-op-queue-size.

Throughput from the decoder can be simulated via another hidden flag named
-decoder-throughput.  That flag allows us to quickly experiment with different
frontend throughputs.  For targets that declare a loop buffer, flag
-decoder-throughput allows users to do multiple runs, each time simulating a
different throughput from the decoders.

This stage can/will be extended in future. For example, we could add a "buffer
full" event to report bottlenecks caused by backpressure. Flag
-decoder-throughput would probably go away if in future we delegate to another
stage (DecoderStage?) the simulation of a (potentially variable) throughput from
the decoders. For now, flag -decoder-throughput is "good enough" to run some
simple experiments.

Differential Revision: https://reviews.llvm.org/D59928

llvm-svn: 357248
2019-03-29 12:15:37 +00:00
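A conceptual sketch of the decoupling queue (hypothetical class, not the MicroOpQueueStage implementation): fixed capacity, and at most MaxIPC entries released per cycle.

```
#include <deque>

class MicroOpQueue {
  std::deque<unsigned> Queue; // instruction ids waiting to be dispatched
  unsigned Capacity;
  unsigned MaxIPC;

public:
  MicroOpQueue(unsigned Capacity, unsigned MaxIPC)
      : Capacity(Capacity), MaxIPC(MaxIPC) {}

  // Returns false when the queue is full (backpressure towards the decoders).
  bool push(unsigned InstrId) {
    if (Queue.size() >= Capacity)
      return false;
    Queue.push_back(InstrId);
    return true;
  }

  // Release up to MaxIPC instructions to the next stage in this cycle.
  unsigned releaseCycle() {
    unsigned Released = 0;
    while (!Queue.empty() && Released < MaxIPC) {
      Queue.pop_front();
      ++Released;
    }
    return Released;
  }
};
```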
Andrea Di Biagio a194656fa2 [MCA] Fix -Wparentheses warning breaking the -Werror build.
The warning was introduced at r357074.

llvm-svn: 357085
2019-03-27 16:22:36 +00:00
Andrea Di Biagio 333a3264f4 [MCA][Pipeline] Don't visit stages in reverse order when calling method cycleEnd(). NFCI
There is no reason why stages should be visited in reverse order.
This patch allows the definition of stages that push instructions forward from
their cycleEnd() routine.

llvm-svn: 357074
2019-03-27 15:41:53 +00:00
Andrea Di Biagio ddce32e2f3 [MCA] Correctly update the UsedResourceGroups mask in the InstrBuilder.
Found by inspection when looking at the debug output of MCA.
This problem was latent, and none of the upstream models were affected by it.
No functional change intended.

llvm-svn: 357000
2019-03-26 15:38:37 +00:00
Andrea Di Biagio be3281a281 [MCA] Highlight kernel bottlenecks in the summary view.
This patch adds a new flag named -bottleneck-analysis to print out information
about throughput bottlenecks.

MCA knows how to identify and classify dynamic dispatch stalls. However, it
doesn't know how to analyze and highlight kernel bottlenecks.  The goal of this
patch is to teach MCA how to correlate increases in backend pressure to backend
stalls (and therefore, the loss of throughput).

From a Scheduler point of view, backend pressure is a function of the scheduler
buffer usage (i.e. how the number of uOps in the scheduler buffers changes over
time). Backend pressure increases (or decreases) when there is a mismatch
between the number of opcodes dispatched, and the number of opcodes issued in
the same cycle.  Since buffer resources are limited, continuous increases in
backend pressure eventually lead to dispatch stalls. So, there is a
strong correlation between dispatch stalls and how backpressure changes over
time.

This patch teaches how to identify situations where backend pressure increases
due to:
 - unavailable pipeline resources.
 - data dependencies.

Data dependencies may delay execution of instructions and therefore increase the
time that uOps have to spend in the scheduler buffers. That often translates to
an increase in backend pressure which may eventually lead to a bottleneck.
Contention on pipeline resources may also delay execution of instructions, and
lead to a temporary increase in backend pressure.

Internally, the Scheduler classifies instructions based on whether register /
memory operands are available or not.

An instruction is marked as "ready to execute" only if data dependencies are
fully resolved.
Every cycle, the Scheduler attempts to execute all instructions that are ready
to execute. If an instruction cannot execute because of unavailable pipeline
resources, then the Scheduler internally updates a BusyResourceUnits mask with
the ID of each unavailable resource.

ExecuteStage is responsible for tracking changes in backend pressure. If backend
pressure increases during a cycle because of contention on pipeline resources,
then ExecuteStage sends a "backend pressure" event to the listeners.
That event would contain information about instructions delayed by resource
pressure, as well as the BusyResourceUnits mask.

Note that ExecuteStage also knows how to identify situations where backpressure
increased because of delays introduced by data dependencies.

The SummaryView observes "backend pressure" events and prints out a "bottleneck
report".

Example of bottleneck report:

```
Cycles with backend pressure increase [ 99.89% ]
Throughput Bottlenecks:
  Resource Pressure       [ 0.00% ]
  Data Dependencies:      [ 99.89% ]
   - Register Dependencies [ 0.00% ]
   - Memory Dependencies   [ 99.89% ]
```

A bottleneck report is printed out only if increases in backend pressure
eventually caused backend stalls.

About the time complexity:

Time complexity is linear in the number of instructions in the
Scheduler::PendingSet.

The average slowdown tends to be in the range of ~5-6%.
For memory intensive kernels, the slowdown can be significant if flag
-noalias=false is specified. In the worst case scenario I have observed a
slowdown of ~30% when flag -noalias=false was specified.

We can definitely recover part of that slowdown if we optimize class LSUnit (by
doing extra bookkeeping to speed up queries). For now, this new analysis is
disabled by default, and it can be enabled via flag -bottleneck-analysis. Users
of MCA as a library can enable the generation of pressure events through the
constructor of ExecuteStage.

This patch partially addresses https://bugs.llvm.org/show_bug.cgi?id=37494

Differential Revision: https://reviews.llvm.org/D58728

llvm-svn: 355308
2019-03-04 11:52:34 +00:00
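A simplified sketch of the per-cycle signal described above (not the ExecuteStage code): pressure increases whenever more micro-ops enter the scheduler buffers than leave them in the same cycle; once the buffers fill up, that shows up as dispatch stalls.

```
struct CycleStats {
  unsigned DispatchedUOps; // uOps that entered the scheduler buffers this cycle
  unsigned IssuedUOps;     // uOps that left the buffers (issued) this cycle
};

bool backendPressureIncreased(const CycleStats &C) {
  return C.DispatchedUOps > C.IssuedUOps;
}
```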