If we use training algorithms that don't need partial rewards, we don't
need to worry about an ir2native model. In that case, training logs
won't contain a 'delta_size' feature either (since that's the partial
reward).
Differential Revision: https://reviews.llvm.org/D86481
Different training algorithms may produce models that, besides the main
policy output (i.e. inline/don't inline), emit additional outputs needed
by the next training stage. To facilitate this, in development mode, we
require the training policy infrastructure to produce a description of
the outputs it is interested in, in the form of a JSON file. We
special-case the first entry in the JSON file as the inlining decision -
we care about its value, so we can guide inlining during training - but
treat the rest as opaque data that we just copy over to the training
log.
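For illustration only (the field names below are assumptions made for
this sketch, not a fixed schema), such a file could describe each output
with a name, an element type, and a shape, with the inlining decision
listed first and any further entries copied verbatim to the log:

  [
    { "logging_name": "inlining_decision", "type": "int64_t", "shape": [1] },
    { "logging_name": "some_extra_output", "type": "float",   "shape": [1] }
  ]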
Differential Revision: https://reviews.llvm.org/D85674
This allows us to subsequently configure the logger for the case when we
use a model evaluator and want to log additional outputs.
Differential Revision: https://reviews.llvm.org/D85577
We don't want mandatory events in the training log. We do want to handle
them, to keep the native size accounting accurate, but that's all.
Fixed the code and expanded the test to capture this.
Differential Revision: https://reviews.llvm.org/D85373
Further abstracting the specification of a tensor, to more easily
support different tensor types and shapes, and to perform
initialization up-front, at TFModelEvaluator construction time.
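As a minimal sketch of the kind of abstraction this describes (not the
actual LLVM class), a tensor specification bundles a name, an element
type, and a shape, so buffer sizes can be computed once, up-front:

  #include <cstdint>
  #include <functional>
  #include <numeric>
  #include <string>
  #include <vector>

  // Illustrative sketch: enough information to allocate and type-check
  // a tensor buffer at evaluator construction time.
  struct TensorSpecSketch {
    enum class ElementKind { Int64, Float };
    std::string Name;
    ElementKind Kind;
    std::vector<int64_t> Shape;

    // Total number of elements described by the shape.
    int64_t elementCount() const {
      return std::accumulate(Shape.begin(), Shape.end(), int64_t(1),
                             std::multiplies<int64_t>());
    }
  };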
Differential Revision: https://reviews.llvm.org/D84685
Outside of compiler-rt (where it's arguably an anti-pattern too),
LLVM tries to keep its build files as simple as possible. See e.g.
llvm/docs/SupportLibrary.rst, "Code Organization".
Differential Revision: https://reviews.llvm.org/D84243
Summary:
This is the InlineAdvisor used in 'development' mode. It enables two
scenarios:
- loading models via a command-line parameter, thus allowing for rapid
training iteration, where models can be used for the next exploration
phase without recompiling the compiler. This trades off some
compilation speed for the added flexibility.
- collecting training logs, in the form of tensorflow.SequenceExample
protobufs. We generate these as textual protobufs, which simplifies
generation and testing. The protobufs may then be readily consumed by a
tensorflow-based training algorithm; a sketch of such a log appears
below.
To speed up training, training logs may also be collected from the
'default' policy. In that case, this InlineAdvisor does not use a model.
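As a rough sketch of such a log (the feature names 'inlining_decision'
and 'delta_size' are illustrative here), a textual SequenceExample
records one value per inlining event in each feature list:

  feature_lists {
    feature_list {
      key: "inlining_decision"
      value {
        feature { int64_list { value: 1 } }
        feature { int64_list { value: 0 } }
      }
    }
    feature_list {
      key: "delta_size"
      value {
        feature { int64_list { value: -24 } }
        feature { int64_list { value: 8 } }
      }
    }
  }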
RFC: http://lists.llvm.org/pipermail/llvm-dev/2020-April/140763.html
Reviewers: jdoerfert, davidxl
Subscribers: mgorny, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D83733