# MNIST
This example shows you how to:

- Define your own custom module (MLP); a minimal sketch follows this list.
- Create the data pipeline from a raw dataset to a fast, batched, multi-threaded DataLoader.
- Configure a learner to display and log metrics as well as to keep training checkpoints.
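As a rough illustration of the first point, here is a minimal sketch of what a custom MLP module can look like using Burn's `Module` derive. The struct name `Mlp`, the layer sizes, and the exact type names (`Relu` vs `ReLU`, `LinearConfig::init` taking a device) depend on your Burn version and differ from the actual code in `examples/`; treat this as an assumption-laden sketch rather than the example's implementation.

```rust
use burn::module::Module;
use burn::nn::{Linear, LinearConfig, Relu};
use burn::tensor::{backend::Backend, Tensor};

/// A small multi-layer perceptron: 784 -> 256 -> 10 (sizes are illustrative).
#[derive(Module, Debug)]
pub struct Mlp<B: Backend> {
    linear1: Linear<B>,
    linear2: Linear<B>,
    activation: Relu,
}

impl<B: Backend> Mlp<B> {
    /// Initialize the layers on the given device.
    pub fn new(device: &B::Device) -> Self {
        Self {
            linear1: LinearConfig::new(784, 256).init(device),
            linear2: LinearConfig::new(256, 10).init(device),
            activation: Relu::new(),
        }
    }

    /// Forward pass on a batch of flattened images: [batch, 784] -> [batch, 10].
    pub fn forward(&self, input: Tensor<B, 2>) -> Tensor<B, 2> {
        let x = self.linear1.forward(input);
        let x = self.activation.forward(x);
        self.linear2.forward(x)
    }
}
```

Because the struct derives `Module`, Burn can track its parameters for optimization and checkpointing; the data pipeline and learner configuration from the other two points then consume a model like this during `fit`.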
The example can be run like so:
```sh
git clone https://github.com/tracel-ai/burn.git
cd burn

# Use the --release flag to really speed up training.

echo "Using ndarray backend"
cargo run --example mnist --release --features ndarray                # CPU NdArray Backend - f32 - single thread
cargo run --example mnist --release --features ndarray-blas-openblas  # CPU NdArray Backend - f32 - blas with openblas
cargo run --example mnist --release --features ndarray-blas-netlib    # CPU NdArray Backend - f32 - blas with netlib

echo "Using tch backend"
export TORCH_CUDA_VERSION=cu121  # Set the CUDA version
cargo run --example mnist --release --features tch-gpu  # GPU Tch Backend - f32
cargo run --example mnist --release --features tch-cpu  # CPU Tch Backend - f32

echo "Using wgpu backend"
cargo run --example mnist --release --features wgpu
```