# MNIST
This example shows how to:
* Define your own custom module (MLP).
* Create a data pipeline from a raw dataset to a fast, batched, multi-threaded DataLoader.
* Configure a learner to display and log metrics as well as to keep training checkpoints.
The example can be run like so:

```bash
git clone https://github.com/burn-rs/burn.git
cd burn

# Use the --release flag to really speed up training.

echo "Using ndarray backend"
cargo run --example mnist --release --features ndarray                # CPU NdArray Backend - f32 - single thread
cargo run --example mnist --release --features ndarray-blas-openblas  # CPU NdArray Backend - f32 - BLAS with OpenBLAS
cargo run --example mnist --release --features ndarray-blas-netlib    # CPU NdArray Backend - f32 - BLAS with Netlib

echo "Using tch backend"
export TORCH_CUDA_VERSION=cu113                                       # Set the CUDA version
cargo run --example mnist --release --features tch-gpu                # GPU Tch Backend - f32
cargo run --example mnist --release --features tch-cpu                # CPU Tch Backend - f32

echo "Using wgpu backend"
cargo run --example mnist --release --features wgpu
```