# MNIST

This example shows you how to:

- Define your own custom module (MLP); a sketch follows this list.
- Create the data pipeline from a raw dataset to a batched, multi-threaded, fast DataLoader.
- Configure a learner to display and log metrics as well as to keep training checkpoints (see the second sketch below).
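
The custom module from the first bullet is just a Rust struct that derives Burn's `Module` trait. Below is a minimal, hypothetical sketch of such an MLP rather than the example's actual model; it assumes a recent Burn API where `burn::prelude` re-exports `Module`, `Backend`, and `Tensor`, and where `LinearConfig::init` takes a device.

```rust
use burn::nn::{Linear, LinearConfig};
use burn::prelude::*;
use burn::tensor::activation::relu;

// Hypothetical two-layer MLP for 28x28 MNIST images; an illustration of a
// custom `Module`, not the example's exact model.
#[derive(Module, Debug)]
pub struct Mlp<B: Backend> {
    fc1: Linear<B>,
    fc2: Linear<B>,
}

impl<B: Backend> Mlp<B> {
    /// Initialize the layers on the given device.
    pub fn new(device: &B::Device) -> Self {
        Self {
            fc1: LinearConfig::new(28 * 28, 128).init(device),
            fc2: LinearConfig::new(128, 10).init(device),
        }
    }

    /// Forward pass: flattened images `[batch, 784]` -> logits `[batch, 10]`.
    pub fn forward(&self, images: Tensor<B, 2>) -> Tensor<B, 2> {
        let hidden = relu(self.fc1.forward(images));
        self.fc2.forward(hidden)
    }
}
```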
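
For the last two bullets, a data pipeline and a learner are typically wired together as sketched below. This is a hedged sketch, not the example's code: `MnistBatcher` and the training values (batch size, epochs, learning rate) are placeholders, the `Mlp` from the previous sketch would also need `TrainStep`/`ValidStep` implementations for `fit` to work, and `DataLoaderBuilder`, `MnistDataset`, `LearnerBuilder`, `AccuracyMetric`, `LossMetric`, and `CompactRecorder` are Burn APIs assumed to be available in a recent release.

```rust
use burn::data::dataloader::DataLoaderBuilder;
use burn::data::dataset::vision::MnistDataset;
use burn::optim::AdamConfig;
use burn::record::CompactRecorder;
use burn::tensor::backend::AutodiffBackend;
use burn::train::metric::{AccuracyMetric, LossMetric};
use burn::train::LearnerBuilder;

// Sketch only: `MnistBatcher` and `Mlp` stand in for the batcher and model
// defined by the example itself.
fn train<B: AutodiffBackend>(device: B::Device, artifact_dir: &str) {
    // Raw dataset -> batched, shuffled, multi-threaded data loaders.
    let dataloader_train = DataLoaderBuilder::new(MnistBatcher::<B>::new(device.clone()))
        .batch_size(64)
        .shuffle(42)
        .num_workers(4)
        .build(MnistDataset::train());
    let dataloader_test = DataLoaderBuilder::new(MnistBatcher::<B::InnerBackend>::new(device.clone()))
        .batch_size(64)
        .num_workers(4)
        .build(MnistDataset::test());

    // Learner: displays and logs accuracy/loss, keeps checkpoints in `artifact_dir`.
    let learner = LearnerBuilder::new(artifact_dir)
        .metric_train_numeric(AccuracyMetric::new())
        .metric_valid_numeric(AccuracyMetric::new())
        .metric_train_numeric(LossMetric::new())
        .metric_valid_numeric(LossMetric::new())
        .with_file_checkpointer(CompactRecorder::new())
        .devices(vec![device.clone()])
        .num_epochs(10)
        .build(Mlp::new(&device), AdamConfig::new().init(), 1e-4);

    let _trained = learner.fit(dataloader_train, dataloader_test);
}
```
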
The example can be run like so:

```bash
git clone https://github.com/tracel-ai/burn.git
cd burn

# Use the --release flag to really speed up training.

echo "Using ndarray backend"
cargo run --example mnist --release --features ndarray # CPU NdArray Backend - f32 - single thread
cargo run --example mnist --release --features ndarray-blas-openblas # CPU NdArray Backend - f32 - blas with openblas
cargo run --example mnist --release --features ndarray-blas-netlib # CPU NdArray Backend - f32 - blas with netlib

echo "Using tch backend"
export TORCH_CUDA_VERSION=cu121 # Set the CUDA version
cargo run --example mnist --release --features tch-gpu # GPU Tch Backend - f32
cargo run --example mnist --release --features tch-cpu # CPU Tch Backend - f32

echo "Using wgpu backend"
cargo run --example mnist --release --features wgpu
```