ONNX Inference

This crate provides a simple example of importing an MNIST ONNX model into Burn. The ONNX file is converted into a Rust source file using burn-import, and the model weights are stored in, and loaded from, a binary file.

Usage

cargo run -- 15

Output:

Finished dev [unoptimized + debuginfo] target(s) in 0.13s
    Running `burn/target/debug/onnx-inference 15`

Image index: 15
Success!
Predicted: 5
Actual: 5
See the image online, click the link below:
https://datasets-server.huggingface.co/assets/mnist/--/mnist/test/15/image/image.jpg
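
The single argument is the index of the image in the MNIST test set to classify. As a rough illustration of how such an argument can be read (this is only a sketch, not the crate's actual src/bin/mnist.rs):

    use std::env;

    fn main() {
        // First positional argument: index of the MNIST test image (defaults to 0).
        let index: usize = env::args()
            .nth(1)
            .and_then(|arg| arg.parse().ok())
            .unwrap_or(0);

        println!("Image index: {index}");
    }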

Feature Flags

  • embedded-model (default) - Embed the model weights into the binary. This is useful for small models (e.g. MNIST) but not recommended for very large models because it will increase the binary size significantly and will consume a lot of memory at runtime. If you do not use this feature, the model weights will be loaded from a binary file at runtime.
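
Whether weights are embedded or loaded from disk is controlled through Cargo's feature mechanism, so code can branch on it with cfg attributes. A minimal, illustrative sketch (not the crate's actual code) of what such a branch looks like:

    // Compiled only when the `embedded-model` feature is enabled (the default).
    #[cfg(feature = "embedded-model")]
    fn weight_source() -> &'static str {
        "weights embedded into the executable at build time"
    }

    // Compiled when the feature is disabled, e.g. with `--no-default-features`.
    #[cfg(not(feature = "embedded-model"))]
    fn weight_source() -> &'static str {
        "weights loaded from a record file at runtime"
    }

    fn main() {
        println!("Using {}", weight_source());
    }

To try the non-embedded path, the example can be run with cargo run --no-default-features -- 15.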

How to import

  1. Create a model directory under src

  2. Copy the ONNX model to src/model/mnist.onnx

  3. Add the following to src/model/mod.rs:

    pub mod mnist {
        include!(concat!(env!("OUT_DIR"), "/model/mnist.rs"));
    }
    
  4. Add the module to lib.rs:

    pub mod model;
    
    pub use model::mnist::*;
    
  5. Add the following to build.rs:

    use burn_import::onnx::ModelGen;
    
    fn main() {
        // Generate the model code from the ONNX file.
        ModelGen::new()
            .input("src/model/mnist.onnx")
            .out_dir("model/")
            .run_from_script();
    }
    
    
  6. Add your model to src/bin as a new file; in this case it is called mnist.rs:

    use burn::tensor;
    use burn::backend::ndarray::NdArray;
    
    use onnx_inference::mnist::Model;
    
    // Use the NdArray backend to run inference on the CPU.
    type Backend = NdArray<f32>;
    
    fn main() {
        // Create a new model and load the state
        let model: Model<Backend> = Model::new().load_state();
    
        // Create a new input tensor (all zeros for demonstration purposes)
        let input = tensor::Tensor::<Backend, 4>::zeros([1, 1, 28, 28]);
    
        // Run the model
        let output = model.forward(input);
    
        // Print the output
        println!("{:?}", output);
    }
    
  7. Run cargo build to generate the model code, weights, and the mnist binary (a quick sanity check is sketched below).
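
Once the build succeeds, a quick sanity check is to run a forward pass on a dummy input and look at the output shape. The sketch below mirrors step 6 as an integration test (e.g. in tests/smoke.rs); it assumes the same Backend alias and the Model::new().load_state() API shown above, so adjust it if your generated code differs:

    use burn::backend::ndarray::NdArray;
    use burn::tensor::Tensor;
    
    use onnx_inference::mnist::Model;
    
    type Backend = NdArray<f32>;
    
    #[test]
    fn forward_pass_returns_ten_class_scores() {
        // Build the model and load its weights, exactly as in step 6.
        let model: Model<Backend> = Model::new().load_state();
    
        // An all-zero "image" with the expected [batch, channel, height, width] shape.
        let input = Tensor::<Backend, 4>::zeros([1, 1, 28, 28]);
    
        // MNIST has 10 digit classes, so one score per class is expected.
        let output = model.forward(input);
        assert_eq!(output.dims(), [1, 10]);
    }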

How to export PyTorch model to ONNX

The following steps show how to export a PyTorch model to ONNX from the checked-in PyTorch code (see pytorch/mnist.py).

  1. Install dependencies:

    pip install torch torchvision onnx
    
  2. Run the following script to train the MNIST model and export it to ONNX:

    python3 pytorch/mnist.py
    

This will generate pytorch/mnist.onnx.

Resources

  1. PyTorch ONNX: https://pytorch.org/docs/stable/onnx.html
  2. ONNX Intro: https://onnx.ai/onnx/intro/