# Running ONNX Inference on the Raspberry Pi Pico
This example shows how to run inference in a `no_std` environment with no atomic pointers and no heap.
## Setup

- Install the Raspberry Pi Pico target:

  ```sh
  rustup target add thumbv6m-none-eabi
  ```

- Install `probe-rs`. This is optional; to use USB boot instead, install `elf2uf2-rs` with `cargo install elf2uf2-rs`.

- Have a compatible probe to flash to the Raspberry Pi Pico. This is also optional; alternatively, modify `.cargo/config.toml` and uncomment the runner that uses `elf2uf2-rs` (see the config sketch after this list).
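
For reference, the runner section of `.cargo/config.toml` typically looks something like the sketch below; the chip name and flags are illustrative, so check the actual file in this repository.

```toml
# .cargo/config.toml -- illustrative sketch, not the verbatim file.
[target.thumbv6m-none-eabi]
# Default: flash and run through a debug probe with probe-rs.
runner = "probe-rs run --chip RP2040"
# Alternative: uncomment to flash over USB boot with elf2uf2-rs instead.
# runner = "elf2uf2-rs -d"

[build]
target = "thumbv6m-none-eabi"
```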
If you are using `elf2uf2-rs`, logging will not go to your serial port; add logging by using `embassy-usb`.
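
One way to do that is the `embassy-usb-logger` crate, which is built on `embassy-usb`. A minimal sketch, assuming the `embassy-rp`, `embassy-usb-logger`, `log`, and `panic-probe` crates (the crate choice is an assumption; adapt it to this project's dependencies):

```rust
#![no_std]
#![no_main]

use embassy_executor::Spawner;
use embassy_rp::bind_interrupts;
use embassy_rp::peripherals::USB;
use embassy_rp::usb::{Driver, InterruptHandler};
use panic_probe as _; // panic handler (assumption: panic-probe crate)

bind_interrupts!(struct Irqs {
    USBCTRL_IRQ => InterruptHandler<USB>;
});

// Forward log::info!/log::warn! output to a USB-CDC serial port.
#[embassy_executor::task]
async fn logger_task(driver: Driver<'static, USB>) {
    embassy_usb_logger::run!(1024, log::LevelFilter::Info, driver);
}

#[embassy_executor::main]
async fn main(spawner: Spawner) {
    let p = embassy_rp::init(Default::default());
    spawner.spawn(logger_task(Driver::new(p.USB, Irqs))).unwrap();
    // ... the rest of the program can now use the log macros.
}
```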
## Running

Run as usual with `cargo run`.
## Project Structure

The project is structured as follows:
```
raspberry-pi-pico
├── Cargo.lock
├── Cargo.toml
├── README.md
├── build.rs
├── memory.x
├── src
│   ├── bin
│   │   └── main.rs
│   ├── lib.rs
│   └── model
│       ├── mod.rs
│       └── sine.onnx
└── tensorflow
    ├── requirements.txt
    └── train.py
```
Everything is standard for a Cargo project except for `memory.x`, the `model` directory, and the `tensorflow` directory.

The `memory.x` file contains the memory layout of the chip.
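
For reference, a typical RP2040 `memory.x` looks like the sketch below; the values follow the common Pico layout, and the actual file in this repository may differ.

```
MEMORY {
    /* The RP2040 boots via a 256-byte second-stage bootloader placed
       at the start of flash. */
    BOOT2 : ORIGIN = 0x10000000, LENGTH = 0x100
    /* The Pico board has 2 MB of external QSPI flash, minus BOOT2. */
    FLASH : ORIGIN = 0x10000100, LENGTH = 2048K - 0x100
    /* 264 KB of on-chip SRAM; some layouts use 256K and keep the two
       scratch banks for other purposes. */
    RAM   : ORIGIN = 0x20000000, LENGTH = 264K
}
```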
The `tensorflow` directory contains a Python script, `train.py`, which generates the ONNX model using TensorFlow, with its dependencies listed in `requirements.txt`.
The ONNX model is written to `src/model/sine.onnx`. The `build.rs` script generates a Rust file from the `sine.onnx` file, providing an import that is exposed through `mod.rs`.
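
The README does not say which library performs the code generation; one common way to turn an ONNX file into Rust source is the `burn-import` crate's `ModelGen`, so treat the following `build.rs` as an illustrative sketch rather than the exact script.

```rust
// build.rs -- illustrative sketch assuming burn-import's ModelGen.
use burn_import::onnx::ModelGen;

fn main() {
    // Re-run code generation whenever the ONNX model changes.
    println!("cargo:rerun-if-changed=src/model/sine.onnx");

    ModelGen::new()
        .input("src/model/sine.onnx") // the exported sine model
        .out_dir("model/")            // generated Rust lands in OUT_DIR/model/
        .run_from_script();
}
```

Under that assumption, the generated source would be pulled into the crate from `src/model/mod.rs`, typically with `include!(concat!(env!("OUT_DIR"), "/model/sine.rs"));`.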