burn/examples/image-classification-web
samples/
src/
Cargo.toml
NOTICES.md
README.md
build-for-web.sh
build.rs
https_server.py
index.css
index.html
index.js
run-server.sh

README.md

Image Classification Web Demo Using Burn and WebAssembly

Live Demo

Overview

This demo showcases how to execute an image classification task in a web browser using a model converted to Rust code. The project utilizes the Burn deep learning framework, WebGPU, and WebAssembly. Specifically, it demonstrates:

  1. Converting an ONNX (Open Neural Network Exchange) model into Rust code compatible with the Burn framework (see the build-script sketch after this list).
  2. Executing the model within a web browser, either on the GPU using WebGPU via the burn-wgpu backend or on the CPU in WebAssembly via the burn-ndarray and burn-candle backends.
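
For reference, the conversion in step 1 happens at build time through the burn-import crate. The following is a minimal sketch of such a build script; the input path and output directory shown here are illustrative, and the exact options used by this example live in its build.rs.

// build.rs -- sketch of ONNX-to-Rust conversion with burn-import.
// The paths below are placeholders, not necessarily the demo's actual values.
use burn_import::onnx::ModelGen;

fn main() {
    ModelGen::new()
        .input("src/model/squeezenet1.onnx") // ONNX file checked into the repo
        .out_dir("model/")                   // generated Rust code lands under OUT_DIR
        .run_from_script();
}

The generated module can then be pulled into the crate's source with an include! of the corresponding OUT_DIR path, which is the pattern the Burn book describes for ONNX import.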

Running the Demo

Step 1: Build the WebAssembly Binary and Other Assets

To compile the Rust code into WebAssembly and build other essential files, execute the following script:

./build-for-web.sh

Step 2: Launch the Web Server

Run the following command to initiate a web server on your local machine:

./run-server.sh

Step 3: Access the Web Demo

Open your web browser and navigate to:

http://localhost:8000

Backend Compatibility

As of now, the WebGPU backend is compatible only with Chrome browsers running on macOS and Windows. The application will dynamically detect if WebGPU support is available and proceed accordingly.
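
To keep both paths working, the Rust side can stay generic over Burn's Backend trait and expose one concrete wrapper per backend, letting the JavaScript glue pick whichever the browser supports. The sketch below only illustrates that pattern; it assumes the burn crate's wgpu, ndarray, and candle features are enabled, and the names are not the demo's actual identifiers.

use burn::backend::{Candle, NdArray, Wgpu};
use burn::tensor::{activation::softmax, backend::Backend, Tensor};

// Backend-agnostic helper: the same code compiles for burn-wgpu,
// burn-ndarray, and burn-candle. A real model's forward pass would sit
// here; the softmax is only a placeholder.
pub fn class_probabilities<B: Backend>(logits: Tensor<B, 2>) -> Tensor<B, 2> {
    softmax(logits, 1)
}

// One variant per backend, so the JavaScript side can construct whichever
// one the browser supports and keep calling the same Rust entry points.
pub enum SelectedBackend {
    Wgpu(Tensor<Wgpu, 2>),       // GPU inference via WebGPU
    NdArray(Tensor<NdArray, 2>), // CPU inference in WebAssembly
    Candle(Tensor<Candle, 2>),   // CPU inference in WebAssembly (Candle kernels)
}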

SIMD Support

The build targets two sets of binaries, one with SIMD support and one without. The web application dynamically detects if SIMD support is available and downloads the appropriate binary.
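
On the Rust side, the SIMD binary is the one compiled with the wasm simd128 target feature enabled (the actual compiler flags are set in build-for-web.sh). A hypothetical helper, shown purely for illustration, could check that feature at compile time like this:

// Reports whether this binary was compiled with the wasm `simd128`
// target feature. Hypothetical helper for illustration only; the demo's
// runtime feature detection happens in JavaScript before a binary is chosen.
pub const fn built_with_simd() -> bool {
    cfg!(target_feature = "simd128")
}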

Model Information

The image classification task is achieved using the SqueezeNet model, a compact Convolutional Neural Network (CNN). It is trained on the ImageNet dataset and can classify images into 1,000 distinct categories. The included ONNX model is sourced from the ONNX Model Zoo. For further details about the model's architecture and performance, you can refer to the original paper.
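
As a point of reference for the input format, SqueezeNet variants in the ONNX Model Zoo expect a 224x224 RGB image normalized with the usual ImageNet statistics. The demo's own preprocessing code is the source of truth; the snippet below is only a sketch of that normalization step in Burn, using the standard ImageNet mean and standard deviation.

use burn::tensor::{backend::Backend, Tensor};

// Standard ImageNet normalization for an NCHW batch of RGB images scaled
// to the 0.0..=1.0 range. Sketch only; the demo's preprocessing code is
// authoritative.
pub fn normalize<B: Backend>(input: Tensor<B, 4>) -> Tensor<B, 4> {
    let device = input.device();
    let mean: Tensor<B, 4> =
        Tensor::<B, 1>::from_floats([0.485, 0.456, 0.406], &device).reshape([1, 3, 1, 1]);
    let std: Tensor<B, 4> =
        Tensor::<B, 1>::from_floats([0.229, 0.224, 0.225], &device).reshape([1, 3, 1, 1]);
    (input - mean) / std
}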

Credits

This demo was inspired by the ONNX Runtime web demo featuring the SqueezeNet model trained on ImageNet.

The complete list of credits/attribution can be found in the NOTICES.md file.

Future Enhancements

  • Fall back to WebGL if WebGPU is not supported by the browser; see wgpu's WebGL support.

  • Enable SIMD support for Safari browsers after Release 179.

  • Add image paste functionality to allow users to paste an image from the clipboard.