Burn

Burn is a comprehensive deep learning framework written in Rust, offering exceptional flexibility. Our objective is to serve both researchers and practitioners by simplifying the process of experimenting, training, and deploying models.

Features

  • Customizable, intuitive and user-friendly neural network module 🔥
  • Comprehensive training tools, including metrics, logging, and checkpointing 📈
  • Versatile Tensor crate equipped with pluggable backends 🔧
    • Torch backend, supporting both CPU and GPU 🚀
    • Ndarray backend with no_std compatibility, ensuring universal platform adaptability 👌
    • WebGPU backend, offering cross-platform, browser-inclusive, GPU-based computations 🌐
    • Candle backend 🕯️
    • Autodiff backend that enables differentiability across all backends 🌟
  • Dataset crate containing a diverse range of utilities and sources 📚
  • Import crate that simplifies the integration of pretrained models 📦

Get Started

The Burn Book 🔥

To begin working effectively with Burn, it is crucial to understand its key components and philosophy. For detailed examples and explanations covering every facet of the framework, please refer to The Burn Book 🔥.

Pre-trained Models

We keep an updated and curated list of models and examples built with Burn; see the burn-rs/models repository for more details.

Examples

Here is a code snippet showing how intuitive the framework is to use, where we declare a position-wise feed-forward module along with its forward pass.

use burn::module::Module;
use burn::nn::{Dropout, Linear, GELU};
use burn::tensor::backend::Backend;
use burn::tensor::Tensor;

#[derive(Module, Debug)]
pub struct PositionWiseFeedForward<B: Backend> {
    linear_inner: Linear<B>,
    linear_outer: Linear<B>,
    dropout: Dropout,
    gelu: GELU,
}

impl<B: Backend> PositionWiseFeedForward<B> {
    /// Applies the feed-forward transformation: linear -> GELU -> dropout -> linear.
    pub fn forward<const D: usize>(&self, input: Tensor<B, D>) -> Tensor<B, D> {
        let x = self.linear_inner.forward(input);
        let x = self.gelu.forward(x);
        let x = self.dropout.forward(x);

        self.linear_outer.forward(x)
    }
}
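
For completeness, here is a rough sketch of how such a module could be constructed with Burn's configuration types. The dimensions and dropout probability are illustrative, and the exact init signature may vary between releases:

use burn::nn::{DropoutConfig, LinearConfig};

impl<B: Backend> PositionWiseFeedForward<B> {
    /// Illustrative constructor: `d_model` is the model width and `d_ff`
    /// the inner feed-forward width; the dropout probability is arbitrary.
    pub fn new(d_model: usize, d_ff: usize) -> Self {
        Self {
            linear_inner: LinearConfig::new(d_model, d_ff).init(),
            linear_outer: LinearConfig::new(d_ff, d_model).init(),
            dropout: DropoutConfig::new(0.1).init(),
            gelu: GELU::new(),
        }
    }
}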

For more practical insights, you can clone the repository and experiment with the projects in its examples directory.

Supported Platforms

Burn-ndarray Backend

Option      CPU  GPU  Linux  MacOS  Windows  Android  iOS  WASM
Pure Rust   Yes  No   Yes    Yes    Yes      Yes      Yes  Yes
Accelerate  Yes  No   No     Yes    No       No       Yes  No
Netlib      Yes  No   Yes    Yes    Yes      No       No   No
OpenBLAS    Yes  No   Yes    Yes    Yes      Yes      Yes  No

Burn-tch Backend

Option  CPU  GPU  Linux  MacOS  Windows  Android  iOS  WASM
CPU     Yes  No   Yes    Yes    Yes      Yes      Yes  No
CUDA    No   Yes  Yes    No     Yes      No       No   No
MPS     No   Yes  No     Yes    No       No       No   No
Vulkan  Yes  Yes  Yes    Yes    Yes      Yes      No   No

Burn-wgpu Backend

Option     CPU  GPU  Linux  MacOS  Windows  Android  iOS  WASM
Metal      No   Yes  No     Yes    No       No       Yes  No
Vulkan     Yes  Yes  Yes    Yes    Yes      Yes      Yes  No
OpenGL     No   Yes  Yes    Yes    Yes      Yes      Yes  No
WebGpu     No   Yes  No     No     No       No       No   Yes
Dx11/Dx12  No   Yes  No     No     Yes      No       No   No

Support for no_std

Burn, including its burn-ndarray backend, can work in a no_std environment for inference, provided alloc is available. To accomplish this, simply turn off the default features of both burn and burn-ndarray (the minimum requirement for running in inference mode). You can find a reference example in burn-no-std-tests.
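
As a minimal sketch, the dependency declarations might look as follows; the version number is illustrative, so adjust it to the release you use:

[dependencies]
# Disable default features to drop the std requirement (alloc is still needed).
burn = { version = "0.10", default-features = false }
burn-ndarray = { version = "0.10", default-features = false }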

The burn-core and burn-tensor crates also support no_std with alloc. These crates can be directly added as dependencies if necessary, as they are reexported by the burn crate.

Please be aware that in no_std mode, a random seed is generated at build time if one hasn't been set using the Backend::seed method. Also, spin::mutex::Mutex is used instead of std::sync::Mutex in this mode.
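
For example, the seed can be set explicitly at startup; a minimal sketch, assuming the NdArray backend type exported by burn-ndarray in this release:

use burn::tensor::backend::Backend;
use burn_ndarray::NdArray;

fn main() {
    // Seed the backend explicitly so no_std builds do not fall back
    // to a seed generated at build time.
    <NdArray as Backend>::seed(42);
}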

Contributing

Before contributing, please take a moment to review our code of conduct. It's also highly recommended to read our architecture document, which explains our architectural decisions. Please see more details in our contributing guide.

Disclaimer

Burn is currently in active development, and there will be breaking changes. While any resulting issues are likely to be easy to fix, there are no guarantees at this stage.

Sponsors

Thanks to all current sponsors 🙏.

smallstepman premAI-io

License

Burn is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See LICENSE-APACHE and LICENSE-MIT for details. Opening a pull request is assumed to signal agreement with these licensing terms.