Burn

This library strives to serve as a comprehensive deep learning framework, offering exceptional flexibility and written in Rust. Our objective is to cater to both researchers and practitioners by simplifying the process of experimenting, training, and deploying models.

Features

  • Customizable, user-friendly neural network module 🔥
  • Comprehensive training tools, inclusive of metrics, logging, and checkpointing 📈
  • Versatile Tensor crate equipped with pluggable backends 🔧
    • Torch backend, supporting both CPU and GPU 🚀
    • Ndarray backend with no_std compatibility, ensuring universal platform adaptability 👌
    • WebGPU backend, offering cross-platform, browser-inclusive, GPU-based computations 🌐
    • Autodiff backend that enables differentiability across all backends 🌟
  • Dataset crate containing a diverse range of utilities and sources 📚
  • Import crate that simplifies the integration of pretrained models 📦

Supported Platforms

Burn-ndarray Backend

Option      CPU  GPU  Linux  MacOS  Windows  Android  iOS  WASM
Pure Rust   Yes  No   Yes    Yes    Yes      Yes      Yes  Yes
Accelerate  Yes  No   No     Yes    No       No       Yes  No
Netlib      Yes  No   Yes    Yes    Yes      No       No   No
Openblas    Yes  No   Yes    Yes    Yes      Yes      Yes  No

Burn-tch Backend

Option  CPU  GPU  Linux  MacOS  Windows  Android  iOS  WASM
CPU     Yes  No   Yes    Yes    Yes      Yes      Yes  No
CUDA    No   Yes  Yes    No     Yes      No       No   No
MPS     No   Yes  No     Yes    No       No       No   No
Vulkan  Yes  Yes  Yes    Yes    Yes      Yes      No   No

Burn-wgpu Backend

Option     CPU  GPU  Linux  MacOS  Windows  Android  iOS  WASM
Metal      No   Yes  No     Yes    No       No       Yes  No
Vulkan     Yes  Yes  Yes    Yes    Yes      Yes      Yes  No
OpenGL     No   Yes  Yes    Yes    Yes      Yes      Yes  No
WebGpu     No   Yes  No     No     No       No       No   Yes
Dx11/Dx12  No   Yes  No     No     Yes      No       No   No

Pre-trained Models

We keep an updated and curated list of models and examples built with Burn; see the burn-rs/models repository for more details.

Get Started

The best way to get started with burn is to clone the repo and play with the examples. It may also be a good idea to take a look at the main components of burn to get a quick overview of the fundamental building blocks. If you're interested in how the framework works, you can read our architecture document.

Examples

Components

Understanding the key components and philosophy of burn can greatly help when beginning to work with the framework.

Backend

Nearly everything in burn is based on the Backend trait, which enables you to run tensor operations with different implementations without having to modify your code. While a backend may not necessarily have autodiff capabilities, the ADBackend trait specifies when autodiff is required. This trait abstracts not only operations but also the tensor, device, and element types, giving each backend the flexibility it needs. It's worth noting that the trait assumes eager mode, since burn fully supports dynamic graphs. However, we may create another API to assist with integrating graph-based backends, without requiring any changes to the user's code.
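
As a quick illustration, here is a minimal sketch of how the two traits divide responsibilities (the function names are ours, not part of the API): code written against Backend runs on any backend, while an ADBackend bound is needed only where gradients are required.

use burn::tensor::backend::{ADBackend, Backend};
use burn::tensor::Tensor;

// Works with any backend: only the base Backend trait is required.
fn scale<B: Backend>(x: Tensor<B, 2>) -> Tensor<B, 2> {
    x.mul_scalar(2.0)
}

// Compiles only for backends with autodiff capabilities.
fn scale_and_backward<B: ADBackend>(x: Tensor<B, 2>) {
    let _grads = scale(x.require_grad()).sum().backward();
}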

Tensor

At the core of burn lies the Tensor struct, which encompasses multiple types of tensors, including Float, Int, and Bool. The element types of these tensors are specified by the backend, usually as a generic argument (e.g., NdArrayBackend<f32>). Although the same struct is used for all tensors, the available methods differ depending on the tensor kind. The first generic argument of Tensor specifies the backend, the second the number of dimensions, and the third the tensor kind, which defaults to Float.

use burn::tensor::backend::Backend;
use burn::tensor::{Tensor, Int};

fn function<B: Backend>(tensor_float: Tensor<B, 2>) {
    let _tensor_bool = tensor_float.clone().equal_elem(2.0); // Tensor<B, 2, Bool>
    let _tensor_int = tensor_float.argmax(1); // Tensor<B, 2, Int>
}

As demonstrated in the previous example, nearly all operations take owned tensors as parameters, which means that calling clone() explicitly is necessary when reusing the same tensor multiple times. There is no need to worry, though: the tensor's data won't be copied; it is flagged as read-only when multiple tensors share the same allocated memory. This enables backends to reuse tensor data when possible, similar to a copy-on-write pattern, while remaining completely transparent to the user.
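
For instance, here is a minimal sketch (the function and its shapes are ours) of reusing a tensor in two consuming operations:

use burn::tensor::backend::Backend;
use burn::tensor::Tensor;

fn min_max<B: Backend>(x: Tensor<B, 1>) -> (Tensor<B, 1>, Tensor<B, 1>) {
    // `min` consumes its receiver, so the tensor is cloned first;
    // the clone shares the underlying allocation instead of copying it.
    let min = x.clone().min();
    let max = x.max();
    (min, max)
}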

Autodiff

The Backend trait is highly flexible, enabling backpropagation to be implemented with a simple backend decorator that makes any backend differentiable.

use burn::tensor::backend::{ADBackend, Backend};
use burn::tensor::{Distribution, Tensor};
use burn_autodiff::ADBackendDecorator;
use burn_ndarray::NdArrayBackend;

fn linear<B: Backend>(x: Tensor<B, 2>, weight: Tensor<B, 2>, bias: Tensor<B, 2>) -> Tensor<B, 2> {
    x.matmul(weight) + bias
}

fn main() {
    type Backend = NdArrayBackend<f32>;

    let weight = Tensor::random([3, 3], Distribution::Default);
    let bias = Tensor::zeros([1, 3]);
    let x = Tensor::random([3, 3], Distribution::Default);

    let y = linear::<Backend>(x.clone(), weight.clone(), bias.clone());
    // y.backward() // Method backward doesn't exist

    let y = linear::<ADBackendDecorator<Backend>>(
        Tensor::from_inner(x),
        Tensor::from_inner(weight).require_grad(),
        Tensor::from_inner(bias).require_grad(),
    );
    let grads = y.backward(); // Method exists
}
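
The gradients returned by backward can then be queried per tensor. Here is a short, self-contained sketch (variable names are ours); grad returns an Option, since only tracked tensors have gradients, and the gradient itself lives on the inner backend:

use burn::tensor::{Distribution, Tensor};
use burn_autodiff::ADBackendDecorator;
use burn_ndarray::NdArrayBackend;

fn grad_lookup() {
    type Autodiff = ADBackendDecorator<NdArrayBackend<f32>>;

    let x = Tensor::<Autodiff, 2>::random([3, 3], Distribution::Default).require_grad();
    let y = x.clone().matmul(x.clone().transpose());

    let grads = y.backward();
    // The gradient is returned on the inner (non-autodiff) backend.
    let x_grad = x.grad(&grads); // Option<Tensor<NdArrayBackend<f32>, 2>>
    assert!(x_grad.is_some());
}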

Module

The Module derive allows you to create your own neural network modules, similar to PyTorch. The derive only generates the methods necessary for your type to act as a parameter container; it makes no assumptions about how the forward pass is declared.

use burn::nn;
use burn::module::Module;
use burn::tensor::backend::Backend;
use burn::tensor::Tensor;

#[derive(Module, Debug)]
pub struct PositionWiseFeedForward<B: Backend> {
    linear_inner: nn::Linear<B>,
    linear_outer: nn::Linear<B>,
    dropout: nn::Dropout,
    gelu: nn::GELU,
}

impl<B: Backend> PositionWiseFeedForward<B> {
    pub fn forward<const D: usize>(&self, input: Tensor<B, D>) -> Tensor<B, D> {
        let x = self.linear_inner.forward(input);
        let x = self.gelu.forward(x);
        let x = self.dropout.forward(x);

        self.linear_outer.forward(x)
    }
}

Note that all fields declared in the struct must also implement the Module trait. The Tensor struct doesn't implement Module, but Param<Tensor<B, D>> does.
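
For example, a module holding a raw tensor parameter could be sketched as follows (the module name is ours, and we assume Param::val for reading the wrapped tensor):

use burn::module::{Module, Param};
use burn::tensor::backend::Backend;
use burn::tensor::Tensor;

#[derive(Module, Debug)]
pub struct LearnedScale<B: Backend> {
    // Wrapping the tensor in Param marks it as a trainable parameter.
    scale: Param<Tensor<B, 1>>,
}

impl<B: Backend> LearnedScale<B> {
    pub fn forward(&self, input: Tensor<B, 1>) -> Tensor<B, 1> {
        input * self.scale.val()
    }
}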

Config

The Config derive lets you define serializable and deserializable configurations or hyper-parameters for your modules or any components.

use burn::config::Config;

#[derive(Config)]
pub struct PositionWiseFeedForwardConfig {
    pub d_model: usize,
    pub d_ff: usize,
    #[config(default = 0.1)]
    pub dropout: f64,
}

The derive also adds useful methods to your config, similar to a builder pattern.

fn main() {
    let config = PositionWiseFeedForwardConfig::new(512, 2048);
    println!("{}", config.d_model); // 512
    println!("{}", config.d_ff); // 2048
    println!("{}", config.dropout); // 0.1
    let config = config.with_dropout(0.2);
    println!("{}", config.dropout); // 0.2
}
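
Since the derive makes the struct serializable, a config can also be persisted and restored. The sketch below assumes the Config trait exposes save and load helpers taking a file path (the file name is ours):

use burn::config::Config;

fn save_and_load() {
    let config = PositionWiseFeedForwardConfig::new(512, 2048).with_dropout(0.2);

    // Write the configuration to disk, then read it back.
    config.save("pwff.json").unwrap();
    let restored = PositionWiseFeedForwardConfig::load("pwff.json").unwrap();
    assert_eq!(restored.d_ff, 2048);
}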

Learner

The Learner is the main struct that lets you train a neural network with support for logging, metrics, checkpointing, and more. To create a learner, you must use the LearnerBuilder.

use burn::train::LearnerBuilder;
use burn::train::metric::{AccuracyMetric, LossMetric};
use burn::record::DefaultRecordSettings;

fn main() {
    let dataloader_train = ...;
    let dataloader_valid = ...;

    let model = ...;
    let optim = ...;

    let learner = LearnerBuilder::new("/tmp/artifact_dir")
        .metric_train_plot(AccuracyMetric::new())
        .metric_valid_plot(AccuracyMetric::new())
        .metric_train(LossMetric::new())
        .metric_valid(LossMetric::new())
        .with_file_checkpointer::<DefaultRecordSettings>(2)
        .num_epochs(10)
        .build(model, optim);

    let _model_trained = learner.fit(dataloader_train, dataloader_valid);
}

See this example for a real usage.

Support for no_std

Burn, including its burn-ndarray backend, can work in a no_std environment for inference, provided alloc is available. To accomplish this, simply turn off the default features in burn and burn-ndarray, which is the minimum requirement for running inference. You can find a reference example in burn-no-std-tests.
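
As a sketch, the dependency section of such a Cargo.toml might look like this (version numbers are intentionally left as placeholders; use the release you target):

[dependencies]
burn = { version = "...", default-features = false }
burn-ndarray = { version = "...", default-features = false }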

The burn-core and burn-tensor crates also support no_std with alloc. These crates can be directly added as dependencies if necessary, as they are reexported by the burn crate.

Please be aware that in no_std mode a random seed is generated at build time if one hasn't been set using the Backend::seed method. Also, spin::mutex::Mutex is used instead of std::sync::Mutex in this mode.
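
To avoid relying on the build-time seed, the seed can be set explicitly; a minimal sketch using the ndarray backend:

use burn::tensor::backend::Backend;
use burn_ndarray::NdArrayBackend;

fn init() {
    // Fix the RNG seed so tensor initialization is deterministic.
    <NdArrayBackend<f32> as Backend>::seed(42);
}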

Contributing

Before contributing, please take a moment to review our code of conduct. It's also highly recommended to read our architecture document, which explains our architectural decisions. Please see more details in our contributing guide.

Continuous Integration

Run checks

On Unix systems, run run-checks.sh using this command:

./run-checks.sh environment

On Windows systems, run run-checks.ps1 using this command:

./run-checks.ps1 environment

The environment argument can assume only the following values:

  • std to perform checks using libstd
  • no_std to perform checks on an embedded environment using libcore

If no environment value is passed, both the std and no_std checks are run.

Continuous Deployment

Publish crates

Compile scripts/publish.rs using this command:

rustc scripts/publish.rs --crate-type bin --out-dir scripts

Run scripts/publish using this command:

./scripts/publish crate_name

where crate_name is the name of the crate to publish.

Disclaimer

Burn is currently in active development, and there will be breaking changes. While any resulting issues are likely to be easy to fix, there are no guarantees at this stage.

Sponsors

You can sponsor the founder of Burn from his GitHub Sponsors profile. The Burn-rs organization doesn't yet have a fiscal entity, but other sponsorship options may become available as the project grows.

Thanks to all current sponsors 🙏.

smallstepman premAI-io

License

Burn is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See LICENSE-APACHE and LICENSE-MIT for details. Opening a pull request is assumed to signal agreement with these licensing terms.