Burn Tensor

Burn Tensor Library


This library provides multiple tensor implementations hidden behind an easy-to-use API that supports reverse-mode automatic differentiation.

Features

  • Flexible
  • CPU + GPU 🙏
  • Multi-Threads 🚀
  • Intuitive Usage 😌
  • No Global State 🚫
  • Multiple Backends 🦾
  • Reverse Mode Autodiff 🔥

Backends

For now, only two backends are implemented, but adding new ones should not be that hard, since all backend-specific code lives behind a common trait; the sketch below shows the idea.
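
Because every backend implements the same set of operations, code written against the tensor API stays backend-agnostic. A minimal sketch of the idea (the Backend trait bound and Tensor type below are illustrative assumptions, not necessarily the exact API):

    use burn_tensor::backend::Backend;
    use burn_tensor::Tensor;

    // Generic over the backend: the same function runs on any
    // implementation, CPU or GPU, that is plugged in as `B`.
    fn product<B: Backend>(x: &Tensor<B, 2>, y: &Tensor<B, 2>) -> Tensor<B, 2> {
        x.matmul(y)
    }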

Autodiff

Automatic differentiation is implemented as just another tensor backend, without any global state. This is possible because we keep track of the order in which each operation has been executed, and the tape is only created when calculating the gradients. To do so, each operation creates a new node that holds a reference to its parent nodes. Creating the tape therefore only requires a simple and efficient graph traversal algorithm.

    // Wrap the backend tensors so that every operation on them is recorded.
    let x = ADTensor::from_tensor(x_ndarray);
    let y = ADTensor::from_tensor(y_ndarray);

    // Each operation creates a new node referencing its parent nodes.
    let z = x.matmul(&y);

    // Traverse the graph in reverse to compute the gradients.
    let grads = z.backward();

    // Fetch the gradient of each input from the returned container.
    let x_grad = x.grad(&grads);
    let y_grad = y.grad(&grads);
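
The tape is thus built lazily: besides the parent references created during the forward pass, nothing is recorded ahead of time, and calling backward triggers the graph traversal that produces the gradients. Note that backward does not consume the inputs; x and y stay usable, and their gradients are simply looked up in the returned container.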

CUDA

To run with CUDA, set the TORCH_CUDA_VERSION environment variable to cu113.
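
For example, a typical invocation would look like this (with the cu113 value matching the installed CUDA toolkit):

    TORCH_CUDA_VERSION=cu113 cargo build --release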

Note

This crate can be used alone, without the entire burn stack, and with only selected backends enabled for smaller binaries.
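
For example, a dependency entry along these lines could enable a single backend (the feature names here are assumptions for illustration; check the crate's Cargo.toml for the actual ones):

    [dependencies]
    burn-tensor = { version = "*", default-features = false, features = ["ndarray"] }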