# BURN

BURN: Burn Unstoppable Rusty Neurons

This library aims to be a complete deep learning framework with extreme flexibility, written in Rust. The goal is to serve researchers as well as practitioners by making it easier to experiment with, train, and deploy your solution.

## Why Rust?

A big benefit of using Rust instead of Python is that it allows performant multi-threaded deep learning networks, which might open new doors for more efficient models. Scale seems to be very important, but the only tool we currently have to achieve it is big matrix multiplications on GPUs. This often implies large batch sizes, which is impossible for online learning. Also, asynchronous, sparsely activated networks that share weights without copying them are practically impossible to achieve with Python (or really hard without proper threading).
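
As a minimal illustration of the "without copying weights" point, here is a plain Rust sketch (not part of BURN) in which several worker threads read the same weight buffer through an `Arc`, so only a reference-counted pointer is cloned, never the data:

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // A hypothetical weight buffer shared across worker threads.
    let weights = Arc::new(vec![0.5f32; 1024]);

    let handles: Vec<_> = (0..4)
        .map(|worker| {
            // Cheap pointer clone; the underlying buffer is not copied.
            let weights = Arc::clone(&weights);
            thread::spawn(move || {
                // Each worker reads the shared weights concurrently.
                let sum: f32 = weights.iter().sum();
                println!("worker {worker}: sum = {sum}");
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
}
```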

## Burn-Tensor

BURN has its own tensor library supporting multiple backends; it can also be used for other scientific computing applications. See the [burn-tensor](./burn-tensor) directory for more details.
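
For a feel of what tensor code looks like, here is a minimal sketch. Note that the backend name (`NdArray`), the constructor names (`from_floats`, `ones`), and the generics order follow later burn releases and are assumptions here, not necessarily the API at this commit; consult the burn-tensor docs for your version:

```rust
use burn::backend::NdArray;
use burn::tensor::Tensor;

// Assumed CPU backend name from later burn releases.
type B = NdArray;

fn main() {
    let device = Default::default();

    // 2x2 tensor from literal data (hypothetical constructor name).
    let a = Tensor::<B, 2>::from_floats([[1.0, 2.0], [3.0, 4.0]], &device);
    // 2x2 tensor filled with ones.
    let b = Tensor::<B, 2>::ones([2, 2], &device);

    // Element-wise addition followed by a matrix multiplication.
    let c = (a.clone() + b).matmul(a);
    println!("{c}");
}
```

The same code is meant to run on any backend implementing the `Backend` trait, which is what "supporting multiple backends" refers to.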

## Module Definition

Currently working on it ... 💻