mirror of https://github.com/tracel-ai/burn.git
chore: update main README links to crate-specific READMEs (#1415)
This commit is contained in:
parent 4ed90a988e
commit a808dd0e1c
README.md (12 changed lines)
@@ -331,7 +331,7 @@ implementation details. It is fully optimized with the
 [performance characteristics mentioned earlier](#performance), as it serves as our research
 playground for a variety of optimizations.
 
-See the [WGPU Backend README](./burn-wgpu/README.md) for more details.
+See the [WGPU Backend README](./crates/burn-wgpu/README.md) for more details.
 
 </details>
 
@@ -345,7 +345,7 @@ Based on [Candle by Hugging Face](https://github.com/huggingface/candle), a mini
 for Rust with a focus on performance and ease of use, this backend can run on CPU with support for
 Web Assembly or on Nvidia GPUs using CUDA.
 
-See the [Candle Backend README](./burn-candle/README.md) for more details.
+See the [Candle Backend README](./crates/burn-candle/README.md) for more details.
 
 > _Disclaimer:_ This backend is not fully completed yet, but can work in some contexts like
 > inference.
@@ -362,7 +362,7 @@ PyTorch doesn't need an introduction in the realm of deep learning. This backend
 [PyTorch Rust bindings](https://github.com/LaurentMazare/tch-rs), enabling you to use LibTorch C++
 kernels on CPU, CUDA and Metal.
 
-See the [LibTorch Backend README](./burn-tch/README.md) for more details.
+See the [LibTorch Backend README](./crates/burn-tch/README.md) for more details.
 
 </details>
 
@@ -376,7 +376,7 @@ This CPU backend is admittedly not our fastest backend, but offers extreme porta
 
 It is our only backend supporting _no_std_.
 
-See the [NdArray Backend README](./burn-ndarray/README.md) for more details.
+See the [NdArray Backend README](./crates/burn-ndarray/README.md) for more details.
 
 </details>
 
@@ -416,7 +416,7 @@ Of note, it is impossible to make the mistake of calling backward on a model tha
 that does not support autodiff (for inference), as this method is only offered by an Autodiff
 backend.
 
-See the [Autodiff Backend README](./burn-autodiff/README.md) for more details.
+See the [Autodiff Backend README](./crates/burn-autodiff/README.md) for more details.
 
 </details>
 
@@ -455,7 +455,7 @@ Of note, we plan to implement automatic gradient checkpointing based on compute
 bound operations, which will work gracefully with the fusion backend to make your code run even
 faster during training, see [this issue](https://github.com/tracel-ai/burn/issues/936).
 
-See the [Fusion Backend README](./burn-fusion/README.md) for more details.
+See the [Fusion Backend README](./crates/burn-fusion/README.md) for more details.
 
 </details>
 
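The README text in the `@@ -416` hunk says it is impossible to mistakenly call backward on a model running on a backend without autodiff, because that method is only offered by an Autodiff backend. A minimal self-contained Rust sketch of that wrapper pattern, using toy types rather than burn's actual API:

```rust
// Toy illustration (not burn's real code): autodiff as a wrapper backend.
// Because `backward` is defined only on the wrapper, calling it on a plain
// inference backend is a compile error, not a runtime surprise.
trait Backend {
    fn matmul(&self) -> &'static str;
}

// A plain inference backend with no gradient support.
struct NdArrayLike;
impl Backend for NdArrayLike {
    fn matmul(&self) -> &'static str {
        "matmul on cpu"
    }
}

// The wrapper forwards ordinary ops to the inner backend...
struct AutodiffLike<B: Backend>(B);
impl<B: Backend> Backend for AutodiffLike<B> {
    fn matmul(&self) -> &'static str {
        self.0.matmul()
    }
}

// ...and is the only place the training-only API exists.
impl<B: Backend> AutodiffLike<B> {
    fn backward(&self) -> &'static str {
        "gradients computed"
    }
}

fn main() {
    let inference = NdArrayLike;
    let training = AutodiffLike(NdArrayLike);
    println!("{}", inference.matmul());
    // `inference.backward()` would not compile: the method only exists on the wrapper.
    println!("{}", training.backward());
}
```

In burn itself this corresponds to wrapping any base backend in the Autodiff backend type for training, while using the base backend directly for inference.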