mirror of https://github.com/tracel-ai/burn.git
Fixing various syntax errors in the Burn book (#1740)
This commit is contained in:
parent 1cde566317
commit adbe97dc4d
@@ -23,7 +23,7 @@ fn main() {
 ```
 
 In this example, we use the `Wgpu` backend which is compatible with any operating system and will
-use the GPU. For other options, see the Burn README. This backend type takes the graphics api, the
+use the GPU. For other options, see the Burn README. This backend type takes the graphics API, the
 float type and the int type as generic arguments that will be used during the training. By leaving
 the graphics API as `AutoGraphicsApi`, it should automatically use an API available on your machine.
 The autodiff backend is simply the same backend, wrapped within the `Autodiff` struct which imparts
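The generic parameters described in this hunk can be sketched in plain Rust. This is a hand-written stand-in with dummy zero-sized types, not the real `burn` definitions; it only illustrates how a backend type can take the graphics API, float type, and int type as generic arguments with defaults:

```rust
use std::marker::PhantomData;

// Dummy marker type standing in for burn's automatic graphics API selection.
struct AutoGraphicsApi;

// Sketch of a backend parameterized by graphics API `G`, float type `F`,
// and int type `I`, with defaults matching the text's description.
struct Wgpu<G = AutoGraphicsApi, F = f32, I = i32> {
    _marker: PhantomData<(G, F, I)>,
}

type MyBackend = Wgpu<AutoGraphicsApi, f32, i32>;

fn main() {
    // The marker struct carries no data; it exists only at the type level.
    assert_eq!(std::mem::size_of::<MyBackend>(), 0);
}
```

Because the defaults cover the common case, `Wgpu` alone would mean the same as the fully spelled-out alias above.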
@@ -159,7 +159,7 @@ There are two major things going on in this code sample.
 </details><br>
 
-Note that each time you create a new file in the `src` directory you also need to add explicitly this
+Note that each time you create a new file in the `src` directory you also need to explicitly add this
 module to the `main.rs` file. For instance after creating the `model.rs`, you need to add the following
 at the top of the main file:
 
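The module declaration this hunk refers to would be `mod model;` in `main.rs`, pointing at `src/model.rs`. As a self-contained sketch (the module body is inlined here so it compiles on its own, instead of living in a separate file):

```rust
// In a real project this would be `mod model;` at the top of main.rs,
// telling the compiler to include src/model.rs. The body is inlined
// here only so the sketch is self-contained.
mod model {
    pub fn describe() -> &'static str {
        "model module"
    }
}

fn main() {
    // Items in the module are reached through its path.
    assert_eq!(model::describe(), "model module");
}
```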
@@ -238,13 +238,13 @@ When creating a custom neural network module, it is often a good idea to create
 the model struct. This allows you to define default values for your network, thanks to the `Config`
 attribute. The benefit of this attribute is that it makes the configuration serializable, enabling
 you to painlessly save your model hyperparameters, enhancing your experimentation process. Note that
-a constructor will automatically be generated for your configuration, which will take as input
-values for the parameter which do not have default values:
+a constructor will automatically be generated for your configuration, which will take in as input
+values the parameters which do not have default values:
 `let config = ModelConfig::new(num_classes, hidden_size);`. The default values can be overridden
 easily with builder-like methods: (e.g `config.with_dropout(0.2);`)
 
 The first implementation block is related to the initialization method. As we can see, all fields
-are set using the configuration of the corresponding neural network underlying module. In this
+are set using the configuration of the corresponding neural network's underlying module. In this
 specific case, we have chosen to expand the tensor channels from 1 to 8 with the first layer, then
 from 8 to 16 with the second layer, using a kernel size of 3 on all dimensions. We also use the
 adaptive average pooling module to reduce the dimensionality of the images to an 8 by 8 matrix,
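The generated constructor and builder methods that this hunk describes can be sketched by hand in plain Rust. This is a hypothetical equivalent of what the `Config` derive produces for a `ModelConfig` with one defaulted field, not the actual generated code:

```rust
// Hand-written sketch of what the Config attribute generates:
// `new` takes only the fields without defaults; defaulted fields
// get `with_*` builder methods to override them.
#[derive(Debug, Clone)]
struct ModelConfig {
    num_classes: usize,
    hidden_size: usize,
    dropout: f64, // has a default, so it is not a parameter of `new`
}

impl ModelConfig {
    fn new(num_classes: usize, hidden_size: usize) -> Self {
        Self { num_classes, hidden_size, dropout: 0.5 }
    }

    fn with_dropout(mut self, dropout: f64) -> Self {
        self.dropout = dropout;
        self
    }
}

fn main() {
    // Mirrors the book's usage: construct, then override a default.
    let config = ModelConfig::new(10, 512).with_dropout(0.2);
    assert_eq!(config.num_classes, 10);
    assert_eq!(config.dropout, 0.2);
}
```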
@@ -1,11 +1,11 @@
 # Autodiff
 
-Burn's tensor also supports autodifferentiation, which is an essential part of any deep learning
+Burn's tensor also supports auto-differentiation, which is an essential part of any deep learning
 framework. We introduced the `Backend` trait in the [previous section](./backend.md), but Burn also
 has another trait for autodiff: `AutodiffBackend`.
 
 However, not all tensors support auto-differentiation; you need a backend that implements both the
-`Backend` and `AutodiffBackend` traits. Fortunately, you can add autodifferentiation capabilities to any
+`Backend` and `AutodiffBackend` traits. Fortunately, you can add auto-differentiation capabilities to any
 backend using a backend decorator: `type MyAutodiffBackend = Autodiff<MyBackend>`. This
 decorator implements both the `AutodiffBackend` and `Backend` traits by maintaining a dynamic
 computational graph and utilizing the inner backend to execute tensor operations.
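The decorator idea in this hunk can be sketched with a toy trait in plain Rust. These are stand-in types, not the real burn traits; the sketch only shows the shape of the pattern, with the inner backend doing the work while the wrapper adds behavior:

```rust
use std::marker::PhantomData;

// Toy stand-in for burn's Backend trait.
trait Backend {
    fn name() -> String;
}

struct NdArray;
impl Backend for NdArray {
    fn name() -> String {
        "ndarray".into()
    }
}

// The decorator: implements the same trait as the inner backend `B`,
// delegating to it. The real Autodiff<B> would additionally maintain
// the dynamic computational graph needed for backpropagation.
struct Autodiff<B: Backend>(PhantomData<B>);

impl<B: Backend> Backend for Autodiff<B> {
    fn name() -> String {
        format!("autodiff<{}>", B::name())
    }
}

fn main() {
    type MyAutodiffBackend = Autodiff<NdArray>;
    assert_eq!(MyAutodiffBackend::name(), "autodiff<ndarray>");
}
```

Because the wrapper is generic over any `B: Backend`, the same decorator works for every backend, which is exactly why `Autodiff<MyBackend>` composes with any inner backend in burn.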
@@ -295,9 +295,9 @@ Those operations are only available for `Bool` tensors.
 
 | Burn API                            | PyTorch Equivalent              |
 | ----------------------------------- | ------------------------------- |
-| `Tensor.diag_mask(shape, diagonal)` | N/A                             |
-| `Tensor.tril_mask(shape, diagonal)` | N/A                             |
-| `Tensor.triu_mask(shape, diagonal)` | N/A                             |
+| `Tensor::diag_mask(shape, diagonal)`| N/A                             |
+| `Tensor::tril_mask(shape, diagonal)`| N/A                             |
+| `Tensor::triu_mask(shape, diagonal)`| N/A                             |
 | `tensor.argwhere()`                 | `tensor.argwhere()`             |
 | `tensor.float()`                    | `tensor.to(torch.float)`        |
 | `tensor.int()`                      | `tensor.to(torch.long)`         |
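The table notes that the triangular masks have no PyTorch equivalent. As an illustration of what such a mask contains, here is a plain-Rust sketch of a lower-triangular boolean mask for a 2D shape. This is a hypothetical helper over nested `Vec`s, not the burn implementation, and burn's exact true/false convention for these masks may differ:

```rust
// Sketch: a [rows, cols] boolean mask that is true on and below the
// diagonal shifted by `diagonal` (positive shifts it up-right).
fn tril_mask(rows: usize, cols: usize, diagonal: i64) -> Vec<Vec<bool>> {
    (0..rows)
        .map(|i| (0..cols).map(|j| (j as i64) <= (i as i64) + diagonal).collect())
        .collect()
}

fn main() {
    let mask = tril_mask(3, 3, 0);
    // Row 0 keeps only the diagonal element; row 2 keeps everything.
    assert_eq!(mask[0], vec![true, false, false]);
    assert_eq!(mask[2], vec![true, true, true]);
}
```

An upper-triangular (`triu`) mask would flip the comparison to `>=`, and a diagonal mask would use `==`.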
@@ -252,7 +252,7 @@ uses the library. We even have some Burn examples that uses the library crate of
 The examples are unique files under the `examples` directory. Each file produces an executable file
 with the same name. Each example can then be executed with `cargo run --example <executable name>`.
 
-Below is an file tree of a typical Burn example package:
+Below is a file tree of a typical Burn example package:
 
 ```
 examples/burn-example