diff --git a/burn-book/src/basic-workflow/backend.md b/burn-book/src/basic-workflow/backend.md
index 2734f8b09..92eda0349 100644
--- a/burn-book/src/basic-workflow/backend.md
+++ b/burn-book/src/basic-workflow/backend.md
@@ -23,7 +23,7 @@ fn main() {
 ```
 
 In this example, we use the `Wgpu` backend which is compatible with any operating system and will
-use the GPU. For other options, see the Burn README. This backend type takes the graphics api, the
+use the GPU. For other options, see the Burn README. This backend type takes the graphics API, the
 float type and the int type as generic arguments that will be used during the training. By leaving
 the graphics API as `AutoGraphicsApi`, it should automatically use an API available on your machine.
 The autodiff backend is simply the same backend, wrapped within the `Autodiff` struct which imparts
diff --git a/burn-book/src/basic-workflow/model.md b/burn-book/src/basic-workflow/model.md
index 73088c789..07775a082 100644
--- a/burn-book/src/basic-workflow/model.md
+++ b/burn-book/src/basic-workflow/model.md
@@ -159,7 +159,7 @@ There are two major things going on in this code sample.
 
-Note that each time you create a new file in the `src` directory you also need to add explicitly this
+Note that each time you create a new file in the `src` directory you also need to explicitly add this
 module to the `main.rs` file. For instance after creating the `model.rs`, you need to add the
 following at the top of the main file:
 
 
@@ -238,13 +238,13 @@ When creating a custom neural network module, it is often a good idea to create
 the model struct. This allows you to define default values for your network, thanks to the `Config`
 attribute. The benefit of this attribute is that it makes the configuration serializable, enabling
 you to painlessly save your model hyperparameters, enhancing your experimentation process. Note that
-a constructor will automatically be generated for your configuration, which will take as input
-values for the parameter which do not have default values:
+a constructor will automatically be generated for your configuration, which will take in as input
+values the parameters which do not have default values:
 `let config = ModelConfig::new(num_classes, hidden_size);`. The default values can be overridden
 easily with builder-like methods: (e.g `config.with_dropout(0.2);`)
 
 The first implementation block is related to the initialization method. As we can see, all fields
-are set using the configuration of the corresponding neural network underlying module. In this
+are set using the configuration of the corresponding neural network's underlying module. In this
 specific case, we have chosen to expand the tensor channels from 1 to 8 with the first layer, then
 from 8 to 16 with the second layer, using a kernel size of 3 on all dimensions.
 We also use the adaptive average pooling module to reduce the dimensionality of the images to an 8 by 8 matrix,
diff --git a/burn-book/src/building-blocks/autodiff.md b/burn-book/src/building-blocks/autodiff.md
index b66e47423..b9ed0dfdd 100644
--- a/burn-book/src/building-blocks/autodiff.md
+++ b/burn-book/src/building-blocks/autodiff.md
@@ -1,11 +1,11 @@
 # Autodiff
 
-Burn's tensor also supports autodifferentiation, which is an essential part of any deep learning
+Burn's tensor also supports auto-differentiation, which is an essential part of any deep learning
 framework. We introduced the `Backend` trait in the [previous section](./backend.md), but Burn also
 has another trait for autodiff: `AutodiffBackend`.
 
 However, not all tensors support auto-differentiation; you need a backend that implements both the
-`Backend` and `AutodiffBackend` traits. Fortunately, you can add autodifferentiation capabilities to any
+`Backend` and `AutodiffBackend` traits. Fortunately, you can add auto-differentiation capabilities to any
 backend using a backend decorator: `type MyAutodiffBackend = Autodiff`. This decorator
 implements both the `AutodiffBackend` and `Backend` traits by maintaining a dynamic computational
 graph and utilizing the inner backend to execute tensor operations.
diff --git a/burn-book/src/building-blocks/tensor.md b/burn-book/src/building-blocks/tensor.md
index e3d10658f..cf9406ee1 100644
--- a/burn-book/src/building-blocks/tensor.md
+++ b/burn-book/src/building-blocks/tensor.md
@@ -295,9 +295,9 @@ Those operations are only available for `Bool` tensors.
 
 | Burn API                            | PyTorch Equivalent              |
 | ----------------------------------- | ------------------------------- |
-| `Tensor.diag_mask(shape, diagonal)` | N/A                             |
-| `Tensor.tril_mask(shape, diagonal)` | N/A                             |
-| `Tensor.triu_mask(shape, diagonal)` | N/A                             |
+| `Tensor::diag_mask(shape, diagonal)`| N/A                             |
+| `Tensor::tril_mask(shape, diagonal)`| N/A                             |
+| `Tensor::triu_mask(shape, diagonal)`| N/A                             |
 | `tensor.argwhere()`                 | `tensor.argwhere()`             |
 | `tensor.float()`                    | `tensor.to(torch.float)`        |
 | `tensor.int()`                      | `tensor.to(torch.long)`         |
diff --git a/burn-book/src/getting-started.md b/burn-book/src/getting-started.md
index eb8d6005d..47304fc74 100644
--- a/burn-book/src/getting-started.md
+++ b/burn-book/src/getting-started.md
@@ -252,7 +252,7 @@ uses the library. We even have some Burn examples that uses the library crate of
 The examples are unique files under the `examples` directory. Each file produces an executable file
 with the same name. Each example can then be executed with `cargo run --example `.
 
-Below is an file tree of a typical Burn example package:
+Below is a file tree of a typical Burn example package:
 
 ```
 examples/burn-example