* Element already implements One
* Add element module
* Add our own traits for Zero, One and ToPrimitive to support bool Element
* Fix typo
* Add basic tests for ToPrimitive with expected values
* The most important change of all
* Remove One + Zero identities
* Move zero/one outside mapv + refactor ToPrimitive -> ToElement trait
* Add num-traits to NOTICES.md
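As a hedged illustration of the Zero/One/ToElement entries above (sketch definitions only, not the crate's actual trait signatures): `bool` implements none of num-traits' `Zero`, `One`, or `ToPrimitive`, so supporting a bool `Element` calls for crate-local equivalents that bool can implement.

```rust
// Sketch only -- not burn's actual trait definitions.
// num-traits' ToPrimitive cannot be implemented for bool here, so a
// crate-local conversion trait can cover it alongside the numeric types.
pub trait ToElement {
    fn to_f64(&self) -> f64;
    fn to_i64(&self) -> i64;
}

impl ToElement for bool {
    fn to_f64(&self) -> f64 {
        if *self { 1.0 } else { 0.0 }
    }
    fn to_i64(&self) -> i64 {
        *self as i64
    }
}

impl ToElement for f32 {
    fn to_f64(&self) -> f64 {
        f64::from(*self)
    }
    fn to_i64(&self) -> i64 {
        *self as i64
    }
}
```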
* Updated documentation for unfold4d
Added links between the struct and the config. Added a link to the related burn_tensor function in the documentation for the forward function.
* Changing nn relu module documentation to functional api
Moving the formula for relu from the module API to the functional API,
citing a paper relevant to relu,
and mentioning the functional API in the module API (see the sketch below).
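A minimal sketch of the module-vs-functional split referenced in this entry (import paths and the `Relu::new()` constructor are assumptions):

```rust
use burn::nn::Relu;
use burn::tensor::{activation, backend::Backend, Tensor};

fn relu_both_ways<B: Backend>(x: Tensor<B, 2>) -> (Tensor<B, 2>, Tensor<B, 2>) {
    // Module API: documented to point at the functional API for the formula.
    let via_module = Relu::new().forward(x.clone());
    // Functional API: where the relu formula and paper citation now live.
    let via_function = activation::relu(x);
    (via_module, via_function)
}
```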
* Linking gelu module API documentation to functional API documentation
* Linear module : adding documentation
Adding documentation to the Linear module,
mentioning that the LinearConfig struct
should be used when creating a Linear layer.
Also adding links to the documentation that point people toward
the right path.
* Updated documentation for dropout
Added links between the struct and the config. Added a link to the struct in the forward function for more info.
* embedding + swiglu
* RotaryEncoding: adding documentation
Adding documentation stating that RotaryEncoding should be created using a RotaryEncodingConfig
* prelu: adding documentation
Adding documentation to the prelu module:
- Linking forward function documentation to the functional API
- Citing the first paper to mention prelu
- Adding documentation saying that the prelu layer should be created using PReluConfig
* pos_encoding: adding documentation
* Updated documentation for mha
Added links for more info. Added shape info at some places.
* docs: Add documentation for Gru module
Provide documentation for the Gru module, including its configuration and usage. Include a link to the paper that introduced the Gated Recurrent Unit (GRU) and specify that the module should be created using GruConfig. Also, mention that the forward function returns a state tensor with specific dimensions.
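A hypothetical usage sketch matching the pattern this entry documents (the import path, `GruConfig::new` arguments, and the forward signature are assumptions, not verified API):

```rust
use burn::nn::gru::{Gru, GruConfig};
use burn::tensor::{backend::Backend, Tensor};

fn run_gru<B: Backend>(input: Tensor<B, 3>, device: &B::Device) -> Tensor<B, 3> {
    // Create the module through its config, as the documentation recommends.
    let gru: Gru<B> = GruConfig::new(32, 64, true).init(device);
    // Forward returns a state tensor, documented as
    // [batch_size, sequence_length, d_hidden].
    gru.forward(input, None)
}
```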
* burn-core-nn-transformers: adding documentation
Adding documentation:
- States that the config should be used to create the layers
- Adds the mathematical formula for the pwff forward pass (reproduced below)
- Adds a citation in the pwff to the "Attention Is All You Need" paper
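For reference, the formula cited in the pwff entry is the position-wise feed-forward network from "Attention Is All You Need" (the module's default activation may differ, e.g. GELU; this is the paper's form):

```latex
\mathrm{FFN}(x) = \max(0,\; xW_1 + b_1)\, W_2 + b_2
```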
* Updated documentation: ConvTranspose1d and ConvTranspose2d
* docs: Add documentation for Lstm and BiLstm modules
Provide documentation for the Lstm and BiLstm modules, including their configurations and usage. Include links to the papers that introduced Long Short-Term Memory (LSTM) and Bidirectional LSTM. Specify that the modules should be created using LstmConfig and BiLstmConfig respectively.
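A hypothetical sketch of the BiLstm pattern described here (names and signatures assumed); the bidirectional variant concatenates the forward and backward hidden states, so the output feature dimension is 2 * d_hidden:

```rust
use burn::nn::lstm::{BiLstm, BiLstmConfig};
use burn::tensor::{backend::Backend, Tensor};

fn run_bilstm<B: Backend>(input: Tensor<B, 3>, device: &B::Device) -> Tensor<B, 3> {
    // Create through the config, as the documentation specifies.
    let bilstm: BiLstm<B> = BiLstmConfig::new(32, 64, true).init(device);
    // Assumed forward signature: returns (output, state);
    // output shape [batch_size, sequence_length, 2 * d_hidden].
    let (output, _state) = bilstm.forward(input, None);
    output
}
```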
* docs: Update documentation for ConvTranspose1d and ConvTranspose2d modules
* loss: Adding documentation to the loss layers
Adding documentation stating that the config should be used to create the layer
* chore: Refactor Conv1d module imports and update documentation
* docs: Add documentation for AdaptiveAvgPool1d and AdaptiveAvgPool2d modules
Added references to the burn_tensor associated functions. Added links between the struct and the config.
* Refactor Conv1d module imports and update documentation
* chore: Refactor Conv2d module imports and update documentation
* Add documentation for AvgPool1d and AvgPool2d modules
Added references to the burn_tensor associated functions. Added links between the struct and the config.
* Add documentation for MaxPool1d and MaxPool2d modules
Added references to the burn_tensor associated functions. Added links between the struct and the config.
* Add documentation for leaky_relu and removed Config generic
Added references to the burn_tensor associated functions. Added links between the struct and the config. Removed the backend generic from the config since it's not needed (might be a breaking change).
* refactor: Update BatchNormConfig initialization and add documentation.
* Added link to config in embedding struct documentation
* refactor: Update GroupNormConfig initialization and add documentation
* refactor: Update InstanceNormConfig initialization and add documentation
* feat: Update LayerNormConfig initialization and add documentation
* refactor: Update RmsNormConfig initialization and add documentation
* fixed: removed #derive accidentally
* Added missing backticks in pools' shapes
* Format nn doc
* Make config fields public in nn modules
* Update import statements in nn modules
Changed burn_tensor imports to crate::tensor
* Update import statements in nn modules' tests
Changed burn_tensor imports to crate::tensor
* breaking change refactor: Update GroupNormConfig and InstanceNormConfig initialization
* Make SwiGlu fields public
* grammar
* slashes
* input tensors grouping
* copy-pasta mistake
* a not an >:I
* Capitalization
* better desc
* math 'n ticks
* group_norm functional implementation
* removed the ... struct
* decoder typo
* fmt
* referring to private fn in docs
---------
Co-authored-by: Thierry Cantin-Demers <piertcd@gmail.com>
Co-authored-by: mepatrick73 <pameu17@ulaval.ca>
* draft for alternative burn import design
* passes onnx test, fails to build example
* pushing to test example on main
* fixed the issue with the example
* passes the test now
* spring cleaning and minor code changes
* removed pub visibility from most graph_data fields and functions
* comment fixes
* went ahead and removed the constant check for now
* removed unused function arg
On the Record page, "Pack" is written as "Park" in four places. The fix
also makes them match the official name, which has no space and is in
CamelCase.
Co-authored-by: towerpark <t56ouhw1d@mozmail.com>
* Add a feature to initialize from an existing wgpu adapter/device/queue
This is useful when interacting with other wgpu applications (e.g. displaying a burn tensor as a texture in egui). The existing devices are keyed by the wgpu Device ID. Alternatively, they could be keyed per adapter, which would be more in line with other burn WgpuDevices (one per adapter), but there is no strong reason to do so.
This also involves making the Queue into an Arc. Alternatively, this could give up ownership of the queue, but keeping it shared makes it possible to synchronize burn operations and custom wgpu operations.
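A rough sketch of the integration point this adds (the entry-point name, module path, and options argument are assumptions; only the behavior, registering an existing adapter/device/queue and getting back a burn device handle, comes from the description above):

```rust
use std::sync::Arc;

use burn::backend::wgpu::WgpuDevice;

// Hypothetical wrapper: hand burn an adapter/device/queue that your
// application (e.g. an egui renderer) already created, and get back a
// WgpuDevice keyed by that wgpu device. The queue is shared via Arc so
// burn operations and custom wgpu operations can be synchronized.
fn register_existing(
    adapter: Arc<wgpu::Adapter>,
    device: Arc<wgpu::Device>,
    queue: Arc<wgpu::Queue>,
) -> WgpuDevice {
    // `init_existing_device` and its options argument are assumed names.
    burn::backend::wgpu::init_existing_device(adapter, device, queue, Default::default())
}
```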
* struct support (receive, use and modify fields)
* support struct with generics
* expect instead of unwrap
* fmt
* rename struct
* fmt
* Clippy
* Fix launcher
* Support creating private cube type without generics
* Cleanup
* generics support
* clippy
* minor
* fmt
---------
Co-authored-by: nathaniel <nathaniel.simard.42@gmail.com>
* Move and redirect GatherElements to new folders/nodes
* Create PyTorch script for gather
* Add onnx file for gather
* Add a gather test to onnx_tests
* Update gather.rs to use select
* Rename codegen test
* Update gather and gather_elements conversion functions
* Validate rank of input node and update output
* Add check for Gather
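A hedged sketch of the mapping behind "Update gather.rs to use select" (signatures assumed): an ONNX Gather with rank-1 integer indices along `axis` corresponds to burn's `select`:

```rust
use burn::tensor::{backend::Backend, Int, Tensor};

// ONNX Gather(data, indices, axis) with rank-1 int indices picks whole
// slices along `axis`, which is what `select` does.
fn gather_like<B: Backend, const D: usize>(
    data: Tensor<B, D>,
    indices: Tensor<B, 1, Int>,
    axis: usize,
) -> Tensor<B, D> {
    data.select(axis, indices)
}
```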
* Adds remainder ops implementation for Tensor.
* Adds test for % operator.
* Add remainder and % operator entry in tensor.md
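A small sketch of the new op (operand types are an assumption; the entries above only confirm a remainder op, a `%` operator, and a tensor.md entry):

```rust
use burn::tensor::{backend::Backend, Tensor};

// Elementwise remainder against a scalar divisor via the new `%` operator.
fn fractional_part<B: Backend>(x: Tensor<B, 1>) -> Tensor<B, 1> {
    x % 1.0
}
```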
---------
Co-authored-by: Jonas Kantic <jk.mail@posteo.net>