Commit Graph

85 Commits

Author SHA1 Message Date
Sylvain Benner e303e31c8b
Bump next version of Burn to 0.14.0 (#1618) 2024-04-12 17:14:45 -04:00
Guillaume Lagrange 9980db440d
Remove unused assets (#1616) 2024-04-12 15:48:16 -04:00
Guillaume Lagrange 264c167c11
Update licenses symlinks (#1613) 2024-04-12 14:43:58 -04:00
Aasheesh Singh fb1da53a38
Add support for rotary positional encoding to transformer modules (#1604)
* add rotary positional encoding to transformer modules.

* fix f64 error

* use num_traits

* add panic condition
2024-04-12 11:45:49 -04:00
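
A note on the commit above: rotary positional encoding (RoPE) rotates each consecutive pair of embedding dimensions by a position-dependent angle. Below is a minimal plain-Rust sketch of the underlying math, assuming the standard RoFormer scheme with base `theta`; it is illustrative only, not Burn's module API, and the even-size `assert!` is just one plausible reading of the "add panic condition" bullet.

```rust
/// Rotate a single token embedding by position-dependent angles.
/// Pair (x[2i], x[2i+1]) is rotated by pos * theta^(-2i/d).
fn rotary_encode(x: &[f32], pos: usize, theta: f32) -> Vec<f32> {
    let d = x.len();
    assert!(d % 2 == 0, "embedding size must be even for RoPE");
    let mut out = vec![0.0_f32; d];
    for i in 0..d / 2 {
        let angle = pos as f32 * theta.powf(-2.0 * i as f32 / d as f32);
        let (sin, cos) = angle.sin_cos();
        out[2 * i] = x[2 * i] * cos - x[2 * i + 1] * sin;
        out[2 * i + 1] = x[2 * i] * sin + x[2 * i + 1] * cos;
    }
    out
}
```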
Dilshod Tadjibaev 2f885480ed
Use num-traits for float ops (#1584) 2024-04-08 10:16:20 -05:00
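
The `num-traits` switch above (and the "use num_traits" bullet in the rotary-encoding commit) follows a common Rust pattern: write float math against the `num_traits::Float` trait so the same code serves every element type. A minimal sketch, assuming only the published `num-traits` crate:

```rust
use num_traits::Float;

/// Works for any type implementing `Float` (f32, f64, ...),
/// instead of hard-coding intrinsics for one width.
fn sigmoid<F: Float>(x: F) -> F {
    F::one() / (F::one() + (-x).exp())
}

fn main() {
    assert!((sigmoid(0.0_f32) - 0.5).abs() < 1e-6);
    assert!((sigmoid(0.0_f64) - 0.5).abs() < 1e-12);
}
```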
Louis Fortier-Dubois f5159b6d22
Refactor: split JitKernel and SourceKernel (#1569)
* refactor execute_dynamic into Execution

* minor change

* extension cfg

* jitkernel and sourcekernel

* add todo statement

* cleanup and docs

* update book

* fix server dependency on compiler

* refactor into shader information

* refactor to compile shader once

* clippy

* clippy

* clippy

* fix doc

* fix doc

* fmt

* rename feature flag

* refactor

* All broken

* compile at the right time

* todo done

* all dynamic

* all dynamic in template too

* fmt

* fix ci

---------

Co-authored-by: nathaniel <nathaniel.simard.42@gmail.com>
2024-04-05 12:58:10 -04:00
Nathaniel Simard 1239d9bfa3
[Breaking] Make Tensor, Module, Optimizer !Sync + Refactor Autodiff (#1575) 2024-04-04 16:01:17 -04:00
Guillaume Lagrange 0978c8a586
Support multilabel binary cross entropy (#1571)
* Support multilabel binary cross entropy

* Add missing alloc Vec
2024-04-03 08:03:07 -04:00
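
For reference on the loss above: multilabel binary cross entropy scores each output unit as an independent 0/1 classification and averages the per-unit losses. A naive plain-Rust sketch of the math (illustrative only; a production implementation would typically use a numerically stable formulation rather than a raw sigmoid and log):

```rust
/// Mean binary cross entropy over logits with independent 0/1 targets.
fn multilabel_bce(logits: &[f32], targets: &[f32]) -> f32 {
    assert_eq!(logits.len(), targets.len());
    let sum: f32 = logits
        .iter()
        .zip(targets)
        .map(|(&z, &y)| {
            let p = 1.0 / (1.0 + (-z).exp()); // sigmoid
            -(y * p.ln() + (1.0 - y) * (1.0 - p).ln())
        })
        .sum();
    sum / logits.len() as f32
}
```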
Nathaniel Simard b0c5986d16
Feat/lazy init (#1539) 2024-04-02 10:13:35 -04:00
Karsten Becker c21d5a3207
Add LeakyReLu implementation (#1208)
* Implement LeakyReLu

* Cargo fmt

* Apply suggestions

* cargo fmt

* Use float_mul_scalar

* Should be grad

* Add to books module

* Move test files

* Update leaky relu to use activation function

* Update tensor.md

* Fix failing test due to approx

* Add back the function comment

* Fix comment per PR feedback

---------

Co-authored-by: Dilshod Tadjibaev <939125+antimora@users.noreply.github.com>
2024-03-27 13:57:51 -05:00
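
The activation added above is simple to state: identity for positive inputs, a small linear slope for negative ones. A one-function sketch of the math (not the Burn module itself, though the "Use float_mul_scalar" bullet suggests the slope is applied as a scalar multiply there too):

```rust
/// LeakyReLU(x) = x          if x >= 0
///              = slope * x  otherwise
fn leaky_relu(x: f32, negative_slope: f32) -> f32 {
    if x >= 0.0 { x } else { negative_slope * x }
}

fn main() {
    assert_eq!(leaky_relu(3.0, 0.01), 3.0);
    assert_eq!(leaky_relu(-2.0, 0.01), -0.02);
}
```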
jcmullwh 626457e1c6
Provide Tensor Padding Helpers #960 (#1097)
* Initial padding approach

Create padding implementation for the last two dimensions of Float and Int Tensors.

Create PadMode Enum, allowing Constant padding.

Create Padding Struct with Uniform, Asymmetric, height, and width implementations.

Create tests for the padding implementation.

* Update padding.rs

remove unneeded import

* Update from Merge

Use crate Element

Swap from old from_data() to new from_data_devauto()

* Formatting Changes

Formatting changes from cargo fmt --all

* Additional Format Change

One more format change that cargo fmt didn't get the first time.

* Changes to Example

Modify Example to ensure it works.

* modify naming

better names for impl / input variables.

* Modify API

- Change Padding to PadSize.
- integrate padding value into PadMode.
- update tests and examples.

* Comments and print

Improve comments+naming and remove println

* Pad Fixes

Moved pad to numeric

Simplified PadMode Element

updated tensor creations

fixed doc example

* Fix test location

* Simplified pad API

* Fix for failed unit tests

* Remove bool_full

* Rename `pads` to `padding`

---------

Co-authored-by: Dilshod Tadjibaev <939125+antimora@users.noreply.github.com>
2024-03-27 12:46:55 -05:00
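
To make the feature above concrete: constant padding of the last two dimensions grows a matrix on each side and fills the new cells with a single value. A self-contained sketch of those semantics on a `Vec<Vec<f32>>` (illustrative only; the merged `pad` API and its `PadMode` live in Burn's numeric tensor ops, per the "Moved pad to numeric" bullet):

```rust
/// Constant-pad a rows x cols matrix; `padding` = (left, right, top, bottom).
fn pad_constant(
    data: &[Vec<f32>],
    padding: (usize, usize, usize, usize),
    value: f32,
) -> Vec<Vec<f32>> {
    let (left, right, top, bottom) = padding;
    let cols = data.first().map_or(0, |row| row.len());
    let width = left + cols + right;
    let mut out = Vec::with_capacity(top + data.len() + bottom);
    out.extend((0..top).map(|_| vec![value; width]));
    for row in data {
        let mut padded = vec![value; left];
        padded.extend_from_slice(row);
        padded.extend(std::iter::repeat(value).take(right));
        out.push(padded);
    }
    out.extend((0..bottom).map(|_| vec![value; width]));
    out
}
```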
Aasheesh Singh a77979e0b6
add rms norm layer (#1527) 2024-03-25 18:59:11 -04:00
Aasheesh Singh 613e698007
Feat/swiglu (#1507) 2024-03-25 15:55:27 -04:00
Rubén J.R 69f1877754
New learning rate schedulers (#1481) 2024-03-19 08:28:42 -05:00
Dilshod Tadjibaev 8a8300c1fb
Add tril_mask, triu_mask and diag_mask ops (#1479) 2024-03-18 10:15:40 -05:00
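
A quick sketch of what mask ops like those produce on an n x n shape: boolean matrices selecting the lower triangle, upper triangle, or the diagonal, each with an integer offset. The polarity shown here (selected cells are `true`) is an assumption about Burn's convention:

```rust
/// Lower-triangular mask: true where column <= row + offset.
fn tril_mask(n: usize, offset: i64) -> Vec<Vec<bool>> {
    (0..n as i64)
        .map(|i| (0..n as i64).map(|j| j <= i + offset).collect())
        .collect()
}

/// Diagonal mask: true exactly on the main diagonal.
fn diag_mask(n: usize) -> Vec<Vec<bool>> {
    (0..n).map(|i| (0..n).map(|j| i == j).collect()).collect()
}
```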
Arjun31415 d3af29c5b4
Missing `Debug` derive for Group Norm Config (#1482) 2024-03-17 13:12:50 -04:00
Arjun31415 4de1272344
Feat: Add Leaky Relu Model (#1467) 2024-03-14 10:53:40 -05:00
WorldSEnder 53eb3ecfa9
Implement Huber loss (#1444)
* Implement Huber loss

Instead of using a sign or abs function, this uses clamping to compute the loss
outside the bounds, which works better with the autodiff backend (see the sketch
after this commit).

* mention Huber loss in the book

* unify naming of residuals in comments
2024-03-13 12:55:46 -05:00
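
The clamping trick the commit describes can be written in closed form: clamp the residual to [-delta, delta], and the quadratic and linear regimes fall out of a single expression with no sign or abs in the graph. A sketch of that identity (whether Burn uses exactly this algebraic form is an assumption):

```rust
/// Huber loss via a clamped residual:
/// inside [-delta, delta] this equals 0.5 * r^2,
/// outside it equals delta * (|r| - 0.5 * delta).
fn huber(residual: f32, delta: f32) -> f32 {
    let clamped = residual.clamp(-delta, delta);
    clamped * (residual - 0.5 * clamped)
}

fn main() {
    assert_eq!(huber(0.5, 1.0), 0.125); // quadratic regime
    assert_eq!(huber(3.0, 1.0), 2.5);   // linear regime: 1 * (3 - 0.5)
}
```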
carrotflakes 80aac1dde4
Add Rank0 variant to AdaptorRecordV1 and AdaptorRecordItemV1 (#1442) 2024-03-12 13:08:20 -04:00
Kyle Chen c52c49785d
Add linear learning rate scheduler (#1443) 2024-03-12 13:04:12 -04:00
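
A linear scheduler interpolates the learning rate from a start value to an end value over a fixed number of steps, then holds. A minimal sketch of that rule (Burn's configuration names are not shown; this is just the math):

```rust
/// lr(step) = start + (end - start) * min(step, total) / total
fn linear_lr(start: f64, end: f64, total_steps: usize, step: usize) -> f64 {
    let t = step.min(total_steps) as f64 / total_steps as f64;
    start + (end - start) * t
}

fn main() {
    assert_eq!(linear_lr(1e-2, 1e-3, 100, 0), 1e-2);
    assert!((linear_lr(1e-2, 1e-3, 100, 100) - 1e-3).abs() < 1e-12);
}
```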
Dilshod Tadjibaev 0138e16af6
Add Enum module support in PyTorchFileRecorder (#1436)
* Add Enum module support in PyTorchFileRecorder

Fixes #1431

* Fix wording/typos per PR feedback
2024-03-11 11:21:01 -05:00
Dilshod Tadjibaev c7d4c23f97
Support for non-contiguous indexes in PyTorchFileRecorder keys (#1432)
* Fix non-contiguous indexes

* Update pytorch-model.md

* Simplify multiple forwards
2024-03-07 13:40:57 -06:00
Dilshod Tadjibaev b12646de0a
Truncate debug display for NestedValue (#1428)
* Truncate debug display for NestedValue

* Fix failing tests
2024-03-07 08:06:31 -05:00
Dilshod Tadjibaev 545444c02a
PyTorchFileRecord print debug option (#1425)
* Add debug print option to PyTorchFileRecorder

* Updated documentation and improved print output

* Improve print wording

* Updated per PR feedback
2024-03-06 16:11:37 -06:00
Dilshod Tadjibaev d43a0b3f90
Add is_close and all_close tensor operators (#1389)
* Add is_close and all_close tensor operators

* Fix broken build issues

* Fix the table

* Add tests to candle
2024-03-01 15:37:14 -06:00
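
For context, closeness checks like these conventionally combine an absolute and a relative tolerance: |a - b| <= atol + rtol * |b|. That NumPy-style rule is sketched below; whether Burn chose identical semantics and defaults is an assumption:

```rust
/// Element-wise closeness: |a - b| <= atol + rtol * |b|.
fn is_close(a: f32, b: f32, rtol: f32, atol: f32) -> bool {
    (a - b).abs() <= atol + rtol * b.abs()
}

/// True only if the slices match in length and every pair is close.
fn all_close(a: &[f32], b: &[f32], rtol: f32, atol: f32) -> bool {
    a.len() == b.len()
        && a.iter().zip(b).all(|(&x, &y)| is_close(x, y, rtol, atol))
}
```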
Dilshod Tadjibaev 688958ee74
Enhance PyTorchRecorder to pass top-level key to extract state_dict (#1300)
* Enhance PyTorchRecorder to pass top level key to extract state_dict

This is needed for Whisper weight pt files.

* Fix missing hyphens

* Move top-level-key test under crates

* Add sub-crates as members of workspace

* Update Cargo.lock

* Add accidentally omitted line during merge
2024-02-29 12:57:27 -06:00
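
The motivation above is that some checkpoints (Whisper's .pt files among them) nest the actual weights one level down, under a key such as "model_state_dict", so the recorder must descend before reading tensors. A toy model of that idea, using plain maps rather than the recorder's real data structures:

```rust
use std::collections::HashMap;

/// A checkpoint as a nested map: the tensors live under one
/// top-level key rather than at the root.
fn extract_state_dict<'a>(
    checkpoint: &'a HashMap<String, HashMap<String, Vec<f32>>>,
    top_level_key: &str,
) -> Option<&'a HashMap<String, Vec<f32>>> {
    checkpoint.get(top_level_key)
}
```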
Yu Sun 330552afb4
docs(book-&-examples): modify book and examples with new `prelude` module (#1372) 2024-02-28 13:25:25 -05:00
Arjun31415 8e23057c6b
Feature Addition: PRelu Module (#1328) 2024-02-24 10:24:22 -05:00
Yu Sun 1da47c9bf1
feat: add prelude module for convenience (#1335) 2024-02-24 10:17:30 -05:00
Tushushu 27f2095bcd
Implement Instance Normalization (#1321)
* config

* rename to instances, otherwise it won't work

* refactor

* InstanceNormConfig

* remove unused var

* forward

* rename

* based on gn

* unit tests

* fix tests

* update doc

* update onnx doc

* renaming method

* add comment

---------

Co-authored-by: VungleTienan <tienan.liu@vungle.com>
2024-02-23 23:31:43 -06:00
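
As the "based on gn" bullet hints, instance norm is group norm with one group per channel: statistics are computed per sample, per channel, over the spatial positions only. A sketch for a single (sample, channel) slice, with the module's learnable affine parameters omitted:

```rust
/// Normalize one channel of one sample over its spatial elements.
/// `eps` guards against division by zero for constant inputs.
fn instance_norm(spatial: &[f32], eps: f32) -> Vec<f32> {
    let n = spatial.len() as f32;
    let mean = spatial.iter().sum::<f32>() / n;
    let var = spatial.iter().map(|v| (v - mean).powi(2)).sum::<f32>() / n;
    let denom = (var + eps).sqrt();
    spatial.iter().map(|v| (v - mean) / denom).collect()
}
```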
Dilshod Tadjibaev 08302e38fc
Fix broken test and run-checks script (#1347) 2024-02-23 10:06:51 -05:00
Aasheesh Singh c86db83fa9
Add support for Any, All operations to Tensor (#1342)
* add any, all op implementation for all tensor types

* add op to burn-book

* fix formatting

* refactor tensor operations from numeric to BaseOps.

* fix book doc

* comments fix and add more tests
2024-02-23 10:06:31 -05:00
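
The semantics of the two reductions above are easy to pin down: `any` is true when at least one element is true, `all` when every element is. A sketch on plain booleans, plus a row-wise variant to illustrate reducing along one dimension (the commit itself only states that the ops cover all tensor types):

```rust
/// Full-tensor reductions on a flattened boolean view.
fn any(values: &[bool]) -> bool {
    values.iter().any(|&v| v)
}

fn all(values: &[bool]) -> bool {
    values.iter().all(|&v| v)
}

/// Row-wise `any`: reduce along one axis, keep the others.
fn any_dim(rows: &[Vec<bool>]) -> Vec<bool> {
    rows.iter().map(|row| row.iter().any(|&v| v)).collect()
}
```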
Dilshod Tadjibaev d6e859330f
Pytorch message updates (#1344)
* Update pytorch-model.md

* Update error.rs
2024-02-22 12:12:50 -06:00
Guillaume Lagrange bff4961426
Add enum module support (#1337) 2024-02-21 17:03:34 -05:00
Sylvain Benner 4427768570
[refactor] Move burn crates to their own crates directory (#1336) 2024-02-20 13:57:55 -05:00