Commit Graph

117 Commits

Author SHA1 Message Date
Louis Fortier-Dubois 2defd01342
Add tests: Slice assign vs Cat in LSTM backward (#1146)
* slice assign test

* added tests but no error

* test for non zero grad

* clippy

* i'm confused

* fix ci
2024-01-28 21:29:06 -05:00
Joshua Ferguson 4a70a0f8bc
renaming FloatTensor Ops, Primitives, and maybe functions (#1174) 2024-01-27 10:04:50 -05:00
Joshua Ferguson 3b7d9feede
Elementwise pow op (#1133) 2024-01-24 09:46:57 -05:00
wcshds a5bdf38c92
fix the problem of sigmoid gradient generating NaN (#1140)
* use sigmoid derivative formulas

* add test

* fix test error

* move sigmoid to tensor/ops/activation.rs

* use full precision in the default implementation

* rename the param of `sigmoid_backward`
2024-01-16 16:20:18 -05:00
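The bullets above ("use sigmoid derivative formulas", "use full precision in the default implementation") describe a numerical-stability fix. A minimal scalar sketch of the problem and the fix (illustrative f32 code, not Burn's actual tensor implementation):

```rust
// Sketch of why a sigmoid backward pass can produce NaN, and the
// derivative-formula fix. Function names here are illustrative.

fn sigmoid(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}

// Naive derivative: d/dx sigmoid(x) = e^{-x} / (1 + e^{-x})^2.
// For large negative x, exp(-x) overflows to infinity, and inf / inf = NaN.
// For example, sigmoid_grad_naive(-100.0) evaluates to NaN in f32.
fn sigmoid_grad_naive(x: f32) -> f32 {
    let e = (-x).exp();
    e / ((1.0 + e) * (1.0 + e))
}

// Stable form: sigmoid'(x) = y * (1 - y) with y = sigmoid(x).
// No division of two overflowing quantities, so no NaN:
// sigmoid_grad_stable(-100.0) is 0.0, and sigmoid_grad_stable(0.0) is 0.25.
fn sigmoid_grad_stable(x: f32) -> f32 {
    let y = sigmoid(x);
    y * (1.0 - y)
}
```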
Kirill Mavreshko 97297538b1
Remove _devauto functions (#518) (#1110) 2024-01-06 13:36:34 -05:00
Nathaniel Simard d82e6b157b
Fix tests (#1089)
* Fix tests

* Fix fmt

* Fix CI
2023-12-21 13:06:19 -05:00
Kirill Mavreshko 1fd07fcb4a
Explicit device tensors (#1081) 2023-12-20 17:49:59 -05:00
Kelvin Wu 7c6f017c98
Implement chunk for different backends (#1032) 2023-12-20 13:35:59 -05:00
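The `chunk` op split a tensor into up to `n` pieces along a dimension. A 1-D sketch of the usual semantics (each piece holds `ceil(len / n)` elements, so fewer than `n` pieces may be returned; this is illustrative code, not Burn's backend implementation):

```rust
// Illustrative 1-D `chunk` semantics; assumes `chunks >= 1` and non-empty data.
fn chunk(data: &[i32], chunks: usize) -> Vec<Vec<i32>> {
    let size = (data.len() + chunks - 1) / chunks; // ceiling division
    data.chunks(size).map(|c| c.to_vec()).collect()
}
```

For example, chunking 6 elements into 4 pieces yields three pieces of 2 elements each, since each piece gets `ceil(6 / 4) = 2` elements.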
Alex Errant 610d64095e
cargo +nightly fmt (#1017) 2023-12-12 13:29:06 -05:00
David Chavez 71d3c1d142
chore(infra): Share some properties across workspace (#1039) 2023-12-12 09:39:07 -05:00
Louis Fortier-Dubois 8fc52113bc
Chore/bump v12 (#1048) 2023-12-04 10:47:54 -05:00
Louis Fortier-Dubois 3088c466a5
patch 0.11.1 (#1047) 2023-12-04 10:18:30 -05:00
Nathaniel Simard ab1b5890f5
Chore/release (#1031) 2023-12-01 14:33:28 -05:00
Will Brickner 03af140e12
Implement Quiet Softmax (`Attention Is Off By One`) (#692)
* Added quiet_softmax

* Undid bad formatting

---------

Co-authored-by: Dilshod Tadjibaev <939125+antimora@users.noreply.github.com>
2023-11-30 12:58:30 -05:00
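Quiet softmax, from the "Attention Is Off By One" proposal referenced in the title, adds 1 to the softmax denominator so attention heads can output all-zero weights. A hedged sketch of the math (illustrative f64 code, not Burn's `quiet_softmax` implementation):

```rust
// Standard softmax with the usual max-subtraction for numerical stability.
fn softmax(xs: &[f64]) -> Vec<f64> {
    let m = xs.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = xs.iter().map(|x| (x - m).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

// Quiet softmax: e^{x_i} / (1 + sum_j e^{x_j}). After subtracting the
// max m, the extra "+1" term becomes e^{-m}. Outputs sum to less than 1,
// approaching 0 when all logits are very negative.
fn quiet_softmax(xs: &[f64]) -> Vec<f64> {
    let m = xs.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = xs.iter().map(|x| (x - m).exp()).collect();
    let sum: f64 = exps.iter().sum::<f64>() + (-m).exp();
    exps.iter().map(|e| e / sum).collect()
}
```

With a single logit of 0, standard softmax returns 1.0 while quiet softmax returns 0.5, since the denominator is 1 + e^0 = 2.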
Nathaniel Simard 3d6c738776
Refactor/fusion/graph (#988) 2023-11-22 09:55:42 -05:00
Nathaniel Simard cabbaab0c4
Fix/constant tensors (#984)
* Generalize autodiff tensor

* Can have int const module

* Update example

* Support no-std with burn-import

* Fix typos

* Fix alloc problems

* Revert burn-import changes

* Fix examples

* Support Int and Bool Params

* Fix

* Add comment
2023-11-21 15:27:28 -06:00
Luni-4 445603401d
ci/Check dependencies (#895) 2023-11-19 10:35:03 -05:00
Zsombor 4fc0c27e31
Implement tensor.recip() function to calculate elementwise reciprocals (#953) 2023-11-15 09:17:32 -05:00
chenkun 2614944afa
fix approximately equal precision issue in test code (#954) 2023-11-13 15:35:24 -05:00
Luni-4 8c80c9b94a
ci/Speed up typos checks (#907) 2023-11-02 14:30:07 -04:00
Nathaniel Simard 96524d40a1
[Breaking] Refactor Backend Names (#904) 2023-10-29 18:27:49 -04:00
Nathaniel Simard 233922d60c
Chore: Bump version for next release (#900) 2023-10-24 19:31:13 -04:00
Nathaniel Simard d263968236
Refactor unfold4d + Add Module (#870) 2023-10-22 11:53:59 -04:00
Mathias Insley 255dfefab2
Feat/tensor unfold (#819) 2023-10-15 17:05:34 -04:00
Damien Elmes 28e2a99efe
Use a u64 counter for autodiff NodeIDs (#843) 2023-10-04 11:53:34 -04:00
Nathaniel Simard ca787d6446
Feat/async read (#833) 2023-09-28 17:09:58 -04:00
Nathaniel Simard 95e660488e
Refactor/burn compute wgpu (#826) 2023-09-25 10:42:45 -04:00
Louis Fortier-Dubois 8c215e8be3
Bugfix/int swap dims (#823) 2023-09-22 08:38:38 -04:00
Nathaniel Simard af0be5cfeb
Chore: bump version (#777) 2023-09-06 12:15:13 -04:00
Nathaniel Simard c95b34c511
Book: backend extension + custom wgpu kernel (#728) 2023-08-31 09:55:43 -04:00
Louis Fortier-Dubois 7c34e21424
Perf/tensor ops/more tests (#718) 2023-08-30 09:08:18 -04:00
Louis Fortier-Dubois c89f9969ed
Perf/tensor ops/tests (#710) 2023-08-28 12:53:17 -04:00
Nathaniel Simard 084c8bb4e0
Fix: autodiff backward broadcast (#702) 2023-08-28 08:20:38 -04:00
Nathaniel Simard a25f8b224a
Fix: grad replace was adding to previous tensor (#695) 2023-08-25 14:58:18 -04:00
Nathaniel Simard 183620fb20
feat: replace grad tensor (#688) 2023-08-24 14:07:04 -04:00
Louis Fortier-Dubois bc27a87e9d
new clippy stuff (#687) 2023-08-24 13:20:58 -04:00
Caio Piccirillo 2fefc82099
Dilation maxpool (#668) 2023-08-21 14:14:25 -04:00
Nathaniel Simard bda03c6a76
Feat/avg pool/include pad config (#653) 2023-08-17 08:50:31 -04:00
Nathaniel Simard c74e75f748
Fix/wgpu/max pool2d backward (#613) 2023-08-09 16:45:49 -04:00
Caio Piccirillo cb283a9e5b
Max pool1d (#602) 2023-08-09 16:13:48 -04:00
Caio Piccirillo 1d3bbaab13
Typos (#608) 2023-08-08 17:57:51 -04:00
Nathaniel Simard 441a7011ce
Feat/tensor casting (#604) 2023-08-08 10:02:17 -04:00
Nathaniel Simard ce8a175aa4
Feat/conv transpose1d backward (#586) 2023-08-06 10:50:10 -04:00
Nathaniel Simard ca9a8808d9
Feat/adaptive avg pool1d (#585) 2023-08-04 13:55:18 -04:00
Nathaniel Simard 8436d4ff66
Feat/tensor/adaptive avg pool2d (#572) 2023-08-04 10:23:59 -04:00
Nathaniel Simard 597eab524d
Feat/conv transpose2d (#574) 2023-08-03 15:42:18 -04:00
mmalczak 73fb0eaa7e
Addition of abs tensor operator #506 (#553) 2023-08-01 18:25:14 -04:00
Dilshod Tadjibaev 74c41bdda2
Add clamp, clamp_min, clamp_max tensor ops (#550) 2023-07-26 20:02:38 -04:00
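The three ops in the title share one scalar rule applied elementwise. A quick sketch of that rule (scalar stand-ins; the actual Burn signatures operate on tensors and are not shown here):

```rust
// Clamp x into [min, max]; the one-sided variants bound only one end.
fn clamp(x: f32, min: f32, max: f32) -> f32 {
    x.max(min).min(max)
}

fn clamp_min(x: f32, min: f32) -> f32 {
    x.max(min)
}

fn clamp_max(x: f32, max: f32) -> f32 {
    x.min(max)
}
```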
Nathaniel Simard 0a5a2d729a
chore: bump version for next release (#533) 2023-07-26 09:46:28 -04:00
polina guseva 64090a582b
Add static full method for tensor initialization with custom values (#486)
---------

Co-authored-by: Dilshod Tadjibaev <939125+antimora@users.noreply.github.com>
2023-07-24 13:03:27 -04:00