David Chavez
71d3c1d142
chore(infra): Share some properties across workspace ( #1039 )
2023-12-12 09:39:07 -05:00
Louis Fortier-Dubois
8fc52113bc
Chore/bump v12 ( #1048 )
2023-12-04 10:47:54 -05:00
Louis Fortier-Dubois
3088c466a5
patch 0.11.1 ( #1047 )
2023-12-04 10:18:30 -05:00
Nathaniel Simard
ab1b5890f5
Chore/release ( #1031 )
2023-12-01 14:33:28 -05:00
Will Brickner
03af140e12
Implement Quiet Softmax (`Attention Is Off By One`) ( #692 )
...
* Added quiet_softmax
* Undid bad formatting
---------
Co-authored-by: Dilshod Tadjibaev <939125+antimora@users.noreply.github.com>
2023-11-30 12:58:30 -05:00
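The quiet softmax referenced above replaces the usual denominator sum_j exp(x_j) with 1 + sum_j exp(x_j), so an attention head can output (near) zero everywhere instead of being forced to distribute a full unit of weight. A minimal plain-Rust sketch of that formula (illustrative only, not the library function added in this PR):

```rust
/// Quiet softmax: exp(x_i) / (1 + sum_j exp(x_j)).
/// The extra 1 in the denominator lets every output go to zero
/// when all logits are strongly negative.
fn quiet_softmax(xs: &[f32]) -> Vec<f32> {
    // Shift by max(0, x) for numerical stability; the implicit zero
    // logit contributing the "+1" is shifted the same way.
    let max = xs.iter().copied().fold(0.0_f32, f32::max);
    let exps: Vec<f32> = xs.iter().map(|x| (x - max).exp()).collect();
    let denom = (-max).exp() + exps.iter().sum::<f32>();
    exps.iter().map(|e| e / denom).collect()
}

fn main() {
    // A single zero logit yields 0.5 here, where regular softmax would give 1.0.
    println!("{:?}", quiet_softmax(&[0.0]));
}
```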
Nathaniel Simard
3d6c738776
Refactor/fusion/graph ( #988 )
2023-11-22 09:55:42 -05:00
Nathaniel Simard
cabbaab0c4
Fix/constant tensors ( #984 )
...
* Generalize autodiff tensor
* Can have int const module
* Update example
* Support no-std with burn-import
* Fix typos
* Fix alloc problems
* Revert burn-import changes
* Fix examples
* Support Int and Bool Params
* Fix
* Add comment
2023-11-21 15:27:28 -06:00
Luni-4
445603401d
ci/Check dependencies ( #895 )
2023-11-19 10:35:03 -05:00
Zsombor
4fc0c27e31
Implement tensor.recip() function to calculate elementwise reciprocals ( #953 )
2023-11-15 09:17:32 -05:00
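The commit above adds an elementwise reciprocal: each element x maps to 1/x. A trivial plain-Rust illustration of those semantics (not the tensor API itself):

```rust
// Elementwise reciprocal, as tensor.recip() computes per element.
fn recip(xs: &[f32]) -> Vec<f32> {
    xs.iter().map(|x| 1.0 / x).collect()
}

fn main() {
    assert_eq!(recip(&[2.0, 4.0, 0.5]), vec![0.5, 0.25, 2.0]);
}
```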
chenkun
2614944afa
fix approximate-equality precision issue in test code ( #954 )
2023-11-13 15:35:24 -05:00
Luni-4
8c80c9b94a
ci/Speed up typos checks ( #907 )
2023-11-02 14:30:07 -04:00
Nathaniel Simard
96524d40a1
[Breaking] Refactor Backend Names ( #904 )
2023-10-29 18:27:49 -04:00
Nathaniel Simard
233922d60c
Chore: Bump version for next release ( #900 )
2023-10-24 19:31:13 -04:00
Nathaniel Simard
d263968236
Refactor unfold4d + Add Module ( #870 )
2023-10-22 11:53:59 -04:00
Mathias Insley
255dfefab2
Feat/tensor unfold ( #819 )
2023-10-15 17:05:34 -04:00
Damien Elmes
28e2a99efe
Use a u64 counter for autodiff NodeIDs ( #843 )
2023-10-04 11:53:34 -04:00
Nathaniel Simard
ca787d6446
Feat/async read ( #833 )
2023-09-28 17:09:58 -04:00
Nathaniel Simard
95e660488e
Refactor/burn compute wgpu ( #826 )
2023-09-25 10:42:45 -04:00
Louis Fortier-Dubois
8c215e8be3
Bugfix/int swap dims ( #823 )
2023-09-22 08:38:38 -04:00
Nathaniel Simard
af0be5cfeb
Chore: bump version ( #777 )
2023-09-06 12:15:13 -04:00
Nathaniel Simard
c95b34c511
Book: backend extension + custom wgpu kernel ( #728 )
2023-08-31 09:55:43 -04:00
Louis Fortier-Dubois
7c34e21424
Perf/tensor ops/more tests ( #718 )
2023-08-30 09:08:18 -04:00
Louis Fortier-Dubois
c89f9969ed
Perf/tensor ops/tests ( #710 )
2023-08-28 12:53:17 -04:00
Nathaniel Simard
084c8bb4e0
Fix: autodiff backward broadcast ( #702 )
2023-08-28 08:20:38 -04:00
Nathaniel Simard
a25f8b224a
Fix: grad replace was adding to previous tensor ( #695 )
2023-08-25 14:58:18 -04:00
Nathaniel Simard
183620fb20
feat: replace grad tensor ( #688 )
2023-08-24 14:07:04 -04:00
Louis Fortier-Dubois
bc27a87e9d
new clippy stuff ( #687 )
2023-08-24 13:20:58 -04:00
Caio Piccirillo
2fefc82099
Dilation maxpool ( #668 )
2023-08-21 14:14:25 -04:00
Nathaniel Simard
bda03c6a76
Feat/avg pool/include pad config ( #653 )
2023-08-17 08:50:31 -04:00
Nathaniel Simard
c74e75f748
Fix/wgpu/max pool2d backward ( #613 )
2023-08-09 16:45:49 -04:00
Caio Piccirillo
cb283a9e5b
Max pool1d ( #602 )
2023-08-09 16:13:48 -04:00
Caio Piccirillo
1d3bbaab13
Typos ( #608 )
2023-08-08 17:57:51 -04:00
Nathaniel Simard
441a7011ce
Feat/tensor casting ( #604 )
2023-08-08 10:02:17 -04:00
Nathaniel Simard
ce8a175aa4
Feat/conv transpose1d backward ( #586 )
2023-08-06 10:50:10 -04:00
Nathaniel Simard
ca9a8808d9
Feat/adaptive avg pool1d ( #585 )
2023-08-04 13:55:18 -04:00
Nathaniel Simard
8436d4ff66
Feat/tensor/adaptive avg pool2d ( #572 )
2023-08-04 10:23:59 -04:00
Nathaniel Simard
597eab524d
Feat/conv transpose2d ( #574 )
2023-08-03 15:42:18 -04:00
mmalczak
73fb0eaa7e
Addition of abs tensor operator #506 ( #553 )
2023-08-01 18:25:14 -04:00
Dilshod Tadjibaev
74c41bdda2
Add clamp, clamp_min, clamp_max tensor ops ( #550 )
2023-07-26 20:02:38 -04:00
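For context on the clamp family added above: clamp bounds each element to [min, max], while clamp_min and clamp_max bound only one side. A plain-Rust sketch of the elementwise behaviour (illustrative, not the burn signatures):

```rust
// clamp: restrict each element to [min, max].
// clamp_min(x, min) behaves like clamp(x, min, +inf);
// clamp_max(x, max) behaves like clamp(x, -inf, max).
fn clamp(xs: &[f32], min: f32, max: f32) -> Vec<f32> {
    xs.iter().map(|x| x.clamp(min, max)).collect()
}

fn main() {
    assert_eq!(clamp(&[-1.0, 0.5, 3.0], 0.0, 1.0), vec![0.0, 0.5, 1.0]);
}
```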
Nathaniel Simard
0a5a2d729a
chore: bump version for next release ( #533 )
2023-07-26 09:46:28 -04:00
polina guseva
64090a582b
Add static full method for tensor initialization with custom values ( #486 )
...
---------
Co-authored-by: Dilshod Tadjibaev <939125+antimora@users.noreply.github.com>
2023-07-24 13:03:27 -04:00
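The full constructor above builds a tensor of a given shape with every element set to the same custom value (like ones or zeros, but with an arbitrary fill). A hypothetical plain-Rust sketch of the idea, using a flat buffer rather than the real Tensor type:

```rust
// A "full" constructor: allocate product(shape) elements, all set to `value`.
// Names here are illustrative; the real method lives on burn's Tensor type.
fn full(shape: &[usize], value: f32) -> Vec<f32> {
    vec![value; shape.iter().product()]
}

fn main() {
    assert_eq!(full(&[2, 3], 7.0), vec![7.0; 6]);
}
```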
Nathaniel Simard
d7ce52f0da
Feat/wgpu/conv ( #512 )
2023-07-20 15:14:42 -04:00
Dilshod Tadjibaev
53c088209d
Fix new clippy warnings that cause the CI to fail ( #494 )
2023-07-13 13:39:39 -04:00
Dilshod Tadjibaev
e62ee1269b
Fix burn-tch's random implementation for standard dist ( #469 )
2023-07-06 08:50:50 -04:00
Nathaniel Simard
65bf6c1cbb
Refactor index => slice ( #466 )
2023-07-05 16:30:11 -04:00
Dilshod Tadjibaev
eda241f8cf
Add missing docs and enable missing_docs warn lint ( #420 )
2023-06-21 14:12:13 -04:00
Nathaniel Simard
a8624590af
feat: mask_where ( #409 )
2023-06-18 15:04:28 -04:00
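As typically defined, the mask_where op added above selects between two tensors elementwise: where the boolean mask is true it takes the replacement value, elsewhere it keeps the original element. A small plain-Rust sketch of that selection rule (not the burn method signature):

```rust
// mask_where semantics: out[i] = if mask[i] { values[i] } else { xs[i] }.
fn mask_where(xs: &[f32], mask: &[bool], values: &[f32]) -> Vec<f32> {
    xs.iter()
        .zip(mask)
        .zip(values)
        .map(|((&x, &m), &v)| if m { v } else { x })
        .collect()
}

fn main() {
    assert_eq!(
        mask_where(&[1.0, 2.0, 3.0], &[false, true, false], &[9.0, 9.0, 9.0]),
        vec![1.0, 9.0, 3.0]
    );
}
```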
Dilshod Tadjibaev
834c7ecc1f
Clean up cargo descriptions and formatting ( #403 )
2023-06-15 09:20:53 -04:00
Nathaniel Simard
71d7ebbb21
Fix concat backward with more than 1 dim ( #402 )
2023-06-15 09:18:15 -04:00
Nathaniel Simard
ecc67c58f9
Feat/wgpu/swap dims ( #381 )
2023-06-04 19:34:35 -04:00