dengelt
e255472826
Fix typos (#949)
2023-11-12 15:08:39 -05:00
Nathaniel Simard
322480b744
Feat/op fusion decorator (#939)
...
* WIP
* Impl backend decorator
* WIP
* WIP
* WIP
* WIP
* WIP
* WIP
* Refactor
* Handle graph single ops execution
* WIP
* Starting to get concrete
* WIP
* Fix locator
* Implement add ops
* Start implementing ops
* Add more ops
* Add more ops
* More float ops
* Almost finished float ops
* Almost done with Int
* Some fix
* Into float
* Implement bool ops
* Almost done with MVP
* Fix adaptive pooling
* Add fusion as backend
* Fix memory leak
* Fix
* WIP Doc
* Doc all ops enum
* Initial docs
* Clippy
* Clippy v2
* Fix typos
* Fix doc
* Fix feature flags
* Add missing ops
* Some cleanup
* Revert u128 id
* cosmetic fixes
---------
Co-authored-by: louisfd <louisfd94@gmail.com>
2023-11-09 21:21:41 -05:00
Louis Fortier-Dubois
6011ae01fd
Update burn-compute README.md to check autotune
2023-11-07 12:26:42 -05:00
Nathaniel Simard
c4bc96e27f
Better settings (#933)
2023-11-07 07:34:39 -05:00
Louis Fortier-Dubois
a0297530ea
Autotune: fix inputs (#926)
2023-11-06 08:59:31 -05:00
Louis Fortier-Dubois
6548f1a730
add needed lines (#927)
2023-11-03 09:55:33 -04:00
Aisuko
4e8b573f13
Fixed the wrong order of the attributes (#930)
...
Signed-off-by: GitHub <noreply@github.com>
2023-11-03 09:21:58 -04:00
Nathaniel Simard
dddc138757
Add warmup logic when calculating ETA (#923)
2023-11-03 08:57:09 -04:00
Louis Fortier-Dubois
2ac348c604
fix singular in estimated time (#928)
2023-11-03 08:52:48 -04:00
Louis Fortier-Dubois
1cc1844d32
Refactor/autotune/key (#924)
2023-11-03 08:46:25 -04:00
Luni-4
8c80c9b94a
ci/Speed up typos checks (#907)
2023-11-02 14:30:07 -04:00
Louis Fortier-Dubois
35df31f700
Perf/wgpu/matmul unpadded (#922)
2023-11-01 16:37:33 -04:00
Nathaniel Simard
64e58b4463
Make ndarray tensor public (#920)
2023-11-01 13:31:21 -04:00
Louis Fortier-Dubois
8742d31d16
Perf/wgpu/matmul vec4rhs (#914)
2023-10-31 08:37:17 -04:00
Nathaniel Simard
96524d40a1
[Breaking] Refactor Backend Names (#904)
2023-10-29 18:27:49 -04:00
Louis Fortier-Dubois
e2a3329997
Feat/wgpu/autotune compute (#906)
2023-10-29 16:44:59 -04:00
Arvid Hammarlund
a9567ab252
Fixing Docs.rs (#905)
2023-10-26 15:26:11 -04:00
Luni-4
7332ebcabf
ci/Add coverage as xtask task (#902)
2023-10-26 12:45:08 -04:00
nathaniel
c0f836a94d
Update docs link
2023-10-25 11:09:06 -04:00
Louis Fortier-Dubois
068b460078
add bump command (#901)
2023-10-25 08:59:46 -04:00
Nathaniel Simard
233922d60c
Chore: Bump version for next release (#900)
2023-10-24 19:31:13 -04:00
nathaniel
4eb69735e4
Fix publish workflow
2023-10-24 19:26:21 -04:00
louisfd
d258778272
candle link
2023-10-24 18:28:10 -04:00
louisfd
68e41d744f
Merge branch 'main' of github.com:burn-rs/burn
2023-10-24 18:22:48 -04:00
louisfd
aaae336945
candle readme
2023-10-24 18:22:37 -04:00
nathaniel
cfb3157e04
Fix publish workflow
2023-10-24 18:08:19 -04:00
Luni-4
9f4eec7fe5
ci: Do not consider `examples` folder for coverage (#898)
2023-10-24 17:25:04 -04:00
Louis Fortier-Dubois
e76b6d47de
WGPU: matmul vec4 (#897)
2023-10-24 17:23:43 -04:00
Louis Fortier-Dubois
0ab611b42e
AdamW NaN fix (#888)
2023-10-24 14:48:40 -04:00
Nathaniel Simard
1fd59552db
[Burn-Tensor] Add clone invariance (#891)
...
* [Burn-Tensor] Add clone invariance
* Fix div by zero
2023-10-24 14:45:56 -04:00
nathaniel
ae0de594fd
CI: Update publish step
2023-10-24 14:37:27 -04:00
Alex Errant
9f2bc599b8
Add a `sync` feature to common, core, and tensor (#893)
2023-10-24 14:32:01 -04:00
nathaniel
d021c7d7e8
Remove wrong comments
2023-10-24 11:55:39 -04:00
Luni-4
aa1f3e3f92
ci/Add filters (#892)
2023-10-24 11:12:33 -04:00
Luni-4
9add42442f
Generalize model usage in burn-import README (#889)
2023-10-24 09:53:42 -04:00
Nathaniel Simard
84df5554b1
Use const seed (#894)
2023-10-24 09:53:11 -04:00
Luni-4
38e88a79bd
ci: Implement source-code coverage (#890)
2023-10-23 14:15:14 -04:00
Louis Fortier-Dubois
d96f73da0a
Feat/compute/autotune (#861)
...
* wip autotune compute
* too much generics
* wip
* megawip
* in progress
* first test passes
* first test passes
* fixed test
* refactor for cache hit and miss
* cleanup and fixes
* doc and stuff
* doc and stuff
* clippy
* format
* remove lifetime
* cleanup operation
* wip
* wip
* compiles
* wip mutable borrow
* refactor with autotune server
* wip tune benchmark
* test passes
* fix autotune key
* cache hit miss tests
* refactor wgpu to match burn-compute
* better operation execution
* cleanup & refactor
* test for parameterized kernel
* fmt
* fmt
* clippy
* allow clippy
* fix no-std
* fmt
* review and ci
* Fix CI
* delete dummy benchmarks again
---------
Co-authored-by: nathaniel <nathaniel.simard.42@gmail.com>
2023-10-23 11:29:44 -04:00
Nathaniel Simard
86db5dc392
Enable candle cuda (#887)
2023-10-23 11:00:54 -04:00
Nathaniel Simard
80fe58c604
[Burn-train] Improve panic messages (#885)
...
* [Burn-train] Improve panic messages
* Add new to in-memory logger
2023-10-23 10:49:46 -04:00
Louis Fortier-Dubois
e4d9d67526
make candle available (#886)
2023-10-23 10:00:39 -04:00
Mathias Insley
07c0cf146d
Wgpu/Clamp Kernels (#866)
...
* Update kernel mod.rs
* Wgpu crate implementations and add shader files
* Direct backends to the correct implementation
* Use mask method for candle
* Add index out of bounds protection
* Use a macro to avoid duplication
* Use unary_scalar templates
* New shaders for clamp and clamp_inplace
* Remove unnecessary clamp shaders
* Clamp implementation and test
* Use new clamp implementation for float and int ops
* Better variable names for clamp_min/max
* Revert changes to tensor/ops/tensor.rs
* Fix clamp.wgsl
* Fix shader types
* Use native candle clamp
* Use candle ops for clamp_min/max and revert tensor.rs
* Maximum/minimum were reversed
2023-10-23 07:49:24 -04:00
Nathaniel Simard
d263968236
Refactor unfold4d + Add Module (#870)
2023-10-22 11:53:59 -04:00
Damien Elmes
4cb27d289a
Fix train-minimal breakage (#882)
...
* Fix train-minimal breakage
* Ensure examples get checked in CI
2023-10-22 11:17:36 -04:00
Louis Fortier-Dubois
01d426236d
candle 0.3.0 (#881)
2023-10-20 17:03:47 -04:00
Nathaniel Simard
af813d09ed
Feat/early stopping + burn train refactor (#878)
2023-10-20 11:47:31 -04:00
Christophe Biocca
3eb7f380f3
Also consider devices of type Other when trying to find the best device. (#875)
2023-10-18 20:44:07 -04:00
Mathias Insley
1962c06c21
Bug/lstm unsqueeze (#873)
2023-10-18 18:53:37 -04:00
Nathaniel Simard
dd4e72a98f
Feat/checkpoint criteria (#862)
...
* WIP
* Setup
* Test metrics
* Fix bug
* Cleanup
2023-10-17 09:03:11 -04:00
blueyellowpink
8226460e6d
tensor pretty print & summarize (#869)
2023-10-16 12:59:13 -04:00