Joshua Ferguson
07c69bd29a
Powf fix2 (#1193)
2024-01-31 10:19:15 -05:00
Joshua Ferguson
4a70a0f8bc
renaming FloatTensor Ops, Primitives, and maybe functions (#1174)
2024-01-27 10:04:50 -05:00
Dilshod Tadjibaev
0368409eb3
Add support for loading PyTorch `.pt` (weights/states) files directly to model's record (#1085)
2024-01-25 10:20:09 -05:00
Joshua Ferguson
3b7d9feede
Elementwise pow op (#1133)
2024-01-24 09:46:57 -05:00
Louis Fortier-Dubois
ab67b6b036
slice assign in candle (#1095)
2024-01-08 16:41:34 -05:00
Nathaniel Simard
d82e6b157b
Fix tests (#1089)
* Fix tests
* Fix fmt
* Fix CI
2023-12-21 13:06:19 -05:00
Kirill Mavreshko
1fd07fcb4a
Explicit device tensors (#1081)
2023-12-20 17:49:59 -05:00
Kelvin Wu
7c6f017c98
Implement chunk for different backends (#1032)
2023-12-20 13:35:59 -05:00
David Chavez
71d3c1d142
chore(infra): Share some properties across workspace (#1039)
2023-12-12 09:39:07 -05:00
Louis Fortier-Dubois
8fc52113bc
Chore/bump v12 (#1048)
2023-12-04 10:47:54 -05:00
Louis Fortier-Dubois
3088c466a5
patch 0.11.1 (#1047)
2023-12-04 10:18:30 -05:00
Nathaniel Simard
ab1b5890f5
Chore/release (#1031)
2023-12-01 14:33:28 -05:00
Nathaniel Simard
f6d14f1b1a
Refactor feature flags (#1025)
2023-12-01 09:48:28 -05:00
David Chavez
f73136e3df
chore(candle): Allow enabling accelerate (#1009)
* chore(candle): Allow enabling accelerate
* Temporarily disable test for accelerate feature
* Allow enabling accelerate from upstream
* Update the README
* Have xtask also test using accelerate
* Renable failing test
* Fix matmul on candle when using accelerate
* Add additional comment to xtask method
2023-11-30 13:03:00 -05:00
Nathaniel Simard
3d6c738776
Refactor/fusion/graph (#988)
2023-11-22 09:55:42 -05:00
Louis Fortier-Dubois
4711db0e18
bump candle to 0.3.1 and conv_transpose_1d (#977)
2023-11-21 09:13:19 -05:00
Zsombor
4fc0c27e31
Implement tensor.recip() function to calculate elementwise reciprocals (#953)
2023-11-15 09:17:32 -05:00
Dilshod Tadjibaev
f53ab06efc
Pin candle-core version to "0.3.0" version (#950)
Candle core 0.3.1 release contains breaking changes, so this is a workaround to pin to "0.3.0".
2023-11-12 17:56:30 -05:00
Nathaniel Simard
96524d40a1
[Breaking] Refactor Backend Names (#904)
2023-10-29 18:27:49 -04:00
Nathaniel Simard
233922d60c
Chore: Bump version for next release (#900)
2023-10-24 19:31:13 -04:00
louisfd
d258778272
candle link
2023-10-24 18:28:10 -04:00
louisfd
aaae336945
candle readme
2023-10-24 18:22:37 -04:00
Nathaniel Simard
86db5dc392
Enable candle cuda (#887)
2023-10-23 11:00:54 -04:00
Mathias Insley
07c0cf146d
Wgpu/Clamp Kernels (#866)
* Update kernel mod.rs
* Wgpu crate implementations and add shader files
* Direct backends to the correct implementation
* Use mask method for candle
* Add index out of bounds protection
* Use a macro to avoid duplication
* Use unary_scalar templates
* New shaders for clamp and clamp_inplace
* Remove unneccessary clamp shaders
* Clamp implementation and test
* Use new clamp implementation for float and int ops
* Better variable names for clamp_min/max
* Revert changes to tensor/ops/tensor.rs
* Fix clamp.wgsl
* Fix shader types
* Use native candle clamp
* Use candle ops for clamp_min/max and revert tensor.rs
* Maximum/minimum were reversed
2023-10-23 07:49:24 -04:00
Nathaniel Simard
d263968236
Refactor unfold4d + Add Module (#870)
2023-10-22 11:53:59 -04:00
Louis Fortier-Dubois
01d426236d
candle 0.3.0 (#881)
2023-10-20 17:03:47 -04:00
Mathias Insley
255dfefab2
Feat/tensor unfold (#819)
2023-10-15 17:05:34 -04:00
Dilshod Tadjibaev
e2a17e4295
Add image classification web demo with WebGPU, CPU backends (#840)
2023-10-05 10:29:13 -04:00
Nathaniel Simard
ca787d6446
Feat/async read (#833)
2023-09-28 17:09:58 -04:00
Louis Fortier-Dubois
8c215e8be3
Bugfix/int swap dims (#823)
2023-09-22 08:38:38 -04:00
Juliano Decico Negri
293020aae6
#384 Include tests for int.rs and float.rs (#794)
2023-09-21 09:00:09 -04:00
Nathaniel Simard
af0be5cfeb
Chore: bump version (#777)
2023-09-06 12:15:13 -04:00
Louis Fortier-Dubois
419df3383a
powf and stabilize candle (#748)
2023-09-01 10:50:44 -04:00
Louis Fortier-Dubois
760c9e1d8e
Feat/candle/module ops (#725)
2023-08-30 18:53:03 -04:00
Louis Fortier-Dubois
f253f19b4e
add tanh (#733)
2023-08-30 10:00:50 -04:00
Louis Fortier-Dubois
7c34e21424
Perf/tensor ops/more tests (#718)
2023-08-30 09:08:18 -04:00
Louis Fortier-Dubois
c89f9969ed
Perf/tensor ops/tests (#710)
2023-08-28 12:53:17 -04:00
Louis Fortier-Dubois
88cb6b07fc
Feat/candle/more operations (#682)
2023-08-25 08:46:30 -04:00
Caio Piccirillo
2fefc82099
Dilation maxpool (#668)
2023-08-21 14:14:25 -04:00
Louis Fortier-Dubois
b07af74788
support broadcast matmul (#669)
2023-08-21 11:43:21 -04:00
Louis Fortier-Dubois
6a5ea0ef7c
Feat/candle/basic operations (#664)
2023-08-20 18:55:14 -04:00
nathaniel
6b5ba77084
Fix build
2023-08-17 11:13:59 -04:00
Louis Fortier-Dubois
c1eddf04fc
Feat/candle/initialize (#650)
2023-08-17 08:50:08 -04:00