Commit Graph

30 Commits

David Chavez f73136e3df
chore(candle): Allow enabling accelerate (#1009)
* chore(candle): Allow enabling accelerate

* Temporarily disable test for accelerate feature

* Allow enabling accelerate from upstream

* Update the README

* Have xtask also test using accelerate

* Re-enable failing test

* Fix matmul on candle when using accelerate

* Add additional comment to xtask method
2023-11-30 13:03:00 -05:00
Nathaniel Simard 3d6c738776
Refactor/fusion/graph (#988) 2023-11-22 09:55:42 -05:00
Louis Fortier-Dubois 4711db0e18
bump candle to 0.3.1 and conv_transpose_1d (#977) 2023-11-21 09:13:19 -05:00
Zsombor 4fc0c27e31
Implement tensor.recip() function to calculate elementwise reciprocals (#953) 2023-11-15 09:17:32 -05:00
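The `tensor.recip()` commit above adds elementwise reciprocals. As a minimal sketch of what that operation computes (plain Rust over a slice, not the actual burn tensor implementation):

```rust
// Elementwise reciprocal, as tensor.recip() computes per element:
// out[i] = 1.0 / in[i]. This is an illustrative sketch, not burn's code.
fn recip(values: &[f32]) -> Vec<f32> {
    values.iter().map(|v| 1.0 / v).collect()
}

fn main() {
    let out = recip(&[1.0, 2.0, 4.0]);
    println!("{:?}", out); // [1.0, 0.5, 0.25]
}
```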
Dilshod Tadjibaev f53ab06efc
Pin candle-core version to "0.3.0" version (#950)
The candle-core 0.3.1 release contains breaking changes, so this is a workaround to pin to "0.3.0".
2023-11-12 17:56:30 -05:00
Nathaniel Simard 96524d40a1
[Breaking] Refactor Backend Names (#904) 2023-10-29 18:27:49 -04:00
Nathaniel Simard 233922d60c
Chore: Bump version for next release (#900) 2023-10-24 19:31:13 -04:00
louisfd d258778272 candle link 2023-10-24 18:28:10 -04:00
louisfd aaae336945 candle readme 2023-10-24 18:22:37 -04:00
Nathaniel Simard 86db5dc392
Enable candle cuda (#887) 2023-10-23 11:00:54 -04:00
Mathias Insley 07c0cf146d
Wgpu/Clamp Kernels (#866)
* Update kernel mod.rs

* Wgpu crate implementations and add shader files

* Direct backends to the correct implementation

* Use mask method for candle

* Add index out of bounds protection

* Use a macro to avoid duplication

* Use unary_scalar templates

* New shaders for clamp and clamp_inplace

* Remove unnecessary clamp shaders

* Clamp implementation and test

* Use new clamp implementation for float and int ops

* Better variable names for clamp_min/max

* Revert changes to tensor/ops/tensor.rs

* Fix clamp.wgsl

* Fix shader types

* Use native candle clamp

* Use candle ops for clamp_min/max and revert tensor.rs

* Maximum/minimum were reversed
2023-10-23 07:49:24 -04:00
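The final bullet in the clamp PR above ("Maximum/minimum were reversed") points at an easy-to-make mistake when building clamp from min/max ops. A minimal sketch of the intended pairing, in plain Rust rather than the actual wgpu/candle kernel code (function names here are illustrative):

```rust
// clamp_min keeps values >= lo, so it must use max; clamp_max keeps
// values <= hi, so it must use min. Pairing them the other way around
// (the bug the last bullet fixes) inverts the behavior.
fn clamp_min(x: f32, lo: f32) -> f32 {
    x.max(lo)
}

fn clamp_max(x: f32, hi: f32) -> f32 {
    x.min(hi)
}

fn clamp(x: f32, lo: f32, hi: f32) -> f32 {
    clamp_max(clamp_min(x, lo), hi)
}

fn main() {
    println!("{}", clamp(5.0, 0.0, 1.0));  // clamped down to 1.0
    println!("{}", clamp(-5.0, 0.0, 1.0)); // clamped up to 0.0
    println!("{}", clamp(0.5, 0.0, 1.0));  // unchanged: 0.5
}
```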
Nathaniel Simard d263968236
Refactor unfold4d + Add Module (#870) 2023-10-22 11:53:59 -04:00
Louis Fortier-Dubois 01d426236d
candle 0.3.0 (#881) 2023-10-20 17:03:47 -04:00
Mathias Insley 255dfefab2
Feat/tensor unfold (#819) 2023-10-15 17:05:34 -04:00
Dilshod Tadjibaev e2a17e4295
Add image classification web demo with WebGPU, CPU backends (#840) 2023-10-05 10:29:13 -04:00
Nathaniel Simard ca787d6446
Feat/async read (#833) 2023-09-28 17:09:58 -04:00
Louis Fortier-Dubois 8c215e8be3
Bugfix/int swap dims (#823) 2023-09-22 08:38:38 -04:00
Juliano Decico Negri 293020aae6
#384 Include tests for int.rs and float.rs (#794) 2023-09-21 09:00:09 -04:00
Nathaniel Simard af0be5cfeb
Chore: bump version (#777) 2023-09-06 12:15:13 -04:00
Louis Fortier-Dubois 419df3383a
powf and stabilize candle (#748) 2023-09-01 10:50:44 -04:00
Louis Fortier-Dubois 760c9e1d8e
Feat/candle/module ops (#725) 2023-08-30 18:53:03 -04:00
Louis Fortier-Dubois f253f19b4e
add tanh (#733) 2023-08-30 10:00:50 -04:00
Louis Fortier-Dubois 7c34e21424
Perf/tensor ops/more tests (#718) 2023-08-30 09:08:18 -04:00
Louis Fortier-Dubois c89f9969ed
Perf/tensor ops/tests (#710) 2023-08-28 12:53:17 -04:00
Louis Fortier-Dubois 88cb6b07fc
Feat/candle/more operations (#682) 2023-08-25 08:46:30 -04:00
Caio Piccirillo 2fefc82099
Dilation maxpool (#668) 2023-08-21 14:14:25 -04:00
Louis Fortier-Dubois b07af74788
support broadcast matmul (#669) 2023-08-21 11:43:21 -04:00
Louis Fortier-Dubois 6a5ea0ef7c
Feat/candle/basic operations (#664) 2023-08-20 18:55:14 -04:00
nathaniel 6b5ba77084 Fix build 2023-08-17 11:13:59 -04:00
Louis Fortier-Dubois c1eddf04fc
Feat/candle/initialize (#650) 2023-08-17 08:50:08 -04:00