Commit Graph

16 Commits

Author SHA1 Message Date
syl20bnr 8e78106680 Bump burn version to 0.15.0 2024-08-27 15:13:40 -04:00
Bjorn Beishline 17de832c6e
Make compatible with thumbv6m-none-eabi + add raspberry pi pico example (#2096)
* Made compatible with thumbv6m-none-eabi

* Added example of no_std on rp2040

* Added documentation on usage in no_std

* Rename rp2040 example and add README.md
2024-08-23 07:39:39 -04:00
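Not part of the commit itself, but a minimal sketch of the kind of usage this no_std support enables. It assumes burn is pulled in with default-features = false and the "ndarray" feature, and that the firmware (e.g. an rp2040 project) supplies the global allocator and panic handler; type and method names are as I understand the burn ~0.14 API and are illustrative only.

```rust
// Hedged sketch: burn on the ndarray backend in a no_std library crate.
// The embedded binary that links this is assumed to provide a global
// allocator and a panic handler.
#![no_std]

extern crate alloc; // burn's no_std path relies on `alloc`

use burn::backend::NdArray;
use burn::tensor::Tensor;

type Backend = NdArray<f32>;

/// Square a small vector and sum it on the CPU, entirely without `std`.
pub fn tiny_inference() -> f32 {
    let device = Default::default();
    let x = Tensor::<Backend, 1>::from_floats([1.0, 2.0, 3.0], &device);
    let y = x.clone() * x; // element-wise square
    y.sum().into_scalar()
}
```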
mepatrick73 e1fed792f7
Gather CPA to CubeCL (#2165)
* working version

* cleanup

* wip

* working version of gather

* testsetsetser

* Revert "testsetsetser"

This reverts commit f37b329697.

* Reapply "testsetsetser"

This reverts commit f8ada0044e.

* Revert "testsetsetser"

This reverts commit f37b329697.

* Revert "working version of gather"

This reverts commit f5047c27c8.

* Revert "wip"

This reverts commit abaaa2dd55.

* Revert "Merge branch 'main' into index-cpa-to-cubecl"

This reverts commit 05bed8ea74, reversing changes made to 94954fc32c.
changes made to 94954fc32c.


* Revert "cleanup"

This reverts commit 94954fc32c.

* Revert "working version"

This reverts commit a06933f029.

* gather test

* fix

* fix clippy

* cleanup
2024-08-22 13:44:26 -04:00
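For context, a small illustrative sketch (not taken from the PR) of the user-facing gather op whose GPU kernel this entry migrates to CubeCL, shown here on the ndarray backend for simplicity. Tensor API names (`from_floats`, `from_ints`, `gather`) are assumed from burn ~0.14.

```rust
use burn::backend::NdArray;
use burn::tensor::{Int, Tensor};

// Illustrative only: gather values along dimension 1 using an index tensor
// of the same rank as the input.
fn gather_demo() {
    type B = NdArray<f32>;
    let device = Default::default();

    let x = Tensor::<B, 2>::from_floats([[1.0, 2.0], [3.0, 4.0]], &device);
    let indices = Tensor::<B, 2, Int>::from_ints([[0, 0], [1, 0]], &device);

    let gathered = x.gather(1, indices);
    println!("{gathered}"); // expected values: [[1.0, 1.0], [4.0, 3.0]]
}
```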
Nathaniel Simard ff8d0308fb
Enable cuda-jit in burn-core + in text classification example (#2160) 2024-08-12 18:22:27 -04:00
Nathaniel Simard bb4a605ca6
Chore/integrate updated cubecl (#2142) 2024-08-08 16:19:39 -04:00
Arthur Brussee 849c8f453b
Consistent sync/async handling, allow more functions to be async for wasm. (#1936) 2024-07-02 08:25:28 -04:00
Guillaume Lagrange cdd1fa1672
Refactor tensor data (#1916)
* Move distribution to module

* Add new TensorData with serialization support

* Implement display and from for TensorData

* Add missing Cargo.lock

* Add missing bytemuck feature

* Add zeros, ones, full and random TensorData methods

* Refactor Data -> TensorData usage

* Fix tests

Since TensorData is no longer generic over the element type, no type inference can be done by the compiler. We must explicitly cast the expected results to the expected backend type.

* Remove commented line

* Fix import

* Add record-backward-compat

* Remove dim const generic from TensorData

* Support NestedValue de/serialization with TensorData

* Fix burn-jit tests

* Remove eprintln

* Refactor onnx import to use TensorData

* Fix tch from_data

* Fix nested value serialization for u8

* Fix missing import

* Fix reduce min onnx test

* Fix deprecated attribute

* Remove shape getter

* Remove strict assert in tests

* Add tensor data as_bytes

* Add tensor check for rank mismatch

* Fix typo (dimensions plural)

* Fix error message

* Update book examples with from_data and fix Display impl for TensorData

* Add deprecation note
2024-06-26 20:22:19 -04:00
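The "Fix tests" note in this entry explains that expected values now need an explicit element type. A hedged sketch of what such a test might look like, assuming `TensorData::from` and `TensorData::assert_eq` as in burn ~0.14 (names illustrative, not lifted from the PR):

```rust
use burn::backend::NdArray;
use burn::tensor::{Tensor, TensorData};

// Sketch of a test after the refactor: the expected values carry an explicit
// element type (f32 here) because TensorData can no longer infer it from the
// backend the way the old generic `Data<E, D>` could.
fn add_matches_expected() {
    type B = NdArray<f32>;
    let device = Default::default();

    let a = Tensor::<B, 1>::from_floats([1.0, 2.0], &device);
    let b = Tensor::<B, 1>::from_floats([3.0, 4.0], &device);

    let output = (a + b).into_data();
    let expected = TensorData::from([4.0f32, 6.0]); // explicit f32 literals

    output.assert_eq(&expected, true); // strict: also checks the dtype
}
```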
Guillaume Lagrange e4836241e1
Fix `DataSerialize` conversion for elements of the same type (#1832) 2024-05-28 18:12:44 -04:00
Ahmed Yarub Hani Al Nuaimi 10737527d8
#1747 Upgrade Rust dependencies (#1748)
* #1747 Upgrade Rust dependencies

* Revert upgrade for tch

The update of tch on Windows gives an error:

INTEL MKL ERROR: The specified module could not be found. mkl_vml_avx2.1.dll.
Intel MKL FATAL ERROR: cannot load mkl_vml_avx2.1.dll or mkl_vml_def.1.dll.

* Keep only .cargo/config.toml file which works with rust > 1.75

---------

Co-authored-by: Sylvain Benner <sylvain@benner.online>
2024-05-10 16:25:19 -04:00
Thierry Cantin-Demers b09d8431df
Fix Cargo.toml repository links (#1749)
* Fix wgpu github link

* Fix burn-train repo link

* Fix burn-tensor github repo

* Fix burn-tensor repo link

* Fix remaining repo links in crates Cargo.toml

---------

Co-authored-by: Jonathan Richard <47578360+jwric@users.noreply.github.com>
2024-05-09 15:40:05 -04:00
Nathaniel Simard 587b8f80b3
First draft CUDA runtime (#1685)
Initial CUDA runtime crate with a WIP compiler.
2024-04-30 09:46:29 -04:00
Sylvain Benner e303e31c8b
Bump next version of Burn to 0.14.0 (#1618) 2024-04-12 17:14:45 -04:00
Dilshod Tadjibaev 2f885480ed
Use num-traits for float ops (#1584) 2024-04-08 10:16:20 -05:00
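Not from the PR itself, but a short sketch of the pattern the title names: routing float math through num-traits' `Float` trait so the same code works for f32 and f64 (and stays usable without std). The helper below is purely illustrative.

```rust
use num_traits::Float;

// Generic helper in the style the commit describes: all float ops go through
// the `Float` trait from num-traits rather than concrete f32/f64 methods.
fn softplus<F: Float>(x: F) -> F {
    // ln(1 + e^x), numerically naive but fine for illustration
    (F::one() + x.exp()).ln()
}

fn main() {
    let a: f32 = softplus(0.5f32);
    let b: f64 = softplus(0.5f64);
    println!("{a} {b}");
}
```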
Louis Fortier-Dubois f5159b6d22
Refactor: split JitKernel and SourceKernel (#1569)
* refactor execute_dynamic into Execution

* minor change

* extension cfg

* jitkernel and sourcekernel

* add todo statement

* cleanup and docs

* update book

* fix server dependency on compiler

* refactor into shader information

* refactor to compile shader once

* clippy

* clippy

* clippy

* fix doc

* fix doc

* fmt

* rename feature flag

* refactor

* All broked

* compile at the right time

* todo done

* all dynamic

* all dynamic in template too

* fmt

* fix ci

---------

Co-authored-by: nathaniel <nathaniel.simard.42@gmail.com>
2024-04-05 12:58:10 -04:00
Nathaniel Simard 1239d9bfa3
[Breaking] Make Tensor, Module, Optimizer !Sync + Refactor Autodiff (#1575) 2024-04-04 16:01:17 -04:00
Sylvain Benner 4427768570
[refactor] Move burn crates to their own crates directory (#1336) 2024-02-20 13:57:55 -05:00