Commit Graph

124 Commits

Each entry below lists the author, short commit SHA, message (with any body notes), and date.
nathaniel 0a1f107e45 Add test 2024-04-25 10:46:14 -04:00
Dilshod Tadjibaev a1bd14c5ae
Reshape bug fix (#1684)
* Revert 1c639c8393

* Refactor by @laggui

* Refactor unsqueeze
2024-04-24 19:31:53 -05:00
Nathaniel Simard 886a1de235
Refactor/burn compute (#1580) 2024-04-23 13:05:15 -04:00
Sylvain Benner c579686a8a
Move HandleContainer and Tensor Ops descriptions from burn-fusion to burn-tensor (#1654)
* Move HandleContainer and Tensor Ops descriptions to burn-tensor

Move HandleContainer and Tensor operations descriptions to burn-tensor crate.
Removed the FusionDevice and replaced it with a DeviceOps trait bound to Backend::Device.

For now, the modules added to burn-tensor are excluded from no-std as they rely on Arc.

* [burn-tensor] Flatten module hierarchy for tensor representation

+ Add new repr feature to cargo file.

* Remove prefix on docstring

* [burn-fusion] Require default features of burn-tensor
2024-04-23 11:27:54 -04:00
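
The DeviceOps change above is easiest to picture as a trait bound on the backend's associated device type. The sketch below is only an illustration of that shape, not the actual burn-tensor code; everything beyond the names `DeviceOps`, `Backend::Device`, and `HandleContainer` taken from the commit is an assumption, and the `Arc` usage mirrors the no-std note.

```rust
use std::sync::Arc;

// Illustrative shape of the described design: device behavior lives in a
// DeviceOps trait bound on Backend::Device, replacing a FusionDevice type.
pub trait DeviceOps: Clone + Send + Sync {
    fn id(&self) -> usize; // hypothetical device identifier
}

pub trait Backend {
    type Device: DeviceOps;
}

// The moved representation modules rely on Arc (hence the no-std exclusion);
// a container of shared handles is sketched here purely for illustration.
pub struct HandleContainer<H> {
    handles: Vec<Arc<H>>,
}

impl<H> HandleContainer<H> {
    pub fn new() -> Self {
        Self { handles: Vec::new() }
    }

    pub fn register(&mut self, handle: H) -> usize {
        self.handles.push(Arc::new(handle));
        self.handles.len() - 1
    }
}
```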
Guillaume Lagrange e6b1b7a317
Add layer norm onnx op support (#1680) 2024-04-23 11:19:07 -04:00
Dilshod Tadjibaev 1718da5210
Fix reshape bug (support for opset version 1) (#1667)
* Make reshape op version 1

* Refactor per PR feedback
2024-04-22 17:52:25 -05:00
Nathaniel Simard 29fa2ee76c
Support linear 1d (#1682) 2024-04-22 18:39:09 -04:00
Alex Errant d62b344d5b
`Arc<EventStoreClient>` to `Rc<EventStoreClient>` (#1668) 2024-04-22 18:21:53 -04:00
신희제(Heejae Shin/joel.barish) 2a7b296a1b
Add sign ONNX op import support (#1663)
* Add sign ONNX op support

* Update SUPPORTED-ONNX-OPS.md
2024-04-22 09:10:50 -04:00
Louis Fortier-Dubois 2140d9b568
remove JIT subsequent RNG tests (#1652) 2024-04-21 09:48:11 -04:00
Dilshod Tadjibaev 1433284a0f
Fix bug 1645 (Unsqueeze OpSet 11) (#1661)
* Add unsqueeze opset 16 test

* Fix for unsqueeze opset 11

* Remove println statement
2024-04-19 14:17:44 -05:00
Guillaume Lagrange b65a487300
Fix transpose onnx op (permute) (#1657) 2024-04-19 09:34:03 -04:00
Nico Zweifel ee12aee2e7
fix: `window` -> `pub window` in `dataset/mod.rs` (#1658)
* update dataset/mod.rs

* Update mod.rs

* Update window.rs
2024-04-19 09:33:21 -04:00
Guillaume Lagrange 9fbcbed20f
Add where onnx op support (#1653)
* Add where onnx op support

* Add broadcasting support

* Remove broadcasting limitation comment

* Fix broadcasting in mask where

* Forgot to reflect changes in codegen test

* Fix clippy
2024-04-18 15:46:02 -04:00
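
For reference, ONNX Where selects elementwise between two inputs by a boolean condition. A minimal sketch of those semantics on equal-shape slices; the PR additionally handles broadcasting, which this illustration omits.

```rust
// Where semantics on equal-shape inputs: pick from x where the condition
// holds, otherwise from y. Broadcasting (added in the PR) is left out.
fn where_op<T: Copy>(cond: &[bool], x: &[T], y: &[T]) -> Vec<T> {
    cond.iter()
        .zip(x.iter().zip(y))
        .map(|(&c, (&a, &b))| if c { a } else { b })
        .collect()
}

fn main() {
    let out = where_op(&[true, false, true], &[1, 2, 3], &[10, 20, 30]);
    assert_eq!(out, vec![1, 20, 3]);
}
```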
Guillaume Lagrange 7705fd9c25
Add matmul ONNX op support (#1638)
* Mul onnx op already supported

* Add matmul onnx op checks and tests

* Add missing eq derives

* Change superscript symbol

* Remove dead code

* Add support for matmul broadcast

* No more broadcasting restrictions

* Add results comment for mm, mv and vm
2024-04-18 09:20:31 -04:00
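
The mm/mv/vm bullet refers to how MatMul output rank depends on input ranks: matrix-matrix keeps rank 2, while matrix-vector and vector-matrix drop to rank 1. A small sketch of those three shape rules, with the broadcast cases the PR also supports left out:

```rust
// ONNX MatMul output shapes for the mm, mv, and vm cases noted above.
fn matmul_out_shape(lhs: &[usize], rhs: &[usize]) -> Vec<usize> {
    match (lhs.len(), rhs.len()) {
        (2, 2) => vec![lhs[0], rhs[1]], // mm: [m,k] x [k,n] -> [m,n]
        (2, 1) => vec![lhs[0]],         // mv: [m,k] x [k]   -> [m]
        (1, 2) => vec![rhs[1]],         // vm: [k]   x [k,n] -> [n]
        _ => unimplemented!("broadcast cases omitted in this sketch"),
    }
}

fn main() {
    assert_eq!(matmul_out_shape(&[3, 4], &[4, 5]), vec![3, 5]);
    assert_eq!(matmul_out_shape(&[3, 4], &[4]), vec![3]);
    assert_eq!(matmul_out_shape(&[4], &[4, 5]), vec![5]);
}
```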
Dilshod Tadjibaev 2a721a9d0c
Enable native sign operation for Candle backend (#1647)
* Enable native sign operation for Candle backend

* Use fixed revision
2024-04-17 09:07:56 -04:00
Guillaume Lagrange 424033283a
Add reduce max ONNX op support (#1636)
* Add reduce max onnx op support

* Fix comments on tensor rank 1 result
2024-04-17 08:26:46 -04:00
Nico Zweifel 5a3f345734
WindowDataset/windows function (#1553) 2024-04-17 07:51:53 -04:00
Guillaume Lagrange 35b36bbe62
Add shape ONNX op support (#1639)
* Add shape onnx op support

* Remove cast node from onnx graph

* Fix shape implementation

* Fix shape config error message

* Fix typo

* Fix clippy type complexity for generated code
2024-04-16 09:28:21 -04:00
Guillaume Lagrange 6d96e8d808
[ONNX] Add not op and extend cast support to tensors (#1634)
* Add not onnx op support

* Extend cast onnx support to tensors

* Fix clippy
2024-04-16 08:45:25 -04:00
Mathias Insley 7377bbe31c
Feat/remainder (#1597)
* Add remainder_scalar op to numeric trait and associated int/float functions

* Update burn-tch crate

* Update ndarray crate

* Update jit crate

* Update candle crate

* Update fusion crate

* Update autodiff crate

* Forgot float.rs for fusion

* Add burn-tensor tests

* Redirect to the pre-existing modulus op

* Fix sign

* Remove mut from burn-tch

* Use sign trick to make wgpu backend work

* Add more unit tests to cover bases

* Naming fix for burn-fusion

* Update tests w/PyTorch link

* Use different WGSL instructions for remainder

* Redirect to remainder Operator instead of modulo

* Revert Modulo in instruction.rs
2024-04-16 08:35:20 -04:00
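
The remainder work above follows floored-modulus semantics (the result takes the sign of the divisor), matching the PyTorch behavior the tests link to. A scalar sketch of that definition, assuming the standard a - b * floor(a / b) form rather than the per-backend kernels (e.g. the sign trick used on wgpu):

```rust
// Floored remainder: the result has the sign of the divisor, unlike the
// truncated `%` operator. The actual ops dispatch per backend.
fn remainder_scalar(lhs: f32, rhs: f32) -> f32 {
    lhs - rhs * (lhs / rhs).floor()
}

fn main() {
    assert_eq!(remainder_scalar(5.0, 3.0), 2.0);
    assert_eq!(remainder_scalar(-5.0, 3.0), 1.0); // sign follows the divisor
}
```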
Mathias Insley 48c61ebb81
Docs/update contributor book (#1622)
* Update links to latest commit off main

* Some pedantry

* Update links and add jit

* Update instructions for burn-jit and wgpu

* Updated import section with more recent links

* Some grammar/typo/styling fixes

* Code added to burn-wgpu too
2024-04-16 08:33:59 -04:00
Guillaume Lagrange d5f20e2711
Add reduce mean ONNX op support (#1637)
* Add reduce mean onnx op support

* Fix comment
2024-04-16 07:59:35 -04:00
Dilshod Tadjibaev 340a12463a
Update SUPPORTED-ONNX-OPS.md (#1641) 2024-04-16 07:52:15 -04:00
Guillaume Lagrange 81a67b6a09
Add sin onnx op support (#1633) 2024-04-15 15:28:16 -04:00
Sylvain Benner e303e31c8b
Bump next version of Burn to 0.14.0 (#1618) 2024-04-12 17:14:45 -04:00
Guillaume Lagrange cf7b279e5e
Fix burn README symlink (#1617) 2024-04-12 16:00:47 -04:00
Guillaume Lagrange 9980db440d
Remove unused assets (#1616) 2024-04-12 15:48:16 -04:00
Guillaume Lagrange 264c167c11
Update licenses symlinks (#1613) 2024-04-12 14:43:58 -04:00
Nathaniel Simard ff844b1667
Fix candle backend sync (#1579)
* Fix candle backend sync

* tch mps sync

* clippy

---------

Co-authored-by: louisfd <louisfd94@gmail.com>
2024-04-12 12:15:50 -04:00
Aasheesh Singh fb1da53a38
Add support for rotary positional encoding to transformer modules (#1604)
* add rotary positional encoding to transformer modules.

* fix f64 error

* use num_traits

* add panic condition
2024-04-12 11:45:49 -04:00
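
Rotary positional encoding rotates pairs of query/key features by a position-dependent angle. The sketch below assumes the common interleaved-pair layout and a base of 10,000; the module added in #1604 may lay features out differently.

```rust
// Rotate consecutive feature pairs (x[2i], x[2i+1]) by an angle that decays
// with the pair index, so each pair encodes the position at its own frequency.
fn apply_rope(x: &mut [f32], position: usize, base: f32) {
    let d = x.len();
    for i in 0..d / 2 {
        let theta = (position as f32) / base.powf(2.0 * i as f32 / d as f32);
        let (sin, cos) = theta.sin_cos();
        let (a, b) = (x[2 * i], x[2 * i + 1]);
        x[2 * i] = a * cos - b * sin;
        x[2 * i + 1] = a * sin + b * cos;
    }
}

fn main() {
    let mut q = [1.0, 0.0, 1.0, 0.0];
    apply_rope(&mut q, 2, 10_000.0);
    println!("{q:?}");
}
```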
Louis Fortier-Dubois 23210f05f2
JIT: Autotune matmul tiling 2d unroll (#1601)
* autotune tiling 2d unroll

* clippy

* forgotten important stuff
2024-04-12 10:15:21 -04:00
Nathaniel Simard 07a61a1cec
Fix autodiff memory management graph cleaning (#1602) 2024-04-11 16:21:00 -04:00
Guillaume Lagrange 0cbe9a927d
Add learner training report summary (#1591)
* Add training report summary

* Fix LossMetric batch size state

* Add NumericEntry de/serialize

* Fix clippy suggestion

* Compact recorder does not use compression (anymore)

* Add learner summary expected results tests

* Add summary to learner builder and automatically display in fit

- Add LearnerSummaryConfig
- Keep track of summary metrics names
- Add model field when displaying from learner.fit()
2024-04-11 12:32:25 -04:00
Louis Fortier-Dubois bdb62fbcd0
Repeat ops autodiff & fusion + fix autodiff ones & zeros (#1600)
* Added repeat to autodiff and fusion + ones/zeros backend init in autodiff

* autodiff for repeat
2024-04-11 11:32:45 -04:00
Dilshod Tadjibaev 2f885480ed
Use num-traits for float ops (#1584) 2024-04-08 10:16:20 -05:00
Guillaume Lagrange f3e0aa6689
Add multi-label classification dataset and metric (#1572)
* Add multilabel classification dataset

- Add MultiLabel annotation support
- Refactor de/serialize annotation with AnnotationRaw
- Add ImageFolderDataset::with_items methods

* Fix custom-image-classification example deps

* Add image_folder_dataset_multilabel test

* Do not change class names order when provided

* Add hamming score and multi-label classification output

* Add new_classification_with_items test

* Fix clippy suggestions

* Implement default trait for hamming score

* Remove de/serialization and use AnnotationRaw as type

* Fix clippy

* Fix metric backend phantom data
2024-04-05 13:16:46 -04:00
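
The hamming score added here is commonly defined as the fraction of label positions predicted correctly, averaged over the batch. A plain sketch of that definition, independent of Burn's metric traits:

```rust
// Hamming score for multi-label outputs: per sample, the fraction of label
// positions that match the target, then averaged over all samples.
fn hamming_score(predictions: &[Vec<bool>], targets: &[Vec<bool>]) -> f64 {
    let per_sample = predictions.iter().zip(targets).map(|(p, t)| {
        let matches = p.iter().zip(t).filter(|(a, b)| a == b).count();
        matches as f64 / p.len() as f64
    });
    per_sample.sum::<f64>() / predictions.len() as f64
}

fn main() {
    let preds = vec![vec![true, false, true], vec![false, false, true]];
    let targets = vec![vec![true, true, true], vec![false, false, true]];
    // Sample scores: 2/3 and 1, so the batch average is 5/6.
    assert!((hamming_score(&preds, &targets) - 5.0 / 6.0).abs() < 1e-9);
}
```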
Louis Fortier-Dubois f5159b6d22
Refactor: split JitKernel and SourceKernel (#1569)
* refactor execute_dynamic into Execution

* minor change

* extension cfg

* jitkernel and sourcekernel

* add todo statement

* cleanup and docs

* update book

* fix server dependency on compiler

* refactor into shader information

* refactor to compile shader once

* clippy

* clippy

* clippy

* fix doc

* fix doc

* fmt

* rename feature flag

* refactor

* All broken

* compile at the right time

* todo done

* all dynamic

* all dynamic in template too

* fmt

* fix ci

---------

Co-authored-by: nathaniel <nathaniel.simard.42@gmail.com>
2024-04-05 12:58:10 -04:00
Nathaniel Simard 1239d9bfa3
[Breaking] Make Tensor, Module, Optimizer !Sync + Refactor Autodiff (#1575) 2024-04-04 16:01:17 -04:00
Guillaume Lagrange ce898ff899
Fix pytorch recorder adapt_linear when using autodiff backend (#1576)
* Fix pytorch recorder adapt_linear when using autodiff backend

* Fix comment typo
2024-04-04 12:29:24 -04:00
Guillaume Lagrange 0978c8a586
Support multilabel binary cross entropy (#1571)
* Support multilabel binary cross entropy

* Add missing alloc Vec
2024-04-03 08:03:07 -04:00
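
Multi-label binary cross entropy applies the binary loss independently per label on sigmoid logits. A hedged scalar sketch using the numerically stable max(x, 0) - x*y + ln(1 + e^(-|x|)) identity; the PR's tensor implementation may differ in detail:

```rust
// Binary cross entropy with logits, applied per label and averaged.
// The max/abs form avoids exp() overflow for large-magnitude logits.
fn multilabel_bce(logits: &[f32], targets: &[f32]) -> f32 {
    let total: f32 = logits
        .iter()
        .zip(targets)
        .map(|(&x, &y)| x.max(0.0) - x * y + (-x.abs()).exp().ln_1p())
        .sum();
    total / logits.len() as f32
}

fn main() {
    println!("{}", multilabel_bce(&[2.0, -1.5, 0.3], &[1.0, 0.0, 1.0]));
}
```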
Nathaniel Simard b0c5986d16
Feat/lazy init (#1539) 2024-04-02 10:13:35 -04:00
Guillaume Lagrange 8d210a152f
Move log_sigmoid to activation ops (#1558) 2024-04-02 09:25:40 -04:00
Louis Fortier-Dubois edcd92f13d
Refactor execute_dynamic with Execution struct (#1550) 2024-03-28 17:27:48 -04:00
Nathaniel Simard efc3b2d243
[Breaking] add runtime options in wgpu init methods (#1505) 2024-03-28 12:44:38 -04:00
Louis Fortier-Dubois 279be0496a
Conv Transpose: migration + decent speedup (#1541)
* convtranspose benchmark

* adjust bench

* conv transpose works

* Conv Transpose: migration + decent speedup

* delete template folder

* typos

* fix
2024-03-28 12:13:06 -04:00
Guillaume Lagrange b8fc3f141e
Numerically stable log_sigmoid (#1548) 2024-03-28 11:54:23 -04:00
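
The naive ln(sigmoid(x)) underflows to -inf for large negative inputs. The standard stable identity, log_sigmoid(x) = min(x, 0) - ln(1 + e^(-|x|)), is presumably what the PR title refers to; a sketch:

```rust
// Stable log-sigmoid: min(x, 0) - ln(1 + exp(-|x|)) keeps exp() arguments
// non-positive, so nothing overflows and nothing is lost to ln(0).
fn log_sigmoid(x: f32) -> f32 {
    x.min(0.0) - (-x.abs()).exp().ln_1p()
}

fn main() {
    let naive = (1.0f32 / (1.0 + (100.0f32).exp())).ln();
    assert_eq!(naive, f32::NEG_INFINITY); // exp(100) overflows f32
    assert!((log_sigmoid(-100.0) + 100.0).abs() < 1e-3); // stable: ~ -100
}
```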
Dilshod Tadjibaev 70b92cb2fb
Update SUPPORTED-ONNX-OPS.md (#1547) 2024-03-28 10:38:53 -05:00
Karsten Becker c21d5a3207
Add LeakyReLu implementation (#1208)
* Implement LeakyReLu

* Cargo fmt

* Apply suggestions

* cargo fmt

* Use float_mul_scalar

* Should be grad

* Add to books module

* Move test files

* Update leaky relu to use activation function

* Update tensor.md

* Fix failing test due to approx

* Add back the function comment

* Fix comment per PR feedback

---------

Co-authored-by: Dilshod Tadjibaev <939125+antimora@users.noreply.github.com>
2024-03-27 13:57:51 -05:00
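
LeakyReLU is the identity for positive inputs and a small constant slope for negative ones; the float_mul_scalar bullet above maps to the slope branch. A scalar sketch, where the 0.01 slope used in the example is an assumption rather than the PR's default:

```rust
// LeakyReLU: pass positive inputs through, scale negative ones by the slope.
fn leaky_relu(x: f32, negative_slope: f32) -> f32 {
    if x >= 0.0 { x } else { negative_slope * x }
}

fn main() {
    assert_eq!(leaky_relu(2.0, 0.01), 2.0);
    assert_eq!(leaky_relu(-2.0, 0.01), -0.02);
}
```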
jcmullwh 626457e1c6
Provide Tensor Padding Helpers #960 (#1097)
* Initial padding approach

Create padding implementation for the last two dimensions of Float and Int Tensors.

Create PadMode Enum, allowing Constant padding.

Create Padding Struct with Uniform, Asymmetric, height, and width implementations.

Create tests for the padding implementation.

* Update padding.rs

remove unneeded import

* Update from Merge

Use crate Element

Swap from old from_data() to new from_data_devauto()

* Formatting Changes

Formatting changes from cargo fmt --all

* Additional Format Change

One more format change that cargo fmt didn't get the first time.

* Changes to Example

Modify Example to ensure it works.

* modify naming

better names for impl / input variables.

* Modify API

- Change Padding to PadSize.
- integrate padding value into PadMode.
- update tests and examples.

* Comments and print

Improve comments+naming and remove println

* Pad Fixes

Moved pad to numeric

Simplified PadMode Element

updated tensor creations

fixed doc example

* Fix test location

* Simplified pad API

* Fix for failed unit tests

* Remove bool_full

* Rename `pads` to `padding`

---------

Co-authored-by: Dilshod Tadjibaev <939125+antimora@users.noreply.github.com>
2024-03-27 12:46:55 -05:00
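
The padding helpers above pad the last two dimensions with a constant value carried by PadMode. A hedged sketch of that behavior on a plain 2-D matrix; the (left, right, top, bottom) argument layout is an illustrative assumption, not the PR's API:

```rust
// Constant padding of a 2-D matrix: surround the input with `value` on all
// four sides, mirroring padding of a tensor's last two dimensions.
fn pad_constant(
    input: &[Vec<f32>],
    (left, right, top, bottom): (usize, usize, usize, usize),
    value: f32,
) -> Vec<Vec<f32>> {
    let width = input.first().map_or(0, |r| r.len()) + left + right;
    let mut out = vec![vec![value; width]; top];
    for row in input {
        let mut padded = vec![value; left];
        padded.extend_from_slice(row);
        padded.extend(std::iter::repeat(value).take(right));
        out.push(padded);
    }
    out.extend((0..bottom).map(|_| vec![value; width]));
    out
}

fn main() {
    let padded = pad_constant(&[vec![1.0, 2.0]], (1, 1, 1, 1), 0.0);
    assert_eq!(padded.len(), 3);
    assert_eq!(padded[1], vec![0.0, 1.0, 2.0, 0.0]);
}
```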