Commit Graph

106 Commits

Guillaume Lagrange 35b36bbe62
Add shape ONNX op support (#1639)
* Add shape onnx op support

* Remove cast node from onnx graph

* Fix shape implementation

* Fix shape config error message

* Fix typo

* Fix clippy type complexity for generated code
2024-04-16 09:28:21 -04:00
Guillaume Lagrange 6d96e8d808
[ONNX] Add not op and extend cast support to tensors (#1634)
* Add not onnx op support

* Extend cast onnx support to tensors

* Fix clippy
2024-04-16 08:45:25 -04:00
Mathias Insley 7377bbe31c
Feat/remainder (#1597)
* Add remainder_scalar op to numeric trait and associated int/float functions

* Update burn-tch crate

* Update ndarray crate

* Update jit crate

* Update candle crate

* Update fusion crate

* Update autodiff crate

* Forgot float.rs for fusion

* Add burn-tensor tests

* Redirect to the pre-existing modulus op

* Fix sign

* Remove mut from burn-tch

* Use sign trick to make wgpu backend work

* Add more unit tests to cover all bases

* Naming fix for burn-fusion

* Update tests w/PyTorch link

* Use different WGSL instructions for remainder

* Redirect to remainder Operator instead of modulo

* Revert Modulo in instruction.rs
2024-04-16 08:35:20 -04:00
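The remainder change above adds a `remainder_scalar` op to the numeric trait and wires it through each backend. A minimal usage sketch, assuming the ndarray backend feature and the current `Tensor` API; the op follows the floored (PyTorch-style) definition `r = a - b * floor(a / b)`, so the result takes the sign of the divisor:

```rust
use burn::backend::NdArray;
use burn::tensor::Tensor;

fn main() {
    let device = Default::default();
    let x = Tensor::<NdArray, 1>::from_floats([-3.0, -1.5, 2.0, 4.5], &device);
    // Floored remainder: r = a - b * floor(a / b); the sign follows the divisor.
    let r = x.remainder_scalar(2.0);
    // Expected: [1.0, 0.5, 0.0, 0.5]
    println!("{}", r);
}
```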
Mathias Insley 48c61ebb81
Docs/update contributor book (#1622)
* Update links to latest commit off main

* Some pedantry

* Update links and add jit

* Update instructions for burn-jit and wgpu

* Updated import section with more recent links

* Some grammar/typo/styling fixes

* Code added to burn-wgpu too
2024-04-16 08:33:59 -04:00
Guillaume Lagrange d5f20e2711
Add reduce mean ONNX op support (#1637)
* Add reduce mean onnx op support

* Fix comment
2024-04-16 07:59:35 -04:00
Dilshod Tadjibaev 340a12463a
Update SUPPORTED-ONNX-OPS.md (#1641) 2024-04-16 07:52:15 -04:00
Guillaume Lagrange 81a67b6a09
Add sin onnx op support (#1633) 2024-04-15 15:28:16 -04:00
Sylvain Benner e303e31c8b
Bump next version of Burn to 0.14.0 (#1618) 2024-04-12 17:14:45 -04:00
Guillaume Lagrange cf7b279e5e
Fix burn README symlink (#1617) 2024-04-12 16:00:47 -04:00
Guillaume Lagrange 9980db440d
Remove unused assets (#1616) 2024-04-12 15:48:16 -04:00
Guillaume Lagrange 264c167c11
Update licenses symlinks (#1613) 2024-04-12 14:43:58 -04:00
Nathaniel Simard ff844b1667
Fix candle backend sync (#1579)
* Fix candle backend sync

* tch mps sync

* clippy

---------

Co-authored-by: louisfd <louisfd94@gmail.com>
2024-04-12 12:15:50 -04:00
Aasheesh Singh fb1da53a38
Add support for rotary positional encoding to transformer modules (#1604)
* add rotary positional encoding to transformer modules.

* fix f64 error

* use num_traits

* add panic condition
2024-04-12 11:45:49 -04:00
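Rotary positional encoding rotates each consecutive feature pair of the query/key vectors by a position-dependent angle, so relative positions are encoded in the dot product. The sketch below shows that rotation in plain Rust; it illustrates the underlying math only and is not the transformer-module API added by this PR:

```rust
/// Rotate one d-dimensional query/key vector in place for position `pos`.
/// Pair (2i, 2i + 1) is rotated by theta = pos / 10000^(2i / d).
fn apply_rotary(x: &mut [f32], pos: usize) {
    let d = x.len();
    for i in 0..d / 2 {
        let theta = pos as f32 / 10000f32.powf(2.0 * i as f32 / d as f32);
        let (sin, cos) = theta.sin_cos();
        let (a, b) = (x[2 * i], x[2 * i + 1]);
        x[2 * i] = a * cos - b * sin;
        x[2 * i + 1] = a * sin + b * cos;
    }
}
```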
Louis Fortier-Dubois 23210f05f2
JIT: Autotune matmul tiling 2d unroll (#1601)
* autotune tiling 2d unroll

* clippy

* forgotten important stuff
2024-04-12 10:15:21 -04:00
Nathaniel Simard 07a61a1cec
Fix autodiff memory management graph cleaning (#1602) 2024-04-11 16:21:00 -04:00
Guillaume Lagrange 0cbe9a927d
Add learner training report summary (#1591)
* Add training report summary

* Fix LossMetric batch size state

* Add NumericEntry de/serialize

* Fix clippy suggestion

* Compact recorder does not use compression (anymore)

* Add learner summary expected results tests

* Add summary to learner builder and automatically display in fit

- Add LearnerSummaryConfig
- Keep track of summary metrics names
- Add model field when displaying from learner.fit()
2024-04-11 12:32:25 -04:00
Louis Fortier-Dubois bdb62fbcd0
Repeat ops autodiff & fusion + fix autodiff ones & zeros (#1600)
* Added repeat to autodiff and fusion + ones/zeros backend init in autodiff

* autodiff for repeat
2024-04-11 11:32:45 -04:00
Dilshod Tadjibaev 2f885480ed
Use num-traits for float ops (#1584) 2024-04-08 10:16:20 -05:00
Guillaume Lagrange f3e0aa6689
Add multi-label classification dataset and metric (#1572)
* Add multilabel classification dataset

- Add MultiLabel annotation support
- Refactor de/serialize annotation with AnnotationRaw
- Add ImageFolderDataset::with_items methods

* Fix custom-image-classification example deps

* Add image_folder_dataset_multilabel test

* Do not change class names order when provided

* Add hamming score and multi-label classification output

* Add new_classification_with_items test

* Fix clippy suggestions

* Implement default trait for hamming score

* Remove de/serialization and use AnnotationRaw as type

* Fix clippy

* Fix metric backend phantom data
2024-04-05 13:16:46 -04:00
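The hamming score added alongside the multi-label dataset is, under the common definition assumed here (one minus the Hamming loss), the fraction of label positions predicted correctly, averaged over all samples. A plain-Rust sketch of that metric, not the metric struct from the PR:

```rust
/// Fraction of label positions that match the targets, over all samples.
fn hamming_score(predictions: &[Vec<bool>], targets: &[Vec<bool>]) -> f64 {
    let (mut correct, mut total) = (0usize, 0usize);
    for (pred, target) in predictions.iter().zip(targets) {
        correct += pred.iter().zip(target).filter(|(p, t)| p == t).count();
        total += target.len();
    }
    correct as f64 / total as f64
}
```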
Louis Fortier-Dubois f5159b6d22
Refactor: split JitKernel and SourceKernel (#1569)
* refactor execute_dynamic into Execution

* minor change

* extension cfg

* jitkernel and sourcekernel

* add todo statement

* cleanup and docs

* update book

* fix server dependency on compiler

* refactor into shader information

* refactor to compile shader once

* clippy

* clippy

* clippy

* fix doc

* fix doc

* fmt

* rename feature flag

* refactor

* All broken

* compile at the right time

* todo done

* all dynamic

* all dynamic in template too

* fmt

* fix ci

---------

Co-authored-by: nathaniel <nathaniel.simard.42@gmail.com>
2024-04-05 12:58:10 -04:00
Nathaniel Simard 1239d9bfa3
[Breaking] Make Tensor, Module, Optimizer !Sync + Refactor Autodiff (#1575) 2024-04-04 16:01:17 -04:00
Guillaume Lagrange ce898ff899
Fix pytorch recorder adapt_linear when using autodiff backend (#1576)
* Fix pytorch recorder adapt_linear when using autodiff backend

* Fix comment typo
2024-04-04 12:29:24 -04:00
Guillaume Lagrange 0978c8a586
Support multilabel binary cross entropy (#1571)
* Support multilabel binary cross entropy

* Add missing alloc Vec
2024-04-03 08:03:07 -04:00
Nathaniel Simard b0c5986d16
Feat/lazy init (#1539) 2024-04-02 10:13:35 -04:00
Guillaume Lagrange 8d210a152f
Move log_sigmoid to activation ops (#1558) 2024-04-02 09:25:40 -04:00
Louis Fortier-Dubois edcd92f13d
Refactor execute_dynamic with Execution struct (#1550) 2024-03-28 17:27:48 -04:00
Nathaniel Simard efc3b2d243
[Breaking] add runtime options in wgpu init methods (#1505) 2024-03-28 12:44:38 -04:00
Louis Fortier-Dubois 279be0496a
Conv Transpose: migration + decent speedup (#1541)
* convtranspose benchmark

* adjust bench

* conv transpose works

* Conv Transpose: migration + decent speedup

* delete template folder

* typos

* fix
2024-03-28 12:13:06 -04:00
Guillaume Lagrange b8fc3f141e
Numerically stable log_sigmoid (#1548) 2024-03-28 11:54:23 -04:00
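The naive `log(sigmoid(x)) = -ln(1 + e^{-x})` overflows for large-magnitude negative `x`; the standard stable rewrite is `min(x, 0) - ln(1 + e^{-|x|})`. A scalar sketch of that identity (assumed to be the rewrite used by the commit):

```rust
/// Numerically stable log-sigmoid: min(x, 0) - ln(1 + exp(-|x|)).
fn log_sigmoid(x: f64) -> f64 {
    x.min(0.0) - (-x.abs()).exp().ln_1p()
}

fn main() {
    // The naive -ln(1 + exp(-x)) already returns -inf at x = -1000.0.
    for x in [-1000.0, -1.0, 0.0, 1.0, 1000.0] {
        println!("log_sigmoid({x}) = {}", log_sigmoid(x));
    }
}
```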
Dilshod Tadjibaev 70b92cb2fb
Update SUPPORTED-ONNX-OPS.md (#1547) 2024-03-28 10:38:53 -05:00
Karsten Becker c21d5a3207
Add LeakyReLu implementation (#1208)
* Implement LeakyReLu

* Cargo fmt

* Apply suggestions

* cargo fmt

* Use float_mul_scalar

* Should be grad

* Add to books module

* Move test files

* Update leaky relu to use activation function

* Update tensor.md

* Fix failing test due to approx

* Add back the function comment

* Fix comment per PR feedback

---------

Co-authored-by: Dilshod Tadjibaev <939125+antimora@users.noreply.github.com>
2024-03-27 13:57:51 -05:00
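LeakyReLU passes positive inputs through unchanged and scales negative inputs by a small slope: `f(x) = x` for `x > 0`, otherwise `negative_slope * x`. A usage sketch; the `activation::leaky_relu` path and its `(tensor, negative_slope)` signature are assumptions based on Burn's activation module:

```rust
use burn::backend::NdArray;
use burn::tensor::{activation, Tensor};

fn main() {
    let device = Default::default();
    let x = Tensor::<NdArray, 1>::from_floats([-2.0, -0.5, 0.0, 3.0], &device);
    // f(x) = x for x > 0, negative_slope * x otherwise.
    let y = activation::leaky_relu(x, 0.01);
    // Expected: [-0.02, -0.005, 0.0, 3.0]
    println!("{}", y);
}
```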
jcmullwh 626457e1c6
Provide Tensor Padding Helpers #960 (#1097)
* Initial padding approach

Create padding implementation for the last two dimensions of Float and Int Tensors.

Create PadMode Enum, allowing Constant padding.

Create Padding Struct with Uniform, Asymmetric, height, and width implementations.

Create tests for the padding implementation.

* Update padding.rs

remove unneeded import

* Update from Merge

Use crate Element

Swap from old from_data() to new from_data_devauto()

* Formatting Changes

Formatting changes from cargo fmt --all

* Additional Format Change

One more format change that cargo fmt didn't get the first time.

* Changes to Example

Modify Example to ensure it works.

* modify naming

better names for impl / input variables.

* Modify API

- Change Padding to PadSize.
- integrate padding value into PadMode.
- update tests and examples.

* Comments and print

Improve comments+naming and remove println

* Pad Fixes

Moved pad to numeric

Simplified PadMode Element

updated tensor creations

fixed doc example

* Fix test location

* Simplified pad API

* Fix for failed unit tests

* Remove bool_full

* Rename `pads` to `padding`

---------

Co-authored-by: Dilshod Tadjibaev <939125+antimora@users.noreply.github.com>
2024-03-27 12:46:55 -05:00
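The padding helpers above pad the last two dimensions of a tensor with a constant value. A usage sketch; the `(left, right, top, bottom)` tuple and fill-value signature of `pad` are assumptions about the final API:

```rust
use burn::backend::NdArray;
use burn::tensor::Tensor;

fn main() {
    let device = Default::default();
    let x = Tensor::<NdArray, 2>::from_floats([[1.0, 2.0], [3.0, 4.0]], &device);
    // Pad the last two dims by one element on each side with zeros,
    // turning the 2x2 tensor into a 4x4 tensor.
    let padded = x.pad((1, 1, 1, 1), 0.0);
    println!("{}", padded);
}
```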
Nathaniel Simard 40a26bd2ea
Feat/backend bridge (#1529) 2024-03-26 19:24:45 -04:00
Louis Fortier-Dubois 5bac300688
Migrate/jit/interpolate (#1528)
* separate forward backward

* refactor with pool strategy

* refactor further

* pooling refactored

* refactoring for adaptive wip

* wip adaptive

* adaptive

* delete some wgsl

* avg pool backward

* clippy

* refactor interpolate files

* nearest shader

* nearest

* some boilerplate

* wip

* bilinear

* nearest backward

* cubic

* cleanup

* minor refactor

* add some white space
2024-03-26 08:57:26 -04:00
Louis Fortier-Dubois 37b61ea646
Migrate/jit/adaptive avg pool backward (#1530)
* separate forward backward

* refactor with pool strategy

* refactor further

* pooling refactored

* refactoring for adaptive wip

* wip adaptive

* adaptive

* delete some wgsl

* avg pool backward

* clippy

* minor refactor

* works

* delete wgsl
2024-03-26 08:38:06 -04:00
Aasheesh Singh a77979e0b6
Add RMS norm layer (#1527) 2024-03-25 18:59:11 -04:00
Louis Fortier-Dubois da5b0438ec
Migrate/jit/pooling (#1509)
* separate forward backward

* refactor with pool strategy

* refactor further

* pooling refactored

* refactoring for adaptive wip

* wip adaptive

* adaptive

* delete some wgsl

* avg pool backward

* clippy

* minor refactor
2024-03-25 16:04:58 -04:00
Aasheesh Singh 613e698007
Feat/swiglu (#1507) 2024-03-25 15:55:27 -04:00
Louis Fortier-Dubois 4542ceddca
Migrate/jit/conv2d (#1493)
* conv2d but bug

* convolution done

* minor clean

* delete wgsl
2024-03-25 10:45:40 -04:00
Sylvain Benner 0adda72316
[backend-comparison] Add system information to benchmark results (#1495)
* Bump sysinfo crate to 0.30.7

* [backend-comparison] Add CPUs and GPUs system info to results

* [backend-comparison] Add integrated GPUs to gathered system info

* [backend-comparison] Use AutoGraphicsApi wgpu backend selection
2024-03-22 23:24:49 -04:00
Dilshod Tadjibaev 6feda90a8c
Tensor expand operator (#1508)
* Improve CI cache - remove burn-tch artifacts

* PyTorch config deserializer from .pt file

* Update pytorch-model.md

* WIP

* Rename broadcast_to to expand

* Rename broadcast_to expand file

* Implemented fusion backend and fix bugs

* Remove old files

* Remove unused state

* Rename to the correct op name

* Add missing comment

* Fix expand check function doc

* Rename the leftover names

* Rename leftover names
2024-03-22 16:33:53 -05:00
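`expand` (renamed from `broadcast_to` in this PR) broadcasts singleton dimensions to a larger target shape. A usage sketch, assuming the method takes the target shape as an array:

```rust
use burn::backend::NdArray;
use burn::tensor::Tensor;

fn main() {
    let device = Default::default();
    // Shape [1, 3]; the singleton dim is broadcast to the target size.
    let x = Tensor::<NdArray, 2>::from_floats([[1.0, 2.0, 3.0]], &device);
    let y = x.expand([4, 3]);
    println!("{:?}", y.shape()); // [4, 3]
}
```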
Guillaume Lagrange dc45cf1700
Add `topk` tensor operation (#1497)
* Add topk and topk_with_indices

* Change topk_with_indices test to guarantee order (previously equal elements)
2024-03-22 10:57:20 -04:00
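A usage sketch of the `topk` operations above; the `(k, dim)` argument order is assumed from the commit description:

```rust
use burn::backend::NdArray;
use burn::tensor::Tensor;

fn main() {
    let device = Default::default();
    let x = Tensor::<NdArray, 1>::from_floats([1.0, 5.0, 3.0, 2.0], &device);
    // Largest k values along dim 0, with or without their indices.
    let top2 = x.clone().topk(2, 0);
    let (values, indices) = x.topk_with_indices(2, 0);
    println!("{} {} {}", top2, values, indices);
}
```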
Louis Fortier-Dubois dd699a90a2
Migrate/jit/matmul tiling 2d (#1472)
* refactor matmul files

* wip refactor matmul

* everything is memco

* support local arrays

* advancing tiling2d

* advancing tiling2d

* advancing tiling2d

* tiling2d finished but buggy

* configurable unrolling

* not bugged

* fails on unroll

* stupid break

* tiling2d no assumption works

* clippy

* bounds check as bool

* lhs rhs as enum

* tiling 2d major refactor

* remove assign vec4

* variable declarations above loops

* fmt

* clippy

* Fix autotune + unroll

* move val

* clippy

* fmt

---------

Co-authored-by: nathaniel <nathaniel.simard.42@gmail.com>
2024-03-22 08:26:32 -04:00
Sylvain Benner 0a8a3cc9e9
[xtask] Add support for cargo metadata new workspace member format (#1500) 2024-03-21 16:04:52 -04:00
Guillaume Lagrange 3e4af41694
Fix sort descending for 1d case (#1494) 2024-03-21 07:45:37 -04:00
Guillaume Lagrange 47a84cc980
Add tensor sorting operations (#1488)
* Add sort, sort_with_indices and argsort

* Fix wasm build

* Add sort ops autodiff

* Fix TODO parallel comment

* Fix sort_with_indices 1d and add descending options

* Fix clippy

* Fix ascending comment (configurable)
2024-03-20 14:51:04 -04:00
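A usage sketch of the sorting ops above: `sort` and `sort_with_indices` order values along a dimension (ascending by default, with descending variants), while `argsort` returns the indices only. Method names and the `dim` argument are assumed from the commit messages:

```rust
use burn::backend::NdArray;
use burn::tensor::Tensor;

fn main() {
    let device = Default::default();
    let x = Tensor::<NdArray, 1>::from_floats([3.0, 1.0, 2.0], &device);
    let sorted = x.clone().sort(0);                          // values only
    let (values, indices) = x.clone().sort_with_indices(0);  // values + indices
    let order = x.argsort(0);                                // indices only
    println!("{} {} {} {}", sorted, values, indices, order);
}
```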
Guillaume Lagrange 430f642394
Change assert_approx_eq precision from 3 to 2 (#1491) 2024-03-19 12:26:21 -04:00
Rubén J.R 69f1877754
New learning rate schedulers (#1481) 2024-03-19 08:28:42 -05:00
carrotflakes 8911093b88
Add `flip` tensor operator (#1468) 2024-03-18 20:33:39 -05:00
Dilshod Tadjibaev 8a8300c1fb
Add tril_mask, triu_mask and diag_mask ops (#1479) 2024-03-18 10:15:40 -05:00