Commit Graph

144 Commits

Guillaume Lagrange 0d5025edbb
Refactor tensor quantization for q_* ops (#2025)
* Move QuantizationScheme to burn-tensor

* Refactor QuantizedTensorPrimitive to include the quantization strategy

* Fix QFloat tensor data display

* Refactor quantization methods to use scheme and qparams (on backend device)

* Fix clippy

* Fix fmt

* Add qtensor primitive tests
2024-07-19 10:39:50 -04:00
Sylvain Benner 0e77e19635
Remove mention of example in backend section of the book (#2014) 2024-07-15 09:34:40 -04:00
Guillaume Lagrange 3afff434bd
Module weight quantization (#2000)
* Add q_into_data and q_reshape

* Fix tch quantize f16 and q_into_data

* Convert to actual dtype/kind in dequantize

* Add module quantization and q_from_data

* Fix clippy

* Add documentation

* Handle deserialize data conversion

* Fix typo

* Add calibration tests

* Fix clippy precision

* Add QTensorOps require_grad methods to avoid dequantizing

* Add Dequantize mapper docs

* Remove dead code
2024-07-15 08:20:37 -04:00
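For context, the module weight quantization added here is driven by a quantizer that calibrates and converts the float weights. A minimal sketch in Rust, assuming the `Quantizer`, `Calibration::MinMax`, `QuantizationScheme`, and `quantize_weights` names described in the Burn book's quantization page (field names and enum variants have shifted between releases):

```rust
use burn::module::{Module, Quantizer};
use burn::tensor::backend::Backend;
use burn::tensor::quantization::{Calibration, QuantizationScheme, QuantizationType};

/// Quantize a module's float weights to 8-bit integers (sketch; exact names are assumptions).
fn quantize<B: Backend, M: Module<B>>(model: M) -> M {
    let mut quantizer = Quantizer {
        // Min-max calibration derives the quantization range from each weight tensor.
        calibration: Calibration::MinMax,
        // Per-tensor symmetric 8-bit scheme.
        scheme: QuantizationScheme::PerTensorSymmetric(QuantizationType::QInt8),
    };
    model.quantize_weights(&mut quantizer)
}
```
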
Guillaume Lagrange c0211e2f94
Add static tensor quantization (#1963)
* Add QuantizationBackend, QTensorOps and QTensor

* Refactor QTensorOps as part of Backend trait

* Add tensor dequantize, QFloat dtype and default affine/symmetric quant

* Add ndarray default quantization implementation

* Fix clippy

* Add rayon parallel iter

* Add quantization operations to book

* Add q_shape and q_device ops to avoid converting the tensor just to get attributes

* Implement autodiff grad ops

* Mark autodiff todo for QAT

* Remove note

* Add q_inner and q_from_inner
2024-07-08 10:16:58 -04:00
Arthur Brussee 3f9e97946f
Feat: Dynamic cube count dispatch (#1975) 2024-07-06 19:17:01 -04:00
nathaniel 882a27c52c Revert "Revert "Implement 3D and transposed 3D convolutions. (#1945)""
This reverts commit b8b47ea6e6.
2024-07-05 18:57:01 -04:00
nathaniel 1ad2a63f28 Fix typo 2024-07-05 09:40:32 -04:00
nathaniel b8b47ea6e6 Revert "Implement 3D and transposed 3D convolutions. (#1945)"
This reverts commit d696d74e3d.
2024-07-05 09:40:32 -04:00
Guillaume Lagrange 5236e12c81
Add models and examples reference (#1966)
Co-authored-by: Sylvain Benner <sylvain@benner.online>

---------

Co-authored-by: Sylvain Benner <sylvain@benner.online>
2024-07-04 16:22:08 -04:00
Guillaume Charifi d696d74e3d
Implement 3D and transposed 3D convolutions. (#1945)
* Implement 3D and transposed 3D convolutions.

* Merge changes from onnx-ir #1921 pr

---------

Co-authored-by: Dilshod Tadjibaev <939125+antimora@users.noreply.github.com>
2024-07-02 17:54:35 -05:00
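As a usage reference for the new 3D convolution, a minimal sketch assuming a `Conv3dConfig`/`Conv3d` pair in `burn::nn::conv` that mirrors the existing 2D API (channel counts and kernel size are illustrative):

```rust
use burn::nn::conv::{Conv3d, Conv3dConfig};
use burn::tensor::{backend::Backend, Tensor};

/// Apply a 3D convolution to a [batch, channels, depth, height, width] input (sketch).
fn conv3d_example<B: Backend>(input: Tensor<B, 5>, device: &B::Device) -> Tensor<B, 5> {
    // 4 input channels -> 8 output channels with a 3x3x3 kernel.
    let conv: Conv3d<B> = Conv3dConfig::new([4, 8], [3, 3, 3]).init(device);
    conv.forward(input)
}
```
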
Dilshod Tadjibaev 6f2ba34382
Print module part3 - Update book (#1940)
* Update book example guide

* Update Module book section on module display
2024-07-01 12:42:17 -05:00
Guillaume Lagrange cdd1fa1672
Refactor tensor data (#1916)
* Move distribution to module

* Add new TensorData with serialization support

* Implement display and from for TensorData

* Add missing Cargo.lock

* Add missing bytemuck feature

* Add zeros, ones, full and random TensorData methods

* Refactor Data -> TensorData usage

* Fix tests

Since TensorData is no longer generic over the element type, the compiler cannot infer the element type anymore. We must explicitly cast the expected results to the expected backend type.

* Remove commented line

* Fix import

* Add record-backward-compat

* Remove dim const generic from TensorData

* Support NestedValue de/serialization with TensorData

* Fix burn-jit tests

* Remove eprintln

* Refactor onnx import to use TensorData

* Fix tch from_data

* Fix nested value serialization for u8

* Fix missing import

* Fix reduce min onnx test

* Fix deprecated attribute

* Remove shape getter

* Remove strict assert in tests

* Add tensor data as_bytes

* Add tensor check for rank mismatch

* Fix typo (dimensions plural)

* Fix error message

* Update book examples with from_data and fix Display impl for TensorData

* Add deprecation note
2024-06-26 20:22:19 -04:00
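To illustrate the `Data` -> `TensorData` change, a small sketch of creating a tensor from `TensorData` and reading it back with an explicit element type (the `as_slice` accessor is assumed from this refactor):

```rust
use burn::tensor::{backend::Backend, Tensor, TensorData};

/// Round-trip values through the non-generic `TensorData` type (sketch).
fn tensor_data_roundtrip<B: Backend>(device: &B::Device) {
    let data = TensorData::new(vec![1.0f32, 2.0, 3.0, 4.0], [2, 2]);
    let tensor = Tensor::<B, 2>::from_data(data, device);

    // TensorData is no longer generic over the element type, so the expected
    // type must be spelled out when extracting the values.
    let values = tensor.into_data();
    assert_eq!(values.as_slice::<f32>().unwrap(), [1.0, 2.0, 3.0, 4.0]);
}
```
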
towerpark 3faf544bc4
Book: Fix the link to burn-train in "Learner" page (#1920)
Add the missing "crates/" to the link.

Co-authored-by: towerpark <t56ouhw1d@mozmail.com>
2024-06-25 09:15:34 -04:00
Arthur Brussee ac9f942a46
Remove GraphicsAPI generic for WgpuRuntime (#1888) 2024-06-17 09:04:25 -04:00
George b71c300638
Feat: Add `movedim` tensor operator (#1876)
* (burn-tensor): add movedim function to tensor API

---------

Co-authored-by: Georgy Andreev <g.andreev@insilicomedicine.com>
2024-06-14 09:01:38 -04:00
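A short sketch of the new `movedim` operator; the `(source, destination)` argument order is assumed to follow the PyTorch-style API it mirrors:

```rust
use burn::tensor::{backend::Backend, Tensor};

/// Move dimension 0 to position 2: a [2, 3, 4] tensor becomes [3, 4, 2] (sketch).
fn movedim_example<B: Backend>(tensor: Tensor<B, 3>) -> Tensor<B, 3> {
    tensor.movedim(0, 2)
}
```
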
towerpark 9a32e53e65
Book: Fix typos in the name of MessagePack format (#1868)
In the Record page, there are 4 places where "Pack" is written as
"Park." Also, the fix makes them the same as the official name, which
has no space and is in CamelCase.

Co-authored-by: towerpark <t56ouhw1d@mozmail.com>
2024-06-10 13:30:53 -04:00
Zirconium409122 e407c76def
Remainder operator doc (#1836)
* Adds remainder ops implementation for Tensor.

* Adds test for % operator.

* Add remainder and % operator entry in tensor.md

---------

Co-authored-by: Jonas Kantic <jk.mail@posteo.net>
2024-06-01 16:49:54 -05:00
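A minimal sketch of the `%` operator documented here, assuming it accepts a scalar right-hand side as described in the tensor.md entry:

```rust
use burn::tensor::{backend::Backend, Tensor};

/// Element-wise remainder against a scalar using the `%` operator (sketch).
fn remainder_example<B: Backend>(tensor: Tensor<B, 2>) -> Tensor<B, 2> {
    tensor % 2.0
}
```
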
Nathaniel Simard 36d4bcd705
[Refactor - Breaking] Refactor cube operations with better names & Support subgroup operations (#1839) 2024-05-31 17:07:21 -04:00
McArthur a2ad424fc8
Indices Operator (#1735) 2024-05-29 09:05:31 -04:00
Guillaume Lagrange e4836241e1
Fix `DataSerialize` conversion for elements of the same type (#1832) 2024-05-28 18:12:44 -04:00
Jonathan Richard 8de05e1419
Add configurable application logger to learner builder (#1774)
* refactor: add TracingSubscriberLogger trait and FileTracingSubscriberLogger struct

* Remove unused log module and renames, fmt

* Renamed tracing subscriber logger

* renamed to application logger installer

* book learner configuration update

* fix typo

* unused import
2024-05-16 16:25:33 -04:00
Ahmed Yarub Hani Al Nuaimi 10737527d8
#1747 Upgrade Rust dependencies (#1748)
* #1747
Upgrade Rust dependencies

* Revert upgrade for tch

The update of tch on Windows gives an error:

INTEL MKL ERROR: The specified module could not be found. mkl_vml_avx2.1.dll.
Intel MKL FATAL ERROR: cannot load mkl_vml_avx2.1.dll or mkl_vml_def.1.dll.

* Keep only .cargo/config.toml file which works with rust > 1.75

---------

Co-authored-by: Sylvain Benner <sylvain@benner.online>
2024-05-10 16:25:19 -04:00
Jonathan Richard e233c38b0f
Add hidden code snippets to guide example in Burn book [redo] (#1742)
* added hidden code snippets in Burn book guide example

* Update backend.md

* Revert last commit
2024-05-06 20:29:28 -04:00
mepatrick73 adbe97dc4d
Fixing various syntax errors in the Burn book (#1740) 2024-05-06 17:25:22 -04:00
Sylvain Benner 1f8b5d3efb
[guide] Remove ambiguity lib vs. executable (#1649) 2024-04-26 15:42:02 -04:00
WU Chen b387829731
Implement bidirectional LSTM (#1035)
* resolve conflict

* move `gate_product` to `GateController`

* BiLstm needs to use its own initializer when initialized

* resolve conflicts

* add some comments

* improve doc

* correct the description of GateController

* fix fmt

* add `LstmState`

* add test for state

* set batch 2 in bilstm test

* resolve conflict

* fix

* fix doc

* change the batch size back to 1

* change the batch size back to 1

* modify docstring; delete dead comment
2024-04-26 13:28:36 -05:00
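For reference, a minimal sketch of the bidirectional LSTM added here, assuming `BiLstmConfig`/`BiLstm` are exported from `burn::nn` and that `forward` takes an optional `LstmState` as introduced in this PR:

```rust
use burn::nn::{BiLstm, BiLstmConfig};
use burn::tensor::{backend::Backend, Tensor};

/// Run a bidirectional LSTM over a [batch, seq_length, input_size] input (sketch).
fn bilstm_example<B: Backend>(input: Tensor<B, 3>, device: &B::Device) -> Tensor<B, 3> {
    // 32 input features, 64 hidden units, with bias.
    let lstm: BiLstm<B> = BiLstmConfig::new(32, 64, true).init(device);
    // `None` lets the layer create its own initial state; the returned `LstmState`
    // can be fed back in for stateful processing.
    let (output, _state) = lstm.forward(input, None);
    // Both directions are concatenated: output shape is [batch, seq_length, 2 * 64].
    output
}
```
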
Guillaume Lagrange fd26c1a241
Fix ONNX and PyTorch import section links in burn book (#1681) 2024-04-22 18:38:05 -04:00
Sylvain Benner 2d264e9a74
[burn-book] Fix broken URL to SUPPORTED-ONNX-OPS.md (#1651) 2024-04-17 08:15:32 -04:00
Sylvain Benner e700aa0cbf
[burn-book] Fix typos in getting started (#1650) 2024-04-17 08:04:20 -04:00
Nico Zweifel 5a3f345734
WindowDataset/windows function (#1553) 2024-04-17 07:51:53 -04:00
Guillaume Lagrange 0ee2021567
Fix guide project name in the book (#1631) 2024-04-16 09:38:13 -04:00
Gadersd 1235b06e25
Improve grammar (#1619) 2024-04-16 09:32:42 -04:00
Sylvain Benner e303e31c8b
Bump next version of Burn to 0.14.0 (#1618) 2024-04-12 17:14:45 -04:00
Guillaume Lagrange 63947d20a2
Fix missing clone derive for MnistBatcher in the book (#1608) 2024-04-12 12:16:28 -04:00
Aasheesh Singh fb1da53a38
Support rotary positional encoding in transformer modules. (#1604)
* add rotary positional encoding to transformer modules.

* fix f64 error

* use num_traits

* add panic condition
2024-04-12 11:45:49 -04:00
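A rough sketch of applying the rotary positional encoding to attention queries, assuming a `RotaryEncodingConfig`/`RotaryEncoding` module in `burn::nn` (constructor arguments and expected input layout are assumptions):

```rust
use burn::nn::{RotaryEncoding, RotaryEncodingConfig};
use burn::tensor::{backend::Backend, Tensor};

/// Rotate query (or key) projections by position (sketch).
fn rope_example<B: Backend>(queries: Tensor<B, 4>, device: &B::Device) -> Tensor<B, 4> {
    // Max sequence length 512, head dimension 64 (illustrative values).
    let rope: RotaryEncoding<B> = RotaryEncodingConfig::new(512, 64).init(device);
    rope.forward(queries)
}
```
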
Guillaume Lagrange 06ce2b02d6
Fix missing device in custom training loop book example (#1606) 2024-04-12 10:34:04 -04:00
Guillaume Lagrange 0cbe9a927d
Add learner training report summary (#1591)
* Add training report summary

* Fix LossMetric batch size state

* Add NumericEntry de/serialize

* Fix clippy suggestion

* Compact recorder does not use compression (anymore)

* Add learner summary expected results tests

* Add summary to learner builder and automatically display in fit

- Add LearnerSummaryConfig
- Keep track of summary metrics names
- Add model field when displaying from learner.fit()
2024-04-11 12:32:25 -04:00
M S Hrishikesh 80a41b810e
Fixes to code examples in section 5.2 (#1594)
* Fixes to code examples in section 5.2

* A more generic way to get a device for code examples in Burn book section 5.2

* Change run-checks instruction + fix comment spacing

---------

Co-authored-by: hrishim <hrishim@gail.com>
Co-authored-by: Guillaume Lagrange <lagrange.guillaume.1@gmail.com>
2024-04-09 11:12:37 -04:00
Louis Fortier-Dubois f5159b6d22
Refactor: split JitKernel and SourceKernel (#1569)
* refactor execute_dynamic into Execution

* minor change

* extension cfg

* jitkernel and sourcekernel

* add todo statement

* cleanup and docs

* update book

* fix server dependency on compiler

* refactor into shader information

* refactor to compile shader once

* clippy

* clippy

* clippy

* fix doc

* fix doc

* fmt

* rename feature flag

* refactor

* All broked

* compile at the right time

* todo done

* all dynamic

* all dynamic in template too

* fmt

* fix ci

---------

Co-authored-by: nathaniel <nathaniel.simard.42@gmail.com>
2024-04-05 12:58:10 -04:00
Dilshod Tadjibaev beff9a8c57
Update pytorch-model.md (#1570) 2024-04-02 14:17:51 -05:00
Nathaniel Simard b0c5986d16
Feat/lazy init (#1539) 2024-04-02 10:13:35 -04:00
Dilshod Tadjibaev edc683bc4b
Update module.md (#1557) 2024-03-29 13:14:01 -04:00
Karsten Becker c21d5a3207
Add LeakyReLu implementation (#1208)
* Implement LeakyReLu

* Cargo fmt

* Apply suggestions

* cargo fmt

* Use float_mul_scalar

* Should be grad

* Add to books module

* Move test files

* Update leaky relu to use activation function

* Update tensor.md

* Fix failing test due to approx

* Add back the function comment

* Fix comment per PR feedback

---------

Co-authored-by: Dilshod Tadjibaev <939125+antimora@users.noreply.github.com>
2024-03-27 13:57:51 -05:00
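A short usage sketch for the new LeakyReLU module, assuming the derived `LeakyReluConfig` builder with a `negative_slope` field and a parameter-free `init()`:

```rust
use burn::nn::LeakyReluConfig;
use burn::tensor::{backend::Backend, Tensor};

/// Apply LeakyReLU with a custom negative slope (sketch).
fn leaky_relu_example<B: Backend>(input: Tensor<B, 2>) -> Tensor<B, 2> {
    // y = x for x >= 0, y = 0.1 * x otherwise.
    let activation = LeakyReluConfig::new().with_negative_slope(0.1).init();
    activation.forward(input)
}
```
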
jcmullwh 626457e1c6
Provide Tensor Padding Helpers #960 (#1097)
* Initial padding approach

Create padding implementation for the last two dimensions of Float and Int Tensors.

Create PadMode Enum, allowing Constant padding.

Create Padding Struct with Uniform, Asymmetric, height, and width implementations.

Create tests for the padding implementation.

* Update padding.rs

remove unneeded import

* Update from Merge

Use crate Element

Swap from old from_data() to new from_data_devauto()

* Formatting Changes

Formatting changes from cargo fmt --all

* Additional Format Change

One more format change that cargo fmt didn't get the first time.

* Changes to Example

Modify Example to ensure it works.

* modify naming

better names for impl / input variables.

* Modify API

- Change Padding to PadSize.
- integrate padding value into PadMode.
- update tests and examples.

* Comments and print

Improve comments+naming and remove println

* Pad Fixes

Moved pad to numeric

Simplified PadMode Element

updated tensor creations

fixed doc example

* Fix test location

* Simplified pad API

* Fix for failed unit tests

* Remove bool_full

* Rename `pads` to `padding`

---------

Co-authored-by: Dilshod Tadjibaev <939125+antimora@users.noreply.github.com>
2024-03-27 12:46:55 -05:00
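As a usage reference for the padding helpers, a minimal sketch assuming the final API pads the two innermost dimensions with a `(left, right, top, bottom)` tuple and a constant fill value:

```rust
use burn::tensor::{backend::Backend, Tensor};

/// Pad the last two dimensions of a tensor with zeros (sketch).
fn pad_example<B: Backend>(tensor: Tensor<B, 3>) -> Tensor<B, 3> {
    // 1 column of padding on the left/right, 2 rows on the top/bottom.
    tensor.pad((1, 1, 2, 2), 0.0)
}
```
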
Aasheesh Singh a77979e0b6
add rms norm layer (#1527) 2024-03-25 18:59:11 -04:00
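A brief sketch of the RMS normalization layer, assuming an `RmsNormConfig`/`RmsNorm` pair in `burn::nn` configured by the model dimension:

```rust
use burn::nn::{RmsNorm, RmsNormConfig};
use burn::tensor::{backend::Backend, Tensor};

/// Apply RMS normalization over the last dimension (sketch).
fn rms_norm_example<B: Backend>(input: Tensor<B, 3>, device: &B::Device) -> Tensor<B, 3> {
    // d_model = 64; epsilon keeps its default value.
    let norm: RmsNorm<B> = RmsNormConfig::new(64).init(device);
    norm.forward(input)
}
```
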
Aasheesh Singh 613e698007
Feat/swiglu (#1507) 2024-03-25 15:55:27 -04:00
Dilshod Tadjibaev 6feda90a8c
Tensor expand operator (#1508)
* Improve CI cache - remove burn-tch artifacts

* PyTorch config deserializer from .pt file

* Update pytorch-model.md

* WIP

* Rename broadcast_to to expand

* Rename broadcast_to expand file

* Implemented fusion backend and fix bugs

* Remove old files

* Remove unused state

* Rename to the correct op name

* Add missing comment

* Fix expand check function doc

* Rename the leftover names

* Rename leftover names
2024-03-22 16:33:53 -05:00
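A minimal sketch of the renamed `expand` operator (the `usize` suffix pins the shape element type; `-1`-style placeholders may also be accepted):

```rust
use burn::tensor::{backend::Backend, Tensor};

/// Broadcast a [1, 4] tensor to [3, 4] along its singleton dimension (sketch).
fn expand_example<B: Backend>(tensor: Tensor<B, 2>) -> Tensor<B, 2> {
    tensor.expand([3usize, 4])
}
```
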
Guillaume Lagrange dc45cf1700
Add `topk` tensor operation (#1497)
* Add topk and topk_with_indices

* Change topk_with_indices test to guarantee order (previously equal elements)
2024-03-22 10:57:20 -04:00
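A short sketch of `topk_with_indices`, assuming the `(k, dim)` argument order:

```rust
use burn::tensor::{backend::Backend, Int, Tensor};

/// Largest 3 values along dimension 1, together with their indices (sketch).
fn topk_example<B: Backend>(tensor: Tensor<B, 2>) -> (Tensor<B, 2>, Tensor<B, 2, Int>) {
    tensor.topk_with_indices(3, 1)
}
```
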
Guillaume Lagrange 47a84cc980
Add tensor sorting operations (#1488)
* Add sort, sort_with_indices and argsort

* Fix wasm build

* Add sort ops autodiff

* Fix TODO parallel comment

* Fix sort_with_indices 1d and add descending options

* Fix clippy

* Fix ascending comment (configurable)
2024-03-20 14:51:04 -04:00
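A minimal sketch of the sorting operations; `sort(dim)` returns values only and `argsort(dim)` returns indices only, while the variant below returns both:

```rust
use burn::tensor::{backend::Backend, Int, Tensor};

/// Sort along dimension 1 and return both the sorted values and the ordering (sketch).
fn sort_example<B: Backend>(tensor: Tensor<B, 2>) -> (Tensor<B, 2>, Tensor<B, 2, Int>) {
    tensor.sort_with_indices(1)
}
```
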
carrotflakes 8911093b88
Add `flip` tensor operator (#1468) 2024-03-18 20:33:39 -05:00