Commit Graph

285 Commits

Author SHA1 Message Date
Dilshod Tadjibaev 74c41bdda2
Add clamp, clamp_min, clamp_max tensor ops (#550) 2023-07-26 20:02:38 -04:00
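For readers skimming the log: the three ops differ only in which bound they enforce. A standalone element-wise sketch in plain Rust (illustrative semantics only, not Burn's tensor API):

```rust
// Element-wise clamping semantics (illustrative; not Burn's tensor API).
fn clamp(x: f32, min: f32, max: f32) -> f32 {
    x.max(min).min(max) // enforce both bounds
}

fn clamp_min(x: f32, min: f32) -> f32 {
    x.max(min) // lower bound only
}

fn clamp_max(x: f32, max: f32) -> f32 {
    x.min(max) // upper bound only
}

fn main() {
    assert_eq!(clamp(5.0, 0.0, 1.0), 1.0);
    assert_eq!(clamp_min(-3.0, 0.0), 0.0);
    assert_eq!(clamp_max(5.0, 1.0), 1.0);
}
```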
Tuhin Tarafder 6c2c7e79c1
Add functions to cast tensor to different kinds (#530) 2023-07-26 14:20:37 -04:00
Nathaniel Simard 0a5a2d729a
chore: bump version for next release (#533) 2023-07-26 09:46:28 -04:00
Louis Fortier-Dubois 589b4503df
add wgpu readme (#531) 2023-07-25 10:44:53 -04:00
Will Brickner e387977eee
Add a `transpose` sugar: `tensor^T` (#528) 2023-07-24 19:30:58 -04:00
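The `tensor^T` spelling is possible because Rust allows overloading `^` (the `BitXor` operator) against a marker type. A self-contained toy sketch of the pattern; all names here are hypothetical stand-ins, not Burn's actual types:

```rust
use std::ops::BitXor;

// Hypothetical marker standing in for the `T` in `m ^ T`.
struct T;

struct Matrix {
    rows: usize,
    cols: usize,
    data: Vec<f32>, // row-major
}

impl Matrix {
    fn transpose(&self) -> Matrix {
        let mut data = vec![0.0; self.data.len()];
        for r in 0..self.rows {
            for c in 0..self.cols {
                data[c * self.rows + r] = self.data[r * self.cols + c];
            }
        }
        Matrix { rows: self.cols, cols: self.rows, data }
    }
}

// Overloading `^` against the marker yields the `m ^ T` spelling.
impl BitXor<T> for Matrix {
    type Output = Matrix;
    fn bitxor(self, _: T) -> Matrix {
        self.transpose()
    }
}

fn main() {
    let m = Matrix { rows: 2, cols: 3, data: vec![1., 2., 3., 4., 5., 6.] };
    let mt = m ^ T;
    assert_eq!((mt.rows, mt.cols), (3, 2));
    assert_eq!(mt.data, vec![1., 4., 2., 5., 3., 6.]);
}
```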
polina guseva 64090a582b
Add static full method for tensor initialization with custom values (#486)
---------

Co-authored-by: Dilshod Tadjibaev <939125+antimora@users.noreply.github.com>
2023-07-24 13:03:27 -04:00
Louis Fortier-Dubois 9aca1837c2
Example/wgpu/mnist (#514)
* add wgpu for mnist

* auto graphics api

* fix display tests

* clippy
2023-07-20 17:12:13 -04:00
Nathaniel Simard d7ce52f0da
Feat/wgpu/conv (#512) 2023-07-20 15:14:42 -04:00
Louis Fortier-Dubois 4b60c0e7a0
continuous to contiguous (#511) 2023-07-20 11:28:35 -04:00
Nathaniel Simard c4afff182f
Feat/wgpu/max pool2d (#500) 2023-07-14 13:58:08 -04:00
Dilshod Tadjibaev ef421f0ae9
Add arange with steps op for int tensor (#490) 2023-07-13 17:09:57 -04:00
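The semantics of arange with a step, sketched standalone (not Burn's int-tensor API): values start at `start` and advance by `step` while strictly below `end`:

```rust
// arange-with-step semantics (illustrative; not Burn's API).
fn arange_step(start: i64, end: i64, step: usize) -> Vec<i64> {
    (start..end).step_by(step).collect()
}

fn main() {
    assert_eq!(arange_step(0, 10, 3), vec![0, 3, 6, 9]);
}
```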
Nathaniel Simard a2ac2057d8
Fix: wgpu scatter with different shapes (#489) 2023-07-12 13:21:19 -04:00
Jacob cb1ac26bb3
Add SiLU activation function (#482) 2023-07-09 11:28:14 -04:00
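SiLU is x * sigmoid(x). A scalar sketch in plain Rust (Burn applies it element-wise over tensors; this is not its code):

```rust
// SiLU (a.k.a. swish): silu(x) = x * sigmoid(x). Scalar sketch only.
fn sigmoid(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}

fn silu(x: f32) -> f32 {
    x * sigmoid(x)
}

fn main() {
    assert_eq!(silu(0.0), 0.0);
    assert!((silu(1.0) - 0.731_058_6).abs() < 1e-5);
}
```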
Nathaniel Simard 04ad14a32a
refactor: wgpu reductions (#471) 2023-07-06 11:40:37 -04:00
Dilshod Tadjibaev e62ee1269b
Fix burn-tch's random implementation for standard dist (#469) 2023-07-06 08:50:50 -04:00
Nathaniel Simard 65bf6c1cbb
Refactor index => slice (#466) 2023-07-05 16:30:11 -04:00
Nathaniel Simard 98d4abd9c0
Perf/wgpu/comparison (#460) 2023-07-04 09:33:05 -04:00
Nathaniel Simard a1c9970373
Refactor/wgpu/mask (#453) 2023-07-02 12:59:22 -04:00
Nathaniel Simard 77219f7262
Perf/unary binary (#446) 2023-06-29 08:56:35 -04:00
Louis Fortier-Dubois f99fe0fadd
Matmul 2D Tiling (#442) 2023-06-28 16:48:15 -04:00
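The tiling idea behind #442, sketched on the CPU: compute the output in small blocks so sub-tiles of the inputs stay hot in cache while they are reused. The PR itself targets a wgpu kernel, where tiles live in workgroup memory; this plain-Rust version only shows the loop structure:

```rust
// Tiled matmul: A (m x k) * B (k x n) accumulated into C (m x n), row-major.
const TILE: usize = 4;

fn matmul_tiled(a: &[f32], b: &[f32], c: &mut [f32], m: usize, k: usize, n: usize) {
    for i0 in (0..m).step_by(TILE) {
        for j0 in (0..n).step_by(TILE) {
            for p0 in (0..k).step_by(TILE) {
                // One TILE x TILE block of C, built from tiles of A and B.
                for i in i0..(i0 + TILE).min(m) {
                    for p in p0..(p0 + TILE).min(k) {
                        let a_ip = a[i * k + p];
                        for j in j0..(j0 + TILE).min(n) {
                            c[i * n + j] += a_ip * b[p * n + j];
                        }
                    }
                }
            }
        }
    }
}

fn main() {
    let (m, k, n) = (3, 3, 3);
    let (a, b) = (vec![1.0; m * k], vec![1.0; k * n]);
    let mut c = vec![0.0; m * n];
    matmul_tiled(&a, &b, &mut c, m, k, n);
    assert!(c.iter().all(|&x| x == 3.0)); // all-ones product: each entry = k
}
```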
Nathaniel Simard 13836ac599
Fix: wgpu powf (#443) 2023-06-28 12:50:16 -04:00
Nathaniel Simard dc216f574b
fix ndarray index select assign (#441) 2023-06-27 12:48:01 -04:00
Nathaniel Simard d26dead299
fix arange (#440) 2023-06-27 12:47:49 -04:00
Dilshod Tadjibaev 61291cc30c
Revert temp half-rs fix for no_std (#430)
Fixes #268
2023-06-26 15:59:06 -04:00
Nathaniel Simard 6c834d3b22
Feat/wgpu/index select (#425) 2023-06-23 09:31:49 -04:00
Dilshod Tadjibaev 825aaa9977
Add missing documents (#424) 2023-06-23 09:28:34 -04:00
Nathaniel Simard c4e4c25fef
Feat/wgpu/mask (#422) 2023-06-21 13:01:18 -04:00
Yu Sun 9c5afa3469
fix(loss): Declare all contents of reduction as public (#423) 2023-06-21 13:00:57 -04:00
Dilshod Tadjibaev fce45f51be
Doc fixes (#418) 2023-06-21 12:32:50 -04:00
Nathaniel Simard 3b9c997513
feat: provide embedding default impl (#417) 2023-06-20 16:16:31 -04:00
Nathaniel Simard b6b684b9f2
Feat/wgpu/cat (#416) 2023-06-20 16:16:10 -04:00
Nathaniel Simard 4d40bde7b9
feat: argmax + argmin (#412) 2023-06-20 10:03:00 -04:00
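argmax and argmin return the index of the extreme element. A 1-D standalone sketch of argmax (argmin is symmetric; Burn's tensor version works per dimension, and this is not its code):

```rust
// 1-D argmax: index of the largest element. Assumes no NaNs in the input.
fn argmax(xs: &[f32]) -> Option<usize> {
    xs.iter()
        .enumerate()
        .max_by(|(_, a), (_, b)| a.partial_cmp(b).unwrap())
        .map(|(i, _)| i)
}

fn main() {
    assert_eq!(argmax(&[0.1, 2.0, -1.0]), Some(1));
}
```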
Nathaniel Simard 323261b594
Feat/wgpu/comparisons (#411) 2023-06-19 19:11:49 -04:00
Louis Fortier-Dubois e5cc50a837
refactor/initializer (#398)
* add tests to linear forward

* swap linear weight dims

* rename uniformdefault

* wip bias in init

* refactor initializer

* formatting

* fix get(0) to first

* wip refactor

* builder pattern + kaiming init

* kaiming initializer and major initializer refactor

* fix fan out

* easier fixes

* remove initializer options

* clippy fix

* revert to input*output linear

* gru uses xavier init

* should be normal actually

---------

Co-authored-by: louisfd <louisfortier-dubois@Louiss-MacBook-Pro.local>
2023-06-19 16:55:11 -04:00
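For context on the Kaiming/Xavier work above, the standard fan-based bounds for the uniform variants, as generic formulas in a standalone sketch (textbook definitions, not Burn's initializer code):

```rust
// Xavier/Glorot uniform: U(-a, a) with a = sqrt(6 / (fan_in + fan_out)).
fn xavier_uniform_bound(fan_in: usize, fan_out: usize) -> f64 {
    (6.0 / (fan_in + fan_out) as f64).sqrt()
}

// Kaiming uniform with ReLU gain sqrt(2): U(-a, a), a = gain * sqrt(3 / fan_in).
fn kaiming_uniform_bound(fan_in: usize) -> f64 {
    2.0_f64.sqrt() * (3.0 / fan_in as f64).sqrt()
}

fn main() {
    println!("xavier  128 -> 64:  {}", xavier_uniform_bound(128, 64));
    println!("kaiming fan_in 128: {}", kaiming_uniform_bound(128));
}
```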
Nathaniel Simard 40743c477a
Feat/onnx/more ops (#410) 2023-06-19 16:10:50 -04:00
Nathaniel Simard a8624590af
feat: mask_where (#409) 2023-06-18 15:04:28 -04:00
Dilshod Tadjibaev 834c7ecc1f
Clean up cargo descriptions and formatting (#403) 2023-06-15 09:20:53 -04:00
Louis Fortier-Dubois fd88398ce4
feat/Xavier glorot (#394) 2023-06-11 09:54:44 -04:00
Nathaniel Simard 8c9802c363
Feat/wgpu/reduction (#392) 2023-06-08 16:54:36 -04:00
Nathaniel Simard cb4f049eae
fix: improve assert approx (#390) 2023-06-06 14:34:20 -04:00
Mathias Insley 8a88a868ee
Feat/lstm (#370) 2023-06-06 14:33:22 -04:00
Nathaniel Simard bff752b1a8
Feat/wgpu/index (#386) 2023-06-06 12:29:14 -04:00
Nathaniel Simard c1e1e38a79
Add more ops (#387) 2023-06-06 12:21:20 -04:00
Nathaniel Simard ecc67c58f9
Feat/wgpu/swap dims (#381) 2023-06-04 19:34:35 -04:00
Nathaniel Simard 0a205a3603
feat: add basic matmul for wgpu backend (#379) 2023-06-03 10:32:57 -04:00
Nathaniel Simard 974fdfaba1
Feat/wgpu backend setup (#376) 2023-06-02 11:52:47 -04:00
Nathaniel Simard 2a4ba5a6ab
Feat/gather scatter (#367) 2023-05-27 11:40:04 -04:00
Nathaniel Simard ba0ac112e1
Refactor: Burn Import use BurnGraph (#355) 2023-05-21 13:09:27 -04:00
Nathaniel Simard 510d53f37e
Improve softmax doc (#352) 2023-05-16 14:39:42 -04:00
Nathaniel Simard 976102fec0
Feat/avg pool1d (#349) 2023-05-15 08:29:45 -04:00
Nathaniel Simard 8bbfbd8a8f
perf: softmax (#346) 2023-05-12 09:37:42 -04:00
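#346 is a performance change; for context, the numerically stable formulation that softmax implementations generally build on (a standalone sketch, not the PR's code):

```rust
// Stable softmax: subtracting the max leaves the result unchanged
// but keeps every exponent <= 0, so exp can no longer overflow.
fn softmax(xs: &[f32]) -> Vec<f32> {
    let max = xs.iter().copied().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = xs.iter().map(|x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    let p = softmax(&[1000.0, 1000.0]); // naive exp(1000.0) would overflow
    assert!((p[0] - 0.5).abs() < 1e-6);
}
```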
Dilshod Tadjibaev 05763e1878
Bump version to the next minor to indicate dev (#344) 2023-05-10 18:02:08 -04:00
Nathaniel Simard 73f99ef79f
Feat/maxmin numeric (#340) 2023-05-09 16:35:55 -04:00
Nathaniel Simard a88357ce1d
feat: add max & min ops (#339) 2023-05-09 15:49:33 -04:00
Nathaniel Simard 69001b0d69
Feat/activation ops (#338)
* perf: GELU

* Refactor relu
2023-05-09 08:32:35 -04:00
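For reference, GELU via the widely used tanh approximation, as a standalone scalar sketch (not Burn's implementation):

```rust
// gelu(x) ≈ 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
fn gelu_approx(x: f32) -> f32 {
    let c = (2.0 / std::f32::consts::PI).sqrt();
    0.5 * x * (1.0 + (c * (x + 0.044715 * x * x * x)).tanh())
}

fn main() {
    assert_eq!(gelu_approx(0.0), 0.0);
    assert!((gelu_approx(1.0) - 0.841).abs() < 5e-3);
}
```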
Nathaniel Simard 29eecd6383
Prepare next release (#335) 2023-05-06 10:32:23 -04:00
Nathaniel Simard b54f9302c7
Feat/avg pool2d (#318) 2023-04-30 12:25:14 -04:00
Sunny Gonnabathula 02abc373d3
add burn-tch support for bf16 (#303) 2023-04-24 11:24:36 -04:00
Nathaniel Simard c5e31b272f
Feat/group conv (#306) 2023-04-22 15:00:41 -04:00
Nathaniel Simard 78ac09fb7a
Support dilation in convolution operations (#301) 2023-04-18 10:01:11 -04:00
Nathaniel Simard bd58922784
Feat/conv stride (#300) 2023-04-16 22:03:16 -04:00
Yu Sun 26cf555612
Feat: add new tensor ops mask_scatter (#258) 2023-04-13 08:58:31 -04:00
Sunny Gonnabathula 69954c14ec
add bf16 element (#295) 2023-04-12 12:36:03 -04:00
Nathaniel Simard 04bcf9550a
Fix/text gen example (#292) 2023-04-11 17:18:45 -04:00
Nathaniel Simard 2220965b5c
Feat/add tensor checks (#283) 2023-04-10 19:16:15 -04:00
Nathaniel Simard f04fe101d8
Feat/module no grad (#274) 2023-04-07 09:01:27 -04:00
Mathias Insley d8f64ce1dd
Pretty Print Tensors (#257) 2023-04-06 20:06:38 -04:00
Dilshod Tadjibaev 4e9e6d2706
Move unsqueeze op to the tensor's base (#261) 2023-04-01 14:20:48 -04:00
Dilshod Tadjibaev 7364d09d32
Add flatten op to the tensor base (#260) 2023-03-31 16:44:48 -04:00
Nathaniel Simard 7d7504686a
Refactor/init modules (#250) 2023-03-24 15:38:02 -04:00
Nathaniel Simard a74e4cd0bc
fix: conv bias backward (#248) 2023-03-23 16:10:26 -04:00
Nathaniel Simard 6f43d983f7
State serialization/deserialization overhaul (#247) 2023-03-23 11:02:46 -04:00
nathaniel 00625d1527
fix: add version to path dependencies 2023-03-21 10:13:44 -04:00
Nathaniel Simard 4e28e2a776
chore: prepare release v0.6.0 (#246) 2023-03-21 09:47:37 -04:00
Dilshod Tadjibaev bf9d33e6fc
Add MNIST inference on the web demo crate (#228)
* Add MNIST inference on the web demo crate

* Fix problems identified during a PR review
2023-03-13 19:51:32 -04:00
Nathaniel Simard 403500f018
feat(burn-core): refactor cross entropy (#229) 2023-03-13 13:47:19 -04:00
Nathaniel Simard d09ab44979
Feat/index_select (#227) 2023-03-12 17:44:22 -04:00
Nathaniel Simard 9655b74b22
Feat/index_select_dim ops (#225) 2023-03-11 16:14:57 -05:00
Nathaniel Simard 860051ca5c
Perf/ndarray maxpool (#223) 2023-03-10 19:14:18 -05:00
Hariganesh Srinivasan 740b554047
Implemented sigmoid and log_sigmoid, Resolves #171 (#221)
* implemented sigmoid and log_sigmoid with tests

* added test for overflow and computation.
2023-03-10 19:03:01 -05:00
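The overflow test mentioned above points at the usual numerical concern: naive ln(sigmoid(x)) reaches ln(0) = -inf once e^-x overflows. The standard stable formulation, as a standalone sketch (not the PR's code):

```rust
// Numerically stable log-sigmoid.
fn log_sigmoid(x: f32) -> f32 {
    if x >= 0.0 {
        // ln(1 / (1 + e^-x)) = -ln(1 + e^-x)
        -(-x).exp().ln_1p()
    } else {
        // equivalent form whose exponent stays <= 0, so it never overflows
        x - x.exp().ln_1p()
    }
}

fn main() {
    assert!(log_sigmoid(-100.0).is_finite());
    assert!((log_sigmoid(0.0) + std::f32::consts::LN_2).abs() < 1e-6);
}
```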
Nathaniel Simard 0544a915eb
Refactor/migrate more numeric func (#220) 2023-03-10 10:47:15 -05:00
Nathaniel Simard a2ec774c37
draft: Perf/ndarray matmul (#214) 2023-03-09 14:00:35 -05:00
Nathaniel Simard be96160065
refactor: elems (#206) 2023-03-06 18:39:49 -05:00
Nathaniel Simard 019c5f9c44
Refactor/int backend (#197)
* Update burn-tensor API

* Migrate burn-autodiff

* Update burn-tch

* Update burn-ndarray

* Add some doc
2023-03-06 14:45:58 -05:00
Nathaniel Simard 15ec42dd6f
Refactor/backend bool tensor (#192) 2023-03-05 11:23:46 -05:00
Nathaniel Simard ffd3d35176
Refactor/tensor api (#191) 2023-03-05 09:23:42 -05:00
Nathaniel Simard bddc1461ef
fix: running state (#190) 2023-03-01 17:33:32 -05:00
Nathaniel Simard d4c298c221
Refactor/burn core (#188) 2023-03-01 16:10:58 -05:00
Nathaniel Simard e6e7f4de42
feat: inplace tensor api. (#187) 2023-03-01 10:55:51 -05:00
Nathaniel Simard 25deb5a13b
Refactor/autodiff (#186) 2023-02-28 20:01:26 -05:00
Dilshod Tadjibaev fb925acc73
Make burn and burn-core packages no_std compatible (#168) (#173)
* Make burn-ndarray and burn-tensor no_std compatible (#168)
2023-02-25 09:38:01 -05:00
Dilshod Tadjibaev 9091363ada
Make burn-ndarray and burn-tensor no_std compatible (#168) (#169) 2023-02-21 08:35:24 -05:00
Nathaniel Simard 7d2f43dfca
Refactor Tensor API (#163) 2023-02-17 17:31:20 -05:00
Nathaniel Simard 2401d8ad96
Prepare next release (#161) 2023-02-12 15:32:29 -05:00
Yu Sun 0b85cb0eed
feat(trait-TensorOps): add log1p (#160) 2023-02-11 13:30:50 -05:00
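Why log1p earns a dedicated op: for small x, computing ln(1 + x) directly loses the low-order digits of x when 1 + x rounds, while a fused log1p stays accurate. A standalone demonstration (not the PR's code):

```rust
fn main() {
    let x = 1e-10_f64;
    let naive = (1.0 + x).ln(); // 1.0 + x rounds before the log
    let stable = x.ln_1p();     // accurate for small x
    println!("naive:  {naive:e}");
    println!("ln_1p:  {stable:e}");
    assert!((stable - 1e-10).abs() < 1e-20); // ln(1+x) ~= x - x^2/2
}
```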
Nathaniel Simard c7963d8485
refactor: device functions (#157) 2023-01-27 18:37:21 -05:00
Nathaniel Simard 2d4e514b41
Refactor/shape function (#156) 2023-01-27 15:18:55 -05:00
Makro f6f0d0e4f3
Add cos, sin and tanh operations (#155)
* Add cos, sin and tanh operations

* Add tests

* Fix formatting
2023-01-24 19:40:30 -05:00
Nathaniel Simard 34d233cd3e
Feat/max pooling backend (#152) 2023-01-21 15:39:21 -05:00
Nathaniel Simard 745c88f0a0
Feat/conv (#147) 2023-01-11 18:33:09 -05:00