Crate | Latest commit | Date
burn | Refactor: split JitKernel and SourceKernel (#1569) | 2024-04-05 12:58:10 -04:00
burn-autodiff | Fix autodiff memory management graph cleaning (#1602) | 2024-04-11 16:21:00 -04:00
burn-candle | Fix candle backend sync (#1579) | 2024-04-12 12:15:50 -04:00
burn-common | Feat/lazy init (#1539) | 2024-04-02 10:13:35 -04:00
burn-compute | [Breaking] Make Tensor, Module, Optimizer !Sync + Refactor Autodiff (#1575) | 2024-04-04 16:01:17 -04:00
burn-core | support for rotary positional encoding to transformer modules. (#1604) | 2024-04-12 11:45:49 -04:00
burn-dataset | Add multi-label classification dataset and metric (#1572) | 2024-04-05 13:16:46 -04:00
burn-derive | Add enum module support (#1337) | 2024-02-21 17:03:34 -05:00
burn-fusion | Repeat ops autodiff & fusion + fix autodiff ones & zeros (#1600) | 2024-04-11 11:32:45 -04:00
burn-import | Fix pytorch recorder adapt_linear when using autodiff backend (#1576) | 2024-04-04 12:29:24 -04:00
burn-jit | JIT: Autotune matmul tiling 2d unroll (#1601) | 2024-04-12 10:15:21 -04:00
burn-ndarray | Use num-traits for float ops (#1584) | 2024-04-08 10:16:20 -05:00
burn-no-std-tests | [refactor] Move burn crates to their own crates directory (#1336) | 2024-02-20 13:57:55 -05:00
burn-tch | Fix candle backend sync (#1579) | 2024-04-12 12:15:50 -04:00
burn-tensor | Use num-traits for float ops (#1584) | 2024-04-08 10:16:20 -05:00
burn-tensor-testgen | Splitted the JIT stuff from the Wgpu stuff (#1417) | 2024-03-06 11:23:53 -05:00
burn-train | Add learner training report summary (#1591) | 2024-04-11 12:32:25 -04:00
burn-wgpu | Refactor: split JitKernel and SourceKernel (#1569) | 2024-04-05 12:58:10 -04:00