* Add a feature to initialize from an existing wgpu adapter/device/queue
This is useful when interoperating with other wgpu applications (e.g. displaying a burn tensor as a texture in egui). The existing devices are keyed by the wgpu `Device` ID. Alternatively, they could be keyed per adapter, which would be more in line with the other burn `WgpuDevice` variants (one per adapter), but there is no inherent reason to do so.
This also involves wrapping the `Queue` in an `Arc`. Alternatively, the caller could give up ownership of the queue, but keeping shared access makes it possible to synchronize burn operations with custom wgpu operations.
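A minimal sketch of the intended usage, assuming the helper is exposed as `burn_wgpu::init_existing_device` with roughly this signature (the exact name, module path, and `RuntimeOptions` parameter are assumptions based on the description above):

```rust
use std::sync::Arc;
use burn_wgpu::{init_existing_device, RuntimeOptions, WgpuDevice};

/// Register the host application's existing wgpu handles with burn.
/// `adapter`, `device`, and `queue` come from the setup the host
/// application (e.g. egui) already performed.
fn register_with_burn(
    adapter: Arc<wgpu::Adapter>,
    device: Arc<wgpu::Device>,
    queue: Arc<wgpu::Queue>,
) -> WgpuDevice {
    // The returned device is keyed by the wgpu Device ID, so tensors
    // allocated on it share the same queue as the host's render passes,
    // which is what makes synchronizing the two sides possible.
    init_existing_device(adapter, device, queue, RuntimeOptions::default())
}
```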
* Ben WIP
* Compile burn-jit
* WGPU works
* Remove old code
* move language cube stuff
* cleaning up
* some import reworking
* remove cube reexport
* template feature flag in cube
* ci
---------
Co-authored-by: nathaniel <nathaniel.simard.42@gmail.com>
* Upgrade Rust dependencies (#1747)
* Revert upgrade for tch
Updating tch on Windows gives an error:
INTEL MKL ERROR: The specified module could not be found. mkl_vml_avx2.1.dll.
Intel MKL FATAL ERROR: cannot load mkl_vml_avx2.1.dll or mkl_vml_def.1.dll.
* Keep only the .cargo/config.toml file, which works with Rust > 1.75
---------
Co-authored-by: Sylvain Benner <sylvain@benner.online>
* Move HandleContainer and tensor ops descriptions to burn-tensor
Move HandleContainer and the tensor operation descriptions to the burn-tensor crate.
Remove FusionDevice and replace it with a DeviceOps trait bound on Backend::Device.
For now, the modules added to burn-tensor are excluded from no-std builds since they rely on Arc.
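A rough sketch of the shape such a bound might take; only the `DeviceOps` name and the fact that it is bound to `Backend::Device` come from the commit above, while the trait contents are assumptions:

```rust
/// Sketch: every backend device must be identifiable so that the
/// HandleContainer can key tensor handles per device.
pub trait DeviceOps: Clone + Default + PartialEq + Send + Sync + core::fmt::Debug {
    /// Stable identifier for this device instance.
    fn id(&self) -> DeviceId;
}

/// Hypothetical identifier: which kind of device, and which instance.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct DeviceId {
    pub type_id: u16,
    pub index_id: u32,
}
```

With `Backend::Device: DeviceOps` in place, fusion no longer needs its own `FusionDevice` abstraction.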
* [burn-tensor] Flatten module hierarchy for tensor representation
+ Add new repr feature to cargo file.
* Remove prefix on docstring
* [burn-fusion] Require default features of burn-tensor
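The feature gate presumably ends up looking something like this in burn-tensor's lib.rs (a sketch; only the `repr` feature name comes from the commits above):

```rust
// Tensor representation: HandleContainer, operation descriptions, ...
// Gated because these types rely on Arc and are excluded from no-std.
#[cfg(feature = "repr")]
pub mod repr;
```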
* Add remainder_scalar op to numeric trait and associated int/float functions (usage sketch follows this group of commits)
* Update burn-tch crate
* Update ndarray crate
* Update jit crate
* Update candle crate
* Update fusion crate
* Update autodiff crate
* Forgot float.rs for fusion
* Add burn-tensor tests
* Redirect to the pre-existing modulus op
* Fix sign
* Remove mut from burn-tch
* Use sign trick to make wgpu backend work
* Add more unit tests to cover the bases
* Naming fix for burn-fusion
* Update tests w/PyTorch link
* Use different WGSL instructions for remainder
* Redirect to remainder Operator instead of modulo
* Revert Modulo in instruction.rs
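Since the commits above align remainder with the PyTorch semantics linked in the tests, here is a small usage sketch (floored modulus, where the result takes the sign of the divisor; the values shown assume that behavior):

```rust
use burn::tensor::{backend::Backend, Tensor};

fn remainder_demo<B: Backend>(device: &B::Device) {
    let x = Tensor::<B, 1>::from_floats([-3.0, 4.5, 7.0], device);
    // Floored modulus, matching Python/PyTorch `%`:
    // -3.0 % 2.0 == 1.0, not -1.0 as a truncated `%` would give.
    let r = x.remainder_scalar(2.0); // [1.0, 0.5, 1.0]
    let _ = r;
}
```

The "sign trick" mentioned for the wgpu backend is presumably the standard way to get a floored result out of a truncated `%` instruction: `((x % y) + y) % y`, which is equivalent to `x - floor(x / y) * y`.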
* refactor execute_dynamic into Execution (builder sketch follows this group of commits)
* minor change
* extension cfg
* jitkernel and sourcekernel
* add todo statement
* cleanup and docs
* update book
* fix server dependency on compiler
* refactor into shader information
* refactor to compile shader once
* clippy
* clippy
* clippy
* fix doc
* fix doc
* fmt
* rename feature flag
* refactor
* All broken (WIP)
* compile at the right time
* todo done
* all dynamic
* all dynamic in template too
* fmt
* fix ci
---------
Co-authored-by: nathaniel <nathaniel.simard.42@gmail.com>
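The commit titles above are terse, but the refactor reads like a move from a monolithic `execute_dynamic(...)` call with many positional arguments to a staged builder that compiles the shader once at launch. A self-contained illustration of that pattern (all names hypothetical; this is not burn-jit's actual API):

```rust
/// Stand-in for a GPU buffer handle.
struct Handle(u64);

/// Staged launch description, replacing one function with many arguments.
struct Execution<'a> {
    source: &'a str, // shader source, compiled once per launch
    inputs: Vec<&'a Handle>,
    outputs: Vec<&'a Handle>,
}

impl<'a> Execution<'a> {
    fn start(source: &'a str) -> Self {
        Self { source, inputs: Vec::new(), outputs: Vec::new() }
    }

    fn inputs(mut self, handles: &[&'a Handle]) -> Self {
        self.inputs.extend_from_slice(handles);
        self
    }

    fn outputs(mut self, handles: &[&'a Handle]) -> Self {
        self.outputs.extend_from_slice(handles);
        self
    }

    fn execute(self) {
        // A real implementation would compile (or fetch from cache)
        // the shader here, then dispatch the kernel.
        println!(
            "launching {:?} ({} inputs, {} outputs)",
            self.source,
            self.inputs.len(),
            self.outputs.len()
        );
    }
}

fn main() {
    let (a, b, out) = (Handle(0), Handle(1), Handle(2));
    Execution::start("my_kernel")
        .inputs(&[&a, &b])
        .outputs(&[&out])
        .execute();
}
```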
* separate forward backward
* refactor with pool strategy
* refactor further
* pooling refactored
* refactoring for adaptive wip
* wip adaptive
* adaptive
* delete some wgsl
* avg pool backward
* clippy
* minor refactor
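For reference, the math the avg-pool backward commits implement: each input element receives `grad_output / window_size` from every output window that covers it. A naive 1D CPU sketch of that math (illustrative only, ignoring padding and count-include-pad options; not the burn-jit kernel):

```rust
/// Naive 1D average-pool backward: distribute each output gradient
/// evenly across the input positions its window covered.
fn avg_pool1d_backward(
    grad_output: &[f32],
    input_len: usize,
    kernel_size: usize,
    stride: usize,
) -> Vec<f32> {
    let mut grad_input = vec![0.0; input_len];
    for (o, &g) in grad_output.iter().enumerate() {
        let start = o * stride;
        for i in start..(start + kernel_size).min(input_len) {
            grad_input[i] += g / kernel_size as f32;
        }
    }
    grad_input
}
```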