Add training support for nearest interpolation
---------
Co-authored-by: yurzhang <yurzhang.oi@gmail.com>
Co-authored-by: Dilshod Tadjibaev <939125+antimora@users.noreply.github.com>
* Added min_pair() and max_pair() methods to numeric Tensors
* Update book with added max_pair() and min_pair() methods.
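The pair-wise min/max semantics these commits describe can be sketched in plain Rust on slices. This is an illustrative sketch only, not Burn's actual `Tensor` API: the real `min_pair()`/`max_pair()` operate element-wise on numeric tensors, and the free-function names here are assumptions for demonstration.

```rust
// Element-wise pair-min / pair-max semantics sketched on plain slices.
// Burn's real min_pair()/max_pair() work on Tensors; these names are illustrative.
fn min_pair(a: &[f32], b: &[f32]) -> Vec<f32> {
    a.iter().zip(b).map(|(x, y)| x.min(*y)).collect()
}

fn max_pair(a: &[f32], b: &[f32]) -> Vec<f32> {
    a.iter().zip(b).map(|(x, y)| x.max(*y)).collect()
}

fn main() {
    let a = [1.0, 5.0, 3.0];
    let b = [2.0, 4.0, 3.0];
    assert_eq!(min_pair(&a, &b), vec![1.0, 4.0, 3.0]);
    assert_eq!(max_pair(&a, &b), vec![2.0, 5.0, 3.0]);
}
```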
* Fix spelling typo in comments
* Update comments per change requests
* Fix tensor.equal_elem usage in book
* Add not_equal and not_equal_elem tensor ops
* Fix "element-wise" usage for correctness and uniformity
* Add bool_not_equal test
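The element-wise inequality ops added above can be sketched like this; again a plain-Rust sketch of the semantics, with function names assumed for illustration rather than taken from Burn's tensor API.

```rust
// Sketch of not_equal (tensor vs. tensor) and not_equal_elem (tensor vs. scalar):
// both produce an element-wise boolean mask.
fn not_equal(a: &[i32], b: &[i32]) -> Vec<bool> {
    a.iter().zip(b).map(|(x, y)| x != y).collect()
}

fn not_equal_elem(a: &[i32], elem: i32) -> Vec<bool> {
    a.iter().map(|x| *x != elem).collect()
}

fn main() {
    assert_eq!(not_equal(&[1, 2, 3], &[1, 0, 3]), vec![false, true, false]);
    assert_eq!(not_equal_elem(&[1, 2, 1], 1), vec![false, true, false]);
}
```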
* Enhance PyTorchRecorder to pass top level key to extract state_dict
This is needed for Whisper weight pt files.
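The idea behind passing a top-level key can be sketched with plain maps: some checkpoint files nest the state dict under an outer key, so the recorder must descend one level before reading weights. Everything below (the key name, the `extract_state_dict` helper, the map types) is a hypothetical illustration, not the PyTorchRecorder's real interface.

```rust
use std::collections::HashMap;

type StateDict = HashMap<String, Vec<f32>>;

// Hypothetical sketch: descend into the checkpoint by a caller-supplied
// top-level key before treating the inner map as the state dict.
fn extract_state_dict<'a>(
    checkpoint: &'a HashMap<String, StateDict>,
    top_level_key: &str,
) -> Option<&'a StateDict> {
    checkpoint.get(top_level_key)
}

fn main() {
    let mut weights = StateDict::new();
    weights.insert("encoder.weight".into(), vec![0.1, 0.2]);
    let mut ckpt = HashMap::new();
    ckpt.insert("model_state_dict".into(), weights);

    let sd = extract_state_dict(&ckpt, "model_state_dict").unwrap();
    assert!(sd.contains_key("encoder.weight"));
    assert!(extract_state_dict(&ckpt, "missing").is_none());
}
```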
* Fix missing hyphens
* Move top-level-key test under crates
* Add sub-crates as members of workspace
* Update Cargo.lock
* Add accidentally omitted line during merge
* Fix python main entrypoint in book example
* Remove candle windows safeguards (#1178)
* Bump candle-core from 0.3.3 to 0.4.1
* Remove windows current known issue
* Add int_random to int tensor ops
* Int random for tch backend
* Int random for burn-fusion
* int random for autodiff
* Int random for candle backend
* Int random for ndarray backend
* Int random for wgpu backend
* Merge imports
* Typo
* Shader file for int uniform distribution
* Create AutotuneOperationSet and public int_sum_dim_autotune
* Adjust bounds to 0..10
* Create uniform_int_kernel, unit tests, use new kernel
* Reduction kernels for regular and shared memory sum_dim int operations
* Macro that accommodates wgpu IntElement
* Add autotuning to int_mean_dim
* Use correct macro for Int autotuning
* Add int_mean_dim_shared_memory
* Add int_mean_dim and unit test
* Create autotunables for mean_dim
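The `sum_dim`/`mean_dim` reductions the commits above implement as wgpu kernels can be sketched sequentially on the CPU. This is a semantics-only sketch over a row-major matrix, under the assumption that the reduction runs along the last dimension; the real kernels parallelize this per output element, optionally staging partial sums in shared memory.

```rust
// CPU sketch of sum_dim / mean_dim semantics over a row-major rows x cols
// matrix, reducing along the column dimension (one result per row).
fn sum_dim(data: &[i64], rows: usize, cols: usize) -> Vec<i64> {
    (0..rows)
        .map(|r| data[r * cols..(r + 1) * cols].iter().sum())
        .collect()
}

// Integer mean: sum then divide by the reduced dimension's size.
fn mean_dim(data: &[i64], rows: usize, cols: usize) -> Vec<i64> {
    sum_dim(data, rows, cols)
        .into_iter()
        .map(|s| s / cols as i64)
        .collect()
}

fn main() {
    let data = [1i64, 2, 3, 4, 5, 6]; // 2 x 3 matrix
    assert_eq!(sum_dim(&data, 2, 3), vec![6, 15]);
    assert_eq!(mean_dim(&data, 2, 3), vec![2, 5]);
}
```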
* Run fmt
* Remove comment
* Finish resolving merge conflict, fix doc
* Make the element trait bound a parameter to reduce_tune_ops macro
* Update book
* Fix requested change
* Change range to [0, 255] and update test accordingly
* Forgot to include candle in last commit
* Fix comment
* Use correct int autotune for mean dim
* Fix typo; not sure how this passed earlier
* Resolve syntax issues from merge
* Fix cast_float
* Saving here
* Continue fixing merge conflicts, all tests pass locally
* Run fmt
* Change cast_float to cast_u32_to_float
* Make uniform_int_inner_loop safer
* Be even more explicit about u32 casts
* Skip an intermediate step and cast directly to u32
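The explicit-u32 handling described above can be illustrated with a small sketch: take a raw `u32` sample and map it into a bounded integer range using only `u32` arithmetic before the final cast. The function name and the modulo mapping are assumptions for illustration (modulo introduces a small bias for spans that do not divide `2^32`; a production kernel may correct for this).

```rust
// Sketch: map a raw u32 random sample into [low, high) with explicit u32
// arithmetic, casting to the signed int type only at the end.
// Assumes 0 < (high - low) <= u32::MAX. Modulo mapping is slightly biased.
fn u32_to_range(raw: u32, low: i64, high: i64) -> i64 {
    let span = (high - low) as u32;
    low + (raw % span) as i64
}

fn main() {
    // Matches the [0, 255] bounds mentioned in the commits above.
    for raw in [0u32, 7, 255, 1_000_000, u32::MAX] {
        let v = u32_to_range(raw, 0, 256);
        assert!((0..256).contains(&v));
    }
}
```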
* Replace JitElement + Element with IntElement
* Run fmt
* This should fix the CI
* This time for sure
* Running into issues with identity nodes
* Vec<RefCell<Node>> seems to work for this
* back to passing tests
* Reworked IO into separate struct
* Working towards exploiting topological ordering and more informative ident errors
* The passing of an initializer to coalesce is temporary
* Cleaning up dead code
* Handled unsqueeze
* Reworked node initialization and dim inference
* Mainly cleanup
* Changed how IO use is tracked, moved unsqueeze remapping out of dim inference
* `cargo xtask run-checks all` now passes
* Added a fixme and a few doc strings
* Removed println and dead code
* Spaces in doc strings
* Altered top sort to work on node proto, moved prior to node gen
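The topological ordering the ONNX import relies on can be sketched with Kahn's algorithm. This is a generic sketch over integer node ids with explicit edges; the real code orders `NodeProto` entries by their tensor-name input/output dependencies, and the helper name here is an assumption.

```rust
use std::collections::{HashMap, VecDeque};

// Kahn's-algorithm sketch of topological sorting, as used conceptually when
// ordering ONNX graph nodes before node generation. Nodes are 0..n, and
// (a, b) means "a must come before b".
fn topo_sort(n: usize, edges: &[(usize, usize)]) -> Vec<usize> {
    let mut indeg = vec![0usize; n];
    let mut adj: HashMap<usize, Vec<usize>> = HashMap::new();
    for &(a, b) in edges {
        adj.entry(a).or_default().push(b);
        indeg[b] += 1;
    }
    // Start from nodes with no unmet dependencies.
    let mut queue: VecDeque<usize> = (0..n).filter(|&i| indeg[i] == 0).collect();
    let mut order = Vec::with_capacity(n);
    while let Some(v) = queue.pop_front() {
        order.push(v);
        for &w in adj.get(&v).into_iter().flatten() {
            indeg[w] -= 1;
            if indeg[w] == 0 {
                queue.push_back(w);
            }
        }
    }
    order // shorter than n if the graph has a cycle
}

fn main() {
    assert_eq!(topo_sort(3, &[(0, 1), (1, 2)]), vec![0, 1, 2]);
    assert_eq!(topo_sort(2, &[]).len(), 2);
}
```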
* Update ir.rs
* Update from_onnx.rs
removed dead code
* Updated doc string
* CamelCased OnnxGraphBuilder
* Removed self import
* Add any, all op implementations for all tensor types
* Add ops to burn-book
* Fix formatting
* Refactor tensor operations from Numeric to BaseOps
* Fix book doc
* Fix comments and add more tests
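The `any`/`all` semantics added for all tensor kinds can be sketched in plain Rust. Sketch assumptions: for non-bool tensors an element counts as true when it is non-zero, and the free functions below stand in for the tensor methods.

```rust
// Sketch of any()/all() over a flattened non-bool tensor:
// an element is "truthy" when it is non-zero.
fn tensor_any(data: &[i32]) -> bool {
    data.iter().any(|&x| x != 0)
}

fn tensor_all(data: &[i32]) -> bool {
    data.iter().all(|&x| x != 0)
}

fn main() {
    assert!(tensor_any(&[0, 0, 3]));
    assert!(!tensor_any(&[0, 0, 0]));
    assert!(tensor_all(&[1, 2, 3]));
    assert!(!tensor_all(&[1, 0, 3]));
}
```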