* Added min_pair() and max_pair() methods to numeric Tensors
* Update book with added max_pair() and min_pair() methods.
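The semantics of the new `min_pair()`/`max_pair()` methods (elementwise min/max of two tensors of the same shape) can be sketched on plain slices; this is a hand-rolled model, not the Burn API:

```rust
// Sketch of elementwise pairwise min/max, modeled on plain slices
// rather than Burn tensors. Assumes both inputs have the same length.
fn min_pair(a: &[f32], b: &[f32]) -> Vec<f32> {
    a.iter().zip(b).map(|(x, y)| x.min(*y)).collect()
}

fn max_pair(a: &[f32], b: &[f32]) -> Vec<f32> {
    a.iter().zip(b).map(|(x, y)| x.max(*y)).collect()
}

fn main() {
    let a = [1.0, 4.0, 2.0];
    let b = [3.0, 0.5, 2.0];
    println!("{:?}", min_pair(&a, &b)); // [1.0, 0.5, 2.0]
    println!("{:?}", max_pair(&a, &b)); // [3.0, 4.0, 2.0]
}
```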
* Fix spelling typo in comments
* Update comments per change requests
* Fix tensor.equal_elem usage in book
* Add not_equal and not_equal_elem tensor ops
* Fix "element-wise" usage for correctness and uniformity
* Add bool_not_equal test
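The difference between the two new ops can be sketched in plain Rust: `not_equal` compares two tensors elementwise, while `not_equal_elem` compares every element against one scalar. A slice-based model, not the Burn signatures:

```rust
// not_equal: elementwise comparison of two same-length inputs.
fn not_equal(a: &[i32], b: &[i32]) -> Vec<bool> {
    a.iter().zip(b).map(|(x, y)| x != y).collect()
}

// not_equal_elem: compare each element against a single scalar.
fn not_equal_elem(a: &[i32], elem: i32) -> Vec<bool> {
    a.iter().map(|&x| x != elem).collect()
}

fn main() {
    println!("{:?}", not_equal(&[1, 2, 3], &[1, 5, 3])); // [false, true, false]
    println!("{:?}", not_equal_elem(&[1, 2, 3], 2)); // [true, false, true]
}
```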
* Enhance PyTorchRecorder to pass top level key to extract state_dict
This is needed for Whisper weight pt files.
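The shape of that change can be modeled with nested maps: some checkpoints (Whisper among them) nest the state dict under a top-level key, so the recorder needs that key to find the weights. Names and the `"model_state_dict"` key here are illustrative assumptions, not the recorder's actual types:

```rust
use std::collections::HashMap;

// Hypothetical model of a checkpoint whose weights sit under a
// top-level key instead of at the root of the file.
type StateDict = HashMap<String, Vec<f32>>;

fn extract_state_dict<'a>(
    checkpoint: &'a HashMap<String, StateDict>,
    top_level_key: &str,
) -> Option<&'a StateDict> {
    checkpoint.get(top_level_key)
}

fn main() {
    let mut state = StateDict::new();
    state.insert("encoder.weight".to_string(), vec![0.1, 0.2]);
    let mut ckpt = HashMap::new();
    ckpt.insert("model_state_dict".to_string(), state);
    let sd = extract_state_dict(&ckpt, "model_state_dict").unwrap();
    println!("{} tensors", sd.len()); // 1 tensors
}
```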
* Fix missing hyphens
* Move top-level-key test under crates
* Add sub-crates as members of workspace
* Update Cargo.lock
* Add accidentally omitted line during merge
* Fix python main entrypoint in book example
* Remove candle windows safeguards (#1178)
* Bump candle-core from 0.3.3 to 0.4.1
* Remove windows current known issue
* Add int_random to int tensor ops
* Int random for tch backend
* Int random for burn-fusion
* int random for autodiff
* Int random for candle backend
* Int random for ndarray backend
* Int random for wgpu backend
* Merge imports
* Typo
* Shader file for int uniform distribution
* Create AutotuneOperationSet and public int_sum_dim_autotune
* Adjust bounds to 0..10
* Create uniform_int_kernel, unit tests, use new kernel
* Reduction kernels for regular and shared memory sum_dim int operations
* Macro that accommodates wgpu IntElement
* Add autotuning to int_mean_dim
* Use correct macro for Int autotuning
* Add int_mean_dim_shared_memory
* Add int_mean_dim and unit test
* Create autotunables for mean_dim
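The reference math behind the kernels being autotuned can be sketched sequentially: `sum_dim`/`mean_dim` reduce along one axis of a row-major buffer (the real plain and shared-memory variants run on the GPU). Reduction along the last axis only, as an assumed simplification:

```rust
// Reference semantics for the tuned int reductions: reduce each row
// of a row-major (rows x cols) buffer along the last axis.
fn int_sum_dim(data: &[i32], cols: usize) -> Vec<i32> {
    data.chunks(cols).map(|row| row.iter().sum()).collect()
}

fn int_mean_dim(data: &[i32], cols: usize) -> Vec<i32> {
    // Integer mean truncates, matching integer division.
    int_sum_dim(data, cols).iter().map(|&s| s / cols as i32).collect()
}

fn main() {
    let data = [1, 2, 3, 4, 5, 6]; // 2 x 3
    println!("{:?}", int_sum_dim(&data, 3)); // [6, 15]
    println!("{:?}", int_mean_dim(&data, 3)); // [2, 5]
}
```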
* Run fmt
* Remove comment
* Finish resolving merge conflict, fix doc
* Make the element trait bound a parameter to reduce_tune_ops macro
* Update book
* Fix requested change
* Change range to [0, 255] and update test accordingly
* Forgot to include candle in last commit
* Fix comment
* Use correct int autotune for mean dim
* Fix typo - not sure how this passed earlier
* Resolve syntax issues from merge
* Fix cast_float
* Saving here
* Continue fixing merge conflicts, all tests pass locally
* Run fmt
* Change cast_float to cast_u32_to_float
* Make uniform_int_inner_loop safer
* Be even more explicit about u32 casts
* Skip an intermediate step and cast directly to u32
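The casting pipeline these commits converge on can be sketched as: take a raw `u32` sample, map it to a float in roughly [0, 1], scale to the requested range, and cast back to an integer. The function name mirrors the commits' `cast_u32_to_float`, but the bodies are an assumption, not the shader's code:

```rust
// Sketch of drawing a uniform int in [low, high) from a raw u32 sample.
fn cast_u32_to_float(bits: u32) -> f32 {
    bits as f32 / u32::MAX as f32 // roughly uniform in [0, 1]
}

fn uniform_int(bits: u32, low: i32, high: i32) -> i32 {
    let unit = cast_u32_to_float(bits);
    let v = low + (unit * (high - low) as f32) as i32;
    v.min(high - 1) // clamp the unit == 1.0 edge case back into range
}

fn main() {
    println!("{}", uniform_int(0, 0, 256)); // 0
    println!("{}", uniform_int(u32::MAX, 0, 256)); // 255
}
```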
* Replace JitElement + Element with IntElement
* Run fmt
* This should fix the CI
* This time for sure
* Add any, all op implementation for all tensor types
* Add op to burn-book
* Fix formatting
* Refactor tensor operations from numeric to BaseOps
* Fix book doc
* Fix comments and add more tests
* PyTorch config deserializer from .pt file
* Update pytorch-model.md
* Format the book section
* Update Cargo.lock
* Recommend resaving the config as JSON
* Fix comment wording
* fix(book): add missing second parameter to CrossEntropyLoss constructor
CrossEntropyLoss::new() expects two parameters, the pad_index and the device
* fix: fix missing closing parenthesis
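Why the constructor needs a `pad_index` can be sketched in plain Rust: padded target positions must be skipped when averaging the loss, otherwise padding tokens drag the loss toward the padding class. This is hand-rolled math, not Burn's `CrossEntropyLoss`:

```rust
// Mean cross-entropy over (logits row, target) pairs, skipping targets
// equal to pad_index. Assumes at least one non-padded target.
fn cross_entropy(logits: &[Vec<f32>], targets: &[usize], pad_index: usize) -> f32 {
    let mut total = 0.0;
    let mut count = 0;
    for (row, &t) in logits.iter().zip(targets) {
        if t == pad_index {
            continue; // padded positions contribute nothing
        }
        // log-sum-exp with max subtraction for numerical stability
        let max = row.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
        let log_sum = row.iter().map(|x| (x - max).exp()).sum::<f32>().ln() + max;
        total += log_sum - row[t]; // -log softmax(row)[t]
        count += 1;
    }
    total / count as f32
}

fn main() {
    // Uniform logits over 2 classes: loss is ln(2) ~= 0.693.
    println!("{}", cross_entropy(&[vec![0.0, 0.0]], &[0], 1));
}
```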
* Change DefaultFileRecorder to NamedMpkFileRecorder (no compression)
* Actually, safetensors does not have any checksum for data validation
* Update checksum explainer/recommendation
Updated the explanation at the beginning to match the actual changes to 'Cargo.toml'. Moved down the 'edition' property to match the style of the initial 'Cargo.toml' file. Removed unnecessary comment for a self-explanatory piece of code. Some rewording.
* Add narrow methods
* Revert "Add narrow methods"
This reverts commit 9371d87c79.
* Implement a shared version of narrow
* Correct test case
* Update book
* Improve tests
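The `narrow` semantics can be sketched on a row-major buffer: `narrow(dim, start, length)` returns the `length` entries along `dim` starting at `start`. Shown here for dim 0 only, as an assumed simplification of the shared implementation:

```rust
// narrow along dim 0 of a row-major (rows x cols) buffer:
// keep `length` rows starting at row `start`.
fn narrow_dim0(data: &[i32], cols: usize, start: usize, length: usize) -> Vec<i32> {
    data[start * cols..(start + length) * cols].to_vec()
}

fn main() {
    let m = [1, 2, 3, 4, 5, 6]; // 3 x 2
    println!("{:?}", narrow_dim0(&m, 2, 1, 2)); // [3, 4, 5, 6]
}
```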