* Initial commit to try to implement from_dataframes for a Burn dataset
* Added the beginnings of tests; removed a reference to self in a utility method
* Added a unit test for the dataframe module; added utility methods to convert polars rows to Burn dataset values
* Put the polars dependency and the dataframe module behind a feature flag
* testing both methods
* Added an if let Ok so that it doesn't panic if we can't convert a serde map to a JSON string; added comments
* Using the polars serializer; renaming variables
* Removed prints; just unwrapping
* setting feature flags back
* Return Value::Null rather than panicking if we can't serialize a list value; no longer convert to an object before converting to a string; no longer use serde_json's to_string method (see the serialization sketch below)
* Use native deserializer instead of serde_json
* Added support for lazyframes; added support for deserializing a few more data types; added a few more tests
* Remove lazy, add more testing and other fixes
* Update the book
* Remove lazy feature
* Put back lazy feature for polars
---------
Co-authored-by: Dilshod Tadjibaev <939125+antimora@users.noreply.github.com>
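For the panic-avoidance described above, here is a minimal sketch of the fallback, assuming a serde-serializable row value; the helper name to_json_or_null is illustrative, not part of the actual dataset code.

```rust
use serde_json::Value;

// Serialize any serde-serializable value to JSON, returning Value::Null
// instead of panicking when serialization fails, mirroring the fallback
// behavior described in the commits above.
fn to_json_or_null<T: serde::Serialize>(value: &T) -> Value {
    serde_json::to_value(value).unwrap_or(Value::Null)
}
```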
* Implement ONNX pad
* ONNX pad arguments fix
Pad now requires two or more arguments;
if the third argument is not given, it defaults to 0 (see the sketch after this group).
* Fix a bug in the input length fix
* Change panic comment
Change the panic comment about needing two inputs: the ONNX spec requires two mandatory inputs but allows up to two more optional arguments.
---------
Co-authored-by: JC <you@example.com>
Co-authored-by: mepatrick73 <pameu17@ulaval.ca>
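A hedged sketch of the Pad argument handling described above; the function and its shape are illustrative, not burn-import's actual node code. Per the ONNX spec, the data and pads inputs are required, while the constant value is optional and falls back to 0.

```rust
// Illustrative only: enforce Pad's required inputs and default the
// optional constant value to 0 when the third input is absent.
fn pad_constant_value(num_inputs: usize, constant_value: Option<f32>) -> f32 {
    assert!(
        num_inputs >= 2,
        "Pad: expected at least two inputs (data, pads), got {num_inputs}"
    );
    constant_value.unwrap_or(0.0)
}
```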
* Remove panic for squeeze when more than one axis is specified
* Remove extra Model()
* Change script to squeeze all singleton dimensions
* Revert change since Burn requires axes to be specified (see the shape-inference sketch after this group)
* Fix input tensor
* Try updating ONNX files again
* Add script for testing multiple axes along with new ONNX file
* Update squeeze.py comments
* Add squeeze_multiple model to tests
* Fix dim_inference
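Since Burn requires squeeze axes to be specified, the shape inference being tested above can be sketched as a standalone function; this is a hypothetical illustration, not burn-import's dim_inference code.

```rust
// Drop only the explicitly listed axes, asserting each one is a
// singleton dimension; remaining dimensions pass through unchanged.
fn squeeze_output_shape(input: &[usize], axes: &[usize]) -> Vec<usize> {
    input
        .iter()
        .copied()
        .enumerate()
        .filter(|&(i, dim)| {
            if axes.contains(&i) {
                assert_eq!(dim, 1, "squeeze axis {i} must be a singleton");
                false
            } else {
                true
            }
        })
        .map(|(_, dim)| dim)
        .collect()
}
```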
* Move QuantizationScheme to burn-tensor
* Refactor QuantizedTensorPrimitive to include the quantization strategy
* Fix QFloat tensor data display
* Refactor quantization methods to use scheme and qparams (on backend device; see the sketch below)
* Fix clippy
* Fix fmt
* Add qtensor primitive tests
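The scheme/qparams split above can be pictured roughly as follows; this is a simplified sketch, and the actual types in burn-tensor carry more detail.

```rust
// Simplified sketch: the scheme says *how* to quantize, while the
// parameters hold per-tensor values computed on the backend device.
enum QuantizationScheme {
    PerTensorAffine,
    PerTensorSymmetric,
}

struct QuantizationParameters {
    scale: f32,
    // Affine quantization also carries a zero-point offset;
    // symmetric quantization leaves this as None.
    offset: Option<i8>,
}
```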
* Added parameter trust_remote_code to the HF dataset call (usage sketched below).
* Removed the test module as it may break, causing false negatives.
Set default trust_remote_code to false.
Added an example that highlights the use case.
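A hypothetical usage sketch follows; the loader path and builder method name here are assumptions, not burn's confirmed API.

```rust
// Assumed import path and method name; check burn's
// HuggingfaceDatasetLoader docs for the exact API.
use burn::data::dataset::HuggingfaceDatasetLoader;

fn load() {
    // Opt in to datasets that execute remote code;
    // trust_remote_code defaults to false, per the commit above.
    let _dataset = HuggingfaceDatasetLoader::new("some_dataset")
        .with_trust_remote_code(true)
        .dataset("train");
}
```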
* Feat: implement ONNX ConstantOfShape in burn-import (semantics sketched after this group)
* Introduce a shape type and use it in ConstantOfShape and Shape
* Add tests for bool and int tensors for ConstantOfShape
* Fix ONNX test generation
* Undo comment
---------
Co-authored-by: Dilshod Tadjibaev <939125+antimora@users.noreply.github.com>
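For reference, ConstantOfShape takes a shape input and produces a tensor of that shape filled with a constant (0 when the value attribute is absent). A minimal sketch of the semantics, not burn-import's generated code:

```rust
// Fill a flat buffer of numel(shape) elements with the node's value
// attribute, defaulting to 0.0 when the attribute is missing.
fn constant_of_shape(shape: &[usize], value: Option<f32>) -> Vec<f32> {
    let numel: usize = shape.iter().product();
    vec![value.unwrap_or(0.0); numel]
}
```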
* Add QuantizationBackend, QTensorOps and QTensor
* Refactor QTensorOps as part of Backend trait
* Add tensor dequantize, QFloat dtype, and default affine/symmetric quantization (see the worked example below)
* Add ndarray default quantization implementation
* Fix clippy
* Add rayon parallel iter
* Add quantization operations to book
* Add q_shape and q_device ops to avoid converting the tensor just to get attributes
* Implement autodiff grad ops
* Mark autodiff todo for QAT
* Remove note
* Add q_inner and q_from_inner
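As a worked example of the default affine scheme mentioned above, here is a standalone scalar sketch; burn's actual q_* ops operate on tensor primitives, not single values.

```rust
// Per-tensor affine quantization: q = round(x / scale) + zero_point,
// clamped to the i8 range; dequantization reverses the mapping.
fn quantize_affine(x: f32, scale: f32, zero_point: i8) -> i8 {
    let q = (x / scale).round() as i32 + zero_point as i32;
    q.clamp(i8::MIN as i32, i8::MAX as i32) as i8
}

fn dequantize_affine(q: i8, scale: f32, zero_point: i8) -> f32 {
    (q as i32 - zero_point as i32) as f32 * scale
}
```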