* Add QuantizationBackend, QTensorOps and QTensor
* Refactor QTensorOps as part of Backend trait
* Add tensor dequantize, QFloat dtype and default affine/symmetric quant (see the sketch below)
* Add ndarray default quantization implementation
* Fix clippy
* Add rayon parallel iter
* Add quantization operations to book
* Add q_shape and q_device ops to avoid converting the tensor just to get attributes
* Implement autodiff grad ops
* Mark autodiff todo for QAT
* Remove note
* Add q_inner and q_from_inner
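
For reference, a minimal sketch of the affine and symmetric quantization schemes mentioned above, assuming an int8 target range; Burn's actual QTensor/QFloat implementation may differ:

```rust
// Hypothetical sketch of affine quantization to int8. Symmetric quantization
// is the special case where zero_point == 0.
fn quantize_affine(x: f32, scale: f32, zero_point: i32) -> i8 {
    ((x / scale).round() as i32 + zero_point).clamp(-128, 127) as i8
}

// Dequantize back to float: x ~= (q - zero_point) * scale.
fn dequantize_affine(q: i8, scale: f32, zero_point: i32) -> f32 {
    (q as i32 - zero_point) as f32 * scale
}

fn quantize_symmetric(x: f32, scale: f32) -> i8 {
    quantize_affine(x, scale, 0)
}
```
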
* Make backend names in JSON reports match burnbench CLI
- add `config_name` to `Backend` trait
- add `backend_config_name` to `Benchmark` trait
- fix documentation for JSON reports to use correct unit of time
* Revert "Make backend names in JSON reports match burnbench CLI"
This reverts commit a09edb6389.
* [backend-comparison] Serialize the feature name passed to burnbench
---------
Co-authored-by: syl20bnr <sylvain.benner@gmail.com>
Uploading is enabled with the already implemented --share argument
of the burnbench command line tool.
The burnbench binary passes the server URL and the auth token to the
cargo bench process using the additional arguments --sharing-url and
--sharing-token respectively.
The persistence module then uploads the results when a --sharing-url
is provided.
For now the URL is hardcoded: the endpoint is the production server
when compiling in release mode and localhost otherwise.
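
A minimal sketch of that argument forwarding, assuming a hypothetical `run_bench` helper; the actual plumbing in burnbench may differ:

```rust
use std::process::{Command, ExitStatus};

// Hypothetical sketch: forward the sharing URL and auth token from the
// burnbench binary to the spawned `cargo bench` process.
fn run_bench(bench: &str, url: &str, token: &str) -> std::io::Result<ExitStatus> {
    Command::new("cargo")
        .args(["bench", "--bench", bench, "--"])
        // Extra arguments consumed by the benchmark's persistence module.
        .args(["--sharing-url", url, "--sharing-token", token])
        .status()
}
```
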
* Refactor serialization of benchmarks
* flatten benchmark data to make it easier to save documents to a database and
query them (see the record sketch after this list)
* split some information into their own fields like backend and device
* add new serialized info:
- computed values (mean, median, variance, min, max)
- number of samples
- operation name
- tensor shapes if any
* serialize to separate files, one file per benchmark run
* simplify persistence module to only a save method
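
A minimal sketch of what such a flattened record could look like, assuming serde; the field names are illustrative, not the actual ones used by the persistence module:

```rust
use serde::Serialize;

// Illustrative flattened benchmark record based on the fields listed above;
// timing values are shown as f64 here, but the real unit/type may differ.
#[derive(Serialize)]
struct BenchmarkRecord {
    backend: String,
    device: String,
    operation: String,
    shapes: Vec<Vec<usize>>, // tensor shapes, if any
    num_samples: usize,
    mean: f64,
    median: f64,
    variance: f64,
    min: f64,
    max: f64,
}
```
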
* Update bench save file format to use name and uuid
* Compute serialized fields count automatically via a macro
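
One way to count fields automatically with a declarative macro, as a hedged sketch; the actual macro in the codebase may differ:

```rust
// Expands each matched field name to `1usize` so the sum yields the count.
macro_rules! replace_with_one {
    ($_field:ident) => {
        1usize
    };
}

macro_rules! count_fields {
    ($($field:ident),* $(,)?) => { 0usize $(+ replace_with_one!($field))* };
}

// Hypothetical usage with illustrative field names.
const FIELD_COUNT: usize = count_fields!(backend, device, mean, median, variance);
```
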
* Rework naming of benchmarks and shapes, and add an options field
Remove the operations field
Correctly create one file per benchmark run
* Serialize benchmark num_repeats
* Fix expect message to follow the 'should' convention
* Cargo fmt :-)
* Make Clippy happy
* Save files in the burn subdirectory
* Change name of custom_gelu bench to just gelu
* Remove num_repeats from backend-comparison benchmarks
* Fix wrong variable name used to compute the median
* Remove false positive possibility in test_mean_duration