Commit Graph

1083 Commits

Author SHA1 Message Date
Laurent Mazare 21109e1983
Recommend using maturin. (#717) 2023-09-02 16:19:35 +01:00
Laurent Mazare ad796eb4be
More quantized llama in python. (#716)
* More quantized llama in python.

* Expose a couple more functions.

* Apply the last layer.

* Use the vocab from the ggml files.
2023-09-02 13:41:48 +01:00
Laurent Mazare e8e33752f4
Sketch a quantized llama using the pyo3 api. (#715)
* Sketch a quantized llama using the pyo3 api.

* Add more ops.

* Expose a few more functions to use in the quantized model.

* Rope embeddings.

* Get the forward pass to work.
2023-09-02 11:26:05 +01:00
Laurent Mazare dabaa479b9
Update README.md (#714) 2023-09-02 07:56:12 +01:00
Laurent Mazare 2c1df6bba1
Add a repeat penalty to the llama2-c command line example. (#713)
* Add a repeat penalty to the llama2-c command line example.

* Another fix attempt.
2023-09-01 20:38:58 +01:00
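
The repeat penalty added in #713 (and in the wasm example, #709) follows the common llama.cpp-style scheme: before sampling, the logits of tokens that already appear in the recent context are pushed down. A minimal sketch, assuming a plain logits slice rather than the example's actual code:

```rust
// Hypothetical helper illustrating a llama.cpp-style repeat penalty; the
// candle example's real implementation may differ. `penalty` > 1.0 makes
// tokens already present in `context` less likely to be sampled again.
fn apply_repeat_penalty(logits: &mut [f32], penalty: f32, context: &[u32]) {
    for &token in context {
        if let Some(logit) = logits.get_mut(token as usize) {
            // Dividing a positive logit (or multiplying a negative one) by the
            // penalty always lowers that token's probability.
            if *logit >= 0.0 {
                *logit /= penalty;
            } else {
                *logit *= penalty;
            }
        }
    }
}
```
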
Laurent Mazare 4d56cef583
Handle the empty sequence case properly. (#712)
* Handle the empty sequence case properly.

* Proper fix.
2023-09-01 20:12:30 +01:00
Laurent Mazare 19042962d5
Whisper fix (#711)
* Remove unnecessary file.

* Whisper fix.
2023-09-01 20:04:07 +01:00
Laurent Mazare 731e3ffb03
Remove unnecessary file. (#710) 2023-09-01 19:42:23 +01:00
Laurent Mazare 2fef14cb14
Add a repeat penalty to the llama2.c wasm example. (#709) 2023-09-01 19:32:28 +01:00
Laurent Mazare 1e5b2cc1d5
Add some quantized functions to pyo3. (#708) 2023-09-01 19:45:36 +02:00
Laurent Mazare 2ed78ab336
Support for quantized tensors in the python api. (#706)
* Add more pyo3 support.

* Add some support for quantized tensors in pyo3.

* Add an arc layer on qmatmul.

* Add the quantized matmul.

* Quantization support.

* More quantization support.

* Test the python quantization.
2023-09-01 15:53:42 +01:00
Laurent Mazare 237323c2bc
Cleanup the pyo3 setup. (#705) 2023-09-01 14:26:18 +01:00
Laurent Mazare af552a5274
Fix the rnn tests for accelerate. (#704) 2023-09-01 13:21:38 +01:00
Laurent Mazare 7529531056
Add the optimizer trait. (#702) 2023-09-01 12:55:39 +01:00
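
The optimizer trait from #702 abstracts the parameter-update step so a training loop can be written against any optimizer. A self-contained toy sketch of the idea; this is not candle-nn's actual trait or signatures:

```rust
// Toy illustration only: the training loop calls `step`, and each concrete
// optimizer decides how the parameters move given their gradients.
trait Optimizer {
    fn step(&mut self, params: &mut [f32], grads: &[f32]);
}

// Plain SGD as the simplest possible implementor.
struct Sgd {
    lr: f32,
}

impl Optimizer for Sgd {
    fn step(&mut self, params: &mut [f32], grads: &[f32]) {
        for (p, g) in params.iter_mut().zip(grads.iter()) {
            *p -= self.lr * g; // gradient-descent update
        }
    }
}
```
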
Laurent Mazare f2d476ca65
Replace the discord link. (#701) 2023-09-01 09:43:55 +01:00
Laurent Mazare f9f482d4e5
Add some doc to the varbuilder. (#700) 2023-09-01 08:28:35 +01:00
Lennard 9736236175
Allow retrieving and setting prefix of VarBuilder (#699) 2023-09-01 08:08:41 +01:00
Laurent Mazare 30a4b593d7
More ops again. (#697) 2023-08-31 22:28:48 +01:00
Laurent Mazare 949f1eae6f
Implement a couple more binary ops. (#693) 2023-08-31 21:30:15 +01:00
Laurent Mazare 7cef35c84d
Tweak some quantized args (#692)
* Print the args + change the default temp/repeat penalty.

* Minor formatting tweak.
2023-08-31 17:25:21 +01:00
Laurent Mazare 7509c98970
Interactive mode for the quantized model. (#690) 2023-08-31 10:52:42 +01:00
Laurent Mazare 94aa234dfd
Add the kv-cache to the whisper wasm version. (#689)
* Add the kv-cache to the whisper wasm version.

* Improve the handling of special tokens.
2023-08-31 09:37:44 +01:00
Laurent Mazare db59816087
Add a GRU layer. (#688)
* Add a GRU layer.

* Fix the n gate computation.
2023-08-31 08:43:10 +01:00
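
For reference, #688 corresponds to the standard GRU cell; the "n gate" mentioned in the follow-up fix is the candidate state n_t. Assuming the usual PyTorch-style formulation:

```latex
r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{t-1} + b_{hr})
z_t = \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{t-1} + b_{hz})
n_t = \tanh(W_{in} x_t + b_{in} + r_t \odot (W_{hn} h_{t-1} + b_{hn}))
h_t = (1 - z_t) \odot n_t + z_t \odot h_{t-1}
```
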
Laurent Mazare d210c71d77
Set the learning rate. (#687) 2023-08-31 08:03:40 +01:00
Laurent Mazare 8e84d8a59b
Llama2.c wasm module. (#686) 2023-08-31 07:44:32 +01:00
Radamés Ajna 9bd486fb96
Add Yolo Pose to JS Example (#684)
* add support for yolo pose models

* fix copy
2023-08-31 06:32:57 +01:00
Laurent Mazare eaf760a751
Add a python variant for the lstm test. (#682) 2023-08-30 22:32:08 +01:00
Radamés Ajna 1d0bb48fae
Improve Whisper WASM UI example (#669)
* wip add module and js worker example

* params

* clean up, send error

* final UI with whisper webworker

* add simple instructions
2023-08-30 20:35:41 +02:00
Laurent Mazare 21e1c73892
Add a LSTM test. (#681)
* Add a LSTM test.

* Clippy.
2023-08-30 20:05:42 +02:00
Laurent Mazare 2047d34b7c
More robust tests (so that they pass on accelerate). (#679) 2023-08-30 18:10:10 +01:00
Laurent Mazare 9874d843f1
Fix the accelerate build (#678)
* Cosmetic changes.

* Fix the accelerate build for tanh.
2023-08-30 18:31:14 +02:00
Laurent Mazare 7d753d3acd
Mnist training dropout (#677)
* Use dropout in the mnist training.

* Fix.
2023-08-30 16:41:01 +01:00
Laurent Mazare 3159982a89
Add a Dropout layer (#676)
* Add a dropout layer.

* Add an actual layer.
2023-08-30 16:19:28 +01:00
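
Dropout (#676), used by the mnist training change just above (#677), zeroes activations at random during training. A minimal sketch of the standard "inverted" formulation on a plain slice, not the candle-nn layer itself (assumes the rand crate):

```rust
// Each element is zeroed with probability `p` during training and the
// survivors are scaled by 1 / (1 - p) so the expected activation is
// unchanged; at inference time dropout is the identity.
fn dropout(xs: &mut [f32], p: f32, training: bool) {
    if !training || p <= 0.0 {
        return;
    }
    let scale = 1.0 / (1.0 - p);
    for x in xs.iter_mut() {
        if rand::random::<f32>() < p {
            *x = 0.0;
        } else {
            *x *= scale;
        }
    }
}
```
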
Laurent Mazare ad8a62dbf5
Add tanh. (#675)
* Add tanh.

* Use tanh in the lstm block.

* Add a test for tanh forward and backward passes.
2023-08-30 13:54:50 +01:00
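
The tanh backward pass tested in #675 only needs the forward output, since

```latex
\frac{d}{dx}\tanh(x) = 1 - \tanh^2(x)
```
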
Laurent Mazare f35b9f6baa
Add some recurrent neural networks (#674)
* Add the rnn module.

* More LSTM.

* Implement the RNN forward pass.

* More forward pass for LSTM.
2023-08-30 13:27:09 +01:00
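
For reference, the LSTM forward pass from #674 follows the standard equations (assuming the usual PyTorch-style formulation):

```latex
i_t = \sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{t-1} + b_{hi})
f_t = \sigma(W_{if} x_t + b_{if} + W_{hf} h_{t-1} + b_{hf})
g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{t-1} + b_{hg})
o_t = \sigma(W_{io} x_t + b_{io} + W_{ho} h_{t-1} + b_{ho})
c_t = f_t \odot c_{t-1} + i_t \odot g_t
h_t = o_t \odot \tanh(c_t)
```
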
Laurent Mazare 618f4e4c78
Add some documentation. (#673)
* Add some documentation.

* Bump the crate version.
2023-08-30 11:54:00 +01:00
Laurent Mazare 5ac0a98f01
Changelog update. (#672) 2023-08-30 09:27:56 +01:00
Laurent Mazare 393690387f
Support dilation in conv-transpose2d. (#671) 2023-08-30 09:22:00 +01:00
Laurent Mazare 9b25113393
Small cleanups (avoid some possible mutations) (#670)
* More mut cleanup.

* Factor out some common bits.
2023-08-30 08:54:00 +01:00
Laurent Mazare a1a5ab8b0a
Neon optimized vecdot (#666)
* Q5k vecdot.

* Add the q3k vecdot.

* Q2k vecdot.

* Move the quantized model to its own file.
2023-08-29 22:28:46 +01:00
Laurent Mazare 59b731de99
Add the powf op. (#664)
* Add the powf op.

* Cuda kernels and backprop.

* Add a test.
2023-08-29 20:48:18 +01:00
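
The backprop for powf (#664) follows from the power rule; for a constant exponent p,

```latex
y = x^p \quad\Rightarrow\quad \frac{\partial y}{\partial x} = p \, x^{p-1}
```

so the incoming gradient is scaled by p * x^(p-1).
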
Laurent Mazare 2d3fcad267
Simplify usage of the pool functions. (#662)
* Simplify usage of the pool functions.

* Small tweak.

* Attempt at using apply to simplify the convnet definition.
2023-08-29 19:12:16 +01:00
Laurent Mazare b31d41e26a
Add a convnet training example. (#661)
* Add a convnet example.

* Dataset fix.

* Randomize batches.
2023-08-29 18:23:01 +01:00
Laurent Mazare 71221559d3
Fix the dilated convolutions. (#659) 2023-08-29 16:37:42 +01:00
Laurent Mazare a044907ffc
Dilated convolutions (#657)
* Add the dilation parameter.

* Restore the basic optimizer example.

* Dilation support in cudnn.

* Use the dilation parameter in the cpu backend.

* More dilation support.

* No support for dilation in transposed convolutions.

* Add dilation to a test.

* Remove a print.

* Helper function.
2023-08-29 16:12:11 +01:00
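
With dilation d, a kernel of size k covers an effective extent of d*(k-1)+1 input positions, so the output size for the dilated convolutions in #657/#659 follows the standard formula:

```latex
H_{out} = \left\lfloor \frac{H_{in} + 2 \cdot \mathrm{pad} - d \cdot (k - 1) - 1}{\mathrm{stride}} \right\rfloor + 1
```
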
Lukas Kreussel ee8bb1bde1
Add `avx` implementations of `q2k`, `q3k` and `q5k` vec-dot functions (#654)
* `q2k` avx implementation

* `q3k` avx implementation

* `q5k` avx implementation

* `avx` make masks constant

* clippy stuff
2023-08-29 13:35:56 +01:00
Nicolas Patry 3d2d3c7edb
Merge pull request #658 from huggingface/upgrade_hf_hub2
Upgrading hf-hub (for windows support, removing symlink requirement).
2023-08-29 14:32:15 +02:00
Nicolas Patry 1aca6fa291 Upgrading hf-hub. 2023-08-29 14:18:54 +02:00
Nicolas Patry 4ed202447e Upgrading hf-hub. 2023-08-29 14:14:26 +02:00
Laurent Mazare 1d6bff53fc
Changelog update. (#656) 2023-08-29 12:55:56 +01:00