* Stable Diffusion 3.5 support.
* Clippy fixes.
* CFG fix.
* Remove some unnecessary clones.
* Avoid duplicating some of the code.
* Release the mmdit model earlier to reduce memory usage.
* Stella_en_1.5B_v5
* Separated creation. This is a critical step for numerical accuracy and will be documented in the README.
* EmbedDim requires deriving Clone and Copy.
* WIP: example
* Examples added
* a little more in the README
* Add stable diffusion 3 example
* Add get_qkv_linear to handle different dimensionality in linears
* Add stable diffusion 3 example
* Add use_quant_conv and use_post_quant_conv for the vae in stable diffusion (config sketch below)
* Adapt the existing AutoEncoderKLConfig to the change
* Add forward_until_encoder_layer to ClipTextTransformer
* Rename the sd3 config to sd3_medium in mmdit; minor clean-up
* Enable flash-attn for the mmdit impl when the feature is enabled
* Add the sd3 example codebase
* Add documentation
* Credit references
* Pass the cargo fmt test
* Pass the clippy test
* fix typos
* expose cfg_scale and time_shift as options
* Replace the sample image with a JPG version. Change the image output format accordingly.
* emit meaningful error messages
* remove the tail-end assignment in sd3_vae_vb_rename
* remove the CUDA requirement
* use default_value in clap args
* add use_flash_attn to turn flash-attn on/off for MMDiT at runtime (args sketch below)
* resolve clippy errors and warnings
* use default_value_t
* Pin the web-sys dependency.
* Clippy fix.
---------
Co-authored-by: Laurent <laurent.mazare@gmail.com>
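A simplified sketch of what the use_quant_conv / use_post_quant_conv change could look like on the config side; the struct below is a stand-in, not the actual candle definition, and the defaults are assumptions (SD3's VAE drops both convolutions and widens the latent space to 16 channels):

```rust
/// Simplified stand-in for the VAE config, showing the two new flags.
#[derive(Debug, Clone)]
struct AutoEncoderKlConfig {
    latent_channels: usize,
    use_quant_conv: bool,
    use_post_quant_conv: bool,
}

impl Default for AutoEncoderKlConfig {
    fn default() -> Self {
        // Pre-SD3 VAEs keep both convolutions enabled.
        Self {
            latent_channels: 4,
            use_quant_conv: true,
            use_post_quant_conv: true,
        }
    }
}

fn main() {
    // SD3 drops both convolutions and uses a 16-channel latent space.
    let sd3 = AutoEncoderKlConfig {
        latent_channels: 16,
        use_quant_conv: false,
        use_post_quant_conv: false,
    };
    println!("{sd3:?}");
}
```

And the clap-related items (cfg_scale, time_shift, default_value_t, use_flash_attn) might look roughly like this; the option names follow the commit messages above, while the default values are made up for illustration (assumes clap with the derive feature):

```rust
use clap::Parser;

/// Hypothetical argument struct for the SD3 example.
#[derive(Parser, Debug)]
struct Args {
    /// Classifier-free guidance scale.
    #[arg(long, default_value_t = 4.0)]
    cfg_scale: f64,

    /// Time-shift factor for the sampling schedule.
    #[arg(long, default_value_t = 3.0)]
    time_shift: f64,

    /// Turn flash-attn on for the MMDiT blocks (off by default).
    #[arg(long)]
    use_flash_attn: bool,
}

fn main() {
    let args = Args::parse();
    println!("{args:?}");
}
```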
* start to impl chinese clip
* impl vision model
* copy code from bert
* refactor use statements
* refactor use statements again
* fix text model
* refactor
* try to fix text model
* tuning
* tuning chinese clip
* delete unused code
* revert code
* Clippy fixes.
* Also apply cargo fmt.
---------
Co-authored-by: laurent <laurent.mazare@gmail.com>
* add bert for masked lm (see the sketch after this list)
* working example
* add example readme
* Clippy fix.
* And apply rustfmt.
---------
Co-authored-by: Laurent <laurent.mazare@gmail.com>
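The decoding step behind the masked-LM example can be sketched independently of the candle API; fill_masks and the ids below are illustrative, not actual library calls: for each [MASK] token, take the argmax over that position's vocabulary logits.

```rust
/// Replace each `mask_id` token with the argmax of that position's
/// vocabulary logits; all other positions pass through unchanged.
fn fill_masks(token_ids: &[u32], mask_id: u32, logits: &[Vec<f32>]) -> Vec<u32> {
    token_ids
        .iter()
        .zip(logits)
        .map(|(&id, row)| {
            if id != mask_id {
                return id;
            }
            row.iter()
                .enumerate()
                .max_by(|a, b| a.1.total_cmp(b.1))
                .map(|(i, _)| i as u32)
                .unwrap_or(id)
        })
        .collect()
}

fn main() {
    let ids = [7, 103, 9]; // 103 standing in for the [MASK] id
    let logits = vec![vec![0.0; 4], vec![0.1, 0.9, 0.3, 0.2], vec![0.0; 4]];
    assert_eq!(fill_masks(&ids, 103, &logits), vec![7, 1, 9]);
}
```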
* Add Pixtral.
* More pixtral vision encoder.
* Sketch a pixtral example.
* Sketch a pixtral example.
* Better image loading.
* Support loading images embedded in safetensor files.
* Clippy fixes.
* Add the llava multimodal adapter.
* Add more of the llava bits.
* Add the pixtral config.
* More pixtral inference.
* Add the text generation bits.
* Get the example to work.
* Bugfix.
* Run some bits of the model in f32.
* Blessed version :)
* Better rope frequency computations (see the sketch after this list).
* README update.
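The rope frequency computation follows the standard rotary-embedding formula, inv_freq[i] = 1 / theta^(2i/d); a minimal sketch, with the base theta = 10_000 used only as the common default (pixtral's actual base may differ):

```rust
/// Standard RoPE inverse frequencies for a per-head dimension `head_dim`.
fn rope_inv_freqs(head_dim: usize, theta: f64) -> Vec<f64> {
    (0..head_dim / 2)
        .map(|i| 1.0 / theta.powf(2.0 * i as f64 / head_dim as f64))
        .collect()
}

fn main() {
    let freqs = rope_inv_freqs(64, 10_000.0);
    println!("{:?}", &freqs[..4]);
}
```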
* Add the SigLIP model.
* Add more to the forward pass of the vision model.
* Complete the forward pass.
* Add the siglip example.
* Fix.
* Another fix.
* Get everything in place.
* Add a readme.
* Quantized version of flux.
* More generic sampling (see the sketch after this list).
* Hook the quantized model.
* Use the newly minted gguf file.
* Fix for the quantized model.
* Default to avoiding the faster CUDA kernels.
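"More generic sampling" can be illustrated with a plain-Rust temperature sampler; this is a sketch, not candle's LogitsProcessor API, and it assumes the rand crate (0.8-style API) as a dependency:

```rust
use rand::Rng;

/// Temperature sampling over raw logits: a non-positive temperature
/// degenerates to argmax, otherwise sample from softmax(logits / t).
fn sample(logits: &[f32], temperature: f32, rng: &mut impl Rng) -> usize {
    if temperature <= 0.0 {
        return logits
            .iter()
            .enumerate()
            .max_by(|a, b| a.1.total_cmp(b.1))
            .map(|(i, _)| i)
            .unwrap();
    }
    // Numerically stable softmax weights.
    let max = logits.iter().fold(f32::NEG_INFINITY, |m, &l| m.max(l));
    let weights: Vec<f32> = logits
        .iter()
        .map(|&l| ((l - max) / temperature).exp())
        .collect();
    // Draw from the categorical distribution the weights define.
    let total: f32 = weights.iter().sum();
    let mut r = rng.gen::<f32>() * total;
    for (i, &w) in weights.iter().enumerate() {
        r -= w;
        if r <= 0.0 {
            return i;
        }
    }
    weights.len() - 1
}

fn main() {
    let mut rng = rand::thread_rng();
    println!("sampled token {}", sample(&[1.0, 3.0, 0.5], 0.8, &mut rng));
}
```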
* Add a RotatingKVCache.
* Add some KvCache tests.
* Test the reset too.
* More kv-cache testing.
* More tests for the rotating kv-cache.
* Improve the API for the rotating cache so that the whole src tensor gets returned when it's overlarge (see the sketch after this list).
* Handle contiguity + bugfix + use in mimi.
* Add a way to test the mimi streaming mode.
* Mimi streaming fixes.
* More rotating kv-cache.
* Fix the attn mask generation.
* Handle the abs case.
* Add some tests for the generated mask.
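A minimal plain-Rust sketch of the rotating-cache idea, using scalars in place of tensors; RotatingCache is illustrative, not the candle type, and the real implementation also produces the matching attention mask:

```rust
/// Ring-buffer cache: once full, new entries overwrite the oldest ones.
struct RotatingCache {
    buf: Vec<f32>,
    capacity: usize,
    offset: usize, // next write slot, wraps modulo capacity
}

impl RotatingCache {
    fn new(capacity: usize) -> Self {
        Self { buf: Vec::with_capacity(capacity), capacity, offset: 0 }
    }

    /// Returns what attention should run over. When `src` is at least as
    /// long as the capacity, the whole `src` is returned (the "overlarge"
    /// case) and the cache keeps only its tail.
    fn append(&mut self, src: &[f32]) -> Vec<f32> {
        if src.len() >= self.capacity {
            self.buf = src[src.len() - self.capacity..].to_vec();
            self.offset = 0;
            return src.to_vec();
        }
        for &v in src {
            if self.buf.len() < self.capacity {
                self.buf.push(v);
            } else {
                self.buf[self.offset] = v; // overwrite the oldest entry
            }
            self.offset = (self.offset + 1) % self.capacity;
        }
        // Note: contents are in ring order, not chronological order.
        self.buf.clone()
    }
}

fn main() {
    let mut cache = RotatingCache::new(4);
    assert_eq!(cache.append(&[1., 2.]), vec![1., 2.]);
    assert_eq!(cache.append(&[3., 4., 5.]), vec![5., 2., 3., 4.]);
    assert_eq!(cache.append(&[6., 7., 8., 9., 10.]).len(), 5); // overlarge
}
```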
* Adding Granite 7b Instruct model example
* Minor refactoring to make it a little more idiomatic
* Clippy fixes.
* Adding a README with some information about supported Granite models
* Changing the default prompt to better accommodate the language modality of the Granite 7b Instruct model
---------
Co-authored-by: Laurent <laurent.mazare@gmail.com>
* Add the mimi audio-tokenizer.
* Formatting tweaks.
* Add a full example.
* Use the transformers names.
* More renamings.
* Get encoding and decoding to work.
* Clippy fixes.
* Allow loading images with a given std and mean (see the sketch after this list)
* OpenCLIP text encoder component
* Two MobileCLIP models
* Clippy fixes.
---------
Co-authored-by: Laurent <laurent.mazare@gmail.com>
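Loading images with a caller-supplied std and mean boils down to per-channel normalization; a sketch, where normalize_rgb is hypothetical and the constants in main are the OpenAI-CLIP-style values, used here only as placeholders:

```rust
/// Scale u8 RGB pixels to [0, 1], then normalize each channel
/// as (value - mean[c]) / std[c].
fn normalize_rgb(pixels: &[u8], mean: [f32; 3], std: [f32; 3]) -> Vec<f32> {
    pixels
        .chunks_exact(3)
        .flat_map(|px| (0..3).map(move |c| (px[c] as f32 / 255.0 - mean[c]) / std[c]))
        .collect()
}

fn main() {
    let mean = [0.48145466, 0.4578275, 0.40821073];
    let std = [0.26862954, 0.26130258, 0.27577711];
    println!("{:?}", normalize_rgb(&[255, 0, 128], mean, std));
}
```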
* silero-vad v5 example
This change adds an example of how to run silero-vad v5
* PR: rename 'vad' to 'silero-vad'
* Update README.md
---------
Co-authored-by: Laurent Mazare <laurent.mazare@gmail.com>
* Start sketching parler-tts support.
* Implement the attention.
* Add the example code.
* Fix the example.
* Add the description + t5 encode it.
* More of the parler forward pass.
* Fix the positional embeddings.
* Support random sampling in generation.
* Handle EOS.
* Add the python decoder.
* Proper causality mask (see the sketch below).
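The causality mask above is the generic additive causal mask: position i may attend only to positions j <= i. A sketch (causal_mask is illustrative, not the parler-tts code):

```rust
/// Additive causal mask: 0.0 where attention is allowed (j <= i),
/// -inf where it is not; added to the attention scores before softmax.
fn causal_mask(seq_len: usize) -> Vec<Vec<f32>> {
    (0..seq_len)
        .map(|i| {
            (0..seq_len)
                .map(|j| if j <= i { 0.0 } else { f32::NEG_INFINITY })
                .collect()
        })
        .collect()
}

fn main() {
    for row in causal_mask(4) {
        println!("{row:?}");
    }
}
```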