Finetuning-LLM
Finetuning Mistral with QLoRA and PEFT
This section provides a guide to finetuning a Mistral model with QLoRA and PEFT. The process involves the following steps:
- Setup Environment: Install the dependencies required for Mistral, QLoRA, and PEFT (e.g. the transformers, peft, and bitsandbytes libraries).
- Prepare Data: Preprocess your dataset into a format suitable for training, such as tokenized prompt/response pairs.
- Configure Finetuning Parameters: Set the training hyperparameters, including the learning rate, batch size, number of epochs, and LoRA settings such as rank and alpha.
- Initiate Finetuning: Start training Mistral with the QLoRA quantization and PEFT adapter configuration.
- Evaluate Model: After finetuning, measure the model's performance on a held-out validation set to confirm it meets your expectations.
- Deploy Model: Once you are satisfied with the results, deploy the model for inference.
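The steps above can be sketched in code. This is a minimal outline, not a definitive recipe: the model id, dataset placeholder, and hyperparameter values are assumptions, and running it requires GPU access plus the transformers, peft, and bitsandbytes libraries.

```python
# Sketch of a QLoRA + PEFT finetuning run. The model id, hyperparameters,
# and dataset are placeholders; adapt them to your own setup.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments, Trainer)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mistral-7B-v0.1"  # assumed model id

# QLoRA: load the frozen base model in 4-bit NF4 with bf16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = prepare_model_for_kbit_training(model)

# PEFT: attach small trainable LoRA adapters to the attention projections
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Example training hyperparameters (learning rate, batch size, epochs)
training_args = TrainingArguments(
    output_dir="mistral-qlora",
    per_device_train_batch_size=4,
    learning_rate=2e-4,
    num_train_epochs=3,
)
trainer = Trainer(model=model, args=training_args,
                  train_dataset=...)  # supply your tokenized dataset here
trainer.train()
```

Because only the LoRA adapters are trained while the 4-bit base weights stay frozen, this configuration fits a 7B model on a single consumer GPU.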
For a detailed demonstration of the finetuning process using Mistral, QLoRA, and PEFT, refer to the notebook Fine_Tuning_with_Mistral_QLora_PEFt.ipynb included in this repository.
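The low-rank idea that makes LoRA (and hence QLoRA) parameter-efficient can be illustrated in plain Python: the frozen weight matrix W is never updated; only two small factors A and B are trained, giving an effective weight W + (alpha / r) * B @ A. The toy matrices below are hypothetical, purely for illustration.

```python
# Toy illustration of the low-rank update behind LoRA / QLoRA.
# The frozen weight W stays fixed; only the small factors A (r x d_in)
# and B (d_out x r) are trainable: W_eff = W + (alpha / r) * B @ A.

def matmul(X, Y):
    """Plain-Python matrix multiply."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_effective_weight(W, A, B, alpha):
    """Combine a frozen weight W with the scaled low-rank update B @ A."""
    r = len(A)          # LoRA rank = number of rows of A
    scale = alpha / r
    BA = matmul(B, A)
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Hypothetical 2x2 frozen weight with a rank-1 adapter
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]    # 1 x 2 trainable factor
B = [[0.5], [1.0]]  # 2 x 1 trainable factor
print(lora_effective_weight(W, A, B, alpha=2.0))
# → [[2.0, 2.0], [2.0, 5.0]]
```

With rank r much smaller than the matrix dimensions, the trainable parameter count drops from d_out * d_in to r * (d_in + d_out), which is why PEFT finetuning of a 7B model is feasible on modest hardware.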