# Finetuning-LLM

## Fine-tuning Mistral with QLoRA and PEFT

This section is a guide to fine-tuning Mistral with QLoRA and PEFT. The process involves the following steps:

  1. Set Up the Environment: install all dependencies required for Mistral, QLoRA, and PEFT.
  2. Prepare the Data: preprocess your dataset into a format suitable for training, for example prompt/response pairs rendered as single text strings.
  3. Configure Fine-tuning Parameters: set the learning rate, batch size, number of epochs, and the quantization and LoRA adapter settings.
  4. Run Fine-tuning: start the fine-tuning process on Mistral with the QLoRA and PEFT configurations.
  5. Evaluate the Model: after fine-tuning, measure performance on a validation set to confirm it meets your expectations.
  6. Deploy the Model: once you are satisfied with its performance, deploy it for inference.

For an end-to-end demonstration of fine-tuning Mistral with QLoRA and PEFT, see the notebook Fine_Tuning_with_Mistral_QLora_PEFt.ipynb included in this repository.