# Lightweight Fine-Tuning with PEFT
This project fine-tunes a pre-trained language model using minimal computational resources, a key skill for adapting foundation models in resource-constrained environments.
## Project Highlights
- Loading and Evaluating: Established a performance baseline by evaluating the pre-trained model before any fine-tuning (see the baseline sketch below).
- Fine-Tuning with PEFT: Used the Hugging Face PEFT library, applying LoRA to adapt the model while training only a small fraction of its parameters (see the LoRA sketch below).
- Model Evaluation: Compared the fine-tuned model against the baseline on the same held-out data, measuring both accuracy and the number of trainable parameters (see the comparison sketch below).
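The baseline step might look like the following minimal sketch. The checkpoint, the binary classification task, and the `accuracy` helper are illustrative assumptions, not the project's exact setup:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative choice of checkpoint; the project may use a different base model.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def accuracy(model, texts, labels):
    """Fraction of examples classified correctly; texts/labels are plain Python lists."""
    model.eval()
    correct = 0
    with torch.no_grad():
        for text, label in zip(texts, labels):
            inputs = tokenizer(text, return_tensors="pt", truncation=True)
            pred = model(**inputs).logits.argmax(dim=-1).item()
            correct += int(pred == label)
    return correct / len(labels)
```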
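Wrapping the base model with a LoRA adapter uses PEFT's `LoraConfig` and `get_peft_model`. The hyperparameters and `target_modules` below are illustrative defaults for a DistilBERT-style model, not the project's tuned values:

```python
from peft import LoraConfig, TaskType, get_peft_model

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                # rank of the low-rank update matrices
    lora_alpha=16,                      # scaling applied to the LoRA update
    lora_dropout=0.1,
    target_modules=["q_lin", "v_lin"],  # DistilBERT attention projections (assumed)
)
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because only the adapter matrices are trainable, `peft_model.save_pretrained(...)` writes just the small adapter weights rather than a full model checkpoint.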
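The comparison can reuse the same helper on a held-out split, building on the sketches above; `eval_texts` and `eval_labels` are placeholders for the project's actual evaluation data. Note that the baseline score should be captured before wrapping, since `get_peft_model` modifies the base model's modules in place:

```python
# Placeholders: substitute the project's actual held-out split.
eval_texts = ["a gripping, well-acted film", "tedious and overlong"]
eval_labels = [1, 0]

# Capture the baseline before applying LoRA, since get_peft_model
# mutates the base model in place.
baseline_acc = accuracy(model, eval_texts, eval_labels)

# ... apply LoRA and train as sketched above ...

tuned_acc = accuracy(peft_model, eval_texts, eval_labels)
print(f"baseline: {baseline_acc:.3f}  fine-tuned: {tuned_acc:.3f}")
```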
## Technologies & Skills
- PyTorch & Hugging Face Transformers
- PEFT (Parameter-Efficient Fine-Tuning)
- Resource-efficient model training and evaluation