StudyAbroadGPT-7B-LoRa-Kaggle
StudyAbroadGPT is a fine-tuned LoRA adapter based on Mistral-7B-Instruct-v0.3. It is designed to provide accurate, structured, and context-aware guidance for students pursuing education abroad. The model specializes in answering queries regarding university applications, scholarships, visa regulations, and accommodation.
This version was trained using the Unsloth library on a Tesla T4 GPU (via Kaggle), demonstrating the feasibility of fine-tuning LLMs on resource-constrained hardware.
Model Details
- Base Model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
- Fine-Tuning Method: LoRA (Low-Rank Adaptation)
- Quantization: 4-bit NF4 (via Unsloth)
- Framework: Unsloth
- Developer: MD Millat Hosen (Sharda University)
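For reference, the sketch below shows how a LoRA adapter of this kind is typically attached to the 4-bit base model with Unsloth. The exact rank, alpha, and target modules used for this checkpoint are not published in this card, so the values shown are common Unsloth defaults rather than the actual training configuration.

```python
# Illustrative sketch only: the LoRA hyperparameters below are assumptions
# (typical Unsloth defaults), not the exact values used to train this adapter.
from unsloth import FastLanguageModel

# Load the 4-bit quantized base model the adapter was trained from
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
    max_seq_length=2048,   # assumed training context length
    load_in_4bit=True,     # 4-bit NF4 quantization via bitsandbytes
)

# Attach LoRA adapters to the attention and MLP projections
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                  # assumed LoRA rank
    lora_alpha=16,         # assumed scaling factor
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",
    use_gradient_checkpointing=True,  # keeps memory within a Tesla T4's 16 GB
)
```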
Dataset
The model was trained on the StudyAbroadGPT-Dataset, a synthetic dataset containing 2,274 high-quality student-advisor conversation pairs generated via Gemini Pro.
- Dataset Link: millat/StudyAbroadGPT-Dataset
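If you want to inspect or reuse the training data, it can be pulled directly from the Hugging Face Hub with the datasets library. The split name below is an assumption; check the dataset card for the actual schema and column names.

```python
# Minimal sketch: load and inspect the StudyAbroadGPT training data.
# The "train" split and record layout are assumptions; see the dataset card.
from datasets import load_dataset

dataset = load_dataset("millat/StudyAbroadGPT-Dataset", split="train")
print(dataset)      # row count and column names
print(dataset[0])   # one student-advisor conversation pair
```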
Usage
You can easily run this model using the unsloth library for faster inference.
Installation
```bash
pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
pip install --no-deps "xformers<0.0.27" "trl<0.9.0" peft accelerate bitsandbytes
```
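Inference
Once the dependencies are installed, the snippet below sketches one way to load the adapter and generate a response. It assumes Unsloth can resolve this adapter repo against its 4-bit Mistral base automatically; if that fails, load the base model first and apply the adapter with PEFT.

```python
# Minimal inference sketch, assuming this repo contains the LoRA adapter
# and that Unsloth resolves it against the 4-bit Mistral base automatically.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="millat/StudyAbroadGPT-7B-LoRa-Kaggle",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

messages = [
    {"role": "user", "content": "What documents do I need for a German student visa?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, use_cache=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```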