StudyAbroadGPT-7B-LoRa-Kaggle

StudyAbroadGPT is a fine-tuned LoRA adapter based on Mistral-7B-Instruct-v0.3. It is designed to provide accurate, structured, and context-aware guidance for students pursuing education abroad. The model specializes in answering queries regarding university applications, scholarships, visa regulations, and accommodation.

This version was trained using the Unsloth library on a Tesla T4 GPU (via Kaggle), demonstrating the feasibility of fine-tuning LLMs on resource-constrained hardware.

📊 Model Details

  • Base Model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
  • Fine-Tuning Method: LoRA (Low-Rank Adaptation)
  • Quantization: 4-bit NF4 (via Unsloth)
  • Framework: Unsloth
  • Developer: MD Millat Hosen (Sharda University)

📂 Dataset

The model was trained on the StudyAbroadGPT-Dataset, a synthetic dataset containing 2,274 high-quality student-advisor conversation pairs generated via Gemini Pro.
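As a sketch, the dataset can be inspected with the Hugging Face `datasets` library. The repository id `millat/StudyAbroadGPT-Dataset` and the record structure are assumptions inferred from this card, not a verified schema; check the dataset page before relying on field names.

```python
# Sketch: loading the StudyAbroadGPT-Dataset from the Hugging Face Hub.
# Assumption: the dataset is published under "millat/StudyAbroadGPT-Dataset";
# the exact column names may differ from what is printed here.
from datasets import load_dataset

dataset = load_dataset("millat/StudyAbroadGPT-Dataset", split="train")

print(dataset.num_rows)   # the card reports 2,274 conversation pairs
print(dataset[0])         # one student-advisor conversation record
```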

🚀 Usage

You can run this model with the Unsloth library for fast 4-bit inference.

Installation

pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
pip install --no-deps "xformers<0.0.27" "trl<0.9.0" peft accelerate bitsandbytes
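After installation, inference can be sketched as below. This is a minimal example under stated assumptions: it requires a CUDA GPU (e.g. a T4), and the adapter repo id is taken from this card's title rather than verified on the Hub. Generation settings are illustrative, not tuned.

```python
# Minimal inference sketch with Unsloth (assumes a CUDA GPU).
# Assumption: the adapter is hosted as "millat/StudyAbroadGPT-7B-LoRa-Kaggle".
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="millat/StudyAbroadGPT-7B-LoRa-Kaggle",  # LoRA adapter on Mistral-7B-Instruct-v0.3
    max_seq_length=2048,
    load_in_4bit=True,  # matches the 4-bit NF4 quantization used in training
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

# Format the query with the tokenizer's built-in Mistral-Instruct chat template.
messages = [
    {"role": "user", "content": "What documents do I need for a UK student visa?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```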
