Model Card


How to use the model

  1. Install dependencies (if not installed):
pip install transformers peft
  2. Load Llama-3.1-8B and the LoRA adapter:
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "unsloth/Meta-Llama-3.1-8B"
lora_repo = "youth-ai-initiative/Med-instructor-Llama-by-Group-1"

tokenizer = AutoTokenizer.from_pretrained(base_model)

# Load the base model; let transformers choose device placement and dtype
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    device_map="auto",
    torch_dtype="auto",
)

# Attach the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(
    model,
    lora_repo,
    subfolder="model_weights/lora_weights",
)

prompt = "What are the common symptoms and risk factors of high blood pressure?"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Training Hyperparameters

  • batch_size = 8

  • num_epochs = 2

  • learning_rate = 2e-4

  • LoRA Config:

    • r = 32
    • alpha = 64
    • dropout = 0.05
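With r = 32 and alpha = 64, the LoRA scaling factor is alpha / r = 2. As a rough back-of-the-envelope check, and assuming the adapter targets the four attention projections of Llama-3.1-8B (hidden size 4096, 32 layers, grouped-query attention with 1024-dim k/v projections; the card does not list the actual target modules, so this is an assumption), the trainable adapter size can be estimated in plain Python:

```python
# Rough estimate of trainable LoRA parameters at r = 32, assuming the
# adapter targets the four attention projections of Llama-3.1-8B.
r = 32  # LoRA rank from the config above

# (in_features, out_features) per targeted projection, per layer
# (k/v are 1024-dim due to grouped-query attention with 8 KV heads)
projections = {
    "q_proj": (4096, 4096),
    "k_proj": (4096, 1024),
    "v_proj": (4096, 1024),
    "o_proj": (4096, 4096),
}

def lora_params(in_f, out_f, rank):
    # LoRA adds two small matrices per linear layer:
    # A (rank x in_f) and B (out_f x rank)
    return rank * in_f + out_f * rank

per_layer = sum(lora_params(i, o, r) for i, o in projections.values())
total = per_layer * 32  # Llama-3.1-8B has 32 transformer layers
print(per_layer, total)  # about 27.3M trainable parameters in total
```

That is roughly 0.3% of the 8B base parameters, which is typical for a LoRA fine-tune of this size.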

Training Results

Epoch   Training Loss   Validation Loss
1       0.963223        0.927974
2       0.906526        0.920803
