# Model Card

This model is a LoRA adapter fine-tuned from unsloth/Meta-Llama-3.1-8B on the cxllin/medinstructv2 dataset.
## How to use the model

- Install dependencies (if not already installed):

```bash
pip install transformers peft
```
- Load Llama-3.1-8B and the LoRA adapter, then run inference:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "unsloth/Meta-Llama-3.1-8B"
lora_repo = "youth-ai-initiative/Med-instructor-Llama-by-Group-1"

# Load the tokenizer and base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    device_map="auto",
    torch_dtype="auto",
)

# Attach the fine-tuned LoRA adapter weights
model = PeftModel.from_pretrained(
    model,
    lora_repo,
    subfolder="model_weights/lora_weights",
)

# Generate a response
prompt = "What are the common symptoms and risk factors of high blood pressure?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Hyperparameters

- batch_size = 8
- num_epochs = 2
- learning_rate = 2e-4
LoRA Config:
- r = 32
- alpha = 64
- dropout = 0.05
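
The LoRA settings above can be expressed as a PEFT `LoraConfig`. This is a sketch: the `target_modules` list is an assumption (the attention projections commonly targeted for Llama-family models), not confirmed by the original training script:

```python
from peft import LoraConfig

# Sketch of the LoRA configuration listed above.
# target_modules is an assumption (typical Llama attention projections),
# not taken from the original training setup.
lora_config = LoraConfig(
    r=32,               # LoRA rank
    lora_alpha=64,      # scaling factor (alpha)
    lora_dropout=0.05,  # dropout applied to LoRA layers
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```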
## Training Results
| Epoch | Training Loss | Validation Loss |
|---|---|---|
| 1 | 0.963223 | 0.927974 |
| 2 | 0.906526 | 0.920803 |
## Model tree for youth-ai-initiative/Med-instructor-Llama-by-Group-1

- Base model: unsloth/Meta-Llama-3.1-8B