# T5-Small GenMedX

## Model Description

This is a fine-tuned version of t5-small for medical text generation tasks.

- **Base Model:** t5-small (60M parameters)
- **Fine-tuning Task:** Text-to-text generation on medical domain data
## Usage

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load model and tokenizer
model_name = "USERNAME/t5-small-genmedx"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Prepare input
input_text = "Your input text here"
inputs = tokenizer(input_text, return_tensors="pt", max_length=256, truncation=True)

# Generate output
outputs = model.generate(
    inputs.input_ids,
    max_length=160,
    num_beams=4,
    early_stopping=True,
)

# Decode output
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
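Note that the tokenizer call above truncates inputs at 256 tokens, so longer documents are silently cut off. One common workaround is to split a long document into overlapping chunks, run generation on each chunk, and stitch the outputs together. Below is a minimal word-level sketch of the chunking step; `chunk_words` is a hypothetical helper, not part of this model, and whitespace words are only a rough proxy for subword tokens (real counts come from the tokenizer):

```python
def chunk_words(text, chunk_size=200, overlap=20):
    """Split text into overlapping word-level chunks.

    chunk_size and overlap are in words, chosen conservatively so each
    chunk stays under the model's 256-token source limit in practice.
    """
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks


# Each chunk would then be passed through the tokenize/generate/decode
# steps shown above, and the per-chunk outputs joined.
```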
## Training Configuration
- Maximum Source Length: 256 tokens
- Maximum Target Length: 160 tokens
- Batch Size: 4
- Number of Epochs: 15
- Learning Rate: 5e-5
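For reproduction, the hyperparameters above map naturally onto the Hugging Face `Seq2SeqTrainingArguments` API. The sketch below is an assumption about how training was configured, not the author's actual script; `output_dir` is a placeholder, and the source/target length limits are applied at tokenization time rather than through these arguments:

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical training arguments mirroring the card's configuration.
# Max source/target lengths (256 / 160) are enforced when tokenizing
# inputs and labels, not here.
training_args = Seq2SeqTrainingArguments(
    output_dir="./t5-small-genmedx",  # placeholder path
    per_device_train_batch_size=4,    # Batch Size: 4
    num_train_epochs=15,              # Number of Epochs: 15
    learning_rate=5e-5,               # Learning Rate: 5e-5
    predict_with_generate=True,       # decode with generate() during eval
)
```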
## Intended Use
This model is intended for:
- Medical text generation tasks
- Research purposes in the medical NLP domain
## Disclaimer
This model is provided as-is for research and educational purposes.