# Fine-tuned DeBERTa End-to-End Aspect-Based Sentiment Analysis
This model is a fine-tuned version of yangheng/deberta-v3-base-end2end-absa for Aspect-Based Sentiment Analysis (ABSA).
It performs end-to-end ABSA by jointly extracting aspect terms and their sentiments using a single token-classification head. Labels follow an IOB-with-sentiment format, for example `B-ASP-Positive`, `I-ASP-Negative`, or `O` for non-aspect tokens.
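For illustration, the word-level tags for a sentence like "The battery life is great" would look roughly as follows (a hypothetical alignment under the label scheme above; the model actually predicts over subword tokens):

```python
# Hypothetical word-level tagging under the IOB-with-sentiment scheme
words = ["The", "battery",        "life",           "is", "great"]
tags  = ["O",   "B-ASP-Positive", "I-ASP-Positive", "O",  "O"]
```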
- Developed by: Sajida-dev
- Model type: Token Classification (Aspect-Based Sentiment Analysis)
- Language(s): English
- License: MIT
- Finetuned from: yangheng/deberta-v3-base-end2end-absa
- Library: Transformers
## Model Sources
- Repository: https://huggingface.co/sajida-dev/fine-tune-deberta-v3-base-end2end-absa-model
- Base Paper: DeBERTa: Decoding-enhanced BERT with Disentangled Attention (https://arxiv.org/abs/2006.03654)
- Demo: Available via the Hugging Face `pipeline("token-classification")`
## Uses

### Direct Use
- Extract aspect terms from text (e.g., "battery life", "screen")
- Assign sentiment polarity (Positive, Negative, Neutral) to each aspect
- Useful for product reviews, customer feedback, and opinion mining
### Downstream Use
- Integration into customer service analytics
- Market research sentiment dashboards
- Fine-tuning for domain-specific ABSA tasks (restaurants, healthcare, etc.)
### Out-of-Scope Use
- General sentiment classification without aspect extraction
- Non-English text
- Misuse for biased or harmful profiling
## Bias, Risks, and Limitations

### Bias
Model performance depends on the dataset used for fine-tuning. If the training data is domain-specific, generalization to other domains may be limited.
### Limitations
- Works best on English text
- May misclassify nuanced sentiments such as sarcasm or irony
- Aspect boundary detection may fail for complex multi-word expressions
### Recommendations
Users should validate outputs on their own domain-specific data and consider further fine-tuning if needed.
## How to Get Started with the Model

```python
from transformers import pipeline

nlp = pipeline(
    "token-classification",
    model="sajida-dev/fine-tune-deberta-v3-base-end2end-absa-model"
)

text = "The battery life is amazing but the screen is dull."
results = nlp(text)
print(results)
```
The model automatically finds the aspects in the text and classifies their sentiment. After post-processing the token-level predictions, the output takes a form like this:

```json
{
  "text": "The user interface is brilliant, but the documentation is a total mess.",
  "aspect": ["user interface", "documentation"],
  "position": [[4, 19], [41, 54]],
  "sentiment": ["Positive", "Negative"],
  "probability": [[1e-05, 0.0001, 0.9998], [0.9998, 0.0001, 1e-05]],
  "confidence": [0.9997, 0.9997]
}
```
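A minimal sketch of how the token-level pipeline output can be aggregated into an aspect-level structure like the one above, assuming the `B-ASP-<Sentiment>` / `I-ASP-<Sentiment>` label scheme described earlier. The `extract_aspects` helper and its field names are illustrative (they mirror a subset of the example output), not part of the model's API:

```python
from transformers import pipeline

# aggregation_strategy="simple" merges consecutive B-/I- tokens into spans
nlp = pipeline(
    "token-classification",
    model="sajida-dev/fine-tune-deberta-v3-base-end2end-absa-model",
    aggregation_strategy="simple",
)

def extract_aspects(text):
    """Illustrative helper: turn grouped entities into an aspect-level dict.

    Assumes entity groups look like "ASP-Positive" (the B-/I- prefix is
    stripped by the aggregation step).
    """
    result = {"text": text, "aspect": [], "position": [], "sentiment": [], "confidence": []}
    for ent in nlp(text):
        # e.g. entity_group == "ASP-Positive" -> sentiment "Positive"
        sentiment = ent["entity_group"].split("-")[-1]
        result["aspect"].append(ent["word"].strip())
        result["position"].append([ent["start"], ent["end"]])
        result["sentiment"].append(sentiment)
        result["confidence"].append(round(float(ent["score"]), 4))
    return result

print(extract_aspects("The user interface is brilliant, but the documentation is a total mess."))
```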
## Training Details

### Training Data
- Fine-tuned on ABSA datasets (for example, product review datasets; details to be added).
- Preprocessing includes tokenization with the DeBERTa tokenizer and IOB tagging with sentiment labels.
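A minimal sketch of the subword label alignment such preprocessing typically involves, assuming word-level IOB tags as input. The label subset and the `-100` masking convention follow standard Hugging Face token-classification practice, not a documented recipe from this repository:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yangheng/deberta-v3-base-end2end-absa")

words     = ["The", "battery", "life", "is", "amazing"]
word_tags = ["O", "B-ASP-Positive", "I-ASP-Positive", "O", "O"]
label2id  = {"O": 0, "B-ASP-Positive": 1, "I-ASP-Positive": 2}  # illustrative subset

encoding = tokenizer(words, is_split_into_words=True, truncation=True)

labels = []
for word_idx in encoding.word_ids():
    if word_idx is None:
        labels.append(-100)  # special tokens: ignored by the loss
    else:
        # one common convention: repeat the word's tag on every subword
        labels.append(label2id[word_tags[word_idx]])

encoding["labels"] = labels
```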
### Training Procedure
- Optimizer: AdamW
- Loss: Cross-entropy for token classification
- Mixed precision: fp16
- Early stopping: Enabled (patience = 3)
### Training Hyperparameters
| Hyperparameter | Value |
|---|---|
| Learning rate | 2e-5 |
| Warmup ratio | 0.1 |
| Number of epochs | 5 |
| Train batch size (per device) | 16 |
| Eval batch size (per device) | 32 |
| Gradient accumulation steps | 2 |
| Weight decay | 0.01 |
| Label smoothing factor | 0.05 |
| Evaluation strategy | Per epoch |
| Save strategy | Per epoch |
| Metric for best model | F1 |
| Random seed | 42 |
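A minimal sketch of how these hyperparameters map onto Hugging Face `TrainingArguments` (the model, datasets, and metric function are assumed to be defined elsewhere; this mirrors the table above rather than reproducing the exact training script):

```python
from transformers import TrainingArguments, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="absa-finetune",
    learning_rate=2e-5,
    warmup_ratio=0.1,
    num_train_epochs=5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,
    weight_decay=0.01,
    label_smoothing_factor=0.05,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    metric_for_best_model="f1",
    load_best_model_at_end=True,  # required for early stopping
    fp16=True,
    seed=42,
)
early_stopping = EarlyStoppingCallback(early_stopping_patience=3)
```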
## Evaluation

### Testing Data
- Held-out ABSA test split from product review datasets.
### Factors & Metrics
- Accuracy
- Precision
- Recall
- F1-score
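A hedged sketch of how such metrics are commonly computed for token classification, using the `seqeval` library (an assumption; the card does not state which evaluation tooling was used):

```python
import numpy as np
from seqeval.metrics import accuracy_score, precision_score, recall_score, f1_score

def compute_metrics(eval_pred, id2label):
    """Convert logits and gold label ids to tag sequences, skipping -100 positions."""
    logits, label_ids = eval_pred
    pred_ids = np.argmax(logits, axis=-1)

    true_tags, pred_tags = [], []
    for preds, golds in zip(pred_ids, label_ids):
        t, p = [], []
        for pred, gold in zip(preds, golds):
            if gold == -100:  # ignore padding / special tokens
                continue
            t.append(id2label[gold])
            p.append(id2label[pred])
        true_tags.append(t)
        pred_tags.append(p)

    return {
        "accuracy": accuracy_score(true_tags, pred_tags),
        "precision": precision_score(true_tags, pred_tags),
        "recall": recall_score(true_tags, pred_tags),
        "f1": f1_score(true_tags, pred_tags),
    }
```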
### Results (Final Evaluation on Test Set)
| Metric | Value |
|---|---|
| Accuracy | 0.9660 |
| Precision | 0.5905 |
| Recall | 0.4684 |
| F1-score | 0.5224 |
| Eval loss | 0.1200 |
## Environmental Impact
| Item | Details |
|---|---|
| Hardware | GPU |
| Training time | ~3.5 hours |
| Training epochs | 5 |
| Carbon footprint | Not estimated |
## Technical Specifications
- Architecture: DeBERTa-v3 base with token classification head
- Objective: Joint aspect extraction and sentiment classification
- Compute Framework: Hugging Face Transformers with PyTorch
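A minimal loading sketch consistent with this architecture, using the standard Transformers API; the label mapping printed at the end comes from the model's own config rather than anything hard-coded here:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "sajida-dev/fine-tune-deberta-v3-base-end2end-absa-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Inspect the IOB-with-sentiment label set the classification head predicts
print(model.config.id2label)
```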
## Citation

```bibtex
@misc{sajida2025absa,
  author       = {Sajida-dev},
  title        = {Fine-tuned DeBERTa-v3 Base End-to-End ABSA Model},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {https://huggingface.co/sajida-dev/fine-tune-deberta-v3-base-end2end-absa-model}
}
```