XLM-RoBERTa with Multi-Task Learning for Sarcasm and Mock Politeness Detection
Model Description
This project fine-tunes XLM-RoBERTa to detect sarcasm and mock politeness in Filipino faculty evaluation texts written in English, Tagalog, or code-mixed Taglish.
Two models are included:
- MTL model: sarcasm detection (main task) + mock politeness detection (auxiliary task)
- STL model: sarcasm detection only
The models are packaged into a desktop app (Tkinter + Python) for easy testing.
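For readers curious how a multi-task setup like this is typically wired up, below is a minimal hard-parameter-sharing sketch: one shared XLM-R encoder with a separate classification head per task. The class name, pooling choice, and label counts are illustrative assumptions, not the released implementation.

```python
import torch.nn as nn
from transformers import AutoModel

class XLMRMultiTask(nn.Module):
    """Hard parameter sharing: one XLM-R encoder, one linear head per task.
    Head sizes and label counts here are assumptions for illustration."""

    def __init__(self, base_model="FacebookAI/xlm-roberta-base", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(base_model)
        hidden = self.encoder.config.hidden_size
        self.sarcasm_head = nn.Linear(hidden, num_labels)      # main task
        self.politeness_head = nn.Linear(hidden, num_labels)   # auxiliary task

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # representation of the <s> token
        return self.sarcasm_head(cls), self.politeness_head(cls)

# During training, the two task losses are typically combined as a weighted
# sum, e.g. loss = sarcasm_loss + aux_weight * politeness_loss.
```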
Intended Uses & Limitations
Intended Use
- Demonstrating multi-task learning in NLP
- Exploring sarcasm and politeness detection in Taglish text
- Academic/research purposes only
Limitations
- Trained on a domain-specific dataset (faculty evaluations)
- May not generalize well outside Taglish or academic settings
- Predictions are not guaranteed to be accurate for all contexts
How to Use
- Download the XLM-R folder from this repository.
- Inside the folder, locate and open: XLM-R/XLM-R.exe
- Use the GUI to input text or upload a .csv file (see the included INPUT_SAMPLE.csv).
- The app will output predictions for sarcasm (and mock politeness if using the MTL model).
(No coding required; the .exe is standalone on Windows.)
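If you prefer scripted inference over the GUI, a sketch along these lines should work, assuming the checkpoint is published in standard Hugging Face format. The repo id and label mapping below are assumptions to verify against the actual model files.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed repo id for a single-head (STL) checkpoint; adjust to the real files.
model_id = "Bubbli/XLM-R-Sarcasm-MockPoliteness-Detection"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Wow, ang galing mo talaga magturo."  # example Taglish input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Label order (0 = not sarcastic, 1 = sarcastic) is an assumption; check
# the id2label mapping in the model's config.json.
print("sarcastic" if logits.argmax(dim=-1).item() == 1 else "not sarcastic")
```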
Training Data
- Collected faculty evaluation texts written in English, Tagalog, or code-mixed Taglish
- Annotated for sarcasm and mock politeness
Evaluation
- Compared the single-task (STL) and multi-task (MTL) models
- Metrics: accuracy, precision, recall, F1
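For reference, these metrics can be computed with scikit-learn as sketched below; the labels are placeholders, not the study's results.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder gold labels and predictions; substitute real model outputs.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary"
)
print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
```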
Base Model
FacebookAI/xlm-roberta-base