XLM-RoBERTa with Multi-Task Learning for Sarcasm and Mock Politeness Detection

Model Description

This project fine-tunes XLM-RoBERTa to detect sarcasm and mock politeness in Filipino faculty evaluation texts written in English, Tagalog, or code-mixed Taglish.

Two models are included:

  • MTL model → sarcasm detection (main task) + mock politeness detection (auxiliary task)
  • STL model → sarcasm detection only

The models are packaged into a desktop app (Tkinter + Python) for easy testing.
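
For readers interested in how the MTL setup is typically wired, the sketch below shows one plausible arrangement: a shared XLM-R encoder with separate classification heads for the main and auxiliary tasks. The class name, head sizes, and auxiliary loss weight are illustrative assumptions, not the project's exact training code.

```python
# Minimal MTL sketch: one shared XLM-R encoder, two task-specific heads.
import torch.nn as nn
from transformers import AutoModel

class XLMRMultiTask(nn.Module):
    def __init__(self, encoder_name="xlm-roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.sarcasm_head = nn.Linear(hidden, 2)     # main task
        self.politeness_head = nn.Linear(hidden, 2)  # auxiliary task

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # <s> (CLS) token representation
        return self.sarcasm_head(cls), self.politeness_head(cls)

# Joint objective: main loss plus a weighted auxiliary loss
# (the 0.5 weight is an illustrative assumption):
# loss = ce(sarcasm_logits, y_sarcasm) + 0.5 * ce(polite_logits, y_polite)
```

Sharing the encoder lets the auxiliary mock-politeness signal act as a regularizer on the sarcasm representations, which is the usual motivation for this kind of MTL setup.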


Intended Uses & Limitations

Intended Use

  • Demonstrating multi-task learning in NLP
  • Exploring sarcasm and politeness detection in Taglish text
  • Academic/research purposes only

Limitations

  • Trained on a domain-specific dataset (faculty evaluations)
  • May not generalize well outside Taglish or academic settings
  • Predictions are not guaranteed to be accurate for all contexts

How to Use

  1. Download the XLM-R folder from this repository.
  2. Inside the folder, locate and open: XLM-R/XLM-R.exe
  3. Use the GUI to input text or upload a .csv file (see included INPUT_SAMPLE.csv).
  4. The app will output predictions for sarcasm (and mock politeness if using MTL).

(No coding required; the .exe is standalone on Windows. A programmatic alternative is sketched below.)
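
If you prefer to skip the GUI, the snippet below sketches how one might run the sarcasm model directly with the transformers library. The local path ./XLM-R/model and the label mapping (1 = sarcastic) are assumptions; check the repository's folder layout and model config before relying on them.

```python
# Hypothetical programmatic alternative to the GUI app.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
# Assumed location of the fine-tuned weights (may differ in the actual repo).
model = AutoModelForSequenceClassification.from_pretrained("./XLM-R/model")
model.eval()

text = "Wow, ang galing mo naman magturo."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Assumes label 1 = sarcastic; verify against the model's id2label config.
pred = logits.argmax(dim=-1).item()
print("sarcastic" if pred == 1 else "not sarcastic")
```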


Training Data

  • Collected faculty evaluation texts written in English, Tagalog, or code-mixed Taglish
  • Annotated for sarcasm and mock politeness
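
The exact annotation schema is not documented here; as a rough illustration, labeled examples might look like the hypothetical rows below (column names, label values, and texts are invented for illustration only).

```python
# Hypothetical layout of the annotated data (not the real schema).
import pandas as pd

df = pd.DataFrame(
    {
        "text": [
            "Salamat sa wala, sir.",                   # Taglish; "Thanks for nothing, sir."
            "The lectures were clear and engaging.",   # English; sincere
        ],
        "sarcasm": [1, 0],           # 1 = sarcastic
        "mock_politeness": [1, 0],   # 1 = mock polite
    }
)
print(df)
```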

Evaluation

  • Compared Single-Task Learning (STL) vs. Multi-Task Learning (MTL) models
  • Metrics: accuracy, precision, recall, F1
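
As a reference for reproducing the comparison, all four metrics can be computed per task with scikit-learn; the labels and predictions below are placeholders, not the study's actual results.

```python
# Sketch of the per-task metric computation used to compare STL vs. MTL.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [1, 0, 1, 1, 0]  # placeholder held-out labels
y_pred = [1, 0, 0, 1, 0]  # placeholder model predictions

acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary"
)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
```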