---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/Screenshot from 2025-08-20 00-50-33.png
text: None
parameters:
negative_prompt: None
base_model: ProsusAI/finbert
instance_prompt: null
license: other
license_name: useless
license_link: LICENSE
---
# Emotion
<Gallery />
## Model description
Emotion Recognition Model (BERT-based)
### 📌 Overview
This is a BERT-based emotion recognition model that I created purely for educational and learning purposes.
The model was trained as part of my journey to understand transformers, distillation, GPU management, fine-tuning, and Hugging Face workflows.
### ⚙️ How I built it
- I started with a pretrained BERT model.
- I experimented with layer distillation (copying a few layers into a smaller student model); see the sketch after this list.
- I trained it on an emotion classification dataset to predict different emotional states from text.
- I focused on hands-on practice: learning about tokenization, GPU memory issues, checkpointing, and model saving/loading.
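
The layer-copying step looks roughly like the sketch below. This is a minimal illustration only: the teacher checkpoint (`bert-base-uncased`), the number of student layers, the copied layer indices, and the six-label setup are assumptions for the example, not the exact recipe used for this model.

```python
# Minimal sketch: build a smaller "student" BERT by copying a few encoder
# layers from a pretrained teacher. Model name, layer indices, and label
# count are illustrative assumptions, not this model's exact recipe.
from transformers import AutoConfig, AutoModelForSequenceClassification

teacher_name = "bert-base-uncased"  # assumed teacher checkpoint
num_labels = 6                      # e.g. six emotion classes (assumption)

teacher = AutoModelForSequenceClassification.from_pretrained(
    teacher_name, num_labels=num_labels
)

# Student config with fewer encoder layers than the 12-layer teacher.
student_config = AutoConfig.from_pretrained(teacher_name, num_labels=num_labels)
student_config.num_hidden_layers = 4
student = AutoModelForSequenceClassification.from_config(student_config)

# Copy the embeddings and a spread of teacher layers into the student.
student.bert.embeddings.load_state_dict(teacher.bert.embeddings.state_dict())
for student_idx, teacher_idx in enumerate([0, 4, 8, 11]):
    student.bert.encoder.layer[student_idx].load_state_dict(
        teacher.bert.encoder.layer[teacher_idx].state_dict()
    )

# The student is then fine-tuned on an emotion dataset (e.g. with Trainer),
# which is where tokenization, GPU memory, and checkpointing come in.
```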
### ⚠️ Disclaimer
- This model is not production-ready.
- It is not optimized for real-world use.
- It should not be used for commercial, fine-tuning, or deployment purposes.
- It was built only as a learning exercise to explore Hugging Face and model training.
### 💡 Purpose
- To help me (and maybe others) understand how Hugging Face works.
- To practice model distillation and fine-tuning techniques.
- To learn the workflow of pushing models to the Hugging Face Hub (see the sketch after this list).
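
For the Hub-upload part specifically, the basic workflow I practiced looks like this sketch. The local path and repo id are placeholders, and it assumes you are already authenticated (for example via `huggingface-cli login`):

```python
# Minimal sketch of pushing a trained model and tokenizer to the Hub.
# The local path and repo id are placeholders (assumptions), and this
# assumes prior authentication with `huggingface-cli login`.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("./emotion-student")
tokenizer = AutoTokenizer.from_pretrained("./emotion-student")

model.push_to_hub("your-username/Text_Emotion_Recognition")
tokenizer.push_to_hub("your-username/Text_Emotion_Recognition")
```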
### 🚫 Limitations
- Accuracy and reliability are not guaranteed.
- Not suitable for critical applications (mental health, customer service, etc.).
- Limited number of layers and trained on a small dataset.
## Download model
[Download](/Abdullah6395/Text_Emotion_Recognition/tree/main) the model files from the Files & versions tab.
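
If you still want to poke at it, a minimal loading sketch is shown below. It assumes the repository contains a standard `transformers` sequence-classification checkpoint; the exact emotion labels returned depend on the dataset it was trained on.

```python
# Minimal sketch of loading the model for inference with transformers.
# Assumes a standard sequence-classification checkpoint; the label names
# returned depend on how the model was trained.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Abdullah6395/Text_Emotion_Recognition",
)
print(classifier("I can't believe how well this turned out!"))
```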