How to use chunwoolee0/klue_ynat_roberta_base_model with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="chunwoolee0/klue_ynat_roberta_base_model")

# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("chunwoolee0/klue_ynat_roberta_base_model")
model = AutoModelForSequenceClassification.from_pretrained("chunwoolee0/klue_ynat_roberta_base_model")
```

This model is a fine-tuned version of klue/roberta-base on the klue dataset. It achieves the following results on the evaluation set:
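The pipeline returns a label and a score; under the hood, the classification head produces one logit per YNAT topic class, and the highest-probability class wins. A minimal pure-Python sketch of that logits-to-label post-processing (the label names and their order here are assumptions based on the KLUE YNAT task; the actual mapping comes from the model's `id2label` config, and the logit values are illustrative, not real model output):

```python
import math

# Hypothetical label set for the 7 KLUE YNAT topic classes; the real
# id-to-label order is defined by the model's config and may differ.
LABELS = ["IT/science", "economy", "society", "life&culture", "world", "sports", "politics"]

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits, one per class (not real model output)
logits = [0.2, 3.1, 0.5, -0.4, 0.1, -1.0, 0.9]
probs = softmax(logits)
pred = LABELS[probs.index(max(probs))]  # highest logit wins -> "economy"
```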
The base model, klue/roberta-base, is a RoBERTa model pretrained on Korean. See the KLUE GitHub repository and paper for more details.
NOTE: Use BertTokenizer instead of RobertaTokenizer. (AutoTokenizer will load BertTokenizer)
```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("klue/roberta-base")
tokenizer = AutoTokenizer.from_pretrained("klue/roberta-base")
```
Training results per epoch:
| Training Loss | Epoch | Step | Validation Loss | F1 |
|---|---|---|---|---|
| No log | 1.0 | 179 | 0.4838 | 0.8444 |
| No log | 2.0 | 358 | 0.3848 | 0.8659 |
| 0.4203 | 3.0 | 537 | 0.3778 | 0.8690 |
| 0.4203 | 4.0 | 716 | 0.3762 | 0.8702 |
| 0.4203 | 5.0 | 895 | 0.3747 | 0.8720 |
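For reference, the F1 column above is a macro-averaged F1 over the YNAT topic classes (the metric used by the KLUE benchmark). A minimal pure-Python sketch of macro F1, using toy labels for illustration rather than the model's actual predictions:

```python
def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of the per-class F1 scores."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Toy 3-class example (illustrative only)
y_true = [0, 0, 1, 1, 2]
y_pred = [0, 1, 1, 1, 2]
score = macro_f1(y_true, y_pred, [0, 1, 2])
```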