Add task category and project page link (#2), opened by nielsr (HF Staff)

README.md CHANGED

@@ -1,138 +1,67 @@
- ---
- - pl
- - sl
- - sk
- - sr
- - uk
- - da
- - de
- - is
- - nl
- - nn
- - nb
- - sv
- - ca
- - es
- - fr
- - ga
- - gl
- - it
- - pt
- - ro
- - et
- - fi
- - hu
- - lt
- - lv
- - el
- - mt
- - tr
- - sq
- - eu
- - hy
- - en
- ---

- Gemma-3-27B-it, Mistral-3.1-24B-it, and LLaMA-3.3-70B-it. Up to 500k documents per language from FineWeb2 are included.
- Annotations are aligned with human ratings and intended for quality estimation, distillation, and multilingual benchmark research.
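The `aggregation` field described below records whether a model's repeated judgments were combined by majority vote or by averaging. A minimal sketch of both strategies (the helper name, the 0-5 score scale, and the three-judgment samples are illustrative assumptions, not taken from the dataset):

```python
from collections import Counter

def aggregate_scores(scores, mode="majority"):
    """Combine repeated quality judgments from one judge model.

    mode="majority": most common score (ties broken toward the lower score);
    mode="average": arithmetic mean, rounded to the nearest integer.
    -1 entries mark invalid generations and are ignored.
    """
    valid = [s for s in scores if s >= 0]
    if not valid:
        return -1  # no valid judgment at all
    if mode == "majority":
        counts = Counter(valid)
        best = max(counts.values())
        return min(s for s, c in counts.items() if c == best)
    return round(sum(valid) / len(valid))

# Hypothetical judgments from one judge model on one document
print(aggregate_scores([3, 3, 4]))                  # -> 3
print(aggregate_scores([3, 4, 4], mode="average"))  # -> 4
```
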

- | Field | Description |
- |------------------|-----------------------------------------------------|
- | id | Unique FW2 identifier for the document |
- | text | Full textual content extracted from the webpage |
- | dump | Common Crawl dump identifier from which the data originates |
- | url | Source URL of the document |
- | date | Timestamp indicating when the document was crawled (ISO 8601 format) |
- | file_path | Path to the WARC file in the Common Crawl S3 bucket |
- | language | ISO 639-3 language code of the document (e.g., deu) |
- | language_script | Script used in the document (e.g., Latn for Latin script) |
- | language_score | Confidence score of the language identification (float between 0 and 1) |
- | top_langs | JSON string mapping detected language-script pairs to their scores |
- | minhash_cluster_size | Number of documents in the deduplication cluster |
- | filter_reason | Reason for filtering or deduplication (e.g., duplicated_5_n_grams); NaN if the document would not have been filtered |
- | edu_score | Dictionary with per-model aggregated scores (modelname_score), **-1 if an invalid score was generated** |
- | aggregation | Dictionary with per-model aggregation type (modelname_type), either majority or average |
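Since `top_langs` is stored as a JSON string, it must be decoded before use. A small sketch (the record below is fabricated for illustration; only the field names follow the schema):

```python
import json

# Fabricated record with the schema's language-identification fields
record = {
    "language": "deu",
    "language_score": 0.98,
    "top_langs": '{"deu_Latn": 0.98, "eng_Latn": 0.01}',
}

# Decode the JSON string into a dict of language-script pairs -> scores
top_langs = json.loads(record["top_langs"])
best_pair = max(top_langs, key=top_langs.get)
print(best_pair)  # -> deu_Latn
```
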

- - Language
- - Model agreement
- - Prediction validity
- - Document length or other features

- - Distillation and teacher-student LLM training
- - Creating filters for noisy web-scale data
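A filter for noisy web-scale data along these lines could threshold the per-model `edu_score` entries, skipping the `-1` sentinel for invalid generations. The key names, threshold, and sample records below are illustrative assumptions:

```python
def passes_quality(record, threshold=3):
    """Keep a document if any judge model rated it at or above `threshold`.

    `edu_score` maps per-model keys (e.g. "gemma_score") to aggregated
    scores; -1 marks an invalid generation and is ignored.
    """
    scores = [s for s in record["edu_score"].values() if s != -1]
    return any(s >= threshold for s in scores)

# Hypothetical records
doc_good = {"edu_score": {"gemma_score": 4, "mistral_score": -1}}
doc_bad = {"edu_score": {"gemma_score": 1, "mistral_score": 2}}
print(passes_quality(doc_good))  # -> True
print(passes_quality(doc_bad))   # -> False
```

Requiring any model to clear the threshold keeps recall high; requiring all models (swap `any` for `all`) would make the filter stricter.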

- ##
- - Some predictions may be invalid or inconsistent
- - No domain control across documents
- - Educational value is a subjective, task-specific metric
```bibtex
@article{
  author = {Abbas Goher Khan and
            Richard Rutmann and
            Alex Jude and
            Maurice Kraus and
            Alexander Arno Weber and
            Felix Stollenwerk and
            David Kaczér and
            Florian Mai and
            Lucie Flek and
            Rafet Sifa and
            Nicolas Flores-Herr and
            Joachim Köhler and
            Patrick Schramowski and
            Michael Fromm and
            Kristian Kersting},
  year = {2025},
  journal = {arXiv preprint arXiv:2505.22232}
}
```

- ## 🔗 Links
- - Base Dataset: [FineWeb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2)
- - Related Work: [FineWeb2 LLM Judging Section](https://huggingface.co/papers/llm-quality-judging-fineweb2)
+ ---
+ license: cc-by-4.0
+ task_categories:
+ - question-answering
+ - text-generation
+ paperswithcode:
+   arxiv_id: 2505.13508
+ tags:
+ - temporal-reasoning
+ - reinforcement-learning
+ - large-language-models
+ library_name: verl
+ ---

+ <center>
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/65d188a4aa309d842e438ef1/t50SGw3jw6kofB0-4s6GJ.png" alt="Output Examples" width="600">
+ </center>
+
+ <div align="center">
+ <a href="https://huggingface.co/collections/ulab-ai/time-r1-682626aea47cb2b876285a16">🤗 <strong>Model</strong></a> | <a href="https://github.com/ulab-uiuc/Time-R1">🚀 <strong>Code</strong></a> | <a href="https://arxiv.org/abs/2505.13508">📖 <strong>Paper</strong></a>
+ </div>

+ # Time-Bench Dataset
+
+ This directory contains the Time-Bench dataset, used for training and evaluating the Time-R1 model. The dataset is organized to support the different stages of the Time-R1 training curriculum.
+
+ ## Dataset Files
+
+ Below is a list of the key dataset files and their corresponding usage in the Time-R1 framework:
+
+ ### Stage 1: Temporal Comprehension

+ These files are used for training and validating the foundational temporal understanding capabilities of the $\theta_1$ model.
+
+ * `train_inference_easy.parquet`: Used for the initial phase (Phase 1) of Stage 1 training, focusing on simpler temporal inference tasks.
+ * `train_comprehension_combined.parquet`: A comprehensive training set used for Phases 2 and 3 of Stage 1, covering a broader range of temporal comprehension tasks, including **Timestamp Inference, Time-Difference Estimation, Event Ordering and Masked Time Entity Completion**.
+ * `test_comprehension_combined.parquet`: The validation set used throughout Stage 1 to evaluate performance on various temporal comprehension tasks.

+ ### Stage 2: Future Event Time Prediction
+
+ These files are used for training and validating the future event time prediction capabilities of the $\theta_2$ model.

+ * `train_prediction_combined.parquet`: The training set for Stage 2, designed to teach the model to predict future event times.
+ * `test_prediction.parquet`: The validation set used in Stage 2 to evaluate the model's accuracy in predicting future event times.
+
+ ### Stage 3: Creative Future Scenario Generation

+ These files are used as a source of real-world news for comparison and analysis during the validation of Stage 3's creative future scenario generation capabilities. The model generates future news, which is then compared against these real news archives.
+
+ * `nyt_years/2024.jsonl`: Contains New York Times articles from the year 2024, used for grounding and evaluating generated future news.
+ * `nyt_years/2025.jsonl`: Contains New York Times articles from the year 2025, used similarly for grounding and evaluation.
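JSON Lines files like these can be read record by record with the standard library. A minimal sketch; the `headline`/`pub_date` keys are assumptions for illustration, since the actual article fields are not specified here:

```python
import io
import json

# Stand-in for a file such as nyt_years/2024.jsonl: one JSON object per line
sample = io.StringIO(
    '{"headline": "Example story A", "pub_date": "2024-03-01"}\n'
    '{"headline": "Example story B", "pub_date": "2024-07-15"}\n'
)

# Parse each non-empty line as one article record
articles = [json.loads(line) for line in sample if line.strip()]
print(len(articles))  # -> 2
```
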

+ ## Data Format
+
+ The `.parquet` files are typically structured with columns relevant to the specific temporal reasoning tasks, including prompts, ground truth answers, and associated metadata. The `.jsonl` files in `nyt_years/` contain news articles in JSON Lines format.

+ Please refer to the main [Time-R1 paper](https://arxiv.org/abs/2505.13508) and the training scripts in the [Time-R1](https://github.com/ulab-uiuc/Time-R1) code repository for more details on how these dataset files are utilized.

+ ## Citations
+
```bibtex
@article{liu2025time,
  title={Time-R1: Towards Comprehensive Temporal Reasoning in LLMs},
  author={Liu, Zijia and Han, Peixuan and Yu, Haofei and Li, Haoru and You, Jiaxuan},
  journal={arXiv preprint arXiv:2505.13508},
  year={2025}
}
```