Add task category and project page link

#2
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +47 -118
README.md CHANGED
@@ -1,138 +1,67 @@
  ---
- dataset_name: fineweb2-llm-annotated
- pretty_name: JQL LLMs Multilingual Educational Quality Annotations
- license: odc-by
- source_license: Same as FineWeb2 (see upstream dataset)
- size_categories:
- - 10M<n<100M
- language:
- - bg
- - cs
- - hr
- - mk
- - pl
- - sl
- - sk
- - sr
- - uk
- - da
- - de
- - is
- - nl
- - nn
- - nb
- - sv
- - ca
- - es
- - fr
- - ga
- - gl
- - it
- - pt
- - ro
- - et
- - fi
- - hu
- - lt
- - lv
- - el
- - mt
- - tr
- - sq
- - eu
- - hy
- - en
  ---

- # 📚 JQL Educational Quality Annotations from LLMs

- This dataset provides 17,186,606 documents with high-quality LLM annotations for evaluating the **educational value of web documents**, and serves as a benchmark for training and evaluating **multilingual LLM annotators** as described in the JQL [paper](https://arxiv.org/abs/2505.22232).

- ---

- ## 📝 Dataset Summary

- Multilingual document-level quality annotations scored on a 0–5 educational value scale by three state-of-the-art LLMs:
- Gemma-3-27B-it, Mistral-3.1-24B-it, and LLaMA-3.3-70B-it. Up to 500k documents per language from FineWeb2 are included.
- Annotations are aligned with human ratings and intended for quality estimation, distillation, and multilingual benchmark research.

- ## 🌐 Languages

- In total, we included 35 European languages. Input documents are in their native languages, but the models were prompted in English and responded in English.

- ## 🧱 Dataset Structure

- | Name | Description |
- |------------------|-----------------------------------------------------|
- | id | Unique FW2 identifier for the document |
- | text | Full textual content extracted from the webpage |
- | dump | Common Crawl dump identifier from which the data originates |
- | url | Source URL of the document |
- | date | Timestamp indicating when the document was crawled (ISO 8601 format) |
- | file_path | Path to the WARC file in the Common Crawl S3 bucket |
- | language | ISO 639-3 language code of the document (e.g., deu) |
- | language_script | Script used in the document (e.g., Latn for Latin script) |
- | language_score | Confidence score of the language identification (float between 0 and 1) |
- | top_langs | JSON string mapping detected language-script pairs to their scores |
- | minhash_cluster_size | Number of documents in the deduplication cluster |
- | filter_reason | Reason the document would have been filtered or deduplicated (e.g., duplicated_5_n_grams); NaN if it would not have been filtered |
- | edu_score | Dictionary with per-model aggregated scores (modelname_score); **-1 if an invalid score was generated** |
- | aggregation | Dictionary with per-model aggregation type (modelname_type), either majority or average |
 
- ## ✂️ Data Splits

- This dataset is not pre-split. Users can generate custom splits by:
- - Language
- - Model agreement
- - Prediction validity
- - Document length or other features

- ## 🎯 Intended Use

- - Training multilingual document quality models
- - Benchmarking multilingual LLM performance
- - Distillation and teacher-student LLM training
- - Creating filters for noisy web-scale data

- ## ⚠️ Limitations

- - LLM-generated scores, not human-authored
- - Some predictions may be invalid or inconsistent
- - No domain control across documents
- - Educational value is a subjective, task-specific metric

- ## 📖 Citation

  ```bibtex
- @article{ali2025judging,
-   title   = {Judging Quality Across Languages: A Multilingual Approach to Pretraining Data Filtering with Language Models},
-   author  = {Mehdi Ali and Manuel Brack and Max Lübbering and Elias Wendt and Abbas Goher Khan and Richard Rutmann and Alex Jude and Maurice Kraus and Alexander Arno Weber and Felix Stollenwerk and David Kaczér and Florian Mai and Lucie Flek and Rafet Sifa and Nicolas Flores-Herr and Joachim Köhler and Patrick Schramowski and Michael Fromm and Kristian Kersting},
-   year    = {2025},
-   journal = {arXiv preprint arXiv:2505.22232}
- }
- ```

- ## 🔗 Links
- - Base Dataset: [FineWeb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2)
- - Related Work: [FineWeb2 LLM Judging Section](https://huggingface.co/papers/llm-quality-judging-fineweb2)
 
  ---
+ license: cc-by-4.0
+ task_categories:
+ - question-answering
+ - text-generation
+ paperswithcode:
+   arxiv_id: 2505.13508
+ tags:
+ - temporal-reasoning
+ - reinforcement-learning
+ - large-language-models
+ library_name: verl
  ---

+ <center>
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/65d188a4aa309d842e438ef1/t50SGw3jw6kofB0-4s6GJ.png" alt="Output Examples" width="600">
+ </center>

+ <div align="center">
+ <a href="https://huggingface.co/collections/ulab-ai/time-r1-682626aea47cb2b876285a16">🤗 <strong>Model</strong></a> | <a href="https://github.com/ulab-uiuc/Time-R1">🚀 <strong>Code</strong></a> | <a href="https://arxiv.org/abs/2505.13508">📖 <strong>Paper</strong></a>
+ </div>

+ # Time-Bench Dataset

+ This directory contains the Time-Bench dataset, used for training and evaluating the Time-R1 model. The dataset is organized to support the different stages of the Time-R1 training curriculum.

+ ## Dataset Files

+ Below is a list of the key dataset files and their corresponding usage in the Time-R1 framework:

+ ### Stage 1: Temporal Comprehension

+ These files are used for training and validating the foundational temporal understanding capabilities of the $\theta_1$ model; a minimal loading sketch follows the list.

+ * `train_inference_easy.parquet`: Used for the initial phase (Phase 1) of Stage 1 training, focusing on simpler temporal inference tasks.
+ * `train_comprehension_combined.parquet`: A comprehensive training set used for Phases 2 and 3 of Stage 1, covering a broader range of temporal comprehension tasks, including **Timestamp Inference, Time-Difference Estimation, Event Ordering, and Masked Time Entity Completion**.
+ * `test_comprehension_combined.parquet`: The validation set used throughout Stage 1 to evaluate performance on various temporal comprehension tasks.
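
+ As a quick sanity check, the sketch below loads one of these splits with pandas and prints its shape and column names. This is an illustrative snippet, not part of the official training pipeline; it assumes the file has been downloaded locally and that pandas plus a parquet engine (e.g., pyarrow) is installed.

+ ```python
+ # Minimal sketch: inspect a locally downloaded Stage 1 split.
+ # Requires pandas and a parquet engine (pyarrow or fastparquet).
+ import pandas as pd
+
+ df = pd.read_parquet("train_comprehension_combined.parquet")
+
+ print(df.shape)              # (num_rows, num_columns)
+ print(df.columns.tolist())   # the real column names come from the parquet schema
+ print(df.head(3))            # peek at a few examples
+ ```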

+ ### Stage 2: Future Event Time Prediction

+ These files are used for training and validating the future event time prediction capabilities of the $\theta_2$ model; a sketch for sizing them without loading the data follows the list.

+ * `train_prediction_combined.parquet`: The training set for Stage 2, designed to teach the model to predict future event times.
+ * `test_prediction.parquet`: The validation set used in Stage 2 to evaluate the model's accuracy in predicting future event times.
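
+ To size these splits without pulling all rows into memory, the parquet footers can be read directly. The sketch below uses pyarrow, which is an assumed dependency of this example rather than one stated by the README.

+ ```python
+ # Minimal sketch: read row counts and schemas from parquet footers only,
+ # without loading the data itself. Assumes locally downloaded files.
+ import pyarrow.parquet as pq
+
+ for path in ["train_prediction_combined.parquet", "test_prediction.parquet"]:
+     pf = pq.ParquetFile(path)
+     print(path, "rows:", pf.metadata.num_rows)
+     print(pf.schema_arrow)   # column names and types from the file metadata
+ ```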

+ ### Stage 3: Creative Future Scenario Generation

+ These files serve as a source of real-world news for comparison and analysis during the validation of Stage 3's creative future scenario generation capabilities: the model generates future news, which is then compared against these real news archives. A sketch for scanning the archives follows the list.

+ * `nyt_years/2024.jsonl`: Contains New York Times articles from the year 2024, used for grounding and evaluating generated future news.
+ * `nyt_years/2025.jsonl`: Contains New York Times articles from the year 2025, used similarly for grounding and evaluation.
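
+ Since the per-article fields are not documented in this README, the sketch below simply streams one archive line by line and reports which JSON keys actually occur. It assumes the `nyt_years/` directory has been downloaded locally.

+ ```python
+ # Minimal sketch: stream a JSON Lines news archive without loading it whole.
+ # The per-article field names are not specified here, so we just count the
+ # keys that actually occur in the file.
+ import json
+ from collections import Counter
+
+ key_counts = Counter()
+ with open("nyt_years/2024.jsonl", encoding="utf-8") as f:
+     for line in f:
+         if not line.strip():
+             continue                 # skip blank lines defensively
+         article = json.loads(line)   # one JSON object (article) per line
+         key_counts.update(article.keys())
+
+ print(key_counts.most_common())      # observed fields and their frequencies
+ ```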
 
 
+ ## Data Format

+ The `.parquet` files are structured with columns relevant to the specific temporal reasoning tasks, typically including prompts, ground-truth answers, and associated metadata. The `.jsonl` files in `nyt_years/` contain news articles in JSON Lines format, one JSON object per line; a loading sketch using the Hugging Face `datasets` library follows.
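
+ If the Hugging Face `datasets` library is preferred over raw pandas, locally downloaded parquet files can be wrapped into named splits as sketched below. The train/test pairing shown is an illustrative choice for Stage 2, not an official split definition.

+ ```python
+ # Minimal sketch: wrap local parquet files with the `datasets` library.
+ # The split names and file pairing here are illustrative.
+ from datasets import load_dataset
+
+ ds = load_dataset(
+     "parquet",
+     data_files={
+         "train": "train_prediction_combined.parquet",
+         "test": "test_prediction.parquet",
+     },
+ )
+ print(ds)   # a DatasetDict with "train" and "test" splits
+ ```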
 
 
 
+ Please refer to the main [Time-R1 paper](https://arxiv.org/abs/2505.13508) and the training scripts in the [Time-R1 code repository](https://github.com/ulab-uiuc/Time-R1) for more details on how these dataset files are utilized.

+ ## Citation
  ```bibtex
+ @article{liu2025time,
+   title   = {Time-R1: Towards Comprehensive Temporal Reasoning in LLMs},
+   author  = {Liu, Zijia and Han, Peixuan and Yu, Haofei and Li, Haoru and You, Jiaxuan},
+   journal = {arXiv preprint arXiv:2505.13508},
+   year    = {2025}
+ }
+ ```