docfreemo committed on
Commit 937e218 · verified · 1 Parent(s): 35e9419

Upload README.md with huggingface_hub

Files changed (1): README.md (+275 −25)

README.md CHANGED
@@ -1,27 +1,277 @@
  ---
- configs:
- - config_name: default
-   data_files:
-   - split: data
-     path: data/data-*
- dataset_info:
-   features:
-   - name: subject
-     dtype: string
-   - name: predicate
-     dtype: string
-   - name: object
-     dtype: string
-   - name: object_type
-     dtype: string
-   - name: object_datatype
-     dtype: string
-   - name: object_language
-     dtype: string
-   splits:
-   - name: data
-     num_bytes: 481806552802
-     num_examples: 3130753066
-   download_size: 32476432840
-   dataset_size: 481806552802
+ license: cc-by-2.5
+ task_categories:
+ - text-generation
+ - feature-extraction
+ language:
+ - en
+ tags:
+ - rdf
+ - knowledge-graph
+ - semantic-web
+ - triples
+ size_categories:
+ - 1B<n<10B
  ---

# Freebase

## Dataset Description

A large-scale knowledge base, archived by Google.

**Original Source:** http://commondatastorage.googleapis.com/freebase-public/rdf/freebase-rdf-latest.gz

### Dataset Summary

This dataset contains RDF triples from Freebase converted to HuggingFace dataset format for easy use in machine learning pipelines.

- **Format:** Originally N-Triples, converted to HuggingFace Dataset
- **Size:** 300.0 GB (extracted)
- **Entities:** ~50M
- **Triples:** ~3B
- **Original License:** CC BY 2.5

### Recommended Use

Historical knowledge base, large-scale training.

### Notes

Archived dataset from 2014: very comprehensive, but historical (no longer updated).

## Dataset Format: Lossless RDF Representation

This dataset uses a **standard lossless format** for representing RDF (Resource Description Framework) data in HuggingFace Datasets. All semantic information from the original RDF knowledge graph is preserved, enabling perfect round-trip conversion between RDF and HuggingFace formats.

### Schema

Each RDF triple is represented as a row with **6 fields**:

| Field | Type | Description | Example |
|-------|------|-------------|---------|
| `subject` | string | Subject of the triple (URI or blank node) | `"http://schema.org/Person"` |
| `predicate` | string | Predicate URI | `"http://www.w3.org/1999/02/22-rdf-syntax-ns#type"` |
| `object` | string | Object of the triple | `"John Doe"` or `"http://schema.org/Thing"` |
| `object_type` | string | Type of object: `"uri"`, `"literal"`, or `"blank_node"` | `"literal"` |
| `object_datatype` | string | XSD datatype URI (for typed literals) | `"http://www.w3.org/2001/XMLSchema#integer"` |
| `object_language` | string | Language tag (for language-tagged literals) | `"en"` |

### Example: RDF Triple Representation

**Original RDF (Turtle)**:
```turtle
<http://example.org/John> <http://schema.org/name> "John Doe"@en .
```

**HuggingFace Dataset Row**:
```python
{
    "subject": "http://example.org/John",
    "predicate": "http://schema.org/name",
    "object": "John Doe",
    "object_type": "literal",
    "object_datatype": None,
    "object_language": "en"
}
```

### Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("CleverThis/freebase")

# Access the data
data = dataset["data"]

# Iterate over triples
for row in data:
    subject = row["subject"]
    predicate = row["predicate"]
    obj = row["object"]
    obj_type = row["object_type"]

    print(f"Triple: ({subject}, {predicate}, {obj})")
    print(f"  Object type: {obj_type}")
    if row["object_language"]:
        print(f"  Language: {row['object_language']}")
    if row["object_datatype"]:
        print(f"  Datatype: {row['object_datatype']}")
```

### Converting Back to RDF

The dataset can be converted back to any RDF format (Turtle, N-Triples, RDF/XML, etc.) with **zero information loss**:

```python
from datasets import load_dataset
from rdflib import Graph, URIRef, Literal, BNode

def convert_to_rdf(dataset_name, output_file="output.ttl", split="data"):
    """Convert HuggingFace dataset back to RDF Turtle format."""
    # Load dataset
    dataset = load_dataset(dataset_name)

    # Create RDF graph
    graph = Graph()

    # Convert each row to an RDF triple
    for row in dataset[split]:
        # Subject (blank nodes are encoded with a leading "_:")
        if row["subject"].startswith("_:"):
            subject = BNode(row["subject"][2:])
        else:
            subject = URIRef(row["subject"])

        # Predicate (always a URI)
        predicate = URIRef(row["predicate"])

        # Object (depends on object_type)
        if row["object_type"] == "uri":
            obj = URIRef(row["object"])
        elif row["object_type"] == "blank_node":
            obj = BNode(row["object"][2:])
        elif row["object_type"] == "literal":
            if row["object_datatype"]:
                obj = Literal(row["object"], datatype=URIRef(row["object_datatype"]))
            elif row["object_language"]:
                obj = Literal(row["object"], lang=row["object_language"])
            else:
                obj = Literal(row["object"])

        graph.add((subject, predicate, obj))

    # Serialize to Turtle (or any RDF format)
    graph.serialize(output_file, format="turtle")
    print(f"Exported {len(graph)} triples to {output_file}")
    return graph

# Usage
graph = convert_to_rdf("CleverThis/freebase", "reconstructed.ttl")
```

### Information Preservation Guarantee

This format preserves **100% of RDF information**:

- ✅ **URIs**: Exact string representation preserved
- ✅ **Literals**: Full text content preserved
- ✅ **Datatypes**: XSD and custom datatypes preserved (e.g., `xsd:integer`, `xsd:dateTime`)
- ✅ **Language Tags**: BCP 47 language tags preserved (e.g., `@en`, `@fr`, `@ja`)
- ✅ **Blank Nodes**: Node structure preserved (identifiers may change, but graph isomorphism is maintained)

**Round-trip guarantee**: Original RDF → HuggingFace → Reconstructed RDF produces **semantically identical** graphs.

### Querying the Dataset

You can filter and query the dataset like any HuggingFace dataset:

```python
from datasets import load_dataset

dataset = load_dataset("CleverThis/freebase")

# Find all triples with English literals
english_literals = dataset["data"].filter(
    lambda x: x["object_type"] == "literal" and x["object_language"] == "en"
)
print(f"Found {len(english_literals)} English literals")

# Find all rdf:type statements
type_statements = dataset["data"].filter(
    lambda x: "rdf-syntax-ns#type" in x["predicate"]
)
print(f"Found {len(type_statements)} type statements")

# Convert to Pandas for analysis
import pandas as pd
df = dataset["data"].to_pandas()

# Analyze predicate distribution
print(df["predicate"].value_counts())
```
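
Note that `to_pandas()` on ~3.1B rows will not fit in typical memory. A safer sketch for the predicate distribution is a single pass with a `Counter`; the `sample` rows below are hypothetical stand-ins, and on the real data you would pass the streaming split instead:

```python
from collections import Counter

def predicate_counts(rows):
    """One-pass predicate frequency count over any iterable of rows."""
    counts = Counter()
    for row in rows:
        counts[row["predicate"]] += 1
    return counts

# On the real data:
#   stream = load_dataset("CleverThis/freebase", split="data", streaming=True)
#   top = predicate_counts(stream).most_common(10)
sample = [{"predicate": "ns:type.object.type"},
          {"predicate": "ns:type.object.name"},
          {"predicate": "ns:type.object.type"}]
print(predicate_counts(sample).most_common(1))  # → [('ns:type.object.type', 2)]
```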

### Splits

The dataset contains all triples in a single **data** split, suitable for machine learning tasks such as:

- Knowledge graph completion
- Link prediction
- Entity embedding
- Relation extraction
- Graph neural networks
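
For tasks like link prediction and entity embedding, triples are usually mapped to integer IDs first. A minimal sketch under the schema above (the two sample rows are hypothetical; only rows with `object_type == "uri"` become entity-to-entity edges):

```python
# Hypothetical sample rows in the 6-field schema: URI objects carry
# graph edges, literal objects carry names and attributes.
rows = [
    {"subject": "ns:m.02mjmr", "predicate": "ns:people.person.place_of_birth",
     "object": "ns:m.02hrh0_", "object_type": "uri"},
    {"subject": "ns:m.02mjmr", "predicate": "ns:type.object.name",
     "object": "Barack Obama", "object_type": "literal"},
]

# Keep only entity-to-entity edges.
edges = [(r["subject"], r["predicate"], r["object"])
         for r in rows if r["object_type"] == "uri"]

# Assign stable integer IDs to entities and relations.
entities = sorted({h for h, _, _ in edges} | {t for _, _, t in edges})
relations = sorted({r for _, r, _ in edges})
ent2id = {e: i for i, e in enumerate(entities)}
rel2id = {r: i for i, r in enumerate(relations)}

# (head_id, relation_id, tail_id) triples ready for an embedding model.
triples = [(ent2id[h], rel2id[r], ent2id[t]) for h, r, t in edges]
print(triples)  # → [(1, 0, 0)]
```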

### Format Specification

For complete technical documentation of the RDF-to-HuggingFace format, see:

📖 [RDF to HuggingFace Format Specification](https://github.com/CleverThis/cleverernie/blob/master/docs/rdf_huggingface_format_specification.md)

The specification includes:

- Detailed schema definition
- All RDF node type mappings
- Performance benchmarks
- Edge cases and limitations
- Complete code examples

### Conversion Metadata

- **Source Format**: N-Triples
- **Original Size**: 300.0 GB
- **Conversion Tool**: [CleverErnie RDF Pipeline](https://github.com/CleverThis/cleverernie)
- **Format Version**: 1.0
- **Conversion Date**: 2025-12-08

## Citation

If you use this dataset, please cite the original source:

- **Original Dataset:** Freebase
- **URL:** http://commondatastorage.googleapis.com/freebase-public/rdf/freebase-rdf-latest.gz
- **License:** CC BY 2.5

## Dataset Preparation

This dataset was prepared using the CleverErnie GISM framework:

```bash
# Download original dataset
python scripts/rdf_dataset_downloader.py freebase -o datasets/

# Convert to HuggingFace format
python scripts/convert_rdf_to_hf_dataset.py \
    datasets/freebase/[file] \
    hf_datasets/freebase \
    --format nt

# Upload to HuggingFace Hub
python scripts/upload_all_datasets.py --dataset freebase
```

## Additional Information

### Original Source

http://commondatastorage.googleapis.com/freebase-public/rdf/freebase-rdf-latest.gz

### Conversion Details

- Converted using: [CleverErnie GISM](https://github.com/cleverthis/cleverernie)
- Conversion script: `scripts/convert_rdf_to_hf_dataset.py`
- Dataset format: Single `data` split with all triples

### Maintenance

This dataset is maintained by the CleverThis organization.