🧬 OmniGenBench Hub: Unified Repository for Genomic Foundation Model Resources
Welcome to OmniGenBench Hub - the unified, centralized repository for genomic foundation model resources. This hub consolidates all benchmarks, datasets, models, and pipelines required by the OmniGenBench framework, providing researchers with a single source of truth for reproducible genomic AI research.
📦 Repository Structure
OmniGenBench_Hub/
├── benchmarks/                              # Benchmark suites (RGB, GUE, BEACON, etc.)
│   ├── benchmarks_info.json
│   ├── RGB.zip
│   ├── GUE.zip
│   ├── BEACON.zip
│   ├── GB.zip
│   ├── PGB.zip
│   └── ...
├── datasets/                                # Individual datasets
│   ├── datasets_info.json
│   ├── deepsea_tfb_prediction.zip
│   ├── translation_efficiency_prediction.zip
│   └── variant_effect_prediction.zip
├── models/                                  # Pre-trained models (future)
│   └── models_info.json
└── pipelines/                               # Ready-to-use pipelines (future)
    └── pipelines_info.json
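To browse the hub contents programmatically before downloading anything, the files can be listed with the `huggingface_hub` client. A minimal sketch (the grouping by top-level folder simply mirrors the structure above):

```python
from huggingface_hub import list_repo_files

# List every file in the hub and group it by top-level category.
files = list_repo_files("yangheng/OmniGenBench_Hub", repo_type="dataset")
for category in ("benchmarks", "datasets", "models", "pipelines"):
    print(category, [f for f in files if f.startswith(category + "/")])
```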
🎯 What is OmniGenBench Hub?
OmniGenBench Hub serves as the unified data infrastructure for the OmniGenBench framework, providing:
- ✅ Centralized Storage: All resources in one place instead of scattered repositories
- ✅ Organized Structure: Clear subdirectories (benchmarks/, datasets/, models/, pipelines/)
- ✅ Metadata Rich: Comprehensive JSON metadata files for programmatic access
- ✅ Backward Compatible: Seamless migration from legacy Space repositories
- ✅ Standardized Formats: Consistent data formats and structures across all resources
- ✅ Validated Content: All resources tested through the OmniGenBench framework
🚀 Quick Start
Download Benchmarks
# Using OmniGenBench CLI (automatically uses this hub)
ogb autobench -m yangheng/PlantRNA-FM -b RGB
# The framework will automatically download from:
# https://huggingface.co/datasets/yangheng/OmniGenBench_Hub/resolve/main/benchmarks/RGB.zip
Download Datasets
from omnigenbench import OmniDatasetForSequenceClassification, OmniTokenizer
# Initialize tokenizer
tokenizer = OmniTokenizer.from_pretrained("yangheng/OmniGenome-52M")
# Load dataset from hub (automatically downloads from this repository)
datasets = OmniDatasetForSequenceClassification.from_hub(
    dataset_name="deepsea_tfb_prediction",
    tokenizer=tokenizer,
    max_length=512
)
# Access splits
train_dataset = datasets['train']
valid_dataset = datasets['valid']
test_dataset = datasets['test']
Direct Download via HuggingFace Hub API
from huggingface_hub import snapshot_download
# Download entire benchmark
benchmark_path = snapshot_download(
    repo_id="yangheng/OmniGenBench_Hub",
    repo_type="dataset",
    allow_patterns="benchmarks/RGB.zip"
)
# Download specific dataset
dataset_path = snapshot_download(
    repo_id="yangheng/OmniGenBench_Hub",
    repo_type="dataset",
    allow_patterns="datasets/translation_efficiency_prediction.zip"
)
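`snapshot_download` returns the path of the local snapshot directory rather than the archive itself, so the benchmark still has to be unpacked. A minimal sketch using only the standard library, assuming the `benchmarks/RGB.zip` path shown in the repository structure above:

```python
import os
import zipfile
from huggingface_hub import snapshot_download

# Download the snapshot (same call as above) and locate the archive inside it.
benchmark_path = snapshot_download(
    repo_id="yangheng/OmniGenBench_Hub",
    repo_type="dataset",
    allow_patterns="benchmarks/RGB.zip",
)
zip_path = os.path.join(benchmark_path, "benchmarks", "RGB.zip")

# Unpack into a local working directory and inspect the extracted contents.
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall("RGB_benchmark")
print(sorted(os.listdir("RGB_benchmark")))
```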
📊 Available Resources
🧪 Benchmarks
Our hub hosts 5 comprehensive benchmark suites covering RNA and DNA analysis tasks:
| Benchmark | Genome | Tasks | Task Types | Species | Description |
|---|---|---|---|---|---|
| RGB | RNA | 10 | Classification, Token Classification | Multi-species | RNA Genome Benchmark - Comprehensive RNA understanding |
| BEACON | RNA | 13 | Classification, Regression | Multi-species | Benchmarking Environment for RNA Computational Methods |
| GUE | DNA | 28 | Classification | Multi-species | Genomic Understanding Evaluation |
| GB | DNA | 9 | Classification | Multi-species | Classic Genomic Benchmark |
| PGB | DNA | 7 | Classification, Regression | Plant | Plant Genomics Benchmark |
Download locations: benchmarks/<benchmark_name>.zip
Metadata: benchmarks/benchmarks_info.json
📊 Datasets
5 curated individual datasets for specific genomic tasks:
| Dataset | Genome | Task Type | Species | Description |
|---|---|---|---|---|
| deepsea_tfb_prediction | DNA | Classification | Human | Transcription Factor Binding prediction |
| translation_efficiency_prediction | RNA | Regression | Multi-species | mRNA translation efficiency |
| variant_effect_prediction | DNA | Classification | Human | Genomic variant effect prediction |
| RNA-SSP-Archive2 | RNA | Token Classification | Multi-species | RNA Secondary Structure Prediction |
| RNA-mRNA | RNA | Classification | Multi-species | RNA mRNA classification |
Download locations: datasets/<dataset_name>.zip
Metadata: datasets/datasets_info.json
🤖 Models (Coming Soon)
Pre-trained genomic foundation models will be hosted here in future releases. Currently, models are available directly on HuggingFace Hub:
- yangheng/OmniGenome-186M
- yangheng/OmniGenome-52M
- yangheng/PlantRNA-FM
- And 30+ more models...
Metadata: models/models_info.json
🔄 Pipelines (Coming Soon)
Ready-to-use analysis pipelines will be added in future releases.
Metadata: pipelines/pipelines_info.json
🔧 Technical Details
Metadata Files
Each resource category includes a metadata JSON file for programmatic access:
benchmarks/benchmarks_info.json
{
  "RGB": {
    "filename": "RGB.zip",
    "genome": "RNA",
    "species": "multi-species",
    "task_number": 10,
    "task_type": "sequence_classification, token_classification",
    "description": "RNA Genome Benchmark",
    "url": "https://huggingface.co/datasets/yangheng/OmniGenBench_Hub",
    "author": "YANG, HENG",
    "license": "Apache-2.0"
  }
}
datasets/datasets_info.json
{
  "deepsea_tfb_prediction": {
    "filename": "deepsea_tfb_prediction.zip",
    "genome": "DNA",
    "species": "human",
    "task_type": "sequence_classification",
    "description": "DeepSEA TFB prediction",
    "url": "https://huggingface.co/datasets/yangheng/OmniGenBench_Hub",
    "author": "YANG, HENG",
    "license": "Apache-2.0"
  }
}
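Both files can be fetched and parsed directly to enumerate the available resources without unpacking anything. A minimal sketch using `hf_hub_download` and the standard library:

```python
import json
from huggingface_hub import hf_hub_download

# Fetch the benchmark metadata file and list what the hub currently offers.
info_path = hf_hub_download(
    repo_id="yangheng/OmniGenBench_Hub",
    filename="benchmarks/benchmarks_info.json",
    repo_type="dataset",
)
with open(info_path) as f:
    benchmarks_info = json.load(f)

for name, info in benchmarks_info.items():
    print(name, info.get("genome"), info.get("description"))
```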
Download URLs
All resources follow a consistent URL pattern:
https://huggingface.co/datasets/yangheng/OmniGenBench_Hub/resolve/main/<category>/<filename>
Examples:
- Benchmark: https://huggingface.co/datasets/yangheng/OmniGenBench_Hub/resolve/main/benchmarks/RGB.zip
- Dataset: https://huggingface.co/datasets/yangheng/OmniGenBench_Hub/resolve/main/datasets/deepsea_tfb_prediction.zip
- Metadata: https://huggingface.co/datasets/yangheng/OmniGenBench_Hub/resolve/main/benchmarks/benchmarks_info.json
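The same resolve URLs can also be constructed programmatically with `huggingface_hub.hf_hub_url`, which follows exactly this pattern; a minimal sketch:

```python
from huggingface_hub import hf_hub_url

# Build the resolve URL for a benchmark archive following the pattern above.
url = hf_hub_url(
    repo_id="yangheng/OmniGenBench_Hub",
    filename="benchmarks/RGB.zip",
    repo_type="dataset",
    revision="main",
)
print(url)
# -> https://huggingface.co/datasets/yangheng/OmniGenBench_Hub/resolve/main/benchmarks/RGB.zip
```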
Automated Downloads
The OmniGenBench framework automatically downloads resources from this hub with fallback support:
- Primary: New unified Hub structure (this repository)
- Fallback: Legacy Space repository (deprecated)
- Cache: Local cache if network unavailable
# Framework handles downloads automatically
from omnigenbench import AutoBench
# This will automatically download RGB.zip from this hub
bench = AutoBench(benchmark="RGB", config_or_model="yangheng/OmniGenome-186M")
bench.run()
📁 Standard Data Structure
Benchmark Structure
<benchmark_name>.zip
├── <task_1>/
│   ├── train.json
│   ├── test.json
│   ├── config.py
│   └── metadata.py
├── <task_2>/
│   └── ...
└── metadata.py
Dataset Structure
<dataset_name>.zip
├── data/
│   ├── train.json
│   ├── valid.json
│   ├── test.json
│   └── metadata.json
├── config.py
└── README.md
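Because the splits are plain JSON files, an extracted dataset can be inspected without the framework. A minimal sketch, assuming the archive has been unpacked into a local `deepsea_tfb_prediction/` folder following the layout above (the split files may be a single JSON document or JSON Lines, so both cases are handled):

```python
import json

# Hypothetical local path: the dataset archive extracted next to this script.
split_path = "deepsea_tfb_prediction/data/train.json"

with open(split_path) as f:
    try:
        # Case 1: the split is a single JSON document (list of records).
        records = json.load(f)
    except json.JSONDecodeError:
        # Case 2: the split is JSON Lines, one record per line.
        f.seek(0)
        records = [json.loads(line) for line in f if line.strip()]

print(len(records), "training examples")
print(records[0])
```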
🛠️ Integration with OmniGenBench
Using Benchmarks
# Download and evaluate on RGB benchmark
ogb autobench -m yangheng/PlantRNA-FM -b RGB
# Download and evaluate on multiple benchmarks
ogb autobench -m yangheng/OmniGenome-186M -b RGB,GUE,BEACON
Using Datasets
from omnigenbench import OmniDatasetForSequenceClassification, OmniTokenizer
# Initialize tokenizer
tokenizer = OmniTokenizer.from_pretrained("yangheng/OmniGenome-52M")
# Load dataset (automatically downloads from this hub)
datasets = OmniDatasetForSequenceClassification.from_hub(
    dataset_name="deepsea_tfb_prediction",
    tokenizer=tokenizer,
    max_length=512
)
# Access splits
train_dataset = datasets['train']
valid_dataset = datasets['valid']
test_dataset = datasets['test']
Direct HuggingFace Hub API
from huggingface_hub import hf_hub_download
# Download specific file
file_path = hf_hub_download(
    repo_id="yangheng/OmniGenBench_Hub",
    filename="benchmarks/RGB.zip",
    repo_type="dataset"
)
# Download metadata
metadata_path = hf_hub_download(
    repo_id="yangheng/OmniGenBench_Hub",
    filename="benchmarks/benchmarks_info.json",
    repo_type="dataset"
)
🌟 Key Features
- ✅ Unified Repository: All resources in one centralized location
- ✅ Organized Structure: Clear subdirectories for benchmarks, datasets, models, pipelines
- ✅ Metadata Rich: Comprehensive JSON metadata for programmatic access
- ✅ Auto-Download: Framework automatically downloads from this hub
- ✅ Backward Compatible: Fallback support for legacy repositories
- ✅ Standardized Formats: Consistent data structures across all resources
- ✅ Research-Ready: Validated and tested with the OmniGenBench framework
📚 Documentation
For comprehensive guides and tutorials, please visit:
- Framework Documentation: OmniGenBench Docs
- Getting Started Guide: GETTING_STARTED.md
- API Reference: API Documentation
- Example Notebooks: examples/
🤝 Contributing
We welcome contributions to expand our resource collection! To contribute:
- Format your resource according to our standards
- Include comprehensive metadata and documentation
- Test with OmniGenBench framework
- Submit a pull request to OmniGenBench repository
📄 License
All resources in OmniGenBench Hub are released under the Apache 2.0 License, ensuring:
- Free use for research and commercial applications
- Modification and redistribution rights
- Patent protection for users
- Clear attribution requirements
💬 Support
- GitHub Issues: Report bugs or request features
- GitHub Discussions: Ask questions and share ideas
- Email: [email protected]
📝 Citation
If you use resources from OmniGenBench Hub in your research, please cite:
@software{omnigenbench2025,
  author = {Yang, Heng},
  title = {OmniGenBench: A Unified Framework for Genomic Foundation Models},
  year = {2025},
  url = {https://github.com/yangheng95/OmniGenBench}
}
📊 Statistics
| Category | Count | Description |
|---|---|---|
| Benchmarks | 5 | Comprehensive evaluation suites (RGB, GUE, BEACON, GB, PGB) |
| Datasets | 5 | Curated individual datasets for specific tasks |
| Models | Coming Soon | Pre-trained genomic foundation models |
| Pipelines | Coming Soon | Ready-to-use analysis pipelines |
| Total Tasks | 67+ | Combined tasks across all benchmarks |
🗺️ Roadmap
Current Release (v1.0)
- ✅ 5 benchmark suites with 67+ tasks
- ✅ 5 curated datasets
- ✅ Organized folder structure
- ✅ Comprehensive metadata files
- ✅ Automated download integration
Future Releases
- 📦 Pre-trained model hosting
- 🔄 Analysis pipeline templates
- 📊 Additional benchmark suites
- 📚 More individual datasets
- 🚀 Continuous updates and improvements
🧬 OmniGenBench Hub - Unified Infrastructure for Genomic Foundation Model Research