🧬 OmniGenBench Hub: Unified Repository for Genomic Foundation Model Resources

Welcome to OmniGenBench Hub - the unified, centralized repository for genomic foundation model resources. This hub consolidates all benchmarks, datasets, models, and pipelines required by the OmniGenBench framework, providing researchers with a single source of truth for reproducible genomic AI research.

📦 Repository Structure

OmniGenBench_Hub/
├── benchmarks/              # Benchmark suites (RGB, GUE, BEACON, etc.)
│   ├── benchmarks_info.json
│   ├── RGB.zip
│   ├── GUE.zip
│   ├── BEACON.zip
│   ├── GB.zip
│   ├── PGB.zip
│   └── ...
├── datasets/                # Individual datasets
│   ├── datasets_info.json
│   ├── deepsea_tfb_prediction.zip
│   ├── translation_efficiency_prediction.zip
│   └── variant_effect_prediction.zip
├── models/                  # Pre-trained models (future)
│   └── models_info.json
└── pipelines/               # Ready-to-use pipelines (future)
    └── pipelines_info.json
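
The same layout can be inspected programmatically with the standard huggingface_hub API; this is a generic Hub call, not an OmniGenBench-specific helper:

from huggingface_hub import list_repo_files

# List every file stored in the hub repository; the paths mirror the tree above
files = list_repo_files(repo_id="yangheng/OmniGenBench_Hub", repo_type="dataset")
for path in sorted(files):
    print(path)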

🎯 What is OmniGenBench Hub?

OmniGenBench Hub serves as the unified data infrastructure for the OmniGenBench framework, providing:

  • ✅ Centralized Storage: All resources in one place instead of scattered repositories
  • ✅ Organized Structure: Clear subdirectories (benchmarks/, datasets/, models/, pipelines/)
  • ✅ Rich Metadata: Comprehensive JSON metadata files for programmatic access
  • ✅ Backward Compatible: Seamless migration from legacy Space repositories
  • ✅ Standardized Formats: Consistent data formats and structures across all resources
  • ✅ Validated Content: All resources tested through the OmniGenBench framework

🚀 Quick Start

Download Benchmarks

# Using OmniGenBench CLI (automatically uses this hub)
ogb autobench -m yangheng/PlantRNA-FM -b RGB

# The framework will automatically download from:
# https://huggingface.co/datasets/yangheng/OmniGenBench_Hub/resolve/main/benchmarks/RGB.zip

Download Datasets

from omnigenbench import OmniDatasetForSequenceClassification, OmniTokenizer

# Initialize tokenizer
tokenizer = OmniTokenizer.from_pretrained("yangheng/OmniGenome-52M")

# Load dataset from hub (automatically downloads from this repository)
datasets = OmniDatasetForSequenceClassification.from_hub(
    dataset_name="deepsea_tfb_prediction",
    tokenizer=tokenizer,
    max_length=512
)

# Access splits
train_dataset = datasets['train']
valid_dataset = datasets['valid']
test_dataset = datasets['test']

Direct Download via HuggingFace Hub API

from huggingface_hub import snapshot_download

# Download a benchmark archive (snapshot_download returns the local snapshot directory)
benchmark_path = snapshot_download(
    repo_id="yangheng/OmniGenBench_Hub",
    repo_type="dataset",
    allow_patterns="benchmarks/RGB.zip"
)

# Download a specific dataset archive
dataset_path = snapshot_download(
    repo_id="yangheng/OmniGenBench_Hub",
    repo_type="dataset",
    allow_patterns="datasets/translation_efficiency_prediction.zip"
)
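
snapshot_download returns the local snapshot directory rather than the archive itself, so the zip still needs to be located and extracted before use. A minimal sketch using Python's standard zipfile module (the extraction target is an arbitrary choice):

import os
import zipfile

# The downloaded archive sits under the snapshot directory returned above
zip_path = os.path.join(benchmark_path, "benchmarks", "RGB.zip")

# Extract next to the archive; any writable directory works
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall(os.path.join(benchmark_path, "benchmarks", "RGB"))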

📊 Available Resources

🧪 Benchmarks

Our hub hosts 5 comprehensive benchmark suites covering RNA and DNA analysis tasks:

| Benchmark | Genome | Tasks | Task Types | Species | Description |
|-----------|--------|-------|------------|---------|-------------|
| RGB | RNA | 10 | Classification, Token Classification | Multi-species | RNA Genome Benchmark - comprehensive RNA understanding |
| BEACON | RNA | 13 | Classification, Regression | Multi-species | Benchmarking Environment for RNA Computational Methods |
| GUE | DNA | 28 | Classification | Multi-species | Genomic Understanding Evaluation |
| GB | DNA | 9 | Classification | Multi-species | Classic Genomic Benchmark |
| PGB | DNA | 7 | Classification, Regression | Plant | Plant Genomics Benchmark |

Download locations: benchmarks/<benchmark_name>.zip
Metadata: benchmarks/benchmarks_info.json

πŸ“ Datasets

5 curated individual datasets for specific genomic tasks:

| Dataset | Genome | Task Type | Species | Description |
|---------|--------|-----------|---------|-------------|
| deepsea_tfb_prediction | DNA | Classification | Human | Transcription factor binding prediction |
| translation_efficiency_prediction | RNA | Regression | Multi-species | mRNA translation efficiency |
| variant_effect_prediction | DNA | Classification | Human | Genomic variant effect prediction |
| RNA-SSP-Archive2 | RNA | Token Classification | Multi-species | RNA secondary structure prediction |
| RNA-mRNA | RNA | Classification | Multi-species | mRNA classification |

Download locations: datasets/<dataset_name>.zip
Metadata: datasets/datasets_info.json

🤖 Models (Coming Soon)

Pre-trained genomic foundation models will be hosted here in future releases. Currently, models are available directly on the HuggingFace Hub.

Metadata: models/models_info.json

Pipelines (Coming Soon)

Ready-to-use analysis pipelines will be added in future releases.

Metadata: pipelines/pipelines_info.json

🔧 Technical Details

Metadata Files

Each resource category includes a metadata JSON file for programmatic access:

benchmarks/benchmarks_info.json

{
  "RGB": {
    "filename": "RGB.zip",
    "genome": "RNA",
    "species": "multi-species",
    "task_number": 10,
    "task_type": "sequence_classification, token_classification",
    "description": "RNA Genome Benchmark",
    "url": "https://huggingface.co/datasets/yangheng/OmniGenBench_Hub",
    "author": "YANG, HENG",
    "license": "Apache-2.0"
  }
}

datasets/datasets_info.json

{
  "deepsea_tfb_prediction": {
    "filename": "deepsea_tfb_prediction.zip",
    "genome": "DNA",
    "species": "human",
    "task_type": "sequence_classification",
    "description": "DeepSEA TFB prediction",
    "url": "https://huggingface.co/datasets/yangheng/OmniGenBench_Hub",
    "author": "YANG, HENG",
    "license": "Apache-2.0"
  }
}
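
These metadata files can be fetched and parsed directly to discover the available resources. A short sketch using hf_hub_download and the standard json module (the printed fields are the ones shown in the sample above):

import json
from huggingface_hub import hf_hub_download

# Fetch the benchmark metadata file from the hub
info_path = hf_hub_download(
    repo_id="yangheng/OmniGenBench_Hub",
    filename="benchmarks/benchmarks_info.json",
    repo_type="dataset",
)

with open(info_path) as f:
    benchmarks_info = json.load(f)

# List each benchmark with its genome type and task count
for name, meta in benchmarks_info.items():
    print(name, meta["genome"], meta.get("task_number"))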

Download URLs

All resources follow a consistent URL pattern:

https://huggingface.co/datasets/yangheng/OmniGenBench_Hub/resolve/main/<category>/<filename>

Examples:

  • Benchmark: https://huggingface.co/datasets/yangheng/OmniGenBench_Hub/resolve/main/benchmarks/RGB.zip
  • Dataset: https://huggingface.co/datasets/yangheng/OmniGenBench_Hub/resolve/main/datasets/deepsea_tfb_prediction.zip
  • Metadata: https://huggingface.co/datasets/yangheng/OmniGenBench_Hub/resolve/main/benchmarks/benchmarks_info.json
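
The same URLs can be constructed programmatically with huggingface_hub's hf_hub_url, which follows exactly this resolve/main pattern:

from huggingface_hub import hf_hub_url

# Build the direct download URL for a benchmark archive
url = hf_hub_url(
    repo_id="yangheng/OmniGenBench_Hub",
    filename="benchmarks/RGB.zip",
    repo_type="dataset",
)
print(url)
# https://huggingface.co/datasets/yangheng/OmniGenBench_Hub/resolve/main/benchmarks/RGB.zip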

Automated Downloads

The OmniGenBench framework automatically downloads resources from this hub with fallback support:

  1. Primary: New unified Hub structure (this repository)
  2. Fallback: Legacy Space repository (deprecated)
  3. Cache: Local cache if network unavailable

# Framework handles downloads automatically
from omnigenbench import AutoBench

# This will automatically download RGB.zip from this hub
bench = AutoBench(benchmark="RGB", config_or_model="yangheng/OmniGenome-186M")
bench.run()
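
For readers curious what this resolution order looks like in code, here is a minimal sketch of primary-then-cache fallback logic; it is illustrative only (the cache directory is an assumption and the deprecated legacy step is omitted), not the framework's actual implementation:

import os
import requests

HUB_URL = "https://huggingface.co/datasets/yangheng/OmniGenBench_Hub/resolve/main/benchmarks/{name}.zip"
CACHE_DIR = os.path.expanduser("~/.cache/omnigenbench_demo")  # illustrative cache location

def fetch_benchmark(name: str) -> str:
    cached = os.path.join(CACHE_DIR, f"{name}.zip")
    try:
        # 1. Primary: the unified hub (this repository)
        resp = requests.get(HUB_URL.format(name=name), timeout=30)
        resp.raise_for_status()
        os.makedirs(CACHE_DIR, exist_ok=True)
        with open(cached, "wb") as f:
            f.write(resp.content)
        return cached
    except requests.RequestException:
        # 2./3. Fall back to a previously cached copy when the network is unavailable
        if os.path.exists(cached):
            return cached
        raise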

πŸ“ Standard Data Structure

Benchmark Structure

<benchmark_name>.zip
├── <task_1>/
│   ├── train.json
│   ├── test.json
│   ├── config.py
│   └── metadata.py
├── <task_2>/
│   └── ...
└── metadata.py

Dataset Structure

<dataset_name>.zip
├── data/
│   ├── train.json
│   ├── valid.json
│   ├── test.json
│   └── metadata.json
├── config.py
└── README.md
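
Once an archive is extracted, the split files can be read with the standard library. The sketch below assumes the layout above; it handles both a single JSON document and JSON Lines, since the exact record format may vary between datasets:

import json
from pathlib import Path

def load_split(dataset_dir, split="train"):
    # Read data/<split>.json from an extracted dataset directory
    text = (Path(dataset_dir) / "data" / f"{split}.json").read_text()
    try:
        return json.loads(text)  # a single JSON document
    except json.JSONDecodeError:
        # Fall back to JSON Lines: one record per line
        return [json.loads(line) for line in text.splitlines() if line.strip()]

# Example (the path is illustrative):
# records = load_split("deepsea_tfb_prediction", split="train")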

πŸ› οΈ Integration with OmniGenBench

Using Benchmarks

# Download and evaluate on RGB benchmark
ogb autobench -m yangheng/PlantRNA-FM -b RGB

# Download and evaluate on multiple benchmarks
ogb autobench -m yangheng/OmniGenome-186M -b RGB,GUE,BEACON

Using Datasets

from omnigenbench import OmniDatasetForSequenceClassification, OmniTokenizer

# Initialize tokenizer
tokenizer = OmniTokenizer.from_pretrained("yangheng/OmniGenome-52M")

# Load dataset (automatically downloads from this hub)
datasets = OmniDatasetForSequenceClassification.from_hub(
    dataset_name="deepsea_tfb_prediction",
    tokenizer=tokenizer,
    max_length=512
)

# Access splits
train_dataset = datasets['train']
valid_dataset = datasets['valid']
test_dataset = datasets['test']

Direct HuggingFace Hub API

from huggingface_hub import hf_hub_download

# Download specific file
file_path = hf_hub_download(
    repo_id="yangheng/OmniGenBench_Hub",
    filename="benchmarks/RGB.zip",
    repo_type="dataset"
)

# Download metadata
metadata_path = hf_hub_download(
    repo_id="yangheng/OmniGenBench_Hub",
    filename="benchmarks/benchmarks_info.json",
    repo_type="dataset"
)

🌟 Key Features

  • ✅ Unified Repository: All resources in one centralized location
  • ✅ Organized Structure: Clear subdirectories for benchmarks, datasets, models, pipelines
  • ✅ Rich Metadata: Comprehensive JSON metadata for programmatic access
  • ✅ Auto-Download: Framework automatically downloads from this hub
  • ✅ Backward Compatible: Fallback support for legacy repositories
  • ✅ Standardized Formats: Consistent data structures across all resources
  • ✅ Research-Ready: Validated and tested with the OmniGenBench framework

Documentation

For comprehensive guides and tutorials, please see the OmniGenBench GitHub repository: https://github.com/yangheng95/OmniGenBench

🤝 Contributing

We welcome contributions to expand our resource collection! To contribute:

  1. Format your resource according to our standards
  2. Include comprehensive metadata and documentation
  3. Test with OmniGenBench framework
  4. Submit a pull request to the OmniGenBench repository

📄 License

All resources in OmniGenBench Hub are released under the Apache 2.0 License, ensuring:

  • Free use for research and commercial applications
  • Modification and redistribution rights
  • Patent protection for users
  • Clear attribution requirements

📞 Support

For questions or direct support, please open a discussion on this repository or an issue on the OmniGenBench GitHub repository.

Citation

If you use resources from OmniGenBench Hub in your research, please cite:

@software{omnigenbench2025,
  author = {Yang, Heng},
  title = {OmniGenBench: A Unified Framework for Genomic Foundation Models},
  year = {2025},
  url = {https://github.com/yangheng95/OmniGenBench}
}

📊 Statistics

| Category | Count | Description |
|----------|-------|-------------|
| Benchmarks | 5 | Comprehensive evaluation suites (RGB, GUE, BEACON, GB, PGB) |
| Datasets | 5 | Curated individual datasets for specific tasks |
| Models | Coming Soon | Pre-trained genomic foundation models |
| Pipelines | Coming Soon | Ready-to-use analysis pipelines |
| Total Tasks | 67+ | Combined tasks across all benchmarks |

πŸ—ΊοΈ Roadmap

Current Release (v1.0)

  • ✅ 5 benchmark suites with 67+ tasks
  • ✅ 5 curated datasets
  • ✅ Organized folder structure
  • ✅ Comprehensive metadata files
  • ✅ Automated download integration

Future Releases

  • 📦 Pre-trained model hosting
  • Analysis pipeline templates
  • 📊 Additional benchmark suites
  • 🌐 More individual datasets
  • 🔄 Continuous updates and improvements

🧬 OmniGenBench Hub - Unified Infrastructure for Genomic Foundation Model Research

Maintained by YANG, HENG | Homepage | GitHub
