---
tags:
  - ocr
  - document-processing
  - hunyuan-ocr
  - multilingual
  - markdown
  - uv-script
  - generated
---

# Document OCR using HunyuanOCR

This dataset contains OCR results for the images in [NationalLibraryOfScotland/Scottish-School-Exam-Papers](https://huggingface.co/datasets/NationalLibraryOfScotland/Scottish-School-Exam-Papers), generated with HunyuanOCR, a lightweight 1B vision-language model (VLM) from Tencent.

## Processing Details

### Configuration

- **Image Column**: `image`
- **Output Column**: `markdown`
- **Dataset Split**: `train`
- **Batch Size**: 1
- **Prompt Mode**: `parse-document`
- **Prompt Language**: English
- **Max Model Length**: 16,384 tokens
- **Max Output Tokens**: 16,384
- **GPU Memory Utilization**: 80%

## Model Information

HunyuanOCR is a lightweight 1B VLM that excels at the following tasks (a usage sketch follows the list):

- 📝 **Document Parsing** - full markdown extraction with reading order
- 📊 **Table Extraction** - tables in HTML format
- 📐 **Formula Recognition** - formulas in LaTeX format
- 📈 **Chart Parsing** - Mermaid/Markdown output
- 📍 **Text Spotting** - text detection with coordinates
- 🔍 **Information Extraction** - key-value pairs, fields, subtitles
- 🌐 **Translation** - multilingual photo translation
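For readers who want to query the model directly rather than through the processing script, here is a minimal sketch of single-image inference with vLLM, mirroring the configuration above. The model ID `tencent/HunyuanOCR`, the image URL, and the prompt wording are assumptions rather than values taken from the script.

```python
# Minimal single-image inference sketch with vLLM. The model ID, image URL,
# and prompt text are assumptions and may differ from what the script uses.
from vllm import LLM, SamplingParams

llm = LLM(
    model="tencent/HunyuanOCR",   # assumed Hugging Face model ID
    max_model_len=16384,          # matches "Max Model Length" above
    gpu_memory_utilization=0.8,   # matches "GPU Memory Utilization" above
    trust_remote_code=True,
)

messages = [{
    "role": "user",
    "content": [
        {"type": "image_url", "image_url": {"url": "https://example.com/exam-page.jpg"}},  # placeholder image
        {"type": "text", "text": "Parse the document into markdown."},  # assumed parse-document style prompt
    ],
}]

outputs = llm.chat(messages, SamplingParams(temperature=0.0, max_tokens=16384))
print(outputs[0].outputs[0].text)
```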

## Prompt Modes Available

- `parse-document` - Full document parsing (default)
- `parse-formula` - LaTeX formula extraction
- `parse-table` - HTML table extraction
- `parse-chart` - Chart/flowchart parsing
- `spot` - Text detection with coordinates
- `extract-key` - Extract the value for a specific key
- `extract-fields` - Extract multiple fields as JSON
- `extract-subtitles` - Subtitle extraction
- `translate` - Document translation

## Dataset Structure

The dataset contains all of the original columns plus:

- `markdown`: the extracted text in Markdown format
- `inference_info`: a JSON list tracking all OCR models applied to this dataset (see the sketch below)
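For orientation, a single parsed `inference_info` entry looks roughly like the dictionary below. Only `column_name` and `model_id` appear in the usage example that follows; the model ID value shown here is an assumption, and additional fields may exist.

```python
# Hypothetical shape of one inference_info entry after json.loads();
# the model_id value is an assumption, and other fields may be present.
entry = {
    "column_name": "markdown",         # which column this model's output was written to
    "model_id": "tencent/HunyuanOCR",  # assumed model identifier
}
```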

## Usage

```python
from datasets import load_dataset
import json

# Load the dataset
dataset = load_dataset("davanstrien/exams-hunyuan-ocr", split="train")

# Access the markdown text
for example in dataset:
    print(example["markdown"])
    break

# View all OCR models applied to this dataset
inference_info = json.loads(dataset[0]["inference_info"])
for info in inference_info:
    print(f"Column: {info['column_name']} - Model: {info['model_id']}")
```

## Reproduction

This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) HunyuanOCR script:

```bash
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/hunyuan-ocr.py \
    NationalLibraryOfScotland/Scottish-School-Exam-Papers \
    <output-dataset> \
    --image-column image \
    --batch-size 1 \
    --prompt-mode parse-document \
    --max-model-len 16384 \
    --max-tokens 16384 \
    --gpu-memory-utilization 0.8
```

Generated with UV Scripts