---
license: apache-2.0
pretty_name: WEAVE
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/*
task_categories:
  - any-to-any
  - image-to-image
  - text-to-image
  - visual-question-answering
language:
  - en
tags:
  - image
  - image-editing
  - multimodal
library_name: datasets
---

# 🧡 WEAVE


📌 This is the official repository for the dataset presented in:
*Weave: A Benchmark for Evaluating Multimodal Editing Models*

## 📊 Dataset Structure

WEAVE consists of two main components:

- 🔹 **WEAVE-100k** (training set): stored in the `data/` folder in Parquet format
- 🔹 **WEAVEBench** (test set): stored in the `test/` folder in both ZIP and JSON formats
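As a minimal sketch of how the two components might be loaded, assuming the standard 🤗 `datasets` workflow for the Parquet training split and plain JSON for the test annotations (the function names and the exact `test.json` location are illustrative assumptions, not confirmed by this card):

```python
import json

def load_weave_100k(repo_id: str):
    """Load the WEAVE-100k training split (Parquet files under data/).

    Requires the Hugging Face `datasets` library; `repo_id` is the Hub
    path of this dataset repository.
    """
    from datasets import load_dataset
    return load_dataset(repo_id, split="train")

def load_weavebench(json_path: str):
    """Load WEAVEBench records from the test set's JSON annotation file
    (assumed to be the test.json shipped in the test/ folder)."""
    with open(json_path, encoding="utf-8") as f:
        return json.load(f)
```

Each WEAVEBench record then exposes the `domain`, `images`, and `chats` fields described below.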

## 🚀 WEAVE-100k


WEAVE-100k is generated through four data-construction pipelines with multiple validation stages, using state-of-the-art VLMs and image generation models:

- ✨ Leverages cutting-edge models including GPT-4.1, Nano Banana, and Seedream 4.0
- ✨ Used to fine-tune Bagel, which achieved superior results on multiple benchmarks including GenEval and GEdit-Bench
- ✨ For more details, please refer to our paper: arXiv:2511.15738

## 🧪 WEAVEBench

WEAVEBench is manually designed and curated, featuring 16 diverse categories of editing tasks.

*(Figure: test set categories)*

### 📝 Test Set File Format (`test.json`)

⚠️ **Note:** `Image #1` refers to the first image, with indices starting from 1. The number is an image index, not a conversation turn: `Image #1` corresponds to the first entry of the `images` array (`images[0]`).

For multi-turn conversations, replace each image index with `Image #{idx}<image>\n` exactly once. For single-turn conversations, simply perform the replacement directly.

```json
{
    "domain": "string",
    "images": [],
    "chats": []
}
```
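The replacement rule above can be sketched as follows. This is an illustrative helper (the function name is ours), assuming "exactly once" means only the first mention of each index gets the `<image>` token while later mentions stay as plain text:

```python
import re

def insert_image_tokens(text: str) -> str:
    """Replace each `Image #{idx}` reference with `Image #{idx}<image>\n`,
    expanding every distinct index exactly once (first occurrence only)."""
    seen = set()

    def repl(match):
        idx = match.group(1)
        if idx in seen:
            return match.group(0)  # later mentions of the same index stay as-is
        seen.add(idx)
        return f"Image #{idx}<image>\n"

    return re.sub(r"Image #(\d+)", repl, text)
```

For example, `insert_image_tokens("Edit Image #1 using Image #2; keep Image #1 unchanged.")` inserts one `<image>` token after the first mention of each of the two indices.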

## 🔍 Want to Test on WEAVEBench?

Please refer to our code repository: GitHub

## ✍️ Citation

```bibtex
@article{weave2025,
  title={Weave: A Benchmark for Evaluating Multimodal Editing Models},
  author={Chow, Wei and others},
  journal={arXiv preprint arXiv:2511.15738},
  year={2025}
}
```