---
license: apache-2.0
pretty_name: WEAVE
configs:
- config_name: default
data_files:
- split: train
path: data/*
task_categories:
- any-to-any
- image-to-image
- text-to-image
- visual-question-answering
language:
- en
tags:
- image
- image-editing
- multimodal
library_name: datasets
---
# 🧵 WEAVE
📄 This is the official repository for the dataset presented in:

**Weave: A Benchmark for Evaluating Multimodal Editing Models**
## 📁 Dataset Structure
WEAVE consists of two main components:

- 🔹 **WEAVE-100k (Training Set):** stored in the `data/` folder in Parquet format
- 🔹 **WEAVEBench (Test Set):** stored in the `test/` folder in both ZIP and JSON formats
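For a quick start, here is a minimal loading sketch using the `datasets` library; the repository id below is a placeholder, so substitute the actual Hugging Face dataset id:

```python
from datasets import load_dataset

# Placeholder repo id -- replace with the actual Hugging Face dataset id.
ds = load_dataset("ORG/WEAVE", split="train")

print(ds)     # dataset summary (features, number of rows)
print(ds[0])  # inspect one training example
```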
## 📊 WEAVE-100k
WEAVE-100k is generated through four sophisticated pipelines and multiple validation stages, using state-of-the-art VLMs and image generation models:

- ✨ Leverages cutting-edge models including GPT-4.1, Nano Banana, and Seedream 4.0
- ✨ Used to fine-tune Bagel, which achieved superior results on multiple benchmarks including GenEval and GEdit-Bench
- ✨ For more details, please refer to our paper:
## 🧪 WEAVEBench
WEAVEBench is manually designed and curated, featuring 16 diverse categories of editing tasks.
### 📝 Test Set File Format (`test.json`)
⚠️ **Note:** `Image #1` refers to the first image, with indexing starting from 1. This is an image index, not a conversation turn; it corresponds to the first entry in the `images` array (`images[0]`). In multi-turn conversations, each image index should be replaced once with `Image #{idx}<image>\n`; in single-turn conversations, simply replace it directly (see the sketch after the schema below).
```json
{
  "domain": "string",
  "images": [],
  "chats": []
}
```
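Below is a minimal sketch of consuming `test.json` under the schema above and applying the replacement rule from the note. The per-turn structure inside `chats` is not documented on this card, so treating each turn as a plain prompt string is an assumption to adapt:

```python
import json
import re

# Path assumed from the card's description of the test/ folder.
with open("test/test.json", encoding="utf-8") as f:
    records = json.load(f)

def expand_image_refs(text: str, seen: set) -> str:
    r"""Replace the first mention of each 'Image #{idx}' marker with
    'Image #{idx}<image>\n'; repeated mentions of an index stay as-is."""
    def repl(m):
        idx = m.group(1)
        if idx in seen:
            return m.group(0)  # already expanded once for this index
        seen.add(idx)
        return f"Image #{idx}<image>\n"
    return re.sub(r"Image #(\d+)", repl, text)

for record in records:
    seen = set()
    # Assumption: each chat turn is (or contains) a plain prompt string.
    prompts = [expand_image_refs(str(turn), seen) for turn in record["chats"]]
```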
## 🚀 Want to Test on WEAVEBench?
Please refer to our code repository:
## ✍️ Citation
```bibtex
@article{weave2025,
  title={Weave: A Benchmark for Evaluating Multimodal Editing Models},
  author={Chow, Wei and others},
  journal={arXiv preprint arXiv:2511.15738},
  year={2025}
}
```