---
license: apache-2.0
pretty_name: WEAVE
configs:
- config_name: default
data_files:
- split: train
path: data/*
task_categories:
- any-to-any
- image-to-image
- text-to-image
- visual-question-answering
language:
- en
tags:
- image
- image-editing
- multimodal
library_name: datasets
---
# 🧵 WEAVE
[Paper](https://arxiv.org/abs/2511.11434)
[Dataset](https://huggingface.co/datasets/WeiChow/Weave/)
[Model](https://huggingface.co/WeiChow/Bagel-weave)
[Code](https://github.com/weichow23/weave)
[Project Page](https://weichow23.github.io/weave/)
> 📄 This is the official repository for the dataset presented in:
> **Weave: A Benchmark for Evaluating Multimodal Editing Models**
## 📚 Dataset Structure
WEAVE consists of two main components:
- 🔹 **WEAVE-100k** (Training Set): Stored in the `data/` folder in parquet format
- 🔹 **WEAVEBench** (Test Set): Stored in the `test/` folder in both zip and JSON formats
## 📊 WEAVE-100k
WEAVE-100k is generated through four sophisticated pipelines and multiple validation stages using state-of-the-art VLMs and image generation models:
- ✨ Leverages cutting-edge models including GPT-4.1, Nano Banana, and SeeDream 4.0
- ✨ Used to fine-tune [Bagel](https://huggingface.co/WeiChow/Bagel-weave), which achieved superior results on multiple benchmarks including GenEval and GeditBench
- ✨ For more details, please refer to our paper: [arXiv:2511.15738](https://arxiv.org/abs/2511.15738)
## 🧪 WEAVEBench
WEAVEBench is manually designed and curated, featuring 16 diverse categories of editing tasks.
### 📝 Test Set File Format (`test.json`)
> ⚠️ **Note**: `Image #1` refers to the first image in the `images` array (`images[0]`). Indices start at 1 and denote the image's position in that array, not the conversation turn.
>
> In multi-turn conversations, replace each image index exactly once with `Image #{idx}\n`. In single-turn conversations, simply replace it directly.
```json
{
  "domain": "string",
  "images": [],
  "chats": []
}
```
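As a concrete illustration of the indexing note above, the sketch below resolves `Image #{idx}` references in an instruction back to entries of the `images` array. The sample record and the `resolve_images` helper are hypothetical, not part of the official tooling; only the field names follow the schema shown here.

```python
import re

def resolve_images(chat_text, images):
    # Collect every 1-based `Image #k` reference in the instruction
    # and map it to the corresponding 0-based entry of `images`.
    refs = [int(k) for k in re.findall(r"Image #(\d+)", chat_text)]
    return [images[k - 1] for k in refs]

# Hypothetical WEAVEBench-style record following the schema above.
record = {
    "domain": "example",
    "images": ["img_a.png", "img_b.png"],
    "chats": ["Blend Image #1 into the background of Image #2"],
}

print(resolve_images(record["chats"][0], record["images"]))
# -> ['img_a.png', 'img_b.png']
```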
## 🚀 Want to Test on WEAVEBench?
Please refer to our code repository: [weichow23/weave](https://github.com/weichow23/weave)
## ✏️ Citation
```bibtex
@article{weave2025,
  title={Weave: A Benchmark for Evaluating Multimodal Editing Models},
  author={Chow, Wei and others},
  journal={arXiv preprint arXiv:2511.15738},
  year={2025}
}
```