---
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- text-to-video
library_name: fastvideo
tags:
- fastvideo
- synthetic
- video-diffusion
---
# FastVideo Synthetic Wan2.2 720P dataset
<p align="center">
<img src="https://raw.githubusercontent.com/hao-ai-lab/FastVideo/main/assets/logo.png" width="200"/>
</p>
<div>
<div align="center">
<a href="https://github.com/hao-ai-lab/FastVideo" target="_blank">FastVideo Team</a> 
</div>
<div align="center">
<a href="https://arxiv.org/abs/2505.13389">Paper</a> |
<a href="https://github.com/hao-ai-lab/FastVideo">Github</a> |
<a href="https://hao-ai-lab.github.io/FastVideo">Project Page</a>
</div>
</div>
## Abstract
Scaling video diffusion transformers (DiTs) is limited by their quadratic 3D attention, even though most of the attention mass concentrates on a small subset of positions. We turn this observation into VSA, a trainable, hardware-efficient sparse attention that replaces full attention at *both* training and inference. In VSA, a lightweight coarse stage pools tokens into tiles and identifies high-weight *critical tokens*; a fine stage computes token-level attention only inside those tiles, subject to a block computing layout to ensure hardware efficiency. This leads to a single differentiable kernel that trains end-to-end, requires no post-hoc profiling, and sustains 85% of FlashAttention3 MFU. We perform a large sweep of ablation studies and scaling-law experiments by pretraining DiTs from 60M to 1.4B parameters. VSA reaches a Pareto point that cuts training FLOPs by 2.53× with no drop in diffusion loss. Retrofitting the open-source Wan-2.1 model speeds up attention time by 6× and lowers end-to-end generation time from 31s to 18s with comparable quality. These results establish trainable sparse attention as a practical alternative to full attention and a key enabler for further scaling of video diffusion models.
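The abstract above describes the two-stage structure of VSA at a high level. The snippet below is a minimal PyTorch sketch of that coarse-then-fine idea only; it is not the fused, hardware-efficient VSA kernel from the paper, and the tile size, mean-pooling, and top-k selection rule are illustrative assumptions.

```python
# Conceptual sketch of coarse-to-fine block-sparse attention (not the fused VSA kernel).
import torch
import torch.nn.functional as F

def coarse_to_fine_attention(q, k, v, tile=64, topk=8):
    """q, k, v: [batch, seq, dim]; seq is assumed to be a multiple of `tile`."""
    b, n, d = q.shape
    t = n // tile
    topk = min(topk, t)

    # Coarse stage: mean-pool tokens into tiles and score tile pairs.
    q_tiles = q.view(b, t, tile, d).mean(dim=2)                    # [b, t, d]
    k_tiles = k.view(b, t, tile, d).mean(dim=2)                    # [b, t, d]
    tile_scores = q_tiles @ k_tiles.transpose(-1, -2) / d ** 0.5   # [b, t, t]
    critical = tile_scores.topk(topk, dim=-1).indices              # critical tiles per query tile

    # Fine stage: token-level attention restricted to the selected key/value tiles.
    k_blocks = k.view(b, t, tile, d)
    v_blocks = v.view(b, t, tile, d)
    out = torch.empty_like(q)
    for qt in range(t):
        idx = critical[:, qt]                                       # [b, topk]
        gather_idx = idx[:, :, None, None].expand(-1, -1, tile, d)
        k_sel = torch.gather(k_blocks, 1, gather_idx).reshape(b, topk * tile, d)
        v_sel = torch.gather(v_blocks, 1, gather_idx).reshape(b, topk * tile, d)
        q_blk = q[:, qt * tile:(qt + 1) * tile]                     # [b, tile, d]
        out[:, qt * tile:(qt + 1) * tile] = F.scaled_dot_product_attention(q_blk, k_sel, v_sel)
    return out

# Toy usage: a flattened sequence of 1,024 tokens with 64-dim features.
q = torch.randn(2, 1024, 64)
k = torch.randn(2, 1024, 64)
v = torch.randn(2, 1024, 64)
print(coarse_to_fine_attention(q, k, v).shape)  # torch.Size([2, 1024, 64])
```

In the actual method, the coarse and fine stages are fused into a single differentiable kernel, which is what allows the sparsity pattern to be learned end-to-end while staying fast on GPUs.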
## Dataset Overview
- The prompts were randomly sampled from the [Vchitect_T2V_DataVerse](https://huggingface.co/datasets/Vchitect/Vchitect_T2V_DataVerse) dataset.
- Each sample was generated with the **Wan2.2-TI2V-5B-Diffusers** model and stored as latents.
- Each latent sample corresponds to **121 frames**, with each frame at a resolution of **704×1280**.
- The dataset includes all preprocessed latents required for the **Text-to-Video (T2V)** task, as well as the first-frame image.
- The dataset is fully compatible with the [FastVideo](https://github.com/hao-ai-lab/FastVideo) repository and can be directly loaded and used without any additional preprocessing.
## Sample Usage
To download this dataset, ensure you have Git LFS installed, then clone the repository:
```bash
git lfs install
git clone https://huggingface.co/datasets/FastVideo/Wan2.2-Syn-121x704x1280_32k
```
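Alternatively, if Git LFS is not available, the same files can be fetched with the `huggingface_hub` Python client (the `local_dir` value below is just an example path):

```python
# Download the dataset snapshot without Git LFS.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="FastVideo/Wan2.2-Syn-121x704x1280_32k",
    repo_type="dataset",
    local_dir="Wan2.2-Syn-121x704x1280_32k",  # example target directory
)
print("Dataset downloaded to:", local_dir)
```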
This dataset contains preprocessed latents ready for Text-to-Video (T2V) tasks and is designed to be directly used with the [FastVideo repository](https://github.com/hao-ai-lab/FastVideo) without further preprocessing. Refer to the FastVideo [documentation](https://hao-ai-lab.github.io/FastVideo) for detailed instructions on how to load and use the dataset for training or finetuning.
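As a quick sanity check after downloading, a sketch along the following lines can list the files and inspect one preprocessed sample. The directory layout and the `.pt` extension are assumptions made for illustration; check the actual repository structure after cloning and adjust the paths and loading code accordingly.

```python
# Inspection sketch. The .pt extension and file layout are assumptions for
# illustration; adapt them to the actual repository structure after cloning.
from pathlib import Path
import torch

root = Path("Wan2.2-Syn-121x704x1280_32k")  # path to the cloned/downloaded dataset
latent_files = sorted(root.rglob("*.pt"))   # assumed serialized latent tensors
print(f"Found {len(latent_files)} candidate latent files")

if latent_files:
    sample = torch.load(latent_files[0], map_location="cpu")
    # Depending on how preprocessing serialized the data, this may be a single
    # tensor or a dict of tensors (e.g. video latents, text embeddings, first frame).
    if isinstance(sample, dict):
        for name, value in sample.items():
            shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
            print(name, shape)
    else:
        print("Latent shape:", tuple(sample.shape))
```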
## Citation
If you use the FastVideo Synthetic Wan2.2 dataset for your research, please cite our papers:
```bibtex
@article{zhang2025vsa,
title={VSA: Faster Video Diffusion with Trainable Sparse Attention},
author={Zhang, Peiyuan and Huang, Haofeng and Chen, Yongqi and Lin, Will and Liu, Zhengzhong and Stoica, Ion and Xing, Eric and Zhang, Hao},
journal={arXiv preprint arXiv:2505.13389},
year={2025}
}
@article{zhang2025fast,
title={Fast video generation with sliding tile attention},
author={Zhang, Peiyuan and Chen, Yongqi and Su, Runlong and Ding, Hangliang and Stoica, Ion and Liu, Zhengzhong and Zhang, Hao},
journal={arXiv preprint arXiv:2502.04507},
year={2025}
}
```