BrianChen1129 and nielsr (HF Staff) committed
Commit 5bb5968 · verified · 1 parent: ecdde34

Improve dataset card: Add library, tags, project page, abstract, and sample usage (#2)


- Improve dataset card: Add library, tags, project page, abstract, and sample usage (761dea76e0e5a2ec0f0f32556f5a910278f26608)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1):
  1. README.md (+19 -5)
README.md CHANGED
@@ -1,9 +1,14 @@
 ---
 license: apache-2.0
- task_categories:
- - text-to-video
 size_categories:
 - 10K<n<100K
+ task_categories:
+ - text-to-video
+ library_name: fastvideo
+ tags:
+ - fastvideo
+ - synthetic
+ - video-diffusion
 ---

 # FastVideo Synthetic Wan2.2 720P dataset
@@ -16,12 +21,14 @@ size_categories:
 </div>

 <div align="center">
- <a href="https://arxiv.org/pdf/2505.13389">Paper</a> |
- <a href="https://github.com/hao-ai-lab/FastVideo">Github</a>
+ <a href="https://arxiv.org/abs/2505.13389">Paper</a> |
+ <a href="https://github.com/hao-ai-lab/FastVideo">GitHub</a> |
+ <a href="https://hao-ai-lab.github.io/FastVideo">Project Page</a>
 </div>
 </div>

-
+ ## Abstract
+ Scaling video diffusion transformers (DiTs) is limited by their quadratic 3D attention, even though most of the attention mass concentrates on a small subset of positions. We turn this observation into VSA, a trainable, hardware-efficient sparse attention that replaces full attention at *both* training and inference. In VSA, a lightweight coarse stage pools tokens into tiles and identifies high-weight *critical tokens*; a fine stage computes token-level attention only inside those tiles, subject to a block computing layout to ensure hardware efficiency. This leads to a single differentiable kernel that trains end-to-end, requires no post-hoc profiling, and sustains 85% of FlashAttention3 MFU. We perform a large sweep of ablation studies and scaling-law experiments by pretraining DiTs from 60M to 1.4B parameters. VSA reaches a Pareto point that cuts training FLOPs by 2.53× with no drop in diffusion loss. Retrofitting the open-source Wan-2.1 model speeds up attention time by 6× and lowers end-to-end generation time from 31s to 18s with comparable quality. These results establish trainable sparse attention as a practical alternative to full attention and a key enabler for further scaling of video diffusion models.

 ## Dataset Overview
 - The prompts were randomly sampled from the [Vchitect_T2V_DataVerse](https://huggingface.co/datasets/Vchitect/Vchitect_T2V_DataVerse) dataset.
@@ -30,6 +37,13 @@ size_categories:
 - It includes all preprocessed latents required for the **Text-to-Video (T2V)** task (the first-frame image is also included).
 - The dataset is fully compatible with the [FastVideo](https://github.com/hao-ai-lab/FastVideo) repository and can be directly loaded and used without any additional preprocessing.

+ ## Sample Usage
+ To download this dataset, ensure you have Git LFS installed, then clone the repository:
+ ```bash
+ git lfs install
+ git clone https://huggingface.co/datasets/FastVideo/Wan2.2-Syn-121x704x1280_32k
+ ```
+ This dataset contains preprocessed latents ready for the Text-to-Video (T2V) task and is designed to be used directly with the [FastVideo repository](https://github.com/hao-ai-lab/FastVideo) without further preprocessing. Refer to the FastVideo [documentation](https://hao-ai-lab.github.io/FastVideo) for detailed instructions on loading and using the dataset for training or finetuning.

 If you use the FastVideo Synthetic Wan2.2 dataset for your research, please cite our paper:
 ```
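
For readers skimming the diff, the Abstract added in the second hunk compresses the VSA algorithm into two sentences: a coarse stage pools tokens into tiles and picks out critical tokens, and a fine stage attends only inside the selected tiles. The sketch below is a dense-mask emulation of that selection logic in PyTorch, not the paper's fused hardware-efficient kernel; the tile size and top-k values are hypothetical parameters, and `vsa_like_attention` is an illustrative name, not a FastVideo API.

```python
# Illustrative dense-mask emulation of the coarse-to-fine selection idea
# described in the VSA abstract -- NOT the paper's fused kernel.
import torch
import torch.nn.functional as F


def vsa_like_attention(q, k, v, tile=64, topk=8):
    """q, k, v: (batch, heads, seq, dim); seq must be divisible by tile."""
    b, h, s, d = q.shape
    n = s // tile  # number of tiles along the sequence
    assert s % tile == 0 and n >= topk

    # Coarse stage: mean-pool tokens into tiles and score tile pairs.
    qt = q.reshape(b, h, n, tile, d).mean(dim=3)        # (b, h, n, d)
    kt = k.reshape(b, h, n, tile, d).mean(dim=3)        # (b, h, n, d)
    tile_scores = qt @ kt.transpose(-1, -2) / d ** 0.5  # (b, h, n, n)

    # Keep the top-k highest-weight ("critical") key tiles per query tile.
    idx = tile_scores.topk(topk, dim=-1).indices        # (b, h, n, topk)
    tile_mask = torch.zeros_like(tile_scores, dtype=torch.bool)
    tile_mask.scatter_(-1, idx, True)

    # Fine stage: token-level attention restricted to the selected tiles.
    token_mask = tile_mask.repeat_interleave(tile, dim=-2)
    token_mask = token_mask.repeat_interleave(tile, dim=-1)  # (b, h, s, s)
    scores = q @ k.transpose(-1, -2) / d ** 0.5
    scores = scores.masked_fill(~token_mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v
```

Because every query tile keeps exactly `topk` key tiles, no row of the mask is entirely `-inf`, so the softmax stays well defined.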
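The Sample Usage section added in the third hunk covers only the Git LFS clone. For those who prefer the Hub API, an equivalent programmatic download might look like the sketch below; the `local_dir` value is illustrative and not part of the card.

```python
# Sketch: download the dataset through the Hugging Face Hub API instead of
# `git clone`. Requires `pip install huggingface_hub`. The local_dir path
# is illustrative, not prescribed by the dataset card.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="FastVideo/Wan2.2-Syn-121x704x1280_32k",
    repo_type="dataset",  # dataset repo, not a model repo
    local_dir="./Wan2.2-Syn-121x704x1280_32k",
)
print(f"Dataset downloaded to: {local_path}")
```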
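Because the card promises latents that load into FastVideo without preprocessing, it can be useful to see how the repository is laid out before downloading roughly 32k samples (per the repo name). Below is a minimal sketch using `huggingface_hub`; the extension tally is only a heuristic, and the authoritative layout is whatever FastVideo's preprocessing pipeline emits.

```python
# Sketch: inspect the repository layout without downloading anything.
# File extensions hint at how the preprocessed latents are stored; consult
# the FastVideo docs for the authoritative loading path.
from collections import Counter
from pathlib import PurePosixPath

from huggingface_hub import list_repo_files

files = list_repo_files(
    "FastVideo/Wan2.2-Syn-121x704x1280_32k",
    repo_type="dataset",
)
print(f"{len(files)} files in the repo")

# Tally extensions to see which storage formats are present.
ext_counts = Counter(PurePosixPath(f).suffix or "<none>" for f in files)
for ext, count in ext_counts.most_common():
    print(f"{ext}: {count}")
```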