---
license: cc-by-nc-sa-4.0
task_categories:
  - image-text-to-image
language:
  - en
pretty_name: icir
size_categories:
  - 100K<n<1M
---

# i-CIR Dataset (Hugging Face)

website | arXiv | GitHub

## About

i-CIR (Instance-Level Composed Image Retrieval) is a curated benchmark for composed image retrieval where each instance corresponds to a specific, visually indistinguishable object (e.g., a particular landmark). Each query combines an image of the instance with a text modification, and retrieval is evaluated against a database containing rich hard negatives (visual / textual / compositional).

*i-CIR illustration*

## Key stats

- Instances: 202
- Total images: ~750K
- Composed queries: 1,883
- Average database size per query: ~3.7K images
- Challenging hard negatives included per instance

## Dataset Structure

On Hugging Face, i-CIR is hosted as WebDataset shards for scalable/robust downloads and streaming.

```
icir/
β”œβ”€β”€ webdataset/
β”‚   β”œβ”€β”€ query/
β”‚   β”‚   β”œβ”€β”€ query-000000.tar
β”‚   β”‚   β”œβ”€β”€ query-000001.tar
β”‚   β”‚   └── ...
β”‚   └── database/
β”‚       β”œβ”€β”€ database-000000.tar
β”‚       β”œβ”€β”€ database-000001.tar
β”‚       └── ...
β”œβ”€β”€ annotations/
β”‚   β”œβ”€β”€ query_files.csv
β”‚   └── database_files.csv
β”œβ”€β”€ VERSION.txt
└── LICENSE
```
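
If you want to inspect the shard layout programmatically before downloading anything, here is a minimal sketch using `HfFileSystem` from `huggingface_hub`; the glob patterns simply mirror the tree above:

```python
# Minimal sketch: list the WebDataset shards on the Hub without downloading them.
# The glob patterns below just mirror the directory layout shown above.
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
query_shards = fs.glob("datasets/billpsomas/icir/webdataset/query/*.tar")
database_shards = fs.glob("datasets/billpsomas/icir/webdataset/database/*.tar")
print(len(query_shards), "query shards |", len(database_shards), "database shards")
```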

### Annotations format

- `query_files.csv`: each row is `(image_path, text_query, instance_id)`
- `database_files.csv`: each row is `(image_path, text_query, instance_id)`; the text field may be unused for database features, depending on the pipeline (see the loading sketch below)
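
A minimal sketch for loading the annotation CSVs with pandas, assuming the files carry headers matching the columns above and that the dataset was downloaded to `./data/icir` as in the Download section:

```python
# Minimal sketch: read the annotation CSVs.
# Assumes headers matching the columns listed above and a local copy at ./data/icir.
import pandas as pd

queries = pd.read_csv("./data/icir/annotations/query_files.csv")
database = pd.read_csv("./data/icir/annotations/database_files.csv")

print(queries.columns.tolist())   # expected: image_path, text_query, instance_id
print(len(queries), "queries |", len(database), "database entries")
```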

Inside each WebDataset sample, we store:

- an image (`.jpg`/`.png`/...)
- a JSON payload with the keys `img_path`, `text`, `instance` (see the iteration sketch below)
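
To iterate over decoded samples, here is a minimal sketch using the third-party `webdataset` package on locally downloaded query shards; add further image extensions to `to_tuple` if your shards use them:

```python
# Minimal sketch: iterate decoded (image, metadata) samples from the query shards.
# Assumes `pip install webdataset` and shards downloaded to ./data/icir.
import glob
import webdataset as wds

shards = sorted(glob.glob("./data/icir/webdataset/query/query-*.tar"))
dataset = (
    wds.WebDataset(shards)
    .decode("pil")                 # decode images to PIL, JSON payloads to dicts
    .to_tuple("jpg;png", "json")   # yield (image, metadata) per sample
)

for image, meta in dataset:
    print(meta["img_path"], "|", meta["text"], "|", meta["instance"], "|", image.size)
    break
```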

## Download

One-liner download (recommended):

```bash
pip install -U huggingface_hub
huggingface-cli download billpsomas/icir --repo-type dataset --local-dir ./data/icir --revision main
```

Python (equivalent):

```python
from huggingface_hub import snapshot_download

snapshot_download(repo_id="billpsomas/icir", repo_type="dataset", local_dir="./data/icir", revision="main")
```
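
If you only need part of the repository (for example, just the annotations plus the query shards), `snapshot_download` also accepts `allow_patterns`; a minimal sketch, with patterns that follow the layout above:

```python
# Minimal sketch: partial download via allow_patterns (patterns follow the layout above).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="billpsomas/icir",
    repo_type="dataset",
    local_dir="./data/icir",
    revision="main",
    allow_patterns=["annotations/*", "webdataset/query/*"],
)
```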

## Using the dataset (feature extraction)

You can extract features directly from the WebDataset shards with the `create_features.py` script from the GitHub repository (no need to unpack the shards into image folders):

```bash
python3 create_features.py \
  --dataset icir \
  --icir_source wds \
  --icir_wds_root ./data/icir \
  --backbone clip \
  --batch 512 \
  --gpu 0
```
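
If you prefer not to use `create_features.py`, the sketch below shows the same idea by hand: stream the database shards with the third-party `webdataset` package and encode images with a CLIP backbone from `transformers`. This is an illustrative stand-in, not the repository's pipeline, and the checkpoint name is just an example:

```python
# Illustrative sketch only (not the repository's create_features.py):
# stream database shards and extract CLIP image features.
import glob
import torch
import webdataset as wds
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

shards = sorted(glob.glob("./data/icir/webdataset/database/database-*.tar"))
dataset = wds.WebDataset(shards).decode("pil").to_tuple("jpg;png", "json")

features, paths = [], []
with torch.no_grad():
    for image, meta in dataset:
        inputs = processor(images=image, return_tensors="pt").to(device)
        feat = model.get_image_features(**inputs)          # (1, 512) for ViT-B/32
        features.append(torch.nn.functional.normalize(feat, dim=-1).cpu())
        paths.append(meta["img_path"])

features = torch.cat(features)  # L2-normalized image features, ready for retrieval
```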

## License

The dataset is released under CC BY-NC-SA 4.0. Please see LICENSE for details.


## Citation

If you use i-CIR in your research, please cite:

```bibtex
@inproceedings{psomas2025instancelevel,
    title={Instance-Level Composed Image Retrieval},
    author={Bill Psomas and George Retsinas and Nikos Efthymiadis and Panagiotis Filntisis and Yannis Avrithis and Petros Maragos and Ondrej Chum and Giorgos Tolias},
    booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
    year={2025}
}
```