JohnnyZeppelin committed (verified) · Commit 2a1fad0 · 1 parent: 0f745b1

Update README.md

Files changed (1): README.md (+190 −3)
---
license: mit
---

# HC-Bench

**HC-Bench** is a compact, multi-part image benchmark for evaluating recognition and prompting robustness, especially in **hidden-content** scenes. It contains:

- **object/** — 56 base images and 56 *hidden* variants of the same lemmas, plus prompts and metadata.
- **text/** — 56 English and 56 Chinese lemma–description pairs, with matching PNG renderings (28 per script).
- **wild/** — 53 in-the-wild images for additional generalization checks.

---
## Repository structure

```
HC-Bench/
├─ object/
│  ├─ base/                      # 56 base images (7 types × 8 lemmas)
│  ├─ hidden/                    # 56 hidden-content variants (same lemmas)
│  ├─ image_base.txt             # 7 types and their 8 lemmas each
│  ├─ image_generate_prompts.txt # per-lemma scene prompts used for generation
│  └─ lemmas_descriptions.json   # [{Type, Lemma, Description}] × 56
├─ text/
│  ├─ Latin/                     # 28 English PNGs
│  ├─ Chinese/                   # 28 Chinese PNGs
│  ├─ English_text.json          # 56 entries (Type, Length, Rarity, Lemma, Description)
│  └─ Chinese_text.json          # 56 entries (Type, Length, Rarity, Lemma, Description)
└─ wild/                         # 53 PNGs
```

---
## Contents

### `object/`
- **`base/`**: Canonical image per lemma (e.g., `Apple.jpg`, `Einstein.png`).
- **`hidden/`**: Composite/camouflaged image for the *same* lemma set (e.g., `apple.png`, `einstein.png`).
- **`image_base.txt`**: The 7 high-level types (Humans, Species, Buildings, Cartoon, Furniture, Transports, Food) and their 8 lemmas each.
- **`image_generate_prompts.txt`**: Per-lemma prompts used to compose/generate scenes (e.g., *"A monorail cutting through a futuristic city with elevated walkways"* for `notredame`).
- **`lemmas_descriptions.json`**: Minimal metadata with `{Type, Lemma, Description}`, aligned 1:1 with the 56 lemmas.

### `text/`
- **`Latin/`** & **`Chinese/`**: 28 images each (56 total).
- **`English_text.json`** & **`Chinese_text.json`**: 56-entry lists pairing lemmas with descriptions in the two languages. (Both files also carry the extra fields `Length` and `Rarity` for flexibility.)

### `wild/`
- 53 natural/urban scenes for robustness and transfer evaluation.

---
## Quick start (🤗 Datasets)

> HC-Bench uses the **ImageFolder** ("imagefolder") layout. Class labels are inferred from directory names when present (e.g., `base`, `hidden`). If you prefer raw images without labels, pass `drop_labels=True`.

### Load **object/base** and **object/hidden**
```python
from datasets import load_dataset

base = load_dataset(
    "imagefolder",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/object/base/*",
    split="train",
    drop_labels=True,  # drop automatic label inference
)

hidden = load_dataset(
    "imagefolder",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/object/hidden/*",
    split="train",
    drop_labels=True,
)
```
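Each loaded example is a dict whose `image` key holds a `PIL.Image`. A minimal inspection pass might look like the sketch below; it uses a locally created stand-in image instead of the remote files so it runs offline, but the same code applies to any example from `base` or `hidden`:

```python
from PIL import Image

# Stand-in for one dataset example; the real datasets yield
# dicts shaped like {"image": <PIL.Image>}.
example = {"image": Image.new("RGB", (512, 512), color="white")}

img = example["image"].convert("RGB")  # normalize the mode before feeding a model
width, height = img.size
print(width, height, img.mode)  # -> 512 512 RGB
```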

### Load **wild/**

```python
wild = load_dataset(
    "imagefolder",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/wild/*",
    split="train",
    drop_labels=True,
)
```

### Load the **JSON** metadata (English/Chinese)

```python
from datasets import load_dataset

en = load_dataset(
    "json",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/text/English_text.json",
    split="train",
)
zh = load_dataset(
    "json",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/text/Chinese_text.json",
    split="train",
)
```

> Docs reference: `load_dataset` for JSON files, and ImageFolder for image datasets.

---
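Each JSON entry carries `Type`, `Length`, `Rarity`, `Lemma`, and `Description`, so simple slicing and grouping works out of the box. An offline sketch of grouping lemmas by `Type` (the two records below are hypothetical placeholders for rows of the downloaded `en` split):

```python
from collections import defaultdict

# Hypothetical records mirroring the English_text.json schema.
entries = [
    {"Type": "word", "Length": 5, "Rarity": "common",
     "Lemma": "apple", "Description": "a round fruit"},
    {"Type": "phrase", "Length": 11, "Rarity": "rare",
     "Lemma": "hello world", "Description": "a greeting"},
]

# Collect lemmas under their Type label.
by_type = defaultdict(list)
for e in entries:
    by_type[e["Type"]].append(e["Lemma"])

print(dict(by_type))  # -> {'word': ['apple'], 'phrase': ['hello world']}
```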

## Pairing base/hidden with metadata

Filenames differ in casing and spacing between `base/` (e.g., `Apple.jpg`) and `hidden/` (e.g., `apple.png`). Use `object/lemmas_descriptions.json` as the canonical list of 56 lemmas and join on a normalized `Lemma`:

```python
import os
import re

from datasets import load_dataset

def to_lemma(name):
    """Normalize a filename or lemma to a lowercase, whitespace-free key."""
    stem = os.path.splitext(os.path.basename(name))[0]
    return re.sub(r"\s+", "", stem).lower()

# 1) Canonical lemma list
lemmas = load_dataset(
    "json",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/object/lemmas_descriptions.json",
    split="train",
).to_pandas()

# 2) Load the images and build (lemma -> image) maps
base_ds = load_dataset(
    "imagefolder",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/object/base/*",
    split="train",
    drop_labels=True,
)
hidden_ds = load_dataset(
    "imagefolder",
    data_files="https://huggingface.co/datasets/JohnnyZeppelin/HC-Bench/resolve/main/object/hidden/*",
    split="train",
    drop_labels=True,
)

base_map = {to_lemma(x["image"].filename): x["image"] for x in base_ds}
hidden_map = {to_lemma(x["image"].filename): x["image"] for x in hidden_ds}

# 3) Join on the normalized lemma (same normalization on both sides)
lemmas["base_image"] = lemmas["Lemma"].apply(lambda L: base_map.get(to_lemma(L)))
lemmas["hidden_image"] = lemmas["Lemma"].apply(lambda L: hidden_map.get(to_lemma(L)))
```
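After the join, any lemma whose normalized name matches no filename ends up with a null image, so flagging nulls is a cheap sanity check. A sketch on a small hypothetical frame (standing in for the real `lemmas` frame produced above):

```python
import pandas as pd

# Hypothetical post-join frame: one matched lemma, one unmatched.
lemmas = pd.DataFrame({
    "Lemma": ["Apple", "Einstein"],
    "base_image": ["<PIL.Image>", None],
    "hidden_image": ["<PIL.Image>", None],
})

# Lemmas that failed to match a base image.
missing = lemmas.loc[lemmas["base_image"].isna(), "Lemma"].tolist()
print(missing)  # -> ['Einstein']
```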

---

## Statistics

* `object/base`: 56 images
* `object/hidden`: 56 images
* `text/Latin`: 28 images
* `text/Chinese`: 28 images
* `wild`: 53 images

---

## Citation

If you use **HC-Bench**, please cite:

```bibtex
@misc{li2025semvinkadvancingvlmssemantic,
  title={SemVink: Advancing VLMs' Semantic Understanding of Optical Illusions via Visual Global Thinking},
  author={Sifan Li and Yujun Cai and Yiwei Wang},
  year={2025},
  eprint={2506.02803},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2506.02803},
}
```