---
license: bigscience-openrail-m
size_categories:
- 1K<n<10K
---

# rerandomization-benchmarks

Replication dataset for the benchmark and diagnostic analyses in
**Goldstein, Jerzak, Kamat & Zhu (2025), _“fastrerandomize: Fast Rerandomization Using Accelerated Computing”_.**

This repository hosts the aggregated simulation and benchmark results produced by the scripts:

- `FastSRR_VaryNAndD.R` (simulation & benchmarking)
- `FastRR_PlotFigs.R` (aggregation & plotting)

from the accompanying software repository for the `fastrerandomize` R package.

---

## Project & Paper Links

- **Paper (preprint):** <https://arxiv.org/abs/2501.07642>
- **Software repository:** <https://github.com/cjerzak/fastrerandomize-software>
- **Package name:** `fastrerandomize` (R)

---

## What’s in this dataset?

The dataset contains **simulation-based benchmark results** used to compare:

- Different **hardware backends**
  - `M4-CPU` (Apple M4 CPU, via JAX/XLA)
  - `M4-GPU` (Apple M4 GPU / METAL)
  - `RTX4090` (NVIDIA CUDA GPU)
  - `BaseR` (non-accelerated R baseline)
  - `jumble` (the `jumble` package as an alternative rerandomization implementation)

- Different **problem scales**
  - Sample sizes: `n_units ∈ {10, 100, 1000}`
  - Covariate dimensions: `k_covars ∈ {10, 100, 1000}`
  - Monte Carlo draw budgets: `maxDraws ∈ {1e5, 2e5}`
  - Exact vs. approximate linear algebra: `approximate_inv ∈ {TRUE, FALSE}`

- Different **rerandomization specifications**
  - Acceptance probability targets (via `randomization_accept_prob`)
  - Use or non-use of fiducial intervals (`findFI`)

Each row corresponds to a particular Monte Carlo configuration and summarizes:

1. **Design & simulation settings** (e.g., `n_units`, `k_covars`, `maxDraws`, `treatment_effect`)
2. **Performance metrics** (e.g., runtime for randomization generation and testing)
3. **Statistical diagnostics** (e.g., p-value behavior, coverage, FI width)
4. **Hardware & system metadata** (CPU model, number of cores, OS, etc.)

These data were used to:

- Produce the **runtime benchmark figures** (CPU vs. GPU vs. baseline R / `jumble`)
- Compute **speedup factors** and **time-reduction summaries**
- Feed into macros such as `\FRRMaxSpeedupGPUvsBaselineOverall` and `\FRRGPUVsCPUTimeReductionDthousandPct`, which are read from `./Figures/bench_macros.tex` in the paper

---

## Files & Structure

*(Adjust this section to match exactly what you upload to Hugging Face; here is a suggested structure.)*

- `VaryNAndD_main.csv`
  Aggregated benchmark/simulation results across all configurations used in the paper.

- `VaryNAndD_main.parquet` (optional)
  Parquet version of the same table (faster to load in many environments).

- `CODE/` (optional, if you choose to include it)
  - `FastSRR_VaryNAndD.R`
  - `FastRR_PlotFigs.R`

  Exact R scripts used to generate the raw CSV files and figures.

---

## Main Columns (schema overview)

Below is an overview of the most important columns you will encounter in `VaryNAndD_main.*`.
Names are taken directly from the R code (especially the `res <- as.data.frame(cbind(...))` section in `FastSRR_VaryNAndD.R` and the subsequent processing in `FastRR_PlotFigs.R`).

### Core design variables

- `treatment_effect` – Constant treatment effect used in the simulation (e.g., `0.1`).
- `SD_inherent` – Baseline SD of the potential outcomes (`SD_inherent` in `GenerateCausalData`).
- `n_units` – Total number of experimental units.
- `k_covars` – Number of covariates.
- `maxDraws` – Maximum number of candidate randomizations drawn (e.g., `1e5`, `2e5`).
- `findFI` – Logical (`TRUE`/`FALSE`): whether fiducial intervals were computed.
- `approximate_inv` – Logical (`TRUE`/`FALSE`): whether approximate-inverse / stabilized linear algebra was used.
- `Hardware` – Hardware/implementation label, recoded in `FastRR_PlotFigs.R` to:
  - `"M4-CPU"` (was `"CPU"`)
  - `"M4-GPU"` (was `"METAL"`)
  - `"RTX4090"` (was `"NVIDIA"`)
  - `"jumble"` (was `"AltPackage"`)
  - `"BaseR"` (pure R baseline)
- `monte_i` – Monte Carlo replication index.

### Rerandomization configuration

- `prob_accept` – Target acceptance probability (`randomization_accept_prob`).
- `accept_prob` – Same or related acceptance-probability field (used within the plotting code).

### Randomization-test & FI summaries

These are typically aggregated across Monte Carlo replications and/or over covariate-dimension strata:

- `p_value` – Mean p-value across replications, by `k_covars` and acceptance probability.
- `p_value_se` – Standard error of the above p-value estimates.
- `min_p_value` – Average minimum achievable p-value (`1/(1 + n_accepted)`), reflecting how many accepted randomizations were available.
- `number_successes` – Average number of accepted randomizations (per configuration).
- `tau_hat_mean` – Mean estimated treatment effect across replications.
- `tau_hat_var` – Variance of the estimated treatment effect across replications.
- `FI_lower_vec`, `FI_upper_vec` – Mean lower/upper endpoints of the fiducial intervals.
- `FI_width` – Median width of the fiducial interval (where available).
- `truth_covered` – Average indicator for whether the interval covered the true treatment effect.

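The `min_p_value` resolution limit above follows directly from its definition; here is a minimal illustrative sketch (not code from the replication scripts):

```python
def min_p_value(n_accepted: int) -> float:
    """Smallest attainable randomization-test p-value when the observed
    assignment is ranked among n_accepted accepted randomizations:
    min_p = 1 / (1 + n_accepted)."""
    return 1.0 / (1.0 + n_accepted)

# With 999 accepted randomizations, p-values bottom out at 0.001;
# with only 9, nothing below 0.1 is attainable.
print(min_p_value(999))  # → 0.001
print(min_p_value(9))    # → 0.1
```

This is why configurations with few accepted randomizations (small `number_successes`) cannot produce small p-values, regardless of the true effect.
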
### Estimator-selection diagnostics (acceptance-prob “minimization”)

These summarize how well different strategies for choosing the optimal acceptance probability perform:

- `colMeans_mean_p_value_matrix`, `colMeans_median_p_value_matrix`, `colMeans_modal_p_value_matrix` –
  Average p-value summaries used to define estimators of the “best” acceptance probability.

- `bias_select_p_via_mean`, `rmse_select_p_via_mean` –
  Bias and RMSE when selecting the acceptance probability based on the mean p-value.

- `bias_select_p_via_median`, `rmse_select_p_via_median` –
  Bias and RMSE when selecting the acceptance probability based on the median p-value.

- `bias_select_p_via_mode`, `rmse_select_p_via_mode` –
  Bias and RMSE when selecting the acceptance probability based on the modal p-value.

- `bias_select_p_via_baseline`, `rmse_select_p_via_baseline` –
  Bias and RMSE of a naive baseline strategy (e.g., choosing the acceptance probability at random), used as a comparison.

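As a concrete illustration of the “select via mean” strategy described above, the chosen acceptance probability is the candidate whose mean p-value is smallest. The candidate grid and p-value summaries below are synthetic placeholders, not values from the dataset:

```python
# Hypothetical mean p-values indexed by candidate acceptance probability
# (synthetic numbers; the real summaries live in
# colMeans_mean_p_value_matrix and its median/modal counterparts).
mean_p_by_accept_prob = {
    0.001: 0.012,
    0.010: 0.020,
    0.100: 0.041,
}

# "Select via mean": pick the candidate with the smallest mean p-value.
# The median and modal strategies swap in the other summaries.
best_accept_prob = min(mean_p_by_accept_prob, key=mean_p_by_accept_prob.get)
print(best_accept_prob)  # → 0.001
```

The `bias_*` and `rmse_*` columns then measure how far such selections fall from the truly optimal acceptance probability across Monte Carlo replications.
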
### Timing and hardware metadata

Timing quantities are used to produce the benchmark plots in the paper:

- `t_GenerateRandomizations` – Time (seconds) spent generating randomization pools.
- `t_RandomizationTest` – Time (seconds) spent on randomization-based inference.
- `randtest_time` – Duplicated / convenience version of `t_RandomizationTest` in some contexts.
- `sysname`, `machine`, `hardware_version` – OS- and machine-level metadata (from `Sys.info()`).
- `nCores` – Number of CPU cores from `benchmarkme::get_cpu()`.
- `cpuModel` – CPU model name from `benchmarkme::get_cpu()`.

> **Note:** Because the scripts were developed iteratively, some columns may appear duplicated or under slightly redundant names (e.g., multiple `randtest_time`-like fields). These duplicates are harmless for replicating the paper’s figures; users may drop redundant columns as needed.

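The timing columns are the inputs to the paper’s speedup summaries. A minimal sketch of that computation, on a synthetic table that only mimics the schema (the real values live in `VaryNAndD_main.csv`, and the timings below are made up):

```python
import pandas as pd

# Synthetic timings mimicking the documented columns; replace with
# pd.read_csv("VaryNAndD_main.csv") to work with the real data.
bench = pd.DataFrame({
    "Hardware": ["BaseR", "M4-GPU", "RTX4090"],
    "t_GenerateRandomizations": [120.0, 6.0, 3.0],  # seconds (illustrative)
})

# Speedup factor = baseline time / accelerated time.
baseline = bench.loc[bench["Hardware"] == "BaseR",
                     "t_GenerateRandomizations"].iloc[0]
bench["speedup_vs_BaseR"] = baseline / bench["t_GenerateRandomizations"]
print(bench)
```

On real data, the same ratio would be taken within matching `(n_units, k_covars, maxDraws)` configurations before summarizing.
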
---

## How to use the dataset

### In Python (via `datasets`)

```python
from datasets import load_dataset

ds = load_dataset("YOUR_USERNAME/rerandomization-benchmarks", split="train")
print(ds)
print(ds.column_names)
```

Or directly with `pandas`:

```python
import pandas as pd

df = pd.read_csv("VaryNAndD_main.csv")
df.head()
```

### In R

```r
library(data.table)

bench <- fread("VaryNAndD_main.csv")
str(bench)

# Example: reproduce summaries by hardware and problem size
bench[, .(
  mean_t_generate = mean(t_GenerateRandomizations, na.rm = TRUE),
  mean_t_test     = mean(t_RandomizationTest, na.rm = TRUE)
), by = .(Hardware, n_units, k_covars, maxDraws, approximate_inv)]
```

You can then:

* Recreate runtime comparisons across hardware platforms.
* Explore how acceptance probability, dimension, and sample size interact.
* Use the timing information as inputs for your own design/planning calculations.

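For users working in Python rather than R, the `data.table` summary above translates to a pandas `groupby`. Shown here on a tiny synthetic table carrying the documented column names (swap in `pd.read_csv("VaryNAndD_main.csv")` for the real data):

```python
import pandas as pd

# Tiny synthetic stand-in with the documented column names.
bench = pd.DataFrame({
    "Hardware": ["BaseR", "BaseR", "RTX4090", "RTX4090"],
    "n_units": [100, 100, 100, 100],
    "k_covars": [10, 10, 10, 10],
    "maxDraws": [1e5, 1e5, 1e5, 1e5],
    "approximate_inv": [False, False, False, False],
    "t_GenerateRandomizations": [10.0, 12.0, 0.5, 0.7],
    "t_RandomizationTest": [4.0, 6.0, 0.2, 0.4],
})

# pandas analogue of the data.table aggregation above.
summary = (
    bench.groupby(
        ["Hardware", "n_units", "k_covars", "maxDraws", "approximate_inv"],
        as_index=False,
    ).agg(
        mean_t_generate=("t_GenerateRandomizations", "mean"),
        mean_t_test=("t_RandomizationTest", "mean"),
    )
)
print(summary)
```

Mean times (like pandas/data.table means generally) silently average over any configuration columns you leave out of the grouping keys, so keep all design columns in the `by`/`groupby` list when comparing hardware fairly.
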
---

## Citation

If you use this dataset, **please cite the main paper**:

```bibtex
@misc{goldstein2025fastrerandomizefastrerandomizationusing,
  title         = {fastrerandomize: Fast Rerandomization Using Accelerated Computing},
  author        = {Rebecca Goldstein and Connor T. Jerzak and Aniket Kamat and Fucheng Warren Zhu},
  year          = {2025},
  eprint        = {2501.07642},
  archivePrefix = {arXiv},
  primaryClass  = {stat.CO},
  url           = {https://arxiv.org/abs/2501.07642}
}
```

If you refer specifically to this Hugging Face dataset (e.g., for meta-analysis or benchmarking), you may also add a line such as:

> “We use the `rerandomization-benchmarks` dataset (Hugging Face) accompanying Goldstein et al. (2025).”

---

## ⚖️ License & Terms of Use

* The **code** in the associated repository is licensed under **GPL-3.0**.
* The **data** in this dataset are simulation outputs derived from that code and are provided for **research and educational use**.

Please open an issue in the GitHub repository or contact the corresponding author if you have questions about reuse in other contexts.

---

## Contact

For questions about the paper, software, or dataset:

* Corresponding author: **Connor T. Jerzak** – [[email protected]](mailto:[email protected])
* Issues & contributions: please use the GitHub repository issues page for `fastrerandomize`.

---