Dataset schema (16 columns):

- id: string (length 10)
- url: string (length 42)
- title: string (length 5 to 214)
- average_rating: float64 (range -1 to 8.5)
- average_confidence: float64 (range -1 to 5)
- ratings: list (length 0 to 9)
- confidences: list (length 0 to 9)
- reviewers_num: int64 (range 0 to 9)
- keywords: list (length 1 to 42)
- abstract: string (length 26 to 4.31k)
- tldr: string (length 0 to 250)
- primary_area: string (21 classes)
- pdf_url: string (length 40)
- submission_date: timestamp[s] (2025-09-01 19:59:51 to 2025-09-20 20:18:08)
- total_reviews: int64 (range 0 to 18)
- reviews: list (length 0 to 9)
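For orientation, a minimal sketch of loading and iterating over a dataset with this schema using the Hugging Face `datasets` library; the repository path below is a hypothetical placeholder, not this dataset's actual name.

```python
# Minimal sketch, assuming the dataset is hosted on the Hugging Face Hub.
# The repository path is a hypothetical placeholder.
from datasets import load_dataset

ds = load_dataset("user/openreview-iclr2026", split="train")  # hypothetical path

row = ds[0]
print(row["title"], row["average_rating"], row["reviewers_num"])
for review in row["reviews"]:  # each review is a dict matching the fields below
    print(review["reviewer_name"], review["rating"], review["confidence"])
```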
id: vxkzW4ljeX
url: https://openreview.net/forum?id=vxkzW4ljeX
title: A universal compression theory: Lottery ticket hypothesis and superpolynomial scaling laws
average_rating: 5.5
average_confidence: 3
ratings: [4, 6, 8, 4]
confidences: [3, 3, 2, 4]
reviewers_num: 4
keywords: ["Neural scaling law", "model compression", "lottery ticket hypothesis", "deep learning theory"]
abstract: When training large-scale models, the performance typically scales with the number of parameters and the dataset size according to a slow power law. A fundamental theoretical and practical question is whether comparable performance can be achieved with significantly smaller models and substantially less data. In this work, we provide a positive and constructive answer. We prove that a generic permutation-invariant function of $d$ objects can be asymptotically compressed into a function of $\operatorname{polylog} d$ objects with vanishing error. This theorem yields two key implications: (Ia) a large neural network can be compressed to polylogarithmic width while preserving its learning dynamics; (Ib) a large dataset can be compressed to polylogarithmic size while leaving the loss landscape of the corresponding model unchanged. (Ia) directly establishes a proof of the \textit{dynamical} lottery ticket hypothesis, which states that any ordinary network can be strongly compressed such that the learning dynamics and result remain unchanged. (Ib) shows that a neural scaling law of the form $L\sim d^{-\alpha}$ can be boosted to an arbitrarily fast power law decay, and ultimately to $\exp(-\alpha' \sqrt[m]{d})$.
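To make the claimed boost concrete, a one-step derivation under the abstract's own assumption that a dataset of size $d$ compresses to $d' \sim (\log d)^m$ with the loss unchanged:

```latex
% Sketch of the scaling-law boost, assuming lossless compression d -> d' ~ (log d)^m.
\[
  L(d') = L(d) \sim d^{-\alpha} = \exp(-\alpha \log d),
  \qquad \log d \sim \sqrt[m]{d'}
  \;\Longrightarrow\;
  L(d') \sim \exp\bigl(-\alpha' \sqrt[m]{d'}\bigr),
\]
% which is the stretched-exponential form quoted in the abstract.
```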
tldr: We prove that permutation symmetry enables polylogarithmic compression of neural networks and datasets, thus establishing the dynamical lottery ticket hypothesis and boosting neural scaling laws
primary_area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
pdf_url: https://openreview.net/pdf?id=vxkzW4ljeX
submission_date: 2025-09-19T05:07:02
total_reviews: 4
reviews:
[
{
"id": "vvIZ8RIzRX",
"forum": "vxkzW4ljeX",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14172/Reviewer_YxjE",
"reviewer_name": "Reviewer_YxjE",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This work addresses dataset and neural network compression from a moment-matching perspective. Under certain assumptions, this approach establishes novel compression rates and power laws for these tasks. It also enables the boosting of neural power laws, which describe performance versus dataset size dynamics. A number of low-dimensional experiments are conducted to support the claims.",
"strengths": "The work is mathematically sound and easy to follow. The text is clear, supported by a decent and concise background overview. The authors provide both rigorous derivations and intuitive explanations for their theoretical results, and the experiments support their claims across a number of settings.",
"weaknesses": "My main criticism revolves around the **curse of dimensionality**, which the authors underaddress several times throughout the paper.\n\n1. Both (9) and (10) have dimensionality-dependent exponents, which explode when $m \\to \\infty$ given that other constants are fixed. This is later combated by selecting $k > (1 = \\sigma^{-1}) m - 1$, which, in turn, explodes $\\binom{m+k}{k}$. Through some trickery in Theorem 7 (unfortunately, due to time constraints, I was not able to fully verify the math), the authors miraculously balance these issues by attaining a poly-log compression rate.\n\n That said, one might expect that substituting $d'$ from (45) into (44) should yield errors which are (asymptotically) under some fixed $\\omega$. However, when done numerically for $m=10$, $\\rho=0.1$, $\\omega=0.1$, and any multiplicative constant in (45), I always get an exploding upper bound on the compression error. Reasonable variations of $\\rho$ and $\\omega$ do not alleviate the issue, which only worsens as $m$ grows.\n\n2. Since $k$ in Theorem 7 grows with increasing $d$, $f$ is required to be increasingly smooth. While most contemporary NNs are $\\infty$-smooth almost everywhere, their numerical smoothness degrades with increasing dimensionality or a decreasing learning rate [1]. In practice, this will take a toll on the derived bounds in terms of asymptotic constants or other parameters (e.g., $\\rho$ in (44)). This problem remains unaddressed in the main text.\n\n3. The experimental setups are toy, with the dimensionality being $4-12$ orders of magnitude lower than in real-world tasks. In my opinion, this might lead to the following problems:\n - While showing decent performance in low-dimensional regimes, the proposed compression method might entail overfitting in high-dimensional setups. Stochastic gradient descent (SGD) is known to apply implicit regularization during training [2], thus selecting less overfitting solutions. Your method, however, might \"overcompress\" a NN/dataset: among all solutions, a non-generalizable one is selected (train error or even dynamics are the same, but test error is not).\n - It is known that some problems in ML have exponential (in dimensionality) sample complexity (e.g., density estimation). Your result, however, suggests that these problems are also log-exponential in dimensionality (Theorem 7 applied to dataset compression) given the train error is preserved. The only logical conclusion I can arrive at is that such compression almost always entails overfitting when considering complex problems.\n\n4. While the authors briefly mention the manifold hypothesis in Section 7, it is not clear how one can use it to improve the method. Moment matching is agnostic to manifolds: i.e., it generally cannot capture such intricate structures. Therefore, another manifold learning strategy must be employed beforehand to decrease the dimensionality. Such a strategy typically requires the full dataset, as manifold learning is usually of exponential sample complexity.\n\n[1] Cohen et al. \"Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability\". Proc. of ICLR 2021.\n\n[2] Smith et al. \"On the Origin of Implicit Regularization in Stochastic Gradient Descent\". Proc. of ICLR 2021.\n\n**Minor issues:**\n\n1. Broken reference in line 190: \"Appendix ??\"",
"questions": "1. Can you, please, provide additional experiments (e.g., for high dataset dimensionality or low sampling sizes) proving that your method avoids overfitting?\n2. I kindly ask to address my concerns in Weakness 1. In particular, I am interested in the numerical verification of the bounds provided.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T18:58:28",
"modification_date": "2025-11-12T13:17:10",
"review_url": "https://openreview.net/forum?id=vxkzW4ljeX¬eId=vvIZ8RIzRX",
"license": "CC BY 4.0"
},
{
"id": "iCi3cG9IVu",
"forum": "vxkzW4ljeX",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14172/Reviewer_1dtD",
"reviewer_name": "Reviewer_1dtD",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 4,
"presentation": 4,
"summary": "The paper proves a universal compression theorem, showing that almost any symmetric function of $d$ elements can be compressed to a function with $O({\\rm polylog}$ (d)) elements losslessly. The theory leads to two key applications. First is the dynamical lottery ticket hypothesis, proving that large networks can be compressed to polylogarithmic width while preserving their training dynamics. Second is dataset compression, demonstrating that neural scaling laws can be theoretically improved from power-law to stretched-exponential decay.",
"strengths": "- The paper delivers a rigorous theoretical result that proves the dynamical lottery ticket hypothesis by showing that large networks can be compressed while preserving their original training dynamics.\n- Provides a generalized compression theory with broad applicability across diverse domains (e.g., dataset and model compression), demonstrating strong theoretical versatility and significant potential for cross-domain impact.\n- Establishes clear practical advantages, such as improved scaling laws and model compression, that are well grounded in the proposed theoretical framework.",
"weaknesses": "- The paper lacks a thorough discussion on the applicability of the proposed theory to complex neural architectures such as Transformer blocks, which integrate linear projections, attention mechanisms, and normalization layers.\n- There seems to be a missing reference link to the Appendix at line 190 on page 4 (“Appendix ??”).",
"questions": "- The model assumes neuron permutation symmetry. Does the assumption is applicable to complex modules in neural networks, such as Transformer block?\n- In experiments such as Figure 3 or 4, how much real computation time does the proposed compression take?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T17:45:33",
"modification_date": "2025-11-12T13:17:10",
"review_url": "https://openreview.net/forum?id=vxkzW4ljeX¬eId=iCi3cG9IVu",
"license": "CC BY 4.0"
},
{
"id": "BvtKM40N8v",
"forum": "vxkzW4ljeX",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14172/Reviewer_ui4S",
"reviewer_name": "Reviewer_ui4S",
"rating": 8,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The paper introduces the universal compression theorem as a step towards the dynamical lottery ticket hypothesis (LTH), which claims that in a dense network there exists a subnetwork, which when trained in isolation exhibits the same training dynamics as the original one. The theorem states (informally) that a permutation-invariant function of $d$ variables each of dimensionality $m$ can be asymptotically compressed to a function of $O(\\text{polylog } d)$ variables. The authors argue that, because many model / dataset objects are symmetric in parameters / datapoints, these results imply polylog-rate network and dataset compression under the assumptions of the theorem. Another implication of polylog compression is the scaling law $L \\approx L_0 + C d ^{-\\alpha}$ changing from power law form to stretched-exponential form $L \\approx L_0 + \\exp (- \\alpha’ \\sqrt[m]{d})$, both for model and dataset size.",
"strengths": "1. The paper provides theoretical grantees on asymptotic polylogarithmic compression for symmetrical functions. The authors provide Algorithm 1 for compression of symmetric functions using moment-matching and validate it numerically.\n2. An important feature is the universality of the result: the implications of the theorem include both neural networks and datasets.\n3. A major practical consequence of the work is the potential speed up guarantees on the power-law scaling laws, which are known to \"be slow\", i.e. have small power exponentials.\n4. Although the main result is theoretical, the authors back each claim with numerical experiments: they show on a synthetic function that compression error drops with in agreement with the theoretical bound (Fig. 2); that training dynamics on a compressed dataset follows training on the full dataset (Fig. 3); training performances of full and compressed models are identical to support dynamical LTH (Fig. 4); and compressing a network or dataset leads to a larger scaling law exponent (Fig. 5). These comprehensive validations neatly complement the theoretical backbone of the paper.",
"weaknesses": "1. Further empirical evaluation would strengthen this work, as the authors note.\n2. The proposed moment-matching algorithm scales poorly with moment order $k$ and dimension $m$ (via $\\binom{m+k}{k}$), which limits immediate practical effects despite the asymptotic guarantees.\n3. The theoretical claim of polylogarithmic compression yielding a stretched-exponential scaling $\\text{exp} (- \\sqrt[m]{d})$ is not supported with evidence. The numerical experiments in Section 6 demonstrate how the scaling laws can be improved only for quadratic compression.",
"questions": "1. Can you show an example with the scaling laws of a form $L \\approx L_0 + c \\text{exp} (- \\alpha’ \\sqrt[m]{d})$ to illustrate the stretched-exponential regime?\n2. In numerical experiments in Section 6 the exponent should have improved by a factor of 2: $C d^{-\\alpha} = C (\\frac{d’}{16})^{-2 \\alpha} =C’ (d’)^{-2\\alpha} $. The reported values are close but lower, 1.271 vs $2\\alpha = 1.366$ and 0.608 vs $2 \\alpha=0.616$. Why does this difference appear? And why is it larger for dataset compression? \n3. Many elements of modern neural networks do not fall under the smoothness assumptions, like ReLU, top-k selections, sparse \\ quantized representations. How do you imagine expanding your work around those limitations and how would compression rates be affected?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T03:02:05",
"modification_date": "2025-11-12T13:17:11",
"review_url": "https://openreview.net/forum?id=vxkzW4ljeX¬eId=BvtKM40N8v",
"license": "CC BY 4.0"
},
{
"id": "oaCo1YCmRM",
"forum": "vxkzW4ljeX",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14172/Reviewer_LiWj",
"reviewer_name": "Reviewer_LiWj",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper studies how neural networks and datasets can be compressed by exploiting permutation symmetries. The authors show that symmetric functions can be represented using fewer variables, which implies that both the model and the data can be reduced to polylogarithmic size without significantly changing the loss. This leads to what the authors call a dynamical lottery ticket hypothesis and stronger scaling laws.",
"strengths": "The paper presents an interesting idea: using permutation symmetry to achieve strong compression of both networks and datasets.\nThe theoretical argument (that symmetric functions can be represented with fewer variables), is promising. The results aim to connect model compression, scaling laws, and the lottery ticket hypothesis in a unified framework.",
"weaknesses": "The paper proposes a theoretical link between symmetry, compression, and scaling laws. However, the lack of clear algorithmic formulation and the absence of fair experimental baselines limit its current practical relevance.\n\nThe main limitation of the paper is the lack of rigor and clarity. The compression process is described only at a high level. It is not clear how one would actually construct the compressed network or dataset in practice. The paper does not include pseudocode or complexity estimates, making it hard to evaluate the tractability of the proposed methods. \n\nThe experimental comparison is incomplete. The proposed compressed network is compared with both the original network and a random sparse network. However, it is already known that random sparse networks perform poorly, while sparse networks obtained with *Iterative Magnitude Pruning* (IMP, Frankle & Carbin 2019) can match the performance of dense ones. A fair comparison should therefore include IMP or other modern sparse training methods.\n\nThe compression-error trade-off is not clearly quantified. The claim that a network with (d) parameters can be reduced to polylogarithmic size should be expressed as a function of the error, and possibly compared to existing theoretical bounds.\nFinally, some parts of the theoretical presentation are unclear. The meaning of the function $f$ in Theorem 5 is not explained, and the notation $|f' - f| = \\omega(d)$ is confusing, since $ \\omega(d)$ can mean any function that grows faster than $d$, but such a bound would be vacuous.",
"questions": "- Could you provide a concrete description of the compression algorithm? How are the compressed parameters and datasets obtained from the original ones?\n- How does your method compare, both in compression ratio and performance, with Iterative Magnitude Pruning or other sparse training techniques?\n- Can you explicitly state the trade-off between compression and approximation error, and how it compares with previous results (e.g. to those for the Strong LTH such as Pensia et al., 2020)?\n- What exactly does the function $f$ represent in Theorem 5? You didn't define it. Can you clarify the notation?\n- Can you argue that the bound $|f' - f| = \\omega(d)$ is not vacuous?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T18:32:31",
"modification_date": "2025-11-12T13:17:11",
"review_url": "https://openreview.net/forum?id=vxkzW4ljeX¬eId=oaCo1YCmRM",
"license": "CC BY 4.0"
}
]

id: fwCoRzh0Dw
url: https://openreview.net/forum?id=fwCoRzh0Dw
title: InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU
average_rating: 4
average_confidence: 3
ratings: [6, 4, 2]
confidences: [2, 3, 4]
reviewers_num: 3
keywords: ["Sparse Attention", "Efficient Attention", "Context Extrapolation", "KV Cache Offloading"]
abstract: In modern large language models (LLMs), handling very long context lengths presents significant challenges as it causes slower inference speeds and increased memory costs. Additionally, most existing pre-trained LLMs fail to generalize beyond their original training sequence lengths. To enable efficient and practical long-context utilization, we introduce \textit{InfiniteHiP}, a novel and practical LLM inference framework that accelerates processing by dynamically eliminating irrelevant context tokens through a modular hierarchical token pruning algorithm. Our method also allows generalization to longer sequences by selectively applying various RoPE adjustment methods according to the internal attention patterns within LLMs. Furthermore, we offload the key-value cache to host memory during inference, significantly reducing GPU memory pressure. As a result, InfiniteHiP enables the processing of up to 3 million tokens on a single L40s 48GB GPU -- 3x larger -- without any permanent loss of context information. Our framework achieves an 18.95x speedup in attention decoding for a 1 million token context without requiring additional training. We implement our method in the SGLang framework and demonstrate its effectiveness and practicality through extensive evaluations.
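To illustrate the offloading idea described in the abstract, a minimal LRU sketch: a bounded set of hot KV blocks stays on the GPU, and the least recently used block is offloaded to host memory rather than discarded. This is a generic illustration under stated assumptions, not InfiniteHiP's actual kernels; all names are hypothetical.

```python
# Minimal sketch of LRU-based KV-cache offloading (generic illustration only,
# not InfiniteHiP's implementation). All names are hypothetical.
from collections import OrderedDict

class LRUKVCache:
    def __init__(self, gpu_capacity_blocks: int):
        self.capacity = gpu_capacity_blocks
        self.gpu = OrderedDict()   # block_id -> KV block resident on GPU
        self.host = {}             # block_id -> KV block offloaded to host RAM

    def access(self, block_id):
        """Return the KV block, fetching it back from host memory on a miss."""
        if block_id in self.gpu:
            self.gpu.move_to_end(block_id)        # mark as most recently used
            return self.gpu[block_id]
        block = self.host.pop(block_id)           # cache miss: fetch from host
        self._insert(block_id, block)
        return block

    def _insert(self, block_id, block):
        if len(self.gpu) >= self.capacity:
            victim_id, victim = self.gpu.popitem(last=False)  # evict LRU block
            self.host[victim_id] = victim         # offload, never discard context
        self.gpu[block_id] = block
```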
tldr: InfiniteHiP extends the servable model context length beyond VRAM and pretrained model context limitation.
primary_area: infrastructure, software libraries, hardware, systems, etc.
pdf_url: https://openreview.net/pdf?id=fwCoRzh0Dw
submission_date: 2025-09-17T09:29:23
total_reviews: 3
reviews:
[
{
"id": "1VQ0xZHvLL",
"forum": "fwCoRzh0Dw",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8178/Reviewer_SD7R",
"reviewer_name": "Reviewer_SD7R",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "InfiniteHiP is a training-free long-context inference framework designed to address three key bottlenecks of LLMs when processing long sequences: computational efficiency, memory consumption, and generalization beyond the pretraining window.\nBuilding upon the original HiP, InfiniteHiP introduces a series of system-level improvements that make long-context inference feasible on a single GPU. The framework consists of three major components: Hierarchical Multi-Stage Pruning; Dynamic RoPE Adjustment, which adapts positional encoding strategies dynamically to enable out-of-length generalization for short-context pretrained models; and Hierarchical KV Offloading with LRU Policy, which manages multi-stage cache refreshing and memory transfer between GPU and host to minimize VRAM pressure. Through the synergy of these mechanisms, InfiniteHiP achieves significant performance improvements within the SGLang inference framework, specifically, a 7.24× end-to-end decoding speedup and an 18.95× acceleration in attention computation on million-token contexts, all without requiring any retraining.",
"strengths": "1. The work demonstrates strong practicality and engineering significance. InfiniteHiP can be directly integrated with a variety of existing models, such as LLaMA, Qwen, Gemma, and EXAONE, providing a general and deployment-ready solution for long-context inference on commodity GPUs.\n2. Another notable strength lies in its unified and system-oriented design perspective. Instead of focusing on a single optimization aspect, the framework simultaneously tackles the three major challenges of long-context modeling: computation, generalization, and memory through a coherent modular architecture.",
"weaknesses": "1. Despite its strong engineering impact, the scope of related work is relatively limited, covering only four prior studies, which may not sufficiently position InfiniteHiP within the broader literature of efficient attention and memory optimization.\n2. The main innovations reside at the system level, and the algorithmic novelty is incremental rather than conceptual. Each of the three modules, pruning, RoPE adjustment, and KV management, builds upon previously established ideas, leading to the impression of being “incremental but practical.”\n3. Although several ablation experiments are presented, the paper lacks a systematic quantitative analysis that isolates and justifies the independent contribution of each module. Strengthening the analytical rigor and theoretical interpretation of these components would significantly enhance the paper’s scientific depth and persuasive power.",
"questions": "see weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T02:13:07",
"modification_date": "2025-11-12T12:02:17",
"review_url": "https://openreview.net/forum?id=fwCoRzh0Dw¬eId=1VQ0xZHvLL",
"license": "CC BY 4.0"
},
{
"id": "R6abusy28e",
"forum": "fwCoRzh0Dw",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8178/Reviewer_G1Ti",
"reviewer_name": "Reviewer_G1Ti",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "This paper introduces InfiniteHiP, a training-free inference framework designed to address the challenges of processing extremely long contexts in Large Language Models (LLMs). The work tackles three main issues: the high computational and memory costs of the attention mechanism, the failure of pre-trained models to generalize beyond their training length, and the significant GPU memory pressure from the Key-Value (KV) cache. The core contributions are: 1) A modular multi-stage hierarchical token pruning algorithm that dynamically eliminates irrelevant context to accelerate attention. 2) A dynamic RoPE adjustment method that enables out-of-length generalization without fine-tuning. 3) An efficient KV cache offloading system that uses host memory and an LRU policy to manage the cache on a single GPU. The authors demonstrate that InfiniteHiP can process up to 3 million tokens on a single 48GB GPU, achieving significant speedups and strong performance on long-context benchmarks.",
"strengths": "- The paper is evaluated on a comprehensive set of benchmarks, including LongBench, RULER, and ∞Bench.\n\n- The work is substantial, integrating multiple techniques (sparse attention, OOL generalization, and KV cache offloading) into a single, practical framework. The implementation within the SGLang framework and detailed performance analysis show a significant engineering effort.\n\n- The proposed method achieves strong performance.",
"weaknesses": "- Crucial details of the proposed method, particularly the complete algorithms for context pruning (Algorithms 1-3), are deferred to the appendix. While this may be due to space constraints, it makes it challenging for the reader to fully grasp the core mechanism without frequently referencing the appendix.\n\n- The heuristic used in the `SelectRep` algorithm is a primary concern. The paper states that when a chunk is divided into two branches, the **first token** of each branch is used as a proxy to decide which branch to discard . This choice seems counter-intuitive. Considering the nature of the causal attention mask, the **last token** of a branch would likely be a more representative summary of the information within that branch. However, even so, the assumption that a single, fixed-position token can reliably represent an entire chunk is not convincingly justified and lacks strong empirical support in the paper.\n\n- The paper could be strengthened by discussing and comparing its KV cache offloading mechanism with other recent works[1,2,3]. \n\nI am willing to raise my score if my concerns are adequately addressed.\n\n[1] InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management\n\n[2] Arkvale: Efficient generative llm inference with recallable key-value eviction\n\n[3] OmniKV: Dynamic context selection for efficient long-context LLMs",
"questions": "1. A significant contribution of this work is the sophisticated KV cache management system. Given its practicality, do the authors plan to open-source the code to facilitate reproducibility and encourage further research in this area?\n\n2. Could the author share insights on why the first token was chosen as the representative token?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-22T02:46:22",
"modification_date": "2025-11-12T12:02:17",
"review_url": "https://openreview.net/forum?id=fwCoRzh0Dw¬eId=R6abusy28e",
"license": "CC BY 4.0"
},
{
"id": "W15YsjD4uF",
"forum": "fwCoRzh0Dw",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8178/Reviewer_ZK9U",
"reviewer_name": "Reviewer_ZK9U",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 1,
"presentation": 3,
"summary": "InfiniteHiP improves the KV cache offloading mechanism of HiP Attention (ICLR 2025) by enhancing its cache management policy. The core idea remains the same to manage the KV cache on the unified memory space while keeping a smaller key bank on the GPU memory, which acts as a cache. The use of the Least Recently Used (LRU) policy as the eviction mechanism is incremental.\n\n\nAfter reviewing section 3, FROM HIP TO INFINITEHIP, we are certain that this work is incremental. The token pruning is borrowed from H2O; the dynamic RoPE adjustment is a trick; and Least Recently Used (LRU) is incremental. This is an engineering-heavy paper with incremental improvements over existing work, overstated claims, and limited novel insights. To maintain the high standard of the ICLR conference, we tend to reject this paper.",
"strengths": "The work integrates sparse attention, offloading, and OOL generalization into one unified system. The training-free design and work integration can lead to better performance.\n\nWe believe training-free inference is essential for effective inference, and this paper demonstrates it.\n\nGPU kernels for InfiniteHIP are a good implementation.",
"weaknesses": "The experimental benchmark selection is LongBench and ∞Bench to prove the effectiveness of InfiniteHiP. However, the context length of LongBench (32K) and ∞Benc (100k) is much lower than its claim of supporting 3 MILLION TOKENS on a single GPU. That means the extended context length has not been proven effective for extremely long context tasks. We suggest that the authors conduct experiments on LongBench v2 with a longer context length.\n\nIn Table 5, the RULER Performance of InfiniteHiP starts to be lower than full attention at 128k (74.99 vs. 76.89). Will this tend to continue to go down for a longer context > 128k? This trend can make the title up to 3 million tokens on a single GPU an overstated claim if the InfiniteHiP can not maintain accuracy for long context.\n\nThe RoPE Strategy of sing chunk-indexed RoPE for layers 1-3 and relative RoPE for layers 4-32 is based on observing \"sliding window patterns in early layers\" (Appendix D). Why exactly layers 1-3? What about layers 1-8 or other setting? An ablation study in other settings would help a lot.\n\nThe baseline is also out of date, which compares FA2 instead of FA3 [1] or flashinfer [2]. Other lossy baselines include H2O, StreamingLLM, and InfLLM, from 2023-2024. We recommend a state-of-the-art baseline like [3] or [4]\n\n[1] Ye Z, Chen L, Lai R, et al. Flashinfer: Efficient and customizable attention engine for llm inference serving[J]. arXiv preprint arXiv:2501.01005, 2025.\n\n[2] Shah J, Bikshandi G, Zhang Y, et al. Flashattention-3: Fast and accurate attention with asynchrony and low-precision[J]. Advances in Neural Information Processing Systems, 2024, 37: 68658-68685.\n\n[3] Song W, Jayanthi S M, Ronanki S, et al. Compress, Gather, and Recompute: REFORMing Long-Context Processing in Transformers[J]. arXiv preprint arXiv:2506.01215, 2025.\n\n[4] Deng W, Yang Y, Du P, et al. HGCA: Hybrid GPU-CPU Attention for Long Context LLM Inference[J]. arXiv preprint arXiv:2507.03153, 2025.",
"questions": "Analysis of the impact of InfiniteHIP on network reasoning capabilities?\n\nHow would chunk size affect the InfiniteHiP performance?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-17T07:25:18",
"modification_date": "2025-11-12T12:02:17",
"review_url": "https://openreview.net/forum?id=fwCoRzh0Dw¬eId=W15YsjD4uF",
"license": "CC BY 4.0"
}
]

id: 5rjSeZCM6l
url: https://openreview.net/forum?id=5rjSeZCM6l
title: FedSumUp: Secure Federated Learning Without Client-Side Training for Resource-Constrained Edge Devices
average_rating: 3.5
average_confidence: 3.25
ratings: [4, 2, 4, 4]
confidences: [3, 3, 3, 4]
reviewers_num: 4
keywords: ["Federated Learning", "Data Condensation", "Server-Side Optimization", "Privacy-Preserving", "Edge Devices", "Variational Autoencoder"]
abstract: Horizontal Federated Learning (HFL) enables multiple clients with private data to collaboratively train a global model without sharing their local data. As a research branch of HFL, Federated Data Condensation with Distribution Matching (FDCDM) introduces a novel collaborative paradigm where clients upload small synthetic datasets instead of gradients and parameters. FDCDM faces two key challenges: privacy leakage risk, where synthetic data may leak the privacy of real data; and high computational cost on the client side, which limits the deployment capability of FDCDM on resource-constrained devices. To address these challenges, we propose FedSumUp, an improved FDCDM method. The core designs of FedSumUp include: generating initial data templates based on a Variational Autoencoder (VAE); and migrating the entire synthetic data optimization process to the server side, requiring clients only to upload distilled synthetic data and the mean of raw data features without exposing the original data itself. Experimental results on multiple real-world datasets demonstrate that FedSumUp achieves notable advantages in the following aspects: drastically reducing the visual similarity between synthetic and real data, and effectively resisting membership inference attacks; significantly lowering client-side computational overhead, making it deployable on edge devices. FedSumUp is the first work to systematically analyze privacy risks in FDCDM from the perspective of data similarity, providing a new direction for building efficient and privacy-preserving federated learning frameworks.
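To illustrate the server-side mean-feature-matching objective implied by the abstract, a minimal numpy sketch: the server adjusts synthetic samples so their feature mean matches a client-uploaded mean. The linear "encoder" and plain gradient step are illustrative placeholders, not FedSumUp's actual algorithm.

```python
# Minimal sketch of server-side mean-feature matching, as implied by the
# abstract. The linear feature extractor W and the gradient-descent update
# are hypothetical stand-ins, not FedSumUp's VAE-based pipeline.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 784))  # frozen stand-in feature extractor

def features(x):                # x: (n, 784) -> (n, 64)
    return x @ W.T

client_mean = features(rng.normal(size=(100, 784))).mean(axis=0)  # uploaded mean
synthetic = rng.normal(size=(10, 784))  # server-side synthetic samples

lr = 0.01
for _ in range(200):
    gap = features(synthetic).mean(axis=0) - client_mean  # mean discrepancy
    # Gradient of 0.5 * ||gap||^2 w.r.t. each synthetic sample is (1/n) W^T gap.
    synthetic -= lr * np.outer(np.ones(len(synthetic)), gap @ W) / len(synthetic)

print(np.linalg.norm(features(synthetic).mean(axis=0) - client_mean))  # ~0
```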
tldr:
primary_area: alignment, fairness, safety, privacy, and societal considerations
pdf_url: https://openreview.net/pdf?id=5rjSeZCM6l
submission_date: 2025-09-20T12:40:47
total_reviews: 4
reviews:
[
{
"id": "GcXZTsH254",
"forum": "5rjSeZCM6l",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23402/Reviewer_VTkQ",
"reviewer_name": "Reviewer_VTkQ",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper tackles two core weaknesses in Federated Data Condensation with Distribution Matching (FDCDM), a branch of horizontal federated learning:\n\n1. Synthetic datasets can still resemble real client data.\n2. Existing FDCDM algorithms need heavy local optimization.\n\nAuthors present the first systematic privacy-risk analysis in FDCDM from a data-similarity viewpoint.They propose FedSumUp, where a pre-trained VAE is used to create initial synthetic templates, all expensive optimizations are shifted to the server, leaving only data summarization task on clients.\n\nExperimental Results show the following:\nFar less visual similarity between synthetic and real data → improved privacy.\nHigh reduction in client computation, making it very suitable for edge devices.",
"strengths": "+ Systematically reduction of visual similarity between synthetic and real data\n+ No direct data, gradients, or parameters ever leave the client. \n+ No Client-Side Training\n+ VAE based summarization provides a standardized, privacy-safe feature extraction pipeline\n+ Centralizing the condensation and MMD-based alignment process ensures consistent optimization quality and reduces heterogeneity issues caused by varying client compute capacities.\n+ Balanced Utility and Privacy",
"weaknesses": "- Performance heavily relies on the representational strength and generalization of the pre-trained VAE. If the VAE does not capture key features relevant to a domain (e.g., medical images), synthetic data quality may degrade.\n- Migrating all optimization to the server increases centralized computational load, which can become difficult with large datasets and large number of clients. \n- Since the method removes personalized patterns to prevent MIAs, models may lose subtle but useful client-specific features, affecting tasks that rely on personalization\n- Since VAE is available to the server, can't the data be recovered through gradient inversion?\n- Centralizing all optimization increases the risk of server compromise; a malicious server could still attempt to reverse-engineer latent features. \n- It hasn't been tested on real-world images yet. The datasets used are very basic ones and less challenging. \n- More exhaustive experiments and real-world setups of FL should be explored as done in the following paper (inspired by Office 31):\n\"Federated Learning for Commercial Image Sources\", WACV 2023.\nDataset link: https://drive.google.com/file/d/1qgpj1TsGT4lnhhOmwR4gqVRigoHnMRnX",
"questions": "Although raw data is not shared, mean feature embeddings might still reveal distributional hints of private data. How to handle that?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T02:46:16",
"modification_date": "2025-11-12T18:17:25",
"review_url": "https://openreview.net/forum?id=5rjSeZCM6l¬eId=GcXZTsH254",
"license": "CC BY 4.0"
},
{
"id": "P0EfJoz3jo",
"forum": "5rjSeZCM6l",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23402/Reviewer_LYj3",
"reviewer_name": "Reviewer_LYj3",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This paper extends Federated Data Condensation with Distribution Matching (FDCDM) by addressing privacy limitations and computational constraints on edge devices. The authors incorporate Variational Autoencoders (VAE) to extract latent representations of client-side data, which are then transmitted to the server for synthetic data generation. This approach requires only serverside model training, thereby reducing the computational burden on clients and mitigating sample-level privacy leakage by transferring latent representations instead of ”initial templates” that could result in visual information leakage.",
"strengths": "The paper presents a complete framework with improvements in accuracy and computational efficiency compared to baseline methods.",
"weaknesses": "1. The paper suffers from significant organizational issues that impede comprehension. While the work builds upon Heterogeneous Federated Learning (HFL), this foundational design choice is not clearly articulated. The framework is only briefly mentioned at the beginning of the Introduction, using vague terminology to describe the challenges and background of HFL before transitioning to FDCDM and data security concerns. This fragmented presentation makes it difficult for readers to understand the core methodology and its relationship to existing work.\n\n2. Additionally, the paper’s contributions are presented in an unprofessional manner, lacking sufficient evidence to support claimed security benefits. The computational advantage is also inadequately substantiated, with only a vague claim of ”reducing client side computational overhead by over 90% compared to methods like FedSD2C,” which lacks rigor and clarity.\n\n3. The rationale for incorporating VAE to prevent sample leakage is inadequately explained. While the paper dedicates considerable space to discussing how the initial template used for synthetic data generation poses a risk of privacy leakage through potential attacks, it fails to provide a clear explanation of why and how VAE addresses this vulnerability. The connection between the VAE-based latent representation and enhanced privacy protection remains unclear.\n\n4. Despite claiming ”theoretical innovation,” the paper provides no theoretical analysis or formal results. The absence of theoretical foundations significantly weakens the paper’s contributions and makes it difficult to assess the principled nature of the proposed approach.\n\n5. The experimental evaluation is inadequate to support the paper’s claims. The experiments are limited to simple datasets and moderately non-IID settings, which is insufficient to demonstrate the method’s effectiveness. The performance evaluation does not adequately explore more challenging tasks or varied heterogeneous settings, making it less convinced regarding the improvement.",
"questions": "See weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T11:39:31",
"modification_date": "2025-11-12T18:17:26",
"review_url": "https://openreview.net/forum?id=5rjSeZCM6l¬eId=P0EfJoz3jo",
"license": "CC BY 4.0"
},
{
"id": "x5rrNe6sbA",
"forum": "5rjSeZCM6l",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23402/Reviewer_A8Df",
"reviewer_name": "Reviewer_A8Df",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper identifies two critical challenges in the existing Federated Data Condensation with Distribution Matching (FDCDM) paradigm: 1) significant privacy risks when using real data as templates for synthetic data generation, and 2) high computational costs on the resource-constrained edge device client side. To address these issues, the paper proposes FedSumUp, in which each client sends (per-class) VAE latent codes and mean feature vectors; the server performs a two-phase optimization (latent → pixel) to synthesize a global, small dataset used to train the global model.",
"strengths": "1. By offloading all complex optimization to the server, the client-side burden is reduced to a simple one-pass encoding and feature extraction.\n\n2. By using a VAE to generate abstract latent codes, FedSumUp avoids using templates that are either too realistic (leaking privacy) or too noisy (hurting utility).",
"weaknesses": "1. The server must now perform a two-phase optimization (latent code and pixel-level) for each participating client in every round. This cost could be substantial and scales with the number of clients, yet it is not reported, which makes the \"efficiency\" claim one-sided.\n\n2. The paper assumes a semi-honest server adversary and evaluates privacy against server-side MIA, yet the server receives per-class latent codes and mean feature vectors every round. The server with the public VAE decoder may decode latents to image-like content. No further analysis is provided on how much information these latents/means reveal.",
"questions": "1. What is the total computational overhead on the server, and how does this cost scale as the number of clients increases?\n\n2. The entire method relies on a general-purpose VAE pre-trained on a public dataset. How would the method's performance be affected if the clients' private data comes from a highly specialized domain that is significantly \"out-of-distribution\" for the VAE's pre-training data?\n\n3. You opted for a weaker optimization objective on the server to match the mean of the real and synthetic features, rather than a stronger distributional loss. Was this choice primarily for efficiency?\n\n4. What is the reconstruction fidelity when directly decoding transmitted latents with the provided/public decoder?\n\n5. If the server actively optimizes latents to probe the client distribution, how robust is FedSumUp to targeted reconstruction/inversion?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T01:06:45",
"modification_date": "2025-11-12T18:17:26",
"review_url": "https://openreview.net/forum?id=5rjSeZCM6l¬eId=x5rrNe6sbA",
"license": "CC BY 4.0"
},
{
"id": "B7THUcdFRl",
"forum": "5rjSeZCM6l",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23402/Reviewer_FmzS",
"reviewer_name": "Reviewer_FmzS",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper introduces FedSumUp, a federated data condensation framework designed specifically to address privacy leakage and client-side computational overhead issues in horizontal federated learning. Instead of requiring clients to synthesize and optimize data locally, FedSumUp shifts all expensive data optimization to the server and has clients only upload VAE-encoded latent codes and mean data features, thereby sharply reducing computational load and limiting privacy exposure. The paper systematically critiques current FDCDM paradigms, exposing visual privacy leakage in real-data initialization and utility degradation under random-noise initialization, and presents extensive experiments to demonstrate improved privacy, efficiency, and performance under various non-IID settings compared to several strong baselines.",
"strengths": "1) The paper is the first to rigorously expose and analyze visual privacy leakage in FDCDM schemes, especially under real-data initialization. This is well-illustrated with Table 1 and the corresponding discussion on Page 4, and visually connected to MIA vulnerabilities.\n2) By offloading all synthetic data optimization to the server, the proposed method massively reduces resource requirements on clients (as substantiated by Tables 3, 5, 6, and 7). Table 3 highlights that client runtime is reduced by over 10–15x compared to other methods.\n3) The paper proposes a clever privacy-preserving mechanism. It uses a general-purpose, pre-trained VAE as a privacy filter, where clients upload highly abstract \"latent codes\" rather than raw images. The server first optimizes these codes in the latent space before decoding them. This process tends to filter out personalized features while retaining the common class features beneficial for model training , thereby mechanistically helping to resist Membership Inference Attacks (MIA).",
"weaknesses": "1. While the paper exposes practical visual privacy leakages in prior FDCDM methods, its claimed privacy enhancements in FedSumUp are predominantly empirical (via MIA ACC, Table 3 and Table 5). There is no formal privacy analysis or theoretical bound (e.g., differential privacy guarantees, or information-theoretic leakage quantification). \n2. While Appendix A.6 claims that the VAE is universal and not fine-tuned per client or task, the actual privacy and generalization performance of this VAE is not deeply interrogated. What happens if the VAE is insufficiently expressive for specific domains or tasks? Could the VAE itself encode subtle privacy leakages if, for example, the upstream training dataset for the VAE includes client-resembling data? No empirical test of VAE generality or security robustness is attempted.\n3. The protocol seems to assume honest-but-curious clients, but if clients upload malicious codes or manipulated means, what prevents poisoning or information leakage back to the server or other clients? There is no discussion of potential mechanisms.\n4. While MNIST, FashionMNIST, and CIFAR10 are standard, they are relatively small and may not sufficiently represent real-world high-heterogeneity, high-dimensional, or non-vision FL tasks. It remains unclear if the claimed privacy/utility gains hold for domains outside canonical image datasets.",
"questions": "See Weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T19:46:01",
"modification_date": "2025-11-12T18:17:26",
"review_url": "https://openreview.net/forum?id=5rjSeZCM6l¬eId=B7THUcdFRl",
"license": "CC BY 4.0"
}
]

id: qN0Il4dtGg
url: https://openreview.net/forum?id=qN0Il4dtGg
title: HARMAP: Hierarchical Atomic Representation for Materials Property Prediction
average_rating: 3.5
average_confidence: 3
ratings: [2, 2, 4, 6]
confidences: [4, 3, 3, 2]
reviewers_num: 4
keywords: ["AI for Materials", "Atomic Representation", "Material Property Prediction"]
abstract: Accurate prediction of material properties is a key step toward rapid materials discovery and cost-effective exploration of vast chemical spaces. Recent advances in machine learning (ML) offer a data-driven alternative that enables fast and scalable property estimation. However, prevailing graph-based pipelines use one-hot or shallow element embeddings and simple distance-based edges, which under-encode element-specific characteristics and cannot faithfully capture bond relations. Thus, we develop HARMAP, a Hierarchical Atomic Representation for Materials Property prediction. First, we build a chemistry-informed Hierarchical Element Knowledge Tree (HEK-Tree) that classifies elements from coarse to fine (e.g., metal vs. non-metal, subgroupings), producing atomic embeddings that preserve unique identities and inter-atomic relations. Second, we map these features into hyperbolic spaces that preserve hierarchical structure, enabling compact separation of levels and smooth similarity across related elements. Finally, we construct a compound graph whose nodes use the learned atomic embeddings and whose edges combine geometric proximity with chemical similarity, providing bond-aware connectivity. Across three large public datasets, HARMAP consistently improves over formula-only, structure-only, and standard graph baselines, indicating the effectiveness of HARMAP's unique atomic and bond representations.
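Since the abstract leans on hierarchy-preserving hyperbolic embeddings, a minimal sketch of the standard Poincaré-ball distance such embeddings typically use (HARMAP's exact hyperbolic operations may differ), with a toy example of how nearby leaves of an element hierarchy sit close in the ball:

```python
# Minimal sketch of the Poincaré-ball distance commonly used for
# hierarchy-preserving embeddings (standard formula; not necessarily
# HARMAP's exact operator). Points must lie strictly inside the unit ball.
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    """d(u, v) = arcosh(1 + 2||u-v||^2 / ((1-||u||^2)(1-||v||^2)))."""
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * sq / denom))

# Toy element embeddings (hypothetical coordinates): same-family elements
# embed close together, different branches of the tree embed far apart.
na = np.array([0.50, 0.10])
k  = np.array([0.52, 0.12])   # same family as na -> small distance
fe = np.array([-0.40, 0.60])  # different branch  -> large distance
print(poincare_distance(na, k), poincare_distance(na, fe))
```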
tldr: A Hierarchical Atomic Representation for Materials Property prediction.
primary_area: applications to physical sciences (physics, chemistry, biology, etc.)
pdf_url: https://openreview.net/pdf?id=qN0Il4dtGg
submission_date: 2025-09-10T21:25:01
total_reviews: 4
reviews:
[
{
"id": "Kr0LTtqs14",
"forum": "qN0Il4dtGg",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3745/Reviewer_CDzq",
"reviewer_name": "Reviewer_CDzq",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces HARMAP, a hierarchical hyperbolic representation framework for materials property prediction.\nThe method builds a Hierarchical Element Knowledge Tree (HEK-Tree) that encodes chemical taxonomy (metals, non-metals, families, elements) into hyperbolic embeddings, preserving periodic-table hierarchies. \nA Bond-aware Connectivity (BondNeC) mechanism then computes chemically meaningful edge features from hyperbolic distances, and a Hyperbolic Transformer (Hypformer) processes the resulting compound graph for property regression.\nExperiments show its improvements over baselines. There are also ablations showing the contribution of hierarchical encoding and bond features.",
"strengths": "- The idea of embedding periodic-table hierarchies in hyperbolic space is original and well motivated by chemistry’s tree-like structure.\n- Benchmarks evaluation shows consistent improvement, improving upon recent strong baselines such as CrystalFramer and eComFormer.\n- Thorough ablation studies demonstrate clear incremental improvements from each module (HEK-Tree depth, BondNeC, learnable nodes).\n- The paper is clearly structured, with motivating figures and detailed derivations of hyperbolic operations.\nAppendices contain implementation and theoretical clarifications, increasing reproducibility.",
"weaknesses": "- The HEK-Tree and much of the architecture (Hypformer) is adapted from prior word hierarchy [1, 2, 3] and hyperbolic Transformer work [1, 2]; the new contribution mostly lies in its application domain.\n- The model is closer to an engineering combination of existing components than to a new fundamental architecture.\n- Hyperbolic operations and dual-stage encoding are more expensive than Euclidean counterparts.\n\n[1] Tifrea, Alexandru, Gary Bécigneul, and Octavian-Eugen Ganea. \"Poincar\\'e glove: Hyperbolic word embeddings.\" arXiv preprint arXiv:1810.06546 (2018).\n\n[2] Sonthalia, Rishi, and Anna Gilbert. \"Tree! i am no tree! i am a low dimensional hyperbolic embedding.\" Advances in Neural Information Processing Systems 33 (2020): 845-856.\n\n[3] Zhang, Delvin Ce, Rex Ying, and Hady W. Lauw. \"Hyperbolic graph topic modeling network with continuously updated topic tree.\" Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2023.\n\n[4] Yang, Menglin, et al. \"Hypformer: Exploring efficient transformer fully in hyperbolic space.\" Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024.\n\n[5] Yang, Xin, et al. \"Hgformer: Hyperbolic Graph Transformer for Recommendation.\" arXiv preprint arXiv:2502.15693 (2024).",
"questions": "- The paper does not report runtime, model size, or training efficiency relative to Transformer/GNN baselines.\n- All benchmarks are standard formation-energy/bandgap tasks; results on smaller or experimental datasets (e.g., magnetism, phonon, or thermoelectric properties) would better test generalization. Also MatBench and MatBench-discovery can be taken into account.\n- While component-wise ablations are given, cross-validation or statistical significance of MAE differences is not reported.\n- No uncertainty estimates are provided, which are important in materials modeling contexts.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T04:01:24",
"modification_date": "2025-11-12T11:09:19",
"review_url": "https://openreview.net/forum?id=qN0Il4dtGg¬eId=Kr0LTtqs14",
"license": "CC BY 4.0"
},
{
"id": "nFAzzlciSO",
"forum": "qN0Il4dtGg",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3745/Reviewer_3Qd5",
"reviewer_name": "Reviewer_3Qd5",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The authors present HARMAP, which consists of the steps of building KEK-Tree, mapping features into hyperbolic spaces to preserve hierarchical structures of the KEK-tree, and constructing compound graphs to learn atom embedding taking into account bond-aware connectivity.\n\nThe performance evaluation of HARMAP was performed with three public datasets and its effectiveness was shown. \n\nWhile HARMAP might include technical novelties, their empirical evaluation is weak and shallow.\nFor example, I am unsure of the significance of the improvement achieved by HARMAP in the tables -- it looks very small improvements. Also, no standard deviations are shown in the tables.\n\nBesides, what empirically happens with HARMAP from a viewpoint of crystal structures is missing (e.g., which substructures have contributed to improve the MAE score and *why* do such contributions happen?). This makes me feel that their analysis quite shallow and they merely show numbers without deeper understanding to the model behaviors translated into generic interpretations and trends to materials in the datasets.",
"strengths": "Algorithmic novelty of HARMAP",
"weaknesses": "Performance evaluation which is shallow and not convincing",
"questions": "I do not have specific questions but would like to see the authors' response based on the comments above",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T15:26:17",
"modification_date": "2025-11-12T11:09:20",
"review_url": "https://openreview.net/forum?id=qN0Il4dtGg¬eId=nFAzzlciSO",
"license": "CC BY 4.0"
},
{
"id": "v4uA8CIxzm",
"forum": "qN0Il4dtGg",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3745/Reviewer_Py87",
"reviewer_name": "Reviewer_Py87",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces HARMAP, a novel machine learning framework designed to improve the accuracy of materials property prediction. The authors identify key limitations in existing graph-based models, which often rely on oversimplified atomic representations (like one-hot encodings) and geometric-only edges, failing to capture the rich hierarchical relationships between elements and chemically meaningful bonds. HARMAP addresses these shortcomings by creating a more sophisticated and chemistry-aware representation of crystalline materials.\n\nThe main contributions of the work are threefold. First, the authors construct a Hierarchical Element Knowledge Tree (HEK-Tree), a taxonomy that organizes elements from broad categories (e.g., metal/nonmetal) down to specific chemical families and individual elements. Second, this tree is embedded into hyperbolic space, a geometric domain naturally suited for representing hierarchical data with low distortion, which allows the model to preserve chemical relationships effectively. Finally, the framework introduces Bond-aware Connectivity (Bondnec), a method to enrich the edges in the crystal graph by combining standard interatomic distances with a chemical similarity score derived from the hierarchical embeddings, leading to a more accurate representation of bonding.",
"strengths": "1. Holistic Integration of Chemistry: The model moves beyond simple geometry by incorporating deep chemical knowledge. The HEK-Tree encodes established periodic trends, and the Bondnec module infuses chemical similarity into bond representations. This allows the model to reason about atomic interactions in a way that is more aligned with a chemist's intuition.\n\n2. Strong Empirical Performance: The paper provides compelling evidence of its effectiveness. HARMAP achieves state-of-the-art results across three major, diverse benchmarks (Materials Project, JARVIS, OQMD) and on multiple key properties (formation energy, bandgap, elastic moduli). The consistent and significant improvements over strong baselines are a major strength.\n\n3. Comprehensive Ablation Studies: The authors thoroughly validate their design choices through extensive ablations. They demonstrate the individual contribution of the HEK-Tree, the hyperbolic backbone (Hypformer), and the Bondnec edges, proving that each component is essential for the final performance. The study on the HEK-Tree depth also provides valuable insights into the importance of hierarchy.",
"weaknesses": "1. Potential Rigidity of the HEK-Tree: The HEK-Tree is constructed based on fixed, pre-defined chemical knowledge. While this provides a strong inductive bias, it might be less flexible than a fully learned hierarchy. It may not easily adapt to discover novel, non-intuitive element relationships that are not already captured by the standard periodic table grouping.\n\n2. Limited Interpretability of Learned Embeddings: Although the HEK-Tree structure itself is interpretable, the actual node embeddings learned in hyperbolic space are high-dimensional and abstract. While the paper shows the model works, it may be difficult to directly translate these learned representations back to concrete, new chemical insights without further analysis.",
"questions": "1. To what extent is the hierarchy of the HEK-Tree itself learnable, and have you experimented with allowing the tree structure or hierarchical paths to be optimized during training, rather than being fixed based on pre-defined chemical knowledge?\n\n2. Could you provide a comparative analysis of HARMAP's computational cost (e.g., FLOPs, memory usage, or training time) against key baselines to clarify the performance-to-cost ratio and practical scalability?\n\n3. Can you provide any qualitative analysis or case studies demonstrating that the learned Bondnec similarity scores S(i,j) align with known chemical bonding preferences, to validate the claim of capturing chemically meaningful connections?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T01:38:13",
"modification_date": "2025-11-12T11:09:22",
"review_url": "https://openreview.net/forum?id=qN0Il4dtGg¬eId=v4uA8CIxzm",
"license": "CC BY 4.0"
},
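To make the Bondnec idea discussed in the reviews above concrete (blending a geometric distance with a hyperbolic chemical similarity), here is a minimal, hypothetical sketch. It is not the authors' implementation; `poincare_distance`, `bond_weight`, the mixing weight `alpha`, and the 2-D toy embeddings are all illustrative assumptions.

```python
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Geodesic distance between two points inside the unit Poincare ball."""
    uu, vv = np.dot(u, u), np.dot(v, v)
    diff = np.dot(u - v, u - v)
    # Standard Poincare-ball distance formula; arg >= 1 for points in the ball.
    arg = 1.0 + 2.0 * diff / ((1.0 - uu) * (1.0 - vv))
    return float(np.arccosh(arg))

def bond_weight(r_ij: float, z_i: np.ndarray, z_j: np.ndarray,
                alpha: float = 0.5, r_scale: float = 1.0) -> float:
    """Blend a geometric term (interatomic distance r_ij) with a chemical
    similarity term derived from hyperbolic element embeddings z_i, z_j."""
    geo = np.exp(-r_ij / r_scale)                # closer atoms -> larger weight
    chem = np.exp(-poincare_distance(z_i, z_j))  # similar elements -> larger weight
    return alpha * geo + (1.0 - alpha) * chem

# Toy usage: two hypothetical element embeddings inside the unit ball.
z_fe, z_co = np.array([0.30, 0.10]), np.array([0.28, 0.12])
print(bond_weight(r_ij=2.5, z_i=z_fe, z_j=z_co))
```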
{
"id": "78nlETkm8m",
"forum": "qN0Il4dtGg",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3745/Reviewer_t2RJ",
"reviewer_name": "Reviewer_t2RJ",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper is concerned about crystal property prediction and proposes a hierarchical atomic representation for materials property prediction (HARMAP). The main characteristics of HARMAP are (i) a hierarchical element knowledge tree (HEK-Tree), which encodes domain knowledge (the periodic table) as a hierarchical tree representation, allowing us to embed each atom in a hyperbolic space by considering its relationship to other similar atoms (in a learnable way) and (ii) a bond-aware connectivity (Bondnec), which constructs a graph from a crystal structure by considering not only atomic distances between pairs of atoms but their distances in the hyperbolic space.\n\nThe empirical studies show that the proposed method achieves the best predictive performance among others in a standard suite of benchmark tasks and that the proposed architecture is reasonable by ablation studies.",
"strengths": "It is reasonable to incorporate a taxonomy chemical elements into embeddings of atoms and bonds, instead of using one-hot vectors, for performance improvement. The resultant architecture to implement the idea is also sound to me. The empirical results are compelling to demonstrate the benefit of the proposed idea.",
"weaknesses": "Most of the details on how the authors run the experiments are not in the main part of the paper and are sent to the supplementary material. Since such information is important to understand whether the experiments were conducted in a fair way, I would like to see it in the main body.\n\nI'm mostly curious about how the hyperparameters are determined, specifically embedding dimensions and the numbers of Transformer blocks. In Appendix B.3, the authors provided these numbers used in the experiments, but as far as I am aware of, have not provided the information regarding how these numbers are determined.",
"questions": "I would like to ask the authors to clarify how the hyperparameters are determined in the experiment.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T16:34:31",
"modification_date": "2025-11-12T11:09:22",
"review_url": "https://openreview.net/forum?id=qN0Il4dtGg¬eId=78nlETkm8m",
"license": "CC BY 4.0"
}
] |
0hLuQAT3fV
|
https://openreview.net/forum?id=0hLuQAT3fV
|
Universal Image Immunization against Diffusion-based Image Editing via Semantic Injection
| 5
| 3.5
|
[
4,
4,
4,
8
] |
[
3,
4,
4,
3
] | 4
|
[
"Diffusion Model",
"AI Safety",
"Image Immunization",
"Adversarial Attack",
"Image Editing"
] |
Recent advances in diffusion models have enabled powerful image editing capabilities guided by natural language prompts, unlocking new creative possibilities. However, they introduce significant ethical and legal risks, such as deepfakes and unauthorized use of copyrighted visual content. To address these risks, image immunization has emerged as a promising defense against AI-driven semantic manipulation. Yet, most existing approaches rely on image-specific adversarial perturbations that require individual optimization for each image, thereby limiting scalability and practicality. In this paper, we propose the first universal image immunization framework that generates a single, broadly applicable adversarial perturbation specifically designed for diffusion-based editing pipelines. Inspired by universal adversarial perturbation (UAP) techniques used in targeted attacks, our method generates a UAP that embeds a semantic target into images to be protected. Simultaneously, it suppresses original content to effectively misdirect the model’s attention during editing. As a result, our approach effectively blocks malicious editing attempts by overwriting the original semantic content in the image via the UAP. Moreover, our method operates effectively even in data-free settings without requiring access to training data or domain knowledge, further enhancing its practicality and broad applicability in real-world scenarios. Extensive experiments show that our method, as the first universal immunization approach, significantly outperforms several baselines in the UAP setting. In addition, despite the inherent difficulty of universal perturbations, our method also achieves performance on par with image-specific methods under a more restricted perturbation budget, while also exhibiting strong black-box transferability across different diffusion models.
|
alignment, fairness, safety, privacy, and societal considerations
|
https://openreview.net/pdf?id=0hLuQAT3fV
| 2025-09-12T19:50:27
| 4
|
[
{
"id": "Cp6SNqZd08",
"forum": "0hLuQAT3fV",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4421/Reviewer_nGCo",
"reviewer_name": "Reviewer_nGCo",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes a universal image immunization framework that protects images from malicious diffusion-based editing by applying a single, broadly effective adversarial perturbation. Unlike image-specific defenses, the method generates a universal adversarial perturbation (UAP) that embeds a semantic target and suppresses original content, thereby misdirecting the diffusion model’s attention and preventing faithful or unauthorized semantic modifications.",
"strengths": "- Research on anti-editing is meaningful and promising.\n- The proposed universal adversarial perturbation (UAP) demonstrates greater effectiveness compared to prior per-image optimization approaches.\n- Experimental results show that the proposed method achieves improved performance in several cases.",
"weaknesses": "- During the editing phase, does the proposed method need to append the target prompt (e.g., “Ronaldo”) to the editing prompt? If so, how can it guarantee that a malicious user would use that specific prompt? If not, how does the UAP maintain robustness across different editing prompts, given that it appears to be trained with a fixed target prompt?\n\n- How well does the UAP generalize to complex or lengthy editing prompts? Does its effectiveness degrade under more complicated prompt conditions?\n\n- The UAP is trained on 10,000 random image–prompt pairs. How does the size of this training set influence the robustness and generalization of the learned perturbation?\n\n- Since the primary goal is to defend against editing rather than to generate a target pattern, why is the target semantic injection loss necessary? Would using only the source semantic suppression loss suffice, and how would that affect performance?",
"questions": "Please refer to the weakness part above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T15:59:05",
"modification_date": "2025-11-12T11:15:53",
"review_url": "https://openreview.net/forum?id=0hLuQAT3fV¬eId=Cp6SNqZd08",
"license": "CC BY 4.0"
},
{
"id": "vydGJ4hJSZ",
"forum": "0hLuQAT3fV",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4421/Reviewer_8CoU",
"reviewer_name": "Reviewer_8CoU",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper empirically proposes a universal image immunization method against diffusion-based editing by jointly optimizing semantic injection and semantic suppression losses. A single universal perturbation is trained to mislead diffusion models semantically while preserving visual quality. Extensive experiments demonstrate strong white-box and black-box defense across multiple diffusion models.",
"strengths": "- The paper proposes a universal, data-free image immunization framework that generalizes across diffusion models.\n- The method introduces a simple yet effective dual-loss design to achieve semantic-level defense.\n- The approach demonstrates strong transferability and robustness under both white-box and black-box settings.\n- The experiments cover multiple diffusion models and editing scenarios, showing consistent performance.",
"weaknesses": "- The paper presents an empirical approach with limited theoretical justification.\n- The authors do not provide a thorough discussion on why $\\mathcal{L}_\\text{inj}$ is effective in the cross-attention feature space or its theoretical justification, relying instead primarily on empirical validation.\n- The evaluation relies heavily on pixel and perceptual similarity metrics, despite the method's core focus on semantic injection and suppression; adding CLIPScore or Grounding DINO detection would better assess semantic alignment.\n- The study lacks visualization of training dynamics; plotting the evolution of semantic injection and suppression losses would help verify optimization stability and convergence.\n- The paper misses key related works on image immunization, such as attention-based EditShield [1] and diffusion latent attack [2].\n\n[1] Chen et. al. EditShield: Protecting Unauthorized Image Editing by Instruction-guided Diffusion Models, ECCV 2024 \n[2] Shih et. al. Pixel Is Not a Barrier: An Effective Evasion Attack for Pixel-Domain Diffusion Models, AAAI 2025",
"questions": "- Could the authors include CLIPScore or Grounding DINO detection in the main results to evaluate semantic alignment, and provide additional metrics in the revision for completeness?\n- When converting tensors back to images, clipping and quantization are applied. Could these operations break the attack by altering $\\delta$ effective direction or strength and thus reduce the semantic injection/suppression effect?\n- Could the two losses interfere or cancel each other out during optimization, given their opposite semantic objectives?\n- Could the authors provide training curves of total, injection, and suppression losses to illustrate optimization stability and convergence?\n\nI encourage the authors to strengthen the paper by addressing the weakness and questions in the rebuttal.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-19T12:09:37",
"modification_date": "2025-11-12T11:15:54",
"review_url": "https://openreview.net/forum?id=0hLuQAT3fV¬eId=vydGJ4hJSZ",
"license": "CC BY 4.0"
},
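The dual-loss universal-perturbation training that the review above summarizes (semantic injection plus source suppression under a fixed budget) broadly follows the standard UAP recipe. Below is a hedged PGD-style sketch, not the paper's actual objective; the feature extractor `feats`, the tensor shapes, and the weighting `lam` are illustrative assumptions.

```python
import torch

def train_uap(loader, feats, target_feat, eps=8 / 255, lr=1e-2, lam=1.0, steps=1000):
    """Learn one universal perturbation delta (L_inf-bounded) that injects a
    target semantic and suppresses the source semantic in a feature space.
    `feats(x)` stands in for a frozen diffusion-model feature extractor;
    `loader` yields (B, 3, 512, 512) image batches; `target_feat` is the
    precomputed (1, D) feature of the target image."""
    delta = torch.zeros(3, 512, 512, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _, (x, _) in zip(range(steps), loader):
        f_adv = feats((x + delta).clamp(0, 1))
        f_src = feats(x).detach()
        # Pull perturbed features toward the target semantic...
        loss_inj = 1 - torch.cosine_similarity(f_adv.flatten(1), target_feat.flatten(1)).mean()
        # ...and push them away from the original content.
        loss_sup = torch.cosine_similarity(f_adv.flatten(1), f_src.flatten(1)).mean()
        loss = loss_inj + lam * loss_sup
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # enforce the universal perturbation budget
    return delta.detach()
```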
{
"id": "iarZaNpvvo",
"forum": "0hLuQAT3fV",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4421/Reviewer_S8F9",
"reviewer_name": "Reviewer_S8F9",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The paper presents a framework that learns a single universal adversarial perturbation (UAP) to safeguard images from unauthorized text‑guided diffusion model editing. Unlike prior approaches that rely on image‑specific perturbations—limiting scalability and practicality—the proposed method employs one universal perturbation applicable to any image. By overwriting the original semantic content with a target semantic at the cross‑attention level, the approach effectively alters the resulting edits. Experimental results demonstrate that the proposed UAP not only outperforms existing baselines in universal settings but also achieves performance comparable to image‑specific perturbations.",
"strengths": "1. The proposed method enables universal protection using a single perturbation, making it significantly more practical and scalable compared to image-specific perturbations.\n2. The paper is clearly written and easy to follow, with well-structured methodology and presentation.\n3. The approach demonstrates broad applicability across diverse editing models, including Stable Diffusion v1.4 and v2.0, InstructPix2Pix, DiT, and inpainting pipelines.",
"weaknesses": "1. While the method aims to inject target semantics, it is unclear whether the perturbation truly captures the intended concept. For instance, in the *cow* example of Figure 3 (Ours), the generated image still depicts a cow. Also, the perturbation appears to preserve only the **structure** of the *Ronaldo* target image, rather than semantic attributes like gender or identity.\n2. The authors claim that text semantics are naturally fused into visual features at the cross-attention output level. However, textual semantics are also embedded within **attention map**—as used in prior works such as Lo et al. [1]—since the key vectors in the cross-attention mechanism are derived from the textual prompt. The novelty of operating on cross-attention outputs rather than attention maps may be overstated.\n3. The data-dependent UAP is trained on 10,000 randomly sampled LAION-2B-en image–text pairs and evaluated on 500 images spanning 10 object classes. It remains unclear whether the semantic suppression generalizes beyond the training distribution. In particular, if a new image contains semantics absent from the 10,000 training pairs, the UAP may exhibit reduced effectiveness.\n4. The proposed UAP is added to *generated* images before passing them through the diffusion model. However, this is not representative of typical editing pipelines, which usually operate on *real* images. The mismatch between training/deployment assumptions and real usage scenarios raises concerns about practical robustness.\n\n[1] Lo et al., Distraction is all you need: Memory-efficient image immunization against diffusion-based image editing, CVPR 2024.\n\nNote: Weaknesses 1-4 correspond directly to Questions 1-4.",
"questions": "1. In Section 5.2, the authors claim that the generated results reflect the injected *Ronaldo* semantics. If the target image were replaced with a different individual—such as a woman or someone with distinct facial attributes—would the resulting edits reflect high-level semantic changes (e.g., gender, identity) rather than merely structural features? An expanded ablation on target identity would help assess the generality and depth of the proposed semantic injection.\n2. The “Attention Map Attack” baseline generates perturbations by minimizing the alignment between the attention map and the original image semantics (Section 7.4). If this baseline were re-implemented using the same loss functions (Eq. 4 and 5), but applied to attention maps rather than cross-attention outputs, would it achieve comparable effectiveness to the proposed method? A direct comparison would clarify whether operating on cross-attention outputs offers a meaningful advantage over using attention maps.\n3. How does the proposed UAP perform on test images that contain semantics not seen during training? Additional experiments would help validate the generalization of semantic suppression beyond the 10,000 training pairs.\n4. All evaluations appear to apply the UAP to *generated* images. Has the method been tested on *real* images as inputs to the editing pipeline? Since most practical use cases involve real images, results on this setting would be valuable.\n5. In Figure 2(b), the attention map for the *Ronaldo* prompt appears to be as responsive as that of *people*, despite the claim in the Figure 2 caption that attention should be suppressed for the target prompt. Can the authors clarify this observation?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-17T13:22:40",
"modification_date": "2025-11-12T11:15:54",
"review_url": "https://openreview.net/forum?id=0hLuQAT3fV¬eId=iarZaNpvvo",
"license": "CC BY 4.0"
},
{
"id": "F0Pmzrxobs",
"forum": "0hLuQAT3fV",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4421/Reviewer_Hbgg",
"reviewer_name": "Reviewer_Hbgg",
"rating": 8,
"confidence": 3,
"soundness": 4,
"contribution": 3,
"presentation": 3,
"summary": "The paper introduces a universal adversarial perturbation approach for protecting images from diffusion-based editing. Instead of optimizing perturbations per image, it learns a single perturbation that can be applied universally. The method combines a semantic injection loss that aligns perturbed images with a target concept and a suppression loss that reduces the influence of original semantics, effectively disrupting unauthorized edits. Experiments show strong protection, cross-model generalization, and competitive performance even in data-free settings.",
"strengths": "1. The paper is clearly written and well organized, with intuitive figures and a logical presentation of ideas.\n\n2. The proposed framework is well motivated, and the introduction of the source semantic suppression loss is a novel and insightful component that strengthens the overall approach.\n\n3. The experiments are thorough and convincing, showing strong and consistent results across models and settings, including data-free and black-box scenarios.",
"weaknesses": "The comparison with *Semantic Attack* may not be fully fair, as the original method is not designed under any $ \\ell_2 $ or $ \\ell_\\infty $ perturbation constraint. Imposing such a bound changes its optimization behavior and could disadvantage it in this setting.",
"questions": "1. Have the authors explored how the visual structure of the chosen target (for example, purely geometric or black-and-white grid patterns instead of semantic objects like “Ronaldo” or “Tiger”) affects the resulting perturbation? Such structured patterns might yield more uniform attention disruption and stronger transferability.\n\n2. The method achieves strong performance under the universal constraint. If this constraint were relaxed—allowing limited image-specific adaptation—how might the performance change, and what strategies could further strengthen the performance in that less restricted setting?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-17T13:02:27",
"modification_date": "2025-11-12T11:15:55",
"review_url": "https://openreview.net/forum?id=0hLuQAT3fV¬eId=F0Pmzrxobs",
"license": "CC BY 4.0"
}
] |
|
3sJ4zKToW6
|
https://openreview.net/forum?id=3sJ4zKToW6
|
Consistent Low-Rank Approximation
| 6.666667
| 3.333333
|
[
4,
8,
8
] |
[
3,
2,
5
] | 3
|
[
"low-rank approximation",
"online algorithms",
"consistency",
"recourse"
] |
We introduce and study the problem of consistent low-rank approximation, in which rows of an input matrix $\mathbf{A}\in\mathbb{R}^{n\times d}$ arrive sequentially and the goal is to provide a sequence of subspaces that well-approximate the optimal rank-$k$ approximation to the submatrix $\mathbf{A}^{(t)}$ that has arrived at each time $t$, while minimizing the recourse, i.e., the overall change in the sequence of solutions. We first show that when the goal is to achieve a low-rank cost within an additive $\varepsilon\cdot||\mathbf{A}^{(t)}||_F^2$ factor of the optimal cost, roughly $\mathcal{O}\left(\frac{k}{\varepsilon}\log(nd)\right)$ recourse is feasible. For the more challenging goal of achieving a relative $(1+\varepsilon)$-multiplicative approximation of the optimal rank-$k$ cost, we show that a simple upper bound in this setting is $\frac{k^2}{\varepsilon^2}\cdot\text{poly}\log(nd)$ recourse, which we further improve to $\frac{k^{3/2}}{\varepsilon^2}\cdot\text{poly}\log(nd)$ for integer-bounded matrices and $\frac{k}{\varepsilon^2}\cdot\text{poly}\log(nd)$ for data streams with polynomial online condition number. We also show that $\Omega\left(\frac{k}{\varepsilon}\log\frac{n}{k}\right)$ recourse is necessary for any algorithm that maintains a multiplicative $(1+\varepsilon)$-approximation to the optimal low-rank cost, even if the full input is known in advance. Finally, we perform a number of empirical evaluations to complement our theoretical guarantees, demonstrating the efficacy of our algorithms in practice.
|
optimization
|
https://openreview.net/pdf?id=3sJ4zKToW6
| 2025-09-19T05:52:21
| 3
|
[
{
"id": "G9M6d2dYmo",
"forum": "3sJ4zKToW6",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14297/Reviewer_ex4U",
"reviewer_name": "Reviewer_ex4U",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper studies the problem of low-rank approximation (LRA). Specifically given a matrix $A$, this work studies the problem of approximating $A$ with a matrix $AV^TV$, such that $||A-AV^TV||_F^2 \\leq (1+\\epsilon)||A-A_k||_F^2$ where $A_k$ is the best rank-$k$ approximation of $A$, and rows of $A$ arrive sequentially in time. This is a very widely studied problem. A primary contribution of the paper is the following problem: given rows of $A$ arrive sequentially over time, define measure called recourse computed as $||P_t - P_t-1||_F^2$ where $P_t$ is the orthogonal projection matrix corresponding to $V^TV$ at time $t$. This work studies LRA through the lens of recourse and demonstrates that -- 1) when the goal is to approximate $A$ with $\\epsilon$ additive error, an $O(k\\log(nd)/\\epsilon)$ recourse is feasible, 2) when the goal is to approximate $A$ with $1+\\epsilon$ multiplicative error, an $O(k^2\\text{poly}\\log(nd)/\\epsilon^2)$ recourse is feasible. This is further improved to $k^{3/2}\\text{poly}\\log(nd)/\\epsilon^2$ for matrices with integer entries that are bounded, and $k^{2}\\text{poly}\\log(nd)/\\epsilon^2$ when condition number is bounded. A lower bound of $\\Omega(k\\log(n/k)/\\epsilon)$ is also shown for $1+\\epsilon$ multiplicative approximation algorithms.",
"strengths": "- The problem setting is interesting, i.e., studying of the subspace corresponding to streaming updates and understand how subspace can differ for different algorithms is an interesting idea. Mostly because one can imagine having to reconstruct the approximation matrix again and again if the subspace is changing significantly (e.g., as stated for the Frequent directions method). \n\n- I have only glossed over the proofs, which are pretty simple, and believe they are correct. Given the authors present a lower bound to the problem, it helps us ground the theoretical upper bounds presented in this work. \n\n- I really appreciate the simple algorithms which helps maintain the approximation at time $t$. The algorithm basically checks importance of an incoming row by first identifying the bottom $\\sqrt{k}$ singular values among the top $k$ singular vectors. If these vectors have very low spectral contribution, they are \"disposable\" and so can be replaced by any incoming vector. \n\n- Good empirical evaluations help us understand how the algorithms presented here work in practice.",
"weaknesses": "- There is a significant body of work on rank-$k$ approximation algorithms. However, only frequent directions has been empirically compared against. I am surprised as why this is the case.\n\n- Most of the theoretical contributions are really derivative of prior work. While I really appreciate the problem setting, the contributions are really understanding how the subspace are drifting with time given the subspace approximation algorithm. \n\n- Algorithm 2 requires computing SVD at each round in the worst case. So while one may be easily able to reduce recourse, the run time of the algorithms grows with $tk^3$, which seems expensive!\n\n- For distributional shifts, just checking the bottom $\\sqrt{k}$ may not be enough, e.g., for windowed algorithms due to Braverman et. al. 2020, or the works of Musco-Musco, or Cohen et. al. on online leverage score sampling, one might need to re-evaluate samples which was heavy at some point and might become of low importance in future. What do we do then?",
"questions": "Please check the weaknesses section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T04:49:50",
"modification_date": "2025-11-12T13:19:00",
"review_url": "https://openreview.net/forum?id=3sJ4zKToW6¬eId=G9M6d2dYmo",
"license": "CC BY 4.0"
},
{
"id": "Yx0pNuAbRL",
"forum": "3sJ4zKToW6",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14297/Reviewer_odAo",
"reviewer_name": "Reviewer_odAo",
"rating": 8,
"confidence": 2,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper studies the online low rank approximation problem. In this problem one is given a matrix $A\\in \\mathbb{R}^{n\\times d}$ with integer entries bounded by $M$ and whose rows $a_1,\\ldots, a_n$ arrive one by one. Let $A^{t}$ denote the matrix of the first $t$ rows at time $t\\in [n]$, the goal is to output a matrix $V^{t}\\in \\mathbb{R}^{k\\times n}$ such that $A^{t}(V^{t})^T V^{t}$ is a $1+\\epsilon$ approximation rank $k$ approximation to $A^{t}$ at every time $t\\in [n]$. In particular the paper studies \\emph{consistent} algorithms for online low rank approximation. More precisely the goal is to minimize the recourse of the algorithm measured as \n\n$$\\sum_{t=1}^n \\|P_A-P_B\\|_F^2$$\n\nfor $A = V_t$ and $B = V_{t-1}$ where $P_{V}$ is the orthogonal projection matrix of the subspace spanned by $V$. Thus a low recourse algorithm ensures that the subspace of low rank approximation does not change drastically on average over the stream. Note that recourse of $nk$ can be achieved trivially by computing the best rank $k$ approximation at each step from scratch.\n\nThe first result shown in the paper is an algorithm that achieves a recourse of $O((k/\\epsilon)\\log(ndM))$ but incurs an additional additive error $\\epsilon \\|A^{t}\\|_F^2$ at each step $t\\in [n]$. Furthermore they show that a recourse of $O((k/\\epsilon^2)\\log^2 n)$ assuming an online condition number of poly$(n)$ and no additive error. Finally for matrices with integer entries they also obtain improved bounds. On the negative side they prove a lower bound on the recourse of $\\Omega(n/\\epsilon \\log(n/k))$ for obtaining a $1+\\epsilon$ approximation at every time step by constructing a hard sequence of rows.",
"strengths": "The paper introduces a novel model for studying low rank approximation of consistency. Consistent and low recourse algorithms have been studied for other problems in data science thus making low rank approximation a natural problem to study from a theoretical perspective. Moreover the authors show good upper and lower bounds for low rank approximation in this model.",
"weaknesses": "The paper does not have many weaknesses but one is that although the problem has a natural theoretical motivation, it would be interesting for the authors to discuss more concrete practical motivations for studying low recourse algorithms for low rank approximation.",
"questions": "Although the authors very briefly discuss potential applications in feature engineering, it would be interesting to see if there are any concrete applications of low recourse algorithms for low rank approximation.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T22:44:45",
"modification_date": "2025-11-12T13:19:00",
"review_url": "https://openreview.net/forum?id=3sJ4zKToW6¬eId=Yx0pNuAbRL",
"license": "CC BY 4.0"
},
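The recourse measure defined in the two summaries above is straightforward to state in code. The numpy sketch below evaluates it for the trivial recompute-from-scratch strategy (which, as noted above, can incur recourse as large as $nk$); it is illustrative only and is not the paper's low-recourse algorithm.

```python
import numpy as np

def proj(V: np.ndarray) -> np.ndarray:
    """Orthogonal projection onto the row space of V (V is k x d with
    orthonormal rows, e.g. the top-k right singular vectors)."""
    return V.T @ V

def recourse(rows: np.ndarray, k: int) -> float:
    """Total recourse sum_t ||P_t - P_{t-1}||_F^2 for the naive strategy
    that recomputes the best rank-k subspace after every arriving row."""
    total, P_prev = 0.0, None
    for t in range(1, rows.shape[0] + 1):
        _, _, Vt = np.linalg.svd(rows[:t], full_matrices=False)
        P = proj(Vt[:k])  # projection onto the top-k right singular vectors
        if P_prev is not None:
            total += np.linalg.norm(P - P_prev, "fro") ** 2
        P_prev = P
    return total

rng = np.random.default_rng(0)
print(recourse(rng.standard_normal((50, 10)), k=3))
```

Swapping in different basis-selection rules for the SVD step makes the stability comparison raised in the vaVm review (e.g., FrequentDirections bouncing between solutions) directly measurable.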
{
"id": "SiSxXPBsD0",
"forum": "3sJ4zKToW6",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14297/Reviewer_vaVm",
"reviewer_name": "Reviewer_vaVm",
"rating": 8,
"confidence": 5,
"soundness": 4,
"contribution": 3,
"presentation": 4,
"summary": "This paper studies low rank approximation in a streaming model, where in addition to standard goals of small space and update time, they also do not want the provided solution to change too much across the lifetime of the stream. This is modeled as \"recourse\" which is the sum of squared distances between subspaces at each step.",
"strengths": "Standard streaming subspace approximation algorithms like FrequentDirections and Ridge Leverage Score Sampling can have very large recourse as shown theoretically, and empirically on real data. That means they can bounce between solutions. \n\nThe algorithm is subtle yet simple. It is careful about when to update the estimate with extra care to not to change the subspace too much if it does not have to. It reminds me of distributed streaming algorithms (e.g., https://arxiv.org/abs/1404.7571) that try to minimize total communication of updates, but with focus on ensuring a stable answer. \n\nI think the recourse setting is natural and useful. It is a nice way to quantify stability of the sketch. \n\nA strength is that feels like a complete paper on this topic. It has a variety of upper bounds for additive and relative error, and shows lower bounds on recourse that asymptotically match the upper bounds. There are basic experiments that show that the algorithm is not just theoretical, but works in practice -- whereas baselines like FrequentDirections does not.",
"weaknesses": "Nothing to note.",
"questions": "None.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T04:24:26",
"modification_date": "2025-11-12T13:19:01",
"review_url": "https://openreview.net/forum?id=3sJ4zKToW6¬eId=SiSxXPBsD0",
"license": "CC BY 4.0"
}
] |
|
OyIJvyyB3R
|
https://openreview.net/forum?id=OyIJvyyB3R
|
LLM2Fx-Tools: Tool Calling for Music Post-Production
| 5.5
| 3.5
|
[
4,
8,
6,
4
] |
[
3,
3,
4,
4
] | 4
|
[
"Music Post Production",
"Fx Chain Generation",
"Tool Calling"
] |
This paper introduces LLM2Fx-Tools, a multimodal tool-calling framework that generates executable sequences of audio effects (Fx-chain) for music post-production. LLM2Fx-Tools uses a large language model (LLM) to understand audio inputs, select audio effects types, determine their order, and estimate parameters, guided by chain-of-thought (CoT) planning. We also present LP-Fx, a new instruction-following dataset with structured CoT annotations and tool calls for audio effects modules. Experiments show that LLM2Fx-Tools can infer an Fx-chain and its parameters from pairs of unprocessed and processed audio, enabled by autoregressive sequence modeling, tool calling, and CoT reasoning. We further validate the system in a style transfer setting, where audio effects information is transferred from a reference source and applied to new content. Finally, LLM-as-a-judge evaluation demonstrates that our approach generates appropriate CoT reasoning and responses for music production queries. To our knowledge, this is the first work to apply LLM-based tool calling to audio effects modules, enabling interpretable and controllable music production where users can incorporate their own audio plugins.
|
LLM2Fx-Tools is a framework that uses a multimodal LLM to automatically generate executable audio effect chains (as tools), chain-of-thought reasoning, and natural language responses.
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=OyIJvyyB3R
| 2025-09-19T13:42:11
| 4
|
[
{
"id": "B7fQjc5nan",
"forum": "OyIJvyyB3R",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16141/Reviewer_Rbd9",
"reviewer_name": "Reviewer_Rbd9",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper applies existing LLM tool calling techniques to audio effects chain generation. The system uses chain-of-thought to predict effect sequences from audio. The authors create a 101K synthetic dataset LP-Fx generated by Gemini 2.5. In my opinion, the work is mostly an application of existing techniques to a new domain without significant technical innovation.",
"strengths": "1. First work applying structured tool calling to audio effects chains\n2. Comprehensive evaluation across multiple metrics",
"weaknesses": "1. The paper misuses terminology. \"Audio style transfer\" has established meaning in audio processing literature (timbre/texture transformation). This work only does audio effects parameter transfer, which is much narrower. This creates confusion with existing work and is misleading.\n2. Limited technical novelty. The method is standard multimodal LLM fine-tuning: audio encoder -> adapter -> LLM with LoRA. This is direct application of existing techniques without methodological contribution.\n3. No human evaluation despite claims about \"interpretability\" and \"controllable music production\". All evaluation is automatic metrics or LLM-as-a-judge, which has known reliability issues.\n4. Missing details: How does the model handle effects outside the 9 trained types? The paper claims \"users can incorporate their own audio plugins\" but provides no evidence.\n5. The work is mostly experimental validation that LLM tool calling works for this task. The technical contribution is limited.",
"questions": "Recommand using the term \"audio effects (parameter) transfer\" instead of \"audio style transfer\"\n\nGemini 2.5 Flash gets MAE 0.32 vs your 0.23, but it achieves 78% accuracy vs your 80%. Why does such a large model fail so badly on parameters? Does it indicate setup issue?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T11:53:54",
"modification_date": "2025-11-12T13:45:07",
"review_url": "https://openreview.net/forum?id=OyIJvyyB3R¬eId=B7fQjc5nan",
"license": "CC BY 4.0"
},
{
"id": "DNjRsVuCU3",
"forum": "OyIJvyyB3R",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16141/Reviewer_CSNR",
"reviewer_name": "Reviewer_CSNR",
"rating": 8,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 4,
"summary": "The paper presents LLM2Fx-Tools, a novel tool-calling framework which for a given set of audio inputs, provides executable audio effects sequences (Fx-chain), with appropriate CoT reasoning and responses . The paper also introduces LP-FX, a new instruction following dataset with CoT annotations and tools calls for audio effects. The authors provide experimental validation for their approach along with a demo page for subjective verification.",
"strengths": "Originality: The paper's key novelty lies in formulating Fx-chain estimation as a LLM-based tool call problem. The autoregressive modeling for LLMs is able to learn the sequential order of audio effect calls as opposed to systems only based on audio features. \n\nQuality: The paper has detailed experiments around the three evaluation tasks, reverse engineering to show the model can predict tool-chain for paired audios, blind style transfer to show the generalization capability to unseen audios, and natural language language generation to showcase interpretability. Across all the tasks, LLM2FX-Tools results are strong as compared to the baselines. The authors conduct ablations to show the importance of optimization decisions (CoT, NTL, MST). \n\nClarity: The paper is well written with clear notation and figures, with appendix covering all necessary details for dataset generation, evaluation and LLM prompting.\n\nSignificance: LLM2FX-Tools framework treats the audio effect modules as external non-differentiable tools, which makes the framework flexible to diverse real world scenarios.The authors also present a LP-FX dataset with CoT annotations and tool calls, which is beneficial for future research",
"weaknesses": "For the reverse engineering task, the strongest baseline is Multi-task regression, which comes close even without relying on the ordering of Fx-chain, while the LLM is learning that information. The authors can consider adding a pairwise-ordering loss for the 9 audio effects for the multi-task baselines.\n\nFor the style transfer task, the style of the output appears to be mixed between the input and reference audio while listening subjectively to the demo examples. A comparison with differential audio effects style transfer baseline would be quite important to see (https://arxiv.org/pdf/2207.08759). The objective evaluation can benefit on a larger set than 100 test samples\n\nAs covered in the limitation section, the paper relies on Fx-normalization and Fx-removal preprocessing, while ideally, they should be modeling as part of the tool-calling framework. Experimental validation is limited to single instruments, datasets are relatively smaller in size with ~2k tracks.",
"questions": "For equation 4, $N$ corresponds to the sequence length or training examples? Previous equation 3 has $t$, which is over the sequence, while for $t$ in equation 4 represents the upper range of the number token. It will be helpful if we can improve the notation a little here.\n\nPlease consider having a subjective evaluation and a stronger baseline like (https://arxiv.org/pdf/2207.08759) for the style transfer task. \n\nGemini 2.5 Pro is used both for dataset generation judge in 3.2, and natural language evaluation judge for table 4. Could we use a different judge to remove the bias for this case?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T08:17:05",
"modification_date": "2025-11-12T13:45:07",
"review_url": "https://openreview.net/forum?id=OyIJvyyB3R¬eId=DNjRsVuCU3",
"license": "CC BY 4.0"
},
{
"id": "nsPS8cJzfi",
"forum": "OyIJvyyB3R",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16141/Reviewer_A8mk",
"reviewer_name": "Reviewer_A8mk",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper presents LLM2Fx-Tools, a tool-calling framework that generates sequences of audio effects (Fx-chains) for music post-production. The authors also introduce a new dataset, LP-Fx, to support this task. The topic is novel and of clear interest to the audio and music research community. The proposed system builds upon Fx-Encoder++ and fine-tunes Qwen-4B to achieve the goal of automatic Fx-chain generation.",
"strengths": "- The proposed approach to Fx-chain estimation is novel. The integration of Chain-of-Thought (CoT) reasoning into the training framework is also interesting.\n\n- The problem is clearly defined and well motivated.\n\n- The methodology for dataset creation is clearly described and systematically organized.",
"weaknesses": "- In Figure 1, the meaning of FxNorm is unclear.\n\n- In Figure 2, why does e_{SEP} consist of two tokens?\n\n- Below Equation (1), what is N? What is param_n?\n\n- In Section 2.1, second paragraph, the authors mention “handle both tasks.” What exactly are the two tasks?\n\n- In Section 2.1, the term “secondary task” is introduced but not clearly defined.\n\n- In Section 2.2 (Audio Encoder), why was Fx-Encoder++ chosen over other possible encoders? How might different audio encoders influence system performance?\n\n- The writing in Section 2.3 (Number Token Loss) and Equation (4) needs improvement for clarity. The statement “a key problem with Cross Entropy is that it treats all incorrect predictions equally” is vague—please elaborate on how this issue is addressed in your proposed loss. Overall, Subsection 2.3 and Equation (4) are difficult to follow.",
"questions": "- Will the training dataset and training code be released for reproducibility in future work?\n\n- Will the evaluation dataset and evaluation code also be made publicly available?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T22:01:42",
"modification_date": "2025-11-12T13:45:07",
"review_url": "https://openreview.net/forum?id=OyIJvyyB3R¬eId=nsPS8cJzfi",
"license": "CC BY 4.0"
},
{
"id": "AhiFImc0KX",
"forum": "OyIJvyyB3R",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16141/Reviewer_Cr7x",
"reviewer_name": "Reviewer_Cr7x",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes a new framework (including a model, dataset, and overall methodology) for music post-production based on a multimodal LLM. This is evaluated on inferring an Fx-chain, but also on style transfer and using the LLM as a judge.",
"strengths": "* This is a relatively unexplored application domain in terms of using multimodal LLMs for music post-production tasks.\n* The overall choice of models, the task definition, training process, dataset creation, and evaluation methodology are all appropriate and technically sound.",
"weaknesses": "* My main source of criticism for this paper is that this work overall uses established AI methods for a new application. There is little to no AI innovation taking place, and I feel that this work would be more suitable for a venue specializing in audio production or audio engineering (e.g. AES conferences or conventions, ICASSP, or DAFx). I do not see any compelling evidence for inclusion in ICLR.",
"questions": "As stated above, I fully agree with the design choices made by the authors in terms of methodology, problem setting, evaluation, and the new dataset based on MedleyDB. The paper is also well structured and well written, and I also appreciate the inclusion of a section on Limitations, which is not something always present in ICLR submissions. \n\nMy only comment is the one stated above, on whether is ICLR the most suitable venue for a work which does not offer any innovation in AI, but rather uses established AI methods with some minor modifications as to support a research question and problem directly situated in the field of audio engineering. As such I might recommend this paper on being marginally out of scope of ICLR.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T00:08:45",
"modification_date": "2025-11-12T13:45:08",
"review_url": "https://openreview.net/forum?id=OyIJvyyB3R¬eId=AhiFImc0KX",
"license": "CC BY 4.0"
}
] |
rcsZNV9A5j
|
https://openreview.net/forum?id=rcsZNV9A5j
|
Flash Multi-Head Feed-Forward Network
| 5
| 3.75
|
[
6,
4,
4,
6
] |
[
3,
4,
4,
4
] | 4
|
[
"Machine Learning Systems",
"Machine Learning",
"Software-Hardware Codesign",
"Natural Language Processing",
"Transformer",
"Deep Learning",
"Model Architecture"
] |
We explore Multi-Head FFN (MH-FFN) as a replacement of FFN in the Transformer architecture, motivated by the structural similarity between single-head attention and FFN. While multi-head mechanisms enhance expressivity in attention, naively applying them to FFNs faces two challenges: memory consumption scaling with the head count, and an imbalanced ratio between the growing intermediate size and the fixed head dimension as models scale, which degrades scalability and expressive power. To address these challenges, we propose Flash Multi-Head FFN (FlashMHF), with two key innovations: an I/O-aware fused kernel computing outputs online in SRAM akin to FlashAttention, and a design using dynamically weighted parallel sub-networks to maintain a balanced ratio between intermediate and head dimensions. Validated on models from 128M to 1.3B parameters, FlashMHF consistently improves perplexity and downstream task accuracy over SwiGLU FFNs, while reducing peak memory usage by 3-5x and accelerating inference by up to 1.08x. Our work establishes the multi-head design as a superior architectural principle for FFNs, presenting FlashMHF as a powerful, efficient, and scalable alternative to FFNs in Transformers.
|
We propose a novel multi-head FFN that achieves better transformer model performance while using 3-5x less memory and running 1.00-1.08x faster than standard SwiGLU FFNs.
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
|
https://openreview.net/pdf?id=rcsZNV9A5j
| 2025-09-16T16:13:44
| 4
|
[
{
"id": "TygVX9zSRX",
"forum": "rcsZNV9A5j",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7175/Reviewer_i2pJ",
"reviewer_name": "Reviewer_i2pJ",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper proposes FlashMHF, which is a multi-head feed-forward networks (FFNs) for Transformers. Motivated by the structural similarity between single-head attention and FFNs, the authors identify two key challenges in current MHF: memory explosion and scaling imbalance. FlashMHF solves these problems by pairing a scaled-balanced parallel FFN subnetworks designed with a high-efficiency, IO-aware kernel. Experiments on models from 128M to 1.3B parameters show improvements in perplexity and downstream tasks, with 3-5x memory reduction and up to 1.08x inference speedup.",
"strengths": "The motivation of the paper is well justified with two problems in naive multi-head attention. There are proper ablations such as head dimensions and model scales, and downstream task evaluations are standard. The idea is straightforward by using sub-networks to group different heads to solve the problems, yet results are pretty impressive.",
"weaknesses": "1. In section 3.2.1 the authors say their FlashMHF functions Luke a dense MoE, however, there is no direct comparison against dense MoE architecture. \n2. There is no ablations for “Flash”, so it’s hard to isolate memory savings from the architectural change and the kernel optimization. \n3. Lack of large scale experiments to verify the scaling effect - largest model size is 1.3B.\n4. About presentation, Figure 3a doesn’t show multihead which is confusing. Also, the biggest innovation of it seems to come from MoE, while the title is a bit misleading, “mixture of dense multi-head FFN experts” might be better cover what the core idea is.",
"questions": "Multi-head needs to be concat so we do need to materialize the full tensor. In section 3.2.2 it says “The key to solving the memory explosion lies in the multi-head design itself” seems wrong, shouldn’t it be in the expert design, because we can do the weighted average accumulation?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T17:26:11",
"modification_date": "2025-11-12T11:48:53",
"review_url": "https://openreview.net/forum?id=rcsZNV9A5j¬eId=TygVX9zSRX",
"license": "CC BY 4.0"
},
{
"id": "idLqumvNwL",
"forum": "rcsZNV9A5j",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7175/Reviewer_B4B2",
"reviewer_name": "Reviewer_B4B2",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This paper proposes FlashMHF, which introduces the multi-head mechanism (in attention) into the Feed-Forward Network (FFN) module while balancing performance scalability and implementation efficiency. The proposed design addresses two key issues in naïve multi-head FFNs (i.e., scaling imbalance and memory explosion), by decomposing parallel FFN subnetworks and implementing an I/O-aware flash kernel. Experimental results on small- and medium-scale models demonstrate that FlashMHF outperforms the de facto SwiGLU baseline in language modeling tasks, while significantly reducing memory usage.",
"strengths": "1. The paper is well-motivated and clearly written. It identifies two key challenges of multi-head FFNs and proposes corresponding solutions, which are empirically validated.\n2. FlashMHF achieves lower PPL and better downstream performance than SwiGLU and other baselines. The architectural design choices are well-supported by effective ablation studies, including the multi-head mechanism, SwiGLU component, and subnetwork structure.\n3. lt is implemented with a kernel design analogous to FlashAttention, ensuring the feasibility of training large-scale language models efficiently.",
"weaknesses": "1. **Source of subnetwork advantages.**\nThe authors claim that the benefit of the subnetwork design mainly arises from a more balanced expansion ratio. However, for a given head, the parallel subnetwork computation essentially differs from a dense FFN only by an additional **blockwise gating** applied to intermediate activations. When concatenated, this does not effectively control the expansion ratio and finally increases by $d_{model}/d_h$ compared to a standard SwiGLU. I suspect the improvement stems from added nonlinearity (gating with normalization) rather than from the parallel sub-net. In other words, applying a similar gating mechanism to a standard SwiGLU might also yield certain loss improvement (as the experiments show, the standalone multi-head design brings no clear advantage at larger scales).\n\n2. **Fairness of speed evaluation.**\nThe speed comparison appears somewhat unfair. To match parameter counts, the authors add four extra layers (1/5 of total) for baseline; but deeper networks are inherently slower due to layer-wise serialization, whereas **increasing width** would be a fairer adjustment. Moreover, the attention computation also scales with depth, thus latency improvements only become apparent at longer sequence lengths (as shown in Fig. 7b).\n\n3. **Memory evaluation setup.**\nThe memory comparison setup should be clarified. SwiGLU can also be easily adapted to a flash kernel, and many frameworks **fuse activation functions** to reduce memory overhead. It is unclear whether the authors’ implementation accounts for these optimizations. Considering that modern LLM training almost universally employs **gradient checkpointing**, FFN intermediate activations are typically recomputed rather than stored, which should be reflected in a more realistic baseline comparison.",
"questions": "1. What is the specific implementation of SwiGLU used in the efficiency evaluation? Is activation recomputation (gradient checkpointing) applied during measurement?\n2. In the GLU formulation, you define $\\mathbf{Q}=\\mathbf{X}$ (Eq. 3). However, in the multi-head FFN definition, a separate projection $\\mathbf{W}_{in}$ is introduced to obtain the query (Eq.10). Is this design choice be empirically validated as necessary?\n3. Compared to PKV, the activation function used in PAttention [1] might serve as a more solid baseline for comparison.\n4. Given that most sota Transformer architectures now adopt MoE designs, how do the authors view the compatibility and potential integration of FlashMHF with MoE architectures?\n\n[1] TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters. 2024.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T04:03:09",
"modification_date": "2025-11-12T11:48:53",
"review_url": "https://openreview.net/forum?id=rcsZNV9A5j¬eId=idLqumvNwL",
"license": "CC BY 4.0"
},
{
"id": "DiDcnXvEMd",
"forum": "rcsZNV9A5j",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7175/Reviewer_p8ui",
"reviewer_name": "Reviewer_p8ui",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes FlashMHF, a replacement for standard FFNs in Transformer architectures. The core idea is to mirror the Multi-Head design of Attention also in the FFNs implementation. The paper however warns that a naive adaptation incurs scaling issues, both in terms of increasing memory consumption and expressive power degradation. The authors address both of these issues by carefully prescribing how the intermediate activations dimension should scale with model size, and by implementing a fused kernel for FlashMHF which avoids materialising intermediate tensors. Results show how substituting the FlashMHF component with FFNs can boost performance (both on PPL, and downstream tasks evaluations taken from lm-eval-harness), while simultaneously reducing peak memory utilisation, and slightly improving latency.",
"strengths": "- The main motivations behind the choice of architecture modifications are justified reasonably well\n- The analysis is convincing, and the experiments conducted overall complete (although some results could be presented better)",
"weaknesses": "- Novelty is limited: both core methodologies (mirroring MH Attention and improving kernel application via tiling) have already been proposed",
"questions": "__On Novelty__:\nAs I mentioned above, I find the novelty aspect of the paper rather limited. As you yourselves correctly point out, the structural symmetry between sequence-wise Attention and feature-wise FFN (which acts as main justification behind your work) has already been illustrated; the proposal to split FFNs in a multi-head fashion was already (granted, partly) investigated in MH-MoE; the tile-wise implementation of your fused kernel is directly inspired by FlashAttention, and is at the core of the design of efficient parallelisation of MMMs. The most relevant novelty is then given by the proposed re-scaling of the size of the internal components of the MH FFN. I still appreciate the overall execution, but I find this limits the contribution of the paper.\n\n__On Fig7__:\nThe presentation of the results in Sec4.3, and more specifically in Fig7, should be heavily revised, for a number of reasons:\n- From what you write in L415, your “comparison uses a 20-layer FlashMHF and MH-FFN against a 24-layer SwiGLU baseline”, so I’m understanding you’re considering memory consumption and latency in a *forward pass through the whole architecture*, including both FFN/FlashMHF and Attention layers? I believe at this stage it would be more relevant instead to have a direct comparison between the *single* FlashMHF / FFN layer, so to properly identify the improvements introduced by your proposed modification (as the Attention layer is the same in both cases, I take it). To be clear: I do appreciate the result you report (ultimately, the “weight” of the overall architecture is what practitioners mostly care about), but the presence of Attention does dirty the relevant metric. Notice this should play in your favour, too, in that the memory / speedup gains should be more marked. If instead I misunderstood, and you’re considering just FFN/FlashMHF layers, please clarify this in the text.\n- What is the deal with the sequence lengths picked? I was expecting orderly powers of 2, which would make identifying the O(L) trend straightforward at glance. Also, please use a ylog scale, for the same reason\n- Moreover, why picking sequence lengths in the first place? Since you’re focusing on the FFN layer (which applies a perfectly sequence-parallel operation, and acts purely along the feature dimensions), then a scaling trend with respect to feature dimension would be much more relevant, in my opinion. What you’re effectively reporting here is the scaling trend of Attention. Again, it’s not like this result is not useful per se, but the way it’s presented makes it harder to isolate the contribution of your own component, which should be the focus of this section.\n- Finally, and perhaps most importantly, the comparison is not entirely fair: if I understood correctly, you’re using an unoptimised version of SwiGLU (which unnecessarily materialises intermediate tensors) to compare against your own fused kernel for FlashMHF. How much of the gains you’re seeing are due to the tiled implementation of the kernel? Because that same solution could be easily applied to SwiGLU as well, I reckon.\n\n__On Gating__:\nIn L200-218 you describe your chosen per-head expert aggregation mechanism. There is a number of different ways one could go on about aggregating both within and across heads: have you experimented with different methods? Can you expand on the reasoning behind this specific choice? 
Compared to the remainder of the paper, this section is lacking some justifications.\n\n\n__Minor__:\n- In L247 you write: “we synchronize the hyper-parameter settings for the optimizer across all models”. What does this mean? I’m expecting, say, optimal LR’s to vary across architectures, at least in principle. Are you not performing any hyper-parameter sweep whatsoever? And if you’re doing it, are you picking the best for *which* architecture exactly?\n- You’re going down the route of making the FFN more akin to Attention; but there is also the “dual” approach of making attention more akin to FFNs, as explored in “MLP-Mixer: An all-MLP Architecture for Vision”. I don’t think it makes sense to explicitly add a comparison with this architecture, but I would at least mention it, as I believe it’s relevant. Moreover, I was quite surprised to see that there isn’t much work which just goes all the way and substitutes FFN with component-wise attention. Apart from MH-MoE, I could only find “DaViT: Dual Attention Vision Transformers” (again, only applied to vision).\n\n\n__Grammar / Rewording / Formatting__:\n- L51 “we analyse the …, a straightforward … and identify” -> the clause is breaking the flow. Maybe “we analyse the … (a straightforward … ), and identify…”\n- L62 analogous -> analogousLY\n- L89 the equation is hanging: consider prepending something like “We consider the parameters: ”\n- L107 remains -> reTains\n- Eq(1,2,3,…) I think you’re misusing the equivalent-by-definition / delta-equivalent (\\triangleq) symbol. The defined-as symbol (\\eqqcolon) would be much better indicated here, imho\n- L115 define headwise split -> define THE headwise split? Define headwise split AS? (Similarly for headwise concatenation in L121)\n- L119 this operation split -> splitS\n- L120 into $d_h\\times H$…sub tensors? parts? blocks?\n- L131 to overcome these challenges -> which challenges? I reckon it refers to the “practical limitations” above, but it’s rather vague\n- L473: write -> store? Write … to memory?\n- L474: incorporates -> incorporate",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T22:00:24",
"modification_date": "2025-11-12T11:48:54",
"review_url": "https://openreview.net/forum?id=rcsZNV9A5j¬eId=DiDcnXvEMd",
"license": "CC BY 4.0"
},
{
"id": "CtOi7cbV2E",
"forum": "rcsZNV9A5j",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7175/Reviewer_kH78",
"reviewer_name": "Reviewer_kH78",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 4,
"summary": "Motivated by the structural similarity between single-ahead attention and a feed-forward network (FFN), the paper explores multi-head FFNs. To account for the increased memory consumption and other issues, the authors propose a novel architecture, FlashMHF, inspired by FlashAttention which also dynamically weights parallel sub-networks. They find for small models that their design improves perplexity and task accuracy whilst reducing peak memory usage and inference time vs a SwiGLU FFN. This suggests that FlashMHF might be a powerful new architectural component that could replace the FFN in existing transformer architectures.",
"strengths": "1. Good empirical results on the models and tasks tested compared to a standard baseline, clear improvements across a range of downstream tasks\n2. Mathematical exposition of the preliminaries and method is clear and well-written\n3. The method is a satisfying synthesis of existing ideas and innovations in other aspects of the transformer architecture to improve the FFN block\n4. The GPU memory scaling of the proposed FFN architecture is smaller than that of a typical FFN",
"weaknesses": "1. The paper proposes a core architectural innovation for LLMs, but only tests on very small models (<=1.3B parameters).\n2. Three model sizes are tested but they are not plotted together / compared directly so it's not clear how improvements scale with size. There's limited empirical evidence that we should expect the accuracy / perplexity advantages of this architecture to improve with scale, rather than diminish.\n3. It's not clear why scaling imbalance, $d_{ff}/d_h$, is an issue, as stated on line 056. The reference given on line 057 does not address this since neural scaling laws assume normal FFNs, and does not investigate multi-head FFNs. The discussion and evidence given in 3.1 and 4.1 seemingly only addresses one way of scaling the model. There are unstated assumptions in the paper relative to prior work about what ratios are important, and how one would scale a model with multi-head FFNs. To make a claim about these ratios scaling poorly and causing issues, one needs to provide evidence that any way of scaling them would lead to performance degradation relative to using single-head FFNs. Otherwise it can just be argued that scaling them in a different way might resolve this problem naturally, without need for more complex architectures. More explicitly put, for the Naive multi-head FFN baseline you make assumptions such as \"the per-head width is typically kept fixed\" (line 180), and then show this is bad. Why should one keep this fixed then? Why not just scale things in a different way? Additionally, why is it correct to equate the ratio $d_{ff}/d_{model}$ in normal SwiGLU designs with the ratio $d_{ff}/d_h$? Saying this latter ratio is outside of the optimal values found for the former ratio in prior work tells us nothing without additional evidence or reasoning backing up the validity of this comparison.\n4. Framing something that scales linearly as you scale up the model as an \"explosion\" is disingenuous. Typically things are framed as explosions when they scale exponentially. It is not convincing that the memory requirements of the naive multi-head FFN or standard FFN are a critical issue.\n5. You do not conduct enough ablations for the claims on lines 338-343 to be valid.\n6. It's not clear that inference latency reduction results are statistically significant\n7. All plots are given with training steps on the x-axis, not wall clock time. It's unclear how the proposed architecture affects training time.",
"questions": "1. Why does an imbalanced ratio between intermediate FFN size and FFN head dimension degrade scalability and expressive power? (See weakness 3 for related critique and questions)\n2. Why should multiple FFN heads split up the model dimension, and not each use the whole thing, as would be analogous to multi-head attention? Obviously this would lead to greater computation requirements, but perhaps also better performance? It would be nice to see this investigated, though this is a very minor point.\n3. How does the parameter count of FlashMHF scale and compare to a standard FFN?\n4. Does the proposed architecture increase training time for a fixed number of training steps?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T23:46:54",
"modification_date": "2025-11-12T11:48:56",
"review_url": "https://openreview.net/forum?id=rcsZNV9A5j¬eId=CtOi7cbV2E",
"license": "CC BY 4.0"
}
] |
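The reviews above keep returning to the feature-wise headwise split of the FFN and to how the d_ff/d_h ratio behaves as the model scales. For concreteness, here is a minimal PyTorch sketch of the *naive* multi-head FFN baseline the reviewers discuss, not the paper's FlashMHF; the class name, the fixed expansion factor, and the plain GELU MLP body are illustrative assumptions rather than details from the submission.

```python
import torch
import torch.nn as nn


class NaiveMultiHeadFFN(nn.Module):
    """Feature-wise headwise split: d_model is cut into H chunks of width
    d_h = d_model // H, each chunk is processed by its own small two-layer
    MLP, and the head outputs are concatenated back along the feature axis."""

    def __init__(self, d_model: int, n_heads: int, expansion: int = 4):
        super().__init__()
        assert d_model % n_heads == 0, "d_model must be divisible by n_heads"
        self.d_h = d_model // n_heads
        self.heads = nn.ModuleList(
            nn.Sequential(
                nn.Linear(self.d_h, expansion * self.d_h),
                nn.GELU(),
                nn.Linear(expansion * self.d_h, self.d_h),
            )
            for _ in range(n_heads)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); the split is along features, not sequence.
        chunks = x.split(self.d_h, dim=-1)
        return torch.cat([head(c) for head, c in zip(self.heads, chunks)], dim=-1)
```

With d_model fixed and n_heads grown, each head's width d_h shrinks while its intermediate width scales as expansion × d_h, which is exactly the per-head-ratio scaling question Reviewer_kH78 raises in weakness 3.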
eS4MAmmCHy
|
https://openreview.net/forum?id=eS4MAmmCHy
|
PEL-NAS: Search Space Partitioned Architecture Prompt Co-evolutionary LLM-driven Hardware-Aware Neural Architecture Search
| 3.5
| 4
|
[
4,
4,
4,
2
] |
[
4,
4,
4,
4
] | 4
|
[
"Large Language Model",
"Hardware-aware",
"Neural Architecture Search"
] |
Hardware-Aware Neural Architecture Search (HW-NAS) requires joint optimization of accuracy and latency under device constraints.
Traditional supernet-based methods require multiple GPU days per dataset. Large Language Model (LLM)-driven approaches avoid training a large supernet and can provide quick feedback, but we observe an exploration bias: the LLM repeatedly proposes neural network designs within a limited search space and fails to discover architectures across different latency ranges in the whole search space. To address this issue, we propose PEL-NAS: a search space Partitioned, architecture prompt co-Evolutionary and LLM-driven Neural Architecture Search framework that can generate neural networks with high accuracy and low latency at reduced search cost. Our proposed PEL-NAS has three key components: 1) a complexity-driven partitioning engine that divides the search space by complexity to enforce diversity and mitigate exploration bias; 2) an LLM-powered architecture prompt co-evolution operator, in which the LLM first updates a knowledge base of design heuristics based on results from the previous round, then performs a guided evolution algorithm on architectures with prompts that incorporate this knowledge base. Prompts and designs improve together across rounds, which avoids random guesswork and improves efficiency; 3) a zero-cost predictor to avoid training a large number of candidates from scratch. Experimental results show that on HW-NAS-Bench, PEL-NAS can achieve overall higher HV, lower IGD, and up to 54% lower latency than baselines at similar accuracy. Meanwhile, the search cost drops from days to minutes compared with traditional supernet baselines.
|
infrastructure, software libraries, hardware, systems, etc.
|
https://openreview.net/pdf?id=eS4MAmmCHy
| 2025-09-18T03:16:21
| 4
|
[
{
"id": "r5WN4tP0vh",
"forum": "eS4MAmmCHy",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9721/Reviewer_ygWA",
"reviewer_name": "Reviewer_ygWA",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces a novel framework for HW-NAS by partitioning the search space into complexity-based niches and performs an LLM as an evolutionary operator (crossover + mutation) whose prompts and design heuristics co-evolve from round-to-round; candidates are scored training-free using zero-cost proxies to target accuracy–latency Pareto fronts. The experiments are performed on standard benchmarks, demonstrating the effectiveness of PEL-NAS.",
"strengths": "- A novel framework for NAS which used LLM to guide the search process.\n- Strong results on HW-NAS-Bench and ViT search spaces.",
"weaknesses": "- Lack of theoretical support for the LLM-based section. How and why is an LLM-based approach superior to traditional evolutionary search methods? Compared to evolutionary search, LLM-based search is expensive since it requires a pretrained language model.\n- The idea of partitioning the search space into multiple subspaces is not novel, as it has been proposed in prior studies [1]\n- Lack of ablation studies using other LLM models such as DeepSeek or Gemini.\n- Lack of providing the performance curve as the search progresses.\n- The reviewer believes that the framework is general. Instead of focusing on multiple objectives (i.e. accuracy and latency), how about benchmarking this method using accuracy as the sole objective? The authors should compare its performance against Random Search, Reinforcement Learning, and Evolutionary Search, using both single-proxy and ensemble-proxy settings, with and without search space partitioning, to validate the effectiveness of the LLM-based algorithm.\n\n\n\n[1] Few-Shot Neural Architecture Search, Yiyang Zhao et al.",
"questions": "See weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T12:38:09",
"modification_date": "2025-11-12T12:20:18",
"review_url": "https://openreview.net/forum?id=eS4MAmmCHy¬eId=r5WN4tP0vh",
"license": "CC BY 4.0"
},
{
"id": "1COuuMvFR9",
"forum": "eS4MAmmCHy",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9721/Reviewer_xvWB",
"reviewer_name": "Reviewer_xvWB",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes PEL-NAS, a search-space partitioned, architecture prompt co-Evolutionary and LLM-driven NAS. The approach relies on complexity aware partitioning of the search space, an LLM-powered Evolutionary operating guided by an evolving knowledge base, and a training-free evaluation protocol. Experiments on HW-NAS-Bench shows good coverage of the Pareto front and good computational search performance compared to baselines.",
"strengths": "1. PEL-NAS is a sensible NAS approach that demonstrates strong empirical performance and efficiency.",
"weaknesses": "1. The partitioning method is manual, i.e., heavily reliant on the architecture search space. For example, the authors choose conv 3x3 following careful analysis of HW-NAS-Bench which of course doesn't apply to ViTs, so choose Embed Dim and Depth Num for ViT. This is a critical limitation of the method. \n2. Related to the previous point, partitioning is critical to PEL-NAS (Table 5) but it is not clear how sensitive the choice of partitioning (partitioning criteria) is on performance. \n3. The training-free objective evaluation, which leads to the efficient search cost, is not novel and is tied to the choice of benchmark,",
"questions": "1. What is the impact of using a different LLM or variants of the prompt on the performance?\n2. What is an example actual knowledge base produced?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T11:28:08",
"modification_date": "2025-11-12T12:20:18",
"review_url": "https://openreview.net/forum?id=eS4MAmmCHy¬eId=1COuuMvFR9",
"license": "CC BY 4.0"
},
{
"id": "ivDvQH3yMC",
"forum": "eS4MAmmCHy",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9721/Reviewer_w62C",
"reviewer_name": "Reviewer_w62C",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes PEL-NAS, a hardware-aware NAS framework that (i) partitions the search space by simple architectural complexity (e.g., counts of 3×3 convs) to ensure diverse exploration, and (ii) uses an LLM with a persistent knowledge base to co-evolve design rules and prompts for generating candidates. \n\nCandidates are scored without training: accuracy comes from an offline-trained surrogate built on zero-cost proxies, and latency from HW-NAS-Bench; multi-objective selection advances a Pareto set (HV/IGD). On CIFAR-10/100 and ImageNet-16-120 across six devices, PEL-NAS reports higher HV/lower IGD than prior baselines with minutes-level search; a ViT study adapts partitioning to depth/embedding size and uses Auto-Proxy plus measured latency. \n\nAblations suggest partitioning and the surrogate contribute the largest gains, with smaller incremental benefit from the LLM co-evolution.",
"strengths": "- **Originality.** The paper identifies a clear and highly relevant problem: the inherent exploration bias (or mode collapse) of LLMs when applied to the vast NAS search space . The proposed solution, complexity-driven partitioning , is a novel and direct structural intervention to mitigate this specific, LLM-centric bias. This is combined with an LLM that maintains a persistent knowledge base to steer evolution, forming a targeted and well-motivated framework.\n- **Quality.** Consistent Pareto gains (HV↑/IGD↓) across multiple devices and datasets; ablations clearly identify partitioning and the ZC-ensemble surrogate as principal contributors, with LLM co-evolution adding a smaller but positive increment.\n- **Clarity.** The paper is well written and effectively uses clear figures (e.g., Figure 2 , Figure 4 ) and well-organized tables (e.g., Table 1 , Table 5 ) to present its methodology and results.",
"weaknesses": "**Positioning and novelty.**\nThe paper's contributions center on two components, but the ablation study (Table 5) clearly shows that the search space partitioning is the paper's single most critical component; its removal causes the most significant performance degradation . In contrast, the LLM-KB (a form of persistent memory) has precedents in prior work (e.g., LLMatic’s [1] two archives include a prompt archive that stores/updates prompts over the search; RZ-NAS [2] adds explicit reflection modules), making its novelty more incremental.\nWhile partitioning proves essential, a deeper analysis of its generality and robustness would strengthen the paper. Specifically:\n- Lack of Analysis for Generality and Robustness: \n - The partitioning rules appear to be manually defined and may depend on domain-specific heuristics. For instance, the CNN search space is partitioned by the count of nor_conv_3x3 operators, whereas the ViT variant uses entirely different manually designed criteria, Embed Dim and Depth Num. The paper assumes that the number of 3×3 convolutions reliably reflects latency, but this “parameter ≠ latency” paradox is a well-known issue. This raises an open question about whether the current partitioning rule would remain effective if the optimization target were memory or another hardware metric. In such cases, should the partitioning instead be based on memory-bound operators?\n - While the proposed rule works well under the evaluated settings, its stability and generality across architectures remain uncertain. A targeted sensitivity study that varies both the partitioning axis (e.g., FLOPs, latency, parameter count, or memory usage) and the number of partitions, especially under different search-space scales, would help clarify how robust the framework truly is. In particular, comparing the proposed complexity-based partitioning against a random or uniform partition baseline would help confirm whether the performance gains stem from the partitioning principle itself or merely from enforcing any form of structured diversification. More broadly, developing an automated way to infer salient complexity dimensions for new search spaces would make this approach more generalizable and less dependent on manual expert analysis.\n\nThis suggests that the partitioning strategy is an orthogonal contribution, largely independent of the specific LLM-KB searcher. Demonstrating such generality, for example, by applying the partitioning method to other LLM-based searchers, would substantially strengthen the contribution and position it as a general tool for mitigating LLM exploration bias rather than just one component of a single method.\n\n**Clarification of \"Training-Free\" Terminology.**\nThe \"training-free\" terminology warrants clarification. The method is only training-free during the search phase. Its accuracy estimation relies entirely on a pre-trained accuracy surrogate model that must be trained offline. This is explicitly an XGBoost model fit on ZC proxies for CNNs and a pre-existing \"Auto-Proxy\" predictor for ViTs. This approach is distinct from fully training-free methods that use raw ZC proxies directly for ranking without fitting a surrogate, and this distinction should be made clearer.\n\n**Conflated Comparison of Search vs. Estimation Strategies.**\nThe main experimental comparisons (Tables 2 & 3) conflate the paper's search strategy (partitioned LLM) with the performance estimation strategy (surrogate model). 
PEL-NAS is benchmarked against methods using fundamentally different estimators, like supernets (FairNAS , PRP-NAS , and DARTS) or full-training (LLMatic). The resulting cost differences (Table 4) are largely dominated by the choice of estimator (surrogate vs. supernet), which reflects differences in methodological setup rather than isolating the contribution of the proposed search strategy.\n\nSince the proposed LLM-based searcher with search-space partition could, in principle, operate on top of any performance estimator, whether a pre-trained supernet, a learned surrogate (as used in this paper), or even a raw zero-cost proxy score, contrasting it directly against supernet-based pipelines obscures what the LLM searcher itself contributes. A more precise evaluation would hold the performance estimator constant (e.g., using the same trained supernet or surrogate predictor) and compare searchers head-to-head, including Random Search, standard Evolutionary Algorithms, and prior LLM-based approaches such as LLMatic. This would more clearly isolate the effectiveness and efficiency of the proposed partitioned LLM search strategy itself.\n\n[1] Muhammad U. Nasir, Sam Earle, Christopher Cleghorn, Steven James, Julian Togelius. LLMatic: Neural Architecture Search Via Large Language Models And Quality Diversity Optimization. GECCO 2024.\n\n[2] Zipeng_Ji, Guanghui Zhu, Chunfeng Yuan, Yihua Huang. RZ-NAS: Enhancing LLM-guided Neural Architecture Search via Reflective Zero-Cost Strategy. ICML 2025.",
"questions": "**On Sensitivity to the Partitioning Axis and Granularity.**\n- The paper's CNN partitioning strategy is based on the nor_conv_3x3 count , which is identified as the most parameter-heavy operator. Is the general principle simply to partition by the most computationally expensive operator? How would the results change if a different operator, such as nor_conv_1x1, were used as the partitioning axis instead?\n- The paper uses a fixed number of six niches for the HW-NAS-Bench space. How sensitive is the method's final performance (e.g., HV and IGD) to this choice? What would be the impact of using significantly fewer (e.g., 3) or more (e.g., 10) niches? Furthermore, should the optimal number of niches be relative to the overall size and complexity of the search space?\n- As a diagnostic baseline, how does the proposed complexity-based partitioning compare against a random or uniform partition of the same size? Such a comparison could help determine whether the gains arise from the complexity metric itself or from structured diversification in general.\n\n**On Fair Comparisons and the Generality of Partitioning.** To properly isolate the value of the proposed search strategy from the surrogate-based estimator:\n- Could the authors provide results for baselines, including simple ones (e.g., Random Search, standard Evolutionary Algorithm) and, especially, searchers from prior LLM works (like LLMatic or RZ-NAS), that all use the exact same pre-trained ZC-proxy surrogate for evaluation?\n- More importantly, since the partitioning strategy appears to be an orthogonal contribution , could the authors apply this partitioning scheme to other prior LLM searchers (like LLMatic's) to demonstrate that it provides a general and consistent boost to their performance?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T16:35:17",
"modification_date": "2025-11-12T12:20:18",
"review_url": "https://openreview.net/forum?id=eS4MAmmCHy¬eId=ivDvQH3yMC",
"license": "CC BY 4.0"
},
{
"id": "oSrIUUnOOZ",
"forum": "eS4MAmmCHy",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9721/Reviewer_MXNW",
"reviewer_name": "Reviewer_MXNW",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "In this paper, authors propose PEL-NAS, a training-free framework for Hardware-Aware Neural Architecture Search (HW-NAS) that addresses the exploration bias inherent in Large Language Model (LLM)-driven approaches. The authors observe that LLMs tend to generate architectures within limited regions of the search space, exhibiting a form of mode collapse. To address this issue, they propose a framework which contains three components: 1) a complexity-driven partitioning strategy that divides the search space into disjoint niches based on architectural complexity; 2) an LLM-based co-evolution mechanism that maintains and updates a knowledge base while generating new architectures; 3) a zero-cost ensemble predictor for rapid evaluation. Empirical results on HW-NAS-Bench demonstrate superior Pareto front with dramatically reduced search costs.",
"strengths": "1. Authors identified a genuine problem in LLM-based NAS, i.e., exploration bias leading to incomplete Pareto fronts, as well as mode collapse. The complexity-driven partitioning strategy is empirically justified and provides an elegant solution to enforce diversity. Moreover, the complexity-driven partitioning is a kind of targeted structural intervention rather than just mere prompt engineering, demonstrates clear connection to hardware-related model complexity.\n\n2. The dramatic reduction in search cost from GPU days to minutes while achieving promising results addresses a practical limitation in HW-NAS deployment. Besides, the successful extension to ViT search spaces demonstrates the framework’s adaptability beyond the original CNN-focused HW-NAS-Bench experiments.\n\n3. The dual-stage prompt engineering that the LLM alternates between updating a knowledge base and generating architectures, represents a interesting and effective approach to leverage LLMs’ reasoning capabilities while maintain the search memory.",
"weaknesses": "1. One of major concerns regarding this paper (as well as other similar LLM-driven NAS works) is the data contamination problem. For instance, HW-NAS-Bench contains only 15625 architectures and publicly available since 2021 (well before GPT-4’s training cutoff), while GPT-4 and other LLMs are trained on vast web corpora that likely include published NAS papers and their architectures. There is a substantial risk that the LLM might be essentially performing 'retrieval' rather than genuine 'search' or 'discovery'. Although the co-evolution mechanism and knowledge base updates might push the LLM slightly beyond memorisation, it would be better that authors can somehow verify the generated architectures are novel or outside the LLM’s training distribution. \n\n2. Another concern is regarding the search space scalability, the manual identification of complexity-driving operators, e.g., nor_conv_3x3 for CNNs, Embed Dim and Depth Num for ViTs, raises questions about the scalability to novel search spaces. An automated/heuristic partitioning strategy would be more valuable.\n\n3. While the authors identify exploration bias, they did not deeply analyse why LLMs exhibit this behaviour or explore prompt engineering alternatives that might directly address this bias without requiring partitioning. The incomplete analysis of LLM behaviour weakens their claimed corresponding contribution.\n\n4. Some of compared baselines are outdated (e.g., DARTS is from 7 years ago), the proposed framework could benefit from comparisons with more recent training-free NAS methods, e.g., MeCo [1], SWAP [2] or L-SWAG [3], and other diversity-prompting techniques in evolutionary algorithms beyond the mentioned baselines. \n\n \n \n\n[1] Jiang et al., MeCo: Zero-Shot NAS with One Data and Single Forward Pass via Minimum Eigenvalue of Correlation. NeurIPS 2023.\n\n[2] Peng et al., SWAP-NAS: Sample-Wise Activation Patterns for Ultra-fast NAS. ICLR 2024.\n\n[3] Casarin et al., L-SWAG: Layer-Sample Wise Activation with Gradients Information for Zero-Shot NAS on Vision Transformers. CVPR 2025.",
"questions": "1. How sensitive is the proposed method to the number of niches? Have authors experimented with different partitioning granularities?\n\n2. Can the partitioning strategy be automatically/heuristically learned rather than manually defined based on the search space analysis?\n\n3. Can authors perform some memorisation tests? For example, prompt the LLM (e.g., GPT-4) to directly generate architectures from HW-NAS-Bench search space, and see whether it’s feasible, as well as the performance of produced architectures.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-22T16:34:55",
"modification_date": "2025-11-12T12:20:19",
"review_url": "https://openreview.net/forum?id=eS4MAmmCHy¬eId=oSrIUUnOOZ",
"license": "CC BY 4.0"
}
] |
|
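The PEL-NAS reviews above lean heavily on HV and IGD without the metrics being restated anywhere in this record. As a reference point, here is a minimal sketch of a two-objective Pareto filter (accuracy maximized, latency minimized) and the corresponding 2-D hypervolume; the reference-point convention and the normalisation of latency to [0, 1] are assumptions for illustration, not HW-NAS-Bench's actual protocol.

```python
def pareto_front(points):
    """Keep the non-dominated (accuracy, latency) pairs:
    accuracy is maximised, latency is minimised."""
    front = [
        (acc, lat)
        for acc, lat in points
        if not any(
            a >= acc and l <= lat and (a > acc or l < lat)
            for a, l in points
        )
    ]
    return sorted(front, key=lambda p: p[1])  # ascending latency


def hypervolume_2d(front, ref=(0.0, 1.0)):
    """Area dominated by the front w.r.t. the reference point
    ref = (ref_acc, ref_lat); larger is better. Assumes latency is
    normalised so that ref_lat upper-bounds every point on the front."""
    ref_acc, ref_lat = ref
    hv = 0.0
    for i, (acc, lat) in enumerate(front):
        next_lat = front[i + 1][1] if i + 1 < len(front) else ref_lat
        hv += (acc - ref_acc) * (next_lat - lat)
    return hv


# e.g. three architectures; (0.69, 0.80) is dominated and dropped.
front = pareto_front([(0.71, 0.30), (0.74, 0.55), (0.69, 0.80)])
print(front, hypervolume_2d(front))  # [(0.71, 0.3), (0.74, 0.55)] ≈ 0.5105
```

Holding the performance estimator fixed and re-running only the searcher, as Reviewer_w62C asks, would change the points fed into `pareto_front` while keeping the scoring of each point identical, which is what makes HV/IGD comparisons between searchers meaningful.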
MgVNhx5uaa
|
https://openreview.net/forum?id=MgVNhx5uaa
|
ATOM-Bench: From Atoms to Conclusions in Objective Evaluation of Large Multimodal Models Reasoning
| 3
| 3.75
|
[
2,
4,
4,
2
] |
[
4,
4,
3,
4
] | 4
|
[
"multimodal Large Language Models",
"benchmark",
"chain of thought"
] |
Chain-of-Thought (CoT) reasoning has significantly enhanced the ability of Large Multimodal Models (LMMs) to tackle complex image–text tasks, establishing itself as a cornerstone of multimodal learning. Despite significant progress, the impact of CoT on LMMs still lacks objective evaluation and in-depth research. Current CoT evaluation paradigms rely on powerful LLMs as judges of free-form text, but this introduces bias and hallucination from the evaluator itself. Moreover, it may penalize models for stylistic variations rather than genuine reasoning failures, thereby undermining the fairness and reliability of the assessment. To address this gap, we introduce ATOM-Bench, a CoT evaluation framework built on objective atomic questions. ATOM-Bench decomposes complex reasoning tasks into a series of atomic nodes, covering 570 high-resolution real-world images and 2,920 questions across 4 cognitive dimensions and 12 domains, including architecture, text, transportation, culture, climate, and geology. Our benchmark introduces three novel quantitative metrics to objectively analyze reasoning faithfulness, consistency, and robustness. Extensive experiments with 22 LMMs validate the effectiveness of our framework. The results reveal that even the strongest models often exhibit a mismatch between surface-level correctness of final answers and their underlying evidence comprehension, while also exposing cognitive rigidity when faced with objective facts. We believe that ATOM-Bench, as a more objective and diagnostic tool, will advance LMMs toward more reliable and faithful reasoning.
|
We introduce ATOM-Bench, a diagnostic benchmark for evaluating Chain-of-Thought reasoning in Large Multimodal Models via objective atomic questions, spanning 2,920 QAs over 570 real-world images, to address challenges of reasoning reliability.
|
datasets and benchmarks
|
https://openreview.net/pdf?id=MgVNhx5uaa
| 2025-09-18T21:58:39
| 4
|
[
{
"id": "qyea8A8FPG",
"forum": "MgVNhx5uaa",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11801/Reviewer_sAqG",
"reviewer_name": "Reviewer_sAqG",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper presents ATOM-Bench, a benchmark for evaluating reasoning processes in large multimodal models. It reformulates complex reasoning into atomic multiple-choice questions with ground-truth answers to enable objective measurement. The dataset contains 570 real-world images and 2,920 questions covering four cognitive dimensions and twelve subdomains. The authors introduce three metrics to assess reasoning consistency, hallucination rate, and robustness when models are given corrected evidence. Experiments on 22 multimodal models are conducted to analyze their reasoning behavior and the relationship between answer accuracy and reasoning consistency.",
"strengths": "1. Atomic multiple-choice questions provides objective, interpretable, and reproducible evaluation results.\n2. The analysis highlights a clear gap between correctness and reasoning quality, offering concrete empirical observations.",
"weaknesses": "1. The dataset is relatively small and limited in diversity, containing only 570 real-world images, which restricts coverage of varied visual scenes and reduces the stability of cross-model comparisons.\n2. The task scope is overly narrow, as the framework is primarily validated on single-image geo-localization, limiting its generalizability to other multimodal reasoning tasks.\n3. Error analysis remains anecdotal and lacks systematic statistics on error types or cross-model differences, making it difficult to derive actionable insights for model improvement.",
"questions": "1. Could it be extended to cover more complex or diverse multimodal reasoning tasks beyond single-image geolocation?\n2. Could the authors provide a more detailed error analysis with statistics and a deeper look at failure patterns?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T16:18:37",
"modification_date": "2025-11-12T12:47:01",
"review_url": "https://openreview.net/forum?id=MgVNhx5uaa¬eId=qyea8A8FPG",
"license": "CC BY 4.0"
},
{
"id": "X8f3AtT1WA",
"forum": "MgVNhx5uaa",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11801/Reviewer_sR7U",
"reviewer_name": "Reviewer_sR7U",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "- The author proposes a novel atomic-question-based CoT evaluation framework, comparing to the previous benchmark which reasons over the CoT process using traditional \"LLM as a Judge\", the evaluation framework focuses on objective and fairness, including three new evaluation metrics, RCS(reasoning-conclusion support), HI(Hallucinated Inference), RRS(Reasoning Revision Score).\n- Besides the evaluation framework, the authors introduce ATOM-Bench, the benchmark includes 2,920 multi-choice questions across 4 cognitive dimensions and 12 subtasks.\n- Authors also evaluates 22 leading models and provide insights including even the state-of-the-art models like Gemini-2.5-Pro and GPT-5 can show post-hoc fallacies, models often fail to revise errors when confronted with indisputable ground-truth evidence.",
"strengths": "- The originality of the paper is good, the paper focuses on the fair and objective evaluation without llm-as-the-judge process.\n- The dimension of the ATOM-Bench is good, it includes 14 different atomic skills, including spatial reasoning.\n- The evaluation models are sufficient, including 22 leading models with both open-sourced and close-sourced ones.\n- The data curation process is very clear to the readers:\nEach step clearly specifies which data sources were used, what criteria were applied, and how the results were verified.\nHuman verification and inter-annotator agreement evaluation were introduced to ensure annotation quality.\nThe logic behind the question categorization is clear and task-oriented.\n- The structure of the paper is easy to read.",
"weaknesses": "- Overall, I appreciate the readers for the presentation of this paper, however, **examples** are significantly insufficiant for both methodology part and evaluation part. And I think this is one of the biggest weakness of this paper. I search very carefully for more detailed examples in the appendix and only find failure analysis and a few failure examples.\n\n More specifically, authors should provide more examples regarding:\n1. full multimodal reasoning process of a model regarding the answer, how to evaluate based on that example\n2. examples of how atomic tasks compose into complex reasoning\n3. lacks visual illustrations of multimodal input and error analysis\n\n- The CoT evaluation framework and benchmark samples are only applied on geolocation, which according to my knowledge, this pipeline and benchmark could also be used to a broader domain for evaluating reasoning process, e.g. Mathematical Reasoning, Science Reasoning, which also needs step-by-step objective reasoning process in order to successfully answer a question.\n\n- About the evaluation metric, the RCS (Reasoning Consistency Score) and HI (Hallucination Index) is overlapping with each other, e.g. high reasoning consistency scores indicates low hullucination index. The evaluation dimension is not diverse enough. Have you considered other evaluation metrics, for example, evaluate perception error and reasoning error separately. \n\n- Although the benchmark fucos on objective evaluation, the structure of reasoning, the completeness and the soundness of the answer, however, are the keys to evaluate the correctness of the answer, but the paper entirely ignores them.",
"questions": "I provide the following questions for authors:\n- The motivation of the paper is to reduce the biased evaluation of \"llm-as-the-judge\", however, in the evaluation metrics, there is some human threshold. e.g. In RCS, the τ=0.75 is set by human. This step also introduces human bias. Why is τ=0.75? Do you have explanations on it?\n- The author claims that the proposed evaluation metric is more objective and fair than traditional evaluation method. Do you have quantitative results to prove that? For example, sample a subset of Atom-Bench and using standard \"llm-as-the-judge\" process to evaluate and compare it against the proposed evaluation metrics.\n- Can the proposed evaluation framework generalize to other domains except for the geographical reasoning task, e.g. for a broader domain, e.g. in Mathematical Reasoning? If so, could you provide examples regarding how to apply this framework into a broder domain, e.g. Mathematical Reasoning using a standard Benchmark like Mathvista?\n- **Especially**, you should provide more examples regarding: (as listed in weakness)\n1. full multimodal reasoning process of a model regarding the answer, how to evaluate based on that example\n2. examples of how atomic tasks compose into complex reasoning\n3. visual illustrations of multimodal input and error analysis\n\nI will consider raise my score if you can address my concerns listed above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T06:20:46",
"modification_date": "2025-11-12T12:47:02",
"review_url": "https://openreview.net/forum?id=MgVNhx5uaa¬eId=X8f3AtT1WA",
"license": "CC BY 4.0"
},
{
"id": "Q3TyhMYSY4",
"forum": "MgVNhx5uaa",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11801/Reviewer_db8N",
"reviewer_name": "Reviewer_db8N",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper notes that while Chain-of-Thought (CoT) reasoning improves Large Multimodal Models (LMMs) in complex image-text tasks, current CoT evaluation—relying on LLMs as judges—suffers from bias, hallucination, and mispenalizing stylistic variations over real reasoning failures.\nTo address this, it proposes ATOM-Bench: a framework that decomposes complex tasks into atomic questions (covering 570 high-res images, 2,920 questions across 4 cognitive dimensions and 12 domains) and introduces three quantitative metrics (RCS, HI, RRS) to turn subjective evaluation into evidence-based diagnostics, solving the \"black-box evaluating a black-box\" issue.\nExperiments on 22 LMMs show even state-of-the-art models mismatch final answer correctness with evidence comprehension and have cognitive rigidity. The paper contributes an objective, reproducible CoT evaluation framework, the first high-res process-oriented CoT benchmark, and insights into LMMs’ gaps in reasoning faithfulness and flexibility to advance reliable LMM research.",
"strengths": "1. Instead of relying on Large Language Models (LLMs) as judges in traditional paradigms, ATOM-Bench adopts \"atomic questions\" to eliminate the \"black-box evaluating black-box\" dilemma. \n2. The benchmark is built on 570 high-resolution real-world images, validated through human-machine collaboration (including expert cross-reviews of clue authenticity and distractor rationality) to guarantee data quality. \n3. It decomposes complex reasoning tasks into clue-level (CLQ) and conclusion-level (CoLQ) atomic nodes, covering 4 cognitive dimensions and 12 real-world domains. T",
"weaknesses": "1. The benchmark only centers on single-image geolocation, failing to cover complex scenarios like video temporal reasoning or cross-modal generation. It cannot fully measure LMMs’ performance across diverse CoT tasks.\n2. All questions are multiple-choice, with no assessment of free-text reasoning chain generation. It cannot evaluate models’ ability to express logical steps in open text for real-world applications.\n3. Complex reasoning is decomposed into pre-defined \"standard chains,\" ignoring the diverse reasoning paths models may actually take. It fails to reflect models’ real logical decision-making processes.",
"questions": "1. Its atomic decomposition relies on preset logic. Is this decomposition consistent with humans’ actual reasoning paths? \n2. ATOM-Bench lacks samples from low-resource regions (e.g., small countries). Does it plan to supplement such data to improve evaluation comprehensiveness? \n3. It doesn’t specify the weight of image vs. text clues. When clues conflict, can current metrics fairly measure models’ decision rationality?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T17:19:47",
"modification_date": "2025-11-12T12:47:02",
"review_url": "https://openreview.net/forum?id=MgVNhx5uaa¬eId=Q3TyhMYSY4",
"license": "CC BY 4.0"
},
{
"id": "IWuQrflMwV",
"forum": "MgVNhx5uaa",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11801/Reviewer_WwKZ",
"reviewer_name": "Reviewer_WwKZ",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper is about the evaluation of large multimodal model reasoning. The authors claim that they introduce a CoT evaluation framework built on objective atomic questions, covering 570 high-resolution real-world images and 2,920 questions across 4 cognitive dimensions, and 12 domains, including architecture, text, transportation, culture, climate, and geology. They have tested a number of large multimodal models with the proposed evaluation framework.",
"strengths": "1. The authors have evaluated a number of multimodal large language models with the proposed evaluation framework.\n2. The proposed benchmark covers a wide range of domains.",
"weaknesses": "1. The authors claim that \"Current CoT evaluation paradigms rely on powerful LLMs as judges of free-form text, but this introduces bias and hallucination from the evaluator itself.\" However, the proposed evaluation framework also relies on LLMs and shares the weakness of prior works.\n2. The proposed evaluation framework relies on \"rigorous human review and refinement\", as the authors claim. However, it is not clear how the authors ensure rigor in this process. Are there any human errors in this process? How to ensure the human experts have checked the dataset with care?\n3. The evaluation framework relies heavily on human efforts in checking many details in the evaluation, which is not practical and hard to scale with paid annotators. Additionally, the proposed benchmark is relatively small. Although the authors claim it covers a wide range of fields and cognitive dimensions, I wonder whether these fields or dimensions are properly represented with limited data.\n4. Given the fact that the dataset is small and the method is not practical, I doubt whether this paper has made enough contribution.",
"questions": "Please refer to the weakness section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T20:15:06",
"modification_date": "2025-11-12T12:47:03",
"review_url": "https://openreview.net/forum?id=MgVNhx5uaa¬eId=IWuQrflMwV",
"license": "CC BY 4.0"
}
] |
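Reviewer_sR7U's question about the human-set τ=0.75 in RCS is easier to weigh with the mechanics in view. The sketch below shows one plausible way such a threshold could gate a consistency flag over clue-level atomic questions; it is a toy under the stated assumptions (the record schema and the flag rule are hypothetical), since the paper's exact RCS formula is not reproduced in this record.

```python
def consistency_flags(records, tau=0.75):
    """Flag answers that look right for the wrong reasons: the final
    conclusion is correct, but the fraction of correctly answered
    clue-level atomic questions falls below tau. Both field names
    (final_correct, clues) are illustrative, not ATOM-Bench's schema."""
    flags = []
    for rec in records:
        clue_acc = sum(rec["clues"]) / len(rec["clues"])
        flags.append(rec["final_correct"] and clue_acc < tau)
    return flags


# A correct conclusion backed by only 1/3 correct clues gets flagged,
# while lowering tau to 0.25 would let the same record pass.
rec = {"final_correct": True, "clues": [True, False, False]}
print(consistency_flags([rec]))            # [True]
print(consistency_flags([rec], tau=0.25))  # [False]
```

The two calls make the reviewer's point concrete: the flag flips with τ alone, so the choice of threshold directly shapes how much "post-hoc" reasoning the benchmark reports.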
wztR0XcNW9
|
https://openreview.net/forum?id=wztR0XcNW9
|
TopoCore: Unifying Topology Manifolds and Persistent Homology for Data Pruning
| 4
| 3
|
[
4,
2,
6
] |
[
3,
3,
3
] | 3
|
[
"Coreset Selection",
"Topological Data Analysis",
"Persistent Homology",
"Architectural Transferability",
"Data-Efficient Learning",
"Manifold Learning",
"Pretrained Models"
] |
Geometric coreset selection methods, while practical for leveraging pretrained models, are fundamentally unstable. Their reliance on extrinsic geometric metrics makes them highly sensitive to variations in feature embeddings, leading to poor performance when transferring across different network architectures or when dealing with noisy features. We introduce TopoCore, a novel framework that resolves this challenge by leveraging the principles of topology to capture the intrinsic, stable structure of data. TopoCore operates in two stages, (1) utilizing a _topology-aware manifold approximation_ to establish a global low-dimensional embedding of the dataset. Subsequently, (2) it employs _differentiable persistent homology_ to perform a local topological optimization on the manifold embeddings, scoring samples based on their structural complexity. We show that at high pruning rates (e.g., 90\%), our _dual-scale topological approach_ yields a coreset selection method that boosts accuracy with up to 4$\times$ better precision than existing methods. Furthermore, through the inherent stability properties of topology, TopoCore is (a) exceptionally robust to noise perturbations of the feature embeddings and (b) demonstrates superior architecture transferability, improving both accuracy and stability across diverse network architectures. This study demonstrates a promising avenue towards stable and principled topology-based frameworks for robust data-efficient learning.
|
learning on graphs and other geometries & topologies
|
https://openreview.net/pdf?id=wztR0XcNW9
| 2025-09-18T02:54:05
| 3
|
[
{
"id": "p1cclI53pH",
"forum": "wztR0XcNW9",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9698/Reviewer_Sq9q",
"reviewer_name": "Reviewer_Sq9q",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper addresses the problem of coreset selection, i.e. a small representative subset of a large dataset that minimizes the degradation in model performance and allows for faster training and reduced storage. Although existing geometry-based methods do not require an expensive training, they rely on extrinsic metrics that make them sensitive to variations in feature embeddings. The authors propose TopoCore, a two-stage method for coreset selection that utilizes topology to accurately approximate the underlying manifold of the data. To preserve the global structure, during the first stage, feature embeddings of deep neural network are projected onto a low-dimensional manifold with UMAP. To preserve the local structure, during the second stage topological persistence of points is maximized independently for each class. The coreset selection is based on the TopologyScore that combines Density Score, reflecting global representativeness, and Persistence Score, reflecting local topological complexity. The empirical evaluation includes comparison with several baseline methods in both training-based and training-free scenarios, analysis of method’s performance when feature embedding model is varied. The authors also analyze the TopoCore robustness to the noise injected into feature embeddings.",
"strengths": "- The paper proposes a novel topology-based view on the problem of coreset selection that leverages the geometric methods.\n- Experimental results demonstrate that TopoCore outperforms benchmark methods, especially at high pruning rate and on more complex datasets.\n- TopoCore is more robust to noise in the feature space, especially at the higher pruning rates (70-90%). \n- TopoCore provides better results across a wide range of embedding model choice.",
"weaknesses": "- Although the paper provides some evidence for the choice of UMAP, a more recent works [1][2][3], which were shown to outperform UMAP with better preservation of data topology, are not considered for comparison and/or improvement of TopoCore.\n- Experimental part is limited. As far as I understand, the experiments focus on the test accuracy of the ResNet-family models (ResNet-18, ResNet-50) for different pruning rates and embedding models. The evaluation lacks results for more recent architectures, for example, transformers, and estimation of other properties such as quality of transfer learning / domain adaptation.\n- The paper does not provide any estimate on the computational cost of the proposed procedure. Is TopoCore more computationally intensive than the benchmark methods?\n\nMinor: The notion of prototype is often used in the main text but formal definition is given only in the appendix.\n\n[1] M. Moor et al. Topological autoencoders. ICLR, 2020.\n[2] I. Trofimov et al. Learning topology-preserving data representations. ICLR, 2023.\n[3] E. Tulchinskii et al. RTD-Lite: scalable topological analysis for comparing weighted graphs in learning tasks. AISTATS, 2025.",
"questions": "Please, see weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T03:21:38",
"modification_date": "2025-11-12T12:20:06",
"review_url": "https://openreview.net/forum?id=wztR0XcNW9¬eId=p1cclI53pH",
"license": "CC BY 4.0"
},
{
"id": "xDDQjbq6df",
"forum": "wztR0XcNW9",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9698/Reviewer_y5KN",
"reviewer_name": "Reviewer_y5KN",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper introduces TopoCore, a method for coreset selection. It is a combination of dimensionality reduction, non-parametric density estimation and persistent homology. Then, coresets are used for further training of ResNet-18. \nExperimental results show that the proposed method slightly outperforms baseline.",
"strengths": "The paper proposes a new approach to coreset selection. \nThis is one of the few applications of multipersistence to deep learning.",
"weaknesses": "1) The paper is hard to understand. Some notions like \"Hilbert decomposition signed measure\" are not defined.\n2) Some details of the method are missing (see Questions)\n3) The difference is no statistically significant w.r.t. baselines in many cases (Table 5 in Appendix).\nPlease include statistical tests to validate significance.\n4) Improvements over Random selection is quite small. I doubt that the method is of practical importance.\n5) A relevant publications is missing:\n\nTrofimov, I., Cherniavskii, D., Tulchinskii, E., Balabin, N., Burnaev, E., & Barannikov, S. (2023). Learning topology-preserving data representations. arXiv preprint arXiv:2302.00136.",
"questions": "1) As far as I understood, persistence scores are calculated for every class separately. Are they summed next? \n2) The optimization of L_{pers} can naturally lead to a degenerate solution, like points very far from each other, which maximizes persistence. How do you handle it?\n3) Is TopologyScore maximized or minimized or minimized?\n4) In Table 1, why TopoCore exhibits different metrics in \"no training dynamics\" and \"with training dynamics\" blocks?\nI assume that the difference must be only in baselines.\n5) Some important details are hard to understand from the paper. How L_{proj} is optimized? Together with TopologyScore or not?\nHow the coreset is selected? Is should be a subset of a dataset, but I can't find details.\nWhere are similarities p_{ij} are taken from? etc.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-23T21:47:27",
"modification_date": "2025-11-12T12:20:07",
"review_url": "https://openreview.net/forum?id=wztR0XcNW9¬eId=xDDQjbq6df",
"license": "CC BY 4.0"
},
{
"id": "pYAdWkt3a1",
"forum": "wztR0XcNW9",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9698/Reviewer_hXms",
"reviewer_name": "Reviewer_hXms",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The paper addresses the coreset selection problem: choosing a small subset of training data that maintains nearly the same model performance as the full dataset. It introduces a training-free approach that operates on frozen embeddings, viewing the dataset as a point cloud and using both global manifold density and local topological persistence to identify samples essential to the intrinsic structure. On benchmark image datasets, the method consistently achieves higher retention and lower variance than geometric baselines, especially under high pruning, showing stability and robustness across architectures, though its experiments are limited to vision tasks and lack compute analysis.",
"strengths": "1. **Originality and clarity.** A training free coreset that combines manifold density with local topological persistence is a fresh, well motivated idea and the method is clearly described for replication.\n\n2. **Strong empirical results.** Consistently high retention and lower variance than geometric baselines, especially at high pruning, plus sensible ablations on mixing weights and optimization depth.\n\n3. **Practical robustness.** Works across multiple backbones with good transfer and noise robustness, indicating the selection signal is less model dependent than distance based methods.",
"weaknesses": "1. **Limited scope.** Evaluation is restricted to vision benchmarks; no NLP or other modalities are tested, which weakens generality claims.\n\n2. **Dependence on embeddings and projection**. Results hinge on the quality of frozen features and the chosen manifold projector, with limited guidance on hyperparameters or stability across settings.\n\n3. **Unclear computational costs.** No clear wall clock, memory, or scaling analysis for kNN construction and persistence steps, so the cost–accuracy tradeoff is unclear.\n\n4. **Quantitative evidence is incomplete.** The paper’s claims, like *“up to 4× better precision” and improved proxy-to-target transfer*, are not consistently backed by tables, and the sensitivity of results to k-NN and manifold-projection settings remains largely unexplored.",
"questions": "1. How sensitive are results to the manifold projection choice and its hyperparameters?\n\n2. Please provide runtime and memory comparisons vs baselines on CIFAR and ImageNet to clarify scalability.\n\n3. Do you have any non-vision results to support generality, for example ANLI or IMDB with a frozen RoBERTa encoder (D2 paper).\n\nI like this paper and find the direction promising. I am at marginal accept and am willing to increase my score if the authors address my concerns in the rebuttal with concrete evidence.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-18T05:37:25",
"modification_date": "2025-11-12T12:20:07",
"review_url": "https://openreview.net/forum?id=wztR0XcNW9¬eId=pYAdWkt3a1",
"license": "CC BY 4.0"
}
] |
|
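Reviewer_y5KN asks whether TopologyScore is maximised or minimised and how the Density and Persistence scores combine; the paper's exact formula is not quoted in this record, so the sketch below only mirrors the described spirit: a kNN-based density term for global representativeness plus a 0-dimensional-persistence proxy for local structure, mixed with a weight alpha. The nearest-neighbour distance as a stand-in for the H0 death time, the z-normalisation, and alpha itself are all illustrative assumptions, not TopoCore's implementation.

```python
import numpy as np


def topology_style_scores(emb, k=10, alpha=0.5):
    """Density-plus-persistence style per-sample score on (low-dim)
    manifold embeddings; samples would be selected by descending
    score. Not TopoCore's exact TopologyScore; see assumptions above."""
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    knn = np.sort(d, axis=1)[:, :k]
    density = 1.0 / knn.mean(axis=1)   # large in well-populated regions
    persistence = knn[:, 0]            # isolated points "die" later in H0

    def z(s):
        return (s - s.mean()) / (s.std() + 1e-8)

    return alpha * z(density) + (1.0 - alpha) * z(persistence)


# 90% pruning == keep the top-scoring 10%, typically per class:
rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 2))
keep = np.argsort(-topology_style_scores(emb))[: len(emb) // 10]
```

Under this reading the combined score is maximised, and the two terms pull in opposite directions (dense hubs vs. long-lived outliers), which is exactly why the reviewer's question about degenerate, far-apart solutions matters.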
WnRzN4U8Y8
|
https://openreview.net/forum?id=WnRzN4U8Y8
|
WIMFRIS: WIndow Mamba Fusion and Parameter Efficient Tuning for Referring Image Segmentation
| 5
| 4.5
|
[
4,
6,
4,
6
] |
[
5,
5,
3,
5
] | 4
|
[
"Referring image segmentation",
"parameter efficient tuning",
"computer vision"
] |
Existing Parameter-Efficient Tuning (PET) methods for Referring Image Segmentation (RIS) primarily focus on layer-wise feature alignment, often neglecting the crucial role of a neck module for the intermediate fusion of aggregated multi-scale features, which creates a significant performance bottleneck. To address this limitation, we introduce WIMFRIS, a novel framework that establishes a powerful neck architecture alongside a simple yet effective PET strategy. At its core is our proposed HMF block, which first aggregates multi-scale features and then employs a novel WMF module to perform effective intermediate fusion. This WMF module leverages non-overlapping window partitioning to mitigate the information decay problem inherent in SSMs while ensuring rich local-global context interaction. Furthermore, our PET strategy enhances primary alignment with an MTA for robust textual priors, an MSA for precise vision-language fusion, and learnable emphasis parameters for adaptive stage-wise feature weighting. Extensive experiments demonstrate that WIMFRIS achieves new state-of-the-art performance across all public RIS benchmarks.
|
This paper introduces WIMFRIS, a framework that achieves state-of-the-art results in referring image segmentation by proposing a novel HMF neck module to efficiently fuse text with visual features, overcoming a key performance bottleneck in prior methods.
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=WnRzN4U8Y8
| 2025-09-20T14:00:25
| 4
|
[
{
"id": "l3NeqmvthW",
"forum": "WnRzN4U8Y8",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23757/Reviewer_N61Y",
"reviewer_name": "Reviewer_N61Y",
"rating": 4,
"confidence": 5,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper presents a parameter-efficient framework that integrates a window-based intermediate fusion neck (HMF) and lightweight adapters (MTA, MSA, and emphasis parameters) to enhance vision–language alignment for referring image segmentation.",
"strengths": "- The paper introduces a Hierarchical Mamba Fusion (HMF) block, which performs intermediate vision–language fusion by aggregating multi-scale features and applying a window-based Mamba module (WMF).\n- A parameter-efficient tuning (PET) strategy is presented, consisting of a Mamba Text Adapter (MTA) for modeling textual priors, a Multi-Scale Aligner (MSA) with RFMixer and cross-attention for visual–text alignment, and learnable emphasis parameters for adaptive layer weighting.\n- The overall framework, WIMFRIS, integrates these components and is experimentally compared against existing PET-based and full fine-tuning methods on multiple RIS benchmarks.",
"weaknesses": "* Lack of Novelty\n\nThe paper shows limited novelty. The **PET part** closely follows DETRIS, essentially extending its parameter-efficient tuning framework with minor Mamba-based modifications. The **neck design** heavily overlaps with the fusion architecture in fixation phase in SaFiRe, both adopting window-based Mamba fusion for intermediate vision-language alignment. Overall, the work mainly integrates these existing ideas rather than introducing a substantively new contribution.\n\n\n* Incomplete Manuscript\n\nThe paper appears **incomplete**. Section 3.2 is unfinished, and the crucial description of the **task decoder** is missing. This omission disrupts the continuity between Sections 2.3 and 2.4. The authors should carefully verify whether the submitted version is the complete manuscript.\n\n\n* Unfair and Limited Comparison\n\nFor Table 1\n\n\n1. **Unfair Comparison :**\nTo ensure fairness, (1) the parameters of PET-based methods should be adjusted to achieve **comparable model sizes**, and (2) the **backbones of all compared methods** should be unified.\n\n2. **Limited Comparison with State-of-the-Arts:**\nMore PET-based approaches should be included, as previous works (e.g., ETRIS, DETRIS, RISCLIP) have done, especially those involving **backbone-side modality fusion** in RIS, such as **PWAM in LAVT**, **SDF in VLT**, and **CFE in RISCLIP**, as well as classical parameter-efficient tuning methods like **LoRA** and **Adapter**.\n\n3. **Marginal Improvement of the WMF Neck:**\nCompared with **DETRIS**, the improvements achieved by the proposed **WMF Neck** are quite marginal.\n\n4. **Insufficient Comparison :**\nA more comprehensive comparison is needed to substantiate the claimed advantages of the proposed neck method, including detailed analyses of **parameter counts**, **computational cost (GFLOPs)**, and **inference speed**, particularly in comparisons with **ETRIS/DETRIS necks**.\n\nFor Table 2\n\n1. **Inconsistent Metrics:**\n Table 2 mixes **mIoU** and **oIoU** without clarification. While RISCLIP, DETRIS, and WIMFRIS use **mIoU**, most other methods report **oIoU**. In particular, for works like **CGFormer** and **Polyformer**, which provide both metrics, the authors still report their **oIoU** values. Since **mIoU** is generally higher than **oIoU** on the RefCOCO family datasets, this inconsistency makes the performance comparison **unreliable**.\n2. **RISCLIP Issue:**\n According to the authors’ own definition (line 44, “…keeping the vast majority of the backbone parameters frozen”), RISCLIP also freezes its CLIP backbone and should be considered a parameter-efficient tuning method. Moreover, the results of **RISCLIP-L** are missing, which appear **significantly higher** than those of the proposed “Ours-L” model (trained on RefCOCO+, mIoU: **RISCLIP-L** 74.38 / 78.77 / 66.84 vs. 
**Ours-L** 71.9 / 76.2 / 67.2).\n\n\n* Efficiency Analysis\n\nAlthough this work emphasizes the **PET framework** and uses the **efficient Mamba architecture**, more detailed **efficiency analyses** should be provided—specifically **GFLOPs**, **inference speed**, and preferably **FPS**.\n\n\n* Minor Issues\n\nIn **Table 3(a)**, the content does not match the caption: *4×4* is **not** the smallest window size.\n\n\n\n***I would be happy to revise my score if the author addresses these points.***\n\n\n\n---\n\n**References:**\n\nDETRIS: Densely Connected Parameter-Efficient Tuning for Referring Image Segmentation AAAI2025\n\nSaFiRe: SaFiRe: Saccade-Fixation Reiteration with Mamba for Referring Image Segmentation NeurIPS 2025\n\nLAVT: Language-Aware Vision Transformer for Referring Image Segmentation CVPR2022\n\nVLT: Vision-Language Transformer and Query Generation for Referring Segmentation TPAMI2023\n\nRISCLIP:Extending CLIP’s Image-Text Alignment to Referring Image Segmentation NAACL2024\n\nLoRA: Low-Rank Adaptation of Large Language Models. ICLR2022\n\nParameter-Efficient Transfer Learning for NLP. ICML2019\n\nCGFormer: Contrastive Grouping with Transformer for Referring Image Segmentation CVPR2023\n\nPolyFormer: Referring Image Segmentation as Sequential Polygon Generation CVPR2023",
"questions": "* Could you clarify the **task decoder design**?\n\n* In Table 1, which IoU metric is used—**mIoU** or **oIoU**? ETRIS reports oIoU from the original paper, but DETRIS uses mIoU.\n\n* In Table 2, please clarify metric issue and the RISCLIP issue mentioned in W1-B.\n\n* What are the **inference speed** and **GFLOPs** of the proposed model?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T19:11:04",
"modification_date": "2025-11-12T18:19:52",
"review_url": "https://openreview.net/forum?id=WnRzN4U8Y8¬eId=l3NeqmvthW",
"license": "CC BY 4.0"
},
{
"id": "F6qig8fkUO",
"forum": "WnRzN4U8Y8",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23757/Reviewer_WGQk",
"reviewer_name": "Reviewer_WGQk",
"rating": 6,
"confidence": 5,
"soundness": 3,
"contribution": 4,
"presentation": 3,
"summary": "This paper introduces WIMFRIS, a framework for Referring Image Segmentationthat focuses on both a novel intermediate fusion neck architecture (the Hierarchical Mamba Fusion, or HMF, block) and a parameter-efficient tuning strategy. The HMF block leverages a Window Mamba Fuser module to effectively aggregate and fuse multi-scale vision and language features, using window partitioning to tackle the exponential decay in information typical of state-space models. The PET strategy employs adapters to efficiently align textual and visual representations and a learnable stage-wise emphasis mechanism. Extensive experiments are conducted on major RIS benchmarks, demonstrating state-of-the-art results for WIMFRIS compared to both PET-based and full fine-tuning methods.",
"strengths": "- WIMFRIS achieves state-of-the-art or highly competitive performance across all standard RIS benchmarks (RefCOCO, RefCOCO+, G-Ref), outperforming previous parameter-efficient and full-tuning baselines. Table 2 clearly demonstrates these gains, including mixed-data setups.\n- Multiple ablation tables systematically dissect the contributions of each module and architectural choice.\n- The schematic diagrams provide clear breakdowns of the model pipeline, supporting the text’s descriptions of modular design and the flow of visual and textual feature processing. The visualizations offer compelling qualitative evidence for improved segmentation, especially in challenging situations (e.g., clutter, occlusion).\n- The paper carefully characterizes the underlying exponential decay issue in SSM-based fusion, and the model’s windowed approach is well justified both mathematically and empirically.\n- WIMFRIS demonstrates competitive results while tuning a very small fraction of backbone parameters, highlighting the value for practical deployment.\n- The explicit, detailed description of contrastive, dice, and alignment losses (and their weighting) makes reproduction feasible and testable.",
"weaknesses": "- While MSA adapters and MTA are described and visualized in Figure 2, the specific methodology for choosing insertion layers for adapters in different backbones is only loosely justified. There is a missed opportunity for a principled, possibly automated or analytical policy for placement, and no ablation on layer choice is provided.\n- Although Table 3 (a) explores performance trade-offs for window size, the choice of optimal $4 \\times 4$ is only empirically justified. There is little theoretical or dataset-specific reasoning for why this size generalizes, and exploring task- or scale-adaptive policies would strengthen claims of robustness.\n- There are several grammatical errors and awkward phrasings, as well as the use of slightly non-standard abbreviations in the tables (e.g., \"vol\", \"m/s/6\", \"m/sfI\" in Table 1), which may disrupt readability and hinder quick assimilation for a broad audience.",
"questions": "- Can the authors provide a rationale for the placement of PET adapters (MSA, MTA) at specific depths in the vision/text backbone? Have they considered or tested more adaptive/learned strategies for insertion, and can they provide ablations or guidelines for optimal selection?\n\n- How is the concatenation between text class tokens and visual patch windows actually handled in practice (e.g., with respect to normalization, possible channel mismatch, and possible overfitting due to repetitive text tokens)? Would normalization before SSM scans improve performance or stability?\n\n- Have the authors empirically measured the actual decay rate of long-range dependencies for varying window sizes in SSM, and if so, can those be reported? Is the optimal window size truly dataset/task dependent?\n\n- Are there notable scenarios where the windowed approach harms segmentation accuracy, e.g., in very small or oddly-shaped object instances, or when referring expressions are ambiguous or highly context-dependent?\n\n- Will the complete code (including all adapter implementations and ablation regimes) be released for reproducibility, and if so, under what license and conditions?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T14:48:49",
"modification_date": "2025-11-12T18:19:52",
"review_url": "https://openreview.net/forum?id=WnRzN4U8Y8¬eId=F6qig8fkUO",
"license": "CC BY 4.0"
},
{
"id": "MinqdN4erx",
"forum": "WnRzN4U8Y8",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23757/Reviewer_CZEK",
"reviewer_name": "Reviewer_CZEK",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This paper proposes a novel parameter-efficient tuning (PET) method named WIMFRIS for referring image segmentation. In contrast to existing PET methods that primarily focus on layer-wise feature alignment and are struggle to aggregate multi-scale features, the proposed approach introduces a simple yet effective neck architecture based on the Mamba module. WIMFRIS achieves state-of-the-art performance on standard RIS benchmarks, demonstrating both efficiency and strong segmentation capability.",
"strengths": "- The paper proposes a new efficient parameter-efficient tuning (PET)–based referring image segmentation (RIS) approach named WIMFRIS.\n- The proposed algorithm enhances efficiency by replacing conventional blocks with an HMF block that actively leverages the Mamba architecture. In addition, it introduces several novel components—an SSM-based MTA, an MSA robust to multiple receptive fields, and an RFMixer—which together contribute to more precise vision-language fusion.\n- The method achieves state-of-the-art performance on popular RIS benchmarks, demonstrating both effectiveness and robustness.",
"weaknesses": "- Structural Issues in Writing\n - In the Abstract, abbreviations such as HMF and WMF appear without their full names or descriptions, making it difficult for readers to understand them.\n - Figure 1 lacks an explanation of the HMF module, requiring readers to infer that WMF is a sub-module of HMF only from context.\n- #Params of PET and Performance Comparison\n - When comparing with existing PET methods, it would be fair to keep the number of PET parameters (#params) consistent across models. According to Table 1, when DINOv2-B/14 is used as the vision encoder, the proposed method shows only a slight improvement in performance compared to DETRIS, even though it uses more parameters. This raises concerns that the effectiveness of WIMFRIS may not be scalable.\n- Limited Novelty\n - The paper proposes several modules (e.g., WMF, HMF, MSA, MTA), but the architectural novelty of each component seems limited. For instance, the HMF module appears to replace multiple cross-attention layers with a more efficient Mamba-based structure, but the use of Mamba itself is not novel. Similarly, the MSA and RFMixer are designed to handle multiple receptive fields, but this concept is not entirely new.\n - The paper would benefit from additional discussion or evidence to substantiate the novelty of these architectural contributions.\n- Lack of Ablation Studies\n - As mentioned above, the paper lacks experiments that demonstrate the effectiveness and novelty of the proposed modules. For example, it would strengthen the work to include comparisons between MSA/RFMixer and baseline or vanilla methods for handling multiple receptive fields.\n - Table 3-(a) appears more like an engineering-oriented study rather than one providing clear scientific insight.",
"questions": "Please provide your responses with reference to the weaknesses mentioned above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T00:16:57",
"modification_date": "2025-11-12T18:19:52",
"review_url": "https://openreview.net/forum?id=WnRzN4U8Y8¬eId=MinqdN4erx",
"license": "CC BY 4.0"
},
{
"id": "i5RjdUj9Fg",
"forum": "WnRzN4U8Y8",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23757/Reviewer_UdVq",
"reviewer_name": "Reviewer_UdVq",
"rating": 6,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "WIMFRIS introduces a neck-heavy, parameter-efficient RIS framework that aggregates multi-scale DINOv2 features, fuses them with CLIP text via a windowed Mamba block, and adaptively re-weights each stage, setting new SOTA mIoU on RefCOCO/+/g with < 3 % trainable params.",
"strengths": "1. First to plug a windowed SSM neck (WMF) into RIS; mitigates exponential decay of vanilla Mamba.\n2. Learnable emphasis per stage is simple yet novel for PET.\n3. Exhaustive ablations: window size, kernel configs, PET modules all explored.\n4. Plug-in HMF boosts ETRIS & DETRIS (Table 1), proving generic utility.",
"weaknesses": "1. All results are fine-tuned; real-world deployment often lacks target-domain labels.\n2. WMF prepends text to windows, but vision never feeds back to text; may miss visual disambiguation cues.\n3. Parameter efficiency ≠ inference speed; window partitioning + SSM may hurt parallelism.",
"questions": "See weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T11:05:37",
"modification_date": "2025-11-12T18:19:53",
"review_url": "https://openreview.net/forum?id=WnRzN4U8Y8¬eId=i5RjdUj9Fg",
"license": "CC BY 4.0"
}
] |
zDI2G8t0of
|
https://openreview.net/forum?id=zDI2G8t0of
|
A Statistical Benchmark for Diffusion Posterior Sampling Algorithms
| 5.5
| 4
|
[
4,
8,
4,
6
] |
[
4,
5,
3,
4
] | 4
|
[
"Diffusion models",
"Bayesian inverse problems",
"statistical evaluation",
"Gibbs sampling"
] |
We propose a statistical benchmark for diffusion posterior sampling (DPS) algorithms in linear inverse problems. Our test signals are discretized Lévy processes whose posteriors admit efficient Gibbs methods. These Gibbs methods provide gold-standard posterior samples for direct, distribution-level comparisons with DPS algorithms. They also serve as oracle denoisers in the reverse diffusion, which enables the isolation of the error that arises from the approximations to the likelihood score. We instantiate the benchmark with the minimum-mean-squared-error optimality gap and posterior coverage tests and evaluate popular algorithms on the inverse problems of denoising, deconvolution, imputation, and reconstruction from partial Fourier measurements. We release the benchmark code at https://github.com/emblem-saying/dps-benchmark. The repository exposes simple plug-in interfaces, reference scripts, and config-driven runs so that new algorithms can be added and evaluated with minimal effort. We invite the community to contribute and report results.
|
We made an evaluation pipeline for diffusion posterior sampling algorithms for Bayesian linear inverse problems that relies on the construction of priors with known posteriors that we can efficiently sample from.
|
datasets and benchmarks
|
https://openreview.net/pdf?id=zDI2G8t0of
| 2025-09-19T00:36:58
| 4
|
[
{
"id": "qh8Nh3DeU4",
"forum": "zDI2G8t0of",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13084/Reviewer_tkeZ",
"reviewer_name": "Reviewer_tkeZ",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "- The authors introduce a benchmark suite for evaluating algorithms designed to solve linear inverse problems with diffusion model priors\n- The benchmark is built on a synthetic setup derived from discretized Lévy processes\n- It hence include setting of heavy-tailed/power-law–like distributions beyond the Gaussian case\n- The key motivation lies in the fact that Lévy processes possess explicit marginal distributions and can be targeted using Gibbs sampling\n- This property allows the benchmark to generate ground-truth posterior samples (from inverse problem and denoising posterior) for quantitative comparison across algorithms",
"strengths": "- The paper is well-written and accompanied with concise explanations in the appendix\n- The motivation of the paper is well articulated namely for principled benchmarking in diffusion-based inverse problem solvers\n- The proposed benchmark is a valuable contribution, as it extends evaluation on Gaussian setup to a broader family of distributions",
"weaknesses": "**Overstated or misleading claims**\nThe repeated use of the term \"oracle\", e.g., Lines 56, 129, 277, 355 is misleading.\nThe samples used in the benchmark are produced via Gibbs sampling—an iterative procedure—hence they are approximate, not exact. The quality of these samples depends on choices such as burn-in time, which are hyperparameters of the framework.\nThis issue becomes more apparent when the benchmark is applied to algorithms requiring gradients of the denoiser (Line 257-263 and equation (60)): the paper substitutes the latter with a covariance estimator of $X_0 | X_t,$ and hence further deviating from the notion of an \"oracle\".\n\n\n**Template for posterior samplers**\nThe proposed benchmark template seems overly restrictive. By focusing on algorithms that use only the denoiser, it neglects methods that require the Jacobian of the denoiser.\nAlthough the paper connects this to the covariance $Cov(X_0 \\mid X_t)$, estimating this covariance is far more computationally demanding and less stable, and therefore it downgrade the claim that the benchmark offers \"oracle\" quantities with minimal approximation error.\n\n**Evaluation design**\n- The inclusion of learned denoisers in the evaluation is conceptually inconsistent with the paper’s stated goal of removing approximation errors (Section 1.1).\nIf the benchmark aims to isolate algorithmic performance, learned denoisers reintroduce training-dependent variability. While the authors justify this by citing robustness testing, the notion of robustness is loose and in practice requires hyperparameter tuning, which introduces additional confounding factors.\n- The experimental comparison is limited. Only 3 algorithms are evaluated, and these do not represent the diversity of available approaches, e.g., optimization-based, variational, or midpoint-guided methods; see the literature in [1] and [2]\n\n**Remarks and minor issues**\n\n- In background, rephrase the statement in Line 132 about DDPM, sampling in fact depend on several parameters and it is bold to say \" researchers typically use\"; I would argue that frequently DDIM sampling is used with $\\eta = 0$ (simulating the probability-flow ODE) for sharp samples with few diffusion steps\n- The used abbreviation **DPS** is already/actually the name of a well-known algorithm in diffusion models and inverse problems [4], hence the abbreviation might be misleading using it here to refer to something else may cause confusion.\n- The authors may also consider adding the following reference on inverse problems benchmarks [3]\n- Line 288: The statement that DiffPIR is an extension of C-DPS is incorrect. DiffPIR follows a distinct formulation based on quadratic half-splitting with an auxiliary variable and does not rely on the VJP of the denoiser.\n\n\n---\n\n.. [1] Daras, Giannis, et al. \"A survey on diffusion models for inverse problems.\" arXiv preprint arXiv:2410.00083 (2024).\n\n.. [2] Oliviero-Durmus, Alain, et al. \"Generative modelling meets Bayesian inference: a new paradigm for inverse problems.\" Philosophical Transactions A 383.2299 (2025): 20240334.\n\n.. [3] Zheng, Hongkai, et al. \"Inversebench: Benchmarking plug-and-play diffusion priors for inverse problems in physical sciences.\" arXiv preprint arXiv:2503.11043 (2025).\n\n.. [4] Chung, Hyungjin, et al. \"Diffusion posterior sampling for general noisy inverse problems.\" arXiv preprint arXiv:2209.14687 (2022).",
"questions": "- I generally found the figures hard to understand and interpret, I'm referring namely to figure 1, it says that it shows reverse using the oracle denoiser, but it is not clear, similarly, for figure 3, it is hard to interpret namely to say wether the algorithms performs well or not \n- can the authors provides hints/explanation on the derivation of equation (13)\n- the authors claim that introduced framework can also assess the approach where the conditional components is learned (Line: 168-170), but it is not clear how it can be achieved given that in some tasks the likelihood is not known, e.g. tasks such deraining or dehazing, see for instance [1]; yet the benchmark is built on the ability to explicitly write the posteriors/marginals and target them using Gibbs sampling\n\n- A more broad question: did the authors think about how the benchmark can be extend to nonlinear inverse problems ?\n\n---\n\n.. [1] Wang, Hanting, et al. \"IRBridge: Solving Image Restoration Bridge with Pre-trained Generative Diffusion Models.\" arXiv preprint arXiv:2505.24406 (2025).",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T18:47:06",
"modification_date": "2025-11-12T13:03:31",
"review_url": "https://openreview.net/forum?id=zDI2G8t0of¬eId=qh8Nh3DeU4",
"license": "CC BY 4.0"
},
{
"id": "Ecx5TSO86m",
"forum": "zDI2G8t0of",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13084/Reviewer_bQ8j",
"reviewer_name": "Reviewer_bQ8j",
"rating": 8,
"confidence": 5,
"soundness": 4,
"contribution": 4,
"presentation": 3,
"summary": "Diffusion posterior sampling algorithms have become prominent methods for sampling from posterior diffusion with a denoising diffusion model prior. While many methods have been proposed in the recent years, most of the interesting benchmarks do not come with ground-truth posterior samples to which one can compare against. The aim of this paper is to close this gap by proposing a statistical benchmark that mimicks the behaviour of realistic data (power-law-like extremes as stated in the paper). To this end the authors consider the posterior associated to Lévy processes and use an efficient Gibbs sampler to obtain gold-standard posterior samples that serve as reference.",
"strengths": "- This paper tackles a fundamental problem in the evaluation of diffusion posterior samplers and proposes a very useful benchmark which in my opinion could be useful to the community and should be present in all the forthcoming papers. \n- The model is general enough to contain different instantiations such as Laplace and spike and slab and thus goes beyond the existing gaussian mixture toy examples. \n- The paper is rather well-written and quite pedagogical, I enjoyed reading it.",
"weaknesses": "The only weakness I see is the structuring of the main paper. For example I think that some parts of the related works (such as the first two paragraphs) could be moved to the appendix as they are slightly relevant to the content of the paper. This space could be used to provide for example more background on the GLM framework, as one needs to go to the appendix to read more interesting details about it. \nI also think that Figure 1 and 2 are misplaced as at this stage of the paper the Lévy process is not introduced and we don't know yet what St(1) means.",
"questions": "I have a few suggestions and related works to be considered: \n- I think it would have been interesting to include samples from a conditional diffusion model, by either training the conditional denoiser or estimating the denoiser using Monte Carlo samples as is done for DPS methods. I believe that it could be relevant since it provides a lower bound on the performance that one hopes to achieve with DPS methods. \n- [1] considers an actual real world setting where gold standard samples can be obtained using MCMC. \n- The toy Gaussian mixture benchmark is introduced in [2, 3] \n\n[1] Cardoso, G.V. and Pereira, M., 2025. Predictive posterior sampling from non-stationnary Gaussian process priors via Diffusion models with application to climate data. \n[2] Cardoso, G., Idrissi, Y.J.E., Corff, S.L. and Moulines, E., 2023. Monte Carlo guided diffusion for Bayesian linear inverse problems. \n[3] Boys, B., Girolami, M., Pidstrigach, J., Reich, S., Mosca, A. and Akyildiz, O.D., 2023. Tweedie moment projected diffusions for inverse problems.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T08:46:03",
"modification_date": "2025-11-12T13:03:32",
"review_url": "https://openreview.net/forum?id=zDI2G8t0of¬eId=Ecx5TSO86m",
"license": "CC BY 4.0"
},
{
"id": "XUACQbtL5B",
"forum": "zDI2G8t0of",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13084/Reviewer_TmEt",
"reviewer_name": "Reviewer_TmEt",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces a statistical benchmark for evaluating diffusion posterior sampling algorithms using discretized Lévy processes with tractable Gibbs posteriors as ground truth. While the framework enables rigorous distribution-level comparisons, the evaluation is severely limited to low-dimensional (d=64) linear inverse problems, raising serious concerns about scalability and practical relevance to realistic imaging applications.",
"strengths": "Developing a benchmark for posterior sampling in high-dimensional problems is important.",
"weaknesses": "• All experiments use d=64 signals with only linear operators. No evidence is provided that the framework scales to realistic dimensions (e.g., 256×256 images) or nonlinear problems, fundamentally limiting the practical applicability and making it unclear whether findings transfer to problems researchers actually solve.\n\n• The authors cite power-law phenomena in finance and images to motivate heavy-tailed priors, but never demonstrate that their 1D discretized Lévy processes meaningfully capture structure in realistic signals. The connection to actual image statistics remains unsubstantiated.\n\n• Table 4 shows learned denoisers often match or exceed oracle performance, undermining claims about isolating likelihood approximation errors. The paper doesn't establish whether likelihood errors dominate versus other sources (discretization, hyperparameter sensitivity), weakening the diagnostic utility argument.\n\n• DPS algorithms are tuned with learned denoisers but evaluated with oracle denoisers using the same hyperparameters (lines 276-278). This mismatch means oracle results may be suboptimal, contradicting claims about properly isolating algorithmic errors.\n\n• Claims of \"efficient implementations\" and \"acceptable runtimes\" (lines 231-234, 822-823) lack any quantitative evidence; no runtime comparisons, memory usage, or scalability analysis is provided to substantiate efficiency claims or assess practical feasibility at higher dimensions.",
"questions": "Does this benchmark can be used for amortized diffusion sampling methods, i.e., learning the full posterior?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T23:07:14",
"modification_date": "2025-11-12T13:03:32",
"review_url": "https://openreview.net/forum?id=zDI2G8t0of¬eId=XUACQbtL5B",
"license": "CC BY 4.0"
},
{
"id": "5qBVdgq9kk",
"forum": "zDI2G8t0of",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13084/Reviewer_pM9c",
"reviewer_name": "Reviewer_pM9c",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "The authors propose a new benchmark for evaluating different posterior sampling algorithms using diffusion models (dubbed DPS in this paper; Different from DPS [1]), where the posterior samples can be computed analytically, so that the ground truth is given. \nPrevious *benchmarks* that admit analytical posterior samples were constrained to settings where the prior is a mixture of Gaussians, which largely differs from the natural data statistics. The prior distributions considered in this paper is much larger, and the authors propose methods to efficiently compute ground truth posterior distributions. Several widely established baselines are compared.\n\n**References**\n\n[1] Chung et al., \"Diffusion posterior sampling for general noisy inverse problems\", ICLR 2023",
"strengths": "1. To the best of my knowledge, this is the first approach to go beyond mixture of gaussian priors when attempting to build a ground truth posterior distribution.\n\n2. The paper is well-written and easy to follow, with sufficient background given in the appendix.\n\n3. The method of acquiring the posterior distribution by extending Kuric et al. [1] is sound.\n\n\n**References**\n\n[1] Kuric et al., \"The Gaussian latent machine: Efficient prior and posterior sampling for inverse problems\", arxiv 2025",
"weaknesses": "1. Being able to use different prior/posterior distributions as ground truth is, in and of itself, important. Nevertheless, the argument would be strengthened if the paper shows that the proposed distributions in this paper are closer to real-world statistics in some cases. Currently, only some references are given.\n\n2. The authors mention that the proposed framework can be extended to higher-dimensional settings, but there are complications. It would add much value if the authors were to include experiments with $d$ that match the typical image resolutions. Currently, it seems like the experiments are conducted with low dimensionality ($d$). What's the value of $d$ chosen here?",
"questions": "Is there any reason to constrain the benchmark for *diffusion* posterior sampling algorithms?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-15T10:10:36",
"modification_date": "2025-11-12T13:03:32",
"review_url": "https://openreview.net/forum?id=zDI2G8t0of¬eId=5qBVdgq9kk",
"license": "CC BY 4.0"
}
] |
Bq5lSYZl4L
|
https://openreview.net/forum?id=Bq5lSYZl4L
|
Conversational Orientation Reasoning: Egocentric-to-Allocentric Navigation with Multimodal Chain-of-Thought
| 2
| 2.666667
|
[
2,
2,
2
] |
[
4,
3,
1
] | 3
|
[
"conversational AI",
"multimodal reasoning",
"chain-of-thought",
"spatial reasoning",
"egocentric navigation"
] |
Conversational agents must translate egocentric utterances (e.g., “on my right”) into allocentric orientations (N/E/S/W). This challenge is particularly critical in indoor or complex facilities where GPS signals are weak and detailed maps are unavailable. While chain-of-thought (CoT) prompting has advanced reasoning in language and vision tasks, its application to multimodal spatial orientation remains underexplored. We introduce Conversational Orientation Reasoning (COR), a new benchmark designed for Traditional Chinese conversational navigation projected from real-world environments, addressing egocentric-to-allocentric reasoning in non-English and ASR-transcribed scenarios. We propose a multimodal chain-of-thought (MCoT) framework, which integrates ASR-transcribed speech with landmark coordinates through a structured three-step reasoning process: (1) extracting spatial relations, (2) mapping coordinates to absolute directions, and (3) inferring user orientation. A curriculum learning strategy progressively builds these capabilities on Taiwan-LLM-13B-v2.0-Chat, a mid-sized model representative of resource-constrained settings. Experiments show that MCoT achieves 100% orientation accuracy on clean transcripts and 98.1% with ASR transcripts, substantially outperforming unimodal and non-structured baselines. Moreover, MCoT demonstrates robustness under noisy conversational conditions, including ASR recognition errors and multilingual code-switching. The model also maintains high accuracy in cross-domain evaluation and resilience to linguistic variation, domain shift, and referential ambiguity. These findings highlight the potential of structured MCoT spatial reasoning as a path toward interpretable and resource-efficient embodied navigation.
|
We introduce the Conversational Orientation Reasoning (COR) benchmark and propose a multimodal chain-of-thought framework for egocentric-to-allocentric orientation reasoning.
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=Bq5lSYZl4L
| 2025-09-18T16:58:03
| 3
|
[
{
"id": "pd0onPVjy7",
"forum": "Bq5lSYZl4L",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10972/Reviewer_dPzU",
"reviewer_name": "Reviewer_dPzU",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This paper's topic is translating egocentric utterances (e.g., on my right) into allocentric (N/E/S/W) orientation in conversational navigation. It introduces COR, a Traditional-Chinese benchmark drawn from real urban layouts projected onto a 10×10 Manhattan grid, where inputs are ASR-transcribed speech plus landmark coordinates and outputs are cardinal orientations. The authors propose a multimodal chain-of-thought (MCoT) framework with a structured three-step reasoning recipe: (1) relation extraction, (2) coordinate to absolute direction mapping, (3) final orientation inference, which is trained via curriculum on Taiwan-LLM-13B-v2.0-Chat. On COR, MCoT reports 100% accuracy on clean transcripts and 98.1% with ASR transcripts, with additional robustness under linguistic variation, cross-domain transfer (Taipei Station), and referential ambiguity. The paper argues that structured reasoning improves both accuracy and interpretability for resource-constrained, GPS-limited scenarios.",
"strengths": "1. The task is formalized cleanly with explicit mapping rules and a Manhattan grid, focused on the neglected egocentric to allocentric transformation rather than high-level action prediction, and does so in a non-English, ASR-noisy setting (Traditional Chinese). The three-step MCoT plus curriculum addresses reasoning stability and interpretability.\n2. The method addresses GPS-denied environments and resource-constrained deployments. The reported accuracy suggests potential utility for speech-driven navigation assistants where full maps/sensors are unavailable.",
"weaknesses": "1. The task on a 10 x 10 Manhattan grid with axis-aligned landmarks and a fixed rule table (Table 1) looks algorithmically solvable by a simple deterministic program: parse the relation and landmark, compute $\\Delta$, take `AbsDir(Δ)` by comparing $|\\Delta_x|$ vs $|\\Delta_y|$, then rotate by the relation. Without a strong non-neural baseline, the 100% clean accuracy is hard to contextualize.\nI suggest adding (i) a rule-based solver and (ii) a probabilistic variant robust to noisy extraction, and comparing accuracy and latency. \n2. The ASR noise is introduced via a TTS to ASR loop on clean, templated text, not real user speech. CER-based severity is reported, but real conversational prosody, disfluency, and OOVs can be harsher.\n3. The grid is small, axis-aligned, and excludes diagonal cases; mapping rules are deterministic and known.\n4. Baseline choices and numbers look unusually weak.\n Few-shot (with/without CoT) and fine-tuned no CoT baselines perform near chance or worse, which raises questions about prompt/format mismatches rather than inherent task difficulty.\n5. Data are programmatically generated from the same mapping rules used by the model’s step-2/3 supervision. Curriculum-tuning on these deterministic recipes could lead to memorization of rule templates rather than robust reasoning.\n6. Reasoning quality is reported as a match rate of intermediate steps and format error as schema violations, but neither measures *faithfulness* (whether the trace genuinely causes the answer).\n7. The scope of robustness is still narrow.\n Cross-domain stays on a 10 x 10 Manhattan grid with the same rule table. Strong performance may reflect data homogeneity.\n8. The work motivates resource-constrained, GPS-limited settings but evaluates offline on a mid-size 13B chat model with LoRA.\nI suggest providing latency/memory profiles, and a small-footprint variant (e.g., 1–3B or distilled student) to substantiate edge deployment claims.",
"questions": "1. What is the performance and latency of a deterministic solver that (a) extracts the relation via a regex/IE component and (b) applies Table 1 + rotation? If omitted, why? \n2. How did you ensure the programmatic generation templates and the model’s step-wise supervision do not leak distributional shortcuts that trivialize the task?\n3. Is the reasoning trace *required* to be correct for the final answer to be accepted (e.g., by an external checker), or can the model produce a correct label with an incorrect intermediate step?\n4. Beyond the TTS to ASR loop, do you have evaluations on spontaneous human speech (code-switching, disfluency, accents) recorded in the target locales?\n6. How would MCoT handle diagonal/off-axis landmarks, continuous coordinates, or non-orthogonal street plans? \n8. How sensitive is performance to removing supervision on *one* of the three steps (e.g., only supervise steps 1+3)? Any evidence that curriculum (vs. joint training) is the key driver?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T12:06:33",
"modification_date": "2025-11-12T12:36:02",
"review_url": "https://openreview.net/forum?id=Bq5lSYZl4L¬eId=pd0onPVjy7",
"license": "CC BY 4.0"
},
{
"id": "5VdUvU7e3A",
"forum": "Bq5lSYZl4L",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10972/Reviewer_zCUP",
"reviewer_name": "Reviewer_zCUP",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 1,
"presentation": 3,
"summary": "- The authors propose a benchmark for conversational navigation that is in Chinese, a multimodal CoT framework, and a curriculum learning strategy to try and achieve high performance on this benchmark. Overall, the authors are able to achieve strong results on the proposed benchmark by fine-tuning an open LLM using their proposed framework.",
"strengths": "- The authors achieved strong performances on the proposed benchmark.\n- The figures are clear.",
"weaknesses": "- Nothing proposed in the paper is unique or *novel*, everything done was a standard fine-tuning technique with an added curriculum and standard multimodal reasoning steps.\n- The central research questions are not particularly exciting, important, or impactful for the field\n- The model achieves 100% reasoning performance on on the proposed egocentric spatial orientation task, which signals two things to me:\n\t- First, the model is likely overfitting this task after being fine-tuned, resulting in a loss of much of its prior knowledge.\n\t- Second, the proposed benchmark is too easy.\n- While some of these ideas are briefly mentioned in the limitations, I see them as notable large problems. \n\t- The models are evaluated in an unrealistic environment (a 10x10) grid.\n\t- The authors only focus on a single language.\n- The proposed approach is complex, and not easy to implement.",
"questions": "N/A",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T07:51:34",
"modification_date": "2025-11-12T12:36:03",
"review_url": "https://openreview.net/forum?id=Bq5lSYZl4L¬eId=5VdUvU7e3A",
"license": "CC BY 4.0"
},
{
"id": "CQmOKlbf17",
"forum": "Bq5lSYZl4L",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10972/Reviewer_Kvna",
"reviewer_name": "Reviewer_Kvna",
"rating": 2,
"confidence": 1,
"soundness": 1,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposed a Conversational Orientation Reasoning (COR) benchmark for Traditional Chinese conversational navigation. To solve this languages based, tabular egocentric-to-allocentric navigation task, authors further proposed a multi-modal chain-of-thought (MCoT) framework for fine-tuning a Taiwan-LLM-13B-v2.0-Chat. The proposed framework can solve the egocentric-to-allocentric with high success rate and showed robustness to noise in the conversation input.\n\nNevertheless, the benchmark itself consisted a simple 10x10 table, which could oversimplify the realistic task. One can imagine that the realistic task could involve high-dimensional input, e.g., images captured from the user or vectorized map. A multi-model LLM could have the capability to solve this high-dim to orientation mapping task, rather than tabular setup.",
"strengths": "The paper proposed a novel problem and a solution on conversational orientation reasoning from language and semantic map. Experiments demonstrated the effectiveness and robustness of the proposed solution. The paper presented the problem and the method clearly.",
"weaknesses": "The major weakness is the proposed problem oversimplified real-world scenarios. This surprised the capability of the method as well as multi-model LLM’s capabilities. The current method seems can only solve an in-distribution 10x10 table and another novel 10x10 table. Reasoning the orientation from a table seems can be solve using hard-code.",
"questions": "What is the advantage of the proposed method over a set of hard-coded rules?\nCould proposed method reason from high-dimensional input, for example, from the photos captured from a user?\nHow capable is the method generalize to novel tables or even open-world language/landmarks?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T08:38:45",
"modification_date": "2025-11-12T12:36:03",
"review_url": "https://openreview.net/forum?id=Bq5lSYZl4L¬eId=CQmOKlbf17",
"license": "CC BY 4.0"
}
] |
Fz0KFsZE6C
|
https://openreview.net/forum?id=Fz0KFsZE6C
|
OpenSIR: Open-Ended Self-Improving Reasoner
| 4
| 3.75
|
[
4,
4,
4,
4
] |
[
3,
4,
4,
4
] | 4
|
[
"large language model",
"math reasoning",
"self-play",
"reinforcement learning"
] |
Recent advances in large language model (LLM) reasoning through reinforcement learning rely on annotated datasets for verifiable rewards, potentially limiting models' ability to exceed human-level performance. While self-play offers a promising alternative, existing approaches depend on external verifiers or cannot learn open-endedly. We present Open-Ended Self-Improving Reasoner (OpenSIR), a self-play framework where an LLM learns to generate and solve novel problems by alternating teacher and student roles without external supervision. To generate novel problems, OpenSIR optimises for both difficulty and diversity, rewarding problems that challenge appropriately while exploring distinct concepts, enabling open-ended mathematical discovery. Starting from a single trivial seed problem, OpenSIR substantially improves instruction models: Llama-3.2-3B-Instruct advances from 73.9 to 78.3 on GSM8K, and from 28.8 to 34.4 on College Math, while Gemma-2-2B-Instruct rises from 38.5 to 58.7 on GSM8K. Our analyses reveal that OpenSIR achieves open-ended learning through co-evolving teacher-student roles that adaptively calibrate difficulty and drive diverse exploration, progressing autonomously from basic to advanced mathematics.
|
foundation or frontier models, including LLMs
|
https://openreview.net/pdf?id=Fz0KFsZE6C
| 2025-09-19T23:25:06
| 4
|
[
{
"id": "k8yimgxXcV",
"forum": "Fz0KFsZE6C",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19344/Reviewer_LBsx",
"reviewer_name": "Reviewer_LBsx",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces OpenSIR, a self-play reinforcement learning framework where a single policy jointly optimizes two roles: a teacher that generates novel and diverse math problems, and a student that solves them accurately. Both roles are jointly trained to form an open-ended self-improvement loop that enables the model to continuously enhance its problem generation and reasoning abilities. This dual-role optimization enables the model to bootstrap from a single seed problem and progressively enhance both problem generation and reasoning capability without human supervision. Experiments show that OpenSIR consistently outperforms GRPO baselines and base instruction models, achieving comparable or superior accuracy without any human-annotated data.",
"strengths": "- The proposed approach significantly outperforms supervised RL approaches (GRPO) and instruction-tuned baselines across a number of models and benchmarks.\n\n- The proposed approach requires no human-annotated data, reducing cost and reliance on manual labeling.\n\n- Joint optimization of teacher and student creates a self-calibrating cycle, enabling continuous self-generated training at optimal difficulty.",
"weaknesses": "- In 4.1 Figure 2, the observed V-shaped difficulty trend is interesting, but the authors should provide evidence of the student model’s performance over training (e.g., accuracy or solve rate) to substantiate the claim that this pattern reflects true self-calibration.\n\n- In 2.1, the author states, “We initialise the problem pool P_0 with a single trivial problem (“What is 1+1?”)” Given the simplicity of this seed, it is worth discussing whether and how this choice constrains the initial diversity or attainable difficulty of the generated problems, and whether the model can robustly escape such a limited starting point.\n\n- While the paper conducts ablations on diversity and length rewards and dual-role training, it does not provide individual analyses for the solvability reward components. It remains unclear how the component contributes to the overall accuracy performance.",
"questions": "- In Table 8, equal weights (α = λ = γ = 1.0, δ = 0.1) are assigned to most teacher rewards and 1.0 to the student accuracy reward. Could the authors clarify how these weights were chosen? Were they empirically tuned, or are the results robust to moderate changes in these hyperparameters?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T20:59:07",
"modification_date": "2025-11-12T15:08:02",
"review_url": "https://openreview.net/forum?id=Fz0KFsZE6C¬eId=k8yimgxXcV",
"license": "CC BY 4.0"
},
{
"id": "W8C1F9N6h1",
"forum": "Fz0KFsZE6C",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19344/Reviewer_w5VH",
"reviewer_name": "Reviewer_w5VH",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces OpenSIR (Open-Ended Self-Improving Reasoner), a self-play framework for large language models (LLMs) that autonomously improves reasoning abilities without external supervision. OpenSIR alternates teacher and student roles, generating and solving novel problems optimized for difficulty and diversity, enabling open-ended mathematical discovery. Starting from a single trivial seed problem, OpenSIR drives autonomous progression from basic to advanced concepts. Experiments show significant performance improvements on benchmarks like GSM8K and College Math, with models achieving substantial gains. The framework's adaptive teacher-student co-evolution fosters diverse exploration and calibrated learning, advancing LLM reasoning capabilities effectively.",
"strengths": "1. The paper tests its framework, OpenSIR, across multiple benchmarks (e.g., GSM8K, College Math) using various backbone LLMs, such as Llama-3.2B-Instruct and Gemma-2-2B-Instruct. This demonstrates some generality and effectiveness of the approach across different models and tasks.\n\n2. The paper tackles an important topic—using reinforcement learning (RL) to improve LLM reasoning capabilities. RL is a compelling approach for driving autonomous learning, making this work relevant and interesting for advancing LLMs.",
"weaknesses": "1. The core idea of OpenSIR lacks novelty, appearing more like a combination of popular concepts (self-play, RL, curriculum learning) rather than introducing a new approach. The paper could benefit from showcasing deeper insights or unique contributions that distinguish it from existing methods.\n\n2. The authors do not provide code or other necessary materials, making it difficult for researchers to replicate the results or experiment with the framework. Including well-documented code and resources would significantly enhance the paper's impact and accessibility.\n\n3. The font used in Figure 1 appears informal and less readable, which detracts from the professional presentation of the paper. Using a more formal and easily readable font would improve the clarity and visual impact of the figure, making it more suitable for academic audiences.\n\n4. The experiments are conducted on relatively small models. While the results are promising, the robustness and effectiveness of the proposed method on larger-scale models remain unverified. Expanding the experiments to larger models would strengthen the paper's claims and demonstrate broader applicability.",
"questions": "Refer to Weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T12:45:31",
"modification_date": "2025-11-12T15:08:03",
"review_url": "https://openreview.net/forum?id=Fz0KFsZE6C¬eId=W8C1F9N6h1",
"license": "CC BY 4.0"
},
{
"id": "QtumZmrWYp",
"forum": "Fz0KFsZE6C",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19344/Reviewer_Jjz1",
"reviewer_name": "Reviewer_Jjz1",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces the Open-Ended Self-Improving Reasoner (OpenSIR), a novel framework that enables a Large Language Model (LLM) to autonomously improve its mathematical reasoning capabilities. The core of OpenSIR is a self-play mechanism where a single LLM policy alternates between two roles: a \"teacher\" that generates new mathematical problems and a \"student\" that solves them. Starting from a single trivial seed problem (e.g., \"What is 1+1?\"), the system bootstraps its own learning curriculum without any external human-annotated data. The teacher is rewarded for creating problems that are both appropriately difficult (calibrated via the student's solve rate) and conceptually diverse (measured by embedding distance to previously seen problems). The authors demonstrate that this approach significantly improves the performance of smaller LLMs (Llama-3.2-3B and Gemma-2-2B) on several math reasoning benchmarks, outperforming baselines trained on thousands of human-labeled examples.",
"strengths": "1.The paper's primary strength lies in its contribution to open-ended, autonomous learning. By successfully demonstrating that an LLM can bootstrap complex reasoning skills from a single trivial example without human supervision, OpenSIR presents a compelling alternative to data-intensive RLHF methods. This addresses a major bottleneck in scaling LLM capabilities and is a significant step towards more autonomous AI systems.\n\n2.The design of the reward function for the teacher role is very effective. Decomposing \"novelty\" into two intuitive dimensions, difficulty (via scoresol and scorelen) and diversity (scorediv), provides a robust mechanism for generating a dynamic and adaptive curriculum. This allows the model to avoid getting stuck on trivial problems or generating impossibly hard ones, guiding it from basic arithmetic to advanced topics like calculus and trigonometry",
"weaknesses": "1.The experiments are confined to smaller models (2B-3B parameters). While the results are impressive, the paper shows minimal gains for the stronger Qwen-2.5-3B model. The authors suggest this may be due to benchmark contamination, but it could also indicate that the self-improvement process yields diminishing returns for models that are already highly capable. A discussion on the scalability of this approach to state-of-the-art models (e.g., 8B+) is a notable omission.\n\n2.The self-play loop requires multiple forward passes for each problem generated (G solution attempts per problem) before a single policy update. This process seems computationally expensive compared to standard supervised fine-tuning. The paper does not provide a clear analysis of the computational overhead, making it difficult to assess the practical feasibility and cost-effectiveness of OpenSIR versus simply training on a large, existing dataset.\n\n3.The analysis in Section 4.2 reveals a critical weakness: problems with very low solve rates are often invalid rather than genuinely difficult. The framework's reliance on solve rate as a proxy for difficulty struggles to distinguish between these two cases. While the chosen thresholds (e.g., s_min = 0.5) seem to work, this suggests the curriculum generation might be sensitive to these hyperparameters and could inadvertently filter out challenging but valid new problem domains where the model initially has a very low success rate.",
"questions": "1. The concept of using self-play for generation and reasoning has been explored in prior work, such as R-Zero. Could the authors further elaborate on the core mechanistic novelty of OpenSIR, particularly its key distinctions from existing approaches? Furthermore, the performance improvement attributed to reinforcement learning appears relatively modest. Does this suggest a potential performance ceiling for this method?\n\n2. The paper's evaluation is primarily conducted on established benchmarks like GSM8K and MATH, where current models already demonstrate strong performance. Does the proposed framework have the potential to generate and solve more complex, competition-level problems (e.g., from AIME)? How robust is the framework's effectiveness in these more challenging scenarios?\n\n3. The experiments are conducted mainly on small-scale models (3B-parameter range). Could the authors comment on the anticipated efficacy of this approach on larger-scale models (e.g., 8B, 14B)? Is the performance upper-bound of the method constrained by the capability ceiling of the initial base model? In other words, is the framework primarily eliciting latent abilities rather than imparting genuinely new skills?\n\n4. The current comparisons are primarily against base models and a general-purpose RL method (GRPO). The paper lacks a direct comparison with contemporary state-of-the-art models in mathematical reasoning that also leverage synthetic data",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T01:29:14",
"modification_date": "2025-11-12T15:08:03",
"review_url": "https://openreview.net/forum?id=Fz0KFsZE6C¬eId=QtumZmrWYp",
"license": "CC BY 4.0"
},
{
"id": "Sil4GT5m6R",
"forum": "Fz0KFsZE6C",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19344/Reviewer_BCw5",
"reviewer_name": "Reviewer_BCw5",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces OpenSIR (Open-Ended Self-Improving Reasoner), a self-play framework designed to enhance LLM reasoning without external supervision by using a single policy that alternates between \"teacher\" and \"student\" roles. Starting from a single trivial seed problem , the teacher is trained via reinforcement learning to generate novel problems, optimizing a \"novelty\" reward that combines both difficulty (calibrated by solve rates and solution length) and diversity (using embedding-based distance to encourage exploration) . The student, in turn, is trained to solve these problems, with correctness determined by majority voting across multiple solution attempts. This self-improving loop allows the model to autonomously bootstrap its capabilities. While the paper has merit, it requires major revision to be ready for publication.",
"strengths": "The paper is well motivated.\n\nThe method is tested on three model families: Llama-3.2-3B-Instruct, Gemma-2-2B-Instruct, Qwen-2.5-3B-Instruct.\n\nAblation and analysis is appreciated.",
"weaknesses": "Why is R-Zero not in the baseline? Any other potentially missing baselines?\n\nPseudocode should be available in the paper. If there is no space in the main body, at least in the appendix. It will dramatically help with clarity and reproducibility.\n\nThe work should mention how it compares and contrasts with “AdaSTaR: Adaptive Data Sampling for Training Self-Taught Reasoners” (NeurIPS 2025) as their method also directly tackled the same problem: (1) difficulty (2) diversity. This seems like a key related work.\n\n> To generate novel problems, OpenSIR optimises for both difficulty and diversity, rewarding problems that challenge appropriately while exploring distinct concepts, enabling open-ended mathematical discovery.\n\nThere seems to be numerous hyperparameters; e.g. the two s values, alpha, avg@16. What are all the hyperparameters (in the method and also in the empirical study)? How are they set? Heuristically? Empirically? Any sensitivity tests? Large newly introduced hyperparameter space can be a pain for practitioners, especially if the method is sensitive to them.\n\nIt seems like this method incurs numerous additional costs. Is the additional cost overhead clearly communicated in the paper? E.g. the embeddings’ cosine similarity will incur costs. This should especially be clearly provided in Tab. 1 where it is very possible that GRPO_gsm8k and GRPO_math has used less training compute than the proposed method.\n\nDoes this method only work on the math domain? If so, the scope seems slightly limited.\n\nTo my understanding, it is normal practice to keep GRPO going until it hits peak performance. It is concerning that an arbitrary fixed compute budget has been provided.\n> To compare models trained on the same number of problem-solution pairs, we train the GRPO baselines with 100 steps, and OpenSIR for 200 steps since OpenSIR allocates half of its training budget to problem generation.\n\nIt would be ideal if there was at least one model with larger model size. These models are very small.\n\nAmong the three models tested Qwen 2.5 is the strongest one. Naturally, post-training is done on the strongest available base model. Considering this, the Qwen results are most important. However, this method’s gain over baselines in the Qwen results is negligably small; even possibly due to noise. Accuracy gain over best baseline is 0.29% which is virtually no real-world gain.\n\nWhile it does make sense that the proposed method does not use any labels. There is no need to not use existing labels from e.g. gsm8k, math. There should be experiments with train on existing available labeled train set + opensir to show that this is meaningful in the real-world.\n> OpenSIR consistently outperforms GRPO baselines across model architectures despite generating training data through self-play from a single seed problem, while GRPO baselines use over 7,000 human-annotated examples.",
"questions": "Is there a reason why there is such little use of colors in the text? The clarity of the paper may improve with some coloring, e.g. sections, references.\n\nDoes this method work on thinking models, not just instruct models?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T16:06:03",
"modification_date": "2025-11-12T15:08:04",
"review_url": "https://openreview.net/forum?id=Fz0KFsZE6C¬eId=Sil4GT5m6R",
"license": "CC BY 4.0"
}
] |
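A toy illustration of the difficulty-plus-diversity novelty reward described in the OpenSIR summary above. This is a minimal sketch under assumed design choices: the tent-shaped difficulty term, the max-cosine-similarity diversity term, and all names (`solve_rate`, `w_diff`, `target`) are illustrative assumptions, not the paper's actual definitions (which also calibrate difficulty by solution length).

```python
import numpy as np

def novelty_reward(solve_rate, emb, past_embs, w_diff=0.5, target=0.5):
    """Toy novelty reward = weighted sum of difficulty and diversity.

    - Difficulty peaks when the student's solve rate sits near `target`,
      i.e. problems that are neither trivial nor unsolvable.
    - Diversity is one minus the max cosine similarity to embeddings of
      previously generated problems.
    All functional forms here are illustrative assumptions.
    """
    difficulty = 1.0 - abs(solve_rate - target) / max(target, 1.0 - target)
    if len(past_embs) == 0:
        diversity = 1.0  # the first problem is maximally novel by convention
    else:
        P = np.stack(past_embs)
        sims = P @ emb / (np.linalg.norm(P, axis=1) * np.linalg.norm(emb) + 1e-12)
        diversity = 1.0 - float(sims.max())
    return w_diff * difficulty + (1.0 - w_diff) * diversity
```

For example, a problem solved in 8 of 16 attempts (solve rate 0.5) whose embedding is far from all previous problems would receive a reward close to 1 under these assumptions.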
|
QpqBqCTtW4
|
https://openreview.net/forum?id=QpqBqCTtW4
|
Unifying Stable Optimization and Reference Regularization in RLHF
| 5
| 2.75
|
[
6,
4,
4,
6
] |
[
4,
2,
3,
2
] | 4
|
[
"RLHF",
"LLM",
"Alignment"
] |
Reinforcement Learning from Human Feedback (RLHF) has advanced alignment capabilities significantly but remains hindered by two core challenges: reward hacking and stable optimization. Current solutions independently address these issues through separate regularization strategies, specifically a KL-divergence penalty against a supervised fine-tuned model ($\pi_0$) to mitigate reward hacking, and policy ratio clipping towards the current policy ($\pi_t$) to promote stable alignment. However, the implicit trade-off arising from simultaneously regularizing towards both $\pi_0$ and $\pi_t$ remains under-explored. In this paper, we introduce a unified regularization approach that explicitly balances the objectives of preventing reward hacking and maintaining stable policy updates. Our simple yet principled alignment objective yields a weighted supervised fine-tuning loss with a superior trade-off, which demonstrably improves both alignment results and implementation complexity. Extensive experiments across diverse benchmarks validate that our method consistently outperforms RLHF and online preference learning methods, achieving enhanced alignment performance and stability.
|
alignment, fairness, safety, privacy, and societal considerations
|
https://openreview.net/pdf?id=QpqBqCTtW4
| 2025-09-03T09:45:48
| 4
|
[
{
"id": "yqzBpTdgiX",
"forum": "QpqBqCTtW4",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1200/Reviewer_7Noe",
"reviewer_name": "Reviewer_7Noe",
"rating": 6,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes an approach to address the trade-offs arising from simultaneously regularizing towards the reference policy (to mitigate reward hacking) and the current policy (for stable policy updates), in RLHF. They accomplish this by regularizing towards a convex combination of the reference policy and current policy. This is done via a weighted supervised fine-tuning loss, which allows for stable training. Experimentally, their proposed approach improves over baselines (both online RL-based approaches and online/offline direct alignment algorithms)",
"strengths": "The paper addresses a critical problem that has not been explored in the literature, the impact of regularizing towards both the reference policy and the current policy, in RLHF. Regularizing towards the reference policy is done by a KL penalty to the reward, and regulazing towards the current policy is achieved via clipping or a KL constraint. Together, these two constraint our objective to operate in the intersection of the trust region that becomes increasingly restrictive as training progresses. The paper proposes a simple weighted supervised fine-tuning objective by regularizing towards a convex combination of both the reference policy and the current policy, leading to a stable training algorithm DAR (Dual-Regularized Advantage Regression). Experimental results showcase that DAR surpasses online RL based methods (PPO, GRPO, RLOO) and online/offline direct alignment methods (DPO/IPO/SLiC) across three training domains. The presentation is clear, crisp with no major issues with the grammar and writing flow.",
"weaknesses": "One of my concerns with the paper is their choice to regularize towards a convex combination of the reference policy $\\pi_{0}$ and the current policy $\\pi_{t}$ i.e $\\alpha D_{KL}(\\pi \\vert\\vert \\pi_{0}) + (1-\\alpha) D_{KL}(\\pi \\vert\\vert \\pi_{t})$. This inherently leads to incentivizing regularizing to one of the distibutions than the other (when $\\alpha$ != 0.5). It would have been better to have two independent multipliers for each of the divergence, which supports the Lagrangian view of the objective when looking at the divergences as constraints i.e $\\alpha_{1} D_{KL}(\\pi \\vert\\vert \\pi_{0}) + \\alpha_{2} D_{KL}(\\pi \\vert\\vert \\pi_{t})$.\n\nAdditionally, in proposition 4.1, the objective has a KL penalty wrt a reference mixture distribution $\\pi_{ref} = \\pi_{0}^{\\alpha}\\pi_{t}^{1-\\alpha}$. $\\pi_{ref}$ need not be a valid probability distribution, since it may not sum up to 1. The KL term here hence may not be between two distributions. There needs to be a normalizing factor for $\\pi_{ref}$ to make it a valid probability distribution. This would affect their proof of the optimal policy in Theorem 4.2, since there would now be normalizing factors of $\\pi_{ref}$ in the expressions.\n\nIn the DAR derivation in Appendix C.3, ignoring the earlier issue with $\\pi_{ref}$ not being normalized, on line 885, they state \"we factor out the partition function $Z(x)$ as it is a positive constant and doesnt shift optimal policy\". $Z(x)$ depends on $x$ and leads to weighing each prompt differently. Considering a simple setting with two possible $x$, the objective is $Z(x_{1}) f_{\\theta}(x_{1}) + Z(x_{2}) f_{\\theta}(x_{2})$. How can $Z(x)$ be factored out without affecting the objective \n\nThe authors state that as $\\pi_{t}$ is trained, the log-likelihood interpolation constructs a reference target that is inherently positioned closer to the optimal policy. No proof for this is provided. If this is solely because $\\pi_{ref}$ contains $\\pi_{t}^{1-\\alpha}$, that becomes more optimal over the course of training. If so, standard PPO also has a policy constraint with respect to $\\pi_{t}$. How is yours more optimal?",
"questions": "1) Is there a reason for choosing a convex combination of the two KL divergences for DAR, instead of choosing independent multipliers?\n\n2) Why is the $\\pi_{ref}$ not normalized in the KL constraint for DAR and does this impact the derivation of the optimal policy for DAR?\n\n3) Why can $Z(x)$ be dropped from the objective, without modifying it, in the DAR derivation in Appendix C.3?\n\n4) Is there a theoretical justification for this statement \"as $\\pi_{t}$ is trained, the log-likelihood interpolation constructs a reference target that is inherently positioned closer to the optimal policy.\" in line 240?\n\n\nMinor Questions/Nits\n\n- Equation 2 operates at trajectory level, whereas PPO is defined at token-level\n- What the reward (avg, total, etc) in Table 1. Explain it in detail in caption.\n- Why no comaprision against PPO in the the Experiments for Standard RLHF (line 317)?\n- In Figure 3, why no evolution plots for online direct alignment methods?\n- Improvements on MT-Bench and AlpacaEval 2.0 in Table 3 seem marginal. Also the column is named \"AlphacaEval\".",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T13:01:23",
"modification_date": "2025-11-12T10:48:24",
"review_url": "https://openreview.net/forum?id=QpqBqCTtW4¬eId=yqzBpTdgiX",
"license": "CC BY 4.0"
},
{
"id": "mFZcuqyjCW",
"forum": "QpqBqCTtW4",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1200/Reviewer_rHRn",
"reviewer_name": "Reviewer_rHRn",
"rating": 4,
"confidence": 2,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes a dual-KL regularization framework for RLHF that unifies two objectives usually treated separately: (1) preventing reward hacking via KL to the initial SFT model π₀, and (2) maintaining stability via KL or clipping to the current policy πₜ.\nThe authors show that these can be merged into a single interpolated reference in log-space, leading to a new weighted-SFT formulation called DAR (Dual-regularized Advantage Regression). DAR is positioned as a simple, RL-free alternative to PPO/GRPO, with theoretical analysis and experiments on Qwen2-7B showing improved reward–KL trade-offs.",
"strengths": "1. Clear identification of a long-standing conflict between stability and reference regularization.\n2. Mathematical formulation is elegant and internally consistent.\n3. DAR simplifies PPO-style RLHF into a regression-like loss that is easier to implement and more stable.",
"weaknesses": "1. Outdated baseline setup. All experiments use Qwen2-7B and compare mainly against PPO, GRPO, and RLOO; no comparison to modern alignment frameworks, stronger models, and new RL methods.\n2. The novelty is mostly formal: the “dual-KL” is effectively a convex interpolation between π₀ and πₜ, similar to prior multi-reference ideas.\n3. Theoretical results rely on clean advantage estimation; no analysis under noisy or biased rewards.\n4. Empirical gains are modest and might vanish under stronger baselines.",
"questions": "1.How sensitive is performance to α? Could an adaptive trade-off help?\n2.Would DAR remain stable under noisy or AI-feedback reward models?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T12:58:58",
"modification_date": "2025-11-12T10:48:24",
"review_url": "https://openreview.net/forum?id=QpqBqCTtW4¬eId=mFZcuqyjCW",
"license": "CC BY 4.0"
},
{
"id": "nSqWeKzCTc",
"forum": "QpqBqCTtW4",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1200/Reviewer_GWzK",
"reviewer_name": "Reviewer_GWzK",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes a *dual-KL regularization* approach that aims to jointly address two RLHF pain points: (i) preventing reward hacking via reference regularization and (ii) ensuring stable optimization via trust-region style control. The method first studies a constrained “PPO-Align” objective and then derives a *weighted dual-KL* objective that yields an interpretable, weighted SFT (DAR) loss by effectively interpolating between the initialization policy $\\pi_{0}$ and the current policy $\\pi_{t}$ in log space. Experiments on TL;DR, Anthropic Helpfulness, and Harmlessness report improved win rates (Figure 3/Table 2), with evaluations judged by GPT-4 Turbo. Figure 1 provides the conceptual motivation for expanding the search region beyond the intersection of the trust regions to balance stability and reference adherence.",
"strengths": "- **Timely problem framing.** The paper clearly motivates the dual goals of stabilizing policy updates while constraining drift from a reference policy, and argues for a unified objective rather than separate mechanisms.\n- **Empirical gains.** Across three benchmarks, DAR shows strong win rates against online RLHF (e.g., PPO, GRPO, RLOO) and online DAP baselines; curves in Figure 3 and the summary in Table 2 support the claim.\n- **Implementation clarity intent.** The paper points to code/supplement details, which is important given the number of components interacting (policy, judge, datasets, ablations).",
"weaknesses": "- In the discussion of PPO stability, the classical TRPO/PPO literature typically regularizes with a KL of the form $D_{\\mathrm{KL}}\\left[\\pi_{\\text{old}}\\||\\pi_{\\theta}\\right]$, see Schulman et al. (2015, TRPO) and Schulman et al. (2017, PPO). By contrast, the paper’s *PPO-Align* (Sec. 4.1) constrains with $D_{\\mathrm{KL}}\\left[\\pi_{\\theta}\\||\\pi_{t}\\right]$ and penalizes $D_{\\mathrm{KL}}\\left[\\pi_{\\theta} \\||\\pi_{0}\\right]$. The rationale for changing directions relative to the classical trust-region view is not made explicit. This makes it hard to judge whether the final dual-KL choice is a principled departure or just a convenient variant.\n- Figure 3/Table 2 use GPT-4 Turbo as the judge and Qwen2-72B-Instruct as the LLM annotator. Results would be stronger with a stronger contemporaneous judge (e.g., GPT-5) and annotator (Qwen3) for additional validation.\n- While DAR (weighted SFT) is argued to be stable, quantitative stability analyses for the *dual-PPO* path are sparse (e.g., per-iteration KL to $\\pi_{0}$ and $\\pi_{t}$, gradient-norm/entropy trends, collapse rates across seeds).\n- Most results focus on Qwen2 and Llama-3.1 settings; adding a recent backbone (e.g., Qwen3) would better probe portability.\n- *(Minor)* Colors/encodings are hard to parse quickly; the caption should explicitly map colors/shapes to policies/regions and call out what “search expansion” specifically denotes.",
"questions": "- Given the centrality of Table 2, the community would benefit from end-to-end scripts (prompts, judge configs, seeds, filtering) in the supplement to exactly reproduce those numbers. Can we provide that in supplement?\n- Would a *mixed-direction* dual-KL be preferable on theory/empirics—e.g., a mode-covering term $D_{\\mathrm{KL}}\\left[\\pi_{\\theta}\\||\\pi_{0}\\right]$ *and* a mode-seeking term around the behavior/current policy $D_{\\mathrm{KL}}\\left[\\pi_{t}\\||\\pi_{\\theta}\\right]$?\n- Could you report training stability metrics such as gradient-norm statistics, entropy, and seed-wise variance for dual-PPO?\n- How do conclusions change with newer backbones (e.g., Qwen3) or different reward models?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-21T00:07:05",
"modification_date": "2025-11-12T10:48:24",
"review_url": "https://openreview.net/forum?id=QpqBqCTtW4¬eId=nSqWeKzCTc",
"license": "CC BY 4.0"
},
{
"id": "CTbHwvO34v",
"forum": "QpqBqCTtW4",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1200/Reviewer_4Fmj",
"reviewer_name": "Reviewer_4Fmj",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This work proposes a unified regularization approach with weights that explicitly balances the objectives of preventing reward hacking and maintaining stable policy updates in RLHF.",
"strengths": "The authors found a reasonable limitation of the existing approach to solve reward hacking and maintain stable policy updates, i.e., the conflict of two regularizers that pull the trained policy towards the reference policy and previous-step policy respectively. This well motivates the proposed dual-KL approaches, with the novel and straightfoward idea of combination of the two regularizers, which are presented clearly. The experiments look comprehensive to me.",
"weaknesses": "A few clarity issues as shown in the following questions.",
"questions": "(1) Is Eq. (2) PPO for RL not RLHF, since there is no KL penalty to $\\pi _ {\\rm ref}$? The equation in Section 4.1 seems to transform the KL penalty in Eq. (1) into hard constraint $KL<\\epsilon$, yes? \n\n(2) \"Empirical Validation\" in Section 4.1 involves two dual-KL variants. Why not move \"Empirical Validation\" to after introducing two dual-KL variants?\n\n(3) Your dual-KL (Eq. 3) looks like a special case of [1] with 2 references, what are your differences and additional contributions? \n[1] Gholamali Aminian, Amir R Asadi, Idan Shenfeld, and Youssef Mroueh. Theoretical analysis of kl-regularized rlhf with multiple reference models. ArXiv:2502.01203, 2025.\n\n(4) In proposition 4.1, $\\log\\pi_{\\theta}(y|x)=\\alpha\\log\\pi_0(y|x)+(1-\\alpha)\\log\\pi_t(y|x)+C(x)$ with a constant normalization $C(x)$ is suggested to ensure policies summing up to 1. \n\n(5) What are the evaluation metrics of MT Bench in Table 3? \n\n(6) In Figure 5c, could you explain more about EOS-missing rate, and why $\\alpha=0$ increases EOS-missing rate? Is it convenient to add LC-win rate? \n\n(7) In Algorithm 1, what are $\\mu_A$ and $\\sigma_A$? What are the meanings of the two weights $w _ {\\rm reg}^i$ and $w _ {\\rm adv}^i$ and how are they computed?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-18T22:20:05",
"modification_date": "2025-11-12T10:48:25",
"review_url": "https://openreview.net/forum?id=QpqBqCTtW4¬eId=CTbHwvO34v",
"license": "CC BY 4.0"
}
] |
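To make the normalization point raised by Reviewers 7Noe and 4Fmj concrete, here is a minimal derivation sketch, assuming discrete outputs $y$ and the convex-combination objective quoted in those reviews: the weighted sum of the two KL terms equals the KL to the *normalized* geometric mixture, up to an additive $-\log Z(x)$ term that is constant in $\pi$ for each prompt.

```latex
\begin{aligned}
\alpha\, D_{\mathrm{KL}}\!\left[\pi \,\|\, \pi_0\right]
 + (1-\alpha)\, D_{\mathrm{KL}}\!\left[\pi \,\|\, \pi_t\right]
 &= \sum_y \pi(y\mid x)\,\log\frac{\pi(y\mid x)}{\pi_0(y\mid x)^{\alpha}\,\pi_t(y\mid x)^{1-\alpha}} \\
 &= D_{\mathrm{KL}}\!\left[\pi \,\|\, \tilde\pi_{\mathrm{ref}}\right] - \log Z(x),
\end{aligned}
```

where $\tilde\pi_{\mathrm{ref}}(y\mid x) = \pi_0(y\mid x)^{\alpha}\,\pi_t(y\mid x)^{1-\alpha} / Z(x)$ and $Z(x) = \sum_{y'} \pi_0(y'\mid x)^{\alpha}\,\pi_t(y'\mid x)^{1-\alpha}$. Since $-\log Z(x)$ does not depend on $\pi$, per-prompt optimization is unchanged by the missing normalizer; but once $Z(x)$ multiplies a per-prompt objective inside a sum over prompts, as in the Appendix C.3 step questioned above, it reweights prompts and cannot simply be dropped.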
|
kWl13kRJTQ
|
https://openreview.net/forum?id=kWl13kRJTQ
|
AC-Sampler: Accelerate and Correct Diffusion Sampling with Metropolis-Hastings Algorithm
| 4.666667
| 3.666667
|
[
4,
6,
4
] |
[
3,
4,
4
] | 3
|
[
"Diffusion model",
"Metropolis-Hastings Algorithm",
"Langevin Dynamics"
] |
Diffusion-based generative models have recently achieved state-of-the-art performance in high-fidelity image synthesis. These models learn a sequence of denoising transition kernels that gradually transform a simple prior distribution into a complex data distribution. However, requiring many transitions not only slows down sampling but also accumulates approximation errors.
We introduce the Accelerator-Corrector Sampler (AC-Sampler), which accelerates and corrects diffusion sampling without fine-tuning. It generates samples directly from intermediate timesteps using the Metropolis–Hastings (MH) algorithm while correcting them to target the true data distribution. We derive a tractable density ratio for arbitrary timesteps with a discriminator, enabling computation of MH acceptance probabilities. Theoretically, our method yields samples better aligned with the true data distribution than the original model distribution. Empirically, AC-Sampler achieves FID 2.38 with only 15.8 NFEs, compared to the base sampler’s FID 3.23 with 17 NFEs on unconditional CIFAR-10. On CelebA-HQ 256×256, it attains FID 6.6 with 98.3 NFEs. AC-Sampler can be combined with existing acceleration and correction techniques, demonstrating its flexibility and broad applicability.
|
Accelerate and Correct Diffusion Sampling with Metropolis-Hastings Algorithm
|
generative models
|
https://openreview.net/pdf?id=kWl13kRJTQ
| 2025-09-19T09:41:43
| 3
|
[
{
"id": "WL3lfb3jFc",
"forum": "kWl13kRJTQ",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14955/Reviewer_MQvF",
"reviewer_name": "Reviewer_MQvF",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "Dear authors, I am the AC. Since two reviewers ghosted the paper or wrote last-minute that they wouldn't be able to submit a review, and since I was unable to find emergency reviewers, I have now written an emergency review of the paper. \n\nThe paper proposes a Metropolis-Hastings correction at an intermediate diffusion step \\tau to accelerate the sampling process of diffusion models. The idea of integrating MCMC updates into the reverse process is conceptually interesting, and the theoretical analysis is clearly written. The experimental results show modest improvements in FID with fewer or comparable NFEs.",
"strengths": "Bringing an explicit Metropolis-Hastings correction into the diffusion sampling loop is an interesting idea to integrate score-based generative modeling and classical MCMC. Prior papers have explored MH with diffusion in other contexts, such as MCMC correction for model composition and Metropolis sampling for constrained diffusion, but using an MH step specifically as a generic accelerator within the standard image-synthesis pipeline is still relatively underexplored.\n\nThe paper not only proposes a new method but also analyzes the expected NFE under acceptance/rejection dynamics (e.g., conditions under which truncating at \\tau, followed by a short MH chain, reduces per-sample NFEs). \n\nThe empirical results, while limited in their scope (see below), are promising and show consistent improvements in both terms of FID scores and efficiency.",
"weaknesses": "While the MH-corrected acceleration mechanism is interesting, the claimed efficiency improvement is not convincing. The method truncates the reverse diffusion at an intermediate noise level \\tau runs a short chain there, and then further denoises each accepted sample from \\tau -> 0. Thus, every accepted sample still requires a full denoising segment, meaning there is no intrinsic saving *per sample* unless multiple final samples share the same denoising sequence from T to \\tau. Again, if the goal is to generate one image at inference time (in the realistic setting), there is no saving, as far as I understand. \n\nIn the \"amortized\" setting, where one truncated trajectory is used for several final samples, the expected NFE per sample can indeed drop, as shown in Proposition 4.2. However, if only one sample is drawn per trajectory, the method would be slower, not faster, due to the added proposal evaluations. The paper would benefit from explicitly stating this amortization assumption or correcting my understanding of the paper. \n\nWhile the paper emphasizes improved efficiency and reduced NFEs, it does not adequately situate the proposed MH-based acceleration among existing approaches explicitly designed for fast diffusion sampling. In particular, recent methods such as Tong et al., “Learning to Discretize Denoising Diffusion ODEs (LD3)” (ICLR 2025) and the references mentioned herein (which also work with a trained diffusion model) should be compared to. Relating to my point above, the comparison should be done for generating one single sample for the same number of time steps (plus the overhead of your discriminator), not averaged over a generation of a batch of l images.",
"questions": "Is my understanding correct that the improved efficiency is only observed for the (imo unrealistic assumption) that multiple images are generated for the same prior noise?\n\nWhat is the exact goal of the theorems and how do they relate to practical settings? E.g., Prop 4.2: how are all the assumptions made implicitly here realistic?\n\nWhy are you not comparing to other methods for reducing the step size without having to train a new diffusion model?\n\nWhy did you not also make a comparison for generating a single (or a small number of) images?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-12T21:21:22",
"modification_date": "2025-11-12T21:21:22",
"review_url": "https://openreview.net/forum?id=kWl13kRJTQ¬eId=WL3lfb3jFc",
"license": "CC BY 4.0"
},
{
"id": "GFCGAPdMj0",
"forum": "kWl13kRJTQ",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14955/Reviewer_jfbK",
"reviewer_name": "Reviewer_jfbK",
"rating": 6,
"confidence": 4,
"soundness": 4,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes AC-Sampler, a “accelerator-corrector” for diffusion models that jumps to an intermediate timestep (instead of starting at pure noise) and then applies Metropolis–Hastings (MH) with a MALA proposal built from the pretrained score network. This both shortens the reverse trajectory (speedup) and, via MH acceptance, corrects samples so their marginal at targets the true distribution. A time-dependent discriminator provides an estimate of the density ratio so the MH acceptance probability is tractable.",
"strengths": "1. Clever decomposition of the acceptance ratio and use of a time-dependent discriminator make the MH step closed-form and cheap to evaluate。\n\n2. Theoretical guarantees: Expected NFE reduction when the acceptance rate exceeds a mild threshold; KL to the data distribution does not worsen and improves with more MALA steps (under stated integrability conditions).\n\n3. Designed to sit atop existing accelerators/correctors (e.g., DPM-v3, DG), often improving their FID at fewer steps.",
"weaknesses": "1. The proposed method does not appear to provide acceleration when the batch size is 1; in this regime, the acceleration gain vanishes.\n\n2. Hyperparameter sensitivity & chain design: Performance depends on the choice of target timestep 𝜏 and the proposal step size/SNR, as well as burn-in/chain length—these trade speed for acceptance/mixing.",
"questions": "1. Is the proposed method compatible with classifier-free guidance (CFG)? While CFG has known theoretical issues, modern large-scale systems rely on it heavily.\n\n2. Could the authors add a discussion in related work comparing their approach with SDE-based sampling methods [1–3] (which can be viewed as Langevin-type correctors) and with work that addresses training–inference mismatch [4,5]?\n\n\n[1] Gotta go fast when generating data with score-based models.\n\n[2] SA-solver: Stochastic adams solver for fast sampling of diffusion models.\n\n[3] Seeds: Exponential sde solvers for fast high-quality sampling from diffusion models.\n\n[4] Input perturbation reduces exposure bias in diffusion models.\n\n[5] Improved Diffusion-based Generative Model with Better Adversarial Robustness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T17:38:48",
"modification_date": "2025-11-12T13:28:12",
"review_url": "https://openreview.net/forum?id=kWl13kRJTQ¬eId=GFCGAPdMj0",
"license": "CC BY 4.0"
},
{
"id": "8QE9utwvt8",
"forum": "kWl13kRJTQ",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14955/Reviewer_pPCw",
"reviewer_name": "Reviewer_pPCw",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes a sampling algorithm called Accelerator-Corrector Sampler (AC Sampler) that both accelerates and corrects diffusion sampling. This sampler is based on Metropolis-Hastings (MH) algorithm. The acceleration is achieved via a “warm-start” by performing denoising from prior distribution to a target tilmestep $\\tau$ which serves as the initial sample of MCMC. The MH acceptance probability is computed with the help of score function and also an additional time-dependent discriminator. This time-dependent discriminator is trained to predict the likelihood ratio of the unknown target data distribution and model’s marginal distribution at time $t$. The other terms in the expression for acceptance probability are Gaussian distributions and can be easily computed. \n\nThe advantages of AC-Sampler have been shown on CIFAR-10, CelebA-HQ 256 and ImageNet- 64 and 256. AC-Sampler can also be composed with other SOTA samplers such as EDM’s Heun sampler, discriminator guidance, DPM-v3 etc. and it results in improved FID score. In many cases, the average NFEs are also comparable or less, which is ideal.",
"strengths": "1. The proposed method is orthogonal to many existing samplers and can be combined with them as indicated in Table 1 and Table 2. Further, this composition results in improved FID in general.\n2. The proposed algorithm seems to have better mode coverage than EDM as indicated by Recall metrics. This also has been qualitatively demonstrated against DDPM solver in Figure 6.\n3. The paper provides theoretical proof (Theorem 4.3 and 4.4) which shows that the generated sample distribution with AC-Sampler is closer to the true data distribution in terms of KL divergence. The paper also provide results on the expected reduction in the number of NFEs.",
"weaknesses": "1. The method requires training an additional time-dependent discriminator however it needs to be done only one time. There will also be additional overhead from this discriminator during sampling. It is unclear if the forward pass through the discriminator is accounted for in NFEs. The discriminator also uses a pre-trained ADM classifier as a feature extractor which might not be readily available for all datasets. \n2. There is a potential mismatch between theory and practice. The practical implementation of MH in Algorithm 1 employs “propose-until-accept” design. This can introduce stationary bias as mentioned in Appendix E. This should be highlighted in the main paper. This also means that in this case, the chain wouldn’t converge to the desired target distribution $q_T$ but would rather converge to a different distribution due to the bias. \n3. Appendix B reports poor performance on CelebA-HQ 256x256 where the method in the main paper doesn’t scale to high dimensional data. Appendix B proposes to do MH algorithm in the joint space of time and data. This suggests that the primary algorithm from the main paper is not robust. It is also unclear if this method can be applied to Text-to-image models. \n4. Additional overhead in terms of wall clock time over many samplers such as DDIM, DPM-v3 etc. is unclear from the paper. There is only a comparison against EDM’s Heun sampler in the paper. In some cases, the NFE reduction is not very significant and therefore confidence intervals or the standard deviation needs to be reported.\n5. The performance of the method is quite sensitive to the choice of $\\tau$ as indicated in Table 9. In addition, there are many hyper parameters to tune such as burn-in length, number of steps to skip, number of parallel chains etc.",
"questions": "1. This work is probably relevant and could be included in related works:\nScore-Based Metropolis-Hastings Algorithms, Ahmed Aloui, Ali Hasan, Juncheng Dong, Zihao Wu, Vahid Tarokh, 2024\n3. What is the typical length of MCMC chain i.e. how main times do we need to repeat Algorithm 1 before generating the final sample?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T02:29:21",
"modification_date": "2025-11-12T13:28:12",
"review_url": "https://openreview.net/forum?id=kWl13kRJTQ¬eId=8QE9utwvt8",
"license": "CC BY 4.0"
}
] |
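Since several of the points above turn on how the MH acceptance probability is assembled, here is a generic batched Metropolis-Hastings accept/reject step as a minimal sketch. The `propose` and `log_target` callables are assumptions: in an AC-Sampler-style setup, `log_target` would be built from the time-dependent discriminator's estimated log density ratio and `propose` from a MALA move driven by the score network; neither component is reproduced here.

```python
import torch

def mh_step(x, propose, log_target):
    """One batched Metropolis-Hastings accept/reject step.

    propose(x) -> (x_prop, log_q_fwd, log_q_bwd), where log_q_fwd is
    log q(x_prop | x) and log_q_bwd is log q(x | x_prop), both shape (B,).
    log_target(x) -> unnormalized log-density of the target, shape (B,).
    Both callables are placeholders for sampler-specific components.
    """
    x_prop, log_q_fwd, log_q_bwd = propose(x)
    # Log acceptance ratio: pi(x') q(x | x') / (pi(x) q(x' | x)).
    log_alpha = log_target(x_prop) - log_target(x) + log_q_bwd - log_q_fwd
    accept = torch.log(torch.rand_like(log_alpha)) < log_alpha
    # Keep the current sample wherever the proposal was rejected.
    mask = accept.view(-1, *([1] * (x.dim() - 1))).to(x.dtype)
    return mask * x_prop + (1 - mask) * x, accept
```

Note that a propose-until-accept loop, as flagged in the third review, is not the same chain: re-proposing until acceptance changes the stationary distribution, whereas the step above (which keeps the current state on rejection) leaves the target invariant.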
nRl7D1D3qf
|
https://openreview.net/forum?id=nRl7D1D3qf
|
Spatial Sign based Direct Sparse Linear Discriminant Analysis for High Dimensional Data
| 3.333333
| 3.666667
|
[
2,
4,
4
] |
[
4,
3,
4
] | 3
|
[
"High dimensional data",
"Linear discriminant analysis",
"Spatial-sign"
] |
Robust high-dimensional classification under heavy-tailed distributions without losing efficiency is a central challenge in modern statistics and machine learning. However, most existing linear discriminant analysis (LDA) methods are sensitive to deviations from normality and may suffer from suboptimal performance in heavy-tailed settings. This paper investigates the robust LDA problem with elliptical distributions in high-dimensional data. Our approach constructs stable discriminant directions by leveraging a robust spatial sign-based mean and covariance estimator, which allows accurate estimation even under extreme distributions. We demonstrate that SSLDA achieves an optimal convergence rate in terms of both misclassification rate and estimation error. Our theoretical results are further confirmed by extensive numerical experiments on both simulated and real datasets. Compared with state-of-the-art approaches, the SSLDA method offers superior finite-sample performance and notable robustness against heavy-tailed distributions.
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
|
https://openreview.net/pdf?id=nRl7D1D3qf
| 2025-09-18T22:55:01
| 3
|
[
{
"id": "iGiMRo6ObX",
"forum": "nRl7D1D3qf",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12368/Reviewer_pWtR",
"reviewer_name": "Reviewer_pWtR",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper focuses on the direct sparse linear discriminant analysis for high-dimensional classification. The proposed SSLDA directly estimates the discriminant direction under the assumption of elliptical distribution, which accelerates the training efficiency without corrupting the final accuracy. Moreover, the spatial sign-based methodology is introduced to handle the heavy-tailed outliers. Theoretical and experimental results demonstrate the superior classification ability of the proposed SSLDA. However, the work seems to be incremental, and the presentation is relatively poor. The detailed comments are summarized in the weakness list. Overall, I think this paper fails to reach the borderline of ICLR.",
"strengths": "1. This paper introduces the spatial sign-based methodology to the classific LDA algorithm, which can provide a reference for further research. \n2. Experiments show the effectiveness of spatial sign-based theory on enhancing LDA.",
"weaknesses": "1. The proposed SSLDA seems to be a straightforward combination of existing technologies. As discussed in Section 1, the spatial-sign-based methodology is a mature tool for high-dimensional data classification, and it has been integrated into many machine learning algorithms. This paper combines the spatial-sign-based approach with LDA straightforwardly, without making enough novel and significant improvements.\n2. In Section 1, the first main contribution states that ‘we establish theoretical results for SSLDA in the sparse scenario’. What exactly does this theoretical result mean? There is a lack of detailed explanation.\n3. Section 1 provides a detailed history of LDA with high-dimensional classification, which is somewhat long-winded, and fails to elaborate on the key concepts relevant to SSLDA. What are spatial sign and elliptical distribution? The author should appropriately reduce the review of previous works, and provide a more detailed introduction to SSLDA. It is necessary to clearly and intuitively state why the spatial sign can handle high-dimensional long-tailed distributions.\n4. The comparative algorithms are too old. The latest was published in 2019.\n5. SSLDA is regarded as a robust classification approach. However, there are no experiments to test the robustness of SSLDA on handling outliers and noisy points.\n6. Some equations lack punctuation, such as Eqs. (3) and (5).",
"questions": "Please see the weakness list. There are no more questions.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T17:01:07",
"modification_date": "2025-11-12T12:54:37",
"review_url": "https://openreview.net/forum?id=nRl7D1D3qf¬eId=iGiMRo6ObX",
"license": "CC BY 4.0"
},
{
"id": "amXhqHVJa0",
"forum": "nRl7D1D3qf",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12368/Reviewer_VSmU",
"reviewer_name": "Reviewer_VSmU",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces Spatial Sign-based Direct Sparse Linear Discriminant Analysis (SSLDA), a novel classification method designed to address the critical challenge of robust high-dimensional classification under heavy-tailed distributions. The authors identify that classical Linear Discriminant Analysis (LDA) and its high-dimensional sparse variants often fail when data deviates from the Gaussian assumption, as they rely on non-robust sample mean and covariance estimators.",
"strengths": "This paper presents a robust high-dimensional classifier, Spatial Sign-based Sparse LDA (SSLDA), which directly estimates the optimal discriminant direction under elliptical distributions. The method's core innovation lies in replacing conventional, non-robust estimators with the spatial median and the spatial sign covariance matrix, enabling accurate classification even for heavy-tailed data where standard methods fail. The authors provide strong theoretical guarantees, proving the estimator's consistency and establishing optimal convergence rates for the misclassification error.",
"weaknesses": "- **1. Limited Discussion on the Elliptical Distribution Assumption:** The paper's entire theoretical framework relies on the assumption that the data follows an elliptical distribution. This is a potential limitation for real-world datasets that may exhibit significant skewness or more complex, non-elliptical dependency structures. The work could be improved by explicitly discussing the robustness of SSLDA to violations of this assumption. A constructive suggestion would be to include a simulation where data is generated from a clearly non-elliptical (e.g., skewed) distribution to empirically explore the method's performance boundaries and better define its applicability.\n- **2. Scope of Experimental Validation Could Be Broadened:** Although the experiments are well-designed, they could be more comprehensive to fully demonstrate generalizability. Specifically, the empirical validation relies heavily on synthetic data and a single, specific image classification task. To more convincingly argue for the method's broad utility, the authors could include experiments on a wider range of real-world benchmark datasets from other domains where high-dimensional, heavy-tailed data is common, such as finance (e.g., stock returns), genomics, or text analysis. This would provide stronger evidence of the method's practical impact beyond the presented application.\n- **3. Lack of Comparison with Alternative Robust Sparse Methods:** The paper effectively compares SSLDA against several leading sparse LDA methods. However, it does not include comparisons with other classes of robust, high-dimensional classifiers that are not based on the LDA framework, such as robust sparse logistic regression or support vector machines with robust kernels. Including such comparisons would help to position SSLDA more clearly within the broader landscape of robust classification tools and would provide a more complete picture of its relative strengths and weaknesses.",
"questions": "- **1. On the Robustness Beyond Elliptical Distributions:** The theoretical guarantees of SSLDA are firmly established under the elliptical distribution assumption. Could you please comment on the empirical robustness of SSLDA when this assumption is violated, for instance, with significantly skewed distributions? Have you conducted any preliminary tests on such data? A discussion on the expected behavior or potential modifications to handle non-elliptical data would greatly help users understand the boundaries of the method's applicability.\n- **2. On the Generalizability and Practical Impact:** The experimental results on synthetic data and the image classification task are compelling. To further demonstrate the general utility of SSLDA, it would be highly beneficial to see its performance on one or two additional benchmark datasets from domains known for high-dimensional, heavy-tailed data, such as finance (e.g., asset returns) or genomics. This would significantly strengthen the claim of the method's broad practical impact.\n- **3. On the Comparison with the Broader Robust Classification Landscape:** The paper provides excellent comparisons against other sparse LDA methods. Could you discuss the rationale for not including comparisons with other paradigms for robust high-dimensional classification, such as $l_1$-regularized robust logistic regression or sparse SVM with robust kernels? A discussion on how you expect SSLDA to perform relative to these alternative approaches, or results from such a comparison, would help to better position your contribution within the entire field of robust classification, not just the LDA family.\n- **4. On the Choice of Tuning Parameters:** The method involves a regularization parameter $\\lambda_n$. It would be helpful for practitioners if you could provide more detailed guidance on the selection of this parameter in practical scenarios, especially when the underlying distribution is unknown. Did you observe any robust strategies for choosing $\\lambda_n$ across different distributional settings in your simulations?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T11:24:49",
"modification_date": "2025-11-12T12:54:37",
"review_url": "https://openreview.net/forum?id=nRl7D1D3qf¬eId=amXhqHVJa0",
"license": "CC BY 4.0"
},
{
"id": "uvvCsmngmO",
"forum": "nRl7D1D3qf",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12368/Reviewer_4A8h",
"reviewer_name": "Reviewer_4A8h",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes SSLDA (Spatially Sign-based LDA), a method that constructs discriminant directions using robust estimators of mean and covariance based on spatial signs, which are less sensitive to extreme values than classical moment-based estimators. Unlike traditional LDA, which relies on sample means and covariances (vulnerable to heavy tails), SSLDA uses spatial sign transformations to achieve stability.",
"strengths": "Unlike traditional LDA, which assumes Gaussian data and is sensitive to outliers, SSLDA leverages spatial sign-based estimators for mean and covariance.\n\nThis provides strong theoretical guarantees, ensuring that SSLDA performs well even in high-dimensional settings.\n\nSimulation studies and real-data experiments demonstrate that SSLDA outperforms state-of-the-art robust LDA methods.",
"weaknesses": "SSLDA relies on the assumption that data follows an elliptical distribution.\n\nThe performance of SSLDA depends on the choice of spatial sign scaling parameters.\n\n It is crucial to explore the model's extension to multi-class scenarios. Additionally, the paper does not explicitly address or discuss the computational complexity associated with SSLDA.\n\nThe experimental evaluation on real-world datasets remains limited.",
"questions": "Some robust LDA such as regularized LDA and L1-norm LDA should be addressed",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T17:59:27",
"modification_date": "2025-11-12T12:54:38",
"review_url": "https://openreview.net/forum?id=nRl7D1D3qf¬eId=uvvCsmngmO",
"license": "CC BY 4.0"
}
] |
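As referenced in the reviews above, here is a minimal NumPy sketch of the spatial-sign building blocks: the spatial (geometric) median via Weiszfeld iterations and the spatial sign covariance matrix. This illustrates the generic estimators only; how SSLDA assembles them into a sparse discriminant direction, and any scaling or shrinkage it applies, is not reproduced here.

```python
import numpy as np

def spatial_median(X, n_iter=200, tol=1e-8):
    """Spatial (geometric) median of the rows of X via Weiszfeld iterations."""
    m = X.mean(axis=0)
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(X - m, axis=1), 1e-12)
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            return m_new
        m = m_new
    return m

def spatial_signs(X, center):
    """Row-wise spatial sign U(x) = (x - center) / ||x - center||."""
    D = X - center
    n = np.maximum(np.linalg.norm(D, axis=1, keepdims=True), 1e-12)
    return D / n

def spatial_sign_covariance(X):
    """Spatial sign covariance matrix: average outer product of the signs."""
    S = spatial_signs(X, spatial_median(X))
    return S.T @ S / X.shape[0]
```

The robustness intuition is that each observation enters only through its direction from the center, so a single extreme observation has bounded influence on both estimators.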
|
JxmjzC6syB
|
https://openreview.net/forum?id=JxmjzC6syB
|
Benchmarking Stochastic Approximation Algorithms for Fairness-Constrained Training of Deep Neural Networks
| 3.5
| 3.75
|
[
4,
2,
6,
2
] |
[
4,
4,
4,
3
] | 4
|
[
"Fair Machine Learning",
"stochastic approximation",
"Augmented Lagrangian",
"Sequential Quadratic Programming",
"benchmarking"
] |
The ability to train Deep Neural Networks (DNNs) with constraints is instrumental in improving the fairness of modern machine-learning models. Many algorithms have been analysed in recent years, and yet there is no standard, widely accepted method for the constrained training of DNNs. In this paper, we provide a challenging benchmark of real-world large-scale fairness-constrained learning tasks, built on top of the US Census (Folktables, Ding et al., 2021). We point out the theoretical challenges of such tasks and review the main approaches in stochastic approximation algorithms. Finally, we demonstrate the use of the benchmark by implementing and comparing three recently proposed, but as-of-yet unimplemented, algorithms both in terms of optimization performance and fairness improvement. We will release the code of the benchmark as a Python package after peer review.
|
We provide a benchmark for comparing stochastic approximation algorithms, based on real-world fairness-constrained learning problems.
|
datasets and benchmarks
|
https://openreview.net/pdf?id=JxmjzC6syB
| 2025-09-20T18:06:49
| 4
|
[
{
"id": "FJkAp0M492",
"forum": "JxmjzC6syB",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24989/Reviewer_1Hbo",
"reviewer_name": "Reviewer_1Hbo",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper presents a benchmark for evaluating stochastic approximation algorithms in fairness-constrained training of deep neural networks. Built on the US Census dataset, the benchmark enables large-scale experiments on fairness objectives formulated as constrained ERM problems. The authors review existing algorithms and implement three recent ones, namely Stochastic Ghost, Stochastic Smoothed and Linearized Augmented Lagrangian Method, and Stochastic Switching Subgradient, and compare them to SGD baselines. The study highlights the lack of unified toolkits and provides a first step toward standardized evaluation for fairness-constrained optimization prooblems.",
"strengths": "1. This work provides a reproducible and extensible benchmark framework for fairness-constrained deep learning, filling a gap in the literature where no unified platform existed. \n2. The writing is very clear, and the notations are consistent. I appreciate Table 3, where the authors review a wide range of stochastic constrained optimization algorithms with a structured taxonomy and theoretical assumptions.\n3. The work evaluates multiple fairness criteria, independence, separation, sufficiency, and Wasserstein distance, showing nuanced trade-offs among methods.",
"weaknesses": "1. The paper primarily implements existing algorithms rather than introducing a new one. While benchmarking is valuable, this may limit perceived theoretical contribution.\n2. Only one dataset with a binary protected attribute is used. The scalability and generalization to multiple attributes have not been tested.\n3. The presentation of the experimental results can be improved. The current figures are difficult to read.\n4. There is no discussion of hyperparameter search across algorithms. Why are the parameters set as in lines 366-375? The results could reflect suboptimal settings rather than intrinsic algorithmic differences.",
"questions": "1. See some questions in Weaknesses.\n2. Could the work extend to multi-group or intersectional attributes?\n3. Are there plans to include computational efficiency or memory usage comparisons?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T15:40:47",
"modification_date": "2025-11-12T18:27:53",
"review_url": "https://openreview.net/forum?id=JxmjzC6syB¬eId=FJkAp0M492",
"license": "CC BY 4.0"
},
{
"id": "TfdcgdzTnE",
"forum": "JxmjzC6syB",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24989/Reviewer_Rxvy",
"reviewer_name": "Reviewer_Rxvy",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 3,
"summary": "This paper provides a benchmark of several stochastic optimization algorithms for training deep neural networks under different fairness constraints. They consider datasets built on top of the US Census (Folktables) dataset. They consider the popular group fairness measures independence (statistical parity), sufficiency, and separation. Several experiments have been included in the paper that compare the performance of these existing optimization techniques in minimizing the measures of fairness over the datasets.",
"strengths": "-- The paper has included a lot of experiments on different algorithmic (stochastic) variants of implementing fairness as a constraint during training. Methods considered include: (i) Stochastic ghost method; (ii) Stochastic smoothed and linearized AL method; (iii) Stochastic switching subgradient method, etc.\n\n-- They consider the three popular fairness notions, and also multiple datasets. \n\n-- Presentation is generally good.",
"weaknesses": "-- While the vast experiments are highly appreciated, I believe this paper is more suitable as a dataset/benchmark paper. The stochastic optimization algorithms already exist in the literature and have also been used for constraint optimization. The paper applies these constrained optimization variants for the specific constraint of group fairness and studies their performance. \n\n-- Indeed, the paper is quite comprehensive in their experimentation. But, still, the novelty would be limited for such a venue since it is more like a survey of applying different existing techniques to the fairness constraint and seeing the performance. The paper could also be better suited as a survey paper. Though there do exist several other survey papers on fairness in literature, and it would be important to highlight what is the technical gap in existing survey/benchmarking papers that this paper fills.\n\n-- The measures of fairness are mainly the three popular ones. \n\n-- Some works compare tradeoffs between different group fairness measures. E.g. The possibility of fairness: Revisiting the impossibility theorem in practice. \nIt would be good to compare with them.",
"questions": "Q1. What would be the gap in existing survey papers or benchmarking papers on algorithmic fairness that this paper fills?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T06:33:41",
"modification_date": "2025-11-12T18:27:53",
"review_url": "https://openreview.net/forum?id=JxmjzC6syB¬eId=TfdcgdzTnE",
"license": "CC BY 4.0"
},
{
"id": "C1USHlXDff",
"forum": "JxmjzC6syB",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24989/Reviewer_S3CZ",
"reviewer_name": "Reviewer_S3CZ",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This paper presents a benchmarking study and toolbox for evaluating stochastic approximation algorithms applied to fairness-constrained training of deep neural networks. The authors consider the general constrained empirical risk minimization (ERM) problem, where fairness is imposed through hard constraints (e.g., demographic parity, equal opportunity, equalized odds). They provide an open-source implementation integrating four major stochastic optimization algorithms (Stochastic Ghost (StGh), SSL-ALM, Augmented Lagrangian Method (ALM), and Stochastic Switching Subgradient (SSw)) within a unified framework built on PyTorch and Folktables datasets. The benchmark allows users to automatically construct constrained training formulations and apply fairness constraints across up to 5.7 billion protected subgroups from census-based datasets. Extensive experiments on the ACSIncome dataset compare the convergence speed, fairness violation, and test performance of each algorithm, as well as their robustness under different fairness metrics and constraint formulations. The work aims to standardize empirical evaluation practices in fairness-constrained deep learning and offer a reproducible experimental testbed for future research",
"strengths": "1- Evaluates four distinct fairness-constrained optimization algorithms under identical experimental setups, providing valuable comparative insights.\n\n2- Offers a transparent, well-engineered implementation with all datasets, hyperparameters, and metrics clearly documented.\n\n3- Bridges fairness theory with realistic deep learning setups, enabling reproducible fairness experiments on real data.\n\n4- Covers three key fairness notions (independence, separation, sufficiency) and links them to optimization constraints.\n\n5- The benchmark includes stochastic ghost gradient methods, augmented Lagrangian, and subgradient-based solvers, giving a broad coverage of optimization paradigms.",
"weaknesses": "1- Experiments are mainly conducted on a single dataset (ACSIncome), which restricts the scope of empirical validation. Inclusion of more varied domains (e.g., image or language tasks) would strengthen the claim of generality.\n\n2- While results are reported, deeper analysis of when and why certain algorithms perform better (e.g., under which fairness metrics or subgroup imbalances) is missing.\n\n3- Although billions of potential subgroup combinations are mentioned, the experiments do not convincingly demonstrate performance at that scale.\n\n4- The paper does not clearly discuss how the proposed framework will be maintained or integrated with existing fairness toolkits, which may limit its long-term impact.",
"questions": "1- How does the framework handle multiple simultaneous protected attributes, especially when fairness constraints interact (e.g., intersectional fairness)?\n\n2- Are the results robust to different data distributions or dataset shifts (e.g., subsampled or noisy features)?\n\n3- How computationally expensive are these fairness constraints for large-scale DNNs compared to regularization-based approaches?\n\n4- Could the benchmark include group fairness metrics beyond accuracy, such as calibration or counterfactual fairness?\n\n5- How do hyperparameter settings for fairness constraints (e.g., δ thresholds) influence the convergence behavior of the tested algorithms?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T05:00:39",
"modification_date": "2025-11-12T18:27:54",
"review_url": "https://openreview.net/forum?id=JxmjzC6syB¬eId=C1USHlXDff",
"license": "CC BY 4.0"
},
{
"id": "kuci4hadD3",
"forum": "JxmjzC6syB",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24989/Reviewer_2stX",
"reviewer_name": "Reviewer_2stX",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper introduces a new benchmark, based on the US census, for testing fairness-constrained learning tasks. The dataset is tested on various methods from the literature, which are implemented in a toolbox.",
"strengths": "- A standard benchmark specific for testing the fairness of machine learning algorithms can surely be an interesting contribution, and the paper proposes to address this gap",
"weaknesses": "- While the main claimed contribution of the paper is to provide a benchmark for fairness-constrained learning, I found the structure of the paper unclear, with most of the text reviewing existing literature and not many details on the introduced benchmark and dataset and on what the key novelties and results of the paper are.\n\n- The proposed dataset is an instance of the ACS dataset, already published in [Frances Ding, Moritz Hardt, John Miller, and Ludwig Schmidt. Retiring adult: New datasets for fair machine learning. Advances in Neural Information Processing Systems, 34, 2021], and already used in the context of fairness applications. Consequently, the main contributions of the paper are the implementation and existing comparison of existing algorithms. In my opinion, this contribution, while potentially useful, is too incremental for publication in ICLR",
"questions": "- What are the main contributions of the paper? Is the proposed dataset an instance of the ACS dataset? What does it make it particularly suitable for fairness testing compared to the original ACS dataset?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T18:29:30",
"modification_date": "2025-11-12T18:27:54",
"review_url": "https://openreview.net/forum?id=JxmjzC6syB¬eId=kuci4hadD3",
"license": "CC BY 4.0"
}
] |
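As referenced in the first review above, here is a toy stochastic augmented-Lagrangian step for a single demographic-parity constraint, using the standard PHR (Powell-Hestenes-Rockafellar) multiplier update. All names (`lam`, `rho`, `delta`) and the sigmoid-based soft relaxation are illustrative assumptions; the algorithms actually benchmarked (Stochastic Ghost, SSL-ALM, Stochastic Switching Subgradient) differ in how they smooth, linearize, or schedule such updates.

```python
import torch
import torch.nn.functional as F

def dp_gap(logits, group):
    """Soft demographic-parity gap on a mini-batch: difference between the
    two groups' mean predicted positive rates (assumes both protected
    groups are present in the batch)."""
    p = torch.sigmoid(logits)
    return p[group == 0].mean() - p[group == 1].mean()

def al_step(model, opt, x, y, group, lam, rho, delta):
    """One stochastic augmented-Lagrangian (PHR) primal update for the
    constraint c(theta) = |dp_gap| - delta <= 0, followed by dual ascent
    on the multiplier `lam` (a non-negative scalar tensor)."""
    logits = model(x).squeeze(-1)
    loss = F.binary_cross_entropy_with_logits(logits, y.float())
    c = dp_gap(logits, group).abs() - delta
    # PHR penalty: (rho/2) * max(0, c + lam/rho)^2 handles the inequality.
    penalty = 0.5 * rho * torch.clamp(c + lam / rho, min=0.0) ** 2
    opt.zero_grad()
    (loss + penalty).backward()
    opt.step()
    with torch.no_grad():
        lam = torch.clamp(lam + rho * c.detach(), min=0.0)  # dual update
    return lam
```

In a full run, `lam` would start at zero and `rho` might be increased on a schedule; note also that the mini-batch estimate of the constraint is biased (the absolute value of a noisy mean is not the mean of absolute values), which is one of the difficulties such stochastic algorithms must contend with.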
kXhPkDaFbJ
|
https://openreview.net/forum?id=kXhPkDaFbJ
|
ProtoKV: Long-context Knowledges Are Already Well-Organized Before Your Query
| 5
| 3
|
[
4,
4,
6,
6
] |
[
3,
2,
3,
4
] | 4
|
[
"Large Language Model",
"KV Cache"
] |
Modern Large Language Models (LLMs) face fundamental challenges in processing long text sequences due to the quadratic complexity of attention mechanisms. Key-Value (KV) cache retention strategies mitigate this issue by selectively preserving salient KV pairs for autoregressive generation. However, existing methods fail to adequately and efficiently preserve the semantic integrity of the compressed representations. In this paper, we discover a prevalent phenomenon in LLMs: within the key embedding space, while most tokens exhibit similarity with their contextual neighbors (which we term position-determined tokens), a small subset of special tokens serving as semantic anchors consistently shows a local deviation property and forms one or several clusters (which we term semantic-anchored tokens). Motivated by this observation, we propose ProtoKV that separately processes these two token categories for KV cache compression. Within this framework, we first construct semantic prototypes based on the inherent characteristics of the two token categories, and subsequently form clusters of semantically similar tokens as basic compression units. This approach preserves semantic integrity with high computational efficiency. Experiments on LongBench demonstrate that ProtoKV achieves 2.11% higher accuracy than state-of-the-art methods under matched memory constraints.
|
We discovered a new paradigm for key distribution in LLMs and used it to guide the KV cache compression strategy.
|
foundation or frontier models, including LLMs
|
https://openreview.net/pdf?id=kXhPkDaFbJ
| 2025-09-14T16:29:35
| 4
|
[
{
"id": "cq17GdgvB8",
"forum": "kXhPkDaFbJ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5042/Reviewer_mS4R",
"reviewer_name": "Reviewer_mS4R",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The authors discover that while most tokens demonstrate high similarity (in key space) with their contextual neighbors (position-determined tokens, PDTs), a subset of tokens (dubbed \"semantic-anchored tokens\", SATs) deviate from this property while accumulating a significant amount of attention. The authors construct lsh-based prototypes for SATs and chunk-based protoypes for PDTs as compression units. Clusters are ranked according to an importance metric, tokens are assigned to these clusters, and tokens from the top-ranked clusters are retained until the budget is met. This approach (ProtoKV) outcompetes baselines by >2% on LongBench and outcompetes other baselines at a lower budget and ties others at higher budgets.",
"strengths": "- This appears to be the first work to discover SATs. The experiment distinguishing them from sinks validates this unique token type. \n\n- The clustering approach is principled according to the token types. \n\n- ProtoKV defeats several popular strategies across varying model families on LongBench. \n\n- The success of the method at a very small budget (64 tokens) is attractive for severely resource-constrained systems.",
"weaknesses": "- LongBench and RULER are known to stack lots of noisy context around sparsely distributed signals, thus possibly rendering the appearance of SATs as unique to these types of benchmarks. The appearance of this token type does not appear to be explored over a greater variety of long-context tasks.\n\n- Besides Llama-3-8B Instruct, only older models are tested. The authors should consider evaluating their approach on newer Qwen, Phi models, and/or the latest Mistral-7B. \n\n- The performance gain on RULER is quite minimal. While the average improvement on LongBench is +2%, the individual numbers on LongBench in Figure 34 are far less significant, where ProtoKV either incrementally wins or even loses against other baselines on a variety of tasks. This makes it difficult to determine whether ProtoKV is truly a worthwhile compression strategy.",
"questions": "- See weaknesses. \n - How does the method perform on RULER 16K or 32K?\n - Is H2O truly **that** bad on RULER (Table 2)? This doesn't seem to concur with other literature. \n - How does this approach fundamentally differ from the Reformer, which also chunks and groups tokens according to LSH buckets?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T17:22:55",
"modification_date": "2025-11-12T11:23:57",
"review_url": "https://openreview.net/forum?id=kXhPkDaFbJ¬eId=cq17GdgvB8",
"license": "CC BY 4.0"
},
{
"id": "fZNCziGOWd",
"forum": "kXhPkDaFbJ",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5042/Reviewer_Y76g",
"reviewer_name": "Reviewer_Y76g",
"rating": 4,
"confidence": 2,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper proposes ProtoKV, a semantic-aware KV cache compression framework for large language models (LLMs) that mitigates long-context inference costs. It introduces two token categories—Semantic-Anchored Tokens (SATs) and Position-Determined Tokens (PDTs)—and constructs hybrid semantic prototypes for each, guiding KV retention through cluster-based attention relevance. The method preserves semantic integrity while maintaining computational efficiency, outperforming baselines such as SnapKV, H2O, and ChunkKV by up to 2.11% on LongBench and achieving 97.3% retrieval accuracy in Needle-in-a-Haystack tests.",
"strengths": "•\tThe identification of SATs as clustering outliers in the key embedding space (Fig. 4–6) provides a new lens for understanding token semantics in LLMs. This insight grounds ProtoKV’s prototype-based compression design and distinguishes it from previous attention- or position-driven methods.\n•\tThe framework (Sec. 4.2–4.3) integrates Random Fourier Feature hashing (Eq. 7–8) and prototype-guided selection (Eq. 10–11), avoiding costly iterative clustering (Fig. 8). Pseudocode and reproducibility details are given (Appendix J), enhancing transparency.\n•\tProtoKV is compared with multiple baselines across three architectures (LLaMA-2, LLaMA-3, Mistral) and two benchmarks (LongBench, Ruler), showing robustness under varying KV budgets (64–512) and across tasks (Fig. 9, 10, 12–14). The ablation studies further isolate the roles of prototype number and SAT count.",
"weaknesses": "•\tWhile the “local deviation property” (Eq. 4–6) is empirically supported, the causal explanation (Sec. 3.2) remains qualitative. There is no analytical or statistical validation that SATs correspond to meaningful semantic units across layers or models.\n•\tKey hyperparameters such as neighborhood window $\\kappa$, prototype number, and threshold $\\beta$ are only briefly tuned (Fig. 12–13) without robustness metrics or cross-dataset variance, limiting confidence in generalizability.\n•\tAlthough computational cost is compared (Fig. 14), there is no wall-clock latency or memory breakdown versus model size (e.g., >8 B models), and no statistical significance tests for accuracy gains (Table 2–4).\n•\tThe contribution of each stage (SAT detection, LSH clustering, observation window) is only partially evaluated; removing or modifying these modules’ effects is not explicitly quantified.",
"questions": "1.\tCould the authors provide layer-wise or head-wise distributions of SATs to clarify whether semantic anchoring is model-general or architecture-specific?\n2.\tHow does ProtoKV behave under streaming or dynamic decoding, where prefilling-only assumptions may not hold?\n3.\tCan the authors include significance analysis or confidence intervals to verify the reported 2.11 % average improvement?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T20:01:11",
"modification_date": "2025-11-12T11:23:58",
"review_url": "https://openreview.net/forum?id=kXhPkDaFbJ¬eId=fZNCziGOWd",
"license": "CC BY 4.0"
},
{
"id": "Y5r1lM5lxN",
"forum": "kXhPkDaFbJ",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5042/Reviewer_KnCx",
"reviewer_name": "Reviewer_KnCx",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper introduces ProtoKV, a novel framework designed to enhance the efficiency of Key-Value (KV) cache retention in large language models (LLMs) when processing long text sequences. The authors identify two categories of tokens within the key embedding space: Position-Determined Tokens (PDTs), which maintain strong similarity with their contextual neighbors, and Semantic-Anchored Tokens (SATs), which exhibit local deviation and form clusters. By leveraging the unique properties of these two token types, ProtoKV constructs semantic prototypes that improve KV cache compression while preserving semantic integrity. Experimental results demonstrate that ProtoKV outperforms existing state-of-the-art methods by achieving an average accuracy improvement of 2.11% on the LongBench benchmark, showcasing its effectiveness in maintaining high retrieval accuracy with minimal KV cache retention.",
"strengths": "- Good Presentation. The presentation of this paper is very clear, the structure is reasonable, and the presentation of the figures and tables is also very precise.\n- Reasonable Idea: The paper presents a novel perspective on token categorization in LLMs, particularly the identification and utilization of SATs as semantic anchors, which is a significant contribution to the field.\n- Good Performance: The authors provide comprehensive experiments across various benchmarks, clearly demonstrating the advantages of ProtoKV over existing methods in terms of accuracy and efficiency.\n- Insightful Analyses.",
"weaknesses": "- Complexity of Implementation: The proposed method may introduce additional complexity in the implementation of LLMs, which could be a barrier for adoption in certain applications or by practitioners with limited resources.\n- Limited Comparison with Other Methods: While the paper provides lots of experiments to evaluate ProtoKV, a more extensive analysis involving **more recent SOTA approaches** could strengthen the argument for its superiority.",
"questions": "Please refer to the weakness part.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T22:12:11",
"modification_date": "2025-11-12T11:23:58",
"review_url": "https://openreview.net/forum?id=kXhPkDaFbJ¬eId=Y5r1lM5lxN",
"license": "CC BY 4.0"
},
{
"id": "O2DGUi9WBP",
"forum": "kXhPkDaFbJ",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5042/Reviewer_m3F1",
"reviewer_name": "Reviewer_m3F1",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 4,
"summary": "The paper introduces ProtoKV, a KV-cache compression method for LLMs. It first exploits a key structural insight: most tokens (“position-determined”) cluster with their neighbors, while a small set of “semantic-anchored” tokens (SAT) consistently deviate and form clusters. By splitting tokens into these two classes, ProtoKV builds separate semantic prototypes and compresses each class into semantically coherent clusters, preserving meaning while cutting memory. On LongBench it outperforms prior state-of-the-art techniques by 2.11\\% accuracy under the same memory budget.",
"strengths": "1. The paper is well written and excellently presented. Figure 1 clearly illustrates the paper's novel findings.\n\n2. The paper finds that semantic-anchored tokens, which exhibit the local deviation property, are important for generation. It provides exhaustive experimental results to demonstrate this.\n\n3. ProtoKV builds separate semantic prototypes and compresses each class into semantically coherent clusters. This method improves accuracy compared with existing baselines.",
"weaknesses": "1. The rationale behind locality-sensitive hashing applied to SAT is unclear. (Question 1)\n\n2. The time cost of ProtoKV may hinder its applicability. (Question 2)\n\n3. There are no error bounds for the key results. Quantitative robustness indicators such as standard deviations would help validate generalizable conclusions.",
"questions": "1. The paper claims that SATs are salient for generation, so why not select all SATs directly? The results as shown in Figure 12 illustrate that the SAT prototype number does not influence the accuracy, because ProtoKV selects all SATs, even though they are classified into different clusters. Additionally, what type of RFF-based hashing is it, locality-sensitive or random?\n\n2. Figure 14(b) shows that the average compression time exceeds half an hour. However, full attention computation usually takes several minutes for a long context. Do the users have to wait half an hour for KV cache compression?\n\n3. As shown in Figure 34, none of the methods match FullKV in terms of accuracy. Could ProtoKV achieves FullKV’s level of precision with a larger budget size, e.g., 1024 tokens?\n\n\n\nMinor comments:\n- Line 461, Eq. equation 8 -> Eq. 8\n- The caption of Figure 34 states that bold indicates the best performance and underline the second performance, but no text in the figure is bolded or underlined.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-22T10:32:49",
"modification_date": "2025-11-12T11:23:58",
"review_url": "https://openreview.net/forum?id=kXhPkDaFbJ¬eId=O2DGUi9WBP",
"license": "CC BY 4.0"
}
] |
qAfbeMal0m
|
https://openreview.net/forum?id=qAfbeMal0m
|
TimeExpert: Boosting Long Time Series Forecasting with Temporal Mix of Experts
| 2.5
| 3.75
|
[
2,
4,
2,
2
] |
[
3,
4,
4,
4
] | 4
|
[
"Time-Series",
"Mix of Experts",
"Lag Effects"
] |
Transformer-based architectures dominate time series modeling by enabling global attention over all timestamps, yet their rigid “one-size-fits-all” context aggregation fails to address two critical challenges in real-world data: (1) inherent lag effects, where the relevance of historical timestamps to a query varies dynamically; (2) anomalous segments, which introduce noisy signals that degrade forecasting accuracy.
To resolve these problems, we propose the Temporal Mix of Experts (TMOE)—a novel attention-level mechanism that reimagines key-value (K-V) pairs as local experts (each specialized in a distinct temporal context) and performs adaptive expert selection for each query via localized filtering of irrelevant timestamps. Complementing this local adaptation, a shared global expert preserves the Transformer’s strength in capturing long-range dependencies. We then replace the vanilla attention mechanism in popular time-series Transformer frameworks (i.e., PatchTST and Timer) with TMOE, without extra structural modifications, yielding our specific version TimeExpert and general version TimeExpert-G.
Extensive experiments on seven real-world long-term forecasting benchmarks demonstrate that TimeExpert and TimeExpert-G outperform state-of-the-art methods. Code will be released after acceptance.
|
other topics in machine learning (i.e., none of the above)
|
https://openreview.net/pdf?id=qAfbeMal0m
| 2025-09-01T22:36:11
| 4
|
[
{
"id": "mayMhGKz9x",
"forum": "qAfbeMal0m",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission382/Reviewer_uwnJ",
"reviewer_name": "Reviewer_uwnJ",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The authors propose a mixture of expert model for time series data and evaluate it on seven standard benchmarks, improving against various baselines.",
"strengths": "- good motivation\n- well written paper\n- reasonable evaluation",
"weaknesses": "- unclear / incomplete description of MoE mechanisms, especially how it adapts to different data characteristics and temporal patterns\n- good examples to show some of the main ideas but no examples showing the limits of the proposed method\n- given that the results are data-dependent, how to adapt to this dependence is left unclear",
"questions": "Treating key-value pairs as experts is quite unconventional and seem too fine granular. How does your MoE architecture compare to conventional LLM MoE architectures, and how do you select the number of KV pairs to catch different temporal patterns and lags? How do you adapt to different dataset characteristics?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T02:42:56",
"modification_date": "2025-11-12T10:45:07",
"review_url": "https://openreview.net/forum?id=qAfbeMal0m¬eId=mayMhGKz9x",
"license": "CC BY 4.0"
},
{
"id": "apQWb0azq1",
"forum": "qAfbeMal0m",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission382/Reviewer_EoMT",
"reviewer_name": "Reviewer_EoMT",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces the Temporal Mix of Experts (TMOE), a attention-level mechanism designed to address the limitations of standard Transformers in time series forecasting. TMOE reimagines KV pairs as \"local experts\" and employs an adaptive, top-k selection process to filter out irrelevant or anomalous timestamps, thus handling dynamic lag effects and noise. \n\nBy integrating TMOE into existing frameworks (TimeExpert and TimeExpert-G), the authors demonstrate SOTA performance on seven real-world long-term forecasting benchmarks.",
"strengths": "1. While there have been various attempts to capture temporal dependencies in time-series attention, the proposed Local Expert system based on a MOE framework is a novel and original approach to the problem.\n\n2. This mechanism leads to performance improvements. The paper demonstrates that the proposed TimeExpert models achieve SOTA results across seven real-world forecasting benchmarks, validating the effectiveness of the TMOE design.",
"weaknesses": "1. The justification for framing the mechanism as \"MOE\" is somewhat unclear. The core mechanism could be interpreted as a standard attention model modified with top-k filtering and an additional learned factor for temporal proximity, rather than a true Mixture-of-Experts architecture. The paper could benefit from a clearer distinction.\n\n2. The paper claims that TimeExpert excels at filtering noise and \"*PatchTST is similarly affected, with its prediction dragged downward and the overall amplitude suppressed across the forecast horizon. By comparison, TimeExpert remains largely unaffected, filtering out this transient and non-structural noise while maintaining focus on the more stable long-term dynamics.*\" (Section 4.2). However, this claim is substantiated only with experiments using a short lookback window of $L=96$. This experimental setup is insufficient to validate claims about \"long-term\" dynamics. To properly support its central hypothesis, the paper must include experiments with much longer input sequences (e.g., $L=336$ or $L=512$ like PatchTST) to demonstrate that the TMOE mechanism can indeed identify and utilize stable long-term patterns over extended historical data, rather than just performing well on short-term contexts.\n\n3. The paper lacks crucial qualitative analysis or visualizations to demonstrate how the Local Experts actually operate. There are no figures or analysis (e.g., heatmaps of selected indices, histograms of temporal distances) showing which experts are being selected by the TMOE mechanism. This omission makes it difficult to verify the central claims that the model is actually filtering anomalies or adaptively selecting relevant temporal lags, as opposed to just achieving good performance through other means.",
"questions": "Please refer to weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T03:59:22",
"modification_date": "2025-11-12T10:45:07",
"review_url": "https://openreview.net/forum?id=qAfbeMal0m¬eId=apQWb0azq1",
"license": "CC BY 4.0"
},
{
"id": "KiFY9GRzH0",
"forum": "qAfbeMal0m",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission382/Reviewer_GDHp",
"reviewer_name": "Reviewer_GDHp",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The manuscript proposes a sparse attention mechanism, HEA, built on temporal relevance between time steps, and claims that it can suppress outliers. A forecasting framework, TimeExpert, is then constructed around HEA and is reported to outperform baseline methods.",
"strengths": "The proposed TimeExpert model is evaluated on benchmark datasets and reports promising accuracy compared with recent state-of-the-art baselines. Its HEA module reduces attention complexity to $O(kn)$ by adaptively focusing on temporally related time steps, thereby avoiding computations on less relevant positions and improving efficiency.",
"weaknesses": "The novelty of the proposed HEA is not clearly established. It appears to follow prior sparse-attention designs (e.g., Informer) by attending each query to a selected subset of keys, but the key-selection procedure itself is insufficiently specified. In particular, the adaptive rule in Eq. (2) does not capture locality and seasonality simultaneously.\n\nIn addition, the embedding pipeline is not described, so the exact input representation fed into HEA is unclear, making it difficult to verify what temporal or cross-variable information is captured.\n\nThe core contribution of this manuscript is to propose a sparse attention mechanism, but it does not provide any efficiency analysis to demonstrate the computational benefits over full attention or other sparse variants.",
"questions": "I have a few suggestions which may improve the quality of the paper:\n\n(a) The “local expert” is defined as the key–value pair at time step s, but the selection and subsequent computation do not clearly highlight why this pair is an “expert.” The procedure essentially follows standard attention (query–key productions followed by a weighted sum over values). Please clarify what is unique about the expert selection and how it changes the computation.\n\n(b) The outlier-suppression argument is not fully convincing. If $x_s$ is an outlier, its key $k_s$ will be dissimilar to $q_t$, so a standard full-attention mechanism would also assign a low weight to $v_s$. Please justify why the additional feature-similarity term is necessary and quantify its benefit.\n\n(c) The temporal relevance term $|t - s|$ is defined via absolute time-step distance. For local patterns (e.g., Exchange, Figure 1(a)) $|t - s|$is small; for seasonal patterns (e.g., ETTm1, Weather, Solar-Energy) time steps that are far apart in index can still be strongly related. Explain how $|t - s|$ captures both locality and seasonal recurrence, or extend the formulation to handle seasonal lags explicitly.\n\n(d) Please explain $x_{t,d}$ in Eq. (3). What is the role of the feature index $d$?\n\n(e) In the ablation study, add comparisons with several recent sparse-attention mechanisms for time-series forecasting to better demonstrate the effectiveness of HEA.\n\n(f) The computation for the embedding stage and the output layer of TimeExpert is omitted from Figure 2. Please describe the processing steps before the encoder and after the encoder to make the pipeline complete.\n\n(g) Include a quantitative comparison to demonstrate the computational advantages, reporting parameters, FLOPs, and peak memory usage.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T13:17:46",
"modification_date": "2025-11-12T10:45:07",
"review_url": "https://openreview.net/forum?id=qAfbeMal0m¬eId=KiFY9GRzH0",
"license": "CC BY 4.0"
},
{
"id": "xFLC2VGxi5",
"forum": "qAfbeMal0m",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission382/Reviewer_wKoc",
"reviewer_name": "Reviewer_wKoc",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes a novel attention-level mechanism (TMoE) to replace the original attention mechanism in Transformer-based forecasting models. TMoE consists of several local experts and a shared global expert, to address the limitations of rigid global attention in handling lag effects and anomalies.",
"strengths": "1. The manuscript is well-written, and the proposed methodology is presented with overall clarity.\n\n2. The experiments show that TimeExpert achieves a competitive edge, outperforming several recent state-of-the-art baselines.",
"weaknesses": "1. The primary mechanism of TMoE involves using the top-k indices from each row of the attention map to select the top-k experts. How does this fundamentally differ from the previously common approach of sparsifying attention via top-k selection per row (as in Informer [1])? It appears to be a similar concept but repackaged with MoE.\n\n2. The experimental analysis lacks depth. The authors claim that TMoE can selectively exploit informative temporal segments while filtering out noisy or redundant ones. However, they do not provide visual experiments to support this argument. For instance, in the prediction visualization of Figure 3, which specific local experts are selected for each case? What patterns do they correspond to? Is there actual evidence that noisy time steps are filtered out? Figure 3 alone is insufficient to substantiate these points.\n\n3. The evaluation omits some commonly used benchmarks, such as the Traffic and Electricity datasets. The rationale for excluding these particular datasets should be clarified, especially considering their prevalence in the literature.\n\n[1] Zhou H, Zhang S, Peng J, et al. Informer: Beyond efficient transformer for long sequence time-series forecasting[C]//Proceedings of the AAAI conference on artificial intelligence. 2021, 35(12): 11106-11115.",
"questions": "Please see weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T17:29:29",
"modification_date": "2025-11-12T10:45:07",
"review_url": "https://openreview.net/forum?id=qAfbeMal0m¬eId=xFLC2VGxi5",
"license": "CC BY 4.0"
}
] |
|
TvpaeQVTGQ
|
https://openreview.net/forum?id=TvpaeQVTGQ
|
A Fast, Reliable, and Secure Programming Language for LLM Agents with Code Actions
| 5.5
| 2.25
|
[
6,
6,
4,
6
] |
[
2,
2,
4,
1
] | 4
|
[
"llm",
"agent",
"code actions",
"code generation"
] |
Modern large language models (LLMs) are often deployed as agents, calling external tools adaptively to solve tasks. Rather than directly calling tools, it can be more effective for LLMs to write code to perform the tool calls, enabling them to automatically generate complex control flow such as conditionals and loops. Such code actions are typically provided as Python code, since LLMs are quite proficient at it; however, Python may not be the ideal language due to limited built-in support for performance, security, and reliability. We propose a novel programming language for code actions, called QUASAR, which has several benefits: (1) automated parallelization to improve performance, (2) uncertainty quantification to improve reliability and mitigate hallucinations, and (3) security features enabling the user to validate actions. LLMs can write code in a subset of Python, which is automatically transpiled to QUASAR. We evaluate our approach on the ViperGPT and CaMeL agents, applied to the GQA visual question answering and AgentDojo AI assistant datasets, demonstrating that LLMs with QUASAR actions instead of Python actions retain strong performance, while reducing execution time by up to 56%, improving security by reducing user approvals by up to 53%, and improving reliability by applying conformal prediction to achieve a desired target coverage level.
|
We propose a new language for LLM agents to use for actions, and we show its benefits over Python in terms of performance, reliability, and security.
|
foundation or frontier models, including LLMs
|
https://openreview.net/pdf?id=TvpaeQVTGQ
| 2025-09-19T04:23:36
| 4
|
[
{
"id": "uwDUZ3rzdg",
"forum": "TvpaeQVTGQ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14018/Reviewer_FyqM",
"reviewer_name": "Reviewer_FyqM",
"rating": 6,
"confidence": 2,
"soundness": 4,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces QUASAR, a novel programming language designed specifically for LLM agents that use code actions. Unlike Python—which is the standard medium for LLM-generated code—QUASAR provides built-in mechanisms for performance optimization (via automatic parallelization), security (via dynamic access control and user approval of external calls), and reliability (via conformal semantics for uncertainty quantification).",
"strengths": "+ The rewrite-rule semantics and external call dispatch mechanism are rigorously formalized.\n\n+ The ability to propagate model uncertainty at the program level is a novel contribution that could inspire future work on trustworthy agent execution.\n\n+ The use of a Python subset and a transpiler ensures backward compatibility with current LLMs, addressing real-world deployability concerns (without performance degradation).",
"weaknesses": "- The paper does not specify how QUASAR manages external call failures, exceptions, or thread-level errors. For example, what happens if an external API call fails, times out, or returns an invalid response? Is the failure propagated, retried, or absorbed?\n\n- While QUASAR executes external calls “as soon as all their arguments are available,” it is not clear whether “futures” or deferred results are explicitly represented in the language. How does the interpreter manage dependencies among pending external calls or enforce order when results are reused?",
"questions": "It wasn’t clear to me what the sentence “There is only one external rule Rext = {Rext}. This rule is designed to enable calls to external functions f ∈ Fext” means in practice. Does this imply that all side-effecting operations (e.g., API calls, LLM queries) are handled uniformly through this single rewrite rule? How does QUASAR distinguish between different external APIs at runtime?\n\nCould QUASAR’s transpilation strategy be generalized to other host languages (e.g., typescript) or is it fundamentally tied to Python’s semantics?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T18:29:36",
"modification_date": "2025-11-12T13:15:10",
"review_url": "https://openreview.net/forum?id=TvpaeQVTGQ¬eId=uwDUZ3rzdg",
"license": "CC BY 4.0"
},
{
"id": "klRly1X6if",
"forum": "TvpaeQVTGQ",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14018/Reviewer_M2ub",
"reviewer_name": "Reviewer_M2ub",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper introduces QUASAR, a new programming language designed to make LLM-driven code execution faster, safer, and statistically reliable. It combines a pure functional core, explicit side-effect isolation, automatic parallelization, and conformal prediction–based uncertainty propagation. The work is ambitious and conceptually motivated, aiming to establish a formal, language-level foundation for trustworthy agent behavior. Broadening experiments and addressing realistic LLM integration would make it stronger in practice and more convincing.",
"strengths": "- Designing an LLM-native programming language for code generation action is innovative and promising. \n- QUASAR introduces a pure functional core that separates computation from side effects.\nThis separation allows deterministic execution, simplifies formal reasoning, and makes program behavior easier to verify and audit.\n- The runtime system can automatically detect independent external calls and execute them concurrently.\nExperiments show up to 56% reduction in total execution time, demonstrating concrete performance gains compared to sequential baselines.\n- QUASAR enforces strict external-call isolation through explicit user approval.\nIt introduces a batch-approval mechanism that reduces the number of user interactions by over 50%, balancing usability and safety while preventing unverified API calls.",
"weaknesses": "- **Narrow evaluation scope:**\nThe experiments are confined to small, synthetic benchmarks (GQA and AgentDojo). These tasks are short and prestructured, which limits the external validity of the claims. There is no evaluation in complex or dynamic environments that real LLM agents operate in.\n- **Limited language expressiveness:**\nQUASAR only supports a very restricted subset of Python (functions, variables, simple control flow). It does not handle classes, exceptions, pattern matching, or early returns. This simplicity makes formal analysis easier but severely limits applicability to realistic agent workflows that depend on richer language features.\n- **Unclear LLM integration strategy:**\nThe paper proposes transpiling from a restricted Python subset but does not explain how LLMs are constrained to generate only this subset. There is no discussion of prompting or fine-tuning when the model produces invalid constructs. This leaves a major usability gap between theory and practice.",
"questions": "How are LLMs guided or constrained to produce valid Python subsets that can be reliably transpiled into QUASAR, and what is the success rate of this process in practice?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T17:31:13",
"modification_date": "2025-11-12T13:15:11",
"review_url": "https://openreview.net/forum?id=TvpaeQVTGQ¬eId=klRly1X6if",
"license": "CC BY 4.0"
},
{
"id": "bmUeKsprI0",
"forum": "TvpaeQVTGQ",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14018/Reviewer_JjBk",
"reviewer_name": "Reviewer_JjBk",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces QUASAR, a novel programming language designed to improve the performance, security, and reliability of LLM-based agents. QUASAR achieves this by separating internal computations from external side effects, supporting parallel execution, and utilizing conformal semantics for uncertainty quantification. The language is implemented through a workflow where LLMs generate Python code, which is then transpiled to QUASAR. Experimental results show that QUASAR improves execution speed (up to 56% faster), reduces user interaction for security validation, and maintains high reliability with a target error rate of 0.1.",
"strengths": "- The paper effectively identifies challenges in LLM-based agents which write Python code to invoke tool APIs, and presents a practical solution through QUASAR.\n- The \"internal computation - external side effects separation\" architecture and the introduction of conformal semantics are novel and offer significant advantages in performance, security, and reliability.\n- Experiments on real-world agents like ViperGPT and CaMeL, covering performance, security, and reliability, demonstrate the practical benefits of QUASAR.",
"weaknesses": "- Lack of Detailed Technical Explanation: The paper lacks in-depth descriptions of key components like QUASAR’s rewrite rules, Python subset syntax, and transpiler implementation, which could impact reproducibility and understanding.\n- Flexibility Concerns in Tool-Calling Scenarios: While QUASAR improves upon Python in certain areas, there is a concern about whether it can maintain the same flexibility as Python in all tool-calling scenarios. Python’s ecosystem is rich with libraries that facilitate diverse use cases (e.g., system administration tasks, network programming, data processing). It’s unclear if QUASAR can handle such diverse scenarios with the same ease and flexibility, particularly in more dynamic, real-time applications where Python’s built-in flexibility is often crucial. A clearer discussion on this aspect and how QUASAR addresses such scenarios, if at all, would be valuable.",
"questions": "- What optimizations would you suggest for fine-tuning on small datasets like AgentDojo? Are techniques like transfer learning or data augmentation being considered to improve performance on such tasks?\n- How does QUASAR address tasks that cannot be parallelized due to dependencies? Could you provide more insight into the characteristics of tasks that hinder parallel execution?\n- Could QUASAR be considered more of a specialized Python interpreter rather than a new programming language? How does it differ from existing solutions such as parallel-execution Python interpreters (PyPy, Cython) or security frameworks (e.g., Sandboxed Python), which already provide performance improvements for Python code? What makes QUASAR’s approach more beneficial than these well-established, mature solutions?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T19:42:23",
"modification_date": "2025-11-12T13:15:12",
"review_url": "https://openreview.net/forum?id=TvpaeQVTGQ¬eId=bmUeKsprI0",
"license": "CC BY 4.0"
},
{
"id": "5nILa6oIgp",
"forum": "TvpaeQVTGQ",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14018/Reviewer_MYLk",
"reviewer_name": "Reviewer_MYLk",
"rating": 6,
"confidence": 1,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper falls outside my area of expertise. I'm unable to assess this paper.",
"strengths": "N/A",
"weaknesses": "N/A",
"questions": "N/A",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-15T13:04:06",
"modification_date": "2025-11-12T13:15:12",
"review_url": "https://openreview.net/forum?id=TvpaeQVTGQ¬eId=5nILa6oIgp",
"license": "CC BY 4.0"
}
] |
K0idbmzcgc
|
https://openreview.net/forum?id=K0idbmzcgc
|
OS-W2S: An Automatic Labeling Engine for Language-Guided Open-Set Aerial Object Detection
| 4.8
| 3
|
[
8,
2,
4,
6,
4
] |
[
3,
3,
2,
3,
4
] | 5
|
[
"Open-Set Aerial Object Detection",
"Automatic Label Engine",
"Multi-instance Open-set Aerial Dataset"
] |
In recent years, language-guided open-set aerial object detection has gained significant attention due to its better alignment with real-world application needs. However, due to limited datasets, most existing language-guided methods primarily focus on vocabulary-level descriptions, which fail to meet the demands of fine-grained open-world detection. To address this limitation, we propose constructing a large-scale language-guided open-set aerial detection dataset, encompassing three levels of language guidance: from words to phrases, and ultimately to sentences. Centered around an open-source large vision-language model and integrating image-operation-based preprocessing with BERT-based postprocessing, we present the $\textbf{OS-W2S Label Engine}$, an automatic annotation pipeline capable of handling diverse scene annotations for aerial images. Using this label engine, we expand existing aerial detection datasets with rich textual annotations and construct a novel benchmark dataset, called Multi-instance Open-set Aerial Dataset ($\textbf{MI-OAD}$), addressing the limitations of current remote sensing grounding data and enabling effective language-guided open-set aerial detection. Specifically, MI-OAD contains 163,023 images and 2 million image-caption pairs, with multiple instances per caption, approximately 40 times larger than comparable datasets.
To demonstrate the effectiveness and quality of MI-OAD, we evaluate three representative tasks: language-guided open-set aerial detection, open-vocabulary aerial detection (OVAD), and remote sensing visual grounding (RSVG). On language-guided open-set aerial detection, training on MI-OAD lifts Grounding DINO by +31.1 AP$_{50}$ and +34.7 Recall@10 with sentence-level inputs under zero-shot transfer. Moreover, using MI-OAD for pre-training yields state-of-the-art performance on multiple existing OVAD and RSVG benchmarks, validating both the effectiveness of the dataset and the high quality of its OS-W2S annotations. More details are available at \url{https://anonymous.4open.science/r/MI-OAD}.
|
datasets and benchmarks
|
https://openreview.net/pdf?id=K0idbmzcgc
| 2025-09-19T08:13:11
| 5
|
[
{
"id": "jv63TR5pJc",
"forum": "K0idbmzcgc",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14647/Reviewer_6Dr8",
"reviewer_name": "Reviewer_6Dr8",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes an automatic labeling engine that generates fine-grained textual annotations for aerial images using VLMs, and introduces MI-OAD, a large-scale dataset for language-guided openset aerial object detection. MI-OAD contains 163k images and 2 million image caption pairs at word, phrase, and sentence levels. Experiments show that pretraining on MI-OAD substantially boosts performance across open-vocabulary aerial detection, remote-sensing visual grounding, and zero-shot open-set detection tasks.",
"strengths": "- Visual grounding has significant value and wide applications. Yet, existing dataset is not large enough to support the task. This paper proposed an automated way to generate grounding dataset using VLMs. The dataset will advance the research in this direction.\n\n- The labeling pipeline is novel for aerial domains, combining structured preprocessing, VLM interaction, and BERT-based postprocessing. MI-OAD’s scale and multi-granularity annotation approach make it a comprehensive dataset for open-set aerial detection.\n\n- Training or pretraining on MI-OAD significantly improves multiple benchmarks, indicating clear practical value and potential to establish a standard benchmark for language-guided aerial detection.",
"weaknesses": "- Although sourced from eight aerial datasets, details about geographic, environmental, or temporal diversity are sparse. It is unclear whether MI-OAD adequately represents different regions, seasons, or sensor modalities.\n\n- The label engine relies heavily on a single chosen VLM (InternVL-2.5-38B-AWQ), and the paper does not assess how dataset quality varies across models, e.g. evaluating usinng other VLMs.\n\n- Only a very small portion of dataset is manually reviewed (0.5% of data). The generalization of annotation quality to the remaining dataset is assumed, but not empirically proven.\n\n- The paper emphasizes overall improvements but provides little qualitative or quantitative discussion of where MI-OAD annotations or trained models fail.",
"questions": "- A more thorough analysis of the dataset, e.g. regions, sensor types, is encouraged.\n\n- Alternative VLMs are encouraged to be analyzed for comparison.\n\n- Additional human review is encouraged.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T22:55:17",
"modification_date": "2025-11-12T13:23:39",
"review_url": "https://openreview.net/forum?id=K0idbmzcgc¬eId=jv63TR5pJc",
"license": "CC BY 4.0"
},
{
"id": "tgxiTgz5xD",
"forum": "K0idbmzcgc",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14647/Reviewer_RRL5",
"reviewer_name": "Reviewer_RRL5",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "The paper presents an automatic labeling pipeline for generating language-guided annotations in aerial imagery, and uses it to build a new large-scale dataset. The dataset integrates data from existing aerial detection sources and adds multiple types of text descriptions for each instance. The authors show that models trained or adapted on MI-OAD achieve better performance on several aerial detection and grounding benchmarks.",
"strengths": "- The paper tackles the lack of large-scale language-grounded datasets in the aerial domain, which is a real bottleneck for open-set detection research. MI-OAD is significantly larger and more diverse than existing aerial grounding datasets.\n- Experiments are extensive and show clear improvements across several downstream benchmarks. The dataset and code are publicly released, making the work reproducible and potentially useful to the community.",
"weaknesses": "- The discussion of related work focuses almost entirely on model architectures rather than dataset construction. Since this paper’s main contribution is a dataset and annotation pipeline, it should instead position the work within the context of existing dataset-building methodologies. A detailed quantitative comparison with prior aerial or language-grounded datasets is missing. The paper should explicitly articulate what is new about the proposed pipeline beyond scale, and how its annotation strategy differs from existing automatic labeling systems.\n\n- The overall novelty is limited. The paper reads more like a detailed technical report that consolidates known components into a large dataset pipeline. While the engineering execution is solid, the scientific contribution is unclear. The paper lacks abstraction or theoretical framing that would justify it as a research advance suitable for ICLR. It would fit better as a data or resource paper for a vision-oriented venue such as CVPR / ICCV.\n\n- An ablation analysis is missing. Since the annotation pipeline contains multiple preprocessing and postprocessing components, it would be important to remove or modify individual steps and evaluate model performance on the resulting datasets. This would verify that each design choice meaningfully contributes to dataset quality; otherwise, the complexity of the pipeline is not well justified.\n\n- In the README.md source code (https://anonymous.4open.science/api/repo/MI-OAD/file/README.md) of the provided code repository, a GitHub badge link (https://img.shields.io/github/stars/GT-Wei/MI-OAD) reveals the authors' identity. This appears to be unintentional, so I think maybe it is fine.",
"questions": "See Weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T23:58:24",
"modification_date": "2025-11-12T13:23:40",
"review_url": "https://openreview.net/forum?id=K0idbmzcgc¬eId=tgxiTgz5xD",
"license": "CC BY 4.0"
},
{
"id": "L4mIsDaIgr",
"forum": "K0idbmzcgc",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14647/Reviewer_eWjL",
"reviewer_name": "Reviewer_eWjL",
"rating": 4,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper presents an automatic labeling tool (OS-W2S) and uses it to create MI-OAD, a large-scale aerial imagery dataset designed to advance language-guided aerial open-set object detection research.",
"strengths": "The paper is well organized.\n\nThe proposed MI-OAD dataset is a valuable, large-scale dataset for the community.\n\nExperiments on YOLO-World and Grounding DINO demonstrated that MI-OAD can improve the model's performance in aerial object detection.",
"weaknesses": "The core of the paper is to use VLM to generate annotations, but VLM itself may have biases (such as preferences for specific colors and shapes) and the risk of creating \"illusions\". Although the paper validates this with a stronger model in Section 4, \"Quality Control Analysis\", it does not fundamentally avoid the problem.\n\nIn section 5.4, Key terms like \"OPT-RSVG\" and \"DIOR-RSVG\" are not defined.\n\nWhy is the “Grounding DINO (+MI-PAD)” configuration in Table 2 much lower than the “LPVA” baseline under the cmuIoU metric?",
"questions": "Please see the weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T18:00:43",
"modification_date": "2025-11-12T13:23:40",
"review_url": "https://openreview.net/forum?id=K0idbmzcgc¬eId=L4mIsDaIgr",
"license": "CC BY 4.0"
},
{
"id": "ERIr7M5PbQ",
"forum": "K0idbmzcgc",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14647/Reviewer_X2to",
"reviewer_name": "Reviewer_X2to",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes an automatic labeling engine, OS-W2S, which leverages large vision-language models (e.g., InternVL) with structured preprocessing and BERT-based postprocessing to generate word-, phrase-, and sentence-level annotations. Using this system, the authors construct MI-OAD, a large-scale language-guided open-set aerial detection dataset with 163K images and 2M image-caption pairs. The paper demonstrates substantial performance gains on three tasks (language-guided open-set detection, open-vocabulary detection, and remote-sensing visual grounding) achieving +31.1 AP50 and +34.7 Recall@10 improvements for Grounding DINO.",
"strengths": "1. Originality: While the core methodology reuses existing components (e.g., InternVL, BERT, Grounding DINO), the paper exhibits a creative integration of large vision-language models for automated annotation in the aerial domain, which is a relatively unexplored application area. The introduction of a multi-granularity labeling strategy also shows originality in problem formulation. The OS-W2S pipeline effectively extends the scope of open-set detection from object categories to natural-language-level semantics.\n\n2. Quality: The technical quality of the work is strong. The pipeline is methodologically sound, carefully engineered, and empirically validated. The experiments are thorough, covering multiple downstream tasks, and results consistently support the paper’s claims. The reproducibility is high, with sufficient implementation details and dataset statistics provided.\n\n3. Clarity: The paper is very clearly written and well-structured. The figures and diagrams are of high quality, particularly those explaining the annotation pipeline and dataset structure. \n\n4. Significance: The proposed MI-OAD dataset and OS-W2S labeling engine have strong practical and community significance. They address a major bottleneck in open-set remote sensing(data scarcity)and could serve as a foundation for future multimodal research in aerial imagery. Although the theoretical innovation is limited, the impact on benchmark construction and large-scale data automation is substantial and could influence related domains.\n\nThis paper’s strength lies in clarity, execution quality, and domain-level contribution. It delivers a solid, reproducible, and impactful system that will benefit the community, even though it does not introduce fundamentally new learning mechanisms.",
"weaknesses": "1.The core innovation of OS-W2S lies in the integration of existing components (InternVL, BERT, Grounding DINO) rather than the introduction of new learning mechanisms or optimization objectives. Unlike recent ICLR papers that advance representation learning, this work primarily presents an engineering system for data generation.\n\n2.The paper reports aggregate metrics (AP50, Recall@10) but does not investigate failure modes or error patterns in generated annotations. This omission weakens the empirical rigor, especially for a system that depends heavily on VLM predictions.\n\n3.Although 10K samples were manually verified, this represents only 0.5% of the dataset. The authors should consider statistical sampling or cross-annotation consistency tests to better quantify the overall label reliability.\n\n4.The related work section does not clearly differentiate OS-W2S from prior automatic labeling frameworks, such as LabelAnything: Multi-Class Few-Shot Semantic Segmentation with Visual Prompts (ECAI 2025) , which also leverage LLM/VLM pipelines.\nThe work’s weaknesses are not in execution but in research framing and analytical depth. With more rigorous evaluation of label reliability, stronger theoretical motivation for learning representations, and detailed analysis of model behavior, the paper could evolve from a engineering contribution into a research study.",
"questions": "1.Currently, OS-W2S is described as a deterministic pipeline integrating pretrained models. Could the authors clarify whether any learnable components or adaptive mechanisms are included in the labeling process (e.g., fine-tuning InternVL or learning confidence thresholds)?\n\n2.You mention that 10K samples were manually checked. Could you please describe how these samples were selected (random, stratified by scene type, or class-balanced)? A more rigorous sampling or reliability metric could help validate the robustness of your dataset.\n\n3.Aerial imagery often involves dense and overlapping objects, where language-based models may confuse spatial relationships. How does OS-W2S handle such ambiguity? For instance, when multiple small vehicles are present, how is phrase-level annotation disambiguated? Could you provide examples of typical failure cases and mitigation strategies?\n\n4.Since OS-W2S relies on large VLMs, it would be helpful to know how hallucinated or semantically incorrect captions are detected or filtered out. How do you filter out noisy subtitles?\n\n5.How does OS-W2S differ from other automatic labeling systems that combine LLMs and VLMs, such as LabelAnything ?\n\n6.Could the authors discuss how OS-W2S might influence learned feature representations? For example, does the multi-granularity annotation improve feature disentanglement or visual-textual alignment in downstream models?\n\n7.Can OS-W2S be easily generalized to other domains (e.g., medical imaging, autonomous driving) where labeling cost is also high? If so, what adaptations would be needed, for example, domain-specific vocabulary or hierarchical label templates?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T16:02:21",
"modification_date": "2025-11-12T13:23:41",
"review_url": "https://openreview.net/forum?id=K0idbmzcgc¬eId=ERIr7M5PbQ",
"license": "CC BY 4.0"
},
{
"id": "2iBGEPo5xL",
"forum": "K0idbmzcgc",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14647/Reviewer_zebP",
"reviewer_name": "Reviewer_zebP",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper presents OS-W2S Label Engine, an automatic annotation pipeline capable of handling diverse scene annotations for aerial images. Using this label engine, the authors also construct a benchmark dataset MI-OAD for language-guided open-set aerial detection. Experiments on open-vocabulary aerial detection and remote sensing visual grounding are carried by training and evaluating the model with MI-OAD.",
"strengths": "1. MI-OAD contains 163K images and 2 million image-caption pairs, with multiple instances per caption, which is large-scale compared to existing datasets. \n2. The experiments show that using the proposed MI-OAD as training corpus results in a better performance in both open-vocabulary detection and visual grounding tasks.",
"weaknesses": "1. While MI-OAD contains 163,023 images and 2 million image-caption pairs, it is difficult to evaluate a dataset simply based on its size. For example, if we split each image into four, then the number will be 4x larger. And for MI-OAD, its quality is not guaranteed. The captions are generated by VLMs, but there is not enough experiments involving quality assessment. \n2. The dataset is essentially generated by integrating available datasets. The problem is, after reviewing some of the data, I find that most of them keep the original categories, without unification. From what I understand, most of the data are labeled with only a small portion of the 100 categories mentioned in the paper, and the other objects are just ignored.\n3. Some minor writing issues: For example, in Page 6, Fig.3b, Section5.1: Missing space. ( 1,765 instances): Redundant space. In Fig. 2, InterVL should be InternVL. In Fig. 3, (e) is not aligned, and some labels in (a)-(d) are too small.",
"questions": "1. Images in RS show high variation. Some of them contain only a few objects, some may contain more than 100 objects. How do you control the captions in these two different situations? If there are many objects in the image, what will the caption be like? \n2. In figure 3, \"instances per caption\" shows 98% images have <= 20 instances. But many datasets used by the authors (such as DOTA) are originally very dense. Why is this discrepancy? Do you crop the image to decrease the instance count? If yes, is the preprocessing rule detailed in the paper?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T04:18:19",
"modification_date": "2025-11-12T13:23:41",
"review_url": "https://openreview.net/forum?id=K0idbmzcgc¬eId=2iBGEPo5xL",
"license": "CC BY 4.0"
}
] |
|
KUn4IBIZC7
|
https://openreview.net/forum?id=KUn4IBIZC7
|
MotifGrIm: Motif-Based Multi-Granularity Graph-Image Pretraining for Molecular Representation Learning
| 2.5
| 4.5
|
[
2,
2,
2,
4
] |
[
5,
4,
5,
4
] | 4
|
[
"Multi-Modal Contrastive Learning",
"Molecular Representation Learning",
"Graph Neural Network"
] |
Molecular representation learning is widely considered a crucial task in computer-aided molecular applications and design. Recently, many studies have explored pretraining models on unlabeled data to learn molecular structures and enhance the performance of downstream tasks. However, existing methods mainly focus on graph domains, with limited attention to other modalities, such as images. In addition, most existing methods focus on the atomic or molecular level, which leads to the neglect of high-order connection information or local structure information. In this work, we propose a motif-based multi-granularity graph-image pretraining framework, MotifGrIm, for molecular representation learning. In this framework, we incorporate motifs into the image domain for the first time, by generating distinct background features for different motifs in molecular images, offering a novel approach to enhancing molecular representation. Through contrastive learning within and across modules, we effectively tackle two key challenges in molecular motif pretraining with graph neural networks: (1) the over-smoothing problem, which restricts GNNs to shallow layers and hinders global molecular information capture, and (2) the aggregation of motif nodes, which leads to the loss of connectivity information between motifs. Additionally, to more effectively capture information across different molecular granularities, we propose a multi-granularity prediction pretraining strategy to optimize the model. For downstream tasks, we use only the graph encoders for prediction, reducing both time and memory consumption. We evaluate MotifGrIm on molecular property prediction and long-range benchmarks. Across eight commonly used molecular property prediction datasets, MotifGrIm outperforms state-of-the-art models with an average ROC-AUC improvement of 1.16% and achieves the best results on five of them. On long-range datasets, MotifGrIm improves the performance by at least 14.8%.
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
|
https://openreview.net/pdf?id=KUn4IBIZC7
| 2025-09-19T13:26:21
| 4
|
[
{
"id": "dpFgmibU9f",
"forum": "KUn4IBIZC7",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16081/Reviewer_dFht",
"reviewer_name": "Reviewer_dFht",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "This paper introduces MotifGrIm, a pre-training framework for molecular representation learning that operates on multiple granularities (molecule and motif) and modalities (graph and image). The core contribution is the introduction of motif-level information into the image domain by generating \"motif images,\" where different molecular motifs are highlighted with distinct colors. The framework employs contrastive and predictive learning objectives to align representations within and across these different views. The authors demonstrate the effectiveness of MotifGrIm through experiments on molecular property prediction and long-range interaction benchmarks.",
"strengths": "1. The framework systematically integrates several existing learning paradigms, including multi-modal contrastive learning (graph-image), multi-granularity learning (molecule-motif), and auxiliary predictive tasks, into a single, unified pre-training process.\n\n2. The paper is evaluated on a wide array of standard downstream tasks, including eight MoleculeNet benchmarks and two long-range benchmarks. The inclusion of extensive ablation studies provides some insight into the utility of the framework's different components.",
"weaknesses": "1. Limited Novelty and Incremental Contribution\n\nThe primary weakness of this work lies in its limited conceptual novelty. The core ideas—leveraging motifs for representation learning and using graph-image multi-modal pre-training—have been individually explored in prior work. The main contribution of this paper is to combine these two known concepts. The specific implementation of coloring motifs in an image, while an interesting detail, feels more like an incremental engineering step than a fundamental advance in the field. Consequently, the work reads more like a skillful assembly of existing components rather than a novel method with a strong, original foundation.\n\n2. High Framework Complexity for Marginal Gain\n\nThe proposed MotifGrIm framework is exceedingly complex. It involves generating four distinct data views, training three separate encoders (two GNNs, one Vision Transformer), and optimizing a complex objective function with multiple contrastive and predictive losses. Despite this high complexity, the reported performance gains over the strongest baseline (MoleculeSTM) are marginal (an average ROC-AUC improvement of 1.16%). The paper fails to justify why such a complex approach is warranted for a modest improvement, raising questions about its practical value and elegance.\n\n3. Insufficient Ablation to Justify the Core Idea\n\nThe central claimed novelty is the use of \"motif images\" to make the image encoder substructure-aware. However, the ablation studies are insufficient to validate this specific contribution. The experiments only compare against variants that remove the entire image modality (w.o. IH and w.o. IPH). A critical and missing baseline would be a model variant that uses standard, uncolored molecular images for the image-based contrastive learning tasks. Without this comparison, it is impossible to determine whether the performance gain comes from simply adding an image modality or specifically from the proposed motif-coloring scheme.\n\n4. Omission of Computational Cost Analysis\n\nTraining a large vision transformer on image data is substantially more resource-intensive than graph-only pre-training methods. A detailed analysis of pre-training time and memory consumption is necessary to assess the practicality and scalability of the method. The lack of this analysis makes it difficult to evaluate the method's efficiency.\n\n5. Uncertainty Regarding Generalizability\n\nThe framework's performance is tied to a specific \"Principal Subgraph Mining\" algorithm for motif extraction. The paper does not investigate how robust the model is to different motif-finding algorithms (e.g., chemistry-based fragmentation rules like BRICS). This dependency on a single, heuristic-based algorithm raises concerns about the generalizability and robustness of the entire approach.",
"questions": "1. Given that motif-based pre-training and graph-image contrastive learning have been explored in separate prior works, could you clarify the primary conceptual novelty of your framework beyond their direct combination? Furthermore, please justify the trade-off between the significant complexity of your proposed framework and the marginal performance gains observed over strong baselines.\n\n2. Your central technical contribution appears to be the motif-coloring scheme. To properly validate its effectiveness, it is essential to compare it against a model that also uses the image modality but without any motif coloring (i.e., using standard molecular images). Can you provide results for this crucial ablation or explain why it was omitted?\n\n3. Could you provide a detailed analysis of the pre-training computational cost (e.g., training time, GPU memory) of MotifGrIm and compare it to both a strong graph-only baseline (like MGSSL) and a strong multi-modal baseline (like MoleculeSTM)? This information is critical for understanding the practical implications of your method.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T11:10:25",
"modification_date": "2025-11-12T13:44:25",
"review_url": "https://openreview.net/forum?id=KUn4IBIZC7¬eId=dpFgmibU9f",
"license": "CC BY 4.0"
},
{
"id": "bfxiRe4A7l",
"forum": "KUn4IBIZC7",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16081/Reviewer_myBA",
"reviewer_name": "Reviewer_myBA",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper presents MotifGrIm, a motif-based multi-granularity graph-image pretraining framework for molecular representation learning. It incorporates motif information into the image modality. The framework generates four views of each molecule—molecular graph, motif graph, molecular image, and motif image—and employs contrastive learning within and across modalities, along with multi-granularity prediction tasks, to capture both local substructures and global topology. Experiments demonstrate state-of-the-art performance on eight molecular property prediction datasets.",
"strengths": "This paper is well-written and is easy to understand.",
"weaknesses": "1. Limited Gain from Graph-Image Multimodal Pretraining: The paper claims that graph-image multimodal pretraining enhances molecular representation by combining structural and visual information. However, this approach may offer diminishing returns compared to other modalities. Specifically, molecular images largely encapsulate information already present in graph structures (e.g., atoms and bonds depicted in 2D layouts), meaning the additional gains from images could be marginal. In contrast, textual descriptions (e.g., from scientific literature) provide rich, high-level semantic information—such as functional properties or biological contexts—that graphs alone cannot capture. \n\n2. Questionable Contrastive Learning Strategy for Motif Alignment: The paper employs contrastive learning to align representations between molecular graphs and motif graphs, as well as between molecular images and motif images. This strategy aims to pull these views closer in the embedding space, but it raises conceptual concerns. For instance, aligning a molecule's global representation with its substructures (motifs) could lead to semantic inconsistencies: different motifs within the same molecule might have divergent structures or functions (e.g., a hydrophobic group versus a polar group), yet they are forced to be proximate in the feature space simply because they belong to the same molecule. \n\n3. Outdated Baselines Weaken SOTA Claims: The experimental evaluation compares MotifGrIm against several existing methods, but many baselines originate from works published in 2023 or earlier. This undermines the paper's claim of state-of-the-art (SOTA) performance. The authors should compare with more recent baselines such as [1].\n\n[1] Advancing Molecular Graph-Text Pre-training via Fine-grained Alignment",
"questions": "1. Could the authors elaborate on the underlying hypothesis for minimizing the distance between a whole-molecule embedding and its motif-level embeddings in the latent space? \n\n2. Could the authors provide results or discuss potential downstream tasks that specifically leverage the image encoder?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T20:29:52",
"modification_date": "2025-11-12T13:44:26",
"review_url": "https://openreview.net/forum?id=KUn4IBIZC7¬eId=bfxiRe4A7l",
"license": "CC BY 4.0"
},
{
"id": "UE39q99kWW",
"forum": "KUn4IBIZC7",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16081/Reviewer_9X1e",
"reviewer_name": "Reviewer_9X1e",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes MotifGrIm, a motif-based, multi-granularity graph-image pretraining framework for molecular representation learning. It constructs four synchronized views per molecule (molecular graph, motif graph, molecular image, motif image) and performs contrastive learning within and across modalities.",
"strengths": "The idea of injecting motif knowledge into an image view and aligning it with graphs is interesting.",
"weaknesses": "1.\tThe paper repeatedly claims “we incorporate motifs into the image domain for the first time”. However, MaskMol [1] (Cheng et al., 2024; BMC Biology, 2025) already performs knowledge-guided molecular image pre-training with pixel masking that explicitly incorporates atomic, bond, and motif knowledge in the image view. Therefore, I suggest that the author give an accurate description of the innovation. \n2.\tThe model is pretrained on only ~456K molecules, which is far smaller than commonly used corpora in recent baselines (e.g., CGIP ≈ 10M, AttrMasking ≈ 2M). As a result, the head-to-head comparisons are not on equal footing. I suggest that the author use a larger pre-training dataset for training, on the one hand, to demonstrate the effectiveness of the pre-training method in parallel comparison with other baselines, and on the other hand, to demonstrate the scalability of the model.\n3.\tThe method encodes motifs via background colors, while standard 2D molecular images already use colors for atom types (e.g., O=red, N=blue, S=yellow) and lines for bonds. In such sparse, line-art images, colored motif backgrounds can collide with or wash out atom-type colors, potentially creating ambiguous or misleading cues for the CNN (e.g., a red oxygen symbol over a reddish motif background). The paper does not specify how these collisions are avoided or constrained.\n4.\tThe paper uses graph + image pretraining but does not compare against strong image-only SSL methods (such as VideoMol [2], MaskMol [1], and so on), leaving the benefit of multi-modal unclear. \n\n[1] MaskMol: knowledge-guided molecular image pre-training framework for activity cliffs with pixel masking, BMC Biology, 2025. \n[2] A molecular video-derived foundation model for scientific drug discovery, Nature Communications, 2024.",
"questions": "Consistent with Weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-21T17:07:50",
"modification_date": "2025-11-12T13:44:26",
"review_url": "https://openreview.net/forum?id=KUn4IBIZC7¬eId=UE39q99kWW",
"license": "CC BY 4.0"
},
{
"id": "L2LmDIqYGj",
"forum": "KUn4IBIZC7",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16081/Reviewer_zGWg",
"reviewer_name": "Reviewer_zGWg",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The authors present MotifGrIm, a motif-based multi-granularity pretraining framework for molecular representation learning that jointly leverages graph and image modalities, integrating motif-level information into both domains: representing motifs as structured subgraphs in the graph encoder and as distinct background patterns in the image encoder.\nA multi-granularity prediction strategy and contrastive learning are proposed to capture molecular information at different hierarchical levels (atom, motif, and molecule). MotifGrIm also aims to mitigate two core issues: Over-smoothing and the loss of motif connectivity information",
"strengths": "- The idea about the integration of motif information into the image domain is conceptually new, bridging structural and visual molecular representations.\n\n- The use of motifs as intermediate granularity between atom-level and molecule-level features effectively captures hierarchical molecular semantics.",
"weaknesses": "- The rationale for involving molecular images is not fully convincing. It remains unclear why image-domain motif augmentation leads to better representations or whether it simply introduces auxiliary regularization.\n\n- The framework’s expressive power relative to Weisfeiler–Lehman (WL) or motif-aware GNNs is not discussed.",
"questions": "- What is the intuition behind incorporating motifs into molecular images? Do motif-based backgrounds actually enhance structural alignment, or could they introduce artifacts?\n\n- Does the motif-based message passing or alignment extend the expressive power beyond 1-WL equivalence?\n\n- Although motif integration offers interpretability potential, the paper does not provide visual or quantitative evidence linking learned features to chemically meaningful motifs.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-18T09:37:02",
"modification_date": "2025-11-12T13:44:26",
"review_url": "https://openreview.net/forum?id=KUn4IBIZC7¬eId=L2LmDIqYGj",
"license": "CC BY 4.0"
}
] |
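Several of the reviews above (e.g., Reviewer myBA's concern about pulling molecule and motif embeddings together) hinge on what the contrastive alignment objective actually does. As background, here is a minimal InfoNCE-style sketch in NumPy; it illustrates the generic contrastive loss the reviews refer to, not MotifGrIm's exact multi-view weighting, and all function names are illustrative.

```python
import numpy as np

def info_nce(anchors, positives, tau=0.1):
    """Generic InfoNCE contrastive loss over a batch of paired embeddings.

    Row i of `anchors` (e.g., a molecule-graph embedding) is pulled toward
    row i of `positives` (e.g., its motif-graph embedding) and pushed away
    from all other rows. This is the standard objective the reviews discuss;
    MotifGrIm's exact loss combination is not reproduced here.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / tau                       # cosine similarities / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(a))
    return -log_probs[idx, idx].mean()           # matched pairs sit on the diagonal

# Toy usage with random 8-sample, 16-dim embeddings.
rng = np.random.default_rng(0)
print(info_nce(rng.normal(size=(8, 16)), rng.normal(size=(8, 16))))
```

Reviewer myBA's point is then concrete: when `positives` are motif embeddings, functionally different motifs of the same molecule are all attracted toward one anchor, which can blur their semantics in the shared feature space.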
|
Bp2VlfYAMc
|
https://openreview.net/forum?id=Bp2VlfYAMc
|
TIPS: A Text-Image Pairs Synthesis Framework for Robust Text-based Person Retrieval
| 5
| 4
|
[
2,
6,
4,
8
] |
[
4,
4,
4,
4
] | 4
|
[
"Text-based Person Retrieval",
"Text-Image Pairs Synthesis",
"Diffusion Model",
"Identity Preservation",
"Test-Time Augmentation"
] |
Text-based Person Retrieval (TPR) faces critical challenges in practical applications, including zero-shot adaptation, few-shot adaptation, and robustness issues. To address these challenges, we propose a Text-Image Pairs Synthesis (TIPS) framework, which is capable of generating high-fidelity and diverse pedestrian text-image pairs in various real-world scenarios. Firstly, two efficient diffusion-model fine-tuning strategies are proposed to develop a Seed Person Image Generator (SPG) and an Identity Preservation Generator (IDPG), thus generating person image sets that preserve the same identity. Secondly, a general TIPS approach utilizing LLM-driven text prompt synthesis is constructed to produce person images in conjunction with SPG and IDPG. Meanwhile, a Multi-modal Large Language Model (MLLM) is employed to filter images to ensure data quality and generate diverse captions. Furthermore, a Test-Time Augmentation (TTA) strategy is introduced, which combines textual and visual features via dual-encoder inference to consistently improve performance without architectural modifications. Extensive experiments conducted on TPR datasets demonstrate consistent performance improvements of three representative TPR methods across zero-shot, few-shot, and generalization settings.
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=Bp2VlfYAMc
| 2025-09-20T01:35:00
| 4
|
[
{
"id": "ZigShWA1Ae",
"forum": "Bp2VlfYAMc",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20172/Reviewer_Ao47",
"reviewer_name": "Reviewer_Ao47",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This paper proposes a Text-Image Pairs Synthesis (TIPS) framework to address practical challenges of TPR in real-world scenarios, including zero-shot adaptation, few-shot adaptation, and robustness issues. Two person image generators, SPG and IDPG, are introduced to synthesize realistic, identity-consistent pedestrian images. Additionally, TIPS incorporates a caption generator and a filtering mechanism to enhance data quality. Furthermore, a test-time adaptation (TTA) method is proposed to further improve retrieval accuracy.",
"strengths": "1. The paper provides a comprehensive exploration of practical challenges in TPR tasks, such as zero-shot adaptation, few-shot adaptation, and robustness, which are critical for real-world applications.\n2. The experiments are extensive and the analysis is in-depth, providing valuable empirical insights.",
"weaknesses": "1. **Logical Inconsistency**: In the Introduction, the paper argues that existing methods typically rely on real person images, limiting extensibility and scenario diversity. However, in the methodology, the collection of training data in this work also involves gathering real-person images, which appears inconsistent with the stated motivation.\n2. **Presentation and Reproducibility**: The descriptions of SPG and IDPG in the methodology section are rather opaque, making it difficult to fully understand the specific generation processes. Moreover, the correspondence between the textual descriptions and Figure 2 is unclear, which further hinders comprehension. Additionally, the writing in the methods section lacks technical rigor and professionalism. For example, in the S3 stage, the paper merely states that the outputs are \"further evaluated for identity and outfit consistency with the seed image,\" but does not specify how MLLMs are utilized for evaluation, what the evaluation criteria are, or how generation quality and identity consistency are measured. Such methodological details are essential for reproducibility and for ensuring the technical soundness of the proposed approach.\n3. **Novelty**: The proposed framework is largely an engineering integration of existing generation techniques for data augmentation under different scenarios. While practically valuable, the paper lacks substantial methodological innovation, which may limit its impact on future research.\n4. **Experimental Setup**: The zero-shot setting samples images from CUHK03, CUHK02, Market-1501, MSMT17, and VIPER. However, the downstream dataset CUHK-PEDES contains images from CUHK03, Market-1501, and VIPER, while ICFG-PEDES and RSTPReid contain images from MSMT17. This setup may lead to identity overlap, which contradicts the claimed zero-shot setting. Additionally, there is a concern that test images from these datasets may inadvertently be included in the training set, potentially leading to information leakage.",
"questions": "1. In Table 3 (generalization scenario), what is the difference between \"raw\" and \"ours\" in the training data? Please clarify this in the paper to avoid confusion.\n\n2. The Introduction states that existing datasets suffer from poor text-image alignment, yet recent works [1,2] have focused on person image captioning. How does the proposed caption generation method ensure higher quality and greater diversity compared to these methods?\n\n [1] Jiang J, Ding C, Tan W, et al. Modeling Thousands of Human Annotators for Generalizable Text-to-Image Person Re-identification[C]//Proceedings of the Computer Vision and Pattern Recognition Conference. 2025: 9220-9230.\n\n [2] Tan W, Ding C, Jiang J, et al. Harnessing the power of mllms for transferable text-to-image person reid[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 17127-17137.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T21:44:25",
"modification_date": "2025-11-12T15:48:05",
"review_url": "https://openreview.net/forum?id=Bp2VlfYAMc¬eId=ZigShWA1Ae",
"license": "CC BY 4.0"
},
{
"id": "4O7BwHNWjI",
"forum": "Bp2VlfYAMc",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20172/Reviewer_rPXb",
"reviewer_name": "Reviewer_rPXb",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes the TIPS framework, which uses two diffusion generators (SPG/IDPG) together with an MLLM to fully automate the synthesis of high-fidelity, diverse text–pedestrian image pairs. At inference time, TTA fuses the visual features of synthesized preview images with text features, delivering steady gains without modifying the model. The method shows improvements on CUHK-PEDES, ICFG-PEDES, and RSTPReid under zero/few-shot and cross-domain settings.",
"strengths": "1.Provides a fully automated, scalable data-generation pipeline, from prompt generation to synthesis, data filtering, and automatic description, capable of batch-producing high-quality text–pedestrian image pairs.\n\n2.Achieves significant gains in zero/few-shot settings with strong sample efficiency.\n\n3.Requires no network modifications: at inference, fusing “preview image” features with text features enhances consistency and boosts performance.",
"weaknesses": "1.The pipeline is relatively complex overall, relying on LLMs/MLLMs and generators, which raises implementation complexity.\n\n2.Each scenario requires training the generators first; expanding to 400k pairs incurs substantial computation and time costs.\n\n3.A preview image must be generated at inference, adding 2.75s per query; latency increases markedly for methods without reranking.\n\n4.SPG may produce appearance/identity inconsistencies across runs under the same prompt, so it depends on IDPG and MLLM filtering, failures in these stages can degrade quality.\n\n5.The TTA fusion coefficient α requires empirical tuning and may need to be adjusted across methods/datasets.\n\n6.Data scoring and description are produced by an MLLM, so stylistic biases or preferences may be injected into the synthetic data, affecting downstream distributions.",
"questions": "1.If identity drift occurs, what is its frequency, and what proportion of cases are corrected by the IDPG + MLLM filtering?\n\n2.Is the per-scenario training time cost excessively high?\n\n3.Please quantify the stylistic diversity and similarity of the generated texts, compare results across different LLMs/templates, and evaluate whether MLLM-produced descriptions introduce stylistic bias that affects downstream retrieval.\n\n4.In scenarios with available annotations, SPG can be trained directly on target-domain image–text pairs. Please explain how you prevent leakage or overlap with the test distribution/identities.\n\nIf these concerns are addressed, I will raise my score.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T15:48:00",
"modification_date": "2025-11-12T15:48:05",
"review_url": "https://openreview.net/forum?id=Bp2VlfYAMc¬eId=4O7BwHNWjI",
"license": "CC BY 4.0"
},
{
"id": "Us8s1HE9mg",
"forum": "Bp2VlfYAMc",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20172/Reviewer_Xhbx",
"reviewer_name": "Reviewer_Xhbx",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes the TIPS framework, an automated system for synthesizing text-image pairs, designed to address the problem of data scarcity in text-based person retrieval tasks under zero-shot, few-shot, and cross-domain scenarios. Its core innovation lies in two diffusion-based efficient generators: SPG, which generates seed images, and IDPG, which expands images while preserving identity consistency. Additionally, it includes a comprehensive LLM/ MLLM-integrated pipeline and a test-time augmentation strategy.",
"strengths": "1. This paper is of practical value, as it addresses the issues of identity consistency and diversity in data synthesis, thereby expanding text-image pairs for text-based person retrieval.\n2. The framework is comprehensive, covering the entire process from text generation to final training data synthesis, and can be extended to other multimodal synthesis tasks.\n3. The experiments on dataset quality evaluation are convincing, as demonstrated by the results under zero-shot, few-shot, and generalization scenarios presented in the paper.",
"weaknesses": "1. The paper presents qualitative results but lacks quantitative evaluation of the identity consistency in IDPG-generated images (e.g., using a pretrained face or ReID model to compute feature similarity between generated image pairs).\n2. The overall pipeline quality relies on the accuracy of the MLLM serving as a “judge.” However, the potential biases and errors of the MLLM itself may be introduced into the synthesized data, which has not been thoroughly discussed.\n3. The generation cost is relatively high; although the model is lightweight, producing 400k pairs of samples still requires considerable time and computational resources.",
"questions": "1. Could a quantitative evaluation of the identity consistency be conducted for the set of images generated by IDPG?\n2. Does the TTA significantly increase the inference overhead?\n3. The MLLM may make mistakes during the filtering and annotation process. Have you investigated potential errors and analyzed how these errors might affect the quality of the synthesized data and, consequently, the performance of downstream TPR models?\n4. Does the generated data exhibit any “background or pose patterning” issues? Could you provide diversity statistics to illustrate this?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T15:15:03",
"modification_date": "2025-11-12T15:48:05",
"review_url": "https://openreview.net/forum?id=Bp2VlfYAMc¬eId=Us8s1HE9mg",
"license": "CC BY 4.0"
},
{
"id": "w5CywVQDbb",
"forum": "Bp2VlfYAMc",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20172/Reviewer_2BnY",
"reviewer_name": "Reviewer_2BnY",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 4,
"presentation": 4,
"summary": "The authors propose a fully automated text-image pair synthesis framework, TIPS, to address critical challenges in Text-based Person Retrieval (TPR), such as poor zero-shot adaptability and the low quality and limited practicality of existing synthetic data. For the first time, they generate a high-fidelity, identity-consistent pedestrian image dataset with controllable resolution solely from textual descriptions, and further introduce a complementary enhancement strategy—Test-Time Augmentation (TTA).",
"strengths": "1. This paper clearly identifies the current bottlenecks in the TPR task, and the authors' motivation for proposing an automated text-image pair generation pipeline is well-justified.\n2. The authors also propose a plug-and-play Test-Time Augmentation (TTA) strategy that enhances the performance of existing methods, and experimental results demonstrate the superiority of their approach.",
"weaknesses": "1、Regarding the proposed TTA module, although the experiments demonstrate its effectiveness, the introduction of this component appears somewhat abrupt relative to the overall motivation of the paper. The TTA mechanism seems not to be conceptually aligned with the core objective of the work.\n2、For the proposed dataset, the paper (including the supplementary material) does not provide detailed statistical information or descriptive analysis. This lack of dataset characterization limits the reader’s understanding of its scale, diversity, and quality.\n3、The ablation study section is rather limited. For instance, one distinctive feature of TIPS is the ability to control the pixel quality of generated images. However, the paper does not investigate whether image resolution or pixel-level control affects the final retrieval performance.",
"questions": "Detailed comments can refer to the weaknesses section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T20:31:32",
"modification_date": "2025-11-12T15:48:06",
"review_url": "https://openreview.net/forum?id=Bp2VlfYAMc¬eId=w5CywVQDbb",
"license": "CC BY 4.0"
}
] |
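Reviewers rPXb and Ao47 above both discuss the TTA step, which fuses text-query features with features of a synthesized "preview" image via a coefficient α. The exact fusion rule is not spelled out in the reviews, so the convex-combination sketch below is only a plausible reading under that assumption; the function and variable names are hypothetical.

```python
import numpy as np

def tta_fuse(text_feat, preview_feat, alpha=0.8):
    """Assumed TTA fusion: a convex combination of the text-query feature and
    the feature of a generated preview image, followed by re-normalization.
    As Reviewer rPXb notes, alpha requires empirical tuning per method/dataset."""
    fused = alpha * text_feat + (1.0 - alpha) * preview_feat
    return fused / np.linalg.norm(fused)

# Retrieval would then rank gallery images by cosine similarity to the fused query.
rng = np.random.default_rng(0)
q = tta_fuse(rng.normal(size=512), rng.normal(size=512))
gallery = rng.normal(size=(1000, 512))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
ranking = np.argsort(-(gallery @ q))  # indices of gallery images, best match first
```

This also makes the reviewers' latency concern tangible: the preview image must be synthesized before `tta_fuse` can run, which is where the reported 2.75s per query comes from.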
|
JPLRtQINNy
|
https://openreview.net/forum?id=JPLRtQINNy
|
Domain Bridging: Enabling Adaptation without Peeking at Target Data
| 3.333333
| 4
|
[
4,
2,
4
] |
[
4,
5,
3
] | 3
|
[
"domain bridging",
"evaluation-based adaptation",
"zeroth-order optimization",
"proprietary target data"
] |
Adapting models to target domains with proprietary data remains a challenging problem. One possible setup to enable adaptation is to allow target domain owners to privately evaluate candidate models on their own data. For example, model providers consider how to adjust models to better fit the unseen target data, relying solely on returned model performance. Existing methods adopt Zeroth-Order (ZO) optimization to refine model parameters or employ a two-stage learning process that first identifies the target-related samples in the source data and then retrains the model. However, we find that these methods struggle to generalize well for the target tasks during inference, primarily because of the failure to account for data-statistical shifts between source and target domains. To address this limitation, we introduce the concept of domain bridging in the context of model adaptation for proprietary target data. The core idea is to bridge the domain gap by learning target-aligned perturbations on source data, enabling the fine-tuned model to achieve better performance on target domains. A natural attempt is to extend ZO optimization to this setting. However, this approach fails to produce reliable perturbations on real datasets. To address this, we design a target-aligned, sample-wise perturbation learner, enabling reliable adaptation from performance-only feedback. We provide theoretical convergence guarantees and demonstrate through experiments on five datasets across image and text modalities that our domain bridging method achieves state-of-the-art performance, improving accuracy by approximately 4\%.
|
Domain Bridging introduces an efficient framework that learns source data perturbations to bridge domain gaps, enabling effective model fine-tuning for target domains without requiring direct access to proprietary target data.
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
|
https://openreview.net/pdf?id=JPLRtQINNy
| 2025-09-17T18:39:58
| 3
|
[
{
"id": "tSIq7SSmfc",
"forum": "JPLRtQINNy",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8980/Reviewer_RAvJ",
"reviewer_name": "Reviewer_RAvJ",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes a domain bridging framework for adaptation across domains with distributional shifts, where target domain data are proprietary and cannot be directly accessed, using only performance feedback from the target side. This paper proposes an Efficient Domain Bridging (EDB) algorithm that solves a bi-level optimization problem, which (1) finds the best sample-wise perturbation on the source data by optimizing the target domain owner feedback, using an approximated gradient method; and (2) fine tune the model parameters by minimizing the loss on the perturbed source data.",
"strengths": "1. The paper proposes a domain bridging framework for model adaptation with proprietary target data.\n \n2. The method is evaluated across diverse benchmarks (both image and text).\n \n3. The presentation is generally clear.",
"weaknesses": "1. **Presentation and notation clarity.** \n Several parts of the paper, particularly Section 2.2, lack sufficient clarity in presentation. The description of Retraining with Source Data Valuation (RSDV) is ambiguous. For example, it is unclear what $\\phi_{\\pi^t[i]}$ represents, and whether $\\phi_{\\pi^t[i]}$ and $\\phi_{\\pi^{t-1}[i]}$ correspond to the same data point. Similarly, the definitions of $V(\\theta^t_{i})$ and $V(\\theta^t_{i-1})$ are not explicit? Furthermore, the paper should clarify whether the reweighted loss indeed uses $\\phi$ values as sample weights. In addition, the notation $\\hat{\\ell}$ introduced in line 250 does not appear in Eqs. (5)–(6), which disrupts the consistency of notation.\n \n2. **Conceptual positioning and insufficient comparison.** \n The proposed framework appears conceptually closer to zeroth-order (ZO) fine-tuning and adversarial or robustness-oriented perturbation methods than to conventional domain adaptation. The idea of “domain bridging” largely combines these existing techniques and applies them to the scenario where target domain data are proprietary and inaccessible. Because of this hybrid nature, the current comparisons, which focused mainly on ZO and RSDV baselines, are not sufficient to convincingly demonstrate the uniqueness or superiority of the proposed method.\n \n3. **Lack of hyperparameter sensitivity analysis.** \n The paper does not report how sensitive the results are to hyperparameters in Algorithm 1.",
"questions": "See above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T10:22:49",
"modification_date": "2025-11-12T12:11:50",
"review_url": "https://openreview.net/forum?id=JPLRtQINNy¬eId=tSIq7SSmfc",
"license": "CC BY 4.0"
},
{
"id": "76ED0xWyOo",
"forum": "JPLRtQINNy",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8980/Reviewer_Ed7R",
"reviewer_name": "Reviewer_Ed7R",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces Domain Bridging (DB), a novel method for adapting pre-trained models to proprietary target domains without direct data access. The core idea is to learn sample-wise perturbations on the accessible source data, guided solely by performance feedback (e.g., accuracy) from the unobserved target domain. This process steers the model's feature representations to better align with the target domain. The authors propose an Efficient Domain Bridging (EDB) algorithm that addresses the limitations of direct Zeroth-Order optimization by using a more reliable gradient estimation within a bi-level optimization framework. Experiments on image and text classification tasks show that EDB achieves state-of-the-art performance, improving accuracy by approximately 4% over existing baselines.",
"strengths": "1. This paper presents a novel concept of domain bridging via source data perturbation. This approach effectively narrows the representation gap between source and target domains without violating data privacy, offering a fresh perspective in evaluation-based model adaptation.\n\n2. The method demonstrates consistent and significant improvements across multiple datasets (Office-31, Office-Home, PACS, VLCS, Amazon Review) and modalities (image, text), validating its robustness and generalizability. It also shows faster convergence and better query efficiency compared to baseline methods.",
"weaknesses": "1. The EDB method depends on a zeroth-order estimator for gradients, which can be noisy and less precise than true gradients. The performance gap between EDB and its variant with exact gradients (EDB) indicates that approximation errors limit the method's full potential.\n\n2. While EDB converges faster than baselines, the process of learning sample-wise perturbations for the entire source dataset within a bi-level optimization framework is inherently complex and could lead to higher computational costs per iteration, especially with large-scale source data.\n\n3. The adaptation process is highly reliant on the performance feedback from the target domain. The paper shows that performance degrades with noisy feedback, suggesting the method might be vulnerable to imperfect or adversarial feedback in real-world scenarios.\n\n4. The robustness analysis against noisy feedback, while valuable, uses simulated Gaussian noise added to the performance metric. The paper would be more convincing if it tested the method against more realistic noise types, such as label noise in the target domain's support set or non-IID noise distributions that might occur in real-world data. \n\n5. This paper lacks the analysis of perturbation interpretability. The paper correctly notes that the learned perturbations do not visually resemble the target data, which is a privacy feature. However, a deeper analysis of what these perturbations represent or how they correlate with specific domain shift characteristics (e.g., style, texture) would provide valuable insights into the mechanistic interpretation of the \"bridging\" process, moving beyond quantitative distance metrics like MED.\n\n6.The experiments are conducted on standard academic benchmarks (e.g., Office-31, Office-Home). Although the paper mentions that perturbations can be computed in parallel, it does not fully demonstrate the method's scalability and computational efficiency on larger, more complex datasets (e.g., DomainNet, VisDA 2017) or with larger model architectures (ViT). A more thorough scalability analysis would strengthen the claim for practical deployment.",
"questions": "See Weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T09:55:26",
"modification_date": "2025-11-12T12:11:51",
"review_url": "https://openreview.net/forum?id=JPLRtQINNy¬eId=76ED0xWyOo",
"license": "CC BY 4.0"
},
{
"id": "qfpgJjeXZp",
"forum": "JPLRtQINNy",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8980/Reviewer_Eamd",
"reviewer_name": "Reviewer_Eamd",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper tackles evaluation-based domain adaptation when target data are inaccessible. Instead of updating model weights directly from performance-only feedback or using a 2-stage 'value then retrain' method, the authors propose Domain Bridging, learn sample-wise perturbations of the source data that make standard fine-tuning on the perturbed source behave as if training on the unseen target. They provide an efficient bi-level procedure with a tailored gradient estimator, prove convergence under mild assumptions, and show consistent gains across multiple vision/language benchmarks, with improved sample-efficiency.",
"strengths": "Thank you for your inspiring work. The main ideas were sound, and the paper delivered them very well. Below are some of the strengths I wish to underline.\n\n- The paper has clear motivations (privacy-aware adaptation) and a compelling idea to address realistic issues in evaluation-based DA.\n- The proposed method is sound, and its effectiveness is supported by empirical results. \n- The paper is well-written and easy to understand \n- Clear Positioning: I appreciate the author's effort in adding Appendix A. Research Position, which concisely captures how the paper relates to other works.",
"weaknesses": "Below, I have listed some suggestions that would strengthen the paper.\n\n\n- Statistical Stability: In Sec 4.1. The authors claim that the results of the 'best performance from 10 ind. runs' were reported. I believe that for a fair comparison, the mean performance and its standard error/deviation across #N runs should be reported. Could you please report them?\n- The theoretical analysis is sound. However, it relies on strong assumptions (strong convexity and a bounded Hessian), which usually do not hold for most modern deep networks. Could you elaborate on this assumption? Alternatively, the authors could show experiments on small, linear models or provide surrogate approximations. \n- Baseline comparisons with DG: I also looked at Appendix E. (comparisons on DG and sDG). However, since DG and sDG do not have target domain access, the comparisons in Tab. 11 & 12 are of less significance (as DA has target domain access). For a fair comparison, the DG/sDG methods should be evaluated with a setting that has a matching feedback budget.\n- Hyperparameter Tuning: How were the hyperparameters chosen? In Line 336, you mentioned following the setting in Liu et al., 2018a, but the authors have not shown how changes in $𝜉, γ, ϵ, η, δ$ affect overall performance. Please see Questions.",
"questions": "Please refer to the weaknesses for additional questions.\n\n- Step Sizes Assumption: The proof assumes diminishing step sizes; however, and correct me if I'm wrong, but experiments appear to use fixed param lr 𝜉 and perturbation lr. Could you explain if these experimental setting aligns with the theory (diminishing schedule).\n \n- Scability & Architectural Adjustments: Do performance gains and query efficiency persist on larger backbones (e.g., larger ResNet variants) or in different architectures (e.g., vision transformers) -- this is of low priority\n\n- Hyperparameters: We want to see how the changes in hyperparameters affect the performance (e.g., Target holdout acc., support-holdout gap, and query efficiency).",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T00:20:54",
"modification_date": "2025-11-12T12:11:51",
"review_url": "https://openreview.net/forum?id=JPLRtQINNy¬eId=qfpgJjeXZp",
"license": "CC BY 4.0"
}
] |
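The reviews above repeatedly contrast EDB's gradient estimator with plain zeroth-order (ZO) optimization from performance-only feedback. As background, here is a minimal two-point Gaussian ZO gradient estimator in NumPy; this is the standard primitive, not the paper's tailored estimator, and the function names are illustrative.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, num_dirs=20, rng=None):
    """Two-point zeroth-order gradient estimate of a scalar black box f at x.

    Averages directional finite differences over random Gaussian directions;
    only scalar evaluations of f are needed, matching the setting where the
    target owner returns performance feedback but no gradients.
    """
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / num_dirs

# Sanity check: for f(x) = 0.5 * ||x||^2 the true gradient is x itself,
# so the estimate should be close to x (up to sampling noise).
x = np.array([1.0, -2.0, 0.5])
print(zo_gradient(lambda v: 0.5 * v @ v, x, rng=np.random.default_rng(0)))
```

The variance of this estimator grows with the dimension of `x`, which is the source of the noisiness Reviewer Ed7R flags and part of what motivates a more structured estimator in this line of work.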
jAYHFBdQ0M
|
https://openreview.net/forum?id=jAYHFBdQ0M
|
Johnson-Lindenstrauss Transforms in Distributed Optimization
| 3.5
| 3
|
[
2,
4,
4,
4
] |
[
4,
4,
2,
2
] | 4
|
[
"optimization",
"distributed optimization",
"communication compresson"
] |
Increasing volumes of data and growing model sizes in machine learning demand efficient methods. Distributed optimization addresses these challenges, for instance, by utilizing compression mechanisms that reduce the number of bits transmitted. One well-known technique that reduces the dimension of the data is the Johnson-Lindenstrauss (JL) mapping, which benefits from ease of implementation. Unlike usual sparsification techniques, JL mappings preserve scalar products and distances between vectors, which is beneficial for advanced machine learning problems, such as byzantine-robust learning, personalized and vertical federated learning. In this paper, we close the gap and connect JL Transforms with optimization algorithms, demonstrating that we can compress communication messages with them. We also validate our theoretical results with experiments.
|
optimization
|
https://openreview.net/pdf?id=jAYHFBdQ0M
| 2025-09-17T16:49:35
| 4
|
[
{
"id": "wYRuKyGPEl",
"forum": "jAYHFBdQ0M",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8812/Reviewer_ecJS",
"reviewer_name": "Reviewer_ecJS",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 1,
"presentation": 2,
"summary": "The paper studies distributed optimization with data compression techniques. Specifically, the authors study how Johnson–Lindenstrauss (JL) mappings can be used for data compression in distributed optimization.\n\nJL mappings are random mappings that preserve the $\\ell_2$ norms between vectors when compressed, with high probability.\n\nIf JL mappings are linear, then they also ensure that inner products are preserved after compression, with high probability.\n\nThe paper considers three problems:\n\n1. Byzantine-robust optimization where some workers can send adversarial gradients and the optimization process needs to be robust. This is done using a trusted device as a reference.\n\n2. Personalized federated learning where each worker has their own data and objectives but they want the model parameters to still be similar across devices.\n\n3. Vertical federated learning where the columns or features of the data are spread across the devices.\n\nFirst, the authors show that JL transforms induce a $JL_s$ operator that is unbiased. The authors claim the $JL_s$ operator also preserves norms, which other compressors like Rand-$k$ do not.\n\nFor all three cases, inspired by existing algorithms in the non-compressed setting, the authors develop algorithms with JL-transform compression.\n\n1. In the Byzantine case, the authors develop Grad-BANT that sends projected gradients. Under assumptions on the bound of the attack, that is, assumptions on data similarity, the authors claim that it achieves $O(1/T)$ convergence with high probability.\n\n2. In the personalized federated learning case, the authors use an accelerated gradient–style approach from Hanzely et al. (2020) and show they achieve the same convergence rate as the uncompressed case.\n\n3. In the vertical FL case, the authors show ADMM can be adapted to get the same saddle-point residual convergence.\n\nThe authors also give experimental evidence for these methods.",
"strengths": "The problem studied in the paper is definitely interesting.\n\nJL Transforms have been shown to be useful in many applications including distributed optimization. Further characterizing how we can use JL transforms for more problems and achieve high probability results would be of interest to the community. \n\nThe authors have experimental results showing that the algorithms they developed do lead to good validation accuracy.",
"weaknesses": "One of my biggest concerns with this paper is that I think it interprets the JL transform lemma to be much stronger than what it actually implies.\n\nIn Definition 4, the authors assume that there is a single stochastic mapping $h$ that can uniformly satisfy, for all $u,v$ pairs:\n\n$\\Pr\\\\big[(1-\\varepsilon)\\\\|u-v\\\\|_2^2 \\le \\\\|h(u)-h(v)\\\\|_2^2 \\le (1+\\varepsilon)\\\\|u-v\\\\|_2^2\\big] \\ge 1-\\delta$\n\nThis is not true. The JL transform lemma results presented in Johnson et al. (1984) and Dasgupta and Gupta (2003) show that the uniform guarantee that the norms are preserved simultaneously between all points is only true for a finite set. The probabilistic guarantee, which holds with high probability, is only for a single point or a single pair, which is then used in a union-bound to get the result for the finite set. \n\nWhereas the authors are assuming that the high-probability result holds for all $x$ simultaneously. In other words, the JL transform says that if we fix an $x$, then if we sample an $S$ (or $h$), with high probability the norm of $x$ will be preserved. It does not say that if we sample an $S$, then for all $x$, the norms will be preserved with high probability. Since $S$ is a $k \\times n$ matrix with $k < n$, by definition it cannot have full column rank, thus it necessarily has a non-trivial null space. For any vector in the null space of $S$, the norm will not be preserved. Thus, no fixed $S$ can satisfy the JL lemma simultaneously or uniformly.\n\nI believe this is a critical flaw, and this result is used to further prove lemmas, which I believe are also untrue as a consequence.\n\nFor example:\n\n1. In the proof for Lemma 4, presented as Lemma 9 in the supplementary material, the authors claim that since\n$(1-\\varepsilon)\\\\|x\\\\|^2 \\le \\\\|Sx\\\\|^2 \\le (1+\\varepsilon)\\\\|x\\\\|^2$\nthen the eigenvalues of $S^\\top S$ are between $[1-\\varepsilon, 1+\\varepsilon]$. \n\nThis conclusion would be true only if $(1-\\varepsilon)\\\\|x\\\\|^2 \\le\\\\|Sx\\\\|^2 \\le (1+\\varepsilon)\\\\|x\\\\|^2$ was true for all unit-norm $x$ simultaneously, but the JL lemma only holds when $x$ is fixed.\n\nIn fact, since $S$ is a $k \\times n$ matrix, the rank of $S^\\top S$ is at most $k$, and since $k < n$, there are at least $n - k$ eigenvalues of $S^\\top S$ equal to $0$, which are obviously not in $[1-\\varepsilon, 1+\\varepsilon]$.\n\nMoreover, the Marchenko-Pastur distribution shows that in the Gaussian JL matrix, the largest eigenvalue of $S^\\top S$ scales as $(1 + \\sqrt{k/n})^2$, which is larger than $1 + \\varepsilon$.\n\nThis result is used to prove the main result for the Byzantine case, making that result unreliable.\n\n2. Similarly, Lemma 15 says if $S$ is a JL matrix, then with high probability $\\ker S = \\{0\\}$. This is also untrue. Since $k < n$, $S$ cannot have full column rank, thus $\\dim \\ker S \\ge n - k$, i.e., a non-trivial null space.\n\nThis result is used to prove the result for the vertical FL, making it unreliable.\n\nI have not checked all the proofs, so I am not too confident if there are more such issues.\n\n\nMoreover, the writing of the paper can be improved and is confusing at times, and it is missing references. For example:\n1. In Eq. (4), how did we change $r_i(x_i)$ to $r_i(z_i)$? Here $x_i \\in \\mathbb{R}^{n_i}$ whereas $z_i \\in \\mathbb{R}^{m}$.\n2. The updates of ADMM are explained without introducing or mentioning what $\\rho$ is.\n3. 
Missing reference in the Experiments section, it says “At Figure ??”.\n4. Proof of Lemma 11 says “Theorem 5 from (?)”.",
"questions": "1. Can the authors address my concern about the overly strong assumption in the JL lemma? Specifically, can they show how this assumption can be rectified or prove that their claims still hold under the standard JL guarantees?\n\n2. Which results remain valid if statements such as Lemma 4 and Lemma 15 are incorrect?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-08T12:56:32",
"modification_date": "2025-11-12T12:09:39",
"review_url": "https://openreview.net/forum?id=jAYHFBdQ0M¬eId=wYRuKyGPEl",
"license": "CC BY 4.0"
},
{
"id": "CtFrtBhK25",
"forum": "jAYHFBdQ0M",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8812/Reviewer_Mpf9",
"reviewer_name": "Reviewer_Mpf9",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper presents a method of using Johnson-Lindenstrauss (JL) mappings to reduce communication while ensuring robustness in distributed learning and federated learning. The paper primarily focused on proving that the JL mapping ensures convergence with high probability, so that the desired criterion is satisfied with high confidence, rather than on average. This is based on connecting distributed optimization with JL Transforms' ability to maintain L2 distances between vectors. \n\nThe paper adapted the method for distributed learning for Byzantine-resilient optimization, building on the work of Grad-BANT. In both the personalized federated learning setting and the vertical federated learning setting, JL is used to demonstrate that compressed communication provides a convergence guarantee. \n\nThe theory is well developed and clearly presented. \n\nHowever, the experimental section is short and incomplete. The JL mapping experiments are based on two types of stochastic projection matrices, Gaussian and Rademacher. They do better than randomized sparsification on the mushroom dataset. Training with ResNet-20 on CIFAR-10 is mentioned, but it is unclear which figure it refers to, nor is the learning setting specified.",
"strengths": "The paper could serve as a good tutorial on demonstrating why JL is a suitable technique for compression in the distributed and federated learning setting. It fills the knowledge gap by demonstrating the distributed optimization capabilities of JL Transforms, which maintain L2 distances between vectors. This ensures that the desired criterion can be satisfied with high confidence, rather than on average.",
"weaknesses": "The experimental validation is weak. It is challenging to assess the practical relevance of this method, as it primarily consists of an analysis of the convergence.",
"questions": "none.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T12:53:34",
"modification_date": "2025-11-12T12:09:40",
"review_url": "https://openreview.net/forum?id=jAYHFBdQ0M¬eId=CtFrtBhK25",
"license": "CC BY 4.0"
},
{
"id": "1aQN0cyhS6",
"forum": "jAYHFBdQ0M",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8812/Reviewer_8yCp",
"reviewer_name": "Reviewer_8yCp",
"rating": 4,
"confidence": 2,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes to incorporate Johnson–Lindenstrauss (JL) transforms into distributed optimization frameworks, offering a unifying view of distance-preserving compression for diverse settings including Byzantine-robust training, personalized federated learning, and vertically partitioned ADMM. The idea is elegant and theoretically sound, as JL projections naturally preserve geometric structures that many existing compression operators destroy. The authors develop modified algorithms with provable high-probability convergence guarantees and provide extensive empirical results on both convex and nonconvex problems.",
"strengths": "1. The use of JL transforms as communication compressors is novel and grounded in well-established random projection theory. The high-probability analysis extends beyond standard unbiased-compression assumptions, which makes the theoretical contribution nontrivial. \n2. Applying the same principle to three distinct distributed setups—Byzantine-robust, personalized, and vertical FL—demonstrates generality. The paper’s structure is coherent, and the notation is consistent across sections.\n3. Experiments are extensive and include diverse datasets and attacks; results consistently show that JL-based compression can outperform random sparsification and low-rank baselines while significantly reducing communication.",
"weaknesses": "1. Although the conceptual introduction of JL transforms is interesting, the algorithmic modifications in the three settings are relatively straightforward adaptations. The originality lies more in applying an existing tool than in developing fundamentally new algorithms.\n2. The paper does not thoroughly discuss the computational or memory cost of generating and applying large random matrices $S$ and $S^\\top$. For high-dimensional models, the matrix–vector multiplications may offset communication gains unless structured JL variants are used.\n3. While experiments are numerous, details such as the exact compression ratio, random seed synchronization method, and matrix reuse strategy are briefly mentioned but not systematically evaluated. This makes reproduction and scalability assessment difficult.",
"questions": "1. How do the authors handle the computational and memory overhead of multiplying by large random matrices $S$ and $S^\\top$ in high-dimensional models ? Have the authors considered or tested structured JL transforms (such as sparse JL) to mitigate this cost? \n2. Since federated clients may have highly non-IID data, how stable is JL-based compression under strong heterogeneity? Does the preservation of inner products in low-dimensional space still hold in practice when client updates are strongly biased? \n3. The theoretical bounds depend on the JL dimension $k$, but the empirical section does not show how performance varies with $k$. Could the authors include sensitivity studies to clarify the minimal $k$ needed for reliable optimization and when performance starts to degrade?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T00:37:21",
"modification_date": "2025-11-12T12:09:40",
"review_url": "https://openreview.net/forum?id=jAYHFBdQ0M¬eId=1aQN0cyhS6",
"license": "CC BY 4.0"
},
{
"id": "hKoUa1Tldp",
"forum": "jAYHFBdQ0M",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8812/Reviewer_McCk",
"reviewer_name": "Reviewer_McCk",
"rating": 4,
"confidence": 2,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper integrates Johnson-Lindenstrauss (JL) Transforms into distributed optimization to mitigate communication bottlenecks and other issues in distributed optimization. The authors give comprehensive convergence analyses, as well as numerical experiments to support the theory.",
"strengths": "1. The paper fills a critical gap by leveraging JL’s unique distance/scalar-product preservation.\n2. The convergence analyses are comprehensive.",
"weaknesses": "1. The designed experiments are relatively simple. Datasets such as a9a and w8a fail to fully demonstrate the algorithm performance. Additionally, CIFAR-10 is relatively small, so it is recommended to try larger datasets like CIFAR-100 or Tiny-ImageNet. Larger-scale experiments are required to fully verify the effectiveness of the proposed method.\n2. The core focus of the paper is not clear enough. In the experiment section, the proposed method is compared with three types of methods: a. communication-efficient methods, b. personalized methods, and c. vertical FL. However, each category lacks sufficient depth. It is suggested that the research scope be narrowed down to ensure a more in-depth analysis.\n3. There is no ablation study related to critical parameters.\n4. Line 431, Missing reference.",
"questions": "Please see weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-23T16:43:16",
"modification_date": "2025-11-12T12:09:41",
"review_url": "https://openreview.net/forum?id=jAYHFBdQ0M¬eId=hKoUa1Tldp",
"license": "CC BY 4.0"
}
] |
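The reviews above repeatedly question the cost of applying a dense random matrix $S$. Below is a minimal sketch, not the paper's implementation, of the dense JL projection they discuss: $S$ has i.i.d. $N(0, 1/k)$ entries, so sketches preserve inner products in expectation, which is the property a JL-based compressor relies on. All names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 10_000, 500                      # model dimension, sketch dimension

def jl_compress(grad, S):
    """Client side: transmit the k-dimensional sketch S @ grad instead of grad."""
    return S @ grad

def jl_decompress(msg, S):
    """Server side: lift the k-dimensional message back to d dimensions."""
    return S.T @ msg

# Dense S with i.i.d. N(0, 1/k) entries; storing it costs k*d floats,
# which is exactly the overhead the reviewers flag.
S = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))
g1, g2 = rng.normal(size=d), rng.normal(size=d)
# Inner products survive the projection up to O(1/sqrt(k)) distortion:
print(np.dot(jl_compress(g1, S), jl_compress(g2, S)), np.dot(g1, g2))
```

A structured or sparse JL variant would replace the dense matrix-vector products with a cheaper transform, which is presumably what the reviewers' question about "structured JL" is probing.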
bjNKvuBMqJ
|
https://openreview.net/forum?id=bjNKvuBMqJ
|
Solving robust MDPs as a sequence of static RL problems
| 3.5
| 3.25
|
[
4,
2,
4,
4
] |
[
3,
4,
3,
3
] | 4
|
[
"Robust reinforcement learning"
] |
Designing control policies whose performance level is guaranteed to remain above a given
threshold in a span of environments is a critical feature for the adoption of reinforcement learning
(RL) in real-world applications. The search for such robust policies is a notoriously difficult
problem, related to the so-called dynamic model of transition function uncertainty, where the
environment dynamics are allowed to change at each time step. But in practical cases, one
is rather interested in robustness to a span of static transition models throughout interaction
episodes. The static model is known to be harder to solve than the dynamic one, and seminal
algorithms, such as robust value iteration, as well as most recent works on deep robust RL, build
upon the dynamic model. In this work, we propose to revisit the static model. We suggest an
analysis of why solving the static model under some mild hypotheses is a reasonable endeavor,
based on an equivalence with the dynamic model, and formalize the general intuition that
robust MDPs can be solved by tackling a series of static problems. We introduce a generic
meta-algorithm called IWOCS, which incrementally identifies worst-case transition models so
as to guide the search for a robust policy. Discussion on IWOCS sheds light on new ways to
decouple policy optimization and adversarial transition functions and opens new perspectives
for analysis. We derive a deep RL version of IWOCS and demonstrate it is competitive with
state-of-the-art algorithms on classical benchmarks.
|
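A minimal sketch of the incremental worst-case search described in the abstract, under the strong assumptions that the candidate transition models form a small finite set and that `train_policy` and `evaluate` are oracle subroutines (both hypothetical stand-ins for the paper's RL components):

```python
def iwocs(candidate_models, train_policy, evaluate, n_iters=10):
    """Sketch of incremental worst-case search (IWOCS) as the abstract
    describes it: alternate between solving a static RL problem and
    identifying the transition model on which the current policy is weakest."""
    worst_set = [candidate_models[0]]
    policy = train_policy(worst_set)                 # a static RL problem
    for _ in range(n_iters):
        # find the transition model where the current policy performs worst
        worst = min(candidate_models, key=lambda T: evaluate(policy, T))
        if worst in worst_set:                       # no new worst case: stop
            break
        worst_set.append(worst)
        policy = train_policy(worst_set)             # re-solve against the set
    return policy, worst_set
```

The reviews' central concerns map directly onto the two oracles: the `evaluate`-based search has no global-optimality guarantee over continuous model sets, and `train_policy` must be re-run at every iteration.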
We propose IWOCS, a method for robust MDPs that finds worst-case transitions, separates policy optimization from adversarial dynamics, and matches state-of-the-art deep RL performance.
|
reinforcement learning
|
https://openreview.net/pdf?id=bjNKvuBMqJ
| 2025-09-19T15:48:19
| 4
|
[
{
"id": "fLHf4O2p5e",
"forum": "bjNKvuBMqJ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16724/Reviewer_E1gk",
"reviewer_name": "Reviewer_E1gk",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes a new meta-algorithm, Incremental Worst-Case Search (IWOCS), for solving robust Markov Decision Processes (MDPs) with transition function uncertainty. The authors focus on the static model of uncertainty, where the environment's transition dynamics are fixed for an entire episode, which is often more practical but harder to solve than the commonly used dynamic model where dynamics can change at every timestep. The IWOCS algorithm works by iteratively building a discrete set of worst-case transition models. This approach effectively decouples policy optimization (a standard RL problem) from the search for adversarial environments. Empirical results on MuJoCo benchmarks show that IWOCS is competitive with and often outperforms existing robust RL methods.",
"strengths": "+ The paper's focus on static models is well-motivated. This model is a realistic representation of many real-world robustness problems.\n\n+ The proposed method is validated with strong experimental results on the MuJoCo benchmarks.\n\n+ The paper is well-structured and clearly written in the introduction and experiment sections.",
"weaknesses": "-- Algorithmic Contributions are Unclear: It is not clear which method is the main proposed one. The grid-search-based IWOCS* outperforms the CMA-ES-based IWOCS in aggregate (Table 1). This suggests that either the more sophisticated CMA-ES is an ineffective or unnecessary component, or that the benchmark uncertainty spaces are not challenging enough to warrant it over a simple grid search.\n\n-- Interpretation of Baseline Results: The interpretation of the baseline results is lacking. In both Table 1 (worst-case) and Table 2 (average), methods like M3DDPG and RARL show highly negative normalized scores, implying they perform worse than a non-robust vanilla TD3. This is a surprising and counter-intuitive result that requires a clear explanation.\n\n-- Incomplete Related Work: The related work section is not comprehensive and appears to be missing citations to some recent and relevant work in robust RL (e.g., [1, 2]).\n\n-- Disconnect Between Theory and Practice: There is a significant disconnect between the paper's theoretical motivation and its empirical validation. The core justification (static-dynamic equivalence, no-duality gap) relies on the sa-rectangularity assumption. However, the authors explicitly state their MuJoCo experiments do not respect the rectangularity assumption (Footnote 4).\n\n#### [1] Reddi et al. “Robust Adversarial Reinforcement Learning via Bounded Rationality Curricula”, ICLR 2024\n#### [2] dong et al. “Variational Adversarial Training Towards Policies with Improved Robustness”, AISTATS 2025",
"questions": "- On Baseline Performance: Can you provide an interpretation for why established robust RL methods like M3DDPG and RARL perform significantly worse than the non-robust TD3 baseline in your experiments (Tables 1 and 2)? \n\n- On Table 3: Please clarify the experimental setup for Table 3. It is mentioned that each line corresponds to a different random seed, then why the initialized values in the first 3 columns are the same. \n\n- How do you see IWOCS scaling to high-dimensional uncertainty spaces where black-box optimization methods like CMA-ES are intractable?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T05:04:16",
"modification_date": "2025-11-12T13:52:53",
"review_url": "https://openreview.net/forum?id=bjNKvuBMqJ¬eId=fLHf4O2p5e",
"license": "CC BY 4.0"
},
{
"id": "RsXLcNOOzO",
"forum": "bjNKvuBMqJ",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16724/Reviewer_RTSD",
"reviewer_name": "Reviewer_RTSD",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "This paper introduced the IWOCS method that finds the optimal policy robust to a set of pre-defined environmental transitions $\\mathcal{T}$. Specifically, it iteratively finds the transition $T_i$ that minimizes $V^{\\pi_{i-1}}$ from the previous iteration as well as the policy $\\pi_i$ that maximizes the pessimistic value from the all the $T_i$ and $V^{\\pi_i}$ it has interacted with before. Note that the algorithm is operated based on naive sampling implicitly assuming that sufficient samples can be obtained such that both $T = argmin {V_T}^{\\pi_{i-1}}$ can be achieved almost surely and $V^*_{T_i}$ can be estimated without error in each iteration.",
"strengths": "* The motivations are clearly conveyed by the paper and the approach is straightforward.\n* The method is tested over various enviroments and compared with some baselines.",
"weaknesses": "* The main concern from the reviewer is that the method is implicitly dependent on the fact that in each iteration the values functions $Q*$ and $V*$ can be perfectly obtained, and that $T_i$ can be found to minimize $V_T^\\pi$. This might be doable in relatively small and discrete environments where the transition set $\\mathcal{T}$ is also discrete. A number of concerns were raised from here.\n * First, the $T_i$ in each iteration is found by using some evolution algorithms/strategies -- what is the optimality guarantee/error bound/regret there that each time $T_i$ minimizes $V^\\pi$ globally (when $\\mathcal{T}$ and environmental transitions are both continuous)? If $T_i$ could not be solved perfectly in each iteration, how would it affect the optimality of the policy?\n * What is the sample complexity of finding $T_i$?\n * In non-discrete environmental transitions, how ${Q*_T}$ are obtained? If intractable, assuming that $Q*_T$ can be estimated with some error. Then could it violate the monotonicity property (property 2)? If this property is violated, would the algorithm still work? Could the authors show some guarantee, or the conditions, that the monotonicity could still be preserved even if $Q*_T$ could not be perfectly estimated? Or the other way around, if the monotonicity is not strictly preserved, how would it affect the optimality of the policy?\n * Even if the questions above could not be justified theoretically, could the authors validate them through numerical simulations (maybe start with the toy example and potentially extending to more complexed continuous environments)?\n\n* The method also requires the set $\\mathcal{T}$ to be fully known *a priori*. So the scope of this work is arguable covered by most of the distributionally robust RL work [1-4 below as a non-exhaustive list], which can find a policy robust to *unknown* environmental disturbance. Two more concerns following this line.\n * These methods can be potentially directly applied to the problem setup considered in this paper. Moreover, these methods usually come with sample complexity analyses, convergence and/or regret/optimality guarantees.\n * The reviewer is also curious how they performs against IWOCS in the experimental setup considered in this paper?\n\n[1] Ramesh, Shyam Sundhar, et al. \"Distributionally robust model-based reinforcement learning with large state spaces.\" International Conference on Artificial Intelligence and Statistics. PMLR, 2024.\n\n[2] Shi, Laixi, et al. \"The curious price of distributional robustness in reinforcement learning with a generative model.\" Advances in Neural Information Processing Systems 36 (2023): 79903-79917.\n\n[3] Liu, Zijian, et al. \"Distributionally Robust $ Q $-Learning.\" International Conference on Machine Learning. PMLR, 2022.\n\n[4] Tessler, Chen, Yonathan Efroni, and Shie Mannor. \"Action robust reinforcement learning and applications in continuous control.\" International Conference on Machine Learning. PMLR, 2019.",
"questions": "One additional minor comment/question\n\n* Given that $\\mathcal{T}$ is expected to be fully known -- should it still be called the \"uncertainty set\"? In the reviewer's opinion, naming them as \"a set of environmental dynamics/transitions/parameters\" seem to better aligned with how $\\mathcal{T}$ was used in this work.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T13:20:37",
"modification_date": "2025-11-12T13:52:53",
"review_url": "https://openreview.net/forum?id=bjNKvuBMqJ¬eId=RsXLcNOOzO",
"license": "CC BY 4.0"
},
{
"id": "D8jynmWQMl",
"forum": "bjNKvuBMqJ",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16724/Reviewer_GhBY",
"reviewer_name": "Reviewer_GhBY",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This paper reexamines robust RL through the lens of static robust MDPs. It demonstrates that a standard robust MDP can be solved by decomposing it into a sequence of static RL problems, replacing the standard min–max formulation with an iterative worst-case search. The proposed algorithm IWOCS alternates between standard RL in a fixed so-far worst environment and identifying the worst environment. Experiments on MuJoCo benchmarks (Ant, Hopper, and HalfCheetah) show that IWOCS achieves good robustness and stability compared to prior methods like RARL and M2TD3.",
"strengths": "1. The main idea of transforming robust MDPs as a sequence of static RL problems is insightful with well established mathematical explanation.\n2. The proposed IWOCS algorithm performs well across benchmarks, showing stronger robustness and solid average returns.\n3. The paper is well written and easy to follow, with clear structure and good intuition.",
"weaknesses": "1. The algorithm is computationally expensive. It requires storing multiple Q-functions (line 5 of algorithm 1) and solving several full RL problems, which limits scalability to large-scale or high-dimensional settings. The worst-environment search is also heuristic and unstable across tasks (IWOCS vs. IWOCS* show noticeable gaps among different tasks) with no guarantee.\n2. There’s no analysis of sample complexity, or how sensitive the algorithm is if the worst env identification is imperfect (I feel this is not easy for the continuous action/state )",
"questions": "1. Is there reliable algorithm for worst env search? if not, can you analyze the algorithm's convergence given the retrieved env is not the worst one?\n2. Could you analyze the sample complexity of this algorithm?\n3. Could you add SAC as one of the baseline?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T16:03:00",
"modification_date": "2025-11-12T13:52:54",
"review_url": "https://openreview.net/forum?id=bjNKvuBMqJ¬eId=D8jynmWQMl",
"license": "CC BY 4.0"
},
{
"id": "eVDZEoAVWE",
"forum": "bjNKvuBMqJ",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16724/Reviewer_C5Nh",
"reviewer_name": "Reviewer_C5Nh",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper revisits robust Markov Decision Processes (MDPs) from the perspective of the static model of transition uncertainty, as opposed to the commonly used dynamic or two-player adversarial model. It argues that under stationary policies and sa-rectangular uncertainty sets, the two formulations are equivalent. Building on this insight, the authors propose the Incremental Worst-Case Search (IWOCS) meta-algorithm, which iteratively identifies worst-case transition models and solves a sequence of standard (non-robust) RL problems. The method decouples policy optimization from adversarial search and is implemented using value iteration and a deep RL version. Experiments on MuJoCo benchmarks show that IWOCS achieves competitive or superior worst- and average-case performance compared to existing robust RL methods.",
"strengths": "1. The paper introduces a novel static-model framework for robust MDPs, providing new insights into environment uncertainty through the equivalence between static and dynamic formulations under stationary policies and rectangular uncertainty.\n\n2. The proposed IWOCS framework is conceptually simple, modular, and highly scalable to both tabular and deep RL settings.\n\n3. Extensive experiments on MuJoCo benchmarks demonstrate strong worst-case performance, confirming the practical effectiveness of IWOCS compared with state-of-the-art robust RL methods.",
"weaknesses": "1. The paper’s writing quality is uneven, with inconsistent comma usage, incorrect citation formatting (e.g., line 305), and an empty Appendix A.\n\n2. The related work section largely overlooks recent advances (past 3 years) in robust and distributionally robust RL.\n\n3. The framework claims to decouple policy optimization from the adversary, yet if the agent picks transition models, it effectively remains a two-player adversarial process.\n\n4. The choice of discrete uncertainty sets is not well-motivated. Most modern robust MDP studies consider continuous uncertainty set, which provides stronger theoretical guarantees and broader coverage.\n\n5. Section 4 introduces a simpler process without explaining what is simplified, why it is necessary, or how it impacts theoretical soundness.\n\n6. Algorithm 1 is poorly described: the value function computation is missing; $\\mathcal{T}_ {i}$ is defined but never used; $T_i$ is not properly updated (the algorithm may stagnate); and the “find worst $T_{i+1}$” step is ambiguous. These issues make the procedure difficult to reproduce.\n\n7. Figure 1 lacks an x-axis label, and IWOCS appears to have only two plotted points, limiting interpretability.\n\n8. Appendix F’s pseudocode introduces undefined variables (e.g., $T_{i+1}$) and inconsistent notation, rendering the algorithm incomplete.",
"questions": "1. What is the motivation for using discrete uncertainty sets instead of continuous ones? How does this affect the optimality of IWOCS?\n\n2. How does IWOCS fundamentally differ from two-player adversarial training, given that it still identifies worst-case transitions?\n\n3. What exactly is meant by the simpler process mentioned in Section 4? What is simplified, and at what theoretical cost?\n\n4. Is IWOCS trained online or offline, and how are samples collected across iterations?\n\n5. Why were only three baselines (M2TD3, M3DDPG, RARL) selected? Have newer robust RL methods (2022–2025) been considered?\n\n6. Could you specify the normalization formula used for Tables 1–2 and provide raw reward values for reproducibility?\n\n7. Why does the Ant environment perform significantly worse than others?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-20T15:36:47",
"modification_date": "2025-11-12T13:52:54",
"review_url": "https://openreview.net/forum?id=bjNKvuBMqJ¬eId=eVDZEoAVWE",
"license": "CC BY 4.0"
}
] |
2Aj7sA2vbb
|
https://openreview.net/forum?id=2Aj7sA2vbb
|
MADGen: Minority Attribute Discovery in Text-to-Image Generative Models
| 4
| 3.666667
|
[
6,
4,
2
] |
[
3,
4,
4
] | 3
|
[
"Bias identification",
"Bias mitigation",
"Fairness",
"Diffusion models"
] |
Text-to-image diffusion models achieve impressive generation quality but also inherit and amplify biases from training data, resulting in biased coverage of semantic attributes. Prior work addresses this in two ways. Closed-set approaches mitigate biases in predefined fairness categories (e.g., gender, race), assuming socially salient minority attributes are known a priori. Open-set approaches frame the task as bias identification, highlighting majority attributes that dominate outputs. Both overlook a complementary task: uncovering minority features underrepresented in the data distribution (social, cultural, or stylistic) yet still encoded in model representations. We introduce MADGen, the first framework, to our knowledge, for discovering minority attributes in diffusion models. Our method leverages Matryoshka Sparse Autoencoders and introduces a minority metric that integrates neuron activation frequency with semantic distinctiveness, enabling the unsupervised identification of rare attributes. Specifically, MADGen identifies a set of neurons whose behavior can be directly interpreted through their top-activating images, which correspond to underrepresented semantic attributes in the model. Quantitative and qualitative experiments demonstrate that MADGen uncovers attributes beyond fairness categories, supports systematic auditing of architectures such as Stable Diffusion 1.5, 2, and XL, and enables amplification of minority attributes during generation.
|
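A minimal sketch of a rarity-times-distinctiveness score of the kind the abstract describes (and the reviews below summarize as $s(z) = d \odot (1-\nu)$). The distinctiveness vector is a random placeholder here, whereas the paper derives it from CLIP; all names are illustrative, not the paper's code.

```python
import numpy as np

def minority_scores(codes, distinctiveness, eps=1e-6):
    """codes: (n_samples, n_neurons) sparse activations from the autoencoder;
    distinctiveness: (n_neurons,) semantic-distinctiveness scores.
    Returns s = d * (1 - nu), where nu is each neuron's activation frequency."""
    nu = (np.abs(codes) > eps).mean(axis=0)      # how often each neuron fires
    return distinctiveness * (1.0 - nu)          # high = rare AND distinctive

rng = np.random.default_rng(0)
codes = rng.random((1000, 64)) * (rng.random((1000, 64)) > 0.9)  # toy sparse codes
d = rng.random(64)                               # placeholder for CLIP-based scores
top_neurons = np.argsort(minority_scores(codes, d))[::-1][:5]    # candidate minority neurons
```

Note how this toy form makes one of the reviewers' objections concrete: nothing in the score distinguishes attributes that are systematically suppressed from attributes that are simply rare.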
A framework to identify minority or underrepresented attributes in the intermediate representations of diffusion models.
|
alignment, fairness, safety, privacy, and societal considerations
|
https://openreview.net/pdf?id=2Aj7sA2vbb
| 2025-09-14T17:38:00
| 4
|
[
{
"id": "7NFMIySpiC",
"forum": "2Aj7sA2vbb",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5074/Reviewer_Z4F1",
"reviewer_name": "Reviewer_Z4F1",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This work addresses the issue of bias in text-to-image diffusion models, introducing MADGen, a framework designed to discover minority attributes in diffusion models, which uses Matryoshka Sparse Autoencoders and a novel minority metric to discover and amplify underrepresented attributes in diffusion models.",
"strengths": "1. The paper is well-written and easy to follow. The figures are well-designed and enhance the understanding of the method.\n2. This paper mentions a very promising issue in bias in text-to-image diffusion models. The framework is an effective method to systematically uncover latent minority attributes in diffusion models without predefined categories.",
"weaknesses": "1. The experimental validation would benefit from broader comparative analysis. While the paper claims MADGen supports systematic auditing across Stable Diffusion variants (1.5, 2, and XL) and enables attribute amplification, the experimental results just employ Stable Diffusion v1.4 in quantitative comparison. \n\n2. Quantitative comparison with state-of-the-art supervised debiasing methods[1-5] to clarify the trade-offs between unsupervised discovery and supervised correction would be interesting.\n\n[1] Friedrich F, Schramowski P, Brack M, et al. Fair diffusion: Instructing text-to-image generation models on fairness[J]. arXiv preprint arXiv:2302.10893, 2023.\n\n[2]Gandikota R, Orgad H, Belinkov Y, et al. Unified concept editing in diffusion models[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024: 5111-5120.\n\n[3]Shen X, Du C, Pang T, et al. Finetuning Text-to-Image Diffusion Models for Fairness[C]//The Twelfth International Conference on Learning Representations.\n\n[4]Li J, Hu L, Zhang J, et al. Fair text-to-image diffusion via fair mapping[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2025, 39(25): 26256-26264.\n\n[5]Li H, Shen C, Torr P, et al. Self-discovering interpretable diffusion latent directions for responsible text-to-image generation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 12006-12016.",
"questions": "1. MADGen focuses on discovering minority attributes. However, if integrated with existing debiasing methods (such as prompt engineering), could potential conflicts arise? Can you provide more case and experiments demonstrating its compatibility with debiasing methods?\n2. How does the method behave when dealing with multiple biases case, like both gender and race? \n3. Whether using LLMs for attribute annotation may introduce extra bias due to their blind spots or inherent biases?\n4. When using heatmaps to visualize interpretability results of generative images, did you explore the relationship between these interpretable features and conditional prompts?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-07T17:52:32",
"modification_date": "2025-11-12T11:24:16",
"review_url": "https://openreview.net/forum?id=2Aj7sA2vbb¬eId=7NFMIySpiC",
"license": "CC BY 4.0"
},
{
"id": "U32bOlSBZ5",
"forum": "2Aj7sA2vbb",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5074/Reviewer_LaCj",
"reviewer_name": "Reviewer_LaCj",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes MADGen, a post-hoc framework for discovering underrepresented “minority attributes” within text-to-image diffusion models by applying Matryoshka Sparse Autoencoders (MSAEs) to the internal bottleneck activations of the denoising UNet. The method ranks latent neurons based on a minority score combining activation rarity and semantic distinctiveness, and visualises them via top-activating samples and heatmaps. The authors show that MADGen can surface demographic, stylistic, and contextual biases across multiple Stable Diffusion variants, and demonstrate that prompting with discovered minority attributes can increase their presence in generated outputs. The framework is positioned as a representation-grounded auditing tool rather than a direct fairness mitigation method.",
"strengths": "1. The paper addresses an important and timely problem: moving beyond predefined fairness axes to discover more general underrepresented attributes in diffusion models.\n2. The use of MSAEs for hierarchical concept decomposition in diffusion activations is new in this specific context and provides a structured way to inspect internal model representations.\n3. The experimental analysis includes cross-model comparison (SD v1.4, v2.1, SDXL) and explores multiple types of minority attributes (demographic, stylistic, contextual), demonstrating a broader scope than traditional fairness-only audits.",
"weaknesses": "1. The technical contribution is limited: the method is largely a direct application of existing MSAE architectures to diffusion features, and the proposed “minority score” is heuristic rather than theoretically grounded.\n2. Although the paper claims to be label-free, the pipeline critically depends on external vision-language models (GPT-4o) for semantic interpretation and attribute detection, meaning the attribute space is constrained by the biases and vocabulary of those models.\n3. Although the method allows intervening on internal latent activations to increase the presence of minority attributes, this intervention still operates externally to the diffusion model and does not modify the model parameters or shift its intrinsic generative distribution.\n4. The neuron visualisations are purely qualitative, and the paper does not provide quantitative validation that the discovered neurons are causally responsible for specific attributes. For example, there is no ablation, activation editing, or information-theoretic analysis to show that (i) neurons are disentangled, (ii) manipulating them reliably controls attributes, or (iii) they do not encode confounded signals. \n5. Generalisation is not demonstrated: it is unclear whether MSAE-trained neurons transfer across prompts, attribute types, or datasets, and whether the identified attributes are stable or model-specific.",
"questions": "1. How does MADGen compare to the baseline diffusion model in terms of inference or memory cost?\n2. Can the authors demonstrate that the same MSAE-trained neurons remain meaningful across different prompts or datasets, rather than requiring per-prompt retraining?\n3. The “amplification” experiment only edits prompts; have the authors evaluated whether the number of minority-attribute samples increases quantitatively (e.g., counting attribute occurrence before and after intervention)?\n4. How robust is the method to the choice of external annotators (GPT-4o / Qwen-VL? If different models are used, do the discovered attributes change?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T20:16:50",
"modification_date": "2025-11-12T11:24:16",
"review_url": "https://openreview.net/forum?id=2Aj7sA2vbb¬eId=U32bOlSBZ5",
"license": "CC BY 4.0"
},
{
"id": "bLONZELWYQ",
"forum": "2Aj7sA2vbb",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5074/Reviewer_ehoC",
"reviewer_name": "Reviewer_ehoC",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 1,
"presentation": 2,
"summary": "This paper introduces MADGen, a framework intended to discover \"minority attributes\" in text-to-image diffusion models. These attributes are defined as semantic concepts that are encoded in the model's internal representations but are systematically underrepresented in the generated outputs. The method trains a Matryoshka Sparse Autoencoder (MSAE) on the U-Net's intermediate activations to find interpretable features (neurons). It then proposes a \"Minority Score,\" $s(z) = d \\odot (1-\\nu)$, which combines semantic distinctiveness ($d$, based on CLIP) and activation rarity ($1-\\nu$), to rank these neurons. The authors claim this framework can audit models and demonstrate its use across Stable Diffusion v1.4, v2.1, and XL.",
"strengths": "The paper's only significant strength is identifying and articulating the important, unsolved problem of \"minority attribute discovery,\" distinguishing it from the more common tasks of closed-set mitigation or majority bias detection.",
"weaknesses": "1. The method's core premise that low activation frequency ($\\nu_i$) indicates systematically suppressed attributes is an unvalidated heuristic.\n2. The paper's primary qualitative \"proof\" is unconvincing and self-contradictory. The authors use fragmented heatmaps in Figure 5 to dismiss low-scoring neurons as \"non-coherent\". However, a supposedly \"successful\" high-scoring neuron in Figure 3 (top-left) exhibits the exact same flaw, activating nonsensically on the image corners. This suggests the authors (or their LLM) are engaging in post-hoc rationalization, labeling the images (e.g., \"black-and-white\") and simply ignoring the contradictory heatmap evidence.\n3. The evaluation is fatally flawed. It is circular: the method finds rare things, and the evaluation proves it found rare things. It is confounding: the method and its evaluation are critically dependent on external black-box models (CLIP, GPT-4o, LLaMA-4 Scout).\n4. The method is brittle and relies on arbitrary thresholds, such as the \"90th percentile\" cutoff and the \"0.003\" cosine distance, with no sensitivity analysis or justification.",
"questions": "1. In Figure 5, you dismiss low-scoring neurons because their heatmaps are \"diffuse, fragmented, and fail to capture coherent semantic attributes\". However, in Figure 3 (top-left, \"Black-and-white photo...\"), a high-scoring \"minority\" neuron clearly activates on the image corners, not the semantic content. How do you justify labeling this neuron as a coherent minority attribute while dismissing the ones in Figure 5 as noise? This appears to be a major contradiction.\n2. The paper's premise links low activation frequency to \"minority\" or \"suppressed\" attributes. How do you distinguish between attributes that are \"systematically suppressed\" (i.e., the model learns to under-produce them relative to the training data) and attributes that are simply \"naturally rare\" (i.e., they have a low frequency in the training data)? Your method $s(z) = d \\odot (1-\\nu)$ appears to find both without distinction.\n3. This is a critical point. Please clarify the exact training and inference pipeline. Do you train one global MSAE on a large, general-purpose corpus, or must you train a new MSAE for each prompt? What is the computational cost, and how does this scale as an auditing tool?\n4. The semantic distinctiveness score $d_i$ is entirely dependent on CLIP. How can you be sure your method is not just discovering the biases of CLIP's embedding space rather than the diffusion model's internal representations?\n5. The minority score $s(z) = d \\odot (1-\\nu)$ is a heuristic. Can you provide a stronger justification for this multiplicative form?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T03:33:11",
"modification_date": "2025-11-12T11:24:16",
"review_url": "https://openreview.net/forum?id=2Aj7sA2vbb¬eId=bLONZELWYQ",
"license": "CC BY 4.0"
}
] |
zq40cmz1JD
|
https://openreview.net/forum?id=zq40cmz1JD
|
When Speculation Spills Secrets: Side Channels via Speculative Decoding in LLMs
| 5
| 3.5
|
[
6,
6,
4,
4
] |
[
4,
4,
3,
3
] | 4
|
[
"Large Language Models",
"Speculative Decoding",
"Side Channel Attack",
"Privacy"
] |
Deployed large language models (LLMs) often rely on speculative decoding, a technique that generates and verifies multiple candidate tokens in parallel, to improve throughput and latency. In this work, we reveal a new side-channel whereby input-dependent patterns of correct and incorrect speculations can be inferred by monitoring per-iteration token counts or packet sizes. We demonstrate that an adversary observing these patterns can fingerprint user queries with >90% accuracy across four speculative-decoding schemes: REST (∼100%), LADE (up to 92%), BiLD (up to 95%), and EAGLE (up to 77.6%). The adversary can also leak confidential datastore contents used for prediction at rates exceeding 25 tokens/sec. We evaluate the side-channel attacks in both research prototypes and the production-grade vLLM serving framework. To defend against these attacks, we propose and evaluate a suite of mitigations, including packet padding and iteration-wise token aggregation.
|
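A toy illustration of the signal the abstract describes: the number of tokens emitted per speculative-decoding iteration (observable through streamed packet sizes) forms an input-dependent trace. The acceptance process below is a simplistic stand-in, not any real speculative decoder, and all names are illustrative.

```python
import numpy as np
from collections import Counter

def token_trace(accept_prob, n_tokens, draft_len=4, rng=None):
    """Simulate tokens emitted per iteration: each step yields 1 verified
    token plus however many of the draft_len draft tokens were accepted."""
    rng = rng if rng is not None else np.random.default_rng(0)
    trace, produced = [], 0
    while produced < n_tokens:
        accepted = 1 + int(np.sum(rng.random(draft_len) < accept_prob))
        trace.append(accepted)
        produced += accepted
    return trace

def histogram_features(trace, draft_len=4):
    """Normalized histogram over the possible per-iteration token counts."""
    counts = Counter(trace)
    return [counts.get(i, 0) / len(trace) for i in range(1, draft_len + 2)]
```

Here `accept_prob` stands in for a query-dependent speculation accuracy; traces generated this way feed the toy classifier sketched after this record's reviews.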
We develop a side channel attack leaking private user inputs by exploiting speculative decoding optimizations in LLM inference.
|
infrastructure, software libraries, hardware, systems, etc.
|
https://openreview.net/pdf?id=zq40cmz1JD
| 2025-09-19T04:10:50
| 4
|
[
{
"id": "YAO1FTCAHu",
"forum": "zq40cmz1JD",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13973/Reviewer_mQbv",
"reviewer_name": "Reviewer_mQbv",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper studies side-channel attacks on speculative decoding in LLMs that can leak information about a user’s prompt. Different prompts/responses can lead to unique patterns of accepted and rejected draft tokens, which can be manifested as differences in packet sizes (if the response is streamed). Therefore, a network attacker can observe packet sizes to predict information about a user’s prompt.\n\nThe paper runs experiments in controlled, simulated settings, finding that the attack can accurately predict at low temperatures when the exact set of test prompts is known beforehand. At higher temperatures, or when the exact test prompts are not known beforehand, accuracy significantly decreases, but is still better than random guessing. The paper also proposes and evaluates defenses to mitigate the attack.",
"strengths": "It is important to draw attention to the potential privacy risks of LLM inference techniques, given how widely LLMs and inference optimizations are used today. The paper runs simple, proof-of-concept experiments in controlled, simulated settings that demonstrate that speculative decoding can leak information about user prompts via packet sizes. The paper also proposes and evaluates defenses against the attack, giving concrete mitigations that LLM providers can implement.\n\nCode and documentation are uploaded in the supplementary material, which enhances the reproducibility of the paper.",
"weaknesses": "* The high accuracies reported in the abstract are only achieved in limited settings: low temperature (0.3) and the exact set of 50 test prompts are known and used at training time. When the temperature increases, or when the exact test prompts are not trained on, the accuracies decrease significantly, although still above random guessing. It seems like much of what the attack is doing is memorizing the fingerprint for specific responses, as indicated by the brittleness to increasing temperature.\n\n To avoid being misleading, the accuracies in the abstract should be updated or omitted, or the caveat of the limited setting should be explicitly stated. \n* No experiments are run on real-world production systems like ChatGPT or Claude, which are done by the related works [Weiss et al. (2024)](https://arxiv.org/abs/2403.09751) and [Carlini et al. (2024)](https://arxiv.org/abs/2410.17175). This would provide more evidence of the efficacy of the attack in more realistic settings, where more factors are unknown (speculative decoding implementation, streaming logic, etc.) \n* I think that the tables would be more naturally presented as graphs. The tables generally show how the accuracy changes as some parameter changes (temperature, traces per query, etc.). For example, Table 2 could plot accuracy against temperature, with one line representing each speculative decoding method. The tables have many numbers, making them hard to read and see the overall trends. \n* The formatting of the paper could use some polishing. For example, parenthetical citations are not correctly formatted throughout the paper. The parentheses are missing, so they interrupt the sentence and make them harder to read.",
"questions": "### Questions\n\n1. For the locally run experiments, how are the packet sizes determined? How is it determined when to send each packet, and how many tokens are sent in each packet? \n2. When the attack accuracy is high, how similar are the responses/traces at test time compared to the ones from training time? Is the model giving the same response, or is there more diversity? \n3. For REST, how similar are responses to each other as the temperature increases? Since REST is retrieval-based, I am wondering if increasing temperature does not increase response diversity as much as in the other speculative decoding methods, leading to the high accuracy for REST. \n4. Do you have a reference or explanation for modeling times in high server load with a log-normal distribution? \n5. In the out-of-distribution experiment (Section 4.8), for each ground truth prompt, there may be several diseases that have similar symptoms. So, random guessing would have a higher performance than normal. What is the accuracy achieved by random guessing under this evaluation? \n6. In the datastore leakage attack (Section 5), what is the false positive rate, i.e., classifying a sequence as in the datastore when it does not actually appear? \n7. How does the packet size increase 230x when padding to 1024 bytes? This means that the original packet size is just 4 bytes, which is enough room for at most 4 characters (not even counting metadata). \n 1. Also, the packet size can be made smaller, and if there are too many tokens generated in one iteration, it can just be split into several packets. There can be some delay between these packets so the attacker cannot tell that it was one iteration. \n8. From the code in the supplementary material, it looks like a random forest is also used when evaluating the inter-arrival time side-channel attack (Carlini & Nasr, 2024). However, that paper uses Gaussian Mixture Models and convolutional neural networks for predicting prompts/topics from the timing data. For a fair comparison, GMMs should be used for the time side-channel, as it may achieve better performance. \n9. More generally, did you try any methods other than random forests? Other methods may perform better on both the packet size and inter-arrival time data. \n10. Do real-world production systems such as ChatGPT, Claude, etc. actually send multiple tokens per packet? Are there systems that always send exactly one token per packet?\n\n### Notes\n\n1. \\> appears as ¿ in the abstract. \n2. Parenthetical citations are missing the parentheses throughout the paper. `\\citep` should be used. \n3. There is an extra indentation at the start of lines 142 and 154\\. \n4. Line 157 typo: “wand” should be “and” \n5. It would be good to have more descriptive titles for each experiment in addition to just “Experiment 1”.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T00:38:30",
"modification_date": "2025-11-12T13:14:31",
"review_url": "https://openreview.net/forum?id=zq40cmz1JD¬eId=YAO1FTCAHu",
"license": "CC BY 4.0"
},
{
"id": "YWSGU3ucAp",
"forum": "zq40cmz1JD",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13973/Reviewer_QPdf",
"reviewer_name": "Reviewer_QPdf",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper reveals that speculative decoding techniques used to accelerate inference in large language models pose severe privacy risks. By analyzing packet-size patterns in encrypted network traffic, attackers can infer whether internal speculations succeed or fail, enabling them to identify users’ sensitive queries or to extract the confidential parameters that drive speculation. \n\nExperiments across multiple speculative-decoding schemes and real-world deployment settings confirm the effectiveness of this side-channel attack. Although defenses such as packet padding or token aggregation exist, they typically force a trade-off between performance and privacy.",
"strengths": "1. This paper is the first to reveal a packet-size-based side-channel attack introduced by speculative decoding techniques in LLMs. It explicitly differentiates this work from prior LLM side-channel attacks, such as token-length leakage and timing attacks by focusing on input-dependent speculation patterns.\n\n2. The attack is validated across four speculative decoding schemes (REST, LADE, BiLD, EAGLE) and tested in both academic prototypes and the production-grade vLLM serving framework, confirming its novelty as the first systematic exploration of this specific side channel .\n\n3. The paper’s experiments are comprehensive, as evidenced by the design of its fingerprinting attack and multi-dimensional evaluation.",
"weaknesses": "1. There are still issues with writing and typesetting. For example, the caption of Figure 1; the font size in Figure 5 is excessively small; and in tables (e.g., Table 1, Table 4), the layout is overly compact.\n\n2. Although Experiment 3 (semantically similar but non-identical queries) and Section 4.8 (out-of-distribution training) evaluate the fingerprinting attack under approximate or out-of-distribution dataset setups, both configurations remain somewhat idealized.\n\n3. The paper provides no justification for choosing random forest over other advanced methods and does not explore whether using more sophisticated algorithms could reveal higher attack potential or more robust speculation patterns.\n\n4. While token aggregation is shown to reduce attack accuracy, the paper does not measure its impact on end-user perceived latency\n\n5. Common LLM optimizations like paged attention split the KV cache across multiple GPUs, which may interact with token aggregation to alter packet generation logic . The paper does not test this interaction, so it remains unknown if token aggregation is broken or weakened by paged attention.",
"questions": "1. Intuitively, the traffic pattern of each prompt–response pair seems likely to be unique. Would it be possible to introduce additional metrics and perhaps specific thresholds to make the \"same/different\" distinction more explicit?\n2. Could concurrent traffic affect the attack accuracy? Have any experiments measured the fingerprinting attack’s performance under real Internet conditions?\n3. Could employing other classifier algorithms improve the overall effectiveness to some extent?\n4. The current dataset appears modest in scale, and the prompts are short and straightforward. While this nicely demonstrates the attack under ideal conditions, I wonder how it would behave when the scenarios grow more complex.\n5. The attack’s success rate varies across decoding strategies. Is there any theoretical insight that helps explain why these differences emerge?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T23:45:03",
"modification_date": "2025-11-12T13:14:32",
"review_url": "https://openreview.net/forum?id=zq40cmz1JD¬eId=YWSGU3ucAp",
"license": "CC BY 4.0"
},
{
"id": "FClHFaXryX",
"forum": "zq40cmz1JD",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13973/Reviewer_KSN3",
"reviewer_name": "Reviewer_KSN3",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper considers information leakage in speculative decoding for LLMs. It proposes an attack based on packet sizes and demonstrates that it can identify user queries with high success rates across four speculative decoding schemes in specific medical query scenarios. It proposes several defense mechanisms, which can effectively reduces the attack success rates at high costs.",
"strengths": "1. The paper is mostly well-written and easy to follow.\n2. It is an interesting observation that per-iteration token count or packet size can leak private information.\n3. The attack works across different speculative decoding schemes.\n4. The defense mechanisms can effective reduce the risk of information leakage.",
"weaknesses": "1. The experiments is limited to a very special medical chatbot scenario where there are only a small number of diseases. There is no experiment about scaling up the number of possible labels, or diseases in this case. \n2. The proposed defense mechanisms are all very costly and not very practical.",
"questions": "1. How will the attack success rate scale with the number of diseases? \n2. How will the attack cost, e.g. in terms of number of profiling examples and training cost, scale with the number of diseases? \n3. In the out-of-distribution experiment, how different are the two distributions? In particular, how many of the \"50 common diseases\" overlap with the training examples? Since the attack success rate already drops significantly in the current experiment, should one expect the risk to be of little practical importance when the number of diseases is large?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T10:22:34",
"modification_date": "2025-11-12T13:14:32",
"review_url": "https://openreview.net/forum?id=zq40cmz1JD¬eId=FClHFaXryX",
"license": "CC BY 4.0"
},
{
"id": "CckZOANTk3",
"forum": "zq40cmz1JD",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13973/Reviewer_LAHj",
"reviewer_name": "Reviewer_LAHj",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "The paper proposes a side-channel attack on speculative decoding, where the adversary can infer the user prompt by observing the packet size sent from a remotely hosted LLM (proxy metric for a per-iteration token counts, i.e., speculation accuracy). For some speculative decoding schemes (e.g., REST), the attack can fingerprint user queries (i.e., can classify the query content out of 50 predefined classes) with near-perfect accuracy. Their method is effective under a real-world deployment scenario using vLLM (Section 4.7). They further discuss that this side channel can be exploited to allow a data extraction attack on an algorithm like REST that relies on a datasource for speculation (Section 5).",
"strengths": "1. The paper reveals the exploitability of speculation accuracy, which can be estimated in practice based on observing the packet size. It is an interesting and surprising observation that there exists a correlation between speculation accuracy and the input prompt.\n2. Their attack essentially works better when the speculation algorithm is more stable (lines 314-316), implying a security-utility tradeoff, and a growing concern as the speculation accuracy improves.\n3. They have multiple settings for many experiments (e.g., as described in Section 4.3, lines 240-260), ranging from the easiest setting that gives the upper bound of the attack success, to a more practical setting.",
"weaknesses": "1. **High-level concern about the contribution**: The method essentially reduces to a random‑forest–based 50‑class classifier (Section 4.4, lines 269–292) that uses “tokens per iteration” as the primary input feature. While reasonable, I believe either of the following must be satisfied/clarified for an acceptable contribution level: (i) technical novelty: include a component specific to the speculative‑decoding setting that materially improves attack success; or (ii) strong practicality: since the current setup assumes a predefined set of 50 classes, I remain unconvinced about the effectiveness in realistic settings where users ask arbitrary topics in arbitrary phrasings.\n\n2. **Detailed analysis of why the attack succeeds**: It is unclear “why” the attack works, i.e., what parts of the trace the classifier relies on when making predictions. The draft presents the method as generic and versatile, yet the experiments are limited to a medical dataset extracted from Han et al. (line 243), so it may be domain‑specific. For example, I hypothesize the correlation between speculation accuracy and user prompt could stem from something specific to disease names, and the method may degrade in other domains e.g., where no technical terms exist. Overall, I would like the authors to go one step deeper so they can claim that the method works for some principled reasons, and to discuss more on when / why it works.\n\n3. **Presentation (minor)**: The typo in the abstract (line 14, transposed “?”) is careless for a conference submission. Also, pasting dozens of raw prompts across 5+ pages (pages 12–18) without formatting/description (e.g., grouping or a table/figure) is unusual and makes the information hard to grasp.",
"questions": "1. **Concrete setup for OOD ablation** (Section 4.8): The authors conduct a training by using 50 diseases that are generated by GPT-4o as common topics users typically ask about, and then evaluate the prediction accuracy using the 50 predefined classes. Do the 50 classes for training and evaluation have one-to-one correspondence? If yes, how is it different from Experiment 3 (lines 250-260)? If not, how is the evaluation designed?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T03:05:42",
"modification_date": "2025-11-12T13:14:33",
"review_url": "https://openreview.net/forum?id=zq40cmz1JD¬eId=CckZOANTk3",
"license": "CC BY 4.0"
}
] |
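Following up on the random-forest question raised in the reviews above, here is a minimal sketch (assuming a standard scikit-learn API) of fingerprinting from traces, reusing the `token_trace`/`histogram_features` helpers sketched earlier in this record. The three acceptance rates below are illustrative stand-ins for three query classes, not the paper's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy fingerprinting: distinct acceptance rates stand in for distinct queries.
rng = np.random.default_rng(1)
X, y = [], []
for label, p in enumerate([0.2, 0.5, 0.8]):            # 3 illustrative classes
    for _ in range(100):
        X.append(histogram_features(token_trace(p, 200, rng=rng)))
        y.append(label)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))  # in-sample accuracy only; a held-out split is needed
```

Any featurizer/classifier pair would do here; the reviewers' point stands that random forests are one choice among many, and nothing in this sketch is specific to them.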
if1Ndb6RWD
|
https://openreview.net/forum?id=if1Ndb6RWD
|
Information-based Value Iteration Networks for Decision Making Under Uncertainty
| 3.5
| 3
|
[
2,
6,
2,
4
] |
[
4,
4,
2,
2
] | 4
|
[
"Reinforcement Learning",
"value iteration networks",
"planning under uncertainty"
] |
Deep neural networks that incorporate classic reinforcement learning methods, such as value iteration, into their structure significantly outperform randomly structured networks in learning and generalization. These networks, however, are mostly limited to environments with no or very low amounts of uncertainty. In this paper, we propose a new planning module architecture, the VI$^2$N (Value Iteration with Value of Information Network), that learns to act in novel environments with a high amount of perceptual ambiguity. This architecture over-emphasizes reducing uncertainty before exploiting the reward. VI$^2$N can also utilize factorization in environments with mixed observability to decrease the computational complexity of calculating the policy and facilitate learning. Tested on a diverse set of domains, each containing various types of environments, our network outperforms other deep architectures. Moreover, VI$^2$N generates interpretable cognitive maps highlighting both rewarding and informative locations. These maps highlight the key states the agent must visit to achieve its goal.
|
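For reference, a minimal sketch of tabular value iteration, the classic recursion that VIN-style planning modules such as the one in the abstract unroll as network layers. This is the textbook algorithm, not the paper's $VI^2$ module with pairwise values $V(s,s')$; the toy problem at the end is illustrative.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, n_iters=50):
    """P: (A, S, S) transition tensor, R: (S, A) reward table.
    Returns state values V and Q-values after n_iters Bellman backups."""
    V = np.zeros(R.shape[0])
    for _ in range(n_iters):
        # Q[s, a] = R[s, a] + gamma * sum_{s'} P[a, s, s'] * V[s']
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        V = Q.max(axis=1)
    return V, Q

A, S = 4, 25
P = np.full((A, S, S), 1.0 / S)         # toy uniform dynamics
R = np.zeros((S, A)); R[-1, :] = 1.0    # reward at one goal state
V, Q = value_iteration(P, R)
```

VINs embed this loop as convolution plus max-pooling over a spatial state grid; the paper's contribution, per the abstract, is to additionally plan over pair values so that reducing uncertainty is rewarded alongside reaching the goal.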
We proposed a novel deep architecture for decision making under uncertainty based on planning for reward maximization and information gathering.
|
reinforcement learning
|
https://openreview.net/pdf?id=if1Ndb6RWD
| 2025-09-19T04:03:42
| 4
|
[
{
"id": "QkxtFCaSOb",
"forum": "if1Ndb6RWD",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13955/Reviewer_wGbv",
"reviewer_name": "Reviewer_wGbv",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces new a differentiable planning architecture ($VI^2N$) for decision making under uncertainty in partially observable environments. The method extends Value Iteration Networks (VINs) by integrating a pairwise heuristic mechanism (from prior works) that aims to explicitly model and reduce uncertainty before exploiting rewards. The authors also proposed how $VI^2N$ can leverage Mixed Observable MDP (MOMDP) factorisation to reduce computational complexity and improve scalability. They then present various experiments in several simple grid-world domains with different degrees of uncertainty (covering cases where the agent’s position or the goal’s position is unknown) and shows that $VI^2N$ outperforms QMDP-Net (the most directly related prior work) in success rate. An ablation study on the recurrence depth of $VI^2N$ further highlights the importance of the model’s planning horizon.",
"strengths": "* **Significance and originality:** The paper tackles an important limitation of previous value iteration-based networks (VINs, QMDP-Nets) by explicitly integrating information-gathering behavior into planning under uncertainty. This is highly relevant for domains such as robotic navigation and grid-world planning tasks.\n* **Experimental coverage:** The study provides experiments in several partially observable gridworld settings, including tasks with unknown agent position / known goal and known agent position / unknown goal, capturing a different types of uncertainty.\n* **Comparative results:** The paper demonstrates consistent and often substantial improvement over QMDP-Net, the closest relevant baseline, showing that $VI^2N$ can perform better in environments with significant perceptual ambiguity.\n* **Insightful ablation:** The analysis on the number of recurrences in the VI and $VI^2$ modules reveals how performance depends on planning depth, providing useful interpretability and model understanding.\n* **Clarity and theoretical grounding:** The exposition of the pairwise heuristic is conceptually clear and builds logically on established literature in POMDPs.",
"weaknesses": "* **Limited generality:** Like QMDP-Net and other VIN-based models, $VI^2N$ assumes a discrete environment with spatially invariant transition kernels and discrete actions, which restricts applicability to continuous or large-scale real-world domains.\n* **Figure clarity:** The main architecture diagram (Figure 1) is dense and difficult to interpret, lacking a proper legend and a lot of missing arrow direction indicators, which makes following data flow challenging.\n* **Restricted evaluation scale:** Experiments are limited to small binary grid-worlds, which makes unclear how it will scale to larger maps or continuous domains (e.g., those used in the prior VIN or QMDP-Net works).\n* **Outdated baselines:** The authors only compare against QMDP-Net, which is a fairly old baseline (Karkus et al., 2017). They justify this by saying that the QMDP-Net paper showed that they perform significantly better than unconstrained networks. However there are several more recent POMDP baselines other than RNNs/LSTMs and behavior cloning (which is what Karkus et al. compared against), like transformers (e.g Decision transformers, TrXL, etc). Additionally, given that the experiments are all trained offline using expert trajectories, there are several more recent offline RL baselines (like CQL, IQL, etc).\n* **Weak evidence for claims:** It looks like the authors did not average their results across several training runs (or the number of seeds used is not reported). Hence it is unclear if the results are significant (let alone statistically significant). There are also no plots/results to validate the claim that $VI^2N$ focuses on reducing uncertainty *before* exploiting rewards. Finally, it is unclear if QMDP-Net also got/used the same factorised representation in Task 2 with the fully observable agent’s location (so the belief should similarly be only over unobserved variables).\n* **Little ablations and analysis:** Only one ablation (recurrence depth) is provided. No analysis of other hyperparameters and model failure modes is given. For example it is unclear how performance varies with $\\lambda$, and what tasks/situations are problematic for $VI^2N$ due to the changed bellman equation (Equation 4).\n* **Lack of robustness tests:** Although the architecture is designed for high uncertainty, the paper does not test performance under systematic increased stochasticity. E.g.\n - Increasing action slip probabilities\n - Noisy observation models like the classical “noisy TV” scenarios where observations become uninformative (e.g. where transitions into a specific grid position gives uniformly random observations).\n* **Scope of generalisation:** All experiments use offline expert demonstrations. Hence it is unclear how the method will be affected by non-expert demonstrations or the online RL settings. \n\nSome of these limitations are acknowledged in the last section (Discussion), including a couple additional ones. However given the severe lack of analysis as mentioned above (and no theory), the authors really should have included some of the experiments they leave to future works.",
"questions": "Please see the weaknesses above. Mainly:\n\n1. **Baselines:** Why were no more recent POMDP or offline RL baselines included? Could you compare against transformer-based or uncertainty-aware architectures/algorithms (like CQL) to contextualise performance?\n2. **Scalability:** How does $VI^2N$ scale computationally and in performance for larger or continuous environments? Have you tested on larger grid-worlds or 3D navigation tasks as suggested in discussion, since you claim it is easily doable?\n3. **Statistical significance:** How many training seeds were used? Could you report mean ± std over multiple runs to assess (or at least illustrate) the significance of the improvements?\n4. **Robustness to noise:** How would $VI^2N$ perform under increased stochasticity (e.g., transition noise or noisy observations)? Does the network maintain its advantage over QMDP-Net in such cases? Does the plots of informative areas through the value function of pairs change as one would expect? \n5. **Ablation depth:** Beyond recurrence count, could you ablate other design elements (e.g., factorisation, pairwise distinguishability threshold λ) to clarify what drives improvements and the corresponding tradeoffs? Are there failure cases as a result of the way the rewards and Bellman equations are modified (or can the authors prove they maintain optimality)? In general could the authors analyse failure cases?\n6. **Interpretability consistency:** Are the “informative area” maps consistent across runs and different environment structures, or do they vary significantly depending on training initialisation?\n7. **Generalisation to non-expert settings:** Given that experiments rely on expert trajectories, how would $VI^2N$ perform under non-expert trajectories?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-08T03:38:11",
"modification_date": "2025-11-12T13:14:17",
"review_url": "https://openreview.net/forum?id=if1Ndb6RWD¬eId=QkxtFCaSOb",
"license": "CC BY 4.0"
},
{
"id": "oI1Z6Rbn7k",
"forum": "if1Ndb6RWD",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13955/Reviewer_6LxJ",
"reviewer_name": "Reviewer_6LxJ",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The paper \"Information-based Value Iteration Networks for Decision Making Under Uncertainty\" proposes to combine Value-Iteration networks with Partially Observable MDPs (POMDPs) to account for uncertainty in the environment. To overcome the computational complexity of obtaining the optimal policy for POMDPs they adapt a solver that uses the pairwise heuristic that estimates the value $V(s,s')$ for states $s,s'\\in S$. Notably, the authors argue that this heuristic is only necessary for the features that are uncertain, which can drastically decrease the amount of required computations. They provide empirical evaluation on two original gridworld datasets and show that their approach notably outperforms QMDP in environments with high uncertainty. Finally, they present \"information maps\" for each state that appear to make model decisions more interpretable than the value function.",
"strengths": "The combination of the solid theoretical foundation of value iteration with MOMDPs to account for uncertainty in the environment is inspiring. The theory developed in this paper was very clearly stated and easy to follow, even without significant knowledge on partial observability or pairwise heuristics for such problems. Furthermore, the observable goal, unknown position environments were motivated by real-world applications (like robots with sonar sensors), justifying their relevance. The emperical results show the benefits of this approach for difficult navigation environments. On top of that, the analysis highlights that both reward exploitation and resolving uncertainty are essential for the model's success. Finally, the obtained information maps provide an impressive insight into the agent's capabilities.",
"weaknesses": "As mentioned in the discussion, the lack of evaluation beyond 2D navigation tasks is unfortunate, since it would help to understand the algorithms capabilities in other reinforcement learning domains. Specifically, you state that you successfully tested noisy environments; it would therefore have been interesting to see quantitative results for these experiments, since they are especially relevant for real-world applications. Furthermore, though QMDP-Net performs better than unconstrained networks, a comparison to at least one state-of-the-art baseline not specifically built for partial observability would have been beneficial.\n\nSo, the main issues are:\n- Limited experiments on relatively toy 2d environments \n- Lack of comparison against other established methods",
"questions": "I am unsure how the threshhold $\\lambda$ has to be selected by a domain expert and why it should be close to 1. The reasoning behind this choice would be an interesting note to fully grasp the presented approach.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T02:40:43",
"modification_date": "2025-11-12T13:14:18",
"review_url": "https://openreview.net/forum?id=if1Ndb6RWD¬eId=oI1Z6Rbn7k",
"license": "CC BY 4.0"
},
{
"id": "wzZw1SlbQT",
"forum": "if1Ndb6RWD",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13955/Reviewer_SnV9",
"reviewer_name": "Reviewer_SnV9",
"rating": 2,
"confidence": 2,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "The paper seems to extend partially observable VIN, in particular the method QMDP-Net, with a pairwise heuristic for solving POMDP. Experiments on simple gridworld tasks show their approach solves more randomly generated tasks than QMDP-Net.",
"strengths": "The pairwise approach seems to be especially well suited for tasks where the state space factors in observable and unobservable variables.",
"weaknesses": "I recommend to reject this paper, because I simply fail to understand it. I am familiar enough with the original VIN approach, but without reading the QMDP-Net paper (which is quite old by now) I doubt many will be able to understand this paper. What is needed here is a complete rewrite that explains (in equations) how the kernels are learned, how the belief is updated, and how the resulting VIN is actually solving the POMDP. In particular the main innovation, the pairwise approach, must be explained much more for the presented equations to make sense. I tried to read Section 3 a couple of times, but I am still unsure what is computed here, why it is computed, and how this is supposed to solve a POMDP.\n\nIf other reviewers disagree with this statement, I am happy to be convinced otherwise, but I believe an ICLR paper should be accessible even to non-expert of a field like this. \n\n**Detailed Comments**\n- To solve the induced belief-MDP, one needs to do value iteration over the space of all possible belief distributions. I do not understand how this can be achieved in VIN, which only seems to work over discrete state spaces (beliefs are continuous).\n- The term uncertainty is ambiguously used: I believe you mean partial observability, and sometimes noisy observations, but uncertainty is more often associated with stochastic environments (aleatoric), or incomplete sampling (epistemic).\n- Some symbols are never or insufficiently defined, like $Q$ in Equation 5. This extends to fairly important concepts as the belief distribution $b(s,s')=b(s)b(s')$, where it is never defined how these $b(s)$ are updated or why they are independent from each other. \n- The notation often changes during the text. For example, the reward function is defined (and first used) as $R(s,a)$, but then later used as $R(s)$ or $R(s,a,z)$ without defining these terms formally.\n- Equation 1 defines whether $s$ and $s'$ are *distinguishable*, but contains a sum over $s$ and $s'$, which does not seem to make any sense.\n- It is unclear to me how the values $V(s, s')$ and $V(s_v, s_h, s'_h)$ are actually represented in the architecture.\n- The experiments are missing baselines, i.e., other approaches to solve POMDPs. Just comparing to QMDP-Net is not enough for a top-tier conference.",
"questions": "- Why should the actions selected in Equation 5 solve a given POMDP?\n- What are the standard deviations in the tables over? Did you train the kernels multiple times and this is the STD over the random seeds? Is this about multiple runs of one set of kernels with noisy observations?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T01:48:54",
"modification_date": "2025-11-12T13:14:19",
"review_url": "https://openreview.net/forum?id=if1Ndb6RWD¬eId=wzZw1SlbQT",
"license": "CC BY 4.0"
},
{
"id": "VTSWt0lwqe",
"forum": "if1Ndb6RWD",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13955/Reviewer_bA3z",
"reviewer_name": "Reviewer_bA3z",
"rating": 4,
"confidence": 2,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper introduces a new DNN architecture for RL. This architecture builds on the idea of Value Iteration Networks, extending it to better contend with the partial observability of POMDPs using the idea of Pairwise Heuristic.",
"strengths": "1. The authors are very clear and direct about the limitations of the paper, which I greatly appreciate.\n2. I found the presentation for the most part very clear.\n3. The idea seems very natural for the purpose of applying VI nets to POMDPs.",
"weaknesses": "1. Limited evaluation: only navigation tasks. No RL experiments - only learning from an expert. \n2. Limited novelty: the paper takes two developed ideas (Value Iteration Networks (VIN) and Pairwise Heuristic (PH)) and combines them in an apparently straight forward manner.\n3. HP tuning: The HP tuning process for the method and the baseline QMDP are (I believe) unspecified. I.e. was there any? Was it for both methods? Was it under equivalent conditions (i.e. same amount of compute dedicated to the tuning of each?) and motivation for the decisions are missing.\n4. Statistical significance: There is a measure of stat. signif. in the tables. Is it standard div.? SEM? Other? I believe it is unspecified (unless I've missed it). Is it over different seeds (how many)? Is there a reason seeds are not necessary here? (random init of the DNN seems to justify different seeds, to me).\n5. Introduction to an RL audience could be expanded (see questions / comments). Specifically, I would have liked a brief overview of the training process (dataset with expert actions? learning through interaction with the environment? other?). If the authors could please include a brief explanation in the rebuttal, and add a description into the main paper.",
"questions": "I'm open to changing the review score/s. Specifically:\n1. If the authors could motivate well that: the evaluation is sufficient for the method to convincingly dominate the baseline in a major set of tasks. Alternatively, increasing the number and types of tasks in the evaluation.\n2. If the authors could motivate well that the combination of VIN and PH is not straightforward.\n3. If the authors will add and motivate the HP tuning proces and stat. signif. evaluation.\n\nAdditional comments:\n1. Uncertainty can refer to different things, that are traditionally dealt with very differently, in RL: the partial observability of POMDPs, the epistemic-uncertainty that drives exploration in sparse-reward domains, and the stochasticity of reward / transitions (\"aleatoric\" uncertainty) that makes everything more challenging. Although I think the presentation is rather clear in its focus on POMDPs exclusively, I would have liked it to be even more explicit - preferably as early as possibly (i.e. intro), on the types of uncertainty that will / will not be addressed in this work.\n2. Lines 019-020: \"Tested on a diverse set of domains\". Since the evaluation is limited to navigation tasks in grid envs. with goal / position known / unknown, I do not think that this can be considered \"a diverse set of domains\". I would be more comfortable with a claim along the lines of \"a range of grid-based navigation tasks\".\n3. I would have liked the QMDP baseline to be presented in more detail (perhaps in a related work section?), so that the reader can better understand and contrast the method and the (only, previously SOTA?) baseline.\n4. In Section 3 there are multiple references to \"the value V\". It's not clear to me whether this refers specifically the value of the optimal policy $V^*$, the value of a general policy $V^\\pi$, or specifically the value of some expert policy $V^{\\pi_e}$. Could the authors specify (And add to the paper)?\n5. Table 2 and Table 3 seem to measure the same thing (success rate) in different units (% and out of 1). Have I misunderstood? If not - is there a reason they are not presented in the same units?\n6. Non-VI-based baselines for POMDP solving (i.e. standard popular algorithms with an LSTM) would put the results in better context.\n7. Can the authors include additional results / discussion of compute cost contrast between QMDP and their method (comparable / one is significantly more expensive than the other?)\n\nAdditional minor comments\n1. Line 011: In my opinion (/understanding), value iteration is a dynamic programming method, not an RL method (it relies on knowing the full model of the env., not on learning from a dataset of interactions). I do not mean to nitpick, it is more that it would have been easier for me to follow the narrative had DP rather than RL been the term used.\n2. Line 017: \"This architecture over-emphasizes..\". Over means \"too much\" -> bad. Is that the intention? Perhaps simply \"emphasizes\"?\n3. Equation 1 sums across s' s', probably should be across s' s. I'd also denote the set $o \\in Z$ for the first sum, to improve clarity.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-17T17:52:58",
"modification_date": "2025-11-12T13:14:19",
"review_url": "https://openreview.net/forum?id=if1Ndb6RWD¬eId=VTSWt0lwqe",
"license": "CC BY 4.0"
}
] |
dlaNQM6YbZ
|
https://openreview.net/forum?id=dlaNQM6YbZ
|
The Flaw of Averages: Quantifying Uniformity of Performance on Benchmarks
| 4.5
| 3.25
|
[
6,
6,
2,
4
] |
[
3,
3,
4,
3
] | 4
|
[
"Benchmark reliability",
"meta-evaluation of benchmarks",
"evaluation reliability",
"diagnostic evaluation"
] |
Benchmarks shape scientific conclusions about model capabilities and steer model development. This creates a feedback loop: stronger benchmarks drive better models, and better models demand more discriminative benchmarks. Ensuring benchmark reliability is therefore essential for trustworthy evaluation and meaningful progress. In this work, we study benchmark reliability from a \emph{distributional} perspective and introduce benchmark harmony, which measures \textit{how uniformly a model's performance is distributed across the subdomains of a benchmark}. We posit that high harmony is a desirable benchmark property, indicating that the aggregate metric reflects uniform competence across subdomains. Across 19 multiple-choice benchmarks and five model families, we map each benchmark onto a mean-variance plane of harmony computed across models, where high mean and low variance signal more reliable evaluation. Our analysis shows that less harmonious benchmarks can give misleading results, since overall accuracy may be disproportionately influenced by specific subdomains. For instance, \emph{ARC-Easy} is overwhelmed by questions on \emph{Biological Concepts}, overshadowing other critical subdomains such as Geography, Physics, Chemistry, and Environmental Science. By recommending that harmony should be reported alongside accuracy, we reframe evaluation from simple performance averages to a more robust, distributionally reliable measurement of performance.
|
datasets and benchmarks
|
https://openreview.net/pdf?id=dlaNQM6YbZ
| 2025-09-19T08:12:50
| 4
|
[
{
"id": "9tq7VP8KiW",
"forum": "dlaNQM6YbZ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14643/Reviewer_F1zp",
"reviewer_name": "Reviewer_F1zp",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "Benchmarks critically shape model development, but their reliability is poorly understood. To this end, the authors introduced HARMONY, a metric quantifying how uniformly a model's performance is distributed across semantic subdomains of a benchmark. They partitioned benchmarks using \"predictive similarity\", a model-aware clustering based on KL divergence of model probability distributions. They then computed HARMONY (normalized Shannon entropy) for 36 models across 5 families on 19 MCQA benchmarks, positioning each benchmark in a mean-variance plane.\n\nThe paper's key contributions include the HARMONY metric, which measures performance uniformity across subdomains where higher values indicate more reliable benchmarks, and intorducing predictive similarity, a novel model-aware partitioning approach based on divergence of model logits. Testing 19 benchmarks, fragility evidence shows that low-HARMONY benchmarks have unstable aggregate scores under pruning while high-HARMONY benchmarks remain stable. They also test for model family specific patterns: Qwen and Llama show negative correlation between model size and HARMONY, while Gemma and OLMo show positive correlation.",
"strengths": "* The problem motivation is timely and important: benchmarks critically shape model development, are needed for governance mechanisms, and audit methods are genuinely needed. \n\n* The scale and comprehensiveness of the empirical study is thorough, evaluating 36 models across 5 families on 19 benchmarks with detailed ablations and cross-model analysis in the appendices. \n\n* The authors design a synthetic benchmark (RedundantQA) with clean ground-truth partitions (paraphrases vs. distractors), enabling controlled validation of the similarity metric without confounds. \n\n* The pruning experiments provide concrete evidence that low-HARMONY benchmarks are genuinely fragile and that their aggregate scores shift significantly when rebalancing, while high-HARMONY benchmarks remain stable. This is a great finding!\n\nI like the paper, but I have a few uncertainties that I would need to have sufficiently resolved before increasing my scores.",
"weaknesses": "## Weakness 1\n\nThe paper defines HARMONY using normalized Shannon entropy but never justifies this choice over alternative uniformity measures (e.g., Gini coefficient, Rényi entropy, coefficient of variation). Appendix B compares variants of KL divergence, but not fundamentally different approaches to measuring performance uniformity. Can you please comment on why Shannon entropy is specifically the right measure for \"uniformity of competence\"? Is HARMONY more robust than other measures to gaming by benchmark designers?\n\n## Wekaness 2\n\nWhile theoretically grounded (app B), it's unclear whether the symmetrized KL divergence approach to model-aware similarity is genuinely novel or how it compares to existing model-based similarity metrics in the literature. The paper could be strengthend by making the novelty claim more explicitly justified.\n\n## Weakness 3\n\nThe main text does not clearly specify how model-specific partitions are unified across 36 models to produce the mean-variance plane in Figure 2. It should explicitly state whether HARMONY is computed per-model then aggregated, or if a reference partition is used.\nCan you comment on whether you compute 36 separate partitions (one per model) and then compute HARMONY for each or is a single reference partition used across all models?\n\n## Weakness 4\n\nThe paper does not address whether the clustering procedure generalizes to benchmarks with overlapping/ambiguous semantic categories beyond the clean cases tested (RedundantQA, MMLU) with clearly separable domains. How does your method perform on benchmarks where categories have significant semantic overlap or ambiguous boundaries? \n\n## Weakness 5\n\nThe paper asserts uniform performance indicates \"broad competence\" (2.2) but provides no rigorous justification for why this is desirable beyond empirical stability in Figure 5. Some legitimate task domains may have inherently heterogeneous difficulty. Also, the paper does not seem to distinguish between low HARMONY caused by poor benchmark design and low HARMONY reflecting genuine domain heterogeneity (e.g., some subdomains are objectively harder). \n\nTo give a concrete example: This MCQA dataset https://arxiv.org/abs/2502.16051 can be used to evaluate LM decision-making in clinical mental healthcare settings. The dataset is split across task categories (due to day-to-day clinical decision-making composing different tasks) that vary significantly in complexity and nature. Some categories even have no true answer (due to the inherent ambiguities even to human domain experts). Thus, model performances vary drastically across categories due to different task natures and conflicts with moral ambiguity/model alignment methods. This dataset (if used as a benchmark) would probabyl get a low Harmony score and I am not sure if that is a good thing and whether this problem will increase with increasingly more nuanced and domain-specific benchmarks. (I'm also happy to hear arguments how such benchmarks would be bad to strengthen the Harmony justificaiton)\n\nEssentially, I think Harmony measures away to link accuracy/performance increase to a somewhat linear increase in performance. Why should we expect uniform performance across subdomains to be desirable? Aren't some domains legitimately harder than others and wouldn't enforcing harmony erase meaningful task heterogeneity? 
How do you distinguish between low HARMONY that indicates a design flaw versus low HARMONY that reflects legitimate variation in task difficulty across subdomains? \n\n## Weakness 6\n\nBy computing model-specific partitions, HARMONY seems to measure whether a particular model has learned semantic structure, not whether the benchmark itself is well-designed. This could conflate model alignment with benchmark audit, potentially undermining the claim to benchmark reliability assessment. If HARMONY is model-specific, how is this a benchmark audit rather than a model audit? Couldn't a randomly initialized model coincidentally achieve high HARMONY on arbitrary clusters, implying the benchmark is \"reliable\"? Also, to what extent can a model's predictive similarity be manipulated to inflate harmony?\n\n## Weakness 7 \n\nThe paper recommends \"report HARMONY alongside accuracy\" but provides no concrete thresholds, decision rules, or guidance for how practitioners should act on this information (e.g., when to disqualify benchmarks, how to weight accuracy vs. harmony tradeoffs). What are the practical decision rules? Should benchmarks with HARMONY < 0.5 be excluded? How should practitioners trade off accuracy improvements against harmony decreases (Figure 5)?\n\n## Weakness 8 \n\nAppendix H demonstrates hierarchical multi-dimensional evaluation but provides no principled criterion for when to stop recursively partitioning (e.g., you could partition MMLU Biology down to single-question granularity). This could undermine the generality of the framework. At what point will using HARMONY lead to evaluate benchmarks just to calculating HARMONY of increasingly narrow subcategories and thus loosing its value to make a statement about an entire benchmark?",
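To make Weakness 1 concrete, here is a hedged sketch of what a normalized-entropy HARMONY score plausibly looks like, next to a Gini-style alternative. The paper's exact normalization may differ, so this is an illustration rather than the authors' definition:

```python
import numpy as np

def harmony_entropy(acc, w):
    """acc: per-subdomain accuracies; w: size weights w_i = |A_i| / |B|."""
    p = (w * acc) / np.sum(w * acc)       # performance mass per subdomain
    h = -np.sum(p * np.log(p + 1e-12))
    return h / np.log(len(p))             # normalized Shannon entropy in [0, 1]

def gini_uniformity(acc):
    """1 - Gini coefficient of subdomain accuracies (1 = perfectly uniform)."""
    a = np.sort(np.asarray(acc, dtype=float))
    n = len(a)
    gini = (2 * np.arange(1, n + 1) - n - 1) @ a / (n * a.sum())
    return 1.0 - gini

acc = np.array([0.9, 0.5, 0.7]); w = np.array([0.5, 0.3, 0.2])
print(harmony_entropy(acc, w), gini_uniformity(acc))
```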
"questions": "I've embedded my questions into the weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T12:08:13",
"modification_date": "2025-11-12T13:23:36",
"review_url": "https://openreview.net/forum?id=dlaNQM6YbZ¬eId=9tq7VP8KiW",
"license": "CC BY 4.0"
},
{
"id": "VHMYXTWOeR",
"forum": "dlaNQM6YbZ",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14643/Reviewer_EZ1V",
"reviewer_name": "Reviewer_EZ1V",
"rating": 6,
"confidence": 3,
"soundness": 4,
"contribution": 3,
"presentation": 4,
"summary": "This paper introduces HARMONY, an entropy-based metric that measures how uniformly a model’s performance is distributed across subdomains of a benchmark. It aims to reveal when aggregate accuracy obscures uneven competency.",
"strengths": "- The paper addresses an important and under-discussed issue in benchmark-based evaluation: the degree to which average scores mask uneven performance across subdomains.\n- The authors present a sufficiently large-scale empirical study across many model families and benchmarks.\n- Overall, the depth of analysis and in particular how confounding factors influence the findings was great. The authors made a significant effort to provide the reader with all necessary information to understand their methodology, along with all necessary information to arrive at genuine and robust interpretations of the findings (e.g., by reporting statistical significance in plots).",
"weaknesses": "- The paper does not clearly define what “benchmark reliability” means and uses the term in ways that overlap with concepts such as validity without clarifying the distinction. \n- Relatedly, The paper mentions external valdity in the related work section but validity more broadly is even more relevant in this context in my opinion, see for example [1] and [2] (but also many others, many of Hannah Wallach’s work is relevant here!)\n- Section 2.3 is hard to follow. Given that the partition induction & clustering are key parts of the approach, I would kindly ask the authors to provide some more context (and intuition) why they took the design decisions they took. Some open questions to me are: Why should the similarity be model-aware? Doesn't that mean that HARMONY of a benchmark changes depending which models are used to assess HARMONY? Isn't this easily gameable by developers? Why did the authors think that spectral clustering was preferable over other clustering methods? Why did they set 2<=k<=20 and decided that maximizing the silhouette score was a prudent choice? Adding some explanation what they hope this expresses would help the reader's understanding.\n- Some citations are missing throughout the text, e.g., general statements like the first sentence in line 032 which should be supported by a reference (as an example, a potential citation here could be [3]). Same with the second sentence that follows right after. Overall, there are a couple of instances like this in the paper where I recommend adding more references.\n- I would have liked to see more of a discussion how HARMONY scores should affect benchmark design or what the practical takeaway for benchmark designers should be. I.e., is the authors' standpoint that it's better to not have any general benchmarks that span multiple domains in the first place? I.e., should all benchmarks be specialized? This also seems to cut into a lot of validity discussions, i.e., it's much harder to design a valid benchmark for a broad abstract construct (e.g., \"undergraduate-level knowledge\" like MMLU than for a very specific domain (e.g., fundamentals of astronomy) and I would expect that HARMONY for the former would be much harder to achieve than for the latter.\n\nSmall nits that affected presentation score:\n- The circle sizes in the legend in Figure 2 are visually almost indistinguishable.\n\n\n[1] https://arxiv.org/abs/2505.10573\n[2] https://arxiv.org/pdf/2412.01934\n[3] https://dl.acm.org/doi/pdf/10.1145/3708359.3712152?casa_token=dsjgmGDH8AUAAAAA:CvbRJT-NGxwBc9vr3RegWENQzOPbfHXz78JCBN1UlsuiPIPTenGWWcYB2XS0FRVEFjpFk24ZpHDg",
"questions": "See weaknesses.\n\nI'm generally willing to updating my score if the points in the weaknesses section (in particular on reliability definition and the difference to validity and Section 2.3 questions) are clarified.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T11:52:19",
"modification_date": "2025-11-12T13:23:36",
"review_url": "https://openreview.net/forum?id=dlaNQM6YbZ¬eId=VHMYXTWOeR",
"license": "CC BY 4.0"
},
{
"id": "zTaZQWbWlI",
"forum": "dlaNQM6YbZ",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14643/Reviewer_5RAY",
"reviewer_name": "Reviewer_5RAY",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 3,
"presentation": 3,
"summary": "The authors study benchmark reliability from a distributional perspective and introduce benchmark HARMONY, which measures how uniformly a model’s performance is distributed across the subdomains of a benchmark. The authors, however, represent HARMONY as essentially a measure of benchmark quality, positing that for a given model set, HARMONY is an independently informative measure, tracking uniform competence across discovered subdomains.",
"strengths": "* The topic the authors select for study, measures of benchmark reliability, is timely, and more research in this area is definitely needed.\n* The authors have provided a pretty extensive set of experimental results; I particularly appreciate their inclusion of the fully open Olmo2 models and their relatively large benchmark selection.\n* Although their focus in the main paper on MCQA comparisons may be problematic in the specifics, in general, I think that focus improves the rigor of the paper and the trustworthiness of the measure.\n* The description of the measure is quite thorough and detailed, which helps the reader understand the core contributions easily.\n* The controlled pruning experiments demonstrate that low-harmony benchmarks are fragile.",
"weaknesses": "* The paper's core methodology relies on 'predictive similarity' - clustering questions by the similarity of models' output probability distributions. However, \nin MCQA settings, these distributions are heavily concentrated on answer tokens (A/B/C/D or short answer phrases). It is not clear to me that the spectral clustering approach described by the authors would lead to meaningful or interpretable clusters in this setting. The conditional output probabilities generated by models would be (A) different for every model and (B) heavily concentrated on a small set of tokens.\n* The authors' mean-variance plane interpretation of benchmark reliability does not control for benchmark size; this has the potential to be a significant confound, as larger benchmarks can be expected to exhibit more stable variance characteristics. I am not convinced from the current analysis that HARMONY measure they would be independently informative when comparing benchmarks which are identical in size.\n* I have a doubt about the derivation of the clusters. In Section 2.3, we have -- \"we sweep 2 ≤ k ≤ 20 and select the value maximizing the silhouette score\". This means that k is data-driven and likely varies across benchmarks. Although the Harmony formula includes size weights: wi = |Ai|/|B|, larger benchmarks will tend to yield more stable cluster size estimates and lower sampling variance in wi. I cannot find in the paper a report on how many clusters each benchmark yielded, whether k correlated with benchmark size, whether k correlated with HARMONY.\n* HARMONY, as a measure of a *benchmark's* reliability, is potentially confounded by the fact that benchmark HARMONY is aggregated over a model set. Not all models are intended to be generalists, and not all benchmarks are intended to measure generalist capability. Furthermore, there are practical concerns. The temptation on the part of researchers would be to compare HARMONY across studies, but this will almost never be possible, because new models will come out and the model set and the HARMONY scores will no longer be directly comparable. There is also a risk of researchers cherry picking model sets to make benchmarks appear more or less harmonious, depending on their goals.\n* In Figure 2, the \"Benchmark Size\" field is not defined in the caption and is difficult to interpret from visual inspection, it's hard to figure out which benchmarks correspond to which sizes.",
"questions": "* In the main plot in Figure 2, what is the strength of correlation between harmony mean and harmony variance? From visual inspection it looks like it would be a pretty strong negative correlation, but I'm curious to know the exact value.\n* Why did the authors not conduct asystematic analysis of cluster characteristics, including an examination of whether cluster count or size distribution correlates with HARMONY?\n* Can the authors show that HARMONY clusters correspond to semantic categories? Specifically: (1) qualitative examples showing cluster coherence, (2) comparison of discovered clusters against ground-truth domain labels where available (beyond the 2 validation cases), and (3) ablation showing that predictive similarity outperforms random clustering of equal size and number?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-23T22:26:12",
"modification_date": "2025-11-12T13:23:37",
"review_url": "https://openreview.net/forum?id=dlaNQM6YbZ¬eId=zTaZQWbWlI",
"license": "CC BY 4.0"
},
{
"id": "khjIT0AQIO",
"forum": "dlaNQM6YbZ",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14643/Reviewer_85MX",
"reviewer_name": "Reviewer_85MX",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper suggest benchmark harmony as a new metric to complement performance metrics like accuracy. A model's harmony score on a benchmark is a measure of variance between the performance on different (semantically meaningful) subsets of the benchmark. For a given benchmark, the authors consider the mean and variance of harmony accross models. They argue that high mean harmony and low harmony variance indicate more reliable evaluation.",
"strengths": "- Considering attributes of benchmarks beyond accuracy seems generally useful.\n- The proposed metric seems reasonably informative and relatively lightweight to compute\n- The writing is clear and easy to follow.",
"weaknesses": "- Some of the formulas used in the paper have large number of moving parts without strong apparent reason. It is not clear, whether results strongly depend on details in the used parameters, which would enable cherry-picking results. \n - For example, why not simply define harmony as the (weighted) variance of subset performances? Similarly, what is the intuition behind the formula used for pruning? \n- It seems like harmony scores could become very misleading whenever the clustering picks up on aspects of question difficulty. As an extreme example, if a valid benchmark was partioned into its easier and harder half, harmony would depend on model quality in a U-shape, potentially yielding both low mean harmony and decently high harmony variance. \n- While Figure 3 shows that ground-truth harmony and harmony based on the proposed clustering correlate on two datasets, it is unclear whether this remains true accross benchmarks. In addition, the regression coefficient is decently off from one in one of the two examples, suggesting that harmony measured by the proposed method is not really comparable between benchmarks. \n - More broadly, the motivation for the specific validation setup (for example, only 4 MMLU subdomains) is a bit unclear. \n- Overall, it remains a bit unclear what reporting harmony offers compared to directly reporting subdomain scores, especially when there are only a few subdomains (as in the examples from Figure 3)",
"questions": "- Figure 5: If I understand correctly, you prune low-harmony benchmarks more aggressively. How does the experiment ensure that the larger differences in accuracy for low-harmony benchmarks are not simply explained by pruning more data points?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-23T15:55:33",
"modification_date": "2025-11-12T13:23:37",
"review_url": "https://openreview.net/forum?id=dlaNQM6YbZ¬eId=khjIT0AQIO",
"license": "CC BY 4.0"
}
] |
|
ErED2dvR7Z
|
https://openreview.net/forum?id=ErED2dvR7Z
|
Cascaded Flow Matching for Heterogeneous Tabular Data with Mixed-Type Features
| 2.5
| 3.5
|
[
2,
2,
2,
4
] |
[
4,
2,
4,
4
] | 4
|
[
"tabular data",
"flow matching",
"generative modeling",
"synthetic data"
] |
Advances in generative modeling have recently been adapted to heterogeneous tabular data. However, generating mixed-type features that combine discrete values with an otherwise continuous distribution remains challenging. We advance the state-of-the-art in diffusion-based generative models for heterogeneous tabular data with a cascaded approach. As such, we conceptualize categorical variables and numerical features as low- and high-resolution representations of a tabular data row. We derive a feature-wise low-resolution representation of numerical features that allows the direct incorporation of mixed-type features including missing values or discrete outcomes with non-zero probability mass. This coarse information is leveraged to guide the high-resolution flow matching model via a novel conditional probability path. We prove that this lowers the transport costs of the flow matching model. The results illustrate that our cascaded pipeline generates more realistic samples and learns the details of distributions more accurately.
|
A cascaded flow matching framework that generates details in tabular data conditioned on low-resolution features.
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=ErED2dvR7Z
| 2025-09-19T21:57:07
| 4
|
[
{
"id": "BXQ9NHnc2f",
"forum": "ErED2dvR7Z",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18697/Reviewer_5EfX",
"reviewer_name": "Reviewer_5EfX",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 1,
"summary": "Mixed-type tabular data generation with cascaded flow matching so it condition on categorical features and latent continuous features.\n\n\"To the best of our knowledge, this is the first work to address mixed-type feature generation,\ni.e., features following a mixture of categorical and continuous distributions, within diffusion-based\nmodels.\" This is an overclaim. You mentioned yourself prior reference of work doing mixed-type generation. You don't need this claim for your work to be relevant. Okay, after reading more I get what you are trying to see, please rephrase it to talked about diffusion cascade because right now it looks like an overclaim.\n\nMissing reference: \n- https://arxiv.org/abs/2309.09968\nhandles mixed-type generation and missing data using xgboost\n- https://openreview.net/forum?id=LFCSTy6MYe#discussion\nuses kernel density integral quantization (KDI) which sounds quite similar to your use of distributional trees\n\nProblem statement Inflated values: Why be limited to dirac (aka binary categorical feature)? There are multi-class categories. I'm not sure I get why this paragraph is needed.\n\n\"Previous diffusion models for tabular data can be trained on numerical features with missing values, but are\nnot designed to generate such instances.\" Nobody wants to generate data with missing values, its not useful. Then you have to discard those samples anyways if using something like linear regression or logistic classification (which is what people use for small data in medecine/psychology).\n\n\"he simplicity of learning categorical features\": Can you show Figure 2 with other methods included, e.g., TabDDPM, ForestDiffusion? This fits the figure and would strengthen the argument. One thing though to keep in mind is that categorical data can be very hard to model properly, it could just that for this data, getting the category right is easy. In retrospect, to be honest, I dont buy that claim either that categorical features are easier, it really depends on the dataset. I really dont buy it, you need a stronger argument or more proof or to remove that paragraph. Maybe multiple datasets with multiple methods.\n\nGeneral comment:\n- I feel like there is a lot of flafla text that could be trimmed down and a lot of unnecessary math equations that could be removed and replaced with a figure or one paragraph.\n- its overcomplicated, make it more simple\n- results sections need major rework\n\nCan you explain why you need a fifth-degree polynomial for the time schedule. This seems extremely overengineering, like a simple linear line from 0 to 1 would work. And why do we need feature-specific path?\n\nThe coupling and factorization make sense. I can see how it could help produce better data. I like the idea of learn z from GMM or DT.\n\n10% MNAR is extremely low. Real world data that data scientists deal with have 25-50% MNAR. Having an example with 25 or 50% would be important because its closer to the real world and will test the methods to their limit.\n\n SDMetrics shape and trend are not great metrics, they are not accounting for the whole distribution. The other metrics are good. But you need a distribution metric to really tackle distance in distribution. You can use something like the Wasserstein distance (see https://arxiv.org/abs/2309.09968). It's important to have such a metric. In my opinion it would be better to show your results as the average across datasets or the rank, this way you can have a single figure with all the metrics. 
You can leave the current tables to the appendix. Also right now it makes you look like you chose the best metrics to show in the paper and left the rest in appendix; from looking at the appendix, it sure looks that way. Make one table with all metrics in the paper, everything else is appendix info. Please also add a small table showing the N, N_cat_features, N_cont_features of each dataset included. I'm sorry to say this, but 6 datasets is not a lot with tabular data. I know that this is extra work, but ideally having more datasets would really help. When you switch to average or ranking, it will make it easy to add new datasets without getting tables that are too big.\n\nThe ablation is too small, you need a real ablation where components are removed including whether to go cascaded or not, GMM vs distributional trees vs Quantile-Transform, linear schedule versus your super complicated feature-dependent 5-degree polynomial, etc.\n\nIf the authors make the major changes requested, I'll revisit my score.",
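The sketch referenced above: a distributional fidelity metric for numeric columns via the average 1D Wasserstein distance between real and synthetic marginals (as used, for example, in the evaluation of the paper linked above):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def avg_wasserstein(real: np.ndarray, synth: np.ndarray) -> float:
    """real, synth: (n_rows, n_num_features) arrays of numeric features."""
    return float(np.mean([wasserstein_distance(real[:, j], synth[:, j])
                          for j in range(real.shape[1])]))
```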
"strengths": "cascaded structure is promising",
"weaknesses": "- There is a lot of flafla text that could be trimmed down and a lot of unnecessary math equations that could be removed and replaced with a figure or one paragraph.\n- its overcomplicated for no reason, explain in a simpler way\n- results sections need major rework\n- showing only good numbers in the paper and leaving bad ones in the appendix\n- lack of true ablation\n\nSee the \"Summary\"",
"questions": "Can you explain why you need a fifth-degree polynomial for the time schedule. This seems extremely overengineering, like a simple linear line from 0 to 1 would work. And why do we need feature-specific path?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T23:40:14",
"modification_date": "2025-11-12T14:19:57",
"review_url": "https://openreview.net/forum?id=ErED2dvR7Z¬eId=BXQ9NHnc2f",
"license": "CC BY 4.0"
},
{
"id": "TWntNoruim",
"forum": "ErED2dvR7Z",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18697/Reviewer_SS8m",
"reviewer_name": "Reviewer_SS8m",
"rating": 2,
"confidence": 2,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes TabCascade, a cascaded flow matching framework for heterogeneous tabular data with mixed type features. Low resolution information, that is categorical variables and a discretized view of numeric variables, is generated first, and a high resolution conditional flow matching model then fills in continuous numeric details. The high resolution model uses a guided conditional probability path with feature specific time schedules and a data dependent Gaussian source distribution, together with a theorem showing that the tree based encoder can lower a transport cost bound. Experiments on six public datasets show strong detection scores and competitive Shape and Trend metrics, with an ablation over decision tree depth. However, the study does not include direct flow matching baselines, so it is hard to tell how much of the reported gains come from the cascade versus simply switching from diffusion to flow matching.",
"strengths": "### **Strengths**\n\n- Clear and practical framing for mixed type numeric features, with an ancestral sampling procedure that decides coarse states first and fills in numeric details when needed (eq 2.)\n- Guided conditional probability path with feature specific time schedules and a compact supervised target for the velocity field (eq 5 and 6).\n- Theorem showing reduced transport cost under the tree encoder, which gives non trivial support for the chosen coupling, see Appendix A.\n- Consistent improvements on detection scores and competitive Shape and Trend across six datasets, with a simple depth ablation.",
"weaknesses": "### **Weaknesses**\n\n- Too many changes at once, limited component ablation. The system introduces several factors at the same time, switch from diffusion to flow matching, data dependent source with mean and variance from z, learned feature specific time schedules, and the cascaded split with a strong low resolution model. The study does not sufficiently isolate the contribution of each factor. The current ablation varies only the depth of the tree encoder, which mainly changes the difficulty of the high resolution stage rather than the learning rule itself.\n- Missing flow matching baselines. The core high resolution component is a flow matching model, but Section 5 compares against diffusion models and non diffusion models, not against direct flow matching baselines. At minimum, include a straight flow matching baseline for tabular data with a linear path and a standard normal source. In addition, please compare to recent flow matching baselines suggested by the community (eg [1, 2]), or explain why they are not compared against.\n- Encoder dependence and leakage of difficulty. With deeper trees, integer valued features can be fully captured at low resolution, effectively removing them from the high resolution task, which can inflate joint realism without demonstrating stronger continuous modeling. A per dataset analysis of how often the high resolution stage is masked would help.\n\n\n### References \n\n- [1] **Exponential Family Variational Flow Matching for Tabular Data Generation** - Andrés Guzmán-Cordero, Floor Eijkelboom, Jan-Willem van de Meent\n- [2] **Generating and Imputing Tabular Data via Diffusion and Flow-based Gradient-Boosted Trees** - Alexia Jolicoeur-Martineau, Kilian Fatras, Tal Kachman",
"questions": "### **Questions**\n\n- Add a direct flow matching baseline for tabular data with a linear path and a standard normal source, and also a rectified flow baseline, both under the same architecture and training budget as your high resolution model, then report the main metrics in Tables 1 to 3 to separate the effect of the cascade from the effect of the objective. Ideally also compare against existing FM approaches.\n- Provide component wise ablations that keep the encoder fixed and switch off each ingredient in turn, no learned time schedules, set gamma t equal to t, no data dependent source, set mu equal to zero and sigma equal to the identity, keep the cascade but replace the high resolution flow matching with diffusion as in CDTD with the same model size and budget, and keep the high resolution flow matching but remove the cascaded conditioning to test unconditional flow matching, then report all three main metrics. \n- Report, for each dataset, the fraction of features and rows for which the high resolution model is masked by z, and how this fraction changes with encoder depth, to calibrate how much work is delegated to the low resolution stage.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T13:39:10",
"modification_date": "2025-11-12T14:19:58",
"review_url": "https://openreview.net/forum?id=ErED2dvR7Z¬eId=TWntNoruim",
"license": "CC BY 4.0"
},
{
"id": "DNXZAn6OzB",
"forum": "ErED2dvR7Z",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18697/Reviewer_5gde",
"reviewer_name": "Reviewer_5gde",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper introduces Cascaded Flow Matching, where the authors leverage ideas from cascaded diffusion and apply it to flow matching by first generating low resolution features (categorical features) then generating high resolution information (continuous features) conditioned on the generated categorical features. Experiments highlight that their generative paradigm provides competitive results.",
"strengths": "**Originality**. The paper integrate concepts from cascaded diffusion onto tabular data. As far as I know, this is the first paper that applies a multi-resolution generation concept to tabular data (low = categorical and high = numerical). \n\n**Quality**. Idea is interesting and results demonstrated are competitive.\n\n**Clarity**. Overall, the paper is easy to follow as it conveys the message and method well.\n\n**Significance**. Tabular data generation is important in many aspects such as privacy preservation.",
"weaknesses": "In the work, the authors claim that:\"this is the first work to address mixed-type feature generation, i.e., features following a mixture of categorical and continuous distributions\". However, this is not expressed very clearly. There are methods that unify the data representation including TabRep [1] that explores various encoding, TabbyFlow [4] that represents heterogeneous data types using a general exponential family distribution, TabSYN that projects the data onto a latent space, and StaSy that applies continuous diffusion to a unified data space via one-hot encoding.\n\nThe results seem to be underwhelming compared to the baselines reported in the paper. Not-so-recent baselines including TabRep and TabbyFlow that explores flow matching on tabular data should also be included for comparisons. Considering that TabRep-Flow and TabbyFlow both outperform TabDiff across the board, the margins for TabCascade will shrink.\n\nWhat is the motivation of using flow matching in the framework? TabRep and TabbyFlow leverages its sampling speed. If so, are there experiments to demonstrate this?\n\nOne of the most important use-cases of tabular relational data generation is privacy preservation. DCR experiments are conducted and demonstrates that it underperforms against its competitors. Additionally, existing literature in tabular data generation [1] and computational privacy [2] [3] have also highlighted the inadequacy of DCR in evaluating privacy preservation. Hence, its important to assess the privacy preservation via Membership Inference Attacks too.\n\n[1] Si, Jacob, et al. \"TabRep: Training Tabular Diffusion Models with a Simple and Effective Continuous Representation.\" arXiv preprint arXiv:2504.04798 (2025).\n\n[2] Georgi Ganev and Emiliano De Cristofaro. The inadequacy of similarity-based privacy metrics: Privacy attacks against \"truly anonymous\" synthetic datasets, 2024.\n\n[3] Joshua Ward, Chi-Hua Wang, and Guang Cheng. Data plagiarism index: Characterizing the privacy risk of data-copying in tabular generative models, 2024.\n\n[4] Guzmán-Cordero, Andrés, Floor Eijkelboom, and Jan-Willem van de Meent. \"Exponential Family Variational Flow Matching for Tabular Data Generation.\"",
"questions": "Please see weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T02:37:43",
"modification_date": "2025-11-12T14:19:58",
"review_url": "https://openreview.net/forum?id=ErED2dvR7Z¬eId=DNXZAn6OzB",
"license": "CC BY 4.0"
},
{
"id": "JAIKKYmSW7",
"forum": "ErED2dvR7Z",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18697/Reviewer_oVg1",
"reviewer_name": "Reviewer_oVg1",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper's main aim is to answer the research question “How can we generate realistic heterogeneous tabular data that includes mixed-type numerical features (continuous values with discrete point masses such as missings or inflated values), by reducing transport costs and improving fidelity via a cascaded flow-matching framework that leverages low-resolution information?”",
"strengths": "1. **[Important] Novel model design.** Cascaded factorisation of tabular generation. The paper proposes TabCascade, which factors the joint into a low-resolution model and a high-resolution flow-matching model, which is conceptually attractive.\n2. **Theoretical claim.** With a DT encoder, the authors seem to provide convincing evidence that data-dependent coupling lowers an upper bound on transport cost compared to independent couplings.\n3. **[Important] Empirical gains on standard metrics.** On six datasets, TabCascade (DT) achieves state-of-the-art Detection scores (C2ST) and competitive/better Shape/Trend scores.",
"weaknesses": "1. **[Important] Two-stage factorisation may under-capture cross-type dependencies.** Since $x_{\\text{cat}}$ and $x_{\\text{num}}$ do not seem to be generated jointly (high-res is conditioned on low-res outputs), subtle dependencies might be missed.\n2. **[Important] Limited coverage of benchmark generators.** Many competitive models are missing in the current results, such as foundation models, CTSyn [1] and TabPFN [2]. I would suggest the authors refer to relevant literature [3, 4] for a broader context of the benchmark setups. Existing coverage seems limited to reach conclusive results.\n3. **Privacy is only analysed superficially (DCR share) and no guarantees are claimed.** For sensitive tabular data, the absence of *any* privacy mechanism or DP ablation limits practical adoption.\n4. **Metric dependence and detector sensitivity.** The strong wins are most pronounced for Detection score (gradient-boosted C2ST); while Shape/Trend also improves, they are already near ceiling for many baselines. This raises questions about how general the gains are across orthogonal metrics and downstream utility.\n\n[1] Lin, Xiaofeng, et al. \"Ctsyn: A foundational model for cross-tabular data generation.\" *arXiv preprint arXiv:2406.04619* (2024).\n\n[2] Hollmann, Noah, et al. \"Accurate predictions on small data with a tabular foundation model.\" *Nature* 637.8045 (2025): 319-326.\n\n[3] Ma, Junwei, et al. \"TabPFGen--Tabular Data Generation with TabPFN.\" *arXiv preprint arXiv:2406.05216* (2024).\n\n[4] Margeloiu, Andrei, et al. \"Tabebm: A tabular data augmentation method with distinct class-specific energy-based models.\" *Advances in Neural Information Processing Systems* 37 (2024): 72094-72144.",
"questions": "Please refer to \"Weaknesses\" section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T01:51:15",
"modification_date": "2025-11-12T14:20:00",
"review_url": "https://openreview.net/forum?id=ErED2dvR7Z¬eId=JAIKKYmSW7",
"license": "CC BY 4.0"
}
] |
9PpLnRAZjN
|
https://openreview.net/forum?id=9PpLnRAZjN
|
End-to-End One Step Flow Matching via Flow Fitting
| 4
| 4.25
|
[
2,
6,
4,
4
] |
[
5,
5,
4,
3
] | 4
|
[
"Flow matching",
"Single step generative models"
] |
Diffusion and flow-matching models have demonstrated impressive performance in generating diverse, high-fidelity images by learning transformations from noise to data. However, their reliance on multi-step sampling requires repeated neural network evaluations, leading to high computational cost. We propose FlowFit, a family of generative models that enables high-quality sample generation through both single-phase training and single-step inference. FlowFit learns to approximate the continuous flow trajectory between latent noise $x_0$ and data $x_1$ by fitting a basis of functions parameterized over time $t \in [0, 1]$ during training. At inference time, sampling is performed by simply evaluating the flow only at the terminal time $t = 1$, avoiding iterative denoising or numerical integration. Empirically, FlowFit outperforms prior diffusion-based single-phase training methods, achieving superior sample quality.
|
generative models
|
https://openreview.net/pdf?id=9PpLnRAZjN
| 2025-09-19T15:39:30
| 4
|
[
{
"id": "EsTXV9su34",
"forum": "9PpLnRAZjN",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16679/Reviewer_me4z",
"reviewer_name": "Reviewer_me4z",
"rating": 2,
"confidence": 5,
"soundness": 1,
"contribution": 2,
"presentation": 1,
"summary": "This paper proposes flow fitting that train two networks which are velocity flow matching and flow model that align with the velocity flow matching model. The flow trajectory model is defined in terms of basis function.",
"strengths": "1. The model proposes to use separate network called trajectory flow to fit with training velocity model, which is new idea than the unified model like the meanflow and shortcut model.\n2. The model performance beats shortcut in some setting. \n3. The code implementation are provided.",
"weaknesses": "1. The paper writing seems unprofessional with poor writing in introduction (too many short paragraph). The related works is not well-studied and lacks many recent works like [1, 2, 3].\n\n[1]: Inductive Moment Matching - ICML\n\n[2]: Improved Training Technique for Latent Consistency Models - ICLR\n\n[3]: Consistency Models Made Easy - ICLR\n\n2. The authors do not clarify why not use single deep learning model instead of both basis function and deep network to estimate coefficient.\n\n3. The paper only compares with shortcut model in limited setting (undertraining). How about training model until converge with DiT-XL/2.\n\n4. In ablation study, the experiment details are not provided like what dataset, training iterations and other details. The authors just simply put a table without any explanation to guide the reader.",
"questions": "Please see the weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T13:35:22",
"modification_date": "2025-11-12T13:52:14",
"review_url": "https://openreview.net/forum?id=9PpLnRAZjN¬eId=EsTXV9su34",
"license": "CC BY 4.0"
},
{
"id": "3R1gZrtTCj",
"forum": "9PpLnRAZjN",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16679/Reviewer_a4C4",
"reviewer_name": "Reviewer_a4C4",
"rating": 6,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This manuscript proposes a new type of 1-step generative models called FlowFit. FlowFit approximates the flow trajectory using basis functions, and trains networks to predict the coefficients of the bases. Training is conducted by simultaneously training a flow matching model to learn the velocity field, and matching the derivative of the approximated trajectory to the learned velocity field. The proposed method shows promising results on the DiT-B architecture. Although the FID is not as strong as the SoTAs (e.g., MeanFlow DiT-B FID is 6.17 on ImageNet), the novelty of the method itself is a solid contribution that could inspire future works.",
"strengths": "- The idea of fitting basis functions to approximate the flow trajectory is very original, and the training objective of matching the derivative of the function also differs from previous one/few-step models, such as consistency models. The fact that a network is able to predict all the coefficients of the bases in one step is very intriguing. \n- Apart from the novelty, the method is also relatively simple and elegant, in contrast to SOTA consistency models which often involve adaptive losses, complex scheduling or inefficient JVP.\n- The generation quality, as measured by FID, appears strong enough to outperform older generations of consistency models, such as iCT, sCT.\n- The presentation is clear and focused, although adding a graphical illustration may further strengthen the clarity.",
"weaknesses": "- The major limitation of this work is that the FID is relatively underperforming compared to SOTA consistency models. For example, on ImageNet 256x256, MeanFlow (also using DiT-B) achieves an FID of 6.17, whereas the proposed approach attains an FID of 34.4.\n- Experiments are only done on DiT-B. Larger models (e.g., DiT-XL) are not tested.\n- The proposed method requires training two models, one for flow matching and one for trajectory matching. This is more like online distillation rather than \"end-to-end\".\n- Minor formatting issue: the reference format is not following the official ICLR template, which should be name + year instead of numbers.",
"questions": "The paper shows improving performance with increasing order, but stopped at order 8. In L411, the authors explain that this is due to computational constraints. Why is maximum order limited by computation? I thought most of the computation expenses are from the neural network, and increasing the number of bases would not introduce significant overhead in training and inference (because evaluating these simple functions is usually fast)?",
"flag_for_ethics_review": [
"Yes, Research integrity issues (e.g., plagiarism, dual submission)"
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T08:15:48",
"modification_date": "2025-11-12T13:52:15",
"review_url": "https://openreview.net/forum?id=9PpLnRAZjN¬eId=3R1gZrtTCj",
"license": "CC BY 4.0"
},
{
"id": "3Xvt4kqrOv",
"forum": "9PpLnRAZjN",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16679/Reviewer_YPvN",
"reviewer_name": "Reviewer_YPvN",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper presents FlowFit, a novel framework for flow modeling via basis function fitting that enables high-quality sample generation through both single-phase training and single-step inference, which avoids iterative numerical integration like denoising diffusion models. To do this, the paper directly parameterizes the continuous-time flows using a residual expansion over fixed basis functions. This framework can be used in one step generation and distillation of pretrained models. The paper experiments on CelebAHQ-256 and ImageNet-256 to show that the proposed method outperforms prior diffusion-based single-phase training methods.",
"strengths": "•\tThe paper is clearly organized with direct motives.\n\n•\tThe proposed method of using basis function to directly distill the learned velocity field is simple and straightforward.",
"weaknesses": "•\tThe content of the paper is inadequate. It would be beneficial to add more discussions on the choice of basis functions and the capacity on approximating certain flows for specific tasks. It remains suspicious to me that the proposed method could scale up to more general complex generation tasks, in which the optimal velocity field and the flow map is highly curved and the degree of the mapping function is large.\n\n•\tThe paper lacks theoretical analysis of the proposed method. It would be beneficial to theoretically discover how large the basis function family is needed for a given task in mathematical details, a simple synthetic example could suffice. The comparison may utilize the theory in polynomial regression analysis.\n\n•\tIn my view, the empirical comparison in the experiments is a bit obscure and misleading. It is somewhat unfair to put the proposed method FlowFit in the column of “1-step” with the other method’s “1-step” like standard flow matching and consistency models, since the FlowFit method regards the multi-step trajectory as a single complex function map. The complexity of the “1-step” for FlowFit method is much higher than the “1-step” for a single step of sample update according to the learned velocity field in standard flow matching and consistency models. It would be more convincing to show the effectiveness of the proposed method by presenting the exact numerical comparison of the total time cost.",
"questions": "•\tBesides the empirical comparison, could the authors provide an understanding on why directly fitting the map could be better than other few step generation methods based on updating samples according to the velocity field? It seems to me that the difficulty in fitting the velocity field is not less than directly updating the samples through the distilled velocity field in previous few step generation methods like diffusion distillation.\n\n•\tCould the flow fitting strategy be applied to other classical flow trajectories like diffusion paths in one step generation tasks like diffusion distillation? If yes, will the type of the true flow trajectory affect the choice of the basis function family and the difficulty of flow fitting? Could the authors provide some understanding on whether and why the flow fitting strategy will outperform other diffusion distillation methods in this case?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T02:24:56",
"modification_date": "2025-11-12T13:52:16",
"review_url": "https://openreview.net/forum?id=9PpLnRAZjN¬eId=3Xvt4kqrOv",
"license": "CC BY 4.0"
},
{
"id": "jqXJTvOAtn",
"forum": "9PpLnRAZjN",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16679/Reviewer_vFLX",
"reviewer_name": "Reviewer_vFLX",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes a single-phase method to learn one-step map to approximate the ODE trajectory in flow matching to enable one-step generation. The authors introduce basis functions in time to model the one-step map to reduce the computation of backpropagation in training.",
"strengths": "The paper is well organized and easy to follow. The authors introduced fixed basis functions in time to model the one-step map in order to reduce the computation of backpropagation in training. Experiment results also verify the efficiency of the proposed method.",
"weaknesses": "The idea is quite naive overall, using a one-step map to approximate the ODE trajectory via matching velocity. Although the author introduces the basis functions, it is basically doing distillation over a pre-trained flow matching model. I understand that this is a single phase method and the training of one-step map can be in parallel with FM, but the computation cost is still doubled. The method is not very appealing to me.",
"questions": "Can the authors provide results for multiple step evaluation of the proposed method? I understand the primary goal is to do one-step generation, but I still want to know if the proposed method can work well when increasing computational budget.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T13:28:17",
"modification_date": "2025-11-12T13:52:16",
"review_url": "https://openreview.net/forum?id=9PpLnRAZjN¬eId=jqXJTvOAtn",
"license": "CC BY 4.0"
}
] |
|
wcInjlUp8V
|
https://openreview.net/forum?id=wcInjlUp8V
|
CoTabBench: A Real-World Benchmark for Question Answering over Weakly-Structured and Heterogeneous Tables
| 4
| 4
|
[
2,
4,
4,
6
] |
[
4,
4,
4,
4
] | 4
|
[
"Table Question Answering",
"Large Language Models",
"Benchmark",
"Real-World Data"
] |
Recent advancements in Large Language Models (LLMs) have significantly propelled their capabilities in table-based question answering. However, existing benchmarks predominantly feature well-structured tables, failing to address the complexities of real-world data, which is often weakly-structured and contains highly heterogeneous content. This discrepancy limits the evaluation of model robustness on diverse and challenging formats, such as tables with intricate layouts and varied data types found in scientific papers or financial reports. To bridge this gap, we introduce CoTabBench, a large-scale, multi-domain, and intricate benchmark featuring over 2,700 real-world, weakly-structured tables and more than 8,600 question-answer pairs spanning 10 distinct domains. We further propose a novel complexity assessment framework, which quantitatively validates the inherent structural and content-based challenges within CoTabBench. Furthermore, we introduce CoTabInstruct, a large-scale training corpus with over 11,000 tables, and present CoTabLLM, a 7B model trained on it that outperforms even leading models like GPT-4.1 on our benchmark. Extensive experiments reveal a significant performance degradation for state-of-the-art models on CoTabBench, highlighting its critical role in advancing robust, real-world table understanding.
|
To address the fact that LLMs fail on complex, real-world tables, we created CoTabBench: a comprehensive benchmark and dataset designed to push models beyond simple structured data and foster more robust table understanding.
|
datasets and benchmarks
|
https://openreview.net/pdf?id=wcInjlUp8V
| 2025-09-17T16:36:52
| 4
|
[
{
"id": "m7pRuiRqVr",
"forum": "wcInjlUp8V",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8787/Reviewer_TtPK",
"reviewer_name": "Reviewer_TtPK",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 1,
"presentation": 3,
"summary": "This paper presents CoTabBenc, a benchmark for tabular question answering. The authors argue that the existing benchmarks do not capture the complexity of real-world tables (complex structures, noise, etc.) and thus curate a collection of tables from various sources (academic documents, web-pages) and create CoTabBench -- a detaset or 2700+ tables, and 8600+ associated question-answer pairs. They also present a training corpus and present CoTabLLM (fine-tuned Quen on the dataset) and show that it outperforms other LLMs (not fine-tuned on the dataset) on CoTabBench.",
"strengths": "- The paper is written well and is easy to understand\n\n- The authors evaluate multiple LLMs on the proposed benchmark and present a detailed compartive anlaysis",
"weaknesses": "The biggest weakness of the work is the lack of details about the benchmark creation process. The paper provides a high level overview of the process, but offers little to no details. No one can read the paper and repeate the becnchmark creation process.\n\nFor instance, how the papers from arXiv and websites were chosen for sampling tables? \n\nThe benchmark is essentially created following a set of rules and question-answer pairs generated synthetically by an LLM. How does the process ensure quality and diversity of these questions? Further, the questions do not capture real-world nuances -- a key limitation of existing works by the authors.\n\nNo details are provided about the human validation process. What is the background of these validators? How de we judge the quality of their work? Are there any inter-annotator studies that are performed?\n\nWhile the authors have compared performance of multiple LLMs on the proposed benchmark, it would have been better to test different SoTA table question answering methods on the benchmark.\n\nLikewise, authors compare the performance of fine-tuned CoTabLLM with with different LLMs in a zero-shot setting. This is unfair. A model especially trained for a specific task is expected to outperform models not trained for the task.\n\nAlso, I could not find a link to the resources developed for the work.",
"questions": "Please see above.",
"flag_for_ethics_review": [
"Yes, Legal compliance (e.g., GDPR, copyright, terms of use, web crawling policies)"
],
"code_of_conduct": "Yes",
"review_date": "2025-11-06T03:42:02",
"modification_date": "2025-11-12T12:09:21",
"review_url": "https://openreview.net/forum?id=wcInjlUp8V¬eId=m7pRuiRqVr",
"license": "CC BY 4.0"
},
{
"id": "djHysG24Va",
"forum": "wcInjlUp8V",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8787/Reviewer_WZdo",
"reviewer_name": "Reviewer_WZdo",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The authors introduce CoTabBench, a new large-scale, multi-domain QA benchmark over weakly‐structured, heterogeneous tables. CoTabBench includes over 2.7K real-world tables and 8.6K+ question–answer pairs spanning 10 domains. The paper proposes a novel complexity‐assessment framework to quantify each table’s structural irregularity and content heterogeneity. To support training, they release CoTabInstruct (~11K tables) and train a 7B model (CoTabLLM) on it; this model even outperforms GPT-4.1 on the CoTabBench tasks. Extensive experiments show that state-of-the-art LLMs suffer marked performance drops on CoTabBench, underscoring the benchmark’s value for advancing robust real-world table understanding.",
"strengths": "1. CoTabBench fills a clear gap by focusing on “weakly-structured and heterogeneous” tables from real sources. Unlike prior datasets (e.g. WikiTables, TableFact, FinQA) that use clean grids, CoTabBench tables often have merged cells, nested headers, multiline cells, etc., and cover 10 diverse domains (scientific and applied).\n\n2. The paper is well-structured and mostly well-written, with sections organized logically (construction pipeline, tasks, complexity, experiments, etc.) \n\n3. The paper evaluates over 20 models, including state-of-the-art proprietary models (GPT-4.1, Qwen-Turbo, Gemini 2.5) and many open-source LLMs (Llama3/4, Qwen2.5/3, DeepSeek, etc.). Both “non-thinking” and chain-of-thought (“thinking”) modes are tested. This breadth demonstrates the benchmark’s difficulty.",
"weaknesses": "1. The claim that CoTabLLM-7B “outperforms GPT-4.1” and other proprietory LLMs may be misleading. CoTabLLM is fine-tuned specifically on CoTabInstruct, whereas GPT-4.1 is evaluated off-the-shelf. It’s expected that a model trained on similar data will have an advantage. There should be a comparison or discussion in the paper around state-of-the-art trainable baselines (https://arxiv.org/abs/2402.01155, https://arxiv.org/abs/2107.07653) and few-shot table QA frameworks (https://arxiv.org/abs/2301.13808).\n\n2. The authors introduce CoTabInstruct (11k tables, 32k QAs) but give few details on how it was assembled or how it differs from CoTabBench. For example, it is unclear whether CoTabInstruct tables overlap with the CoTabBench test set or if they were drawn from separate sources/timeframes. If there is any overlap, it could inflate CoTabLLM’s performance. Similarly, how were the 32k QA generated (presumably a similar multi-agent pipeline)? More transparency about the train/validation/test split and annotation process for CoTabInstruct would improve reproducibility.\n\n3. The question-answer pairs are generated in an automated fashion. Therefore, the reliability on the correctness of the answers remains skeptical unless a manual qualitative evaluation is done on a random sample of QA pairs corresponding to the tables.",
"questions": "Look at the weaknesses above. I have a few more questions enumerated below.\n\n1. Could the authors clarify the source and curation of the CoTabInstruct training set? Specifically, are its tables and QA pairs strictly disjoint from CoTabBench (to prevent leakage)?\n\n2. For better understanding, it would be useful to see concrete examples of each task. Could the authors provide sample QA pairs for (a) row/column counting question, (b) TableQA question (with its reasoning steps), and (c) hallucination question (both Data-Absent and Attribution-Error types)? The descriptions in Sec.2.3 are clear, but actual examples would illustrate the difficulty and clarify the answer format.\n\n3. For the negative (hallucination) questions, how are the two types balanced? Are there equal numbers of Data-Absence vs. Attribution-Error questions? Also, is the model simply asked to answer these questions (where presumably the “correct” answer is some null/negation), or is it a classification (Yes/No) task?\n\n4. Is the code for the data creation pipeline be released? That can help the community create larger scale datasets and adapt the framework to particular use cases of Table QA.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T17:54:10",
"modification_date": "2025-11-12T12:09:22",
"review_url": "https://openreview.net/forum?id=wcInjlUp8V¬eId=djHysG24Va",
"license": "CC BY 4.0"
},
{
"id": "yVHlmlDC06",
"forum": "wcInjlUp8V",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8787/Reviewer_Z8Fj",
"reviewer_name": "Reviewer_Z8Fj",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper introduces CoTabBench, a critically needed, large-scale benchmark for Table Question Answering (TableQA) designed to address the significant disparity between existing datasets, which rely on overly clean, well-structured tables, and the chaotic reality of real-world data. CoTabBench uniquely focuses on the joint challenge of Structural Irregularity and Content Heterogeneity, compiling over 2,700 weakly-structured tables and more than 8,600 question-answer pairs spanning 10 distinct domains, sourced from complex documents such as native LaTeX academic papers and diverse public web domains. The methodology is supported by a novel Complexity Assessment Framework that quantitatively validates the benchmark's rigor, showing superior metrics in both structural and content complexity compared to predecessors like WTQ and InfoTabs. Crucially, extensive experiments involving over 20 state-of-the-art Large Language Models (LLMs) reveal a significant performance degradation, with even highly sophisticated proprietary models like GPT-4.1 achieving a modest overall score of 53.4%. Conversely, the purpose-built CoTabLLM-7B model, trained on the companion CoTabInstruct corpus, surpasses these larger competitors, establishing that the primary constraint in achieving robust, real-world TableQA performance is the scarcity of appropriate instruction data tailored to these compound complexities.",
"strengths": "1. Bridging the Real-World Data Gap via Compound Complexity CoTabBench uniquely addresses a critical gap by sourcing tables from native LaTeX academic papers and diverse public websites, simultaneously ensuring both structural irregularity (e.g., merged cells) and deep content heterogeneity (e.g., formulas, long text). This dual-source methodology creates a highly authentic and rigorous testbed for real-world Table Question Answering (TableQA) challenges. \n\n\n2. The Rigorous Multi-Dimensional Complexity Assessment Framework The paper proposes a novel Complexity Assessment Framework that systematically quantifies difficulty along the distinct dimensions of Structural Irregularity and Content Heterogeneity. This framework uses objective metrics like Merged Cell Ratio (98.34%) and Header Depth Index (3.91) to empirically validate CoTabBench’s superior rigor compared to prior datasets. \n\n\n3. Validation Through Specialized Instruction Tuning and Robust Baselines The creation of the dedicated CoTabInstruct training corpus and the resulting CoTabLLM-7B model definitively proves that the current performance bottleneck is data scarcity, not model scale. CoTabLLM-7B establishes a robust baseline by achieving a higher overall accuracy (57.3%) that surpasses larger proprietary models like GPT-4.1 (53.4%).",
"weaknesses": "W1: Task Design Limitation: Decoupling Structural Interpretation from Complex Reasoning The task design intentionally simplifies reasoning chains (requiring only \"minimal yet essential operations\") to isolate structural challenges, which risks making the benchmark an optimized test of structural parsing rather than a holistic measure of real-world TableQA capabilities. This limits the rigorous stress-testing of models' ability to handle complex content like mathematical formulas or synthesizing long-form text, despite the benchmark including these heterogeneous elements. \n\n\nW2: Reliance on Proprietary LLM-as-a-Judge for Content Quantification A core part of the Complexity Assessment Framework uses the proprietary, closed-source Qwen-Plus-Latest model to quantify Content Heterogeneity (CCC, DTD, SDC). This dependency introduces significant reproducibility risk and transparency limitations, as the specific, nuanced scoring prompts and internal mechanism of the commercial API cannot be independently verified or debugged by the research community. \n\n\nW3: Insufficient Transparency in Complex Data Sourcing and Filtering The tables collected from \"public websites\" via a \"large-scale crawler\" are described vaguely, omitting crucial technical details about the crawler’s methodology, raw data volume, or explicit steps taken to mitigate selection or domain bias across the 10 targeted domains. This lack of transparency limits the ability to replicate the data collection process or fully assess the potential structural or semantic biases inherited in the web-scraped subset. \n\nW4: Ambiguity in Ethical Compliance and PII Handling The Ethics Statement provides only generic assurance that \"no personally identifiable information was included\" without detailing the specific PII removal protocol. This vagueness is insufficient given the sensitive nature of the real-world source material from domains like corporate annual reports (Finance) and pharmaceutical databases (Medicine), posing potential legal or distribution risks. \n\n\nW5: Generalization and Overfitting Concerns for CoTabLLM Despite its strong overall score (57.3%), the CoTabLLM-7B model shows a severe performance drop on the core TableQA task (36.1%), suggesting it may be highly specialized to the specific structural and annotation patterns of the CoTabInstruct training corpus. This disparity raises concerns that the model might struggle to generalize its complex reasoning capabilities to novel table structures or reasoning tasks outside of the distribution it was fine-tuned on. \n\n\nW6: Unverified Generalizability of Complexity Framework Metrics While the Complexity Assessment Framework is validated against only two prior benchmarks (WTQ, InfoTabs), the universal applicability of its calculated metrics (e.g., Merged Cell Span, Header Depth Index) is unverified against benchmarks specifically targeting hierarchical tables (like HiTab) or intense domain-specific reasoning (like FinQA). The framework needs broader testing to prove its long-term utility as a standardized measure for all types of complex tables.",
"questions": "same as weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T12:55:55",
"modification_date": "2025-11-12T12:09:22",
"review_url": "https://openreview.net/forum?id=wcInjlUp8V¬eId=yVHlmlDC06",
"license": "CC BY 4.0"
},
{
"id": "fsPZUTFlDb",
"forum": "wcInjlUp8V",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8787/Reviewer_5E9K",
"reviewer_name": "Reviewer_5E9K",
"rating": 6,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "This paper introduces CoTabBench, a large-scale benchmark designed to evaluate LLMs on complex, real-world table question answering. Addressing the gap left by existing benchmarks that use well-structured data, CoTabBench features over 2,700 weakly-structured and heterogeneous tables from 10 domains. Experiments reveal a significant performance drop in state-of-the-art LLMs on this benchmark. However, the authors' fine-tuned 7B model, CoTabLLM, trained on their new 11,000-table CoTabInstruct dataset, outperforms even top models like GPT-4.1. This suggests the primary bottleneck for real-world table understanding is the lack of representative training data, not model architecture.",
"strengths": "1. The authors introduce CoTabBench, a new large-scale, multi-modal table question-answering dataset. They also propose quantitative metrics to measure and validate its structural irregularity and content heterogeneity.\n2. The authors constructed an instruction-tuning dataset, CoTabInstruct, which is shown to effectively improve model performance.\n3. Extensive experiments are conducted to demonstrate the challenging nature of the proposed CoTabBench dataset.",
"weaknesses": "1. Insufficient Comparison with Prior Work: While Tables 1 and 3 offer comparisons to some existing table QA datasets, several prior works have also introduced datasets focusing on structural irregularities and domain-specific knowledge, such as MMTBench [1], SPIQA [2], RealHiTBench [3], and ENTRANT [4]. The manuscript would be strengthened by a clearer discussion of CoTabBench's unique advantages over these datasets. Specifically, the tables in WTQ and InfoTabs, used for comparison in Table 3, are highly structured and sourced from general-domain Wikipedia content. A more compelling comparison against other recent datasets that are claimed to be structurally irregular, real-world, or domain-specific is expected.\n2. Lack of In-depth Error Analysis: The experiments lack a detailed analysis of the models' failure cases. For instance, what are the most common error types in the TableQA task? Are they primarily due to difficulties in understanding hierarchical headers, handling multi-row cells, or interpreting domain-specific terminology? I suggest that the authors provide and categorize representative error cases to offer deeper insights.\n\n[1] MMTBENCH: A Unified Benchmark for Complex Multimodal Table Reasoning\n\n[2] SPIQA: A Dataset for Multimodal Question Answering on Scientific Papers\n\n[3] RealHiTBench: A Comprehensive Realistic Hierarchical Table Benchmark for Evaluating LLM-Based Table Analysis\n\n[4] ENTRANT: A Large Financial Dataset for Table Understanding",
"questions": "1. Table 1, Figure 2, and Figure 3 are not referenced in the main text. Please ensure all tables and figures are properly cited.\n2. CoTabBench is a multi-modal dataset, yet the comparisons in Tables 1 and 3 are exclusively with text-only table QA datasets. It is recommended to include comparisons with other multi-modal table QA datasets, such as MMTBench [1], SPIQA [2], and ComTQA [3].\n3. The content presented in Figure 3 and Table 4 appears to be redundant. Please consider merging or removing one of them.\n\n[1] MMTBENCH: A Unified Benchmark for Complex Multimodal Table Reasoning\n\n[2] SPIQA: A Dataset for Multimodal Question Answering on Scientific Papers\n\n[3] TabPedia: Towards Comprehensive Visual Table Understanding with Concept Synergy",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-20T00:59:10",
"modification_date": "2025-11-12T12:09:22",
"review_url": "https://openreview.net/forum?id=wcInjlUp8V¬eId=fsPZUTFlDb",
"license": "CC BY 4.0"
}
] |
Pxd5mjwznl
|
https://openreview.net/forum?id=Pxd5mjwznl
|
Difference back propagation with inverse sigmoid function
| 0
| 4.666667
|
[
0,
0,
0
] |
[
4,
5,
5
] | 3
|
[
"Machine Learning",
"AI",
"Algorithm",
"Back Propagation"
] |
Since the proposal of neural networks, the derivative-based back propagation algorithm has been the default setting. However, the derivative of a non-linear function is an approximation to the difference of the function values, and it would be more precise to do back propagation using the difference directly instead of the derivative. While the back propagation algorithm has been the rule of thumb for neural networks, it has become one of the bottlenecks in modern large deep learning models. With the explosion of big data and large-scale deep learning models, a tiny change in the back propagation could lead to a huge difference. Here we propose a new back propagation algorithm based on the inverse sigmoid function to calculate the difference instead of the derivative, and verify its effectiveness with basic examples.
|
We propose a new back propagation algorithm that calculates the back propagation updates using the difference instead of the derivative of the activation function
|
optimization
|
https://openreview.net/pdf?id=Pxd5mjwznl
| 2025-09-19T11:05:56
| 3
|
[
{
"id": "dI3B1VuSWi",
"forum": "Pxd5mjwznl",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15419/Reviewer_ZEzY",
"reviewer_name": "Reviewer_ZEzY",
"rating": 0,
"confidence": 4,
"soundness": 1,
"contribution": 1,
"presentation": 2,
"summary": "The submitted manuscript considers an alternative to the classical back-propagation (BP) for activations and proposes replacing it with a finite difference approximation. The obtained numerical results demonstrate that this approach leads to minor improvement in convergence for the considered toy models.",
"strengths": "The submission is clearly written, and equations support the proposed approach.",
"weaknesses": "I have identified many weaknesses in the submitted work and have listed the most crucial ones below.\n1. The motivation for the proposed modification of the backpropagation (BP) is confusing. The exact gradient is essential for optimizers that update the model's parameters. If one approximates the gradient with a finite difference, then optimizers may converge to the wrong quasi-optimal parameters that do not correspond to the original problem. Moreover, typically only the stochastic gradient estimate is available in BP, and it remains unclear how this factor can be combined with the proposed approach.\n2. No theoretical analysis or even intuition on how the proposed approach resolves the stated problem of \"Although the models have shown great performance, it seems we are facing a bottleneck because nowadays we need to enlarge the models to billions of parameters to improve the accuracy by only a few percentages.\"\n3. The proposed approach is empirically tested only for the toy models and toy datasets, which is insufficient to make any well-supported conclusion about its effectiveness.\n4. Although generalization ability is crucial for deep learning models, I see that such analysis is explicitly excluded from the consideration.\n5. The observed gain in cost (btw what is cost in the y-axis in Figures 2 and 4?) looks not so large and can be explained with some random initialization. No proper statistical analysis of the significance of the presented gain is provided.",
"questions": "1. Do you have any results for medium or large-scale models and datasets? E.g., ResNet18 and CIFAR10 and/or some standard benchmark in NLP like LLaMa or GPT-like models?\n2. Backpropagation (BP) algorithms compute gradients of parameters to update them and minimize the loss function with a proper optimizer. Why have you considered the objective output of BP as an \"inconsistency\"? Inconsistency with respect to what? \n3. What optimizers have you used to obtain the reported learning curves for BP and your approach?\n4. Cross-entropy loss function incorporated sigmoid activation and avoided the mentioned instabilities since the sigmoid function is not computed standalone. So, which cases (specific application tasks) are suitable for your method? \n5. How can you explain that there is no difference between your method and BP for the topic classification task presented in Figure 5? The zoomed plots do not convince of the stability of the gain since the noise is large. Averaging across multiple runs and plotting the standard deviation are necessary to support the conclusion.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T15:19:36",
"modification_date": "2025-11-12T13:34:54",
"review_url": "https://openreview.net/forum?id=Pxd5mjwznl¬eId=dI3B1VuSWi",
"license": "CC BY 4.0"
},
{
"id": "djNfbiKa0r",
"forum": "Pxd5mjwznl",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15419/Reviewer_Zs9A",
"reviewer_name": "Reviewer_Zs9A",
"rating": 0,
"confidence": 5,
"soundness": 1,
"contribution": 1,
"presentation": 1,
"summary": "The authors propose to modify the backpropagation equation in neural networks to improve training. They replace the derivative of the activation function (sigmoid here) by $(a-a')/(z-z')$. They observe improvements on neural networks with 3 and 5 neurons on a synthetic task.",
"strengths": "The motivation is grounded as backpropagation is at the core of every model training.",
"weaknesses": "- Line 29: \"To our knowledge, no new method for performing backpropagation has been proposed.\". The authors seem to be unaware of all the literature about alternatives to backpropagation. More generally, no related work is discussed, there are only 10 references and the most recent one is from 2021. I would advise the authors to look up \"alternative to backpropagation arxiv\" on any search engine.\n- The maths are wrong. The authors assume that after a step the activation $a$ will move exactly in the direction of its gradient (Eq. 3), which is wrong. However it may be a good guess.\n- I do not understand why the authors use the inverse sigmoid function to recompute $z$ from $a$ when it has already been computed during the forward pass.\n- The experimental section is extremely lacking. A 3-neurons network on a synthetic task is not a good benchmark, we would expect at the very least larger networks (e.g. 2 hidden layers, dimension 128) on CIFAR10 for instance.\n\nThe paper is only 4.5 pages long, which leaves plenty of room to include more experiments, related works, etc.",
"questions": "No question.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T18:53:09",
"modification_date": "2025-11-12T13:34:55",
"review_url": "https://openreview.net/forum?id=Pxd5mjwznl¬eId=djNfbiKa0r",
"license": "CC BY 4.0"
},
{
"id": "dcj3ynTZfQ",
"forum": "Pxd5mjwznl",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15419/Reviewer_EQg7",
"reviewer_name": "Reviewer_EQg7",
"rating": 0,
"confidence": 5,
"soundness": 1,
"contribution": 1,
"presentation": 1,
"summary": "The paper proposes replacing the derivative of activation function with a difference quotient to fix an alleged inconsistency in backpropagation and claims this helps with vanishing gradients.",
"strengths": "n/a",
"weaknesses": "The described inconsistency in the paper does not exist. Gradient descent adjusts parameters, and activations a or pre-activations z are functions of parameters, so there is no inconsistency between updated z' and a'. Therefore, the paper is trying to solve a non-existent problem by replacing true gradient of activations by its finite difference approximation in backpropagation. Also, ignoring decades of prior work on training neural networks, makes the contribution appear uninformed.\n\nThe authors are strongly encouraged to further strengthen their conceptual understanding of deep learning and conduct a more thorough literature review before attempting to develop and publish new methods.",
"questions": "n/a",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T08:36:50",
"modification_date": "2025-11-12T13:34:55",
"review_url": "https://openreview.net/forum?id=Pxd5mjwznl¬eId=dcj3ynTZfQ",
"license": "CC BY 4.0"
}
] |
sJI2JCggyD
|
https://openreview.net/forum?id=sJI2JCggyD
|
Delta Activations: A Representation for Finetuned Large Language Models
| 3.333333
| 3.666667
|
[
2,
4,
4
] |
[
4,
4,
3
] | 3
|
[
"Representation",
"LLM",
"post-training",
"finetuning"
] |
The success of powerful open source Large Language Models (LLMs) has enabled the community to create a vast collection of post-trained models adapted to specific tasks and domains. However, navigating and understanding these models remains challenging due to inconsistent metadata and unstructured repositories. We introduce Delta Activations, a method to represent finetuned models as vector embeddings by measuring shifts in their internal activations relative to a base model. Clustering analysis shows that Delta Activations achieve strong separation of finetuned domains, significantly outperforming baselines such as flattened weights, salient parameter masks, and output embeddings, while being more lightweight and computationally efficient. Delta Activations also demonstrate desirable properties: they are robust across finetuning settings and exhibit an additive property when finetuning datasets are mixed. We also explore extensions of Delta Activations: they can represent tasks via few-shot finetuning for reliable model retrieval and guide model selection for merging by quantifying similarity between models. Furthermore, activations can be substituted with other representation extraction methods, demonstrating the flexibility of the broader Delta-X framework.
We hope Delta Activations can facilitate the practice of reusing publicly available models.
|
foundation or frontier models, including LLMs
|
https://openreview.net/pdf?id=sJI2JCggyD
| 2025-09-08T22:33:52
| 3
|
[
{
"id": "TP1UBBYsj8",
"forum": "sJI2JCggyD",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3142/Reviewer_KPrH",
"reviewer_name": "Reviewer_KPrH",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 4,
"summary": "This paper proposes a new method called \"Delta Activations,\" which aims to create efficient vector representations (embeddings) for a vast collection of fine-tuned LLM that often lack metadata.\n\nThe method works by feeding a fixed set of generic probe datasets into both a fixed \"base model\" and a \"fine-tuned model.\" It then calculates the difference (the \"delta\") between their last-layer hidden states. By aggregating these different vectors (e.g., by averaging), a unique vector embedding is generated for the fine-tuned model.",
"strengths": "* The method shows superior performance when clustering models that are initialized from the same base model.\n\n* The paper is well-written and flows smoothly.",
"weaknesses": "The paper's main contribution, \"Delta Activations,\" has a fundamental limitation: it heavily relies on a shared, architecturally identical base model. This is because the method's core operation is the calculation of differences between high-dimensional activation vectors (e.g., 4096-D). However, in today's LLM ecosystem, models are often based on different architectures or are closed-source, making their internal architectures inaccessible. Therefore, the truly critical and pressing challenge is cross-architecture model representation and clustering.\n\nThe paper relegates this key challenge to an extension called \"Delta Meaning.\" This approach represents a massive compromise: it degenerates from a high-dimensional (4096-D) internal activation space to an extremely low-dimensional (20-D) external probabilistic space. As shown in Table 3, the representational power of Delta Meaning is far weaker than that of Delta Activations (a score of just 0.20 vs. 0.61), confirming that a significant amount of critical internal information is lost during the transition to this probabilistic space.\nIn essence, Delta Activations solves a \"simple problem\" that relies on overly strong assumptions and is limited in real-world scenarios. Meanwhile, the solution it provides for the \"real problem\" (cross-architecture clustering) is excessively compromised on performance. Consequently, it is not a convincing solution for organizing a heterogeneous model ecosystem.\n\nFurthermore, if the model zoo is very large, even a single inference pass per model will introduce significant computational overhead. The method also relies on a fixed, small, \"generic\" Probe Dataset.",
"questions": "What is the biggest and most critical application scenario for this method?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T21:19:52",
"modification_date": "2025-11-12T11:02:49",
"review_url": "https://openreview.net/forum?id=sJI2JCggyD¬eId=TP1UBBYsj8",
"license": "CC BY 4.0"
},
{
"id": "XsZz4MQkqy",
"forum": "sJI2JCggyD",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3142/Reviewer_6MYB",
"reviewer_name": "Reviewer_6MYB",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "Fine-tuned large language models (LLMs) are abundant but hard to reuse due to poor metadata and disorganization. This paper proposes **Delta Activations**—a lightweight way to turn fine-tuned models into vector embeddings by comparing internal activation differences between fine-tuned and base models on generic prompts. \n\nIt outperforms baselines in domain-based clustering (average silhouette score 0.614 across 3 base models) and has key strengths: robustness to training changes, additive properties for mixed datasets, and extension into the **Delta-X framework** (supporting logits/semantic representations). It also enables few-shot task embedding for model retrieval and better model selection for merging (2.0% BBH accuracy gain), aiding efficient reuse of public fine-tuned LLMs.",
"strengths": "1.\tAn interesting problem. It points out the difficulty of reusing fine-tuned LLMs caused by messy metadata and unorganized repositories.\n2.\tLightweight and efficient: Delta Activations only needs one forward pass to compute, avoiding complex calculations like matrix factorization.\n3.\tStrong clustering ability: It outperforms baselines (e.g., flattened weights, output embeddings) in grouping models by fine-tuned domains, with an average silhouette score of 0.614 across three base models.",
"weaknesses": "1.\tInsufficient evidence for research motivation: The paper claims fine-tuned LLMs are underused due to poor metadata, but lacks real-world data (e.g., stats on unused models or user surveys) to prove this problem.\n2.\tVague practical applications for reuse: It mentions aiding model reuse, but gives few details on how end-users (e.g., developers) would actually apply it, like no step-by-step example of retrieving a model for a real task.\n3.\tRelies on internal model access: It needs hidden activations, which are unavailable for closed-source LLMs—limiting its real-world use where many LLMs are proprietary.",
"questions": "1.\tCould you provide more real-world evidence (e.g., statistics on unused fine-tuned LLMs, surveys of developers’ reuse struggles) to support the scale and urgency of the \"poor metadata causing underused models\" problem?\n2.\tCan you give a concrete, step-by-step example of how end-users (e.g., a developer building a medical app) would apply Delta Activations to retrieve and reuse a domain-specialized fine-tuned model?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T16:04:43",
"modification_date": "2025-11-12T11:02:49",
"review_url": "https://openreview.net/forum?id=sJI2JCggyD¬eId=XsZz4MQkqy",
"license": "CC BY 4.0"
},
{
"id": "sfNQ1NAwj6",
"forum": "sJI2JCggyD",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3142/Reviewer_9JVg",
"reviewer_name": "Reviewer_9JVg",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces \"Delta Activations\", a simple and efficient method to create a compact vector \"fingerprint\" for any fine-tuned LLM. The method works by feeding a small set of generic, task-agnostic prompts into both the fine-tuned model and its original base model, and then calculating the _difference_ (the delta) between their internal activations at the final layer. The authors demonstrate that these DA fingerprints are highly effective, allowing models to be automatically and accurately clustered by their specialized domain. Furthermore, the paper shows that this embedding space has some potential capabilities, like additive property and model selection.",
"strengths": "1. Delta Activations method is simple and efficient. It only requires model inference.\n2. The paper discusses various potential methods and shows that Delta Activations works best.\n3. The paper points out potential further research directions.",
"weaknesses": "1. The motivation is not clear enough. Why representation of models in the same pool should be close? I feel it's more like a hypothesis assumption. The model with very close embedding to a specific task may have bad generalization.\n2. The paper's experimental results are mainly based on the silhouette score, which is just a \"proxy\" metric measuring how well the embeddings clustered. However, the main objective should be applications like downstream task performance, while this paper rarely shows such results.\n3. **Model selection and similarity measurement** (line 421) paragraph is the only place that shows downstream task results. However, this experimental setup is a bit vague. Why only identify the _single_ most-related model and sample the remaining 19 models randomly? Why not sample the top 20 most-related models, which is more aligned to the paper's hypothesis and should even show further improvement?\n4. **Additive property** experiments show Mixed and Sum has high similarity. But what's the benefit here is unclear. This setting has a big gap between model merging. And a maximum 0.73 cosine similarity in table 4 is not very strong.\n5. The experimental setup is too ideal. The data domains are separated clearly, and they are more independent. But real training data is usually mixed and complex.",
"questions": "1. Could you explain more about **Additive property** paragraph in line 290? I don't get what's the benefit there.\n2. In line 210, it should be **each** pool contains 15 models or it includes 15 models in total for three pools?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T11:16:29",
"modification_date": "2025-11-12T11:02:49",
"review_url": "https://openreview.net/forum?id=sJI2JCggyD¬eId=sfNQ1NAwj6",
"license": "CC BY 4.0"
}
] |
|
uS2FiaAkCz
|
https://openreview.net/forum?id=uS2FiaAkCz
|
Towards Monotonic Improvement in In-Context Reinforcement Learning
| 3
| 3.5
|
[
6,
4,
0,
2
] |
[
3,
3,
4,
4
] | 4
|
[
"Reinforcement Learning",
"Meta-RL",
"In-context Reinforcement Learning",
"Transformers",
"Learning to Learn"
] |
In-Context Reinforcement Learning (ICRL) has emerged as a promising paradigm for developing agents that can rapidly adapt to new tasks by leveraging past experiences as context, without updating their parameters. Recent approaches train large sequence models on monotonic policy improvement data from online RL, aiming for continued performance improvement at test time. However, our experimental analysis reveals a critical flaw: at test time, these models cannot sustain the continued improvement seen in the training data. Theoretically, we identify this phenomenon as *contextual ambiguity*, where the model's own stochastic actions can generate an interaction history that misleadingly resembles that of a sub-optimal policy from the training data, initiating a vicious cycle of poor action selection. To resolve the contextual ambiguity, we introduce *Context Value* into the training phase and propose **Context Value Informed ICRL** (CV-ICRL). CV-ICRL uses Context Value as an explicit signal representing the ideal performance theoretically achievable by a policy given the current context. As the context expands, it can include more task-relevant information, and therefore the ideal performance should be non-decreasing. We prove that the Context Value tightens the lower bound on the performance gap relative to an ideal, monotonically improving policy. We further propose two methods for estimating Context Value at both training and testing time. Experiments conducted on the Dark Room and MiniGrid testbeds demonstrate that CV-ICRL effectively mitigates performance degradation and improves overall ICRL abilities across various tasks and environments. The source code and data of this paper are available at https://anonymous.4open.science/r/towards_monotonic_improvement-E72F.
|
reinforcement learning
|
https://openreview.net/pdf?id=uS2FiaAkCz
| 2025-09-16T22:52:04
| 4
|
[
{
"id": "7RMkPfkIZB",
"forum": "uS2FiaAkCz",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7740/Reviewer_8tXY",
"reviewer_name": "Reviewer_8tXY",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper deals with in-context reinforcement learning (ICRL) and proposes Context Value Informed ICRL to reduce Contextual Ambiguity. They show that context value tightens the lower bound on the performance gap when considering idealistic monotonically improving policy. They perform experiments across two benchmarks - Dark Room and Minigrid - to demonstrate that their approach works better.",
"strengths": "### Strengths:\n\n1. This work identifies and introduces “Contextual Ambiguity” which denotes that even random action taken at early interaction can generate an interaction history that may mislead to assume different context.\n\n2. They prove an improved performance bound with the guidance from the context value.\n\n3. Ablations are conducted legitimately, and the experimental results are encouraging to show the efficacy of the proposed approach.",
"weaknesses": "### Weaknesses:\n\n1. Without enough history, the context value in test time still could be sub-optimal. Do you have any bound on how much history it would need to clearly identify the context?\n\n2. While the paper validates the approach on two benchmarks, it is limited to very grid like setup. It would require more thorough experiments across other types of tasks to demonstrate wide applicability. Also, apart from AD-like baselines, comparisons with other strong baselines would strengthen the work.\n\n3. Further, the memory or computational overhead due to these additional components has not been discussed. Also, I would like to see some discussion on the limitations.",
"questions": "1. Is there any new hyperparameter that is introduced to learn the context-value?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T01:24:39",
"modification_date": "2025-11-12T11:56:31",
"review_url": "https://openreview.net/forum?id=uS2FiaAkCz¬eId=7RMkPfkIZB",
"license": "CC BY 4.0"
},
{
"id": "vJWIq0R8fd",
"forum": "uS2FiaAkCz",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7740/Reviewer_nRch",
"reviewer_name": "Reviewer_nRch",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 4,
"summary": "This paper proposes Context Value Informed ICRL (CV-ICRL), introducing a context value signal during training, which explicitly represents the ideal, theoretically achievable performance given the current context. It is argued that algorithm distillation-like ICRL algorithms work best with monotonically improving context, similar to their training data, while actual test-time contexts often contain noisy behaviors. The authors give theoretical results showing the addition of CV yields a tighter bound for performance, and empirically validates CV-ICRL in grid-world environments, especially regarding task generalization.",
"strengths": "- Addresses an important problem in ICRL. Robustness to noise is critical under long-context and OOD scenarios.\n \n- Provides theoretical guarantees about the benefits of the introduced context value.\n \n- Empirical results show good overall performance with OOD task generalizations.",
"weaknesses": "- The definition of context value is a bit vague. How to compute the optimal policy \"only based on C, without any other information of $\\tau$\"?\n \n- CV-ICRL seems to use the source policy for each episode of context in its training data as the context-optimal policy. There should be some explanations as to why this property holds.\n \n- Experiments are conducted in very simple grid-world environments.",
"questions": "- How to operationalize Def.1? Is there a way to practically compute the context-optimal policy?\n \n- Arguments about the correctness of training-time context values are missing, and seem to require some non-trivial assumptions (e.g. dataset contains only optimal policies for each environment)",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T19:50:15",
"modification_date": "2025-11-12T11:56:31",
"review_url": "https://openreview.net/forum?id=uS2FiaAkCz¬eId=vJWIq0R8fd",
"license": "CC BY 4.0"
},
{
"id": "qiBeh9afhk",
"forum": "uS2FiaAkCz",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7740/Reviewer_1G4v",
"reviewer_name": "Reviewer_1G4v",
"rating": 0,
"confidence": 4,
"soundness": 1,
"contribution": 1,
"presentation": 1,
"summary": "The work proposes a modification to in context reinforcement learning via a so called context value. This context value is introduced to avoid ambiguities of contexts, by providing a clear label for individual context windows. The proposed context value based ICRL is evaluated for algorithm distillation style ICRL methods. In the empirical evaluation the work shows that their proposed method outperforms the algorithm distillation baseline. The idea of using a context value to avoid disambiguities of contexts is promising and could potentially inform further meta-RL research. However, I believe that the work is far from publication in it's current form. I am doubtful of some of the theoretical ideas as well as the empirical evaluation. Overall I vote for rejection.",
"strengths": "The idea of using a context value to avoid disambiguities of contexts is promising and could potentially inform further meta-RL research.",
"weaknesses": "I believe that a more thorough discussion of related work would be needed as meta-RL and ICRL methods are not just limited to the AD style approaches. Take for example the work by Melo (https://proceedings.mlr.press/v162/melo22a.html) which modifies the RL$^2$ paradigm to work with transformers. A follow-up on this work showed that cross-episode attention (through a hierarchical transformer architecture) essentially avoids disambiguities in context and enables better learning (https://openreview.net/forum?id=UENQuayzr1). Besides, contextual RL provides a different notion on what context means. In this setting, context is used to learn general policies/value functions to be able to adapt in a zero-shot setting. Works like \"Contextualize Me\" (https://openreview.net/forum?id=Y42xVBQusn) e.g. show how context-optimal policies behave and how \"Context Values\" can be used to learn general policies.\n\nWhen discussing the contextual ambiguity, I fail to see why it is reasonable to assume monotonic improvement, especially when training across different tasks. Policies that are specialized to solve one task perfectly are much more likely to fail on another dissimilar task. Thus the monotonicity assumption seems to likely be false.\n\nFurther, Property 1 says that the context value is monotonic simply since adding another tuple to the context provides more information. This seems extremely wrong to me. The additional tuple could provide redundant data which would not increase the information about the task or irrelevant information or otherwise not add anything. In some cases adding more data can actually decrease information content. So I fail to see why the monotonicity should hold for the context value.\n\nWhen discussing CV-ICRL with estimation of Vc through the context it is stated the reward-to-go is used as supervision signal. Since this value is then used as part the context to the agent. How exactly is that different to the reward-to-go used as context in the decision transformer?\n\nThe generalization experiments do not test for out-of-distribution generalization. Out of the 4 \"novel\" test tasks only the four rooms environment comes close to testing out of distribution capabilities as it is possible to observe states without any walls. All other environments are in the training distribution. Thus the experiments are testing interpolation capabilities but definitely not out-of-distribution capabilities. The survey by Kirk et al (https://www.jair.org/index.php/jair/article/view/14174/26890) provides a clear evaluation protocol for RL to assess out of distribution capabilities. I believe the claim (of the conclusion) that AD-like ICRL algorithms are capable of \"learning-to-learn\" does not hold.\n\nI fail to see the utility of the ablation studies. While I appreciate that not including $\\phi(C)$ provides insights about performance gains from auxiliary task training, not using any other target besides the reward to go would be more informative. Similarly, choosing a random function can not provide any meaningful insights at all. Since $\\phi(t)$ was proposed to focus on the monoticity assumption, choosing *any other monotonic function* would be more informative than a random function.",
"questions": "* How exactly is that different to the reward-to-go used as context in the decision transformer? \n* Why were only AD style methods considered?\n* Why did you not consider a combination of $\\phi(C)$ and $\\phi(t)$ that tries to incorporate both task knowledge as well as monotonicity?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T21:18:54",
"modification_date": "2025-11-12T11:56:31",
"review_url": "https://openreview.net/forum?id=uS2FiaAkCz¬eId=qiBeh9afhk",
"license": "CC BY 4.0"
},
{
"id": "36k6wt34Di",
"forum": "uS2FiaAkCz",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7740/Reviewer_B2YQ",
"reviewer_name": "Reviewer_B2YQ",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 2,
"summary": "The authors conjecture that the performance degradation encountered in algorithm-distillation-like in-context reinforcement learning algorithms is due to having suboptimal decisions in the context due to randomness. To combat it, they propose Context Value Informed ICRL (CV-ICRL) that augments the context with the latent value of the context-optimal policy. They theoretically show that the performance bound is improved when the policy has access to the context value, and empirically demonstrate the effectiveness of CV-ICRL on Dark Room and Minigrid testbeds.",
"strengths": "- The authors propose a novel perspective on the cause of the degradation of performance when running algorithm distillation. They term it contextual ambiguity.\n- The work presents a novel algorithm called CV-ICRL that augments AD, which the authors claim addresses the contextual ambiguity.\n- The empirical study demonstrates improved returns of CV-ICRL and lower performance degradation frequencies.\n- The statistical analysis in the empirical study is reasonably rigorous.",
"weaknesses": "- One major concern is the lack of rigour in the theoretical claims. For instance, I found Definition 1 ambiguous. It is unclear how a context-optimal policy is defined. One interpretation could be that the context defines an empirical MDP, and the context-optimal policy is the optimal policy that solves that MDP. However, there could be more interpretations.\n- Some notations are confusing and overloaded multiple times. As an example, I've seen at least $V$, $V_C$ and $V(s; C)$ used to define different kinds of values. Sometimes, it's hard to tell if it's denoting a function or a value being mapped to.\n- The contextual ambiguity is merely a conjecture. The authors did not verify if it is the real culprit. If it is the real cause, one should expect the performance degradation to disappear once the contextual ambiguity is removed.\n- I am not sure if the proof of Theorem 1 is correct. Particularly, it is unclear how the authors arrive at line 645 from line 644. As far as I understand, $V$ here is a mapping from the space of state and context to a scalar, while $J$ here is a mapping from the policy space to a scalar. Therefore, I don't see how $||J^* - J||{\\infty}$ is an upper bound of $ ||V^* - V||{\\infty}$.\n- Using the data-generating policy as a proxy for the context-optimal policy seems unjustified. It also renders the method inapplicable in cases where the policy that generates the data is missing or inaccessible.\n- CV-ICRL only exhibits a relatively minor performance improvement over the AD baselines. I think it's partly due to AD can already solve most of the tasks in the testbed reasonably well. It would make a stronger case if the authors could demonstrate the robustness of CV-ICRL in tasks where AD's performance degrades significantly.\n\nMinor concerns:\n- Though not drastically affecting comprehension, there are frequent grammar and spelling errors across the text. The paper would benefit from polishing the writing.\n- The paper would benefit from referencing recent surveys on in-context reinforcement learning (e.g., Moeini et al., 2025) besides meta-reinforcement learning (Beck et al., 2023).",
"questions": "The idea of CV-ICRL conditions on the premise that the model can perform accurate policy evaluation by predicting the optimal value based on the context. It is a nontrivial task because the optimal policy can be recovered from the optimal value function. Thus, I wonder why we still wish to use the model as a policy? Why don't we directly use the model to predict, say, the optimal action values, and extract the optimal policy from them?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-20T08:33:25",
"modification_date": "2025-11-12T11:56:32",
"review_url": "https://openreview.net/forum?id=uS2FiaAkCz¬eId=36k6wt34Di",
"license": "CC BY 4.0"
}
] |
|
xindJJLSr1
|
https://openreview.net/forum?id=xindJJLSr1
|
ReWatch-R1: Boosting Complex Video Reasoning in Large Vision-Language Models through Agentic Data Synthesis
| 5
| 3.833333
|
[
6,
6,
6,
4,
4,
4
] |
[
4,
3,
4,
5,
3,
4
] | 6
|
[
"Video Reasoning",
"Large Vision-Language Models (LVLMs)",
"Agentic Data Synthesis",
"Multi-Agent ReAct",
"Reinforcement Learning with Verifiable Reward (RLVR)",
"Chain-of-Thought (CoT)"
] |
While Reinforcement Learning with Verifiable Reward (RLVR) significantly advances image reasoning in Large Vision-Language Models (LVLMs), its application to complex video reasoning remains underdeveloped. This gap stems primarily from a critical data bottleneck: existing datasets lack the challenging, multi-hop questions and high-quality, video-grounded Chain-of-Thought (CoT) data necessary to effectively bootstrap RLVR. To address this, we introduce ReWatch, a large-scale dataset built to foster advanced video reasoning. We propose a novel multi-stage synthesis pipeline to synthesize its three components: ReWatch-Caption, ReWatch-QA, and ReWatch-CoT. A core innovation is our Multi-Agent ReAct framework for CoT synthesis, which simulates a human-like "re-watching" process to generate video-grounded reasoning traces by explicitly modeling information retrieval and verification. Building on this dataset, we develop ReWatch-R1 by post-training a strong baseline LVLM with Supervised Fine-Tuning (SFT) and our RLVR framework. This framework incorporates a novel Observation \& Reasoning (O\&R) reward mechanism that evaluates both the final answer's correctness and the reasoning's alignment with video content, directly penalizing hallucination. Our experiments show that ReWatch-R1 achieves state-of-the-art average performance on five challenging video reasoning benchmarks, substantially outperforming models trained on all other open-source datasets. We also provide crucial insights into the training dynamics of SFT and RL for complex video reasoning.
|
We introduce an agent-based pipeline to synthesize a high-quality video reasoning dataset (ReWatch) and a novel reinforcement learning reward (O&R) to train LVLMs, achieving state-of-the-art performance.
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=xindJJLSr1
| 2025-09-19T20:00:47
| 6
|
[
{
"id": "gSNBwjmODT",
"forum": "xindJJLSr1",
"review_number": 6,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18045/Reviewer_tQx8",
"reviewer_name": "Reviewer_tQx8",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This submission presents ReWatch-R1, a framework for advancing complex video reasoning in large vision-language models (LVLMs). The core contributions include the creation of the large-scale, multi-stage ReWatch dataset aimed at challenging multi-hop video reasoning, a multi-agent data synthesis pipeline generating temporally precise captions, high-difficulty QAs, and video-grounded CoT traces, and a new Observation & Reasoning (O&R) reward mechanism for RL. By post-training an LVLM backbone with these elements through SFT and RL, the authors demonstrate state-of-the-art performance across five video reasoning benchmarks, alongside ablation studies.",
"strengths": "-The topic of video reasoning is of community’s interest and timely.\n-The hierarchical segmentation and multi-agent CoT pipeline used to generate the ReWatch dataset is methodically crafted and shown to have yielded data with deeper temporal grounding and higher question complexity.\n-The baselines compared are pretty up-to-date, which verifies the state-of-the-art achievement on the benchmarks tested in the paper.",
"weaknesses": "-The incremental benefit of the O&R reward mechanism is not fully dissected independently from other RL contributions. In Table 1, the improvement from RL (+O&R) appears relatively modest, and there is limited analysis delineating whether specific reasoning failures or hallucinations are directly ameliorated by the O&R reward in practice.\n-The paper leans heavily on LLMs (Gemini, GPT-4.1, Qwen, etc.) for practically all phases - data synthesis, answer verification, reward calculation, and benchmarking. Although this is common in the area, the cumulative propagation of LLM biases and potential “meta-overfitting” is insufficiently analyzed; for example, does repetitive LLM-based data filtering introduce subtle data shortcuts or annotation artifacts? A systematic error/robustness analysis would be welcome.\n-There is little systematic exploration of failure modes, such as where ReWatch-R1 still hallucinates, or which reasoning subtypes remain unsolved - as could be illustrated by extensive qualitative error analysis or confusion matrices per reasoning dimension (section 4.2 or appendix). This is important to inform the future community on limits.",
"questions": "-Can the authors provide concrete ablation or error-type breakdowns specifically isolating the effect of the O&R reward on hallucination rates or logical errors, perhaps through qualitative examples or confusion matrices? It is unclear if this reward is the main driver for reduced hallucinations, or if SFT data quality dominates.\n-Given the extensive use of LLMs at nearly every phase, is there a risk that ReWatch-R1’s strong performance partly reflects an overfit to LLM-specific language or annotation artifacts in the data/reward pipeline?\n-How sensitive is the model (and pipeline) to coarse versus fine semantic segmentation in the captioning stage? For example, does segment over-segmentation harm long-term reasoning due to context fragmentation?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T18:28:44",
"modification_date": "2025-11-12T14:10:32",
"review_url": "https://openreview.net/forum?id=xindJJLSr1¬eId=gSNBwjmODT",
"license": "CC BY 4.0"
},
{
"id": "1ar0QNFfC5",
"forum": "xindJJLSr1",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18045/Reviewer_kZ7P",
"reviewer_name": "Reviewer_kZ7P",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper mainly contributes to the following three points:\n\n1). Proposing a novel multi-stage synthesis pipeline to synthesize ReWatch, a large-scale dataset which includes ReWatch-Caption, ReWatch-QA, and ReWatch-CoT three components, for fostering advanced video reasoning. \n\n2). Proposing a new Observation & Reasoning (O&R) reward for RLVR that improves reasoning by rewarding both final-answer correctness and the factual grounding of intermediate steps in video content.\n\n3). Developing ReWatch-R1 by post-training a strong baseline LVLM with Supervised Fine-Tuning (SFT) and O&R framework, achieves state-of-the-art average performance on five challenging video reasoning benchmarks.\n\nThe dataset construction pipeline and the two-stage post-training framework proposed in this paper put the description of video change into the CoT, providing a new idea for the research of video reasoning from the perspectives of dataset and training.\n\nIn summary, this paper shows a high level in terms of method description (data construction pipeline,the O&R methods, etc.), experimental setup and writing, but for the situation that pipeline may cause error accumulation, some experimental results are not analyzed (this part will be described in detail in \"Weaknesses\"), which leads to the lack of analysis completeness. The quality of this paper will be improved after analyzing the missing parts.",
"strengths": "This paper proposes a dataset construction pipeline for video reasoning, which is enlightening for the dataset construction method in this field. The detailed description of video in different time steps is introduced based on semantic segmentation, which combines with the video summary to enhance the reasoning ability. In Addition, this paper uses three-layer filtering to screen the data that can best reflect the reasoning ability. The two-stage post-training framework simulates the \"Thought-Action-Observation loop\", and uses the improved O&R reward for RLVR to enhance the reasoning performance of ReWatch-R1.\n\nTherefore, this paper provides a new idea for the subsequent dataset construction method in this field and the improvement of reasoning performance.",
"weaknesses": "The weaknesses of this paper focus on the method, experiment and writing:\n\nIn method and experiment:\n\n1). Error accumulation: the stage of semantic segmentation and detailed description generation in the dataset construction pipeline designed in this paper may cause error accumulation. The error caused by segmentation may cause one semantic of the video to be put into multiple segments. The description generation may result in the missing description between multiple time steps, which may result in the description of an item appearing in the previous time step, but the description of the item is missing in the next step, resulting in error accumulation. This paper does not discuss and analyze this situation.\n\n2). In Table 1, the results of ReWatch-R1-SFT and ReWatch-R1+O&R on the CG AV counting are the same, which are not analyzed in the paper.\n\n3). Lack of comparative analysis between the vanilla RLVR and the improved O&R method under the dataset constructed in the paper.\n\n4). In the analysis in Section 4.2 (specifically line 411), the performance degradation caused by Video-R1 replacing ReWatch-CoT does not consider the reason that SFT data and RL data is mismatch, which seems to be somewhat contradictory to the previous sentence \"SFT is an independent prerequisite for RL\".\n\n5). Also in the analysis in Section 4.2 (specifically 414 lines), it seems that ablation study using different data combinations (e.g. only using ReWatch-Caption or ReWatch-QA or ReWatch-CoT) have not been carried out, and the conclusion that \"The Quality of QA data used for RL determines final performance.\" is somewhat far fetched.\n\n6). In Appendix C2, the performance degradation of 384 frame training is not analyzed compared with 192 frame training.\n\nIn writing:\n\n1). In Figure 1, the annotation order of the legend is inconsistent with the display order of each LVLM in the figure.\n\n2). The two marking symbols not appear in Table 3 of appendix C2.",
"questions": "1) Why are the number of ReWatch-QA and ReWatch-CoT inconsistent? Shouldn't one QA data correspond to one CoT data?\n\n2) In Section 2.3 (specifically line 247), why does a structured execution trajectory end with A_final instead of observation? (Thought-Action-Observation loop)",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T19:38:50",
"modification_date": "2025-11-12T14:10:32",
"review_url": "https://openreview.net/forum?id=xindJJLSr1¬eId=1ar0QNFfC5",
"license": "CC BY 4.0"
},
{
"id": "kFWMHLn917",
"forum": "xindJJLSr1",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18045/Reviewer_UAaJ",
"reviewer_name": "Reviewer_UAaJ",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces **ReWatch-R1**, which tackles the challenge of video reasoning from two aspects: \n1. Data. A large synthetic dataset called **ReWatch** is collected, which includes detailed video captions, challenging question-answer pairs, and high-quality, video-grounded reasoning traces (COT data). These are generated using a multi-agent ReAct system. \n2. Model. Authors post-train QwenVL by SFT and GRPO. They introduce a new Observation & Reasoning (O&R) reward, which evaluates the accuracy of video observations and the validity of reasoning process. \nReWatch-R1 achieves promising performance compared with other 7B models.\nAnalysis shows that high-quality reasoning data is crucial for RL and RL on \"thinking\" mode improves the reasoning efficiency.",
"strengths": "1. The proposed multi-agent COT data synthesis pipeline is scalable to curate large-scale video-grounding reasoning data.\n2. The reward design shows the emphasis on explicit observation and reasoning is beneficial for video reasoning.\n3. The data and model design achieves state-of-the-art performance on video reasoning and understanding benchmarks in 7B-scale models. The extensive analysis shows insights on the role and importance of SFT and RL.",
"weaknesses": "1. The information source for data synthesis is semantic segmentation and detailed video description from Gemini. The accuracy of this multi-step hierarchical captioning is not validated. Therefore there is no direct quality assessment of the synthetic data.\n2. All results are based on a 7B model. The benefits of high-quality COT data and O&R reward mechanism are not validated on larger-scale models.",
"questions": "1. Equation (20): the design of non-format rewards lacks motivation. For example, why not use the (weighted) summation of all rewards. The design has no explanation or experimental results.\n2. RL on 7B model shows promising improvements. However, the performance still lags behind the larger 32B model. Therefore, whether the same RL on 32B leads to improvement is questionable.\n\nSome typos:\n1. L255: \"a novel O&R reward mechanism we propos\" -> \"propose\"\n2. L771: \"Tabale 4\" -> \"Table 4\"",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T17:38:49",
"modification_date": "2025-11-12T14:10:33",
"review_url": "https://openreview.net/forum?id=xindJJLSr1¬eId=kFWMHLn917",
"license": "CC BY 4.0"
},
{
"id": "Ivv1eEfmEN",
"forum": "xindJJLSr1",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18045/Reviewer_ymDU",
"reviewer_name": "Reviewer_ymDU",
"rating": 4,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes ReWatch-R1, a framework that enhances complex video reasoning in large vision-language models through agentic data synthesis and verifiable reinforcement learning. It introduces ReWatch, a large-scale, high-quality dataset built via a three-stage pipeline—hierarchical video captioning, high-difficulty QA generation, and multi-agent chain-of-thought synthesis—ensuring strong temporal grounding and reasoning diversity. Furthermore, the authors design an Observation & Reasoning (O&R) reward that jointly evaluates answer correctness and factual grounding of intermediate reasoning steps. Combined, these innovations enable ReWatch-R1 to achieve state-of-the-art performance on multiple challenging video reasoning benchmarks.",
"strengths": "1. The paper introduces a novel multi-stage data construction pipeline that enables the creation of ReWatch, a large-scale and high-quality dataset specifically designed for video reasoning. The dataset comprehensively includes caption, QA, and chain-of-thought (CoT) annotations, providing valuable and versatile resources that can benefit future research and community development in multimodal reasoning.\n2. The authors offer detailed experimental descriptions and implementation settings, ensuring strong reproducibility and facilitating independent verification and extension of their results.",
"weaknesses": "1. The paper lacks qualitative experimental results, such as example responses generated by ReWatch-R1 for specific video reasoning cases. Including such examples would help readers intuitively understand the model’s reasoning style, output quality, and advantages over baseline models.\n2. The selection of baseline models is relatively limited, lacking comparisons with stronger closed-source systems such as Gemini 2.5 Pro/Flash and GPT-4o, as well as with recent RL-based approaches like TW-GRPO. Including these baselines would provide a more comprehensive evaluation of ReWatch-R1’s effectiveness and competitiveness.",
"questions": "1. Does the proposed Observation & Reasoning (O&R) reward mechanism introduce additional reasoning overhead or inference latency compared to conventional reward designs? It would be helpful if the authors could provide quantitative results or analysis on the computational cost and efficiency trade-offs, as well as discuss potential optimization strategies to mitigate these issues.\n2. The paper currently compares ReWatch-R1 with a limited set of baseline models. Have the authors considered evaluating against stronger closed-source models (e.g., Gemini 2.5 Pro/Flash, GPT-4o) or recent RL-based approaches such as TW-GRPO, VersaVid-R1? Such comparisons would help better position ReWatch-R1’s performance and highlight its relative advantages within the broader landscape of contemporary video reasoning models.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T13:54:53",
"modification_date": "2025-11-12T14:10:33",
"review_url": "https://openreview.net/forum?id=xindJJLSr1¬eId=Ivv1eEfmEN",
"license": "CC BY 4.0"
},
{
"id": "Otng8FCwwb",
"forum": "xindJJLSr1",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18045/Reviewer_bNUV",
"reviewer_name": "Reviewer_bNUV",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces novel datasets about captions, QAs, and CoTs, with a training pipeline to improve complex video reasoning. Additionally, based on the synthesized CoT data, this paper further designs a new reward for RL training. The trained model achieves state-of-the-art performance on several video reasoning benchmarks, outperforming models trained on other open-source datasets.",
"strengths": "- This paper finds drawbacks in the current synthesized long CoT datasets, e.g., Video-R1, and synthesizes a quality-improved video reasoning dataset.\n- This paper redesigns the reward shaping for video understanding.\n- The trained model sets a new SOTA on several benchmarks.",
"weaknesses": "- The contribution of this paper mainly lies in the dataset construction. Can you break down the improvement of each part, like adding timestamps into the captions, adding question difficulty, and the proposed reward design?\n- The introduction about the observation is not clear. How do you define observation formerly? In common, observation is visual content, while in your settings, the observation is processed captions. Would you consider implementing the thinking-with-image style training?",
"questions": "- As shown in Tab.3, why does the trained model show marginal improvement on the most popular, general video question-answering datasets with reasoning?\n- How do you distinguish reasoning video benchmarks and general video benchmarks?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T17:11:16",
"modification_date": "2025-11-12T14:10:34",
"review_url": "https://openreview.net/forum?id=xindJJLSr1¬eId=Otng8FCwwb",
"license": "CC BY 4.0"
},
{
"id": "Aj5J9x9qxo",
"forum": "xindJJLSr1",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18045/Reviewer_w6PG",
"reviewer_name": "Reviewer_w6PG",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper proposes ReWatch-R1, a novel approach to improve complex video reasoning in LVLMs by addressing the critical data bottleneck in existing methods. The authors introduce ReWatch, a large-scale dataset synthesized via a multi-stage agentic pipeline that includes temporally dense captions, high-difficulty multi-hop QA pairs, and video-grounded CoT traces generated through a Multi-Agent ReAct framework simulating human-like “re-watching.” They further develop an Observation & Reasoning (O&R) reward mechanism for RL, which jointly evaluates answer correctness and the factual grounding of intermediate reasoning steps.",
"strengths": "1. Novel QA curation methods: ReWatch is carefully designed to enforce video dependency and multi-step reasoning through contrastive QA generation and rigorous filtering, effectively eliminating textual shortcuts and hallucination-prone supervision.\n\n2. Innovative O&R reward mechanism: By evaluating both final answers and the fidelity of intermediate observations and reasoning steps, the O&R reward explicitly discourages hallucination and promotes evidence-based reasoning.",
"weaknesses": "1. Formula 4 merges captions across each time interval independently, which leads to a loss of referential consistency. For example, the caption for 0–10s might be “A man…”, while that for 10–20s is again “A man…”, even though it refers to the same individual as in the earlier segment. This inconsistency compromises the overall quality of the generated captions.\n\n2. The observation mechanism introduces inference latency. Despite yielding performance gains on certain video reasoning benchmarks, it provides only marginal improvements on most general video understanding benchmarks.\n\n3. The paper lacks comparisons with more recent and advanced baselines, such as VersaVid-R1 and GRPO-CARE.\n\n4. Quantitative analysis is missing: the paper does not present any concrete inference results or case studies from ReWatch-R1.",
"questions": "1. Line 207: Why does the original set of 85K QA pairs yield over 170K multiple-choice QA pairs? \n\n2. From the Chain-of-Thought (CoT) example shown in the bottom-right corner of Figure 2, is retrieving explicit timestamps truly necessary? Could the reasoning path be simplified—for instance, as follows: \n> *\"... So, I’ll \\<action\\> retrieve segments focusing on the man with blonde, curly hair on a jet ski interacting with a passenger \\</action\\>. \\<observation\\> The man on the jet ski passes a sandwich to the passenger, who then takes a bite. \\</observation\\> This directly answers the question. The food item passed was a sandwich...\"*",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-16T11:32:27",
"modification_date": "2025-11-12T14:10:35",
"review_url": "https://openreview.net/forum?id=xindJJLSr1¬eId=Aj5J9x9qxo",
"license": "CC BY 4.0"
}
] |
mjLMdY0xul
|
https://openreview.net/forum?id=mjLMdY0xul
|
Efficient Self-Evaluation for Diffusion Language Models via Sequence Regeneration
| 3.333333
| 4
|
[
4,
2,
4
] |
[
4,
4,
4
] | 3
|
[
"Diffusion Large Language Models"
] |
Diffusion large language models (dLLMs) have recently attracted significant attention for their ability to enhance diversity, controllability, and parallelism. However, their non-sequential, bidirectionally masked generation makes quality assessment difficult, underscoring the need for effective self-evaluation. In this work, we propose DiSE, a simple yet effective self-evaluation confidence quantification method for dLLMs. DiSE quantifies confidence by computing the probability of regenerating the tokens in the entire generated sequence, given the full context. This method enables more efficient and reliable quality assessment by leveraging token regeneration probabilities, facilitating both likelihood estimation and robust uncertainty quantification. Building upon DiSE, we further introduce a flexible-length generation framework, which adaptively controls the sequence length based on the model’s self-assessment of its own output. Experiments demonstrate that DiSE consistently improves performance across multiple datasets, increasing likelihood evaluation by $4.0$\% and uncertainty evaluation by $6.4$\%, while achieving up to a $32\times$ speedup over Monte Carlo simulation baseline, and additionally improving flexible-length generation accuracy. These results establish DiSE as an efficient and versatile self-evaluation framework for diffusion-based language models.
|
We propose a simple yet effective self-evaluation confidence quantification method for diffusion large language models (dLLMs), and introduce a flexible-length dLLM generation framework based on it.
|
foundation or frontier models, including LLMs
|
https://openreview.net/pdf?id=mjLMdY0xul
| 2025-09-04T21:14:57
| 3
|
[
{
"id": "yP2WOCnO6x",
"forum": "mjLMdY0xul",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2115/Reviewer_Ya5E",
"reviewer_name": "Reviewer_Ya5E",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes DiSE, a simple and highly efficient method for dLLMs to quantify their own confidence. The core idea is to feed the model's entire output sequence back into itself and calculate the probability of regenerating the existing tokens given the full sequence as context. This regeneration probability serves as a direct measure of the model's confidence in its own output. Meanwhile, the paper introduces a novel flexible length generation framework that uses DiSE to adaptively decide when to stop the generation process. Experiments demonstrate that DiSE increases likelihood evaluation accuracy by 4.0% and uncertainty evaluation by 6.4% on average.",
"strengths": "1. This paper proposes an effective self-evaluation method DiSE for dLLM that leverages regeneration probability. Compared to the iterative Monte Carlo baseline, which requires numerous forward passes, DiSE only needs a single forward pass.\n\n2. The paper introduces a flexible-length generation framework built on DiSE, which directly addresses the fixed-length generation constraint that typically limits dLLMs.\n\n3. Experimental results demonstrate the effectiveness of the proposed method. DiSE provides more accurate estimations of conditional likelihood and uncertainty, while the flexible length generation framework improves upon fixed-length generation across multiple datasets.",
"weaknesses": "1. The paper lacks experiments on a broader set of open-source dLLMs (e.g., Dream [1]) to sufficiently demonstrate the effectiveness and generalizability of the proposed DiSE.\n2. The experimental details regarding conditional likelihood estimation needs to be further clarified. For example, it is unclear whether the response used in the likelihood estimation is the model-generated output or the ground-truth answer.\n3. The experiments on flexible length generation lack comparison with other methods that also support dynamic length generation (e.g., DreamOn [2] and EditFlow [3]). Including these baselines is necessary for a comprehensive performance assessment.\n4. The proposed flexible length mechanism appears to have a limitation that it only supports increasing the length and can not support deletion. \n5. The performance of the proposed DiSE and flexible length generation is sensitive to hyperparameters. DiSE's performance varies with mode selection for different datasets (Fig. 6). And for flexible length generation, the optimal D is distinct across models (Fig. 7). This will limit generalizability to other dLLMs and tasks.\n6. The proposed DiSE needs to \"regenerate\", i.e., \"predict the tokens at positions that are already known.\" However, the dLLM training loss is calculated only for masked positions [1][4], which implies the model's predictive distribution over the non-masked (i.e., known) tokens is unsupervised. A critical point needs to be addressed: How does DiSE ensure that the \"regenerate\" probability distribution is reliable? The authors should provide a detailed discussion on this.\n\n[1] Dream 7B: Diffusion Large Language Models\n\n[2] DreamOn: Diffusion Language Models For Code Infilling Beyond Fixed-size Canvas\n\n[3] Edit Flows: Flow Matching with Edit Operations\n\n[4] Large Language Diffusion Models",
"questions": "See above weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T09:01:04",
"modification_date": "2025-11-12T10:53:54",
"review_url": "https://openreview.net/forum?id=mjLMdY0xul¬eId=yP2WOCnO6x",
"license": "CC BY 4.0"
},
{
"id": "gjbmCtID8q",
"forum": "mjLMdY0xul",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2115/Reviewer_6RsG",
"reviewer_name": "Reviewer_6RsG",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "The paper proposes the DiSE score, as an alternative to Monte-Carlo (MC) estimation, to evaluate the generation of dLLMs. The authors show that the DiSE score can, in a single forward pass, compare continuations (e.g in MCQ benchmarks), which is more efficient than MC integration. The authors argue that DiSE \"facilitates the likelihood computation\" (abstract), compared to MC estimation, however do not show how DiSE relates to the true data likelihood.",
"strengths": "1. DiSE is faster than MC evaluation on the likelihood, and can be used to compare answers to MCQ benchmarks, in a single forward pass, while MC evaluations require many iterations to approximate the true likelihood, and pick the most likely answer.\n2. DiSE leads to higher accuracy on GPQA and Math benchmarks, and improves the RoC AUC, compared to MC integration *with few samples (1 or 32)*.",
"weaknesses": "### Summary of the weaknesses\n1. **Likelihood**: DiSE is not shown to estimate or bound the true data likelihood; the reported \"gains\" in likelihood are not clear vs MC bounds and AR perplexity.\n2. **Factual inaccuracy on generation length**: claims that dLLMs require fixed lengths ignore semi‑autoregressive/variable‑length approaches explored in Llada, Plaid, MDLM.\n3. **Insufficient citation/positioning**: closely related masked‑LM pseudo‑likelihood work [4-6] is not cited. The existence of these works negatively reflect on the novelty of this work.\n\n\n### Major weaknesses\n1. **Likelihood computation**. In the abstract, the authors argue the DiSE *\"facilitates the likelihood computation\"*. However, it is not clear how DiSE relates to the true likelihood, or whether it even bounds the log-likelihood, unlike the MC bound present in previous work (MDLM, MD4, RADD, SEDD). Furthermore, still in the abstract, the authors argue the DiSE improves the likelihood evaluation by 4%. What does that mean? Does that mean the likelihood bound is 4% more tight? If so, you need to show that DiSE is a valid bound on the data likelihood.\n\n2. **Factual inaccuracies** (introduction and line 256). While certain dLLMs are trained on fixed-length sequences, prior work has studied flexible-length generation. For example, Llada [1] (which is used in the submission), trains on sequences of varying length during the SFT phase, to handle flexible-length generation. Plaid [2] uses a stochastic length during training which allows the model to generate shorter sequences. Finally, MDLM [3] samples semi-autoregressively, generating text block by block. *These prior work are not sufficiently discussed*.\n\n3. **Missing prior work**: [4-6] have investigated similar ideas, using BERT-style models, which are similar, if not equivalent to masked dLLMs, and trained with cross-entropy. These prior works concluded that BERT-style models can compute a pseudo-likelihood that might capture sentence fluency better than autoregressive scores. These prior works diminish the novelty of the current work.\n\n\n\n### Other weaknesses\n1. **Conflating likelihood and quality**. Abstract: *\"This method enables more efficient and reliable quality assessment by leveraging token regeneration probabilities, facilitating both likelihood estimation and robust uncertainty quantification.\"* The fact that the model is confident in its generation does not mean that the generation is high quality. For example, GPT-2 will assign a high likelihood, to a repetitive sequence such as \"the the the the ...\", as it is easy to predict, while it is *not* high quality.\n2. **Evaluation details** (lines 199-200): *\"We sample 15 well-formed sentences\"*. What is a well-formed sentence? Are these extracted from a specific benchmark? Did you write them yourself?\n3. **Choice of visualization**: In Figure 2, visualizing the difference of DiSE score between 15 \"well-formed\" and random sequences, as a 1D sequence of green blocks (the color representing the difference of DiSE score), is not appropriate. Consider using an histogram instead.\n4. **Patience $K$**. On line 471, in the experiment section, and shortly before the conclusion, the authors introduce the \"Patience\" parameter, but do not elaborate on what it represents. This needs to be introduced clearly in the methods section. \n5. **Limitations are not discussed**: The authors do not discuss the limitations of their work. 
While I understand that authors may worry that detailing limitations could be used by reviewers as grounds for rejection, I believe it is important to include some limitations.\n\n\n[1] Large Language Diffusion Models, Nie et al, 2025\n\n[2] Likelihood-Based Diffusion Language Models, Gularjani et al, 2023.\n\n[3] Simple and Effective Masked Diffusion Language Models, Sahoo et al, 2024.\n\n[4] Pre-Training Transformers as Energy-Based Cloze Models, Clark et al, 2020.\n\n[5] Pseudolikelihood Reranking with Masked Language Models, Salazar et al, 2019.\n\n[6] Masked Language Model Scoring, Salazar et al, 2020.",
"questions": "1. DiSE relies on model predictions at clean (unmasked) positions, where no loss is applied during training. This may be problematic, as these outputs are unconstrained. Can you clarify how these predictions behave in practice? Are they typically peaked on the true token, or do they have higher entropy compared to masked positions? Some quantitative evidence would be helpful.\n\n2. From line 304, it seems your method generates only one token per forward pass. If so, what is the motivation for using a diffusion LLM instead of an autoregressive model, given that dLLMs cannot benefit from KV caching? Additionally, if the method only generates one token at a time (by masking the last $D$ tokens and generating a single new token), how does this compare in practice to autoregressive generation (e.g., with Llada) in terms of speed and quality?\n\n3. Lines 299-300: *\"Let $\\bar R$ be the sequence after removing all EOT (did you mean EOS?) tokens from $R$.\"* How exactly are EOS tokens removed? If they appear at the end, do you truncate the sequence? If they occur in the middle, do you delete or mask them?\n\n4. You stop generation based on the DiSE score instead of the EOS token. Did you compare with a semi-autoregressive baseline that stops at the first EOS? How do performance and speed compare?\n\n5. Lines 377-409: Please clarify what *\"a 5.9% gain compared to the perplexity method of an auto-regressive LLM\"* means. 5.9% gain with respect to what metric? Note that your approach does not seem to compute a proper perplexity or a valid likelihood bound, unlike MC integration, which is a true bound by variational arguments.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T20:35:25",
"modification_date": "2025-11-12T10:53:54",
"review_url": "https://openreview.net/forum?id=mjLMdY0xul¬eId=gjbmCtID8q",
"license": "CC BY 4.0"
},
{
"id": "CXEjBfobgE",
"forum": "mjLMdY0xul",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2115/Reviewer_1P18",
"reviewer_name": "Reviewer_1P18",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "process makes quality assessment difficult. DiSE quantifies the model's confidence by calculating the probability of regenerating the tokens in its own output sequence given the full context; a higher regeneration probability signifies greater confidence in the output's quality. Building on this core idea, the research not only utilizes DiSE as an efficient tool for conditional likelihood estimation and uncertainty quantification but also proposes a flexible-length generation framework. This framework leverages DiSE as a real-time self-evaluation signal, enabling dLLMs to dynamically and adaptively determine the optimal output length, thereby overcoming the traditional limitation of fixed-length text generation. Experimental results demonstrate that DiSE significantly improves performance across multiple datasets, increasing likelihood evaluation by 4.0% and uncertainty evaluation by 6.4%, while achieving up to a 32x speedup compared to the conventional Monte Carlo simulation baseline and enhancing the accuracy of flexible-length generation. Ultimately, DiSE introduces an efficient and reliable self-evaluation mechanism for diffusion-based models.",
"strengths": "1. The method is simple and easy to use.\n\n2. The writing is clear and easy to follow.",
"weaknesses": "1. The author should obtain dllm in other training methods, such as the effectiveness of DiSE in dream.\n\n2. The author should explore the reasons why DiSE is feasible, rather than simply discovering this phenomenon. From the perspective of llada training, only the prediction of mask tokens will be supervised, while the logits generated by other known tokens are, intuitively speaking, invalid. If the author analyzes this phenomenon, the paper will be more convincing.\n\n3. The author should show the throughput (i.e., generation speed) of flexible generation using DiSE.",
"questions": "Q1-2: See weakness 1&3",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T10:39:58",
"modification_date": "2025-11-12T10:53:54",
"review_url": "https://openreview.net/forum?id=mjLMdY0xul¬eId=CXEjBfobgE",
"license": "CC BY 4.0"
}
] |
qVadFFSfrI
|
https://openreview.net/forum?id=qVadFFSfrI
|
Diagnosing and Remedying Knowledge Deficiencies in LLMs via Label-free Curricular Meaningful Learning
| 6
| 3.5
|
[
4,
6,
6,
8
] |
[
3,
4,
4,
3
] | 4
|
[
"Deficiency Diagnosis",
"Data Synthesis",
"LLMs Reasoning"
] |
Large Language Models (LLMs) have demonstrated impressive generalization ability by learning from extensive unlabeled text. However, they still exhibit reasoning mistakes, which can affect their trustworthiness and reliability. Although users can interact with LLMs and provide diverse and comprehensive queries to expose the flaws of LLMs, obtaining sufficient and effective feedback is demanding. Furthermore, comprehensively evaluating LLMs with limited labeled samples is difficult. These make it a challenge to diagnose and remedy the deficiencies in LLMs through rich label-free user queries. To tackle this challenge, and considering that LLMs' reasoning mistakes often stem from knowledge deficiencies, we propose label-free curricular meaningful learning (LaMer), which first employs relative entropy to diagnose and quantify knowledge deficiencies of LLMs in a label-free setting. Then, LaMer adaptively synthesizes augmentation data based on deficiency severity and progressively remedies them with a curricular remedy strategy. Experiments show that LaMer effectively diagnoses and remedies knowledge deficiencies in LLMs, improving various LLMs across seven out-of-distribution (OOD) reasoning benchmarks, achieving comparable results to baselines with only 40% training data. LaMer even surpasses methods that rely on labeled data for deficiency diagnosis. In application, LaMer offers a diagnostic tool for efficient LLM development.
|
Diagnose the knowledge deficiencies of LLMs and remedy them with a novel approach.
|
foundation or frontier models, including LLMs
|
https://openreview.net/pdf?id=qVadFFSfrI
| 2025-09-19T23:56:45
| 4
|
[
{
"id": "RXsidSYRgJ",
"forum": "qVadFFSfrI",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19580/Reviewer_jb7t",
"reviewer_name": "Reviewer_jb7t",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces a label-free framework for identifying and improving knowledge gaps in large language models (LLMs), which does not rely on costly human annotations. Specifically, the authors propose the relative-entropy-based diagnostic method that quantifies how much additional information external knowledge contributes to the LLM, which leads to detecting the areas of weakness. Then, the authors design the remedy process based on the curricular learning: synthesizing examples in proportion to deficiency severity and training the model from easier to harder deficiencies. The authors validate the proposed approach on four LLMs and multiple reasoning benchmarks, showing that it achieves consistent performance gains across them.",
"strengths": "* The processes to identify and remedy deficiencies in LLMs are convincing. \n* The proposed approach clearly outperforms existing relevant baselines.",
"weaknesses": "* In extracting the knowledge (needed to check and remedy deficiencies in LLMs), the assumption that, for each query, there should be relevant knowledge from an external knowledge base is very strong. In other words, what if the external knowledge base does not contain the relevant knowledge for each query? Additionally, the process of checking and remedying knowledge deficiencies can be done only for knowledge within the knowledge base, which seems a clear limitation of the proposed approach. Lastly, I am a bit confused whether the proposed approach is truly label-free: it requires the knowledge that is related and associated with the query, which may be considered as the label for the query. \n* There are relevant papers [A, B, C] that the authors should discuss and potentially compare with, especially [A] (which seems highly relevant). \n* The authors could more explicitly justify the advantage of the proposed approach over the unsupervised learning (or SFT) with experiments (i.e., the current setup does not fully justify its advantage over them, despite the claims in the paper). For example, the authors could train the LLMs with all the knowledge in the whole knowledge base and compare the proposed approach against it (i.e., the unsupervised setup) in both effectiveness and efficiency. \n* The performance of the baseline approaches on the Gemma-1.1 (2B) is inferior to the most basic setup (called Base), which may warrant more discussions. \n\n---\n\n[A] Structural Entropy Guided Agent for Detecting and Repairing Knowledge Deficiencies in LLMs, 2025.\n\n[B] R-Zero: Self-Evolving Reasoning LLM from Zero Data, 2025.\n\n[C] Self-Error-Instruct: Generalizing from Errors for LLMs Mathematical Reasoning, 2025.",
"questions": "Please see Weaknesses above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T14:43:38",
"modification_date": "2025-11-12T15:10:51",
"review_url": "https://openreview.net/forum?id=qVadFFSfrI¬eId=RXsidSYRgJ",
"license": "CC BY 4.0"
},
{
"id": "n7F782o3vG",
"forum": "qVadFFSfrI",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19580/Reviewer_85W6",
"reviewer_name": "Reviewer_85W6",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "They propose a method to first find the knowledge deficiency in LLMs which, based on the literature, is the source of mistakes in reasoning. To do so they use user queries and using an external knowledge base, they get the relevant info for each query. Then using relative entropy, they detect the knowledge deficiency. Afterwards, using curricular meaningful learning, they propose a method to remedy this knowledge deficiency. This method is label free and does not require human supervision.\nThey show that this method achieves comparable results to baselines with only 40% training data.",
"strengths": "No need for human annotation or labels.\nRich experiments\nClearly written and easy to understand",
"weaknesses": "This method is limited to adding knowledge from GenericsKB.\nThey do not show how much of knowledge in GenericsKB are actually missing in the LLM they study (maybe for some of them the knowledge is already there but fine tuning only makes that knowledge sharp.)\nHeavy reliance on ChatGPT (if ChatGPT makes errors, their method will as well).",
"questions": "1. in the paper you mention that: \"Subsequently, we adopt ChatGPT (Achiam et al., 2023) to synthesize the specified number of examples for the deficiencies in each group.\". How do you make sure that chatGPT has enough and correct information for that knowledge?\n2. why in Gemma-1.1 your method beats others in most of the cases but with Qwen2, it does not?\n3. \"We only keep the examples that possess valid answers for evaluation.\" what percentage of answers did you throw away? and how do you define “valid” here?\n4. \"we use ChatGPT to generate m= 4 pieces of knowledge for GSM8K\". What if ChatGPT makes a mistake? how do you guarantee ChatGPT is correct?\n5. Are you sure numbers is Table 6 are correct?\n6. \"Finally, we synthesize 3,750 examples to enhance Mistral, Qwen2, and Gemma-1.1, while 1,250 examples are synthesized to enhance LLaMA-3 due to denser knowledge in it.\". Why do you use different numbers for LLaMA-3?\n7. \"Therefore, Single enhances Mistral, Qwen2, and Gemma-1.1 with 1,500 examples, and it utilizes 600 examples to enhance LLaMA-3.\". Is it fair to use different number of examples for LLaMA?\n8. \"Naive and Single could supplement some knowledge to them but cause them to forget more useful knowledge.\" Your method also supplies some knowledge but it does not hurt the numbers. why single hurts? how do you select the example for single one?\n9. In section 3.6, item (2), you claim that “Naive and Single could supplement some knowledge to them but cause them to forget more useful knowledge.”. why this forgetting does not happen in LaMer?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T07:10:23",
"modification_date": "2025-11-12T15:10:52",
"review_url": "https://openreview.net/forum?id=qVadFFSfrI¬eId=n7F782o3vG",
"license": "CC BY 4.0"
},
{
"id": "OqAzHWM1jK",
"forum": "qVadFFSfrI",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19580/Reviewer_4XWw",
"reviewer_name": "Reviewer_4XWw",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper introduces LaMer, a novel framework for diagnosing and remedying knowledge deficiencies in LLM without relying on labeled data. The core idea is to use relative entropy to identify knowledge gaps by measuring the change in an LLM's output distribution before and after being provided with relevant external knowledge facts. Based on the severity of the diagnosed deficiency, LaMer employs a curricular learning strategy to synthesize a varying number of diverse examples to progressively fine-tune the model. The empirical results are strong and consistent across four different LLMs and seven OOD benchmarks, demonstrating the effectiveness and efficiency of the proposed method.",
"strengths": "1. The primary contribution (using relative entropy for label-free diagnosis of knowledge deficiencies) is well-motivated and novel. It addresses a critical and practical challenge in the continuous improvement of LLMs: how to perform targeted enhancements without expensive human annotation.\n2. The integration of curricular and meaningful learning is well-executed. The paper clearly demonstrates that diagnosing deficiencies first and then applying a targeted, progressive remedy (from easy to hard) is more effective than naive data augmentation or random-order training.",
"weaknesses": "1. The framework's effectiveness is highly dependent on two external components: a comprehensive knowledge base (GenericsKB) and a powerful teacher model (ChatGPT) for data synthesis. This reliance may limit its applicability in scenarios where a high-quality KB is unavailable or the cost of using a powerful synthesis model is prohibitive. The paper does not sufficiently discuss the impact of these dependencies.\n2. The RE thresholds used to categorize deficiencies into \"Easy,\" \"Normal,\" \"Hard,\" and \"Unfair\" are presented as heuristics. The paper lacks a rigorous justification or a sensitivity analysis for these values, making it unclear how robust the method is to these choices. \n3. The diagnosis step relies on knowledge retrieved via embedding similarity, which can be noisy and sometimes irrelevant. The paper does not address how such noisy knowledge might affect the stability of the posterior distribution Q and the resulting RE calculation. This raises concerns about whether the RE score is always a reliable indicator of a true knowledge deficiency.\n4. The methodology description contains ambiguities that hinder clarity. For instance, the paper refers to the \"negative log-likelihood (NLL) of each response oi conditioned on x,\"(Section 2.2) but 'x' is not defined in the context.",
"questions": "1. Regarding the knowledge retrieval process, how do you ensure the correctness of the knowledge recalled from the KB? Given that embedding-based matching can introduce noise, have you conducted any analysis on the impact of incorrectly retrieved or irrelevant knowledge on the RE calculation? How robust is the diagnosis mechanism to this noise?\n2. In the Section 2.2, you refer to \"The first situation suggests L might not grasp this knowledge or cannot properly apply this knowledge to problem-solving, while the second situation indicates that L does understand this knowledge but is easily misled by it.\", providing a specific interpretation for two scenarios of knowledge impact. Can you provide the reasoning behind the claim that when knowledge has a negative impact (misleading), it indicates \"L does understand this knowledge but is easily misled by it\"? Why does this scenario not also suggest a failure to properly integrate or contextualize new information, which could be seen as a form of \"not grasping\" the knowledge in a given context?\n3. Can you elaborate on the sensitivity of LaMer to the quality of its external components? Specifically, how would performance degrade if a less comprehensive knowledge base were used, or if a weaker, open-source model was used for data synthesis instead of ChatGPT?\n4. Regarding the RE thresholds in Table 1, have you performed any experiments to analyze their sensitivity? How were these specific values (0.1, 0.4, 0.7, 1.0) determined, and how critical are they to the overall performance of the curricular remedy strategy?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T11:32:52",
"modification_date": "2025-11-12T15:10:52",
"review_url": "https://openreview.net/forum?id=qVadFFSfrI¬eId=OqAzHWM1jK",
"license": "CC BY 4.0"
},
{
"id": "0alzCnbGzb",
"forum": "qVadFFSfrI",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19580/Reviewer_oqoY",
"reviewer_name": "Reviewer_oqoY",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes LaMer, a label-free pipeline that (1) retrieves generic facts for each unlabeled query from an external knowledge base, (2) diagnoses “knowledge deficiencies” by comparing a model’s predictions with and without the retrieved fact, and (3) remedies those deficiencies by synthesizing training examples whose quantity scales with the diagnosed severity and training in an easy-to-hard curriculum. Using open-weight models, the method reports consistent gains over several augmentation baselines on seven out-of-distribution reasoning benchmarks, often matching or surpassing baselines with about 40% of the training data.",
"strengths": "- The paper ties together retrieval, a simple distribution-shift diagnostic, and a curriculum that scales data to severity, turning unlabeled user queries into targeted training data without manual labels. The case study and “remedied examples” analysis make the mechanism concrete and inspectable.\n- The paper contrasts label-free deficiency detection against perplexity and a label-reliant data-mining baseline, studies the curriculum order (vs. shuffling), and separates “helpful” vs. “misleading” retrieved facts—showing both expose repairable deficiencies. \n- It specifies the KB (GenericsKB with confidence filtering), retrieval embedding (FlagEmbedding), the number of matched facts per query, the synthesis protocol, and PEFT training settings, which lowers the barrier to trying LaMer in practice.",
"weaknesses": "- The diagnosis hinges on retrieved facts being relevant and on a chosen threshold to flag deficiencies. If retrieval drifts (spurious or overly generic facts) or if the threshold is mis-set, the pipeline may teach to noise. \n- The severity buckets and the number of synthesized examples per bucket are fixed heuristics. While effective, it maybe interesting to explore the direction of an adaptive curricula (e.g., stopping rules per deficiency) or data-budget trade-offs across tasks and models. \n- While “remedied examples” are counted, there’s no probing of whether the method inadvertently forgets useful adjacent knowledge or whether it changes calibration/uncertainty properties.",
"questions": "- I am a bit curious about the retrieval robustness, how sensitive is the deficiency diagnosis to the retrieval pipeline (embedding model choice, number of facts, KB domain coverage)? What happens if retrieved facts are partially wrong or overly generic?\n- How did you select the deficiency threshold, and how stable are results across plausible ranges? Could you replace hard thresholding with an adaptive percentile per domain or model?\n- How would you adapt LaMer when queries are private or streaming (e.g., enterprise chat), and when external KBs are domain-specific or scarce?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-24T04:21:36",
"modification_date": "2025-11-12T15:10:53",
"review_url": "https://openreview.net/forum?id=qVadFFSfrI¬eId=0alzCnbGzb",
"license": "CC BY 4.0"
}
] |
7FfZc9MePg
|
https://openreview.net/forum?id=7FfZc9MePg
|
PersonBias: A Lightweight Framework for Personalized Bias Mitigation in Large Language Models
| 3
| 3.5
|
[
2,
4,
4,
2
] |
[
3,
4,
3,
4
] | 4
|
[
"Personalized Debiasing",
"Dynamic Intervention",
"Large Language Models",
"Bias-Utility Trade-off"
] |
Social bias in large language model (LLM) outputs has emerged as a critical challenge in artificial intelligence. While existing bias detection methods pursue comprehensive identification and elimination of implicit biases, this \textit{one-size-fits-all} approach presents significant limitations. Excessive bias correction causes responses to deviate from user query intent, comprehensive detection demands extensive human annotation and computational resources, and critically, user heterogeneity dictates that different individuals with diverse backgrounds and personality traits exhibit varying sensitivities toward different bias types. To address these challenges, we propose PersonBias, a lightweight, personalized debiasing framework that balances bias mitigation with response quality optimization. Our approach leverages LLMs to automatically extract user personality features from conversational contexts, eliminating the need for explicit demographic data collection. We develop a dual-tower encoder architecture with cross-attention mechanisms to model user-specific bias sensitivities, employing parameter-efficient fine-tuning that freezes encoder parameters while optimizing only projection layers and attention mechanisms. Rather than requiring model-specific fine-tuning, PersonBias operates through real-time intervention during generation, dynamically evaluating and adjusting outputs at fixed token intervals to prevent bias accumulation while maintaining relevance and utility. Experiments on multi-turn dialogue datasets demonstrate that PersonBias achieves superior bias reduction and utility preservation compared to prompt-based and fine-tuning baselines, offering a practical and adaptive solution for personalized fairness in LLMs.
|
We introduce PersonBias, a plug-and-play module that detects and mitigates social biases in LLM outputs by dynamically adapting to individual user preferences, balancing fairness with response quality.
|
alignment, fairness, safety, privacy, and societal considerations
|
https://openreview.net/pdf?id=7FfZc9MePg
| 2025-09-17T14:57:48
| 4
|
[
{
"id": "W4XPLxbMIt",
"forum": "7FfZc9MePg",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8599/Reviewer_5vGB",
"reviewer_name": "Reviewer_5vGB",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 1,
"presentation": 3,
"summary": "This paper introduces a framework to reduce LLM biases in individual-basis. The approach extracts individual-related features and built a cross-attention framework to train reward functions. These functions are then used to refine the LLM output iteratively. Experimental results show small improvements over existing baselines.",
"strengths": "* This paper recognizes that biases may manifest differently across different individuals and the need to mitigate them in personalized fashion.\n\n* The method is well written and easy to follow/understand.\n\n* The proposed approach accounts for the scalability challenges through parameter-efficient finetuning.",
"weaknesses": "* It is unclear whether the improvements that the authors show in Table 1 are statistically signficantly. It is well known that LLM judges are of high variance when asked to directly output scores. The improvements in US and BS are mostly within the range of 5 points, which could totally be noise rather than material improvements.\n\n* I think this paper conflates personal preferences and biases. The way that the reward model was trained can well be just about personal preferences rather than bias. It is unclear a user liking/disliking a response will necessarily have things to do with \"biases\" in LLMs. When it comes to learning personal preferences, there are a sea of existing literature in personalized LLMs that the authors didn't consider.\n\n* For prompt-based baselines, it will be more informative if the authors can compare to more capable models like GPT-4 series rather than 7B models that are limited in their prompting capabilities.",
"questions": "Please see points listed in weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T12:14:34",
"modification_date": "2025-11-12T12:07:22",
"review_url": "https://openreview.net/forum?id=7FfZc9MePg¬eId=W4XPLxbMIt",
"license": "CC BY 4.0"
},
{
"id": "2m7SEEpMoA",
"forum": "7FfZc9MePg",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8599/Reviewer_paDE",
"reviewer_name": "Reviewer_paDE",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "In this work, the authors introduce a framework termed PersonBias, which is intended to mitigate personalized bias in LLMs. The authors argue that, unlike a one-size-fits-all bias mitigation strategy, which often overcorrects user intent, the proposed framework considers each user's background to then debias accordingly. Personality features, such as religion, gender, age, and country of origin, are inferred from conversational history without explicit demographic data. The paper then develops a cross-attention-based encoder that learns associations between user characteristics and text bias patterns using fine-tuning. Experiments on FairMT datasets (NegF, ScaQ, AnaE) with multiple base LLMs (Qwen2.5, Llama2-Chat) demonstrate improved Bias Scores (lower bias) while maintaining or improving Utility Scores (response quality).",
"strengths": "The following are the overall strengths of the paper:\nA. The work introduces personalized bias mitigation, which is a very underexplored field of study within bias and ethics in NLP, and therefore, the work is novel and tackles an interesting issue. \nB. The work combines both LLM-based personality extraction with a dual-tower reward model using fine-tuning. This approach technically strengthens the methodology. \nC. Real-time bias monitoring is definitely a step beyond static post-hoc debiasing and is a strength in the work.\nD. The results shown by the authors clearly demonstrate consistent gains in bias reduction and utility preservation across diverse LLM backbones, showcasing the impact of their proposed framework. \nE. The paper is well written and the illustrations help clarify the intend of the work and the narration.",
"weaknesses": "Even with the novel approach and the mentioned strengths, multiple weaknesses needs to be resolved in this work. They are as follows:\nA. Experiments are limited to benchmark datasets with constrained domains (FairMT, CREHate). It remains unclear whether the model generalizes to open-domain or real-world conversations. This raises questions about the results shown.\nB. As the bias showcased has ties to sociotechnical elements of bias mitigation in NLP, it was interesting that no human evaluation or qualitative analysis was done to validate whether personalization indeed aligns with user satisfaction or perceived fairness.\nC. The LLM-driven user feature extraction module lacks accuracy assessment. Errors in inferred user traits could propagate bias or misalignment downstream.\nD. While the paper’s ethics section acknowledges risks, the framework may still very much encode user-preferred biases (e.g., reflecting biased preferences of users). No mechanism is proposed to constrain such behavior, nor was the larger consequence of the same discussed in this work. \nE. The chosen baselines (P-Base and BiasDPO) are limited. Recent in-context debiasing and retrieval-based personalization methods are not compared. The argument needs to be stronger in explaining why this was chosen and how it's strongly relevant to the application the authors are trying to address. \nF. The inference of personal attributes from dialogue history, though novel, lacks quantitative validation. There are no accuracy metrics shown for the personality extraction module.\n\nMinor:\nA. Some implementation details (hyperparameters, dataset splits, specific reward model training process) are underspecified. This could strengthen reproducibility specifically. \nB. Occasional typos inconsistencies (e.g., “Merhod” heading)\nC. Some redundancy across Sections 3–4 in describing the motivation and dual-tower setup.",
"questions": "Answering the weakness stated above would help me better understand the overall relevance and strength of this work for ICLR.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T01:02:06",
"modification_date": "2025-11-12T12:07:23",
"review_url": "https://openreview.net/forum?id=7FfZc9MePg¬eId=2m7SEEpMoA",
"license": "CC BY 4.0"
},
{
"id": "DPEdleNPGP",
"forum": "7FfZc9MePg",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8599/Reviewer_dCCo",
"reviewer_name": "Reviewer_dCCo",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "PersonBias targets social bias mitigation in multi-turn conversations, rejecting a one-size-fits-all approach in favor of user-centered personalization to achieve a better trade-off between fairness and utility. The framework infers a user profile from dialogue history with an LLM, then uses a two-tower encoder with cross-attention to learn correlations between text spans and user preferences, producing a personalized reward that dynamically filters candidates during generation. This enables real-time suppression of user-disliked bias without retraining the base model. Experiments show simultaneous improvements in bias-mitigation metrics and utility/satisfaction on the given evaluations. The paper’s ethics section also acknowledges potential privacy and value risks introduced by personalization and attribute inference, emphasizing the need for appropriate safeguards.",
"strengths": "1.\tThe paper tackles an important and valuable problem in LLM fairness. First, fairness and debiasing in multi-turn conversations are closer to real-world scenarios and thus more meaningful. Second, the proposed approach avoids one-size-fits-all debiasing, which helps prevent over-protection and achieves a better balance between fairness and utility.\n2.\tThe writing is strong and well-organized, with clear structure and flow.\n3.\tThe use of a two-tower encoder with attention to learn correlations between specific text spans and user preferences, followed by Dynamic Personalized Debiasing that applies a personalized reward to periodically filter decoding candidates, allows the system to suppress bias the user dislikes without retraining the base model. Experiments show simultaneous improvements in bias mitigation scores and utility on the given evaluation, achieving a simple, cost-effective design with real-time control.",
"weaknesses": "1.\tIn Section 4.1, the current attribute inference allows strong inferences from weak cues—for example, mapping interests/occupations/household roles directly to gender, religion, or age group. This can “write in” stereotypes and errors at the system’s entry point and then propagate them along the personalization pipeline. Once the initial profile is biased or incorrect, subsequent filtering optimizes around the wrong user persona, potentially removing neutral/useful content and, in some scenarios, catering to or reinforcing harmful preferences. The paper lacks consideration of uncertainty in this stage and does not report attribute inference accuracy or calibration, making it difficult to assess overall robustness and compliance.\n---\n2.\tMechanistically, Dynamic Personalized Debiasing amounts to preference-weighted re-ranking/re-weighting of the generation process. If the personalization signal itself is biased (due to data issues or inference errors), attention will amplify correlations aligned with that bias, leading to bias amplification. At the same time, the paper lacks a general safety/fairness floor orthogonal to personalization (e.g., a hard rejection module for hate or discriminatory content). As a result, the system lacks verifiable guarantees for balancing “satisfying individual preferences” against “maintaining public safety and fairness baselines.”",
"questions": "See Weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T21:51:39",
"modification_date": "2025-11-12T12:07:23",
"review_url": "https://openreview.net/forum?id=7FfZc9MePg¬eId=DPEdleNPGP",
"license": "CC BY 4.0"
},
{
"id": "9bmZQxOAT7",
"forum": "7FfZc9MePg",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8599/Reviewer_S2Kd",
"reviewer_name": "Reviewer_S2Kd",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper introduces PersonBias, a lightweight and personalized debiasing framework for large language models (LLMs). The motivation is that existing fairness techniques treat all users the same, neglecting users' culture differences and preferences. The method extracts inferred demographics from conversation history, leverages a dual tower personalized reward model, and dynamically debias during inference. The authors test on multi-turn dialogue datasets and compare against prompt-based and fine-tuning-based debiasing with strong Bias Score and Utility Score. \n\nI think the personalized setting idea is more novel and I am not sure many papers explore this direction. However, the lightweight training and real time bias monitoring claims are incremental as the method is not inference time technique only and resembles FairSteer in terms of dynamic monitoring.",
"strengths": "The main novelty and strength revolves around the idea of personalized fairness: \nThe paper introduces an under-explored but intuitively compelling idea: that bias mitigation should adapt to user-level sensitivity rather than enforcing a single global fairness target. This reframing of fairness as personalized alignment adds an original perspective to the LLM bias literature, which has largely focused on population-level or dataset-level corrections. Doing so can reinforce stereotypes and create more biased users and the authors acknowledges this in the ethics statement, showing awareness of this challenge, framing their method as augmenting fairness sensitivity rather than tailoring harmful biases.\n\nThe method is technically sound: \nThe proposed method combines the user feature extraction, dual-tower reward model, and dynamic inference-time control fit together into a clear pipeline. Although it still involves some fine-tuning, the selective optimization of projection and attention layers represents a thoughtful compromise between computational tractability and adaptability. Attention to real time bias accumulation is also good, a nuance often missing in previous one-shot post-hoc debiasing.\n\nEmpirical validation across multiple models and datasets:\nExperiments span several base models (Qwen2.5-3B/7B, Llama2-7B) and datasets (FairMT subsets), showing consistent improvements in both Bias Score and Utility Score. The comparative baselines make sense too.",
"weaknesses": "Personalization conceptually interesting but empirically shallow:\nThe paper’s central claim is that users differ in “bias sensitivities,” and that personalizing debiasing improves both fairness and satisfaction. However, the experiments only simulate this effect using synthetic or inferred attributes without real user feedback or behavioral validation. Some user study or evidence that personalization meaningfully changes debiasing behavior can strengthen the main claim.\n\nLimited novelty in technical components and some overstating of contributions: \nThe architecture combines standard ingredients, none of which are novel in themselves. The “dynamic debiasing” mechanism closely resembles existing inference-time steering or filtering approaches (e.g., BiasFilter, FairSteer). The contribution lies more in integrating these ideas around a new conceptual framing than in introducing fundamentally new algorithms. Although the paper emphasizes resource efficiency, the method is not zero-shot or training free. Also, the dynamic inference part may increase inference latency, offsetting savings. \n\nDependence on conversation history: \nThe approach relies on extracting demographic or personality information from prior dialogue history to build user profiles. In realistic settings, many users lack sufficient history for meaningful inference (cold-start problem), and the inferred traits (religion, gender, etc.) raise privacy and ethical concerns. The paper acknowledges this risk but offers no mitigation strategy beyond general cautions. The framework may work only when sufficient prior data and controlled environments exist, limiting real-world applicability.\n\nResults are good but a bit weak and shallow: \nThe use of FairMT subsets and model-generated bias/utility scores is reasonable for benchmarking. However, the reported improvements (1–3 points on a 0–99 scale) may not be statistically significant. Some qualitative examples or error analyses are helpful in showing actual decreases in harmful stereotypes rather than superficial lexical cues.",
"questions": "Validity of personalization: \nHow can we be confident that PersonBias truly captures causal differences in user bias perception rather than artifacts of synthetic user features or dataset correlations?\nHave you tested whether swapping user profiles or ablating personalization changes outputs in meaningful and interpretable ways?\nHow reliable are these inferred attributes, and how do errors affect debiasing behavior?\nMore fundamentally, what safeguards prevent the system from amplifying or stereotyping users based on these inferred characteristics?\n\nScalability and deployment feasibility:\nHow scalable is this approach across thousands of users?\nWhat are the compute and latency costs relative to simpler inference-time debiasing (e.g., BiasFilter, activation steering)?\nCould a single shared model generalize across diverse users without retraining?\n\nEvaluation and significance of results:\nHave you validated these with human judgment or statistical significance testing?\nCan you provide qualitative examples illustrating how personalization changes model behavior in concrete dialogue contexts?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-18T11:07:12",
"modification_date": "2025-11-12T12:07:23",
"review_url": "https://openreview.net/forum?id=7FfZc9MePg¬eId=9bmZQxOAT7",
"license": "CC BY 4.0"
}
] |
l7Vb3yxmuz
|
https://openreview.net/forum?id=l7Vb3yxmuz
|
WINA: Weight Informed Neuron Activation for Accelerating Large Language Model Inference
| 5.666667
| 3
|
[
6,
6,
6,
6,
4,
6
] |
[
2,
3,
3,
3,
3,
4
] | 6
|
[
"Sparse Activation",
"Efficient Inference"
] |
The ever-increasing computational demands of large language models (LLMs) make efficient inference a central challenge. While recent advances leverage specialized architectures or selective activation, they typically require (re)training or architectural modifications, limiting their broad applicability. Training-free sparse activation, in contrast, offers a plug-and-play pathway to efficiency; however, existing methods often rely solely on hidden state magnitudes, leading to significant approximation error and performance degradation. To address this, we introduce WINA (Weight-Informed Neuron Activation): a simple framework for training-free sparse activation that incorporates both hidden state magnitudes and weight matrix structure. By also leveraging the ℓ2-norm of the model’s weight matrices, WINA yields a principled sparsification strategy with provably optimal approximation error bounds, offering better and tighter theoretical guarantees than prior state-of-the-art approaches. Overall, WINA also empirically outperforms many previous training-free methods across diverse LLM architectures and datasets: not only matching or exceeding their accuracy at comparable sparsity levels, but also sustaining performance better at more extreme sparsity levels. Together, these results position WINA as a practical, theoretically grounded, and broadly deployable solution for efficient inference. Our source code is anonymously available at https://anonymous.4open.science/r/wina-F704/README.md.
|
foundation or frontier models, including LLMs
|
https://openreview.net/pdf?id=l7Vb3yxmuz
| 2025-09-17T05:11:00
| 6
|
[
{
"id": "liI8CjRvSr",
"forum": "l7Vb3yxmuz",
"review_number": 6,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8065/Reviewer_zwra",
"reviewer_name": "Reviewer_zwra",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper addresses the important challenge of reducing inference cost in LLMs without degrading output quality. Existing training-free sparse activation methods often rely solely on hidden state magnitudes, which can lead to significant approximation error, particularly at high sparsity levels.\n\nThe authors propose WINA (Weight-Informed Neuron Activation), a simple yet effective framework that incorporates both hidden state magnitudes and the ℓ2-norm of weight matrices into neuron selection. This provides a principled sparsification strategy with provably optimal approximation error bounds, yielding tighter theoretical guarantees than prior methods.\n\nThe method is empirically validated across multiple widely used LLMs, including Llama-2-7B, Llama-3-8B, Mistral-7B, and Phi-4-14B, and evaluated on diverse tasks such as general reasoning (MMLU), mathematics (GSM8K), and coding (HumanEval). WINA is compared against several strong baselines, including TEAL, R-Sparse, and CATS. The results show that WINA performs comparably to prior methods at low sparsity and significantly better at high sparsity, achieving several percent improvement in commonsense reasoning accuracy and sustaining performance under extreme sparsity levels.\n\nOverall, WINA is presented as a practical, theoretically grounded, and broadly deployable approach for efficient inference in LLMs.",
"strengths": "The problem addressed is highly relevant, as reducing LLM inference cost without sacrificing output quality is an important challenge. The method is theoretically grounded, as incorporating weight norms provides a principled sparsification strategy with provable error bounds. The empirical evaluation is extensive, covering multiple LLMs, a range of tasks, and both low and high sparsity levels, and the method is compared to strong baselines including TEAL, R-Sparse, and CATS. The approach is practical and easy to deploy, as it is training-free and plug-and-play, making it broadly applicable. The results demonstrate robustness, as the method maintains competitive performance across different sparsity regimes and models.",
"weaknesses": "The main contribution is a relatively straightforward extension of existing sparse activation methods, which could be considered incremental, though it is strengthened by solid theoretical and empirical support. The paper could benefit from a discussion of potential limitations, such as scenarios where weight-informed selection might be less effective or challenges when scaling to very large models beyond those tested.",
"questions": "The authors propose incorporating weight norms into the neuron selection process. Could the authors clarify whether this additional step increases computation amount during inference, and if so, provide benchmark comparisons to quantify the overhead relative to other training-free sparse activation methods?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T15:23:59",
"modification_date": "2025-11-12T12:00:53",
"review_url": "https://openreview.net/forum?id=l7Vb3yxmuz¬eId=liI8CjRvSr",
"license": "CC BY 4.0"
},
{
"id": "sMP6jmmOTK",
"forum": "l7Vb3yxmuz",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8065/Reviewer_SJQR",
"reviewer_name": "Reviewer_SJQR",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes WINA (Weight Informed Neuron Activation), a training-free sparse activation method that combines hidden state magnitudes with the weight matrix structure to guide neuron selection. WINA is proven to minimize approximation error under column-wise orthogonality and monotonic activation assumptions. In the experiments, it outperforms other training-free methods like CATS, R-Sparse and TEAL across multiple LLM architectures (Llama2/3, Mistral, Phi-4) and benchmarks (MMLU, GSM8K, HumanEval), achieving over 60% FLOPs reduction at 65% sparsity while preserving accuracy.",
"strengths": "- The proposed method introduces a simple yet effective training-free sparse activation mechanism that combines both hidden-state magnitudes and the column-wise L2-norm of weight matrices to guide neuron selection.\n- The theoretical analysis is rigorous and well structured, providing provably optimal approximation error bounds under clear and interpretable assumptions (column-wise orthogonality and monotonic activation).\n- The experiments are comprehensive, covering multiple model architectures, quantization methods, and ablations, demonstrating consistent improvements that align with theoretical predictions.",
"weaknesses": "- The models in the experiments are small dense LLMs. Large-scale or MoE architectures (e.g., DeepSeek-V3, Llama4, GPT-OSS) which are more common in product deployment workloads are not tested. It’s unclear whether WINA’s activation gating would maintain efficiency with expert routing sparsity in these larger models.\n- The evaluation focuses on theoretical FLOPs reduction but lacks real-world inference measurements such as latency or throughput on inference frameworks. Without kernel-level or runtime validation, the practical performance benefits of WINA remain unclear, especially given the hardware inefficiency of non-structured sparsity.\n- The theoretical assumptions rely on column-wise orthogonality and monotonic activation functions, which may not strictly satisfied in real transformer models.",
"questions": "- How does WINA perform on large MoE models (e.g., DeepSeek, Llama4, GPT-OSS)? It would help to understand how the method scales to production scale LLMs.\n- Could you evaluate WINA’s actual latency or throughput in real inference scenarios? What’s the challenges to integrate this method to inference frameworks?\n- How does WINA perform under long-context settings (e.g., 16K–128K tokens)? Are the top-K activation patterns stable as sequence length increases?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T02:48:02",
"modification_date": "2025-11-12T12:00:53",
"review_url": "https://openreview.net/forum?id=l7Vb3yxmuz¬eId=sMP6jmmOTK",
"license": "CC BY 4.0"
},
{
"id": "ADyOI3ABxA",
"forum": "l7Vb3yxmuz",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8065/Reviewer_DJxS",
"reviewer_name": "Reviewer_DJxS",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes a new sparse activation method, WINA, to improve LLM inference efficiency. WINA or Weight Informed Neuron Activation uses both the hidden state magnitude and weight matrix structure while sparsifying the activation; previous work only relies on the hidden magnitude. \nWINA uses the product of l2 norm of the column vector of the weight matrix and the hidden state magnitude to select the top-K neuron with theoretical justification. Extensive experiments are provided with popular benchmark datasets.",
"strengths": "1. The paper is well-written and easy to follow, and the idea is intuitive. \n\n2. Theoretical justification showing that using both the L2 norm of the column vector and the hidden state magnitude yields an optimal solution and reduces error (section 3). \n\n3. Results provided are quite extensive; the method is evaluated on multiple datasets and different downstream tasks. \n\n4. Results in tables 3 and 4 show consistent improvements, especially at higher sparsities, compared to other baselines, showing the efficacy of the method.\n\n5. Additional results showing on quantization are provided, showing WINA is compatible with quantization.",
"weaknesses": "1. The technical novelty of the method is limited (however, results show it improves over baselines). \n\n2. It is not clear why the orthogonality of the weight matrix is enforced (in sec 3.4)? Does this orthogonality hold in a general setting as well?\n\n3. Recent works have shown that LLM compression has an unintended impact on the model bias. It would be helpful to also evaluate the impact of the proposed method on model bias. \n\n\n[1]. Strubell et al., Understanding the Effect of Model Compression on Social Bias in Large Language Models",
"questions": "1. How is LayerNorm monotonically increasing? (Line 238)",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T15:51:48",
"modification_date": "2025-11-12T12:00:54",
"review_url": "https://openreview.net/forum?id=l7Vb3yxmuz¬eId=ADyOI3ABxA",
"license": "CC BY 4.0"
},
{
"id": "UcMZUIJTYb",
"forum": "l7Vb3yxmuz",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8065/Reviewer_1iPj",
"reviewer_name": "Reviewer_1iPj",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper aims to improve inference efficiency of LLMs and proposes skipping unnecessary neuron computations in feed-forward network (FFN) layers. The proposed method does this with a gating function that uses the magnitude of a neuron's output weight to assess its importance. Less important neurons (smaller output weight magnitue and smaller intermediate activation) are skipped. The paper has empirical results on Llama 3-8B and Qwen 2-7B, demonstrating significant speedups with minimal impact on model accuracy.",
"strengths": "- The proposed method, WINA, is easy to apply to existing pre-trained LLMs since it is a post-training method that does not require any fine-tuning or retraining. \n\n- To the best of my knowledge, assigning importance scores based on output weight's magnitude is a novel idea. \n\n- Empirical results are strong across various benchmarks and model sizes.",
"weaknesses": "- The linked source code is not available. \n\n- The success of WINA relies on the threshold used to determine which neurons to skip. Tuning this threshold would be costly.",
"questions": "Do the authors have any suggestions on how to tune the threshold efficiently?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T15:31:19",
"modification_date": "2025-11-12T12:00:54",
"review_url": "https://openreview.net/forum?id=l7Vb3yxmuz¬eId=UcMZUIJTYb",
"license": "CC BY 4.0"
},
{
"id": "JzLMG2UPB6",
"forum": "l7Vb3yxmuz",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8065/Reviewer_6SUJ",
"reviewer_name": "Reviewer_6SUJ",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper presents WINA (Weight-Informed Neuron Activation), a training-free sparse activation method that accelerates LLM inference by selecting neurons based on both hidden state magnitudes and weight matrix norms. This weight-informed approach achieves tighter theoretical error bounds and better accuracy than prior methods like TEAL and CATS, maintaining strong performance even at high sparsity levels across various LLMs and tasks, and showing compatibility with quantized models.",
"strengths": "1. The paper is well-organized and easy to follow.\n\n2. The figures are clearly and beautifully presented.\n\n3. The experiments conducted on extensive datasets provide strong validation and demonstrate the integrity of the proposed method.",
"weaknesses": "1. My main concern is related to the performance measurement. The authors claim that WINA is more efficient than previous methods, \"potentially translating to faster inference speeds and lower computational costs.\" Could the authors provide empirical evidence, such as wall-clock time or GPU memory usage, to support this claim?\n\n2. The improvement in synthetic results shown in Table 2 is substantial, but the gains in real-world LLM experiments are relatively modest. Could the authors clarify the reason for this large discrepancy between synthetic and real-world results?\n\n3. How can the assumption of \"column-wise orthogonality\" in the theorems be verified? Is there any experimental evidence to support this assumption?",
"questions": "See weaknesses above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T16:28:27",
"modification_date": "2025-11-12T12:00:55",
"review_url": "https://openreview.net/forum?id=l7Vb3yxmuz¬eId=JzLMG2UPB6",
"license": "CC BY 4.0"
},
{
"id": "DOu8h7E4B9",
"forum": "l7Vb3yxmuz",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8065/Reviewer_iVmy",
"reviewer_name": "Reviewer_iVmy",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposed a framework WINA for training-free sparse activation that incorporates both hidden state magnitudes and weight matrix structure, which combines the magnitude of activations with the column-wise norm of the weight matrices to preserve the top-k activations. The authors claimed that WINA can achieve a lower approximation error bound under several assumptions and is model-agnostic. Methods are tested on Llama-2-7B, Llama-3-8B, Mistral-7B, and Phi-4-14B models across several benchmark datasets, which demonstrate that WINA can achieve superior performance under various sparsity ratios.",
"strengths": "1. The combinatorial gating strategy is reasonable, which produces a tighter approximation error bound.\n2. WINA as a training-free method is friendly for deployment.\n3. The paper is well written and easy to follow.",
"weaknesses": "1. There are no end-to-end latency performance comparisons between WINA and previous methods like TEAL, CATS, and R-Sparse.\n2. For the math reasoning task like GSM8K, the aggressive sparsity can induce significant damage to the model performance, dropping accuracy from 50 to 7, although WINA reported superior performance to the baseline methods.\n3. When the batch size is larger, will the sparsity be affected and further the speedup gain be degraded?",
"questions": "See weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T12:17:10",
"modification_date": "2025-11-12T12:00:55",
"review_url": "https://openreview.net/forum?id=l7Vb3yxmuz¬eId=DOu8h7E4B9",
"license": "CC BY 4.0"
}
] |
|
qrKymA0zuY
|
https://openreview.net/forum?id=qrKymA0zuY
|
Explaining Multimodal LLMs via Intra-Modal Token Interactions
| 4
| 3.5
|
[
4,
6,
4,
2
] |
[
4,
4,
3,
3
] | 4
|
[
"XAI",
"Multimodal LLM"
] |
Multimodal Large Language Models (MLLMs) have achieved remarkable success across diverse vision-language tasks, yet their internal decision-making mechanisms remain insufficiently understood. Existing interpretability research has primarily focused on cross-modal attribution, identifying which image regions the model attends to during output generation. However, these approaches often overlook intra-modal dependencies. In the visual modality, attributing importance to isolated image patches ignores spatial context due to limited receptive fields, resulting in fragmented and noisy explanations. In the textual modality, reliance on preceding tokens introduces spurious activations. Failing to effectively mitigate this interference compromises attribution fidelity. To address these limitations, we propose enhancing interpretability by leveraging intra-modal interaction. For the visual branch, we introduce Multi-Scale Explanation Aggregation (MSEA), which aggregates attributions over multi-scale inputs to dynamically adjust receptive fields, producing more holistic and spatially coherent visual explanations. For the textual branch, we propose Activation Ranking Correlation (ARC), which measures the relevance of contextual tokens to the current token via alignment of their top-$k$ prediction rankings. ARC leverages this relevance to suppress spurious activations from irrelevant contexts while preserving semantically coherent ones. Extensive experiments across state-of-the-art MLLMs and benchmark datasets demonstrate that our approach consistently outperforms existing interpretability methods, yielding more faithful and fine-grained explanations of model behavior.
|
interpretability and explainable AI
|
https://openreview.net/pdf?id=qrKymA0zuY
| 2025-09-20T07:30:34
| 4
|
[
{
"id": "Ys70HRJ3G3",
"forum": "qrKymA0zuY",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22000/Reviewer_UEeF",
"reviewer_name": "Reviewer_UEeF",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes a framework to enhance the interpretability of Multimodal Large Language Models (MLLMs) by intra-modal interactions in vision-language modalities. The two core contributions are: 1. Multi-Scale Explanation Aggregation (MSEA): Aggregates visual token attributions across multiple image resolutions; 2. Activation Ranking Correlation (ARC): Identifies and suppresses spurious activations from irrelevant preceding tokens in the textual modality, based on alignment of top-k token ranking predictions.",
"strengths": "1. MSEA and ARC complementary work together to improve both visual and textual interpretability. SEA captures spatial context across visual tokens, while ARC reduces noise in textual attributions caused by irrelevant context tokens.\n\n2. Quantitative improvements and qualitative visualization results support the effectiveness of the approach.\n\n3. Good writing and presentation.",
"weaknesses": "1. High Overlap with TAM and Incremental Innovations:\n\nThe paper’s methodology is built upon TAM[1]. MSEA is a straightforward extension of multi-scale input techniques commonly used in computer vision tasks, offering limited innovation. SEA is also a technique reuse on LLM inference.\n\n2. Computational Complexity:\n\nMSEA introduces significant computational overhead due to multi-resolution processing and attribution aggregation, a clear trade-off on significant increased complexity and performance.\n\nARC share a similar case, that requires token-ranking alignment computations for every preceding token, further compounding the computational cost.\n\nThe paper does not provide a detailed analysis of runtime performance or scalability trade-offs, which raises concerns about practical applicability.",
"questions": "1. What is the runtime cost of MSEA and ARC compared to TAM or other baseline methods?\n\n2. How does the method scale with larger models?\n\n3. Robustness: How does the method handle challenging scenarios, such as: Images with multiple similar objects (e.g., distinguishing specific birds or cars)? Occluded objects where parts of the image are missing? Interaction-heavy scenes involving relational reasoning (e.g., a person interacting with tools)?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T14:51:53",
"modification_date": "2025-11-12T18:06:52",
"review_url": "https://openreview.net/forum?id=qrKymA0zuY¬eId=Ys70HRJ3G3",
"license": "CC BY 4.0"
},
{
"id": "Aa1atoRzIi",
"forum": "qrKymA0zuY",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22000/Reviewer_hFsC",
"reviewer_name": "Reviewer_hFsC",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper addresses the limitation of current interpretability studies on Multimodal Large Language Models (MLLMs), which mainly focus on cross-modal attribution while neglecting intra-modal dependencies. The authors point out that, within the visual modality, existing methods often divide images into independent patches and thereby overlook the importance of spatial contextual relationships. To address this issue, the paper proposes a Multi-Scale Explanation Aggregation (MSEA) approach, which aggregates attributions across multiple image resolutions to dynamically adjust the receptive field, resulting in more coherent visual explanations.\nWithin the textual modality, to mitigate spurious activations caused by preceding tokens, the paper introduces an Activation Ranking Correlation (ARC) method. By evaluating the ranking consistency among the Top-k predicted tokens, ARC quantifies the relevance between each contextual token and the current token, and further suppresses spurious activations from irrelevant contexts based on this correlation.\nExperiments conducted on multiple MLLMs and datasets demonstrate that the proposed method achieves superior interpretability quality compared with existing mainstream approaches.",
"strengths": "The paper innovatively focuses on the intra-modal dependencies in the interpretability of Multimodal Large Language Models (MLLMs). In the visual modality, it captures more contextual visual information by adjusting the visual receptive field. In the textual modality, it suppresses irrelevant spurious activations by quantifying the ranking consistency among Top-k predicted tokens. Based on the experimental results and visualizations provided by the authors, the proposed method demonstrates superior performance compared to existing interpretability approaches.",
"weaknesses": "The presentation and explanation of the proposed method are overly abstract, with some steps omitted and theoretical aspects insufficiently detailed, which may make it difficult for readers to fully understand. Furthermore, some of the visualized results and analyses are limited.\n\n(1) The explanation of “logits” and the “Logit Lens interpretability mechanism” in Section 3.1 is somewhat abstract, which may make it difficult for readers to clearly understand the related background.\n(2) The motivation and interpretation of the proposed input–output level operations in Section 3.2 are also rather abstract, which may affect readers’ comprehension.\n(3) It is recommended to clarify in Section 3.2 which variable remains constant when the image scale is changed, and how this affects the variation of the receptive field.\n(4) As shown in Figure 2, it appears that one patch is divided into four sub-patches after tokenization. It is suggested to provide a clearer explanation of this process or include a formula to describe it.\n(5) For Equation (5), it is recommended to provide a more detailed explanation of how the results are rescaled to the original image size after the aggregation in Equation (4).\n(6) It is recommended to provide a more detailed explanation of the use of Rank-Biased Overlap (RBO) in Equation (7) and the settings of its parameters.\n(7) In Section 3.3, the definition of “base attribution” is not clearly described, particularly regarding its computation for the token with index k. Moreover, the way in which the defined “base attribution” is utilized is not clearly explained.\n(8) Figure 2 does not clearly illustrate the proposed method, especially the depiction of ARC, which appears misaligned with the textual description. It is recommended to refine both the figure and the corresponding explanation.\n(9) It is recommended to provide a detailed explanation of each evaluation metric, including its computation method and meaning, to help readers better understand these metrics.\n(10) It is suggested to provide additional visualizations of non-semantic tokens.",
"questions": "see weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T21:00:27",
"modification_date": "2025-11-12T18:06:52",
"review_url": "https://openreview.net/forum?id=qrKymA0zuY¬eId=Aa1atoRzIi",
"license": "CC BY 4.0"
},
{
"id": "bkmgxpEKQR",
"forum": "qrKymA0zuY",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22000/Reviewer_SxV5",
"reviewer_name": "Reviewer_SxV5",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes a novel framework to enhance the explainability (attribution) of Multimodal Large Language Models (MLLMs), focusing specifically on Intra-Modal Interactions. The framework aims to overcome the issues of fragmented, incoherent, and noise-contaminated explanations prevalent in existing methods, striving to generate more Faithful and Fine-grained attributions. The core contributions include:\n\n1. Multi-Scale Explanation Aggregation (MSEA): For the visual modality, this aggregates attribution results from different input scales, dynamically adjusting the receptive fields of visual tokens to capture spatial context, leading to more holistic visual explanations.\n\n2. Activation Rank Correlation (ARC): For the text modality, this method quantifies semantic relevance by comparing the Top-k prediction rank consistency (using the RBO metric) between the current token and context tokens. This mechanism is used to suppress spurious activations and noise originating from irrelevant contexts.\n\nThe approach is post-hoc and training-free. It is validated across state-of-the-art MLLMs like LLaVA-1.5 and Qwen2-VL, showing consistent superiority over existing baselines (e.g., TAM) in metrics like Obj-IoU, Func-IoU, and F1-IoU.",
"strengths": "1. Strong Solution to Technical Problems: The mechanistic design of MSEA and ARC effectively addresses known limitations in Logit Lens attribution. The introduction of Top-k Prediction Rank Correlation (RBO) in the text modality is a clever and effective mechanism for filtering semantic noise.\n\n2. Extensive Experimental Validation: Comprehensive testing across representative MLLM architectures (LLaVA-1.5, Qwen2-VL, and InternVL2.5) increases the credibility of the results.\n\n3. Practicality: As a post-hoc method, it requires no retraining or fine-tuning of the MLLM, making it easy to deploy and apply.",
"weaknesses": "1. Causal Logic Risk (MSEA): Multi-Scale Explanation Aggregation (MSEA) obtains attribution by altering the input scale, which may introduce a causal logic inconsistency. The attribution result may not strictly reflect the model's decision under the original input, and this lacks rigorous theoretical or empirical support.\n\n2. Missing Diagnostic Application/Value: Although the method achieves high fidelity, the paper lacks diagnostic case studies focusing on real model errors like MLLM Hallucination. Analyzing such error cases is crucial to demonstrate the method's diagnostic scope and application value, enabling attribution to truly fulfill its role in model analysis.\n\n3. Computational Burden: Since both MSEA (involving multiple forward passes for different scales) and ARC (involving Top-k rank consistency calculation, which can be resource-intensive with long contexts/outputs) are added on top of the base Logit Lens method, the computational overhead may be high, especially when applied to very long contexts or complex MLLMs.",
"questions": "1. Causal Consistency (MSEA): Given that MSEA aggregates attribution results by altering the input scale, we are concerned about potential causal logic inconsistencies. Could the authors provide deeper theoretical arguments or additional experimental evidence (e.g., ensuring the original decision remains unchanged) to guarantee the fidelity of the MSEA attribution results with respect to the model's decision mechanism under the original input?\n\n2. Practical Application (Hallucination Diagnosis): Since the method aims to improve attribution fidelity, we suggest the authors provide qualitative case analyses demonstrating how MSEA and ARC can effectively diagnose the root causes of MLLM Hallucinations (whether due to visual signal misinterpretation or spurious activations from text context). A comparison with baseline methods would be highly beneficial.\n\n3. RBO Sensitivity: Regarding the ARC module, we encourage the authors provide ablation studies or sensitivity analysis concerning the choice of the Top-k value (e.g., the default $k=50$), to prove the stability and reasonableness of this parameter selection.\n\n4. Computational Complexity: Considering the overhead of MSEA (multiple forward passes) and ARC's RBO calculation, could the authors provide a practical time breakdown? Specifically, what is the required wall-clock time to analyze a single case for different context lengths?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T21:42:38",
"modification_date": "2025-11-12T18:06:52",
"review_url": "https://openreview.net/forum?id=qrKymA0zuY¬eId=bkmgxpEKQR",
"license": "CC BY 4.0"
},
{
"id": "hPyDGtjFEu",
"forum": "qrKymA0zuY",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22000/Reviewer_3Q2o",
"reviewer_name": "Reviewer_3Q2o",
"rating": 2,
"confidence": 3,
"soundness": 1,
"contribution": 1,
"presentation": 2,
"summary": "The paper proposes MSEA (Multi-Scale Explanation Aggregation) and ARC (Activation Ranking Correlation) to produce token-level visual attributions for multimodal LLMs. MSEA averages logit-lens maps computed from multiple input resolutions to reduce spatial fragmentation; ARC down-weights interference from preceding text tokens using rank-based similarity between their next-token distributions and that of the target token. Evaluations on captioning/scene datasets report IoU-style improvements over baselines (e.g., TAM).",
"strengths": "1. Clear, post-hoc, model-agnostic recipe (no retraining); easy to plug into popular MLLMs.\n\n2. Empirical gains on localization-type metrics across several models/datasets.",
"weaknesses": "1. Overstated novelty about “intra-modal interactions.” \nFull-graph methods (gradients, LRP/AttnLRP, attention rollout, TAM/LLaVA-CAM) already propagate relevance through all cross- and intra-modal paths; interaction effects are not “ignored” by default.\n\n2. Receptive-field misconception. \nIn ViTs, global self-attention makes a token’s effective receptive field image-wide after early layers. Fragmented maps arise from attribution noise/aggregation choices, not from an inherently local RF. MSEA’s value is ensembling across tokenizations, not “expanding RF.”\n\n3. Baselines lack crucial details. \nWhich layer/heads for CAM/Grad-CAM? How was AttnLRP configured? Without this, fairness and reproducibility are uncertain. The authors should write details at supplementary material.\n\n4. No faithfulness evaluation. \nResults rely on human-grounded IoU and “cleanliness.” There are no causal tests (insertion/deletion curves, AOPC/ROAR, evidence-erasure, logit drop when masking top-k regions). MSEA/ARC may yield prettier maps yet be less causal.\n\n5. Qualitative analysis is thin. \nFew examples; captions are terse; comparisons focus mainly on TAM. Lacks side-by-sides against other listed baselines (Grad-CAM/++, Rollout, LRP/AttnLRP, IG-style).\n\n6. Cost/latency not discussed. \nmulti-scale forward passes + ARC scoring add overhead; no wall-clock or memory analysis vs TAM/AttnLRP/perturbation methods.\n\n7. There is no detailed captions at every figure.",
"questions": "See above weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T12:45:45",
"modification_date": "2025-11-12T18:06:52",
"review_url": "https://openreview.net/forum?id=qrKymA0zuY¬eId=hPyDGtjFEu",
"license": "CC BY 4.0"
}
] |
|
VPju7xAxb1
|
https://openreview.net/forum?id=VPju7xAxb1
|
Comprehend and Talk: Text to Speech Synthesis via Dual Language Modeling
| 2
| 4.75
|
[
2,
2,
2,
2
] |
[
4,
5,
5,
5
] | 4
|
[
"Text to Speech; Speech Signal Processing; Speech Language Modeling; Audio Language Models"
] |
Existing Large Language Model (LLM) based autoregressive (AR) text-to-speech (TTS) systems, while achieving state-of-the-art quality, still face critical challenges. The foundation of this LLM-based paradigm is the discretization of the continuous speech waveform into a sequence of discrete tokens by a neural audio codec. However, single-codebook modeling, while well suited to text LLMs, suffers from significant information loss; hierarchical acoustic tokens, typically generated via Residual Vector Quantization (RVQ), often lack explicit semantic structure, placing a heavy learning burden on the model. Furthermore, the autoregressive process is inherently susceptible to error accumulation, which can degrade generation stability. To address these limitations, we propose CaT-TTS, a novel framework for robust and semantically-grounded zero-shot synthesis. First, we introduce S3Codec, a split RVQ codec that injects explicit linguistic features into its primary codebook via semantic distillation from a state-of-the-art ASR model, providing a structured representation that simplifies the learning task. Second, we propose an ``Understand-then-Generate'' dual-Transformer architecture that decouples comprehension from rendering. An initial ``Understanding'' Transformer models the cross-modal relationship between text and the prompt's semantic tokens to form a high-level utterance plan. A subsequent ``Generation'' Transformer then executes this plan, autoregressively synthesizing hierarchical acoustic tokens. Finally, to enhance generation stability, we introduce Masked Audio Parallel Inference (MAPI), a nearly parameter-free inference strategy that dynamically guides the decoding process to mitigate local errors. Extensive experiments demonstrate that the synergy of our principled architecture and semantically-aware codec allows CaT-TTS to achieve new state-of-the-art performance in zero-shot voice cloning, with MAPI providing a measurable boost in generation robustness on benchmark datasets. Project page: \href{https://anonymous.4open.science/r/CaT-TTS-66A1/}{https://anonymous.4open.science/r/CaT-TTS-66A1}.
|
Proposes a two-stage method for audio language modeling
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=VPju7xAxb1
| 2025-09-14T20:08:12
| 5
|
[
{
"id": "oiVs7XYTj7",
"forum": "VPju7xAxb1",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5123/Reviewer_9G9x",
"reviewer_name": "Reviewer_9G9x",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper claims it provides three contributions.\n+ Using ASR to make one layer of RVQ has moe content information (via distillation) \n+ A two-stage decoding strategy: a semantic transformer and an acoustic transformer \n+ Masked Audio Parallel Infer ence (MAPI).",
"strengths": "The paper is easy to read. The proposed TTS model shows good performance compared with the baselines.",
"weaknesses": "### 1. ASR-Guided Semantic Distillation into RVQ\n\nThe proposed ASR-guided semantic distillation into RVQ represents an incremental contribution rather than a major innovation. Previous works such as SpeechTokenizer guided the first RVQ layer with a semantic teacher (HuBERT), and Mimi also distilled semantic features into RVQ-1. \nThis paper instead employs ASR-derived supervision, which is a different, but not fundamentally new.\n\nFurthermore, the idea of using ASR-based supervision to make one stream or token layer more “content-like” is not novel, either. For example: \n- PAST ([arXiv:2505.14470](https://arxiv.org/abs/2505.14470)) supervised its first codebook using phonetic/ASR tasks. \n- QTTS / QDAC ([arXiv:2507.12197](https://arxiv.org/abs/2507.12197)) explicitly employed an autoregressive ASR model to guide the first codebook.\n\nEven if PAST and QTTS/QDAC are treated as concurrent works, the authors’ claim that *“ASR teacher is better than SSL teacher”* remains unsupported. There is no ablation study that replaces Whisper with a strong SSL teacher for comparison, which weakens this claim.\n\n### 2. Two-Stage Decoding Architecture\n\nThe two-stage decoding approach is also incremental. While the paper introduces the idea of predicting continuous “semantic embeddings” with an MSE loss as an intermediate representation, this component’s benefit is not empirically verified. The paper lacks an ablation study demonstrating that this design improves performance compared with direct discrete prediction or other alternatives.\n\n### 3. Evaluation and Analysis\n\nThere are no subjective listening tests, which are essential to validate perceptual quality in speech generation tasks.",
"questions": "Please check the link you provided in your abstract. When I click the link to the demo page, I am unable to see the audio files. \n\nCan we consider the two-stage decoding approach here as an encoder-decoder architecture? The difference is that the encoder here is causal (although it might also be possible to try a full attention encoder here).",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T11:42:13",
"modification_date": "2025-11-12T11:24:54",
"review_url": "https://openreview.net/forum?id=VPju7xAxb1¬eId=oiVs7XYTj7",
"license": "CC BY 4.0"
},
{
"id": "u40ZxD241r",
"forum": "VPju7xAxb1",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5123/Reviewer_Cx6b",
"reviewer_name": "Reviewer_Cx6b",
"rating": 2,
"confidence": 5,
"soundness": 3,
"contribution": 1,
"presentation": 2,
"summary": "This paper presents CaT-TTS, a zero-shot TTS that integrates a semantically-aware codec with a dual-Transformer architecture to separate understanding and generation. By leveraging semantic distillation and a masked audio parallel inference (MAPI) strategy for stable decoding.",
"strengths": "The paper introduces Masked Audio Parallel Inference and shows that MAPI effectively enhances generation stability and speech quality with minimal additional latency.",
"weaknesses": "- The major weakness is that the contribution of this paper appears incremental rather than novel. The overall architecture is largely similar to Moshi, and the proposed S3Codec, which modifies Mimi’s architecture with larger codebooks (from 2048 to 4096), shows only limited performance improvement. In addition, the performance of the proposed model is limited when compared to the baseline models.\n\n- The so-called “Understand-then-Generate” paradigm in CaT-TTS is conceptually similar to previously established frameworks such as Thinker-Talker or Planner-Decoder architectures. Therefore, while the paper presents a well-structured integration, its novelty in architectural design is limited, as the dual-Transformer separation between semantic understanding and acoustic generation has already been explored in prior works.\n\n- The paper points out the issue of degraded audio quality in Mimi’s split RVQ distillation and proposes a modified approach that performs semantic distillation separately on a plain VQ. However, Appendix E.2 only presents a comparison between S3Codec and DAC, showing that S3Codec preserves linguistic information more effectively. Since models employing semantic distillation are already known to retain semantic information better than standard RVQ-based codecs, this comparison is somewhat limited. Therefore, conducting a more thorough ablation study on the Mimi structure would strengthen the validity of the proposed modifications. Specifically, it would be important to analyze whether the observed improvement arises from (1) the difference between SSL and ASR teachers, (2) the benefit of applying distillation to a separate plain VQ, or (3) the trade-off between the increased codebook size and the resulting resource overhead.",
"questions": "- It is unclear whether the Semantic Transformer was initialized from an existing LLM or trained from scratch. The model is reported to have 0.4B parameters—where does this number come from?\n\n- The method increases GPU resource usage as the number of parallel streams grows — how does the model balance this trade-off in real-time or large-scale scenarios?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T16:45:05",
"modification_date": "2025-11-12T11:24:54",
"review_url": "https://openreview.net/forum?id=VPju7xAxb1¬eId=u40ZxD241r",
"license": "CC BY 4.0"
},
{
"id": "HE7RHB6BvM",
"forum": "VPju7xAxb1",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5123/Reviewer_jTzu",
"reviewer_name": "Reviewer_jTzu",
"rating": 2,
"confidence": 5,
"soundness": 3,
"contribution": 1,
"presentation": 3,
"summary": "The paper proposes CaT-TTS (Comprehend-and-Talk), an autoregressive text-to-speech framework that combines a dual-stage architecture with a semantically-aware audio codec. The model introduces S3Codec, a split residual vector quantization (RVQ) codec designed to capture both semantic and acoustic information by distilling features from a pretrained ASR model (Whisper) into the first quantizer. Built on top of S3Codec, the system uses two stacked transformers: a semantic transformer that models the relationship between text and high-level semantic speech tokens, and an acoustic transformer that generates fine-grained acoustic tokens conditioned on this representation.\n\nTo address error accumulation during autoregressive decoding, the authors propose a Masked Audio Parallel Inference (MAPI) strategy, which samples multiple masked variants of the semantic representation in parallel and aggregates their outputs to improve stability. Experiments show that the proposed system performs comparably to or slightly better than existing models in terms of speech quality, similarity, and intelligibility, while maintaining zero-shot voice cloning capability.",
"strengths": "1. The paper is well-structured, with a logical progression from codec design to model architecture and inference.\n\n2. The proposed dual-transformer framework (semantic + acoustic) is consistent with the natural separation between linguistic and acoustic modeling in speech synthesis.\n\n3. The results are reasonable and within the expected range of current tokenized TTS systems.\n\n4. The authors some analysis of the inference-time stability problem, attempting to mitigate it through the proposed MAPI strategy.",
"weaknesses": "1. The overall novelty is limited. The main technical contributions—semantic distillation into the first codebook and the separation of semantic and acoustic transformers—are direct extensions of existing frameworks such as Mimi, SpeechTokenizer, and standard two-stage text-to-semantic-to-acoustic TTS pipelines.\n\n2. S3Codec primarily replaces the SSL model used for distillation with an ASR encoder (Whisper), which does not constitute a significant conceptual advance.\n\n3. The MAPI inference method appears to be a form of test-time augmentation by generating multiple masked inputs and averaging outputs. Similar effects could probably be achieved more efficiently by adjusting sampling temperature or using stochastic decoding.\n\n4. The model being fully autoregressive, and the added inference strategy further increases computational cost, which might limit practical benefit.\n\n5. The title and framing around “comprehension” are somewhat misleading; the system performs standard two-stage token generation rather than true semantic understanding.\n\n6. The paper relies on private training data, and the provided anonymous code link is a placeholder, raising reproducibility and transparency concerns.\n\n7. The results, while acceptable, do not clearly surpass strong existing baselines, reducing the paper’s empirical impact.",
"questions": "See Weaknesses above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T20:40:36",
"modification_date": "2025-11-12T11:24:54",
"review_url": "https://openreview.net/forum?id=VPju7xAxb1¬eId=HE7RHB6BvM",
"license": "CC BY 4.0"
},
{
"id": "ezR5erDlRt",
"forum": "VPju7xAxb1",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5123/Reviewer_mDeA",
"reviewer_name": "Reviewer_mDeA",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The authors propose CaT-TTS, a semantically-grounded framework for zero-shot text-to-speech synthesis (TTS). The core idea of CaT-TTS is to explicitly integrate linguistic structure into an RVQ-based audio codec. Specifically, CaT-TTS introduces S3Codec, a split RVQ codec that distills semantic information from an ASR model (Whisper) into a primary codebook, while parallel residual RVQ captures acoustic details. Two Transformer modules (Semantic and Acoustic Transformers) then autoregressively synthesize hierarchical acoustic tokens. Experimental results demonstrate that CaT-TTS achieves comparable performance to existing zero-shot TTS models, such as Spark-TTS.",
"strengths": "There are two major strengths:\n\n(1) Low frame rate is impressive. S3Codec operates at 12.5 Hz, which is competitive among recent neural audio codecs.\n\n(2) Comparisons against SOTA TTS indicate effectiveness of , CaT-TTS particularly in UTMOS.",
"weaknesses": "(1) Limited Novelty\\\nASR-based distillation has been extensively studied in neural audio codecs (NACs). Although the proposed split RVQ introduces some novelty, results presented in Table 1 do not demonstrate clear advantages over recent NACs. Specifically, performance metrics such as PESQ and Mel distance show notable degradation when compared to DAC-8 and MBCodec.\n\n(2) Fairness of Comparisons\\\nKey hyperparameters are inconsistently configured in Table 1. For example, the \"CB\" parameter (presumably indicating codebook size) is larger for S3Codec than for baseline NACs like Mini and MBCodec. Adjusting codebook size upward and frame rate downward for existing codecs could potentially narrow the observed performance gap. Therefore, broader and more balanced hyperparameter sweeps are required for both S3Codec and baseline models. Additionally, the absence of BiCodec from Table 1 (whereas Spark-TTS is included in Table 2) raises concerns about the comprehensiveness of the comparisons.\n\n(3) Low Frame Rate Evaluation\\\nEvaluation at frame rates below 12.5 Hz is necessary. Recent literature, such as HALL-E (ICLR 2025), has reported evaluations at 8 Hz for the first RVQ layer. Evaluating CaT-TTS at or below 8 Hz would further strengthen the claims made in the paper.\n\n(4) Ablation Studies\\\nTable 4 currently presents results solely with and without semantic guidance. Critical aspects, including the effectiveness of split RVQ and the specific selection of Whisper over alternative models such as HuBERT, are not independently analyzed.",
"questions": "Why was evaluation limited to frame rates at 12.5 Hz? Could the authors also provide results at or below 8 Hz?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-20T05:44:14",
"modification_date": "2025-11-12T11:24:54",
"review_url": "https://openreview.net/forum?id=VPju7xAxb1¬eId=ezR5erDlRt",
"license": "CC BY 4.0"
}
] |
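Several reviewers above compare the paper's ASR-guided distillation to SpeechTokenizer and Mimi; in all of these systems the mechanism reduces to a cosine objective pulling the first quantizer's continuous output toward frozen teacher features. A minimal PyTorch sketch under that assumption; the projection layer and the premise that both streams are already time-aligned are illustrative, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def semantic_distill_loss(rvq1_embeddings, teacher_features, proj):
    """Cosine distillation of the first codebook stream toward frozen
    ASR/SSL teacher features.

    rvq1_embeddings: (batch, frames, codec_dim), continuous RVQ-1 output
    teacher_features: (batch, frames, teacher_dim), assumed time-aligned
    proj: nn.Linear(codec_dim, teacher_dim) bridging the two spaces
    """
    student = proj(rvq1_embeddings)
    # Maximize per-frame cosine similarity with the detached teacher.
    cos = F.cosine_similarity(student, teacher_features.detach(), dim=-1)
    return (1.0 - cos).mean()
```

Whether the teacher is an SSL model (HuBERT) or an ASR encoder (Whisper) changes only which `teacher_features` are plugged in, which is exactly why the reviewers ask for that ablation.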
vJBMYahZY5
|
https://openreview.net/forum?id=vJBMYahZY5
|
MSearcher: Self-Reflective Search Agent Empowered by Monte Carlo Tree Search Based Data Synthesis
| 4.5
| 3.75
|
[
4,
4,
4,
6
] |
[
4,
4,
4,
3
] | 4
|
[
"Data Construction",
"Monte Carlo Tree Search",
"Post Training",
"Reinforcement Learning",
"Question Answering"
] |
Recent advances in reinforcement learning (RL) have enabled large language models (LLMs) to perform multi-turn chain-of-thought (CoT) reasoning with tool use, where web search serves as the most critical tool for answering complex questions. However, most existing methods apply RL directly to off-the-shelf models without a supervised fine-tuning (SFT) cold start, resulting in unstable training and limited tool invocations. This difficulty is exacerbated by the high cost of curating long reasoning trajectories, which are expensive to annotate and prone to factual drift. We propose MSearcher, a two-stage trained search agent that combines reflective thinking with robust tool use for complex reasoning. A central contribution is an efficient data construction framework based on Monte Carlo Tree Search (MCTS), which produces self-reflective reasoning trajectories for the SFT cold start. This framework leverages both correct and flawed rollouts to generate natural and diverse reasoning data. We adopt a two-stage pipeline, first applying SFT with our constructed data and then further training the model with RL, achieving substantial improvements on multi-hop question answering: 67.6\% on HotpotQA and 52.0\% on Frames. These results highlight the importance of high-quality SFT in stabilizing RL and equipping LLMs with robust long-horizon reasoning capabilities.
|
reinforcement learning
|
https://openreview.net/pdf?id=vJBMYahZY5
| 2025-09-20T18:20:21
| 4
|
[
{
"id": "3zB7qa4SOb",
"forum": "vJBMYahZY5",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25063/Reviewer_RG1c",
"reviewer_name": "Reviewer_RG1c",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces MSEARCHER, a self-reflective search agent designed to address the instability and inefficiency of training large language models (LLMs) with RL for complex reasoning tasks. The authors propose a two-stage training pipeline that begins with a supervised fine-tuning \"cold start\" to provide the model with a stable foundation. The core innovation is a data construction framework based on Monte Carlo Tree Search , which decomposes complex questions into smaller sub-problems. This framework generates high-quality, self-reflective reasoning trajectories by leveraging both correct and flawed rollouts from the search tree, effectively teaching the model error-correction and robust reasoning. Following the SFT stage, the agent is further trained with RL to enhance its performance. The results demonstrate that MSEARCHER significantly outperforms previous methods on multi-hop question-answering benchmarks like HotpotQA and Frames, highlighting the effectiveness of using a high-quality SFT phase to stabilize RL training.",
"strengths": "1、The proposed method of using Monte Carlo Tree Search (MCTS) to synthesize reasoning trajectories is highly effective, with the generation of self-reflective trajectories being a particularly novel and valuable contribution.\n\n2、This paper thoroughly explores the two-stage Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) paradigm, effectively demonstrating its power in enhancing an agent's reasoning capabilities.\n\n3、The experiments are comprehensive and the results are significant, showing that the proposed agent consistently outperforms strong baselines on multiple challenging benchmarks.",
"weaknesses": "1、The paper primarily quantifies the method's effectiveness through experimental results but lacks a deeper analysis, such as the underlying reasons for the observed improvements.\n\n2、Based on the experimental results, MSearcher does not appear to have a substantial advantage, especially when compared to ASearcher.\n\n3、Table 4 seems to indicate a performance drop of 4.8 on HotpotQA with SFT. What accounts for this decrease? Did you train a 7B-version of MSearcher, and what were its performance metrics from the base model to SFT and then to RL?\n\n4、What would be the effect if, instead of using a complex algorithm like MCTS for data construction, a simpler method such as Rejection Sampling (RFT) were employed for SFT data generation? The authors need to elaborate on their motivation for using MCTS.",
"questions": "Stated in Weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T19:55:43",
"modification_date": "2025-11-12T18:28:28",
"review_url": "https://openreview.net/forum?id=vJBMYahZY5¬eId=3zB7qa4SOb",
"license": "CC BY 4.0"
},
{
"id": "vgew6EIXXr",
"forum": "vJBMYahZY5",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25063/Reviewer_7uZ4",
"reviewer_name": "Reviewer_7uZ4",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "MSEARCHER: Self-Reflective Search Agent Empowered by Monte Carlo Tree Search-Based Data Synthesis proposes a two-stage training framework that combines supervised fine-tuning (SFT) with reinforcement learning (RL) to improve long-horizon, multi-hop reasoning for large language models (LLMs). The key innovation lies in a Monte Carlo Tree Search (MCTS)-based data construction process that generates self-reflective reasoning trajectories, leveraging both correct and flawed rollouts to produce high-quality synthetic training data. This approach enables stable RL training, better tool-use decision-making, and strong generalization across diverse QA benchmarks. Experiments demonstrate significant improvements over state-of-the-art search agents (e.g., DeepResearcher, ASearcher), achieving 67.6% on HotpotQA and 52.0% on Frames, confirming the importance of self-reflective data for enhancing reasoning robustness.",
"strengths": "1. Introduces an MCTS-based framework that synthesizes self-reflective reasoning data without requiring large reasoning models, improving data diversity and efficiency.\n2. Combines SFT cold-start with RL fine-tuning, effectively stabilizing early-stage training and enhancing reasoning depth.\n3. Outperforms multiple advanced baselines (DeepResearcher, Search-r1, ASearcher) on both in-domain and out-of-domain multi-hop QA benchmarks.\n4. Provides a clear, modular design for integrating decomposition, retrieval, and self-reflection—offering practical reproducibility and strong generalization.",
"weaknesses": "1. Although the paper proposes a reflective data construction framework, it lacks a theoretical analysis of the convergence and sampling efficiency of MCTS in high-dimensional reasoning spaces.\n2. While the paper categorizes retrieval, reasoning, and decomposition errors, it does not systematically discuss how these error types accumulate during the reinforcement learning stage.\n3. Despite claiming efficiency, the paper does not provide detailed comparisons of computational resources, time costs, or scalability, leaving the practical extensibility uncertain.\n4. Although partial ablation studies are conducted, the paper does not sufficiently demonstrate the independent contributions of different reflective signal types (e.g., retrieval vs. reasoning errors).",
"questions": "See the Weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T14:12:03",
"modification_date": "2025-11-12T18:28:29",
"review_url": "https://openreview.net/forum?id=vJBMYahZY5¬eId=vgew6EIXXr",
"license": "CC BY 4.0"
},
{
"id": "B3xOe4Ynfl",
"forum": "vJBMYahZY5",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25063/Reviewer_b45F",
"reviewer_name": "Reviewer_b45F",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces MSEARCHER, a two-stage trained search agent designed to perform complex multi-hop question answering by combining reflective reasoning with robust tool use. The key innovation is a data construction framework based on Monte Carlo Tree Search (MCTS) that generates self-reflective reasoning trajectories for supervised fine-tuning (SFT), serving as a cold start before reinforcement learning (RL). The method leverages both correct and incorrect rollouts to train the model to recognize and correct its own errors.",
"strengths": "1) The use of MCTS to generate diverse and self-reflective training data is creative and effective. \n2) The proposed two-stage training (SFT followed by RL) addresses a common issue in RL-based agent training: instability in early stages.\n3) MSEARCHER achieves state-of-the-art or competitive results on multiple datasets.",
"weaknesses": "1) While the MCTS-based data construction is innovative, the paper lacks a formal analysis or theoretical grounding for why this approach should yield better reasoning trajectories. For instance, why is binary decomposition of sub-questions optimal? Why not allow n-ary splits or dynamic decomposition strategies? The design choices appear heuristic and would benefit from ablation studies or theoretical motivation.\n2) The MCTS framework requires multiple rollouts, simulations, and tree expansions, which can be computationally expensive. The paper does not provide a detailed complexity analysis or discuss the scalability of this approach to larger datasets or more complex reasoning tasks. It is unclear how feasible this method would be for real-time applications or deployment in resource-constrained environments.\n3) Although the paper evaluates on a range of QA datasets, most are still within the realm of factoid or multi-hop question answering. The evaluation lacks diversity in task types (e.g., commonsense reasoning, dialog-based reasoning, or multimodal QA). This limits the generalizability of the claims about MSEARCHER’s reasoning capabilities.\n4) The agent’s performance is heavily dependent on the quality and availability of external search tools. The paper does not analyze the impact of search engine failures, biased retrieval, or noisy documents. In real-world settings, where search results may be unreliable or adversarial, the robustness of MSEARCHER is questionable and untested.",
"questions": "1) Why is binary decomposition (splitting one question into exactly two sub-questions) enforced at every MCTS expansion node, and what evidence is provided that this restriction is optimal compared with n-ary or adaptive decomposition?\n2) The reward function is relatively simple. Did you experiment with more sophisticated rewards, such as step-level correctness or a reward for synthesizing information across multiple turns?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T22:19:10",
"modification_date": "2025-11-12T18:28:30",
"review_url": "https://openreview.net/forum?id=vJBMYahZY5¬eId=B3xOe4Ynfl",
"license": "CC BY 4.0"
},
{
"id": "jWJBlQwMML",
"forum": "vJBMYahZY5",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25063/Reviewer_xeVA",
"reviewer_name": "Reviewer_xeVA",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes MSearcher, which is a self-reflective search agent for open-domain multi-hop question answering task. The agent is trained with Supervised FineTuning (SFT) first, and then trained with Reinforcment Learning (RL) using Dynamic sAmpling Policy Optimization (DAPO). The data used for training is synthesized by rollout using Monte Carlo Tree Search (MCTS), where each node is a partition of tasks and the leaves are lists of atomic tasks. The atomic tasks are then answered by the rollout model in a logically topological order to form a trajectory towards the final answer. The trajectories that leads to the correct final answer are used for supervised learning and comparison with other trajectories; the rest are categorized as either retrieval, reasoning or decomposition error, and can be rewritten as a self-reflective trajectory that \"turns\" to the correct trajectory on the first incorrect step. The proposed model outperforms several baselines on in-domain and out-of-domain multi-hop search task.",
"strengths": "1. The paper is well-written and easy to follow. The paper consists of two parts: MCTS data curation and SFT+RL training with the curated dataset, which are both very clearly conveyed. The MCTS's \"exploration\" and \"simulation\" are different from the usual use (a route from root to leaf on MCTS is not directly a solution, but only a division of tasks), but this is clearly explained in the paper and illustrated in Fig. 1.\n\n2. The ideas are intuitive: the use of MCTS (which is also essentially a planner-executor multi-agent framework) does not only increases the possibility of successful rollouts with the limited model ability, but it also provides higher data efficiency - the \"incorrect trajectories\" can be recycled into trajectories with reflective behavior.",
"weaknesses": "1. The purpose of using MCTS is to get rid of the depenence on expert large reasoning models (line 56), but the authors still use QwQ-32B, which is a much stronger model than the final MSearcher, to generate data. This design somewhat contradicts with the purpose - can the author further explain why do we not want to use expert large reasoning models in the first place, and how using QwQ-32B still supports this motivation? \n\n2. The empirical evaluation can be improved:\n\na) the ablation study does not include any experiment about the hyperparameter of MCTS, or analysis on the dynamics of data curation (e.g. how many trajectories are successful, what is the average number of steps in total, what is the average number of steps before failure for failed trajectory, what is the ratio for each type of error defined in Sec. 3.1.4, etc.)\n\nb) In Tab. 3, the result shows that MSearcher works better with self-reflective data. However, it is unsure whether this performance difference comes from the reflective behavior, or simply because it is now trained with less data.\n\n**Minor Weakness**\n\n1. Fig. 1, contry -> country.",
"questions": "I have two questions: \n\n1. Is the target question given in the form of multiple queries (i.e. $n>1$ in the prompt at line 90 $q=\\{q_1,q_2,\\dots,q_n\\}$), or are all subquestions the product of MCTS?\n\n2. in line 429, the authors mention that the performance decrease of SFT is because \"a stronger bias toward tool usage introduced by the SFT data\"; on the other hand, the lower average tool usage getting lower \"consequently\" leads to final performance decrease (line 425-426). Also, \"SFT gives a higher initial tool usage, leading to better performance\" (line 437-438). The effect of tool usage on final performance seems contradictory. Is more tool usage a good thing or bad thing?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T07:41:56",
"modification_date": "2025-11-12T18:28:30",
"review_url": "https://openreview.net/forum?id=vJBMYahZY5¬eId=jWJBlQwMML",
"license": "CC BY 4.0"
}
] |
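The MCTS data-synthesis loop that the MSearcher reviews interrogate rests on the standard UCT selection rule for choosing which sub-question decomposition to expand next. A minimal sketch for reference; the `Node` fields and the exploration constant are illustrative assumptions, not the paper's implementation.

```python
import math

def uct_score(parent_visits, child_visits, child_value, c=1.4):
    """Standard UCT: exploit the running mean reward, but boost
    decompositions that are under-visited relative to their parent."""
    if child_visits == 0:
        return float("inf")  # always try unexplored children first
    exploit = child_value / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

def select_child(node):
    """Descend the tree by picking the child with the highest UCT score;
    assumes each node carries `visits`, `value`, and `children`."""
    return max(node.children,
               key=lambda ch: uct_score(node.visits, ch.visits, ch.value))
```

Reviewer_b45F's question about binary versus n-ary splits is orthogonal to this rule: UCT only ranks whatever children the expansion step happens to create.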
|
Eqbay04527
|
https://openreview.net/forum?id=Eqbay04527
|
HICO-GT: Hidden Community Based Tokenized Graph Transformer for Node Classification
| 3.5
| 3.75
|
[
2,
4,
4,
4
] |
[
3,
4,
4,
4
] | 4
|
[
"graph Transformer",
"node classification",
"hidden community detection"
] |
Graph Transformers have been proven to be effective for the node classification task, of which tokenized graph Transformer is one of the most powerful approaches. When constructing tokens, existing methods focus on collecting multi-view node information as the Transformer's input. However, if a type of tokens only includes nodes having relations with a target node from one perspective, it will not provide sufficient evidence for predicting unknown labels. Directly applying self-attention to all tokens may also produce contradictory information as they are selected by distinct rules. Meanwhile, as an emerging concept on graphs, hidden communities refer to those with relatively weaker structures and being obscured by stronger ones. In this paper, inspired by the hidden community clustering method, we design a new multi-view graph Transformer called HICO-GT. We first reconstruct the input graph by merging the original topology and attribute information. Through an iterative process of weakening dominant and hidden communities in turn, we obtain two subgraphs both containing node information of topological relation and attributed similarity, and then generate two token sequences correspondingly. Along with another neighborhood sequence produced on the original graph, they are separately fed into the Transformer and fused afterwards to form the final representations. Experimental results on various datasets verify the performance of the proposed model, surpassing existing graph Transformers.
|
learning on graphs and other geometries & topologies
|
https://openreview.net/pdf?id=Eqbay04527
| 2025-09-19T16:34:10
| 4
|
[
{
"id": "wThICaGWDK",
"forum": "Eqbay04527",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16978/Reviewer_8rpR",
"reviewer_name": "Reviewer_8rpR",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 1,
"presentation": 3,
"summary": "This paper proposes a Hidden Community-based Tokenized Graph Transformer model, named HICO-GT, to address the node classification problem. HICO-GT constructs a new weighted graph by fusing the topological and attribute information of the input graph, and generates two types of node token sequences from this weighted graph via a hidden community detection strategy to address insufficient neighborhood token sequence information. Experimental results verify the model’s effectiveness.",
"strengths": "1. Community is an inherent property of graphs, and it is reasonable to construct node token sequences using community information.\n\n2. The paper is well organized and explains the method very clearly.",
"weaknesses": "1. In Section 4.1, it is necessary to calculate the cosine similarity between node pairs and sort these similarity scores, which is an $O(n^2)$ operation.\n\n2. In Section 4.2, PageRank needs to be run on two different subgraphs for each target node, resulting in considerable computational overhead.\n\n3. In each iteration of the model, the Louvain algorithm must be executed on the reconstructed graph. Since the reconstructed graph may be dense, this increases the cost of community detection.\n\n4. The authors do not provide a computational complexity analysis of HICO-GT in the paper.\n\n5. This paper lacks the latest comparative baselines, such as [1] and [2].\n\n[1] Xu X, Zhou Y, Xiang H, et al. NLGT: Neighborhood-based and Label-enhanced Graph Transformer Framework for Node Classification. AAAI, 2025\n\n[2] Zhuo J, Liu Y, Lu Y, et al. Dualformer: Dual graph transformer. ICLR, 2025.",
"questions": "1. The primary motivation for tokenized graph Transformers is to overcome the quadratic computational complexity between node pairs, thereby achieving scalability. However, the cosine similarity score between pairs of nodes in graph reconstruction can be viewed as a special kind of attention score, which seems to contradict the design intent of tokenized graph Transformers.\n\n2. The cosine similarity has a value range of -1 to 1, so the reconstructed graph may contain negative edge values. However, two classic methods, Louvain and PageRank, both assume non-negative edge weights. Please discuss how to address the special case of negative edges in the reconstructed graph.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T18:31:17",
"modification_date": "2025-11-12T13:56:33",
"review_url": "https://openreview.net/forum?id=Eqbay04527¬eId=wThICaGWDK",
"license": "CC BY 4.0"
},
{
"id": "onQ9IePClG",
"forum": "Eqbay04527",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16978/Reviewer_nPtT",
"reviewer_name": "Reviewer_nPtT",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper presents HICO-GT, a tokenized graph Transformer for node classification that leverages dominant and hidden community structures. Node token sequences and neighborhood information are processed through separate Transformer modules and fused for final representations. Experiments on ten datasets demonstrate competitive or superior performance compared to GNN and graph Transformer baselines.",
"strengths": "1. The paper is clearly written and easy to follow.\n\n2. The experiments are conducted on multiple graph datasets, including both homophilic and heterophilic graphs. The ablation study in Table 2 demonstrates the necessity of both dominant and hidden tokens.",
"weaknesses": "1. From Table 1, except for Blog. and Tolo., the proposed method achieves performance comparable to existing works across the ten datasets.\n\n2. Additional comparisons with related GNN and graph Transformer methods [1–2] are needed to further validate the effectiveness of the proposed approach.\n\n3. To more comprehensively demonstrate its effectiveness, the proposed method should also be evaluated on large-scale or long-range datasets.\n\n[1] Luo, Yuankai, Lei Shi, and Xiao-Ming Wu. \"Classic gnns are strong baselines: Reassessing gnns for node classification.\" NeurIPS 2024.\n\n[2] Chen, Jinsong, et al. \"Rethinking tokenized graph transformers for node classification.\" arXiv preprint arXiv:2502.08101 (2025).",
"questions": "How does the choice of community detection algorithm (Louvain) and weakening schedule (Eq. (3)) affect classification performance compared to alternative algorithms? Have alternatives been attempted?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T12:17:21",
"modification_date": "2025-11-12T13:56:33",
"review_url": "https://openreview.net/forum?id=Eqbay04527¬eId=onQ9IePClG",
"license": "CC BY 4.0"
},
{
"id": "ox2UEq3ZWJ",
"forum": "Eqbay04527",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16978/Reviewer_txmc",
"reviewer_name": "Reviewer_txmc",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 1,
"summary": "This work focuses on node classification, introduces a frame work to make all node tokens carrying both topology and attribute information. To be specific, it conducts hidden community mining methods on the node attributes similarity graph to identify two hidden important communities, and then use message passing on all the community graphs and original graphs to get embeddings. Finally, they are fed into a Transformer layer following a weighed readout layer to get final embedding for node classification.\n\n\n\nThe experiments show its competitive results to other graph transformers on ten datasets.",
"strengths": "1. The experimental results are significant, which achieves the SOTA performance on 9 datasets.\n2. The idea that conducts hidden community mining method with reduced weight method are well motivated and designed.\n3. The problem and notations are well defined.",
"weaknesses": "1. Although the hidden community mining method is completely introduced, there lacks analysis on the hidden community mining results. Is the proposed communities intuitively correct? what specific information has it mined from original graph? The authors should analyze/visualize the results more deeply.\n2. In the sequence generation part, the motivation to use PPR is not stated.\n3. The presentation in the method part is kind of redundant for me to follow. It should be more concise and direct.\n4. The framework figure (Figure 1) should include more important details, especially that related to the main idea and contributions.\n5. The ablation study is not well disentangled. The authors should include more ablations. For example, only use the original graph (ie, without DT and HT); compare the readout method; compare the PPR and others; etc.\n6. Parameter Sensitive analysis is lost (for example, the weighting parameter in Eq. 16).\n7. Some other issues about presentations:\n 1. Line 46: \"There are mainly two categories of tokens in tokenized GTs:\" Add references. No literature to support this taxonomy.\n 2. Line 84: What is weakening others' structure? it is not clear\n 3. Grammar issue: By this means the structures of the hidden communities emerge\n 4. Regarding the input of MLP stated in Line 324-325, I do not agree with the reason explained in Lin 322-323: \"In node token sequences, the elements excluding the first one are all generated mainly from features of other nodes, whose information has already been aggregated into the target node’s representation.\" Why does it abandon the information that has already been aggregated from the FC layer? This operation in Eq (13) deserves more intuitive or analytic discussions.",
"questions": "Please try to address the questions in Weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T03:02:49",
"modification_date": "2025-11-12T13:56:33",
"review_url": "https://openreview.net/forum?id=Eqbay04527¬eId=ox2UEq3ZWJ",
"license": "CC BY 4.0"
},
{
"id": "y1iOqTJJy6",
"forum": "Eqbay04527",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16978/Reviewer_EzDS",
"reviewer_name": "Reviewer_EzDS",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The proposed methodology addresses the node classification task in non-directed, attributed graphs. The authors present a methodology for performing node classification in graphs through the utilization of Hidden Community and Dominant Community information. The proposed approach first computes the similarity of attributes between nodes using Cosine Similarity, then converts these similarities into edge weights, thereby integrating topology information and attribute information to transform the graph into a non-directed, non-attributed graph. Hidden Communities are discovered in the new graph through Iterative Weakening employing a Reduce Weight technique. To prevent information mixing, the Neighborhood information, Dominant Community information, and Hidden Community information are individually tokenized. The generated tokens are input into a Transformer to produce representations; the Neighborhood Representation is processed in a manner similar to NAGphormer, while the remaining two representations are passed through FC layers, and these are subsequently combined to generate the Final Representation. The generated Final Representation is then input into a predictor to perform classification. The proposed method demonstrates superior node classification performance compared to conventional GNN approaches and other Graph Transformer methods.",
"strengths": "S1. The application of hidden community detection to tokenized graph transformers is creative and well-motivated. The idea of generating multi-view tokens where each type carries both topological and attributed information is interesting.\n\nS2. Explicitly aims to overcome issues of single-view tokens lacking evidence and potential contradictions from mixing differently derived tokens in prior multi-view GTs. Processing sequences separately is a key design choice.\n\nS3. The model achieves competitive or state-of-the-art results across 10 datasets (4 homophilic, 6 heterophilic), with particularly strong performance on heterophilic graphs where existing methods struggle.",
"weaknesses": "W1. Computational Complexity Not Fully Addressed. The iterative community detection process (Louvain + weakening × Tmax iterations) adds significant preprocessing cost. No runtime comparisons with baselines provided. Memory overhead of maintaining multiple subgraphs not discussed. The claim of computational efficiency needs empirical validation with timing experiments. In particular, the paper doesn't explicitly analyze the computational complexity or runtime compared to baselines. Given the multiple stages (especially the iterative weakening and multiple PPR runs), the overhead seems potentially substantial, particularly for large graphs, possibly offsetting the benefits gained from tokenization compared to simpler tokenized GTs.\n\nW2. Theoretical justification is limited. Why should hidden communities specifically be useful for node classification? The connection is intuitive but not rigorously established. Why is the weighted sum fusion (Eq. 16) the right way to combine the three token types? Justifications for specific choices within the pipeline (e.g., Louvain vs. other community detection, Reduce Weight vs. other weakening methods, PPR vs. other ranking, the specific fusion formula) could be strengthened.\n \nW3. The graph reconstruction step (Eq. 6-7) combines topology and attributes in a somewhat arbitrary way (cosine similarity + selecting top m pairs). In Section 4.1, the topology information and attribute information of the attributed graph are combined to transform it into a non-attributed graph. In this process, if an edge does not exist, a new edge is generated, and the edge weight is assigned by adding the Cosine Similarity. However, this approach may result in the topology information differing from the original graph, and the values could become excessively magnified compared to existing edges. The authors would benefit from providing an explanation or theoretical justification for this design choice.\n\nW4. Several important details appear to have been omitted. At the end of Section 4, it is stated that three Final Representations are input into the Predictor; however, no description of the Predictor is provided. Additionally, since ROC-AUC and Accuracy employ different loss functions for multi-class datasets and binary classification datasets, respectively, the absence of an explanation regarding how the Predictor was trained makes it challenging to reproduce the results. Furthermore, in Section 5, experimental details such as the number of experimental repetitions are omitted, which raises questions about the statistical significance of the results. It would also be helpful if the authors could clarify whether each token is input into separate L-Layer Transformers or into a single L-Layer Transformer.",
"questions": "Q1. Could the authors provide an analysis of the computational complexity (e.g., time complexity in terms of nodes/edges) of the HICO-GT pipeline, particularly the graph reconstruction and iterative weakening stages, and compare it empirically (e.g., wall-clock time) to key baselines like VCR-Graphormer or NAGphormer during training and inference?\nQ2. What is the actual runtime comparison with baselines? How much overhead does the iterative community detection add?\nQ3. Would it be possible for the authors to present results using alternative algorithms (such as Leiden) in addition t the Louvain algorithm?\nQ4. Can you provide theoretical or empirical evidence that hidden communities specifically help with node classification?\nQ5. How sensitive is the model to the Louvain algorithm's non-determinism?\nQ6. Why not learn the fusion weights (δ) rather than tuning them as hyperparameters?\nQ7. How do the learned token representations differ between dominant and hidden sequences? Can you visualize this?\nQ8. How sensitive is the model's performance to the lengths of the token sequences ($Q^D, Q^H, K$)? Is there a trade-off between performance and sequence length (computational cost)?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T19:59:19",
"modification_date": "2025-11-12T13:56:34",
"review_url": "https://openreview.net/forum?id=Eqbay04527¬eId=y1iOqTJJy6",
"license": "CC BY 4.0"
}
] |
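The $O(n^2)$ reconstruction cost that Reviewer_8rpR raises comes from the all-pairs cosine step before the top-$m$ attribute edges are kept. A minimal NumPy sketch of that step, purely to make the bottleneck concrete; the merge with the original topology is omitted, and the function name is illustrative.

```python
import numpy as np

def top_m_attribute_edges(X, m):
    """Select the m most attribute-similar node pairs as weighted edges.

    X: (n, d) node-feature matrix. Returns (i, j, weight) triples.
    Materializing the dense n x n similarity matrix is the O(n^2)
    time/memory bottleneck the review points out.
    """
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    sim = Xn @ Xn.T
    iu = np.triu_indices(len(X), k=1)      # unique unordered pairs only
    top = np.argsort(sim[iu])[-m:]         # indices of the m highest scores
    return [(int(iu[0][t]), int(iu[1][t]), float(sim[iu][t])) for t in top]
```

The sketch also surfaces Reviewer_8rpR's second point: cosine similarity can be negative, so downstream Louvain or PageRank runs would need the negative-weight case handled explicitly.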
|
WtbXgc9GVA
|
https://openreview.net/forum?id=WtbXgc9GVA
|
LoRA meets Riemannion: Muon Optimizer for Parametrization-independent Low-Rank Adapters
| 4
| 3.6
|
[
4,
6,
2,
4,
4
] |
[
4,
3,
4,
3,
4
] | 5
|
[
"Low-rank Adaption",
"Fine-tuning",
"Smooth manifolds",
"Riemannian optimization",
"Fixed matrix rank manifold",
"LLM",
"Diffusion Models"
] |
This work presents a novel, fully Riemannian framework for Low-Rank Adaptation (LoRA) that geometrically treats low-rank adapters by optimizing them directly on the fixed-rank manifold. This formulation eliminates the parametrization ambiguity present in standard Euclidean optimizers. Our framework integrates three key components to achieve this: (1) we derive **Riemannion**, a new Riemannian optimizer on the fixed-rank matrix manifold that generalizes the recently proposed Muon optimizer; (2) we develop a Riemannian gradient-informed LoRA initialization, and (3) we provide an efficient implementation without prominent overhead that uses automatic differentiation to compute arising geometric operations while adhering to best practices in numerical linear algebra. Comprehensive experimental results on both LLM and diffusion model architectures demonstrate that our approach yields consistent and noticeable improvements in convergence speed and final task performance over both standard LoRA and its state-of-the-art modifications.
|
generative models
|
https://openreview.net/pdf?id=WtbXgc9GVA
| 2025-09-20T02:34:41
| 5
|
[
{
"id": "hdoJDubxke",
"forum": "WtbXgc9GVA",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20503/Reviewer_wX7P",
"reviewer_name": "Reviewer_wX7P",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper identifies a key deficiency in standard Low-Rank Adaptation (LoRA) training: the lack of transformation invariance. The authors note that the update to a LoRA matrix $\\Delta W$ is dependent on its specific factorization ($A$, $B$), which can lead to unstable training and sub-optimal results.\n\nTo solve this, the authors propose a novel, fully Riemannian framework. Instead of optimizing the ambiguous factors ($A$, $B$) in Euclidean space, this work proposes to optimize the low-rank matrix $\\Delta W$ directly on the fixed-rank manifold $\\mathcal{M}_r = \\{X : rank(X) = r\\}$.",
"strengths": "1. Addresses a Fundamental Problem: The paper tackles a well-defined and important problem in parameter-efficient fine-tuning. The lack of transformation invariance is a genuine flaw in the standard LoRA optimization paradigm, and addressing it is a valuable contribution.\n2. Conceptually Elegant Solution: By re-framing the optimization problem on the fixed-rank manifold, the framework eliminates the root cause of the invariance problem by construction, as the ambiguous factors ($A$, $B$) are no longer part of the optimization. \n3. Novel Optimizer: The generalization of the Muon optimizer to the fixed-rank manifold (\"Riemannion\") is a novel algorithmic contribution.\n4. Thorough Handling of Efficiency: A primary concern for any manifold-based method is computational cost. The authors anticipate this and provide a convincing case for their method's efficiency. The design explicitly avoids forming full-size matrices, with a theoretical complexity of $\\mathcal{O}((m+n)r^{2}+r^{3})$.",
"weaknesses": "1. Insufficient Comparison to SOTA (LoRA-RITE): The paper's primary weakness is its engagement with its most direct baseline, LoRA-RITE (Yen et al., 2024).\n\n2. Conceptual Comparison: While LoRA-RITE is cited for solving the invariance problem whilst conducting adaptive regularization, the paper misses a clear opportunity to discuss why its geometric approach should be theoretically superior to LoRA-RITE's preconditioning approach. The review a-priori is that a geometrically-native solution should be more stable or effective, but this is not argued explicitly. A wall-time comparison with LoRA-RITE would be great to have. \n\n3. Mismatch in Optimization Space: The paper defines its optimization space as the manifold $M_r = \\{X : rank(X) = r\\}$. However, the true set of all possible LoRA adapters is $M_{\\le r} = \\{X : rank(X) \\le r\\}$. The paper does not discuss the implications of this distinction. The set $M_{\\le r}$ has \"singularities\" at all points where $rank < r$, and it is unclear how the optimizer behaves if the true optimal solution lies on one of these singularities. The $SVD_r$ retraction (Alg. 4, line 5) explicitly forces the update to stay on the $\\mathcal{M}_r$ manifold, which could potentially prevent the model from finding a simpler, lower-rank solution.\n\n4. The authors can mention an article on a related topic, specifically the \"manifold Muon\" optimizer described in the Modular Manifolds [1] article from Thinking Machines Lab. This article discusses a similar recipe (a Muon-like optimizer on a manifold) but applies it to the Stiefel manifold. A discussion of this related work is essential for correctly positioning the paper's novelty.\n\nminor: typo in line 215\n\n[1] Jeremy Bernstein, \"Modular Manifolds\", Thinking Machines Lab: Connectionism, Sep 2025.",
"questions": "Could you elaborate on the conceptual advantages of your geometric framework over the adaptive matrix preconditioning approach of LoRA-RITE? Why is optimizing on the manifold better than forcing invariance in Euclidean space?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T15:26:47",
"modification_date": "2025-11-12T15:52:09",
"review_url": "https://openreview.net/forum?id=WtbXgc9GVA¬eId=hdoJDubxke",
"license": "CC BY 4.0"
},
{
"id": "EZr6zm1Nh3",
"forum": "WtbXgc9GVA",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20503/Reviewer_uGz9",
"reviewer_name": "Reviewer_uGz9",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes a Riemannian optimization framework for Low-Rank Adaptation (LoRA) that directly optimizes low-rank adapters on the fixed-rank manifold instead of in standard Euclidean space. The approach introduces the Riemannion optimizer(a variant of Muon optimizer), along with a Riemannian gradient-informed initialization. Experiments on both large language models and diffusion architectures demonstrate improvements in convergence speed and performance compared to relevant methods.",
"strengths": "The paper is well-written and provides solid mathematical formulation such as gradient project,retraction and vector transports via automatic differentiation.\n\nThis paper derives Riemannion, the first optimizer that generalizes Muon to manifold of fixed rank matrices, addressing a fundamenta issue in LoRA traiing that different factorizations A,B lead to different optimization trajectories.\n\nEmpirical experiments demonstrate the proposed method outperform baselines across LLM an diffuision benchmarks.",
"weaknesses": "The main novelty(generalizing Muon) is somehow incremental.\n\nLLM results focus on Llama-3 8B using rank 16 and commonsense benchmarks only. No ablations for ranks, downstream tasks (e.g. summarization,instruction tuning), ViT models.\n\nThe computational overhead need to be clarified.",
"questions": "See weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T11:22:29",
"modification_date": "2025-11-12T15:52:09",
"review_url": "https://openreview.net/forum?id=WtbXgc9GVA¬eId=EZr6zm1Nh3",
"license": "CC BY 4.0"
},
{
"id": "YKOspnCQ2m",
"forum": "WtbXgc9GVA",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20503/Reviewer_52qu",
"reviewer_name": "Reviewer_52qu",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This paper extends the Muon optimizer to fixed-rank matrix manifolds, proposes a Riemannion algorithm, and introduces an initialization strategy based on Riemannian gradient information. Finally, it demonstrates the performance of Riemannion on LLM fine-tuning and diffusion models.",
"strengths": "This is the first work to apply the Muon optimizer to low-rank matrix manifolds, combining Muon with Riemannian manifolds.",
"weaknesses": "1. Insufficient motivation in lines 63–69. \n\n$\\cdot$ The authors present the main motivation as extending Muon, but do not sufficiently explain the necessity of this extension. To strengthen the argument, the core advantages of Muon should be outlined to clarify the specific benefits of applying it to fixed-rank manifolds. Furthermore, a comparison with existing manifold optimizers is essential to emphasize the strengths of the proposed method.\n\n$\\cdot$ The statement, `our design inherits Muon’s geometry-aligned normalization, yielding transformation invariance of the learned update', lacks theoretical or empirical support in the paper. It should be supported by providing proofs or references, or by experiments validating.\n\n$\\cdot$ To demonstrate the advantages of Riemannian gradient–informed initialization, they should compare different initialization strategies for the same algorithm through experiments to showcase the benefits of their proposed initialization.\n\n2. In lines 160-161, the authors claim that `Note that acting on the two factors separately makes Muon non-reparameterization-invariant: its per-factor orthogonalization depends on arbitrary scalings or rotations, skewing the weight-space step and often letting one factor dominate.' Please provide the proof or detailed reasoning to support this claim.\n\n3. In lines 168–171, the symbol G is already used earlier in the paper to represent the gradient. Reusing G in this section creates confusion. \n\n4. Please provide detailed derivations or references to support (5).\n\n5. In line 188, the objective function is denoted as F, but is changed to L in Line 215. This inconsistency in notation should be resolved for clarity.\n\n6. The complexity and scalability of Algorithm 2 need clarification. If a full SVD is performed on an m×n matrix at each step to determine \n$A_L$ and $B_R$, the computational cost is typically 𝑂(𝑚𝑛 min{𝑚, 𝑛}). The algorithm requires computing a full SVD at each step, which becomes extremely expensive when \\(m\\) or \\(n\\) is very large, making the algorithm inefficient in these cases.\n\n7. In line 80, `we show the connection of this initialization to LoRA-GA', but this paper does not explain the differences and connection between the proposed initialization and LoRA-GA.\n\n8. The experiments only report results for r=4,8,16. It should include results for larger ranks (r=32,64, or even higher) to evaluate the stability, performance, and computational overhead of the proposed algorithm. \n\n9. The experimental results in Figure 4 regarding runtime provide limited insights. It would be more informative to include a comparison of the per-step computation time and memory usage of the proposed algorithm versus state-of-the-art algorithms.\n\n10. The paper does not provide any theoretical guarantees for the convergence of the proposed algorithm. Including such guarantees, or at least a discussion on convergence properties, would strengthen the paper.",
"questions": "as stated in weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T23:23:21",
"modification_date": "2025-11-12T15:52:10",
"review_url": "https://openreview.net/forum?id=WtbXgc9GVA¬eId=YKOspnCQ2m",
"license": "CC BY 4.0"
},
{
"id": "IL95nB3Q5Y",
"forum": "WtbXgc9GVA",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20503/Reviewer_h8Wt",
"reviewer_name": "Reviewer_h8Wt",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposed a novel Muon-style LoRA vairant through the lens of Riemannian optimization. Specifically, the authors proposed to apply the retraction-based Riemannian gradient update step (Eq. 8) to optimize LoRA factors, and augment the momentum term involved in the update step with the Muon-style orthorgonalization (Eq. 12). Empirically, the proposed method achieves decent performance on LoRA-style language model finetuning tasks (Table 1).",
"strengths": "- The authors propose a computationally efficient Muon-style extension of Riemannian optimization for LoRA parameterized LLMs (Algorithm 1).\n- The authors derive locally optimal initialization scheme to maximize the the LoRA factors can be optimized towards the fastest loss decrease direction on the low-rank manifold (Theorem 5.1).\n- The authors develops single backward-pass gradient trick, to compute gradient-times-matrix products efficiently (Algorithm 3).",
"weaknesses": "This paper is well-written. The proposed method is theoretically grounded, and empirically validated on several downstream tasks. I will raise my score upon my concerns being addressed.\n\n\n--------------\n\n**Concern 1. Feasible Region of (Eq 14)**\n\nTo my understanding, the authors aim to initialize the LoRA factors such that it can be optimized towards the fastest loss decrease direction on the low-rank manifold. In this case, does that means we should constraint the norm (or magnitude) of the LoRA factors in Eq. 14? Otherwise, scaling the LoRA initializaiton will lead to larger Riemannian gradients. \n\n--------------\n\n**Concern 2. Ablation on LOI**\n\nFrom my experience, the intialization of LoRA factors is usually chosen to ensure the adapted weight is identical to the loaded pretrained model, ensuring the model does not deviate from the well-trained local optima significantly. However, it seems that in LOI, a non-zero modification is made to the adapted weight. Does this affect the performance of the model? I recommend the authors to add discussion on this issue, and provide ablation studies to clarify the impace of LOI on the general performance of the proposed method.\n\n--------------\n\n**Concern 3. Effectiveness of the proposed methods on classic low-rank matrix optimization.**\n\nTo my understanding, the derivation of the proposed methods does not relies on specific assumptions on the optimization task. I recommend the authors to provide additional experiments on classic Riemannian optimization tasks to validate the effectiveness of the proposed methods in solving classic low-rank matrix optimization problems [1] that are more general and natural than the LLM finetuning tasks.\n\n[1]. Bioli, I., Kressner, D., & Robol, L. (2025). Preconditioned low-rank Riemannian optimization for symmetric positive definite linear matrix equations. SIAM Journal on Scientific Computing, 47(2), A1091–A1116. https://doi.org/10.1137/24M1688540",
"questions": "See **Weaknesses**.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T22:35:50",
"modification_date": "2025-11-12T15:52:10",
"review_url": "https://openreview.net/forum?id=WtbXgc9GVA¬eId=IL95nB3Q5Y",
"license": "CC BY 4.0"
},
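The summary above refers to a Muon-style orthogonalization of the momentum (Eq. 12). For background, the standard primitive maps a matrix $M = U S V^\top$ toward its polar factor $U V^\top$ via Newton-Schulz iterations. The sketch below uses the simple cubic variant for illustration (Muon itself uses tuned quintic coefficients, and this is not the paper's exact procedure):

```python
import torch

def newton_schulz_orthogonalize(M: torch.Tensor, steps: int = 5) -> torch.Tensor:
    # Normalize so all singular values lie in (0, 1], the convergence region.
    X = M / (M.norm() + 1e-7)
    for _ in range(steps):
        # Cubic Newton-Schulz step: pushes singular values toward 1 while
        # preserving singular vectors, approximating the polar factor U V^T.
        X = 1.5 * X - 0.5 * X @ X.mT @ X
    return X
```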
{
"id": "92RDg8HKNt",
"forum": "WtbXgc9GVA",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20503/Reviewer_vCt2",
"reviewer_name": "Reviewer_vCt2",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper proposes a fully Riemannian LoRA framework that optimizes the low-rank adapter directly on the fixed-rank manifold to remove factorization ambiguity. It proposes a Muon-style Riemannian optimizer (“Riemannion”) plus a locally optimal initialization. Experiments on commonsense reasoning and subject-driven diffusion report gains over LoRA variants.",
"strengths": "1. Clear motivation for moving from factor-space updates to invariant steps on fixed rank manifold\n2. Concrete algorithms: a step-by-step Riemannian procedure with Ortho/Project components and explicit per-iteration complexity\n3. Experimental results suggest improved stability/accuracy vs. strong LoRA-style baselines (though see concerns below)",
"weaknesses": "(Please reply to the Questions section directly, where I write the details of the weaknesses)\n1. Insufficient positioning and no empirical comparison to LORO\n2. Unclear and potentially confusing use of G in Eq. (4).\n2. Missing memory-consumption analysis (vs. LoRA/PEFT)\n4. Hyperparameter protocol is opaque near the main tables",
"questions": "1. I think the idea of applying direct optimization on low rank manifold for LLM training is already considered in LORO [1]. Although LORO is for pre-training, it can be applied to fine-tuning very easily (with a fixed pre-trained model). I'm wondering how the proposed method perform conparing to this variant of LORO? Since LORO is the first to consider this proposed approach of low rank manifold optimization, I feel the discussion is not sufficient in the current manuscript to compare to it.\n2. Clarity around G in Eq. (4): Readers may be confused because $G$ appears in the manifold parameterization but seems to disappear in later sections. It remains unclear which quantities are actually treated as trainable parameters in Algorithm 4 and which are only auxiliary. Please state explicitly what is stored, what is recomputed during retraction, and what constitutes the model’s trainable state.\n3. No memory footprint comparison: The paper measures time overheads but omits GPU memory / parameter-state comparisons vs. LoRA, DoRA, etc. A table counting trainable params and optimizer states per layer (e.g., Adam’s m/v vs. Riemannion’s HB state) would strengthen claims of “no prominent overhead.” This is an important piece of information that is missing, since LoRA is designed to be a parameter or memory efficient optimizer.\n4. For Table 1, is it exactly one set of hyperparameters used for Riemannion to achieve all these superior performance **across all tasks**? If so, I think this would be quite astonishing. If not, would it be a bit unfair if the authors didn't search hyperparameters for other methods?\n5. A minor point: The geometry section is readable, but several practical choices (which Ortho operator was used in which layer types; retraction accuracy vs. rank; momentum transport details) are scattered. I suggest consolidating everything into an implementable algorithm, not just one step as in Algorithm 4.\n\nReferences:\n\n[1] Mo, Zhanfeng, Long-Kai Huang, and Sinno Jialin Pan. \"Parameter and memory efficient pretraining via low-rank riemannian optimization.\" The Thirteenth International Conference on Learning Representations. 2025.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-20T10:51:56",
"modification_date": "2025-11-12T15:52:11",
"review_url": "https://openreview.net/forum?id=WtbXgc9GVA¬eId=92RDg8HKNt",
"license": "CC BY 4.0"
}
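Question 2 above asks what is stored versus recomputed during retraction. For readers unfamiliar with the primitive, here is a minimal sketch of a generic SVD-based retraction onto the fixed-rank manifold (a standard construction stated under our assumptions, not necessarily the algorithm in the paper):

```python
import numpy as np

def retract_to_rank_r(W: np.ndarray, r: int) -> np.ndarray:
    # Best rank-r approximation (Eckart-Young): a truncated SVD projects an
    # arbitrary matrix back onto the fixed-rank manifold.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def retraction_step(W: np.ndarray, grad: np.ndarray, lr: float, r: int) -> np.ndarray:
    # Euclidean step followed by retraction; Riemannian variants first project
    # the gradient onto the tangent space, which keeps the SVD cheap.
    return retract_to_rank_r(W - lr * grad, r)
```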
] |
|
0fcVDzkGK2
|
https://openreview.net/forum?id=0fcVDzkGK2
|
Divide-and-Denoise: A Game Theoretic Method for Fairly Composing Diffusion Models
| 2.666667
| 3.333333
|
[
0,
4,
4
] |
[
3,
4,
3
] | 3
|
[
"Diffusion Models",
"Fair Composition",
"Game-Theoretic",
"Text-to-Image"
] |
The widespread availability of large-scale pre-trained generative models raises a question: how can we best leverage them beyond their original training distributions?
Two strategies provide partial answers. Composition combines multiple diffusion models, typically through linear averaging of their predictions, to produce out-of-distribution samples. Guidance steers a single model by biasing its generation with rewards or classifier scores.
We unify these perspectives with Divide-and-Denoise, a game-theoretic approach to compositional sampling from multiple pre-trained diffusion models, coordinated through an allocation flow.
At each denoising step, we alternate between (i) partitioning the sample into regions assigned to distinct models for denoising (composition) and (ii) aligning the sample with this division (guidance). The partition is determined by solving a fair allocation problem under a shared alignment objective.
We evaluate our method on text-to-image generation. Using models conditioned on different prompts, Divide-and-Denoise reliably generates images that capture the semantics of each prompt, even surpassing joint-prompt conditioning. On the GenEval benchmark, it further outperforms energy-based composition and joint prompting baselines, resolving common issues such as missing objects and attribute mismatches.
|
a game-theoretic approach to compositional sampling from multiple pre-trained diffusion models
|
generative models
|
https://openreview.net/pdf?id=0fcVDzkGK2
| 2025-09-18T16:22:23
| 3
|
[
{
"id": "5zG3M4XOSY",
"forum": "0fcVDzkGK2",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10873/Reviewer_nmYV",
"reviewer_name": "Reviewer_nmYV",
"rating": 0,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This paper aims to tackle compositional generation with diffusion models by introducing \"Divide-and-Denoise\", a game theoretic sampling procedure that composes multiple pretrained diffusion model \"player\" models via fair division of the latent space at every denoising step. The method alternates between (i) an allocation step that infers soft segmentations by solving a fairness-constrained optimization using utilities derived from cross-attention maps, and (ii) a denoising step whose optimal Gaussian kernel has a mean that combines per-model updates masked by the allocation plus a guidance term driven by an alignment score; a fictitious background player and a KL term encourage sensible coverage and temporal smoothness.",
"strengths": "- ***Interesting & principled idea***: Recasts compositional generation as a fair-division game over soft region allocations, using cross-attention.",
"weaknesses": "- ***Writing quality***: The paper appears incompletely prepared at submission time. In the experiments section there are placeholder “?” citations, tables that overflow horizontally, and tables with missing entries. The manuscript also exceeds the 9-page limit, suggesting the writing and formatting were not finalized. These presentation issues significantly hinder readability and raise concerns about diligence in preparing the submission.\n\n- ***Experimental setups***: The Joint Prompt setup appears to be an extremely weak baseline. With such a simple enumeration-style prompt, the model has a high probability of failure. Instead, the authors should compare results when using a language model to generate natural prompts containing multiple objects. In the same vein, averaging is also far too simple as a baseline. It seems strange to expect that averaging score values from different conditions would work well.\n\n- ***Prompt division***: This paper focuses on effectively dividing and combining generation from multiple players, yet it doesn't address how to divide the conditions among them. For example, if there's a long prompt, there is a need to determine how to distribute its contents to each player. With the current approach, I have serious doubts about whether this can work for scenarios with complex multiple relations.\n\n- ***The title of Section 4.3***: I don't understand why this is considered out-of-distribution at all. Wouldn't \"conflict prompt\" be more appropriate?",
"questions": "N/A",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T04:55:45",
"modification_date": "2025-11-12T12:34:46",
"review_url": "https://openreview.net/forum?id=0fcVDzkGK2¬eId=5zG3M4XOSY",
"license": "CC BY 4.0"
},
{
"id": "GG0KrGioVr",
"forum": "0fcVDzkGK2",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10873/Reviewer_roNH",
"reviewer_name": "Reviewer_roNH",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "The paper “Divide-and-Denoise” proposes a game-theoretic framework for compositional sampling from multiple pre-trained diffusion models. Rather than directly averaging denoising predictions (as in MultiDiffusion or joint-prompt methods), the authors formulate the problem as a fair division game, where latent coordinates are “goods” and each diffusion model acts as a “player.” This elegant formulation allows the model to dynamically allocate spatial responsibility among different diffusion processes in a principled, temporally coherent, and fairness-aware way.\n\nThe method alternates between two tightly coupled updates at each diffusion step:\n\t1.\tCompositional denoising, which generates a latent proposal based on soft region assignments Q_t;\n\t2.\tDynamic allocation, which optimizes Q_t via a bilevel optimization that enforces fairness, smoothness, and attention alignment across time.\n\nA key novelty is the introduction of the alignment score derived from cross-attention maps, which measures semantic consistency between denoised regions and textual prompts. The paper also introduces a “fictitious player” to handle unassigned or background regions, ensuring that all latent coordinates are properly modeled. Theoretical analysis leads to a closed-form softmax-like solution for Q_t (Theorem 2), while alternating optimization jointly refines both the denoising kernel and spatial allocation.",
"strengths": "(1) Conceptual originality: The use of game theory and fair division in diffusion model coordination is highly innovative and goes beyond heuristic compositional fusion.\n\n(2) Theoretical rigor: The bilevel formulation, connection to entropy-regularized MDPs, and derivations (Theorems 1–2) are mathematically sound and clearly motivated.\n\n(3) Strong empirical performance: On multi-object and attribute-binding tasks, Divide-and-Denoise significantly reduces object overlap and color confusion, outperforming joint-prompt and MultiDiffusion baselines.",
"weaknesses": "(1) Computational overhead: Alternating updates for Q_t and p_t^c introduce nontrivial cost during inference.\n\n(2) Dependence on cross-attention quality: The allocation accuracy relies heavily on stable and interpretable attention maps.\n\n(3) Limited evaluation scope: Current experiments are restricted to text-to-image synthesis; demonstrating broader modality coverage would further strengthen the claim of generality.\n\n**Important** (4 )Many figures are blurry, making them nearly unreadable. Several references are missing or incorrectly formatted, which severely reduces the paper’s professionalism and readability.",
"questions": "None",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T15:42:03",
"modification_date": "2025-11-12T12:34:47",
"review_url": "https://openreview.net/forum?id=0fcVDzkGK2¬eId=GG0KrGioVr",
"license": "CC BY 4.0"
},
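To make the "closed-form softmax-like solution for Q_t" mentioned in the summary above concrete, here is a minimal sketch of a soft allocation of latent coordinates among K players from per-coordinate utilities. This reflects our reading of the setup; the utility source, temperature, and array shapes are illustrative assumptions, not the authors' code:

```python
import numpy as np

def soft_allocation(u: np.ndarray, tau: float = 1.0) -> np.ndarray:
    """u: (K, H, W) per-player utilities (e.g., cross-attention scores).
    Returns Q: (K, H, W) with the player dimension summing to 1 everywhere."""
    z = u / tau
    z -= z.max(axis=0, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=0, keepdims=True)

# Compositional denoising mean: mask each player's prediction by its share.
K, H, W = 3, 8, 8
u = np.random.rand(K, H, W)     # utilities (illustrative)
eps = np.random.randn(K, H, W)  # per-player denoising predictions
mean = (soft_allocation(u) * eps).sum(axis=0)
```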
{
"id": "hY8jwanFEY",
"forum": "0fcVDzkGK2",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10873/Reviewer_9aeu",
"reviewer_name": "Reviewer_9aeu",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes a game-theoretic approach to combine multiple different text-to-image diffusion models. A crucial constraint is that the models must have the same latent dimensions. Ensuring fairness when dividing the noise maps among the models prevents the collapse into a single concept and only generating this concept. The experimental results show that the approach doesn't suffer from collapse to a single concept and can generate all the objects present in the prompt using different models.",
"strengths": "- The idea of fusing different models to generate an image is very intriguing.\n- The paper is well written, and even the mathematical details are easy to understand.",
"weaknesses": "- The figures in the paper have low resolutions. The text in the images is not readable.\n- A few more example images would be nice to illustrate what makes this approach better than other approaches.\n- The evaluation is not very thorough. For example for the generation of multiple objects and the attribute allocation only figure 3 is shown as evidence.\n- It is not clear why the prompts used in Section 4.3 are out-of-distribution.\n\nMinor:\n- In line 404, 412 and 413 the citations seem to be missing.\n- The figure number in line 423 is not correct\n- In line 466 the table number is not correct",
"questions": "Q1: I might have missed it, but how is the fairness ensured when dividing the pixels to the models? \nQ2: Why do the pixels have to be distributed to a fixed model? Wouldn't it also be possible, especially when two areas overlap, to average over the noise maps of multiple models? \nQ3: How does VQA measure the compositional correctness? If I am not mistaken, an image can be composed in different ways, while the VQA can still be correct. \nQ4: Why are there values missing for VQA in table 1? \nQ5: Why are the prompts used in Section 4.3 OOD? What does it mean if there are \"conflicts between individual prompts\"?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T00:44:11",
"modification_date": "2025-11-12T12:34:47",
"review_url": "https://openreview.net/forum?id=0fcVDzkGK2¬eId=hY8jwanFEY",
"license": "CC BY 4.0"
}
] |
vEh1ceS154
|
https://openreview.net/forum?id=vEh1ceS154
|
Partition Generative Modeling: Masked Modeling Without Masks
| 7
| 3
|
[
6,
6,
8,
8
] |
[
3,
2,
4,
3
] | 4
|
[
"masked generative modeling",
"discrete diffusion",
"masked diffusion language modeling",
"diffusion language modeling"
] |
Masked generative models (MGMs) are widely used to capture complex data and enable faster generation than autoregressive models (AR) through parallel decoding.
However, MGMs typically operate on fixed-length inputs, which can be inefficient: early in sampling, most tokens are masked and carry little information, leading to wasted computation. In contrast, AR models process only tokens generated previously, making early iterations faster.
In this work, we introduce the ``Partition Generative Model'' (PGM), a novel approach that combines the strengths of AR and MGMs. Rather than masking, PGM partitions tokens into two groups and employs sparse attention to block information flow between them.
Since there is no information flow between partitions, the model can process the previously-generated tokens only during sampling, while retaining the ability to generate tokens in parallel and in any order.
On OpenWebText, PGMs offer at least $5\times$ improvements in sampling latency and throughput, while producing samples with superior generative perplexity, compared to Masked Diffusion Language Models. In the ImageNet dataset, PGMs achieve up to $7\times$ better throughput compared to MaskGIT with only a small change in FID. Finally, we show that PGMs are compatible with distillation methods for MGMs, enabling further inference speedups.
|
We show that it is possible to train masked generative models without using MASK tokens, resulting in efficiency gains at inference.
|
generative models
|
https://openreview.net/pdf?id=vEh1ceS154
| 2025-09-17T01:36:32
| 4
|
[
{
"id": "LabNEsk09h",
"forum": "vEh1ceS154",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7931/Reviewer_YDst",
"reviewer_name": "Reviewer_YDst",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The authors introduce Partition Generative Models (PGMs), based on the observation that masked generative models (MGMs) waste compute on masked tokens, which contain no information. \nInstead of masking tokens, PGMs partition the input tokens into two disjoint groups and train the model to predict one group from the other.\nThis approach allows the model to process only unmasked tokens which eliminating the need for explicit masking and leads to significantly faster sampling.",
"strengths": "- The GroupSwap layer and partition-aware transformer structure are well-motivated\n- Includes analyses of perplexity, latency, throughput, and ablations on masking vs. partitioning.\n- Strong empirical results across both text and image generation tasks, PGMs deliver substantial inference speedups (up to 7×) with little to no degradation in output quality.",
"weaknesses": "- The architectural details (e.g., data-dependent vs. data-independent queries) are dense and could be clarified or simplified, the paper is a bit difficult to follow.\n- The largest experiments are modest in size (268M parameters). It remains unclear if PGMs scale favorably compared to state-of-the-art large AR or diffusion model\n- No comparison against recent SOTA model non-autoregressive language models beyond MDLM.",
"questions": "- How does the choice of partition ratio (t) affect convergence and quality? Is it dynamically sampled or fixed?\n- why cant you use KVcache that would reduce the time complexity from sampling in MGM? Would PGM will be still faster?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T18:41:10",
"modification_date": "2025-11-12T11:59:13",
"review_url": "https://openreview.net/forum?id=vEh1ceS154¬eId=LabNEsk09h",
"license": "CC BY 4.0"
},
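The partition mechanism summarized above can be pictured with a toy attention mask: tokens are split into two disjoint groups, self-attention is allowed only within a group, and a swap pattern lets queries from one group read keys from the other. A minimal sketch (the token count and group assignment are made up for illustration, and this is not the paper's exact GroupSwap layer):

```python
import torch

L = 8
group = torch.tensor([0, 1, 0, 0, 1, 1, 0, 1])  # two disjoint partitions
within = group[:, None] == group[None, :]        # intra-group attention only
swap = group[:, None] != group[None, :]          # cross-group reads
# Masking self-attention with `within` blocks information flow between the
# partitions; a dedicated swap step then uses `swap` so that each group is
# predicted from the other without leaking its own contents.
```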
{
"id": "Ljcr4gb12z",
"forum": "vEh1ceS154",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7931/Reviewer_pTqP",
"reviewer_name": "Reviewer_pTqP",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces a new generative modeling framework for language modeling, termed Partition Generative Models (PGM), aimed at improving inference efficiency. Unlike Masked Generative Models (MGM), PGM avoids applying the forward process to masked tokens, thereby reducing computational cost. The authors present tailored architectural modifications, along with corresponding training and inference strategies, to enable efficient generation within this framework. Experimental results indicate that PGM achieves faster inference than existing MGM approaches while maintaining comparable generation quality.",
"strengths": "1. The core idea of avoiding computation on masked tokens during inference, along with the corresponding training strategy, is interesting and effectively targets a key inefficiency in existing masked generative models.\n\n2. The empirical results demonstrate that PGM can significantly accelerate inference while maintaining generation quality comparable to other state-of-the-art generative models, supporting the practical value of the proposed approach.\n\n3. The paper is clearly written, well-structured, and easy to follow, making the technical contributions accessible to the reader.",
"weaknesses": "I did not identify any major weaknesses in this paper. I do, however, have one question for clarification:\n\nThe proposed training pipeline includes two prediction components that operate on the same batch of data, which suggests that training efficiency could potentially be better than MDLM. Could the authors provide quantitative results or analysis regarding training efficiency, such as training speed, computational cost, or resource usage compared to MDLM?",
"questions": "NA",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T06:42:24",
"modification_date": "2025-11-12T11:59:14",
"review_url": "https://openreview.net/forum?id=vEh1ceS154¬eId=Ljcr4gb12z",
"license": "CC BY 4.0"
},
{
"id": "MyKtVVbFCT",
"forum": "vEh1ceS154",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7931/Reviewer_DktD",
"reviewer_name": "Reviewer_DktD",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This paper proposes avoiding the repetitive computation of the `MASK` token in masked generative modeling (MGM), switching from the decoder-only MGM to an encoder-decoder architecture. The inference model is defined as:\n1. self-attention **only within known indices**.\n2. cross-attention swapping to unknown indices (with opposite-group masking to prevent leakage).\n3. cross-attention **only within unknown indices** (projecting to the embeddings of stage one).\n\nThe complexity is still O(L^2) (L = sequence length) but with a significantly smaller coefficient (encoder $(L-k)^2$ + decoder $k(L−k)$ per step when remaining sequence length = k). \nThe practical speedup is about 5x and is scalable, with comparable generation quality against MGM.\nDistillation-accelerated models maintains the acceleration against MGM.",
"strengths": "- The empirical benefit is strong: 5x faster than MGM (4.6x faster with nucleus sampling).\n\n- Complementary masking is a smart and original trick to let one training step effectively count as two steps.\n\n- Section 5.3: fair comparison against MDLM (MGM) by isolating the complementary masking trick.\n\n- The down-stream tasks spreads across image and language, and the evaluation is solid. Distillation is also explored, which improves the practical significance of the paper.",
"weaknesses": "- The fairness of Table 2's comparison is not immediately visible—I believe the fairness should outweigh matching performance. Since the paper switches from decoder-only to encoder-decoder architecture, controlling hyperparameters (width, head, depth and MLP width multipliers) seems crucial to get a fair comparison.\nIn LM1B, it is a good idea controlling parameter counts and comparing with PGM(6/6)\\~170M, but in OWT, that model is missing in the main text (only the dim. 1024 model is shown). I don't understand why it only appears in the appendix.\n\n- I don't understand the labels (5.3) (5.4) (5.5) in Figure 4 (right).\n\n- Minor: \"sparse attention\" is used to describe the masking mechanism, but I believe it is an overuse of the term, as the mask is not actually sparse—perhaps group-wise attention is more suitable.",
"questions": "- Except for the top-k/nucleus confident tokens, the computations are wasted. I wonder if it is possible to reuse these noisy states instead of re-initializing decoder queries at each denoising step?\n\n- The current decoder architecture is cross-attention-only—which makes it easy to control parameter count c.f. MDLM, but lacks the standard self-attention component. Have you thought about this variant?\n\n- The information exchange from known to unknown indices entirely relies on the swap xattention layer. I wonder if it is possible to do the exchange in each decoder layer instead? (Of course this will make the complimentary masking trick not possible.)",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T01:57:15",
"modification_date": "2025-11-12T11:59:14",
"review_url": "https://openreview.net/forum?id=vEh1ceS154¬eId=MyKtVVbFCT",
"license": "CC BY 4.0"
},
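The coefficient argument in the summary above can be sanity-checked numerically. A toy sketch, under the simplifying assumption that one token is finalized per step (actual parallel decoding finalizes several):

```python
L = 1024  # sequence length (assumed)

# MGM: full self-attention over all L positions at every decoding step.
mgm_ops = L * (L * L)

# PGM: encoder over the (L - k) known tokens plus a k x (L - k) decoder cross
# term, where k unknown tokens remain at each step.
pgm_ops = sum((L - k) ** 2 + k * (L - k) for k in range(L, 0, -1))

print(mgm_ops / pgm_ops)  # ~2x fewer attention ops under this toy schedule
```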
{
"id": "LP1ol2z26W",
"forum": "vEh1ceS154",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7931/Reviewer_m818",
"reviewer_name": "Reviewer_m818",
"rating": 8,
"confidence": 3,
"soundness": 4,
"contribution": 4,
"presentation": 3,
"summary": "This paper introduces the PGM, a new framework with core architecture innovation that combines the strengths of AR and MGM. PGM partitions tokens into two disjoint groups and constrains attention such that each group predicts the other. This removes the need for explicit MASK tokens while preserving parallel decoding and arbitrary generation order and address the training inefficiency of MGM.\nExperiments on LM1B, OpenWebText, and ImageNet show that PGMs achieve up to 5–7× faster inference throughput than MDLM and MaskGIT, with similar or better metrics including perplexity and FID. PGMs also support distillation for additional speedups.\nThis work is overall sound to me, but I am not an expert in architecture design.",
"strengths": "1. Good novelty: replacing masking with partitioning is a simple yet powerful idea that effectively unifies the efficiency of AR models with the flexibility of MGMs.\n2. Solid architectural design: The GroupSwap mechanism and partition-wise attention are well-motivated and carefully engineered to achieve the partition mechanism.\n3. The experimental results are strong, with improved performance for both text and image generation.",
"weaknesses": "1. While the PGM is motivated be the inefficiency of MDM training, the authors are encouraged to provide evidence to show faster learning/convergence of PGM than MDLM. This probably relates to the training stability.",
"questions": "1. Does \"PGM 8 / 8\" mean 8 layers of encoder and 8 layers of decoder?\n2. The origin of the training instability. Do authors still observe this when PGM is trained without complementary masking.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-20T01:06:48",
"modification_date": "2025-11-12T11:59:15",
"review_url": "https://openreview.net/forum?id=vEh1ceS154¬eId=LP1ol2z26W",
"license": "CC BY 4.0"
}
] |
wSGle6ag5I
|
https://openreview.net/forum?id=wSGle6ag5I
|
Improving Diffusion Models for Class-imbalanced Training Data via Capacity Manipulation
| 6
| 3.5
|
[
6,
6,
6,
6
] |
[
4,
3,
3,
4
] | 4
|
[
"Imbalance",
"Diffusion Models"
] |
While diffusion models have achieved remarkable performance in image generation, they often struggle with the imbalanced datasets frequently encountered in real-world applications, resulting in significant performance degradation on minority classes. In this paper, we identify model capacity allocation as a key and previously underexplored factor contributing to this issue, providing a perspective that is orthogonal to existing research. Our empirical experiments and theoretical analysis reveal that majority classes monopolize an unnecessarily large portion of the model's capacity, thereby restricting the representation of minority classes. To address this, we propose Capacity Manipulation (CM), which explicitly reserves model capacity for minority classes. Our approach leverages a low-rank decomposition of model parameters and introduces a capacity manipulation loss to allocate appropriate capacity for capturing minority knowledge, thus enhancing minority class representation. Extensive experiments demonstrate that CM consistently and significantly improves the robustness of diffusion models on imbalanced datasets, and when combined with existing methods, further boosts overall performance.
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
|
https://openreview.net/pdf?id=wSGle6ag5I
| 2025-09-05T11:15:04
| 4
|
[
{
"id": "y03pIN2wNq",
"forum": "wSGle6ag5I",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2255/Reviewer_NJJB",
"reviewer_name": "Reviewer_NJJB",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper addresses the problem of class imbalance in diffusion models, which leads to poor generation performance on minority classes. The authors identify model capacity allocation as a key overlooked factor, where majority classes dominate model parameters, leaving insufficient capacity for minorities. To mitigate this, they propose Capacity Manipulation (CM), a method that reserves model capacity for minority classes via low-rank decomposition of parameters and a novel capacity manipulation loss. The method is orthogonal to existing approaches and does not increase inference cost. Extensive experiments on multiple datasets demonstrate consistent improvements in minority-class generation without sacrificing majority-class performance.",
"strengths": "(1) The method is well-motivated, supported by both empirical observations (e.g., pruning sensitivity) and theoretical analysis (Theorems 2.1 & 3.1). The experimental setup is rigorous, covering multiple datasets, architectures, and metrics.\n\n(2) The paper offers an orthogonal viewpoint on class imbalance in diffusion models by focusing on model capacity allocation, diverging from prior works that primarily emphasize loss reweighting or knowledge transfer (e.g., CBDM and OC). The integration of low-rank decomposition with a tailored loss function represents a creative combination of ideas from parameter-efficient fine-tuning and imbalanced learning for targeted capacity reservation.",
"weaknesses": "(1) The term \"capacity\" is not clearly defined. Is it the number of parameters, the magnitude (e.g., L1-norm) of the weights or something else? The pruning experiment suggests a link to weight magnitude, but this connection is not explicitly made or theoretically grounded. Therefore, \" capacity\" remains a somewhat vague concept.\n\n(2) The capacity manipulation loss is designed to force minority-specific knowledge into the low-rank adapter. A potential risk is that this adapter becomes too specialized, failing to leverage the shared, general features learned by the main model. This could limit its ability to generate diverse minority samples that still rely on common underlying features (e.g., a \"rare breed of dog\" should still benefit from general \"dog\" features). The paper does not discuss or analyze this potential limitation.",
"questions": "(1) The method proposed in the paper primarily focuses on the context of known classes. A natural follow-up question is how Capacity Manipulation would perform in scenarios involving more compositional and fine-grained concepts. For example, in a dataset imbalanced towards \"photos of cats\" vs. \"paintings of dogs,\" how would the model reserve capacity for the minority concept of \"painting\" style, which is orthogonal to the object \"dog\"? Does this framework extend to reserving capacity for concepts rather than just classes? \n\n(2) The method's architecture—using a LoRA-like adapter—makes a strong, implicit assumption: that \"minority expertise\" is inherently low-rank. What is the theoretical or empirical justification for this? One could easily argue the opposite: minority classes might be more complex and have a higher intrinsic dimensionality (e.g., \"impressionist painting\" vs. \"female face\") but are simply under-sampled. If the minority knowledge is, in fact, high-rank, then the fixed low-rank of the adapter would become the primary performance bottleneck, ironically limiting the minority class's capacity more than a standard full-rank model. How does CM cope with this potential issue? \n\n(3) The current formulation appears to use a single $\\theta^e$ to capture the expertise for all minority classes collectively. On datasets with highly heterogeneous minority classes (e.g., the \"Few\" split in Imb. CIFAR-100 or ImageNet-LT, which can contain wildly different concepts), is it plausible that a single low-rank subspace can effectively represent this diverse and multimodal knowledge? Does this not create a new \"capacity collapse\" problem within the minority adapter itself? Have the authors considered a more flexible architecture, such as a Mixture-of-Experts (MoE) model for $\\theta^e$, where different \"experts\" (adapters) are dynamically allocated to different minority clusters?\n\n(4) The paper should discuss and cite relevant literature on reweighting or balancing techniques for generative models [1-5].\n\nReference:\n\n[1] Xie et al. Doremi: Optimizing data mixtures speeds up language model pretraining. NeurIPS, 2023.\n\n[2] Fan et al. DoGE: Domain Reweighting with Generalization Estimation. ICML, 2024.\n\n[3] Kim et al. Training unbiased diffusion models from biased datase. ICLR, 2024.\n\n[4] Li et al. Pruning then Reweighting: Towards Data-Efficient Training of Diffusion Models. ICASSP, 2025.\n\n[5] Liu et al. RegMix: Data Mixture as Regression for Language Model Pre-training. ICLR, 2025.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-07T07:15:55",
"modification_date": "2025-11-12T10:56:03",
"review_url": "https://openreview.net/forum?id=wSGle6ag5I¬eId=y03pIN2wNq",
"license": "CC BY 4.0"
},
{
"id": "a7RgfPGjct",
"forum": "wSGle6ag5I",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2255/Reviewer_7n4d",
"reviewer_name": "Reviewer_7n4d",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes Capacity Manipulation (CM), a method to improve diffusion models trained on class-imbalanced data. It identifies that majority classes dominate model capacity, limiting minority representation. CM explicitly reserves capacity for minority classes through low-rank parameter decomposition and a capacity manipulation loss that balances consistency and diversity. Experiments on multiple benchmarks show that CM consistently enhances minority-class generation quality and overall robustness.",
"strengths": "1. The paper is clearly written, well-structured, and easy to follow.\n2. The proposed Capacity Manipulation (CM) method is conceptually simple yet effective, relying on low-rank decomposition and a targeted regularization loss to reserve model capacity for minority expertise.\n3. Theoretical analyses provide solid intuition about how imbalance affects parameter updates and how CM mitigates this effect.\n4. Extensive experiments across small- and large-scale datasets convincingly demonstrate that CM improves minority-class quality without degrading majority-class performance.",
"weaknesses": "1. The calculation of loss change in figure 1(b) is not explained.\n2. Although the authors evaluate CM across multiple datasets, there is limited discussion of failure cases or sensitivity to extreme imbalance ratios beyond 100:1.\n3. Some comparisons (e.g., with Overlap Optimization) are only mentioned in passing, a direct experimental comparison would strengthen claims of superiority.",
"questions": "N/A",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T01:56:27",
"modification_date": "2025-11-12T10:56:03",
"review_url": "https://openreview.net/forum?id=wSGle6ag5I¬eId=a7RgfPGjct",
"license": "CC BY 4.0"
},
{
"id": "FTiqr7l1BR",
"forum": "wSGle6ag5I",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2255/Reviewer_pAbs",
"reviewer_name": "Reviewer_pAbs",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper addresses the poor minority-class performance of diffusion models trained on long-tailed data, arguing that a key culprit is capacity misallocation—majority classes dominate parameter updates and monopolize representational space. \nTo tackle this, it proposes Capacity Manipulation (CM): each weight matrix is decomposed into a general/majority component and a reserved low-rank minority component, and training employs a capacity-manipulation loss that enforces consistency for majority classes while promoting diversity for minority classes. \nAt inference, parameters are merged, introducing no additional latency. \nAcross imbalanced CIFAR-10/100, CelebA-HQ, ImageNet-LT, iNaturalist, and ArtBench-10 (including Stable Diffusion fine-tuning), CM improves FID/KID and delivers especially strong gains on Medium/Few splits over strong baselines (e.g., CBDM, OC), while remaining orthogonal and complementary to them.",
"strengths": "The paper offers a clear and original lens—capacity allocation—and introduces a simple, effective mechanism that reserves low-rank capacity for minority classes, moving beyond reweighting or oversampling.\nMethod quality is strong: the parameter split plus a consistency/diversity loss is minimally invasive, theoretically motivated by gradient/representation analyses, and incurs no inference overhead due to weight merging.\nEmpirically, results are broad and convincing across multiple datasets/backbones (including SD fine-tuning), with especially large gains on Medium/Few splits and stable ablations over ranks and loss weights.\nThe approach is practical and orthogonal to existing long-tail remedies (e.g., CBDM, OC), making it easy to adopt and combine for further improvements.",
"weaknesses": "1. The paper should more clearly distinguish CM from class-balanced objectives, reweighting/oversampling, class-specific adapters/LoRA, and Mixture-of-Experts. Add a side-by-side comparison and reproduce at least one adapter/MoE-style baseline under matched compute.\n\n2. The analysis explains majority gradient dominance and motivates reserving rank, but does not specify conditions ensuring no loss of global likelihood or bounds on interference.\n\n3. Most results are class-conditional image benchmarks.\nFor instance, text-to-image (multi-attribute, compositional) and multi-label long-tails are underexplored.",
"questions": "1. How is the minority/majority split determined, and how sensitive are results to this choice under dataset drift or rebalancing?\n\n2. When merging weights at inference, how do the authors prevent cross-talk between the general and minority subspaces?\n\n3. Does reserving capacity degrade majority-class fidelity or diversity in any regimes?\n\n4. What are the exact training overheads introduced by the extra low-rank factors and CM loss? Do gains persist under tight compute budgets?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T11:13:56",
"modification_date": "2025-11-12T10:56:03",
"review_url": "https://openreview.net/forum?id=wSGle6ag5I¬eId=FTiqr7l1BR",
"license": "CC BY 4.0"
},
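The "merged at inference, introducing no additional latency" property noted in the summary above is the usual low-rank merge. A minimal sketch (the shapes and the LoRA-style two-factor split are our assumptions, not necessarily the paper's exact decomposition):

```python
import numpy as np

d, r = 512, 8
W = np.random.randn(d, d) * 0.02  # class-agnostic (majority-dominated) weights
B = np.random.randn(d, r) * 0.02  # reserved low-rank minority capacity
A = np.random.randn(r, d) * 0.02

W_merged = W + B @ A              # merge once after training
x = np.random.randn(d)
# Inference through W_merged matches the two-branch forward pass exactly,
# so the reserved capacity adds no deployment latency.
assert np.allclose(W_merged @ x, W @ x + B @ (A @ x))
```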
{
"id": "AkjtQger4z",
"forum": "wSGle6ag5I",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2255/Reviewer_JwQN",
"reviewer_name": "Reviewer_JwQN",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This work investigates the challenge of generative modeling for imbalanced datasets. The hypothesis is that the poor generation quality for minority classes is primarily caused by an imbalance in \"model capacity,\" where the model's learning resources are disproportionately occupied by the head (majority) classes. To address this, the paper introduces a novel technique named Capacity Manipulation (CM), which explicitly reallocates and reserves model capacity for the tail (minority) classes. The proposed method employs a low-rank decomposition of the model's parameters, enabling fine-grained control over capacity allocation. A bespoke capacity manipulation loss function is introduced to ensure sufficient capacity is dedicated to learning the features of minority classes, leading to a significant enhancement in their generative representation. The claims are substantiated by comprehensive experimental results, and the overall methodology is presented with clear and coherent logic.",
"strengths": "1. I find this approach remarkably novel in how it attributes the class imbalance problem to \"uneven model capacity allocation.\" It represents a significant departure from traditional paradigms like data resampling or loss re-weighting, introducing a fresh perspective by intervening directly at the model parameter level. By the way, I'm also curious if the author's method could be applied to long-tail recognition tasks (e.g., with ResNeXt-50 on CIFAR). No detailed explanation is needed if the implementation is complex—I'm simply wondering about its potential.\n2. The design of loss function is exceptionally clear in its objective. By creating a \"push-pull\" dynamic between 'consistency' and 'diversity', it effectively channels distinct knowledge into separate parameter subspace.\n3. The paper doesn't just rest on solid experimental results; it also provides theoretical analysis (Theorems 2.1 and 3.1) to substantiate its core thesis: that majority classes indeed dominate parameter updates and that low-rank decomposition can effectively mitigate this dominance.\n4. The experimental validation is remarkably comprehensive. It covers a wide range of datasets (from simple to complex, low-res to high-res), various imbalance ratios, and multiple evaluation metrics, all benchmarked against strong baseline methods.",
"weaknesses": "1. I'm also curious if the author's method could be applied to long-tail recognition tasks (e.g., with ResNeXt-50 on CIFAR). No detailed explanation is needed if the implementation is complex—I'm simply wondering about its potential.\n2. My main question is about the capacity 'reservation.' The structure of the parameter decomposition seems to be fixed. This makes me wonder: is this 'hard partitioning' approach truly optimal? Could there be a way for the model to dynamically and adaptively decide how much capacity to allocate to each component during training, rather than relying on a predefined split?\n3. Are there any toy experiments that can visually illustrate this? For instance, using the two-class example you mentioned, could you show how the majority class ends up occupying most of the model's parameter capacity?\n4. I think this assumption has some limitations, especially with varying balance ratios like in ImageNet-LT. For example, in an extreme case with one head class and 999 tail classes, is a single, the setting of rank still appropriate? Or does the rank itself need to be adjusted based on class frequency?\n5. To empirically validate the hypothesis, is it possible to visually demonstrate that the class-specific parameters specialize in learning features unique to minority classes, while the class-agnostic parameters focus on capturing generic features dominated by the majority (head) classes? We propose achieving this through visualization techniques.\n6. My last question is about the diversity within the tail itself. Can you visualize them?",
"questions": "Please see the weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T15:24:08",
"modification_date": "2025-11-12T10:56:04",
"review_url": "https://openreview.net/forum?id=wSGle6ag5I¬eId=AkjtQger4z",
"license": "CC BY 4.0"
}
] |
|
XGODWn7HeJ
|
https://openreview.net/forum?id=XGODWn7HeJ
|
Toward Principled Flexible Scaling for Self-Gated Neural Activation
| 6.666667
| 4
|
[
8,
6,
6
] |
[
4,
4,
4
] | 3
|
[
"Neural Activation Functions",
"Principled Neural Activation Modeling",
"Neural Activation Interpretation",
"Non-local Information Modeling"
] |
Neural networks necessitate nonlinearities to achieve universal approximability.
Traditional activation functions introduce nonlinearities through rigid feature rectifications.
Recent self-gated variants improve traditional methods in fitting flexibility by incorporating learnable content-aware factors and non-local dependencies, enabling dynamic adjustments to activation curves via adaptive translation and scaling.
While SOTA approaches achieve notable gains in conventional CNN layers, they struggle to enhance Transformer layers, where fine-grained context is inherently modeled, severely reducing the effectiveness of non-local dependencies leveraged in activation processes.
We refer to this critical yet unexplored challenge as the non-local tension of activation.
Drawing on a decision-making perspective, we systematically analyze the origins of the non-local tension problem and explore the initial solution to foster a more discriminative and generalizable neural activation methodology.
This is achieved by rethinking how non-local cues are encoded and transformed into adaptive scaling coefficients, which in turn recalibrate the contributions of features to filter updates through neural activation.
Grounded in these insights, we present FleS, a novel self-gated activation model for discriminative pattern recognition.
Extensive experiments on various popular benchmarks validate our interpretable methodology for improving neural activation modeling.
|
We identify, elucidate, and address the underexplored non-local tension problem and introduce FleS, a self-gated activation function that enhances discriminative visual recognition through adaptive scaling.
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
|
https://openreview.net/pdf?id=XGODWn7HeJ
| 2025-09-19T20:16:30
| 3
|
[
{
"id": "tkYg6DEAsm",
"forum": "XGODWn7HeJ",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18129/Reviewer_rpDy",
"reviewer_name": "Reviewer_rpDy",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The paper identifies a limitation in self-gated activation, which is argued to be the reason for limited effectiveness of self-gated activation in transformers due to saturation of gating components. This introduces what the authors call convergence limitation specifically in high-importance features wrt a filter where the difference between the importance become negligible and result in the tendency to lose discriminability wrt contributions of features. In transformers, this collapse of gating discriminability causes activation to neutralize contextual cues the architecture tries to capture.\n\nThe authors proposed flexible scaling in self-gated activation using horizontal and vertical dynamic scaling. Horizontal scaling shifts or stretches the gating curve to avoid saturation and vertical scaling increases the discriminability by increasing the range of gating values. The scaling coefficients are conditioned on channel-wise statistics of positively contributing features. In practice, the proposed activation, called FleS, computes per-channel effective responses, feeds them through lightweight MLPs, and outputs the two scaling coeffs. The authors benchmarked the proposed activation across ImageNet, CIFAR-100, long-tailed recognition, COCO detection, and GLUE where FleS consistently outperformed SOTA activations. Particularly notable are results in Swin-Transformer models.",
"strengths": "1. Clear identification of an important issue \n\nThe underlying cause of non-local tension problem was clearly discussed, something that previous works have not articulated. \n\n2. The logical framing of the problem, its cause and the proposed approach\n\nThe paper provides an intuitive interpretation of activations as \"importance modulators\". Then clearly identifies the harm of saturation in self-gating activations and draws a logical connection from convergence limitation, trivially discriminative gating weights phenomenon, and non-local tension problem. \n\n3. Practical design of activation with strong empirical results across different models\n\nFleS is simple, seems to be lightweight, and can easily be dropped into modern architectures. Performance improvements especially in Swin-Micro and Swin-T are substantial. The improvements in experiments in Metaformers, CNNs, detection backbones, and long-tailed classification are promising. This broad applicability suggests that the identified problem is real and not confined to a narrow architecture.",
"weaknesses": "1. Some theoretical claims rely on partially informal assumptions with more room for quantification/formalization:\n\nAlthough the results generally support the narrative, but some justification or quantification can show if attention-enhanced features regularly fall into the saturation regime. Also, the explanation for why positive-only feature responses should dominate importance is intuitive, but remains heuristic; the decision-theoretic interpretation could be formalized more rigorously.\n\n2. Insufficient analysis of optimization stability and dynamics\n\nThe paper mentions initializing \\gamma values but doesn't quantify sensitivity to initialization. Given that activations can strongly shape optimization trajectories, this lack of investigation is a methodological gap\n\n3. Dependence on channel-level statistics require more investigation/illustration of failure scenarios\nThe batch-dependence in channel statistics brings up questions about microbatch regimes, distributed training, highly-multimodal batches. \n\n4. Discussion of potential failures/limitations or stress-tests:\nIn addition to gains in performance, it's worth discussion more about limitations and scenarios that the proposed activation may fail to be useful.",
"questions": "1. Can the authors quantify how often real Transformer activations fall into the saturation regime?\n\n2. How do $\\kappa_h$ $\\kappa_o$ evolve during training? Please include training curves of these scalars and variance across layers. \n\n3. Is FleS stable in small-batch regimes? Also, how does FleS interact with batch normalization.\n\n4. The paper gives an intuition but more follow-ups on why to exclude negative values: What happens in architectures where negative values carry semantic meaning? Have the authors visualized the gradient contributions from negative vs positive responses?\n\n5. Is there any other costs other than Flops to be discussed and compared? This becomes important especially when impact of MLP size is also discussed.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-08T13:43:07",
"modification_date": "2025-11-12T14:11:30",
"review_url": "https://openreview.net/forum?id=XGODWn7HeJ¬eId=tkYg6DEAsm",
"license": "CC BY 4.0"
},
{
"id": "oxTAi3t8IA",
"forum": "XGODWn7HeJ",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18129/Reviewer_Ewa3",
"reviewer_name": "Reviewer_Ewa3",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 1,
"summary": "This paper introduces a new gating mechanism. It builds on the intuition that having a fixed gating function can be problematic: If all features are salient, then a classical gating function like GELU will saturate and not meaningfully discriminate between these features. To address this, the authors introduce a new gating function which works as follows:\n\nFirst, for each channel they calculate the mean of the positive features. This is a measure of how likely this feature is going to saturate the activation function. Then, they feed these statistics to two small MLPs. These MLPs provide two scaling factors (of the inputs and the outputs).\n\nThe authors argue that this is particularly important for transformer networks, since the attention mechanism in transformers is likely to lead to having many salient activations within a channel. The reason the authors use an MLP rather than deriving scaling factors directly from the inputs is because it is (1) not possible to derive class-specific statistics at test time when the class is unknown, and (2) in shuffled batches the amount of information per-class can be very noisy. Hence, an MLP is a more appropriate way of estimating appropriate scaling factors given the small amount of noisy information found in a single batch.\n\nThe empirical results in the paper are very encouraging.",
"strengths": "* Strong empirical results\n* A very flexible method that applies to many networks/architectures\n* Robust method that seems relatively insensitive to hyperparameters",
"weaknesses": "* A bit heuristic (e.g. using positive-only means)\n* The connection between the theory and the practical algorithm is tenuous (given the use of MLPs in the final algorithm). Although it is interesting to see how the algorithm was motivated by the authors, it does end up feeling a lot like a post-hoc justification. I would prefer it if the authors approach their work as purely experimental and use this space in the paper for more exhaustive empirical validation.\n* Subpar presentation: I found the text needlessly filled with jargon, non-standard terminology and abbreviations, drawing spurious connections to other theories, too densely written, etc.\n * For example, abbreviating \"activation\" to \"Act\" and referring to pre-activations as \"projected responses\" really doesn't help with readability.\n * Then there is the list of newly introduced terms and accompanying abbreviations: non-local tension (NLT), convergence limitation (CL), trivially discriminative gating weights (TDGW), etc.\n * Connections to decision-making and neuronal stimulus-response mechanisms.\n * For example: Figure 1 should be a clear explanation of the problem this paper tries to tackle, in a way that readers can grok it after just reading the abstract. Instead, readers are presented with the following sentence, which is barely comprehensible (and contains several grammar mistakes): \"[...] the origin of the NLT and the key insights behind FleS shows how CL triggers TDGW problem, which in turn neutralizes the influence of external non-local cues through Act. and show two qualitative insights into addressing non-local tension: vertical and horizontal dynamic scaling strategies.\"",
"questions": "* Did you run any ablation studies over the MLP? Does a single linear layer suffice or is it essential to have non-linearities in the MLP?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T21:54:52",
"modification_date": "2025-11-12T14:11:30",
"review_url": "https://openreview.net/forum?id=XGODWn7HeJ¬eId=oxTAi3t8IA",
"license": "CC BY 4.0"
},
{
"id": "vPjKkessIu",
"forum": "XGODWn7HeJ",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18129/Reviewer_HdQ3",
"reviewer_name": "Reviewer_HdQ3",
"rating": 6,
"confidence": 4,
"soundness": 4,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes a novel self-gated activation function, FleS (Flexible Scaling for Self-gated activation), aimed at addressing the Non-Local Tension (NLT) issue in Transformer architectures. From a decision-theoretic perspective, the authors argue that conventional activation functions (e.g., GELU, SiLU) exhibit convergence limitations when handling high-response features, reducing the efficiency of non-local information utilization. FleS adaptively adjusts the activation function’s boundary and steepness via dynamic vertical and horizontal scaling factors (κ_ve, κ_ho). Experiments on ImageNet, CIFAR-100, and COCO benchmarks demonstrate significant performance improvements. The work combines solid theoretical analysis, novel design, and comprehensive experiments, providing a new interpretable perspective for activation function modeling.",
"strengths": "# **Strengths**\n\n1. **Originality and Significance:** The paper identifies and clearly defines a novel and important problem, *Non-Local Tension (NLT)*. It explains why many advanced activation functions that perform well on CNNs fail to provide similar improvements in Transformer architectures. This is not only a technical innovation but also a conceptual contribution, offering a new perspective for understanding and improving activation mechanisms in modern neural networks.\n\n2. **Theoretical Quality:** The paper provides solid theoretical support for both the problem and the proposed solution. The authors construct a clear logical chain: from NLT to *Trivially Discernible Gating Weights (TDGW)*, and then to the root cause, *Convergence Limitation (CL)*. The analysis is rigorous, accompanied by intuitive explanations and formal theorems. FleS is tightly designed around this theory, using *effective average response* (considering only positive responses) to generate scaling factors, which is both a clever and theoretically justified heuristic.\n\n3. **Comprehensive Experimental Validation:** The experiments are thorough and well-designed, covering multiple tasks such as image classification, object detection, and natural language processing. Various network architectures are evaluated, including Swin Transformer, PoolFormer, and ResNet. FleS consistently demonstrates significant performance gains across all settings, strongly supporting its effectiveness, generalization, and robustness.\n\n4. **Clarity:** The paper is well-written, clearly structured, and logically coherent. Each part—from problem formulation, theoretical analysis, method design, to experimental validation—is clearly presented. Figures (e.g., Figures 1 and 2) help readers grasp core concepts and understand FleS’s operational mechanism, making complex theory and methodology accessible.",
"weaknesses": "# **Weaknesses**\n\n1. **Lack of a Unified Analysis Framework:** The paper mainly compares FleS with GELU, Meta-ACON, and other methods. Explaining the relationship between these approaches and FleS from a unified theoretical perspective would strengthen the academic rigor and interpretability of the work.\n\n2. **Complexity and Computational Overhead:** The practical version of FleS introduces an additional MLP, increasing the parameter count (e.g., Swin-Min from 11.9M to 13.8M), approximately a 10% increase that is not entirely negligible. The paper does not report inference speed (FPS); it is recommended to quantify the trade-off between computational overhead and performance improvement.\n\n3. **Hyperparameter Sensitivity:** FleS introduces new hyperparameters, such as the MLP channel reduction rate and the neighborhood size (9×15) for computing statistical measures. These settings are somewhat empirical, and further analysis is needed to understand the sensitivity of model performance to these parameters.\n\n4. **Insufficient Elaboration of the Decision-Theoretic Perspective:** Although the paper claims to be inspired by decision theory, the connection between the theoretical principles and the design of FleS remains high-level. Clearly specifying which decision-theoretic principles directly guided the FleS design would strengthen the motivation and theoretical foundation.",
"questions": "# **Questions**\n\n1. **Interaction with Normalization Layers:** \n FleS does not consider the effects of BN or LN, but these normalization layers are widely used in modern networks. Could LN, which normalizes along the channel dimension, interfere with FleS’s computation of channel-wise statistics?\n\n2. **Scope of Non-Local Tension (NLT):** \n FleS is also effective on CNNs such as ResNet. Does this indicate that the NLT problem exists in modern CNNs as well, or is the observed performance improvement mainly due to the general benefits of the adaptive scaling mechanism?\n\n3. **Implementation Details:** \n In the COCO experiments, how was the 9×15 neighborhood size chosen? Were other sizes tested? For NLP tasks, why do FleS-NLP and FleS-SeqGate use a *token-level indicator* and *depthwise separable 1D convolution*? Could the authors provide intuition or rationale for these design choices?\n\n4. **Applicability to Large Models:** \n Are there any numerical stability or training bottlenecks when applying FleS to large models such as ViT-L, LLaMA, or T5?\n\n5. **Version Consistency:** \n The paper presents multiple FleS variants (FleS-Proto, FleS, FleS-NLP, FleS-SeqGate). Is it possible to provide a unified version that supports multiple tasks without task-specific adaptations?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T14:52:05",
"modification_date": "2025-11-12T14:11:31",
"review_url": "https://openreview.net/forum?id=XGODWn7HeJ¬eId=vPjKkessIu",
"license": "CC BY 4.0"
}
] |
vIcqXbhU0Y
|
https://openreview.net/forum?id=vIcqXbhU0Y
|
Coherent Local Explanations for Mathematical Optimization
| 3.333333
| 4
|
[
4,
4,
2
] |
[
4,
4,
4
] | 3
|
[
"Optimization",
"Explainability",
"Interpretability",
"Sensitivity Analysis",
"Regression"
] |
The surge of explainable artificial intelligence methods seeks to enhance transparency and explainability in machine learning models. At the same time, there is a growing demand for explaining decisions taken through complex algorithms used in mathematical optimization. However, current explanation methods do not take into account the structure of the underlying optimization problem, leading to unreliable outcomes. In response to this need, we introduce Coherent Local Explanations for Mathematical Optimization (CLEMO). CLEMO provides explanations for multiple components of optimization models, the objective value and decision variables, which are coherent with the underlying model structure. Our sampling-based procedure can provide explanations for the behavior of exact and heuristic solution algorithms. The effectiveness of CLEMO is illustrated by experiments for the shortest path problem, the knapsack problem, and the vehicle routing problem.
|
optimization
|
https://openreview.net/pdf?id=vIcqXbhU0Y
| 2025-09-19T17:14:41
| 3
|
[
{
"id": "DbMQxAveE3",
"forum": "vIcqXbhU0Y",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17192/Reviewer_LWrw",
"reviewer_name": "Reviewer_LWrw",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper focuses on explaining mathematical optimization algorithms such as those used to solve the shortest path problem and the knapsack problem. The authors propose **Coherent Local Explanations for Mathematical Optimization (CLEMO)** to address the limitations of existing methods like LIME, which often generate **incoherent explanations** that violate structural constraints.\nExperiments on SPP, KP, and CVRP demonstrate that CLEMO produces **significantly more coherent explanations** than benchmark methods while maintaining comparable fidelity.",
"strengths": "1. **Simplicity and directness:** To address the incoherence of existing methods, the authors incorporate a coherence constraint directly into the optimization objective, ensuring that the generated explanations better satisfy the structural requirements of the problem.\n2. **Model-agnostic nature:** As a local explanation method, CLEMO can be applied to any black-box optimization model without relying on the internal structure of the model.",
"weaknesses": "1. **Performance trade-off:** Although coherence improves, fidelity decreases. While the authors claim that the loss in fidelity is acceptable, in real-world applications this reduction may compromise the practical usefulness of the explanations.\n2. **Limited baselines:** The paper only compares CLEMO with LIME and decision tree–based methods. Many more advanced explanation techniques exist, yet no comparison with them is provided.\n3. **Scalability issues:** The computational cost increases sharply as the problem size grows.\n4. **Dependence on convexity:** The reliance on convexity assumptions restricts the applicability of the proposed method.",
"questions": "1. The example provided in the introduction is confusing. For the original input parameters \\(a_{12}=4.1\\), the model already produces an infeasible solution. To my knowledge, most existing local model-agnostic explanation methods ensure that the surrogate accurately reproduces the prediction for the instance being explained. For example, LIME and SHAP both assign high weight to the original input during surrogate fitting. Could the authors clarify this point?\n2. Building on the previous question, the occurrence of infeasible predictions may indicate that the current setup goes beyond the neighborhood of local explanations. Would it be more appropriate for the user to specify a domain that only produces feasible solutions? Moreover, the proposed regularization introduces another concern.\n Consider a piecewise linear relationship between (x) and (\\theta):\n (x = \\theta) when (\\theta < 0.5), and (x = 0.5) when (\\theta \\geq 0.5),\n with an additional constraint (x \\leq 0.5).\n A baseline method might yield an explanation such as (x = \\theta) or (x = 0.9\\theta), which would produce an infeasible prediction at (\\theta = 1). In contrast, CLEMO’s regularization would favor an explanation closer to (x = 0.5\\theta), ensuring feasibility. However, note that the original explanation remains reasonably accurate for (\\theta < 0.5), thus effectively describing the model’s local behavior. CLEMO’s regularized explanation, on the other hand, may lose fidelity across the entire domain, making it less accurate everywhere. I suggest the authors discuss this trade-off explicitly.\n3. It would be helpful for the authors to provide a more intuitive example to better illustrate the performance of CLEMO.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T10:37:50",
"modification_date": "2025-11-12T13:59:27",
"review_url": "https://openreview.net/forum?id=vIcqXbhU0Y¬eId=DbMQxAveE3",
"license": "CC BY 4.0"
},
{
"id": "LK3blLopUB",
"forum": "vIcqXbhU0Y",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17192/Reviewer_QR8b",
"reviewer_name": "Reviewer_QR8b",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces CLEMO, a method for generating local explanations for mathematical optimization models that ensures coherence between the predicted objective value and decision variables with the underlying problem structure. By incorporating coherence regularizers into a LIME-like framework, the approach aims to provide more reliable explanations for both exact and heuristic solvers. The method is evaluated on several classic optimization problems, including shortest path, knapsack, and vehicle routing, with comparisons to baseline explanation methods.",
"strengths": "+ The paper identifies a practical issue in post‑hoc explanations for optimization.\n+ The theoretical analysis is comprehensive and well‑developed. \n+ Implementation details and appendices support reproducibility.",
"weaknesses": "- The core methodological novelty is limited, as CLEMO primarily extends LIME by adding coherence penalties, a conceptually straightforward adaptation. This is particularly true given the proven redundancy of the objective coherence regularizer for problems with fixed linear objectives.\n\n- The method's reliance on an explicit, differentiable problem formulation for the feasibility regularizer is a major practical limitation. It remains unclear how CLEMO could be applied to black-box commercial solvers or complex heuristics where the internal constraint set is inaccessible.\n\n- The experimental evaluation is incomplete. It lacks ablation studies to dissect the contribution of each regularizer, does not normalize metrics for cross-problem comparison, and omits key baselines from the optimization literature, such as methods based on inverse optimization.",
"questions": "1. Why were the experimental comparisons limited to simple linear models and decision trees, excluding more relevant optimization-specific baselines like SHAP-based or inverse optimization or counterfactual explanation methods?\n\n2. What is the individual contribution of each coherence regularizer? An ablation study showing the performance with only $R_{C_1}$, only \\$R_{C_2}$, and both, would clarify their necessity.\n\n3. The runtime scales poorly with problem size. What specific algorithmic strategies or approximations could be implemented to make CLEMO feasible for large-scale optimization problems?\n\n4. How can the requirement for an explicit problem formulation be relaxed to apply CLEMO to black-box solvers where the internal constraints are not directly available?\n\n5. How sensitive are the explanations to the chosen hyperparameters? Was an ablation study conducted to understand the impact of different $\\lambda$ values on fidelity and coherence?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T00:25:50",
"modification_date": "2025-11-12T13:59:27",
"review_url": "https://openreview.net/forum?id=vIcqXbhU0Y¬eId=LK3blLopUB",
"license": "CC BY 4.0"
},
{
"id": "12t01a1Mxm",
"forum": "vIcqXbhU0Y",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17192/Reviewer_jA3G",
"reviewer_name": "Reviewer_jA3G",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 1,
"summary": "This study aims to enhance model-agnostic local explanation methods for mathematical optimization problems. In the traditional model-agnostic explanation framework, the goal is to provide an interpretable rule that clarifies the prediction made by a black-box model for a specific data instance. Applying this framework to mathematical optimization presents two significant challenges. First, the result of an optimization algorithm is not just a single value, but rather a solution vector along with its corresponding objective value. Second, the mathematical optimization task is typically constrained within a feasible set, meaning that the explanation must adhere to these constraints. To tackle this explanation challenge, the authors propose an adaptation of the LIME method, originally designed for explaining regression tasks. In this present approach, the explanation is described as a vector of interpretable components (linear functions), with each component corresponding to a specific element of the output. Furthermore, the regularized fidelity function used in LIME is replaced with a regularized loss function that penalizes explanations that violate the constraints of the optimization task. This methodology is experimentally validated across three combinatorial optimization problems: shortest-path, knapsack, and vehicle routing.",
"strengths": "**S1.** One of the primary strengths of this paper is its focus on the problem being examined by the authors. Explaining the output of a solving algorithm in relation to the input parameters of a problem instance is more complex than standard post-hoc explanation tasks, as the output is a vector and the explanation must be consistent with the input constraints.\n\n**S2.** This study is well-motivated; the introduction effectively justifies the need for explaining mathematical optimization tasks, particularly for applications in sensitivity analysis and constraint modeling.",
"weaknesses": "**W1.** The notation used in this paper is quite dense and has not been adequately introduced, which leads to ambiguity in many definitions and results. Additionally, the classes of models employed to explain mathematical optimization problems are not clearly defined.\n\n**W2.** The proposed approach is primarily heuristic and lacks substantial theoretical guarantees. For example, Proposition 3.1 is straightforward, and the last paragraph of Section 3 (Lines 282-292) is quite ambiguous. \n\n**W3.** The sampling method is not explained in detail: virtually nothing is said about the underlying probability distribution. Notably, nothing ensures that the parameters drawn at random will result in a feasible problem. \n\n**W4.** The current framework does not consider the “succinctness” of explanations, which plays a critical role in their interpretability. The brief mention in Lines 203-204 is unclear, as the penalty parameter does not always ensure the sparsity of explanations. Furthermore, the sizes of explanations are not reported in the experiments.",
"questions": "Here are some comments and questions related to the above weaknesses:\n\n**C1.** As previously mentioned, the notation is not clearly introduced, which makes the paper quite difficult to understand. For the sake of clarity, the domain and co-domain of the optimization model $ h $ should be defined in Section 2 to help readers grasp what needs to be explained. Additionally, the classes $ \\mathcal{G} $ of explanation models are not formally defined. At the beginning of Section 3, each explanation model $ g$ is described as a $ p+1$-dimensional vector of local explanation models; however, later in Section 3, it is referred to as a vector $\\beta$ of coefficient vectors. This inconsistency is misleading and introduces an excessive amount of notation. I recommend formalizing an explanation as a square matrix of dimension $ (p + 1) \\times (p + 1) $, where each row represents a linear function that explains the $ c$-th component of the output. Furthermore, the class of vectors $ \\beta$ is not properly defined. Are the number and magnitude of the coefficients bounded?\n\n**C2.** The theoretical framework offers almost no theoretical guarantees. As noted earlier, Proposition 3.1 directly follows from equations (7) and (8), but it does not ensure that the computed explanation meets conditions (3) and (4) at convergence. Additionally, the paragraph in Lines 282-292 is quite ambiguous. Line 287 suggests that the objective is to minimize the square loss of a single vector $\\beta_c$, but we should actually be minimizing the square loss of the entire matrix of coefficient vectors. Moreover, in the proof of Theorem A.1 (which is, by the way, better formulated), it is assumed that the matrix in Line 776 is invertible; however, this is generally not the case. I would recommend rewriting the proof using the pseudo-inverse and demonstrating that the resulting solution satisfies conditions (3) and (4).\n\n**C3.** The sampling method should be clearly specified. Currently, there is no information about the probability distribution over the parameter space $\\Theta$. This is a critical aspect of the framework because if a sample $\\theta$ results in an infeasible problem (i.e., the set $X(\\theta)$ is empty), then the explanation task becomes vacuous. The paragraph from Lines 254-259 is too ambiguous; if $\\Theta$ includes infeasible parameter vectors, what is the distribution, and how can we efficiently sample $N$ instances from this distribution in polynomial time?\n\n**C4.** The explanation framework proposed in this study focuses solely on the coherence criterion. While finding coherent explanations is commendable, it is not the only important criterion. Succinctness (or sparsity) is often considered equally vital for interpretability (see, for example, Lage et al. 2019). Unfortunately, this criterion receives little attention in the study. Specifically, the statement in Lines 203-204 is insufficient. Combining the “constraint” regularizer $R_C$ with a “sparsity” regularizer $\\Omega$ does not guarantee that the resulting explanations will be sparse, even in the convex case, because the objective function must balance both regularizers. It would be helpful to include some comments on how we can ensure that explanations are both coherent and sparse. Additionally, the average size (number of nonzero coefficients) of the explanations should be reported in the experiments.\n\n**Reference**\n\nIsaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Samuel J. 
Gershman, Finale Doshi-Velez: Human Evaluation of Models Built for Interpretability. Proceedings of the 7th AAAI Conference on Human Computation and Crowdsourcing (HCOMP), pages 59--67, 2019.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-20T01:27:34",
"modification_date": "2025-11-12T13:59:28",
"review_url": "https://openreview.net/forum?id=vIcqXbhU0Y¬eId=12t01a1Mxm",
"license": "CC BY 4.0"
}
] |
|
3CLscEAR9X
|
https://openreview.net/forum?id=3CLscEAR9X
|
ArtAug: Iterative Enhancement of Text-to-Image Models via Synthesis–Understanding Interaction
| 4.5
| 3.5
|
[
2,
6,
6,
4
] |
[
4,
3,
3,
4
] | 4
|
[
"Diffusion models",
"alignment",
"image synthesis"
] |
The emergence of diffusion models has significantly advanced image synthesis. Recent studies of model interaction and self-corrective reasoning approaches in large language models offer new insights for enhancing text-to-image models. Inspired by these studies, we propose a novel method called ArtAug for enhancing text-to-image models via model interactions with understanding models. In the interactions, we leverage human preferences implicitly learned by image understanding models to provide fine-grained suggestions for image generation models. The interactions can modify the image content to make it aesthetically pleasing, such as adjusting exposure, changing shooting angles, and adding atmospheric effects. The enhancements brought by the interaction are iteratively fused into the generation model itself through an additional enhancement module. This enables the generation model to produce aesthetically pleasing images directly with no additional inference cost. In the experiments, we verify the effectiveness of ArtAug on advanced models such as FLUX, Stable Diffusion 3.5 and Qwen2-VL, with extensive evaluations in metrics of image quality, human evaluation, and ethics. The source code and models will be released publicly.
|
A paper on enhancement methods for text-to-image models.
|
generative models
|
https://openreview.net/pdf?id=3CLscEAR9X
| 2025-09-18T10:03:37
| 4
|
[
{
"id": "7d5GHfTHbh",
"forum": "3CLscEAR9X",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10112/Reviewer_seWm",
"reviewer_name": "Reviewer_seWm",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "Inspired by the recent studies of model interaction and self-corrective reasoning, this paper proposes ArtAug, which is a method for enhancing text-to-image models in terms of image quality and human preference. ArtAug leverages image understanding models to provide fine-grained suggestions for image generation models, and the interaction results can be fused back to the model itself through an additional enhancement module. Experimental results show that ArtAug can enhance existing text-to-image models to generate high-quality, aesthetically pleasing images.",
"strengths": "- Differential Training Method: The proposed differential training methods is interesting and provide steady improvements through multiple iterations. Experimental results in section 4.2 show that the aesthetic score, CLIP score, and similarity are steadily improved throughout the iterations.\n- Case Studies: By presenting numerous before-and-after image comparisons, they provide a straightforward and immediate impression of the practical effects of ArtAug.\n- Clarity of Presentation: The paper is well-organized. The overview of the ArtAug framework in Figure 2 is particularly effective, offering an intuitive and comprehensive illustration of the entire multi-stage pipeline.",
"weaknesses": "- Insufficient Experimental Comparisons: The experiments show that Base Model + ArtAug outperforms the Base Model. However, they lack baseline methods, which is a crucial component in experiments. The paper introduction claims that the existing three types of methods such as prompt engineering and alignment training have their \"certain limitations\", but the paper provides no direct empirical evidence to support this. To properly situate ArtAug's contribution, it should be benchmarked against these alternatives.\n- Insufficient Discussion of Related Work: Section 2.2, \"Aligning Models with Human Preferences,\" focuses almost on DPO methods that only learn the diffusion model weights. This narrows the scope of \"Aligning Models with Human Preferences\". For example, many recent studies aligning models with human preference by learning an isolate model outside the diffusion model with reinforcement learning such as Parrot [1]. A broader discussion of methods for alignment with human preferences is necessary.\n\n[1] Parrot: Pareto-optimal Multi-Reward Reinforcement Learning Framework for Text-to-Image Generation",
"questions": "How does ArtAug address the \"certain limitations\" of existing text-to-image methods? With many recent works continuously advancing this field, could you please provide evidence on the unique advantages of ArtAug?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T13:43:12",
"modification_date": "2025-11-12T12:24:50",
"review_url": "https://openreview.net/forum?id=3CLscEAR9X¬eId=7d5GHfTHbh",
"license": "CC BY 4.0"
},
{
"id": "JYPMqyaM4o",
"forum": "3CLscEAR9X",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10112/Reviewer_M9ao",
"reviewer_name": "Reviewer_M9ao",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes ArtAug, a synthesis–understanding interaction framework that uses a multimodal VLM (“AI art director”) to suggest fine-grained, region-conditioned edits to an image generated by a text-to-image model, then distills those improvements back into the generator via a differential LoRA “enhancement module.” The pipeline iterates: generate, understand, refine, construct image pairs, filter, and train differential LoRA, progressively improving aesthetics without extra inference cost at test time. Experiments on FLUX.1[dev] and Stable Diffusion 3.5 report consistent gains on aesthetic/CLIP and multiple preference metrics, plus a double-blind human study and a small ethics check.",
"strengths": "1. Writing and structure. The paper is easy to follow; the problem setting, modules (generation, understanding, enhancement), and the iterative loop are clearly laid out with an informative figure and concise pseudo-code. \n\n2. Practical, well-motivated method with diverse evaluation. The differential LoRA design that learns only the delta between original and refined images is simple and pragmatic; the study includes basic metrics, several preference models, a double-blind human comparison, and an ethics sanity-check—together suggesting the gains are not metric-specific.",
"weaknesses": "1. Although results are shown on FLUX.1[dev] and SD-3.5, the study would be stronger with additional architectures and with side-by-side comparisons against established alignment methods or prompt/data-refinement baselines under the same prompts and budgets.\n\n2. No comparison with other text–image alignment and aesthetics methods. The paper primarily compares “base vs. base+ArtAug”; adding head-to-head numbers against recent aesthetics/alignment enhancers and reporting statistical significance for human studies would better support the claim of improved alignment and appeal.",
"questions": "Please see the weaknesses part.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T17:02:01",
"modification_date": "2025-11-12T12:24:50",
"review_url": "https://openreview.net/forum?id=3CLscEAR9X¬eId=JYPMqyaM4o",
"license": "CC BY 4.0"
},
{
"id": "uGpPRdJHqD",
"forum": "3CLscEAR9X",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10112/Reviewer_YUYD",
"reviewer_name": "Reviewer_YUYD",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes ArtAug, a new framework that enhances text-to-image diffusion models via synthesis-understanding interaction. ArtAug introduces an interactive mechanism between a generation module and an understanding module. These interactions produce enhanced image pairs, which are then used for differential LoRA training, enabling the model to internalize the improvements without additional inference cost. Experimental results on FLUX.1[dev] show consistent improvements across aesthetic metrics and human evaluations, while maintaining text-image alignment.",
"strengths": "The paper introduces a new paradigm for improving generative models through cross-model interaction between synthesis and understanding. \n\nThe paper is well-written and easy to follow.",
"weaknesses": "- Maybe introducing some new metrics would make the evaluation session stronger and more convincing. Consider FID or other metric for high-quality generated model like Flux and SD 3.5 (Enhancing Reward Models for High-quality Image Generation: Beyond Text-Image Alignment, ICCV 2025) maybe would help.\n\n\n- The related work section could be enhanced by incorporating recent works in enhancement of text-to-image models:\n\n\n[1] Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs. ICML 2024\n\n[2] Dynamic Prompt Optimizing for Text-to-Image Generation. CVPR 2024\n\n[3] Optimizing Prompts for Text-to-Image Generation. NeurIPS 2023",
"questions": "What is the computational cost and time cost of generating the interactive pairs, and how does it scale with prompt complexity?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T13:48:09",
"modification_date": "2025-11-12T12:24:51",
"review_url": "https://openreview.net/forum?id=3CLscEAR9X¬eId=uGpPRdJHqD",
"license": "CC BY 4.0"
},
{
"id": "lBRj8wJyoC",
"forum": "3CLscEAR9X",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10112/Reviewer_TemC",
"reviewer_name": "Reviewer_TemC",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces ArtAug, a novel framework for iteratively enhancing the aesthetic quality of text-to-image models without relying on extensive human annotation. The core contribution is a \"synthesis-understanding\" loop where a powerful multimodal language model (MLLM) acts as an \"AI Art Director,\" providing fine-grained, region-specific textual suggestions to improve an initially generated image. These suggestions are used to create an enhanced version, forming a high-quality pairwise dataset of (original, enhanced) images. The authors then propose a differential LoRA training method to distill this aesthetic improvement into a compact module that is fused back into the base model. This process is iterated to progressively refine the model's generative capabilities, demonstrably improving aesthetic scores and human preferences on strong baselines like FLUX and Stable Diffusion 3.5 without adding any inference cost.",
"strengths": "1. The paper provides extensive and compelling qualitative visualizations. The side-by-side comparisons of images before and after applying ArtAug (e.g., in Figures 1, 5, and 6) clearly and intuitively demonstrate the significant aesthetic improvements, providing strong visual evidence for the method's effectiveness.\n2. The core idea of creating a \"synthesis-understanding\" loop by coupling a generation model with an understanding model is novel and insightful. It's good to see the understanding MLLM model can achieve an aesthetic similar to that of humans.\n3. The proposed differential training mechanism using two separate LoRA modules is a clever and effective technical choice. By first anchoring the model to the original image with one LoRA and then learning only the aesthetic \"delta\" with a second, the method effectively disentangles reconstruction from enhancement, which likely leads to more stable and targeted training.\n4. The analysis presented in Figure 3 shows a consistent and promising trend of improvement across multiple iterations. The framework does not appear to suffer from an immediate performance bottleneck, suggesting its potential for sustained and progressive enhancement of the base model's capabilities.\n5. The paper is well-written and clearly organized. The methodology is presented in a logical, step-by-step fashion that makes the entire framework easy to understand and follow.",
"weaknesses": "1. the presented results are positive, the experimental validation lacks depth. The paper would be significantly stronger with a more comprehensive analysis, including:\n 1) **Comparisons to SOTA Alignment Methods:** The work is framed as an alternative to alignment training like DPO or RLHF. However, there are no direct comparisons to models fine-tuned with these methods, making it difficult to gauge the relative effectiveness and trade-offs of ArtAug.\n 2) **Ablation Studies:** Key design choices are not validated. For instance, the impact of the chosen MLLM on the quality of suggestions is critical but only briefly discussed in the appendix. An ablation on the number of generated image pairs (5k initial pairs seems relatively small) and its correlation with performance gains would be crucial to substantiate the claim of scalability.\n2. There is a noticeable gap between the striking improvements shown in the qualitative figures and the more modest results from the human evaluation (Table 3). The win rates of ~46% and ~51% are only slightly better than the baseline, which raises questions about whether the visualized examples are cherry-picked or if the aggregate improvement is less significant than implied.\n3. The paper correctly states that the final model has no inference overhead. However, the iterative training process itself—involving generation, MLLM-based refinement, filtering, and differential training—is computationally intensive. While this is likely cheaper than large-scale human annotation, the modest quantitative gains from human evaluation challenge the overall cost-benefit trade-off of this complex pipeline. A more detailed analysis of the training cost versus the achieved improvement would be beneficial.\n4. The manuscript contains several typographical errors (e.g., \"EVALUIATION\" in the heading for Section 4.1.5) that detract from its overall polish. A thorough proofreading is recommended to improve the presentation quality.\n5. This paper lacks REPRODUCIBILITY STATEMENT and THE USE OF LLMS",
"questions": "1. The paper positions ArtAug as a scalable alternative to human-feedback-based alignment methods like DPO and RLHF. However, the experiments lack a direct comparison. Could you provide any quantitative results, even on a smaller scale, comparing a model enhanced with ArtAug against a similarly-sized model fine-tuned with a public preference dataset (e.g., Pick-a-Pic)?\n2. The choice of Qwen2-VL-72B as the understanding model seems critical to the success of the pipeline.\nHow sensitive is the quality of the generated data to the specific MLLM used? For instance, what would be the impact of using a smaller open-source model or a more powerful closed-API model like GPT-4o? Could you also elaborate on the failure modes of this interaction? What happens if the MLLM provides nonsensical or aesthetically poor suggestions, and how effectively does your filtering process (aesthetic score, CLIP similarity, manual review) mitigate this?\n3. The experiments generate 5k initial pairs per iteration, which are filtered down to a small training set (1-2%). This number seems relatively low for training large models. Could you provide any analysis on the relationship between the number of generated pairs and the performance improvement?\n4. There appears to be a disconnect between the dramatic improvements shown in the qualitative figures and the more modest win rates in the human evaluation (Table 3), where ArtAug is only marginally preferred over the baseline. Can you comment on this discrepancy?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-21T13:25:54",
"modification_date": "2025-11-12T12:24:51",
"review_url": "https://openreview.net/forum?id=3CLscEAR9X¬eId=lBRj8wJyoC",
"license": "CC BY 4.0"
}
] |
RnNqSYqEcm
|
https://openreview.net/forum?id=RnNqSYqEcm
|
Online Multi-objective Convex Optimization: A Unified Framework and Joint Gradient Descent
| 3
| 3.5
|
[
2,
4,
2,
4
] |
[
4,
3,
3,
4
] | 4
|
[
"online multi-objective convex optimization",
"Pareto front",
"primal-dual method"
] |
Online Convex Optimization (OCO) usually addresses the learning task with a single objective; however, in real-world applications, multiple conflicting objectives often need to be optimized simultaneously. In this paper, we present an Online Multi-objective Convex Optimization (OMCO) framework with a novel multi-objective regret. We prove that, when the number of objectives in OMCO decreases to one, the regret is equal to the regret in OCO, thus unifying the OCO and OMCO frameworks. To facilitate the analysis of the proposed novel regret, we derive its equivalent form using the strong duality theory of convex optimization. Moreover, we propose an Online Joint Gradient Descent algorithm and prove that it achieves a sublinear multi-objective regret according to the equivalent regret form. Experimental results on several real-world datasets validate the effectiveness of our proposed algorithm.
|
optimization
|
https://openreview.net/pdf?id=RnNqSYqEcm
| 2025-09-04T17:10:44
| 4
|
[
{
"id": "EMfLr82i5c",
"forum": "RnNqSYqEcm",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2016/Reviewer_Ggst",
"reviewer_name": "Reviewer_Ggst",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "The paper studies Online multi-objective convex optimization by introducing multi-objective regret. The paper proposes a new multi-objective regret in OMCO based on translative scalarization that unifies the single and multi-objective frameworks. After an initial characterization of the problem, the paper proceeds to give algorithms based on the primal-dual framework. The paper is concluded by a set of experimental evaluations.",
"strengths": "The paper tackles an important extension of online convex optimization to the multi-objective setting, proposing a unified regret definition and algorithmic framework.",
"weaknesses": "I already reviewed this paper at NeurIPS 2025. I had included numerous points that concerned me, as well as several minor mistakes and typos. Unfortunately, none of these issues have been addressed in the updated version.\n\nThe central conceptual objection I had remains unresolved:\nThe paper still does not address the trivial algorithmic baseline: maintaining a separate regret minimizer for each objective and a meta-level regret minimizer that chooses which objectives to follow.\n\nThe lack of this discussion raises doubts about the necessity and novelty of the proposed method.\n\nAlso, the first half of the proof of Theorem 3 is just the standard calculations for OGD.",
"questions": "see weknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T18:47:03",
"modification_date": "2025-11-12T10:53:09",
"review_url": "https://openreview.net/forum?id=RnNqSYqEcm¬eId=EMfLr82i5c",
"license": "CC BY 4.0"
},
{
"id": "TuhEG5zPqd",
"forum": "RnNqSYqEcm",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2016/Reviewer_XQ86",
"reviewer_name": "Reviewer_XQ86",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes a unified framework for Online Multi-objective Convex Optimization (OMCO), generalizing the standard Online Convex Optimization (OCO) setting to multiple objectives. The authors introduce a new definition of multi-objective regret based on translative scalarization, show that it reduces to classical regret when the number of objectives is one, and derive an equivalent dual form via convex duality. They further propose an algorithm named Online Joint Gradient Descent (OJGD) that updates both the primal decision and the objective weights jointly in an online manner. The paper provides sublinear regret bounds under standard assumptions and some empirical validation on convex regression tasks and multi-task learning benchmarks.",
"strengths": "1. The problem itself (OMCO) is important and relevant to the ICLR community, especially with the clear connections to multi-task learning.\n\n2. The unification of OCO and OMCO within a single regret framework is interesting and conceptually sound.\n\n3. The proposed OJGD algorithm is simple and computationally efficient (avoiding the QP of min-norm methods).",
"weaknesses": "1. The paper spends a lot of time building up the new regret from Definition 4, based on translative scalarization. However, Theorem 2 immediately shows that this is equivalent to min-max problem. This equivalent form in Eq. (6) looks very much like a standard minimax regret, i.e., finding the set of weights $\\lambda$ that defines the best scalarized regret against a learner. The idea of finding the best post-hoc scalarization is not new. The unification part (Theorem 1) also feels like an expected outcome. So, the contribution in Definition 4 feels more like a (slightly complex) re-formulation of a known concept rather than a fundamentally new performance metric.\n\n2. The algorithm is designed to solve the problem in Eq. (9), which is a classic online minimax (or saddle-point) problem. The update rules in Eq. (10) and (11) are a standard application of Online Gradient Descent-Ascent applied to the instantaneous loss $\\lambda^T {F}_t(x)$. The addition of the $\\alpha_t \\Delta(\\lambda_t)$ term is a minor modification (a regularization) to pull the weights back towards the initial $\\lambda_1$. This is a well-known algorithmic template.",
"questions": "1. Could you please clarify the novelty of the regret definition compared to the standard concept of a minimax regret (i.e., finding the optimal $\\lambda^*$ on the simplex that minimizes the weighted-sum regret)? The \"unification\" in Theorem 1 seems to follow directly from this, so the core contribution isn't entirely clear to me.\n\n2. How does the proposed OJGD algorithm (Eq. 10/11) fundamentally differ from a standard Online Gradient Descent-Ascent (OGDA) applied to the instantaneous game $\\mathcal{L}_t(x_t, \\lambda_t) = \\lambda_t^T F_t(x_t)$? It looks very similar to existing primal-dual methods for online saddle-point problems.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T04:20:26",
"modification_date": "2025-11-12T10:53:09",
"review_url": "https://openreview.net/forum?id=RnNqSYqEcm¬eId=TuhEG5zPqd",
"license": "CC BY 4.0"
},
{
"id": "PdnjQbacN5",
"forum": "RnNqSYqEcm",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2016/Reviewer_6Pcp",
"reviewer_name": "Reviewer_6Pcp",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "This paper studies online multi-objective convex optimization (OMCO). A regret definition is proposed, derived from Translative scalarization. The authors show that this regret recovers the classical regret in online convex optimization (OCO) when the number of objectives is one. They further propose an algorithm that optimizes both action and the objective weights, and show that this algorithm achieves sub-linear regret under some assumptions.",
"strengths": "The paper is clearly structured. The core proofs are clear and appear technically correct on a first pass. The dynamic regret is provided in the appendix.",
"weaknesses": "Limited novelty compared to [Jiang et al, 2023]. I found the theoretical contribution of this paper to be weak. The proposed regret is essentially the same as [Jiang 2023], see Proposition 1, under convex conditions where we can intechange min and max. Besides, the algorithm seems to be a specialization of mirror-descent–style methods with Euclidean geometry (happy to be corrected if the authors think otherwise). The paper should make the precise relationship explicit and clarify what is genuinely new.\n\nUnconvincing discussion of “non-negative regret”. This paper claims in line 128-131 that [Jiang 2023] restrict the regret to be non-negative and positions this work as removing that restriction. In my opinion, this argument is weak since regret is almost non-negative. If negative values are possible under the authors' definition, the paper should provide an example demonstrating why this is meaningful. \n\nTheorem 1 shows that multi-objective regret is equal to the regret in OCO framework when p=1. This is very simple, expected, and does not advance understanding of the multi-objective case.\n\nThe experiments need more details. For example, in Fig 1 (a), how is the average regret computed at each round? How to compute the optimal \\lambda^* at each round?",
"questions": "How to compute the average regret in Fig 1(a)?\n\nWhat is the technical novelty compared to [Jiang 2023]?\n\nIn Assumption 2, is it the loss to be convex instead of its gradient?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T23:27:23",
"modification_date": "2025-11-12T10:53:09",
"review_url": "https://openreview.net/forum?id=RnNqSYqEcm¬eId=PdnjQbacN5",
"license": "CC BY 4.0"
},
{
"id": "0wM5uGcjtd",
"forum": "RnNqSYqEcm",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2016/Reviewer_mgg6",
"reviewer_name": "Reviewer_mgg6",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper investigates online multi-objective convex optimization (OMCO), where a multi-objective optimization problem has no unique optimal solution, but a set of efficient solutions that do not dominate each other. The authors first show that the regret of OMCO is equal to that in classical OCO when the number of objectives decreases to one. Furthermore, they propose an Online Joint Gradient Descent algorithm, which achieves a sublinear multi-objective regret by the upper bound of regret. Finally, they also conduct experiments to validate the effectiveness of their proposed algorithm.",
"strengths": "Overall, this paper is clearly written. The authors propose a novel multi-objective metric that improves upon previous work (Jiang et al., 2023). Moreover, the multi-objective regret can be reduced to the classical online setting regret. Finally, the experimental evaluation is thorough and convincing.",
"weaknesses": "This paper is primarily theoretical. However, the theoretical support is not entirely convincing. Regarding Theorem 1, I believe its theoretical contribution may not fully justify the designation of a theorem, and it might be more appropriate to present it as a proposition. In addition, my main concern lies in the significance of the proposed multi-objective regret and the corresponding OJGD algorithm. More specific issues are listed in the **Questions** section.",
"questions": "**Q1:** In Theorem 2, the authors present an equivalent form of the multi-objective regret. However, minimizing this metric is essentially equivalent to optimizing a weighted regret with arbitrarily chosen weights, due to $\\min_{\\lambda}$. Therefore, the proposed multi-objective regret may lack intrinsic significance. \n\n**Q2:** In the algorithmic development, the authors only consider minimizing the upper bound of the multi-objective regret in Eq. (9) rather than the original metric. It is evident that Eq. (8) represents a worst-case multi-objective target. Hence, it is unclear whether the proposed OJGD algorithm, which aims to optimize this worst-case formulation, provides substantial value.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T18:05:30",
"modification_date": "2025-11-12T10:53:10",
"review_url": "https://openreview.net/forum?id=RnNqSYqEcm¬eId=0wM5uGcjtd",
"license": "CC BY 4.0"
}
] |
|
q1Waov7fd2
|
https://openreview.net/forum?id=q1Waov7fd2
|
Normalized Matching Transformer
| 2
| 3.75
|
[
2,
2,
2,
2
] |
[
4,
3,
4,
4
] | 4
|
[
"Keypoint Matching",
"Graph Matching",
"Normalized Transformer",
"Hyperspherical Learning"
] |
We introduce the Normalized Matching Transformer (NMT), a deep learning approach for efficient and accurate sparse keypoint matching between image pairs. NMT consists of a strong visual backbone, geometric feature refinement via SplineCNN, followed by a normalized transformer for computing matching features. Central to NMT is our hyperspherical normalization strategy: we enforce unit-norm embeddings at every transformer layer and train with a combined contrastive InfoNCE and hyperspherical uniformity loss to yield more discriminative keypoint representations. This novel architecture/loss combination encourages close alignment of matching image features and large distance between non-matching ones not only at the output level, but at each layer. Despite its architectural simplicity, NMT sets a new state-of-the-art performance on PascalVOC and SPair-71k, outperforming BBGM (Rolínek et al. 2020), ASAR (Ren et al. 2022), COMMON (Lin et al. 2023) and GMTR (Guo et al. 2024) by 5.1% and 2.2%, respectively, while converging in at least 1.7× fewer epochs compared to other state-of-the-art baselines. These results underscore the power of combining pervasive normalization with hyperspherical learning for geometric matching tasks.
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
|
https://openreview.net/pdf?id=q1Waov7fd2
| 2025-09-17T17:55:07
| 4
|
[
{
"id": "YCKIg78f08",
"forum": "q1Waov7fd2",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8931/Reviewer_Ygxx",
"reviewer_name": "Reviewer_Ygxx",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "This paper proposes Normalized Matching Transformer for sparse matching between image pairs. \nThe architecture consists of a feature backbone, Spline CNN for feature refinement, and a normalized transformer to yield the matching features.\nThe authors propose the hyperspherical normalization strategy as the central component of NMT, enforcing unit-norm embeddings at every transformer layer, and yielding more discriminative keypoint features by a combined contrastive InfoNCE loss.\nQuantitative results on the PASCAL-VOC and SPair-71k datasets show that NMT obtains SoTA results, with the use of InfoNCE and hyperspherical loss showing the highest performance gain in the ablation experiments.",
"strengths": "- The overall architecture is simple and straightforward - feature extractor, feature refinement, and feature matching. \n\n- The proposed loss function (InfoNCE + hyperspherical loss) is effective and leads to strong performance gains.",
"weaknesses": "- Weak algorithmic novelty. The paper builds on well-built foundations of strong feature extractor, feature refinement via Spline CNN, and transformer-based feature matching via alternating self- and cross- attention of features. While the InfoNCE loss and hyperspherical loss brings about dramatic improvements, the introduction of such contrastive losses is also a well-known concept in image matching [1].\n\n- Lack of comparative experiments across a large body of baseline work on semantic matching. The current evaluation setting only considers the case when all the source and target keypoints are known, and thus the corresponding metric (accuracy) is being used. However, a larger body of work, e.g., [2][3][4][5], perform semantic matching where only the source keypoints are known, and the given architecture of NMT can be easily applied to such settings as well (e.g., by performing top-1 similarity for each source keypoint).\n\n- Lack of application on related sparse geometric matching work. SuperGlue is in fact a sparse matching method, as they rely on keypoints selected from SuperPoint. In that case, how would the proposed method fare in sparse matching scenarios given geometric matching datasets such as HPatches, Aachen Day-Night or MegaDepth datasets? The authors refer to SuperGlue and LightGlue as dense keypoint matching methods, but LoFTR is the only dense keypoint matching method among the three methods listed in L191. If the authors meant to use 'sparse' to mean 'a little number of keypoints', that would have to be specified more clearly. \n\n- Wrong writing formatting in the citations. All citations seem to be in `\\cite{}` instead of `\\citep{}` or `\\citet{}`.\n\n[1] Choy et al., \"Universal Correspondence Network\", 2016 \\\n[2] Cho et al., \"Cost Aggregation Transformers for Visual Correspondence\", 2021 \\\n[3] Kim et al., \"TransforMatcher: Match-to-Match Attention for Semantic Correspondence\", 2022 \\\n[4] Zhang et al., \"A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence\", 2023",
"questions": "- What would the authors propose as the main algorithmic novelty of NMT?\n\n- How does NMT fare against semantic matching baseline methods, which are also evaluated on SPair-71k and PASCAL-VOC datasets?\n\n- How does NMT perform compared to other sparse geometric-matching methods?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T22:30:38",
"modification_date": "2025-11-12T12:11:17",
"review_url": "https://openreview.net/forum?id=q1Waov7fd2¬eId=YCKIg78f08",
"license": "CC BY 4.0"
},
{
"id": "aKbf3OKhBs",
"forum": "q1Waov7fd2",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8931/Reviewer_MqaQ",
"reviewer_name": "Reviewer_MqaQ",
"rating": 2,
"confidence": 3,
"soundness": 1,
"contribution": 1,
"presentation": 1,
"summary": "This paper proposes the Normalized Matching Transformer (NMT) for sparse keypoint matching. The method integrates a Swin Transformer backbone, a SplineCNN for geometric feature refinement, a normalized transformer decoder, and a combination of InfoNCE and hyperspherical losses. The authors report state-of-the-art results on PascalVOC and SPair-71k datasets, with notably faster convergence.\n\nWhile the empirical results are strong, the paper suffers from fundamental issues regarding its core contribution, narrative, and experimental validation. The work is primarily presented as a composition of existing components from different domains (e.g., normalized transformers from NLP, SplineCNN from GNNs, hyperspherical losses), without delivering a novel insight or a clear, unifying principle for the problem of semantic keypoint matching. The narrative is largely experimental, reading like a technical report that documents a successful recipe but fails to provide a deeper understanding for the reader. Furthermore, the ablation studies are insufficient to substantiate the claimed contributions of key components beyond the use of a powerful backbone.",
"strengths": "The reported performance on PascalVOC and SPair-71k is impressive and exceeds current state-of-the-art methods.\n\nThe training convergence speed is notably faster than several compared baselines, which is a practical advantage.",
"weaknesses": "Lack of Conceptual Novelty and Insight: The main weakness of this paper is the questionable nature of its innovation. The architecture is a combination of well-known, off-the-shelf modules: a Swin backbone (from general vision), SplineCNN (from geometric deep learning), a normalized transformer (from recent NLP literature), and a standard contrastive loss (InfoNCE) paired with a hyperspherical uniformity loss. The paper positions the \"pervasive normalization\" as a key contribution, but this feels more like an engineering choice—applying a recently successful technique from another field—rather than an insight derived from an analysis of the semantic matching problem itself. The work does not answer why this particular combination is conceptually suited for keypoint matching beyond the fact that it yields higher numbers.\n\nInadequate Narrative and Scholarly Presentation: The paper is written as a sequence of technical decisions and experimental results. It fails to build a compelling scientific narrative. It does not sufficiently motivate why this specific combination of components is necessary from a theoretical or intuitive perspective, nor does it critically discuss the limitations or failure modes of the proposed approach. A reader is left with a \"what\" (the results) but not a clear \"why\" (the underlying reason for its success), which limits the paper's value to the community as more than a data point for a specific configuration.\n\nInsufficient and Unconvincing Ablation Studies: The ablation study in Table 4 is critically flawed and does not adequately support the authors' claims.\n\nIt does not include an ablation for the InfoNCE loss, which is claimed to be a central part of the method. The ablation only replaces the combined (InfoNCE + HS) loss with a cross-entropy loss, which is a drastic change. The individual contribution of InfoNCE versus the hyperspherical loss remains completely unquantified.\n\nThe results strongly suggest that the Swin-Large backbone is the primary driver of performance. The ablation shows that using a VGG backbone causes a -4.9% drop, while all other modifications (removing augmentation, layer loss, or using a vanilla transformer) result in smaller deficits (≤ -2.6%). This raises a serious question: how much performance gain is truly attributable to the novel aspects of the matching architecture (SplineCNN, normalized transformer, losses) versus simply using a much more powerful feature extractor? The current experiments cannot rule out the possibility that the marginal gains from other components are a result of cherry-picking or hyperparameter tuning rather than a fundamental improvement.",
"questions": "What is the specific, isolated performance contribution of the InfoNCE loss, separate from the hyperspherical loss?\n\nGiven the massive performance gap introduced by the backbone switch (Swin vs. VGG), can you provide a more detailed ablation that clearly disentangles the performance contributions of the SplineCNN, the normalized transformer, and the loss functions when using the same backbone?\n\nBeyond achieving high scores, what specific challenge in semantic keypoint matching does the \"hypersphere-centric paradigm\" solve that previous methods struggled with? Can you provide a qualitative or quantitative analysis that demonstrates this?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T00:38:31",
"modification_date": "2025-11-12T12:11:17",
"review_url": "https://openreview.net/forum?id=q1Waov7fd2¬eId=aKbf3OKhBs",
"license": "CC BY 4.0"
},
{
"id": "VY5tjCCA6n",
"forum": "q1Waov7fd2",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8931/Reviewer_HReK",
"reviewer_name": "Reviewer_HReK",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes Normalized Matching Transformer (NMT) for sparse semantic keypoint matching between image pairs. The method combines a Swin transformer backbone, SplineCNN for geometric refinement, and a normalized transformer decoder with hyperspherical normalization at every layer. The training incorporates InfoNCE and layer-wise hyperspherical uniformity losses. The paper reports state-of-the-art results on PascalVOC (88.7%) and SPair-71k (86.7%), outperforming recent methods like GMTR and COMMON by 5.1% and 2.2% respectively, while achieving 1.7× faster convergence.",
"strengths": "1. **Strong empirical results**: Substantial improvements on both benchmarks - +5.1% on PascalVOC, +2.2% on SPair-71k, with best performance on 17/20 and 13/18 categories respectively (Tables 2-3).\n2. **Informative ablations**: Table 4 clearly shows loss function contributes most (-15.1%), followed by backbone (-4.9%) and normalized transformer (-2.6%), validating design choices.\n3. **Faster convergence**: Only 6 epochs vs 10-16 for baselines, demonstrating improved optimization efficiency.\n4. **Clear presentation**: Effective visualizations (Figures 1-3) and comprehensive implementation details (Table 1) aid reproducibility.",
"weaknesses": "### Critical Issues\n\n1. **Questionable problem relevance for ICLR 2026**: The paper exclusively evaluates on legacy sparse semantic keypoint matching benchmarks (PascalVOC from 2010, SPair-71k from 2019) without demonstrating practical applications or broader impact. The introduction and conclusion mention applications vaguely (\"feature matching problem\", \"geometric tasks\") but provide no concrete use cases, real-world deployments, or downstream task evaluations. For an ML-focused (not CV-focused) venue like ICLR, the paper should demonstrate why this problem matters in the field of AI in the present era and how the insights generalize beyond these specific benchmarks. Related dense matching methods (SuperGlue, LoFTR) have clear applications in SLAM, robotics, and AR/VR - what are the equivalent applications here?\n2. **Undefined evaluation metric**: \"Matching accuracy\" is never formally defined. What exactly is \"intersection filtering\"? What percentage of keypoints are filtered? \n3. **Unjustified scope restriction**: The method is restricted to sparse matching when core components (normalized transformer, contrastive losses) apply equally to dense matching. No justification provided for this limitation, no discussion of whether extension is possible, and no explanation of why sparse-specific methods are needed in the field of AI in the present era.\n4. **Incremental novelty**: Combines existing techniques (Swin/SplineCNN/nGPT/InfoNCE/hyperspherical loss) with only minor addition of linear layer-wise weighting. No exploration of alternatives or theoretical justification for the simplest weighting scheme.\n5. **Missing broader impact**: What insights transfer to other domains? Normalized features with cosine similarity are already standard in face recognition, contrastive learning, and retrieval. Contribution to broader ICLR community is unclear.\n\n### Moderate Issues\n\n1. **Training-inference mismatch**: Sinkhorn only at inference while training uses raw cosine similarities. Impact never analyzed.\n2. **Incomplete details**: Missing Sinkhorn hyperparameters, temperature initialization, statistical significance (no error bars), and failure mode analysis despite showing failure cases (Figure 4).",
"questions": "### Critical\n\n1. **Can you demonstrate the method on at least one real-world application or downstream task?** (e.g., texture transfer, 3D shape correspondence, robotic manipulation)\n2. **Can you extend to dense matching with results on HPatches or MegaDepth?** If not feasible, provide detailed technical discussion of what prevents extension.\n3. **Please formally define \"matching accuracy\" with mathematical formula.** How many keypoints are filtered on average?\n4. **What specific applications of sparse semantic matching cannot be addressed by dense matching or foundation models** (CLIP, SAM)?\n\n### Secondary\n\n1. Why not use differentiable Sinkhorn during training? What's the performance impact?\n2. Your VGG16 result (83.8%) slightly outperforms GMTR with Swin (83.6%), suggesting loss is more important than backbone. Please clarify.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T20:14:30",
"modification_date": "2025-11-12T12:11:17",
"review_url": "https://openreview.net/forum?id=q1Waov7fd2¬eId=VY5tjCCA6n",
"license": "CC BY 4.0"
},
{
"id": "4Fgi4v3TSR",
"forum": "q1Waov7fd2",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8931/Reviewer_9m5L",
"reviewer_name": "Reviewer_9m5L",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 1,
"presentation": 1,
"summary": "The paper proposes a novel model for keypoint matching between image pairs. The approach introduces unit hyperspherical normalization applied at each layer, combined with a global normalization step. Experimental results on two benchmark datasets demonstrate that the proposed method outperforms existing approaches.",
"strengths": "* The proposed NTM model achieves better performance compared to related work on two datasets: Pascal VOC and SPair-71k.\n* The normalization strategy has a positive impact, as evidenced by the ablation study.",
"weaknesses": "* The authors described that the model is faster because it requires fewer training epochs. However, wall-clock time is a better metric for comparing efficiency between models.\n* The Method section could be significantly improved. The current version does not clearly explain the motivation behind each model component, and the overall narrative looks like an assembly of separate modules rather than a cohesive design.\n* Figure 2 presents several issues: the left portion appears blurred; it displays cosine similarity between the same feature vectors (f1 or f1), which should be zero since they are identical; and the font size is too small for readability.\n* The proposed model shows a conceptual overlap with SuperGlue (Sarlin et al., 2020). Both methods employ an attentional graph neural network and a matching layer based on the assignment problem using the Sinkhorn algorithm. The paper does not sufficiently discuss the distinctions between the proposed method and SuperGlue, nor does it include a direct comparison in the experimental section.",
"questions": "* What are the key differences between SuperGlue and the proposed approach?\n* How does the model perform without the cross-attention mechanism or when using alternative visual feature extractors?\n* In Equation (3), is there a typo? The summation should likely range from j = 1 to m, since the matrix C is of size m × m.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T00:37:35",
"modification_date": "2025-11-12T12:11:18",
"review_url": "https://openreview.net/forum?id=q1Waov7fd2¬eId=4Fgi4v3TSR",
"license": "CC BY 4.0"
}
] |
|
TRM3GP3u2O
|
https://openreview.net/forum?id=TRM3GP3u2O
|
PSRT: Accelerating LRM-based Guard Models via Prefilled Safe Reasoning Traces
| 4
| 3.75
|
[
4,
6,
4,
2
] |
[
5,
3,
4,
3
] | 4
|
[
"AI Safety",
"LRM",
"Inference acceleration",
"Guard Model"
] |
Large Reasoning Models (LRMs) have demonstrated remarkable performance on tasks such as mathematics and code generation. Motivated by these strengths, recent work has empirically demonstrated the effectiveness of LRMs as guard models in improving harmful query detection. However, LRMs typically generate long reasoning traces during inference, causing substantial computational overhead.
In this paper, we introduce $\textbf{PSRT}$, a method that replaces the model's reasoning process with a $\textbf{P}$refilled $\textbf{S}$afe $\textbf{R}$easoning $\textbf{T}$race, thereby significantly reducing the inference cost of LRMs. Concretely, PSRT prefills "safe reasoning virtual tokens" from a constructed dataset and learns over their continuous embeddings. With the aid of indicator tokens, PSRT enables harmful-query detection in a single forward pass while preserving the classification effectiveness of LRMs.
We evaluate PSRT on 7 models, 13 datasets, and 8 jailbreak methods. In terms of efficiency, PSRT completely removes the overhead of generating reasoning tokens during inference. In terms of classification performance, PSRT achieves nearly identical accuracy, with only a minor average F1 drop of 0.015 across 7 models and 5 datasets
|
We replace the LRM-based guard model’s reasoning process with a prefilled safe reasoning trace, thereby preserving its capability while significantly reducing the computational overhead.
|
alignment, fairness, safety, privacy, and societal considerations
|
https://openreview.net/pdf?id=TRM3GP3u2O
| 2025-09-17T11:35:47
| 4
|
[
{
"id": "v56KOxMeMh",
"forum": "TRM3GP3u2O",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8365/Reviewer_1p4m",
"reviewer_name": "Reviewer_1p4m",
"rating": 4,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The topic of this paper is the LRM-based guardrail model. And the authors aim to reduce the computational costs of this kind of guard model by replacing the model's reasoning process with a prefilled safe reasoning trace. The comprehensive experiments demonstrate the effectiveness of the proposed method. The computational overhead of generating reasoning tokens is removed yet the performance doesn't drop.",
"strengths": "1. The experiments are very comprehensive, e.g., the PSRT is evaluated on 7 models, 13 datasets, and 8 jailbreak methods. \n\n2. The code is provided, which ensures reproducibility. \n\n3. The paper is well-motivated and the topic is practical.",
"weaknesses": "1. The color in Figure 1 is confused. For example, for the Qwen3-8B model, the line is blue, but the delta and the circle are black. Besides, it seems to be hard to identify GuardReasoner-3B and GuardReasoner-8B. In addition, it is not clear why these instruct models like LLaMA-3.1-8B-Instruct or base models like Qwen3-8B will generate more tokens than the LRM-based guardrail model, i.e., GuardReasoner.\n\n2. The idea of the proposed method is similar to Coconut [1]. Please discuss it and identify the novelty. \n\n3. The efficiency experiments are missing, i.e., time costs and GPU memory costs of the LRM-based guard models and the proposed models. Please detail the inference process of the proposed method. Does it support vLLM?\n\n4. Although the authors claim the proposed method can reduce the reasoning tokens significantly, it seems to reduce the explainability of the LRM-based models since the prefilled embeddings of safe reasoning traces are not readable.\n\n5. Minor: missing discussion on an LRM-based guard model [2] in the related work part. The notation table is missing.\n\n\n[1] Training Large Language Models to Reason in a Continuous Latent Space\n\n[2] GuardReasoner-VL: Safeguarding VLMs via Reinforced Reasoning",
"questions": "1. How's the inference process of the proposed method? Does it support vLLM?\n\n2. How can the proposed method keep the explainability of the LRM-based models?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T22:08:05",
"modification_date": "2025-11-12T12:04:33",
"review_url": "https://openreview.net/forum?id=TRM3GP3u2O¬eId=v56KOxMeMh",
"license": "CC BY 4.0"
},
{
"id": "78MMC61bry",
"forum": "TRM3GP3u2O",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8365/Reviewer_3png",
"reviewer_name": "Reviewer_3png",
"rating": 6,
"confidence": 3,
"soundness": 4,
"contribution": 4,
"presentation": 4,
"summary": "The paper proposes Prefilled Safe Reasoning Trace (PSRT), a novel method designed to enhance the efficiency of Large Reasoning Models (LRMs) used for safety detection (e.g., harmful or jailbreak query classification). Traditional LRM-based guard models achieve strong safety performance but suffer from significant inference-time overhead due to long reasoning traces. PSRT addresses this issue by replacing explicit reasoning generation with prefilled “safe reasoning virtual tokens”, effectively compressing the reasoning process into a single forward pass.\nThe proposed framework introduces three key components:\n\n1. Safe Reasoning Dataset Construction: A curated reasoning dataset is built using DeepSeek-V3.1 to generate reasoning traces and safe/unsafe labels for queries.\n\n2. Safe Reasoning Token Initialization: Prefilled “safe reasoning tokens” r_s are initialized in the embedding space by averaging reasoning embeddings, replacing explicit reasoning sequences.\n\n3. Single-Pass Binary Classification: The model leverages the prefilled r_s to directly classify queries as safe or unsafe without generating reasoning tokens.",
"strengths": "The paper has two notable strengths:\n\n**First,** it provides a clear and practical solution for accelerating safety reasoning in Large Reasoning Models (LRMs). By introducing Prefilled Safe Reasoning Traces (PSRT), the authors successfully remove the need for explicit reasoning generation while maintaining nearly the same detection performance. This represents a meaningful step toward efficient and deployable LRM-based safety systems, especially in latency-sensitive scenarios.\n\n**Second,** the experimental evaluation is extensive and convincing. The authors validate PSRT across multiple model families (e.g., Qwen, Llama, GLM, Mistral) and a wide range of datasets (including harmful and jailbreak benchmarks), with detailed quantitative analysis and qualitative visualization. This comprehensive setup provides strong empirical evidence for the method’s robustness and general applicability.",
"weaknesses": "Some concerns arise regarding the scalability and generalization of using a single r_s.\n\n**First**, the current dataset construction heavily relies on existing safety reasoning datasets (e.g., GuardReasoner, ReNeLLM), which raises questions about the model’s cross-distribution generalization, whether it has truly learned generalizable safety reasoning logic or merely memorized dataset-specific patterns.\n\n**Moreover**, as the scope and diversity of safety-related datasets continue to grow, it remains unclear whether a single global r_s can adequately cover the full spectrum of safety requirements, and whether its generalization performance can be maintained under larger and more diverse settings.",
"questions": "See questions in the weakness part.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T08:47:28",
"modification_date": "2025-11-12T12:04:34",
"review_url": "https://openreview.net/forum?id=TRM3GP3u2O¬eId=78MMC61bry",
"license": "CC BY 4.0"
},
{
"id": "tJg72hzsBj",
"forum": "TRM3GP3u2O",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8365/Reviewer_93No",
"reviewer_name": "Reviewer_93No",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes PSRT, a method to accelerate Large Reasoning Model (LRM)–based guard models by eliminating explicit reasoning token generation during inference. PSRT introduces Prefilled Safe Reasoning Traces, represented as optimized “safe reasoning virtual tokens” in the embedding space, allowing the model to perform harmful-query detection with a single forward pass. The authors demonstrate the method’s generality across 7 models, 13 datasets, and 8 jailbreak attacks, showing comparable detection performance (≤0.015 average F1 drop) while completely removing reasoning overhead.",
"strengths": "- Novel and practical contribution: The paper addresses a real bottleneck in LRM deployment, inference latency due to reasoning traces, and proposes an elegant solution by embedding “prefilled reasoning” directly into model inputs.\n\n- Strong empirical validation: Extensive experiments across diverse models (Qwen, Llama, ChatGLM, Mistral, GuardReasoner) and datasets (StrongReject, JBB, SimpleSafetyTest, AdvBench, etc.) show consistent performance with drastically reduced computational cost.\n\n- Well-motivated theoretical grounding: The connection between reasoning trace averaging and point-estimate optimality (Proposition B.5), and the ELBO interpretation for training objective, make the approach conceptually sound.",
"weaknesses": "- Limited conceptual novelty: The idea is closely related to p-tuning and prompt embedding averaging, which have been explored for efficiency. The main novelty lies in the specific application to guard models rather than a fundamentally new optimization principle.\n\n- Ablation insufficiently deep: The ablation (Fig. 3) focuses mainly on SFT and averaging initialization; it would be valuable to test different trace lengths, embedding dimensions, or virtual token counts to probe robustness.\n\n- Evaluation scope: The paper is entirely focused on binary harmful query detection. Demonstrating that PSRT is effective on multi-class classification or structured safety reasoning tasks (e.g., toxicity type detection) is more impactful",
"questions": "Please refer to the weakness section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T01:17:22",
"modification_date": "2025-11-12T12:04:34",
"review_url": "https://openreview.net/forum?id=TRM3GP3u2O¬eId=tJg72hzsBj",
"license": "CC BY 4.0"
},
{
"id": "hKp4JThCR6",
"forum": "TRM3GP3u2O",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8365/Reviewer_vQT3",
"reviewer_name": "Reviewer_vQT3",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This submission pertains to large reasoning models (LRMs) used as guard models for detecting harmful queries. It proposes PSRT, a method that replaces the generation of reasoning traces with a fixed, prefilled \"reasoning trace\" composed of optimized embedding vectors. The paper presents an extensive evaluation of PSRT in terms of models (reasoning guard models, non-reasoning guard models, non-guard models trained to become reasoning guard models) and datasets (harmful, jailbreak, harmless, mixed). The overall finding is that PSRT preserves the detection performance of reasoning guard models while avoiding generation of reasoning traces, thereby substantially reducing the number of generated tokens.",
"strengths": "- The main idea is a clever application of p-tuning (prefix tuning, prompt tuning) to reasoning traces, specifically to optimize a fixed reasoning trace for harmful query detection.\n- Demonstrates that generation of reasoning traces is not needed to achieve almost the same detection performance (and in a few cases even better performance) compared to reasoning guard models",
"weaknesses": "1. The most important shortcoming for me is the use of number of generated tokens as a proxy for computational cost. With PSRT (as I understand it), since the query varies and occurs before the fixed reasoning trace, it is still necessary to perform a forward pass on the tokens of the reasoning trace (computing all their internal representations, etc.). Simply reporting the number of generated tokens does not measure the cost of this forward pass, nor whatever computational savings are achieved by performing this forward pass on fixed tokens rather than generating a similar number of new tokens.\n1. I am unsure about the significance of including the SFT-only models (Qwen3-8B, Llama-3.1-8B-Instruct, etc.), as well as the components related to them, namely the dataset construction and SFT in Section 3.1. Figure 1 shows that these models are Pareto-dominated by GuardReasoner (lower F1 score, more generated tokens). Moreover, it is not clear to me how novel is the method in Section 3.1 for training guard models, or how specialized it is for harmful query detection (please see the next point). Thus, the main significance that I see is to show the \"generality of PSRT across diverse model architectures,\" but I am not sure that this warrants so much space in the main paper. I would have been more interested in seeing the additional results on GuardReasoner (Appendix A.1) in the main paper and discussed in greater depth, since GuardReasoner is a stronger model.\n1. The paper limits itself to detecting harm in the query/model input and not in the model output. The reason for this limitation is not clear.\n1. Section 2 cites prior work on shortening reasoning traces. It would have been good to see one of these methods used as an experimental comparison because it would be an intermediate approach that does not avoid generating reasoning traces completely.\n1. The paper does not provide deeper insight into why PSRT works. Can the virtual reasoning trace be interpreted somehow? What are the relative contributions of the averaging initialization and subsequent fine-tuning?\n\nMore minor:\n1. I find the term \"safe reasoning trace\" confusing because the predominant reading of this term is \"a reasoning trace that is safe,\" i.e., free from harmful content, not \"reasoning about safety.\" I think \"safety reasoning trace\" would be better.\n1. Section 3.1 implies that DeepSeek-V3.1 is used as the judge of harmfulness. If this is correct, then this dependence on a single LLM could be acknowledged as a limitation.\n1. Lines 261-262: Sections 3.1 and 3.2 are not experiment sections. Perhaps wrong references?\n1. It would be good to perform the second ablation (omitting the average initialization) for GuardReasoner models also.\n1. Line 724: Should Table 4 be Table 5? Table 4 is on mixed datasets.",
"questions": "1. Number of generated tokens after PSRT: For the GuardReasoner models, is the number of generated tokens still around e.g. 17 in Table 2 because the model generates that many as answer tokens? Why are the corresponding numbers for the SFT-only models much higher, in the 70s or 80s or even higher?\n1. Lines 360, 363: I do not see the exact numbers quoted here (99.26%, etc.) in Table 2. Are these numbers averages over the three sizes of GuardReasoner models?\n1. In the ablation study, what initialization is used instead of the average embeddings?\n1. In Table 5, why are the TPRs of the GuardReasoner models so uneven, and in particular, why is the original 8B one so poor?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-18T06:00:15",
"modification_date": "2025-11-12T12:04:36",
"review_url": "https://openreview.net/forum?id=TRM3GP3u2O¬eId=hKp4JThCR6",
"license": "CC BY 4.0"
}
] |
FcuJY1dK7s
|
https://openreview.net/forum?id=FcuJY1dK7s
|
Reasoning Scaffolding: Distilling the Flow of Thought from LLMs
| 5.5
| 3.75
|
[
6,
6,
6,
4
] |
[
3,
4,
4,
4
] | 4
|
[
"LLM Reasoning Distillation",
"Large Reasoning Model",
"Reasoning Scaffolding",
"Semantic Signals"
] |
The prevailing approach to distilling reasoning from Large Language Models (LLMs)—behavioral cloning from textual rationales—is fundamentally limited. It teaches Small Language Models (SLMs) to mimic surface-level patterns rather than the underlying algorithmic structure of thought, resulting in a critical lack of logical robustness. We argue that instead of cloning text, distillation should transfer this algorithmic structure directly. We introduce Reasoning Scaffolding, a framework that reframes reasoning as a structured generation process. Our method first abstracts the teacher's thought process into a sequence of discrete, interpretable semantic signals (e.g., Contrast, Addition) that act as a scaffold. The student model is then trained via a multi-task objective to both (1) predict the next semantic signal, anticipating the reasoning flow, and (2) generate the corresponding step, conditioned on that signal. This multi-task scheme acts as a powerful regularizer, compelling the student to internalize the computational patterns of coherent reasoning. On a suite of challenging reasoning benchmarks, our method significantly outperforms state-of-the-art distillation in both accuracy and logical consistency, providing a path towards creating smaller models that are genuine reasoners, not just fluent mimics.
|
We introduce Reasoning Scaffolding, a new reasoning distillation framework that transfers reasoning patterns—not just text—from large to small language models, resulting in stronger small reasoning models.
|
foundation or frontier models, including LLMs
|
https://openreview.net/pdf?id=FcuJY1dK7s
| 2025-09-18T17:04:36
| 4
|
[
{
"id": "W5UVveaKeR",
"forum": "FcuJY1dK7s",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10987/Reviewer_qrY1",
"reviewer_name": "Reviewer_qrY1",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This work proposed novel distillation approach for distilling LLM reasoning thinking data into small models. The approach can be summarized as follows\n\n1. Identify thinking (reasoning) words in the reasoning chain of thought, like (Additionally, So, Then ...)\n2. Group these worlds into 7 types \n3. Split the DeepSeek generated reasoning chain using identified keywords.\n4. During distillation, let the model predict both reasoning type and reasoning tokens for each hidden states\n5. During inference, an adaptive strategy is adopted, if the confidence computed using reasoning step type predictor, the reasoning is terminated.\n6. Another experiment based on only using \"Conclusion and Summary\" steps in reasoning chain is used for distillation, and the result looks good. \n\nThe experimental results show that this approach out-perform direct distillation and SFT using CoT directly. \n\nAblation study is performed to demonstrate the effectiveness of predicted reasoning type, and correct reasoning type can guide reasoning effectively.",
"strengths": "1. The proposed approach for distillation is effective and reached high performance compare with baseline\n2. The designed experiments clearly explained the motivation of of the proposed approach. \n3. The ablation study is comprehensive",
"weaknesses": "1. The models tested are only from Qwen 2.5 families. However, previous works [1], have questioned about the behaviour of Qwen 2.5 on math reasoning. It could be more convincing to adopt other models on this approach to check the effectiveness and performance. \n2. The proposed approach is expensive and rely on advanced model. It's hard to scale up. \n3. The loss described in the paper not matches the given code. In paper, equation 2 on L221 is a classification loss while regression loss is used in your code `custom_qwen_model.py` , L200. Also a lot of evaluation python scripts is not available. \n4. Figures are not intuitive and self-explaining. In figure 2, the detailed example text can be removed and maybe replaced with abstract icon to emphasize more on the approach. For figure 3, it could be better to use some toy examples (not long and real, but just conceptual) to illustrate the pruning process. \n5. Only reasoning chains from DeepSeek is tested, and it remains unknown how reasoning chain of other style (like from gpt-oss) perform using this approach. \n6. The work can be more intuitive if decoding algorithm and be explained in pseudo code. And I can not find adaptive decoding codes in codebases.",
"questions": "1. Is the code the latest version ?\n2. Please refer to weakness and resolve my concern.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T11:33:48",
"modification_date": "2025-11-12T12:36:15",
"review_url": "https://openreview.net/forum?id=FcuJY1dK7s¬eId=W5UVveaKeR",
"license": "CC BY 4.0"
},
{
"id": "ZwEcTsukzt",
"forum": "FcuJY1dK7s",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10987/Reviewer_5EW9",
"reviewer_name": "Reviewer_5EW9",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces Reasoning Scaffolding, a distillation framework that extracts the structural flow of reasoning from LLMs into discrete semantic signals (e.g., Contrast, Addition, Elaboration) to train SLMs. Unlike traditional behavioral cloning from Chain-of-Thought rationales, which mimics surface-level text, this method uses a multi-task objective where the SLM learns to predict the next semantic signal (anticipating reasoning flow) and generate the corresponding step conditioned on it. This acts as a regularizer for logical coherence. The approach includes data preparation via keyword matching and LLM validation, a dual-branch model architecture, and inference with signal-guided generation and pruning for efficiency. Evaluations on benchmarks like StrategyQA, CommonsenseQA, TruthfulQA, GSM8K, and MATH show improved accuracy and robustness over baselines like CoT SFT and Long-Thinking distillation, using Qwen models of varying sizes.",
"strengths": "* Tackles a fundamental flaw in reasoning distillation by shifting focus from text imitation to transferring algorithmic structure, which is a timely and innovative contribution to creating more robust SLMs.\n\n* The multi-task training with signal prediction as a regularizer is technically sound and provides interpretability, potentially advancing mechanistic understanding of reasoning in models.\n\n* Comprehensive experiments demonstrate substantial gains (e.g., ~14% average over originals, ~8% over CoT baselines), with notable benefits for smaller models, and the framework shows scalability across model sizes and tasks.\n\n* Includes practical optimizations like confidence-based termination and pruning of reasoning traces for token efficiency, making it applicable for real-world deployment.",
"weaknesses": "* The semantic signal extraction relies heavily on an external LLM (e.g., GPT-4) for validation and labeling, which could propagate biases or inconsistencies from the labeler, and the choice of exactly 7 categories seems somewhat arbitrary without broader justification or sensitivity analysis.\n\n* While results are strong on the selected benchmarks, the paper lacks evaluation on out-of-distribution tasks or diverse reasoning domains (e.g., code generation, planning), limiting claims of general robustness; comparisons are mostly to CoT variants rather than other structured distillation methods like modular architectures or rationale decomposition.\n\n* Inference depends on a tunable threshold τ for signal confidence, but the paper provides limited ablation on its impact across datasets, and the pruning strategy might discard useful intermediate details in complex problems.\n\n* The dataset construction uses zero-shot prompting from a single LRM (Deepseek-R1), which may not capture diverse reasoning styles; details on dataset size, diversity, or quality control are sparse in the provided sections.",
"questions": "* How was the set of 7 semantic signals determined, and what happens if you expand or reduce the categories—does performance change significantly?\n\n* Did you observe any biases in the semantic signals assigned by GPT-4, such as favoring certain transitions based on the teacher's style?\n\n* How does the method perform on tasks beyond QA and math, like creative writing or multi-agent reasoning, where reasoning flows might be less linear?\n\n* What is the computational overhead of the dual-branch training compared to standard CoT distillation, and how sensitive is inference efficiency to the pruning strategy?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T02:23:48",
"modification_date": "2025-11-12T12:36:15",
"review_url": "https://openreview.net/forum?id=FcuJY1dK7s¬eId=ZwEcTsukzt",
"license": "CC BY 4.0"
},
{
"id": "N5iYvGoL4y",
"forum": "FcuJY1dK7s",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10987/Reviewer_b3Cu",
"reviewer_name": "Reviewer_b3Cu",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper introduces Reasoning Scaffold: a distillation method for improving the reasoning capabilities of small language models (SLMs). The method works by decomposing reasoning traces from a large teacher model into annotated reasoning steps, categorized into reasoning types using keyword-based and LLM annotation. The SLM is then tasked to learn the category of the current reasoning step and generates reasoning tokens conditioned on this information. This method intends to better structure the reasoning of the SLM. Experiments on reasoning benchmarks demonstrate the effectiveness of the proposed method on reasoning benchmarks for the Qwen model family, with models ranging from 0.5 to 14B parameters. The method, however, achieves its performance at a higher inference cost than standard distillation as it requires more generated tokens.",
"strengths": "1. The proposed work is simple yet effective for improving distillation in SLMs. It can serve the research community by providing an easy scaffold to yield better small reasoners.\n2. The experiments show improved performance on reasoning tasks and accurately support the claims made by the paper.\n3. The presented analysis interestingly shows that additional structuring signals, even weak or random can help organize the model's reasoning thoughts and improve reasoning capabilities.",
"weaknesses": "1. The proposed work, while useful, does not present an original method or novel findings as similar studies already exist [1].\n2. The proposed structure categorization is very high-level, potentially making the category prediction task trivial as argumentation usually follow the same steps and thus, reducing the information in the signal.\n3. The method relies on handcrafted keyword matches (complemented by an LLM), which can be brittle in out-of-domain tasks, particularly as the LLM is only used for assigning a category to a piece of text but not for the division into steps.\n4. The method balances two training objectives with the same weight to both losses. This can lead to training instability if they do not have the same magnitude or variance.\n5. Experiments are only performed on models from the Qwen family and it is unclear if the findings can transfer to other models or if they are artifacts specific to Qwen. Similarly, only one teacher model 5DeepSeek-R1) is used.\n6. The proposed method yields longer reasoning chains than the best baseline (Thinking Distill), which could account for the performance gap.\n\n\nMinor comment:\n1. Results from the tables are a bit hard to read. Highlighting the best results in bold would be helpful to the reader.\n2. It is not clear from Figure 4 how the proposed method improves the reasoning signal as the examples follow the same reasoning structures.\n\n\n\n[1] Li, D., Cao, S., Griggs, T., Liu, S., Mo, X., Tang, E., ... & Stoica, I. (2025). LLMs Can Easily Learn to Reason from Demonstrations Structure, not content, is what matters!. arXiv preprint arXiv:2502.07374.",
"questions": "1. Have you investigated if additional, more precise and informative, categories could improve the guiding signal and improve the generation of the reasoning traces?\n2. Have you investigated the keyword-LLM agreement for splitting the traces into categories? can the keyword create additional unneeded steps or miss a transition to a new step?\n3. Have you investigated the evolution of the two training losses and compared their magnitude and variance? Does including a hyperparameter controlling the weighting factor of one of the losses improves the learning?\n4. As the \"Conclusion and Summary\" denote both intermediate and final outputs, how is the final output differentiated from the intermediate ones?\n5. Have you performed experiments with other model families? Both as teacher and student?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T21:05:29",
"modification_date": "2025-11-12T12:36:16",
"review_url": "https://openreview.net/forum?id=FcuJY1dK7s¬eId=N5iYvGoL4y",
"license": "CC BY 4.0"
},
{
"id": "eg6nX7gMiM",
"forum": "FcuJY1dK7s",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10987/Reviewer_3pgg",
"reviewer_name": "Reviewer_3pgg",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper proposes Reasoning Scaffolding, a framework for distilling reasoning ability from large language models (LLMs) into smaller models (SLMs). Instead of directly imitating the teacher's Chain-of-Thought (CoT) text, the method abstracts reasoning traces into semantic signals (e.g., Contrast, Elaboration, Conclusion) that serve as scaffolds for step-by-step reasoning. The student model is trained via a dual-branch architecture, predicting both the next semantic signal and the corresponding reasoning step, to encourage internalization of the reasoning structure. Experiments on multiple benchmarks (StrategyQA, CommonsenseQA, TruthfulQA, GSM8K, MATH-500) show improvements over standard CoT and Long-Thinking distillation baselines.",
"strengths": "1. **Interesting idea**: The paper explores a creative perspective on reasoning distillation by introducing semantic scaffolding as a middle-level representation between textual rationales and abstract reasoning steps.\n\n2. **Comprehensive experiments**: Evaluation spans multiple reasoning benchmarks and model scales (0.5B, 7B, 14B), providing a broad empirical basis.\n\n3. **Clear empirical comparisons**: The ablation studies (e.g., signal quality and token analysis) are informative and show that structured supervision can improve reasoning stability.\n\n4. **Good motivation**: Addressing the brittleness of current CoT distillation is an important and timely research direction.",
"weaknesses": "1. **Experimental irregularities (Table 1).**\nThe main results table is somewhat confusing. For example, the Qwen2.5-0.5B results under \"Long-Thinking Distill\" are missing, while the Qwen2.5-7B model performs worse than its SFT counterpart under this setting (except on MATH-500). These inconsistencies raise questions about the fairness and reproducibility of the comparison. It would be helpful to clarify whether the fine-tuning setup (data size, training epochs, loss weighting) is kept consistent across all baselines. Moreover, teacher models such as DeepSeek-R1 could serve as stronger baselines.\n\n2. **Model- and task-specific signal design.**\nThe categorization of reasoning signals (Section 3.1) appears heuristic and data-specific. The seven signal types (e.g., Addition, Contrast, Conclusion) seem derived from particular verbal patterns in GSM8K-style reasoning traces. It is unclear whether this taxonomy would generalize to other domains (or teacher models), such as scientific reasoning, logic puzzles, or multi-modal contexts. The framework's dependence on these fixed signal categories limits its general applicability.\n\n3. **Ambiguity in contribution novelty.**\nWhile the idea of structured distillation is valuable, much of the implementation (keyword matching, LLM labeling, multi-task fine-tuning) builds directly on existing CoT or discourse-signal techniques. The conceptual advancement beyond structured rationale distillation remains incremental without stronger theoretical or analytical insight.\n\n4. **Writing and clarity.**\nThe overall writing is understandable but sometimes verbose and repetitive. Some sections (e.g., 3.1–3.3) contain long procedural details that could be condensed and improved. Minor language issues also appear throughout, which detract slightly from readability.",
"questions": "See weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T02:28:40",
"modification_date": "2025-11-12T12:36:17",
"review_url": "https://openreview.net/forum?id=FcuJY1dK7s¬eId=eg6nX7gMiM",
"license": "CC BY 4.0"
}
] |
eHxQc2Q0aw
|
https://openreview.net/forum?id=eHxQc2Q0aw
|
Stability and Generalization for Bellman Residuals
| 4
| 3.25
|
[
2,
2,
6,
6
] |
[
4,
3,
2,
4
] | 4
|
[
"statistical learning theory",
"algorithmic stability",
"generalization analysis",
"offline reinforcement learning",
"inverse reinforcement learning"
] |
Offline reinforcement learning and offline inverse reinforcement learning aim to recover near–optimal value functions or reward models from a fixed batch of logged trajectories, yet current practice still struggles to enforce Bellman consistency. Bellman residual minimization (BRM) has emerged as an attractive remedy, as a globally convergent stochastic gradient descent–ascent based method for BRM has been recently discovered. However, its statistical behavior in the offline setting remains largely unexplored. In this paper, we close this statistical gap. Our analysis introduces a single Lyapunov potential that couples SGDA runs on neighbouring datasets and yields an $\mathcal{O}(1/n)$ on-average argument-stability bound—doubling the best known sample-complexity exponent for convex–concave saddle problems. The same stability constant translates into the $\mathcal{O}(1/n)$ excess risk bound for BRM, without variance reduction, extra regularization, or restrictive independence assumptions on minibatch sampling. The results hold for standard neural-network parameterizations and minibatch SGD.
|
Our analysis yields an $\mathcal{O}(1/n)$ on-average argument-stability bound for Bellman residual minimization—doubling the best known sample-complexity exponent for convex–concave saddle problems.
|
learning theory
|
https://openreview.net/pdf?id=eHxQc2Q0aw
| 2025-09-14T17:17:33
| 5
|
[
{
"id": "dVNEQM20Wb",
"forum": "eHxQc2Q0aw",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5065/Reviewer_MpYr",
"reviewer_name": "Reviewer_MpYr",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper considers the problem of minimizing the Bellman error in a TD (Temporal Difference) update and cast this as a minimax problem of optimizing an objective that involves two parameterized functions : 1) Q function as a function of state and action and 2) The other is a parameterized neural net that given current state and action approximates the value function of the future sampled state. This optimization objective is derived (from prior work) by characterizing the bias between the squared Bellman error with respected to the expected TD operator and sampled TD Bellman error. Further the surprising fact about this parameterization is that the problem is concave with respect the second function and the objective after inner optimization satisfies the PL condition with respect to the first Q function when you consider the stochastically approximated variant under general parameterizations (specifically linear function approximation).\n\nMotivated by this, the authors propose to perform a stability analysis that would bound the generalization error (in terms of the duality gap) between the mini max problems which sees the population version and the the sample version. Authors adopt the stability analysis (that is known to imply generalization in the sense of duality gap from prior work) where the mini max problem see two sets of sequence of samples (state transitions) where one of the samples is different and authors seek to bound the distance of between the primal and dual iterates of these two coupled minimax problems. \n\n Authors introduce two interesting ideas: 1) Ghost index which is an index independently sampled from the dataset which is independent of the Filtration and gradient with respect to this sample in expectation can approximate the population gradient 2) PL condition implies for the outer problem and strong concavity for the inner problem imply contraction for a Lyapunov function that is a combination of the primal gap and the dual gap in the expected function value.\n\nAuthors use this and existing results about stability to prove generalization of the primal and dual gap from sample to the population version.",
"strengths": "The paper (to my knowledge) is the first to consider stability analysis exploiting the PL condition and strong concavity of the respective problem to show generalization errors in primal and dual gaps. There are a lot of algebraic manipulations that deftly use the ghost index, contraction properties of the outer and inner problem to establish bounds on generalization error. The application to Bellman residual optimization is noteworthy although it borrows heavily from prior work.",
"weaknesses": "1) My first concern is inadequate quoting of results from Kang et al 2025 that misleads reading this paper. Line 230 and 231 says that Kang et al. 2025 proved that PL condition is satisfied with respect to the parameters of the Q function (primal variables) when parameterized by a Neural Network. I read the prior paper. There are lots of caveats to the Neural Network result - it traces back to the result in https://arxiv.org/pdf/2003.00307 - where authors show that - wide and deep neural nets satisfy the PL condition over a radius around a random initialization if the width scales as radius^depth. Further, the theorem is easily proven only for linear function approximation in Kang et.al. 2025.\n\n2) Second concern is that ghost index trick works because, say for the inner problem, gradient is assumed to be uniformly bounded. This is rather a very strong assumption. However, the inner problem is strongly concave and *Page 2 of this ICML paper https://proceedings.mlr.press/v80/nguyen18c.html shows that unless the ball of iterates is bounded explicitly, uniform gradient norm bound contradicts strong convexity (or concavity) !*\n\nAuthors can have uniform bound G on gradient norm only if the iterates stay within a ball of certain radius from where it starts at least for the inner concave problem. The algorithm described is unprojected SGDA and the problem needs to project itself on every update to some ball. In the RL context that would mean projecting the iterates of the parameters of the Q function to a ball that would encapsulate the optima - rather a very strong assumption. Even the Neural net satisfying bounds of gradient, Hessian and Jacobian operator (assumption 5 in Kang et al 2025 paper) is possibly within some small ball around the initialization for a network of given width.",
"questions": "1) Can you answer the above 2 weakness points ? Question about the need for projected steps if gradient bound is assumed is rather concerning and could be a serious weakness as written\n\n2) Paper quotes the deadly triad relating to convergence of Q learning. There is a recent paper on resolving it for linear function approximation (https://arxiv.org/abs/2203.02628) using truncation and target network. Discussing these alternative works is very important.\n\nI think the gradient bound issue is more serious. Therefore, I have given rating of 2. I would wait for authors to respond to that and I can raise my score.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-10T02:51:17",
"modification_date": "2025-11-12T11:24:11",
"review_url": "https://openreview.net/forum?id=eHxQc2Q0aw¬eId=dVNEQM20Wb",
"license": "CC BY 4.0"
},
{
"id": "Qy2w5RgCff",
"forum": "eHxQc2Q0aw",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5065/Reviewer_yTNX",
"reviewer_name": "Reviewer_yTNX",
"rating": 2,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "The paper analyzes the excess risk bound for offline reinforcement learning in view of Bellman residual minimization.",
"strengths": "The problem of analyzing the excess risk bound for Bellman residual minimization does seem open so far.",
"weaknesses": "- The comparison to existing works in approximate dynamic programming methods e.g. projected Bellman equation-based approaches seems inadequate. Is Bellman residual minimization the only way to accommodate the difficulty of enforcing Bellman consistency? What are the other existing risk bounds when incorporating function approximations and how do these results compare?\n- The techniques used seem to be standard, e.g. PL for analyzing SGDA etc. It seems unclear from the manuscript what are the technical challenges and the techniques developed in this paper that are independent of the developments from combining Kang et al. 2025 and Wang et al 2022. What is the motivation when defining the Lyapunov potential? Some discussions around lines 369-375 when introducing this object would greatly help the reader.\n- The presentation of Theorem 6 and in general Section 3 can be improved. As far as I understand, this paper is considering the specific problem of learning the (action)-value function, and thus introducing 9 assumptions for a general function F and auxiliary results about general risks introduces additional notation while not clear to what extent they are helpful in elucidating the final result (Theorem 6). I would think a clearer explanation why value functions and lyapunov potential satisfy the assumptions needed to establish Theorem 6 and intuition of the result would be more helpful than the results about general F along with 9 additional assumptions (that will automatically be satisfied).\n\n\nMinor points:\n- There are superfluous \"equation\" when referring to equations throughout the paper, e.g., Equation 4, etc. Please remove those.\n- Line 80: \"Throughout, focus on single-agent decision making problem interacting with a discounted Markov Decision Process (MDP) described by the tuple ( S, A, P, r, β , ν 0)\" is lacking a subject.\n- Bellman consistency in line 38 comes out directly without motivation or explanation. Why do we want consistency and what does it mean? In the last sentence you said \"satisfies the Bellman optimality equations even though no new state–action pairs can be queried.\" but Bellman consistency means fixed point of Bellman equations, which is not shown here.",
"questions": "See previous section",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T19:19:41",
"modification_date": "2025-11-12T11:24:11",
"review_url": "https://openreview.net/forum?id=eHxQc2Q0aw¬eId=Qy2w5RgCff",
"license": "CC BY 4.0"
},
{
"id": "QrhskCHEoU",
"forum": "eHxQc2Q0aw",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5065/Reviewer_uiVV",
"reviewer_name": "Reviewer_uiVV",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper studies Bellman Residual Minimization (BRM) for offline RL. Using a bi-conjugate reformulation, minimizing MSBE is turned into a Polyak--Łojasiewicz (PL)–strongly-concave minimax problem that can be solved by SGDA, thereby avoiding the double sampling problem. The analysis couples two SGDA runs on neighboring datasets and proves on-average algorithmic stability with an $O(1/n)$ rate, without requiring variance reduction or independence assumptions. By stability-to-generalization transfer, the work bounds (i) the gap between population and empirical Bellman-residual risks and (ii) the population Bellman-residual risk of the SGDA output.",
"strengths": "- Without requiring independence assumptions on the sample indices nor variance reduction, the paper establishes an $O(1/n)$ on-average stability and, via stability-to-generalization transfer, an $O(1/n)$ generalization bound for BRM, doubling the exponent from $1/2$ to $1$ over prior work.\n \n- The population excess risk is cleanly decomposed into an optimization term that decays with training and a sample-size–dominated statistical term, naturally aligning with standard minibatch SGDA.\n \n- All assumptions are stated explicitly and clearly, making the analysis easy to follow.",
"weaknesses": "- It would be helpful to add illustrative examples and comparisons to aid understanding (see Q 1 and 2).\n\n\n- Sections~2 and 3 include substantial repetition of well-known material, and the exposition feels overly long. For example, the standard SGDA routine could be moved to the appendix for brevity.",
"questions": "- How strong is Assumption A8? Do the constants remain unchanged under a single-sample replacement in general setting, and could the authors provide a concrete example illustrating when A8 holds or fails?\n\n- In Corollary~4, could you quantify the iteration threshold $T^\\star$ at which the optimization term is below the statistical term formally? Additionally, for the small-$T$, could you provide a comparison with prior methods? \n \n- Would it be possible to use one of $(w,v)$ or $(\\theta_1,\\theta_2)$ to unify the notation since these seem to denote the same primal/dual variables?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T10:00:00",
"modification_date": "2025-11-12T11:24:12",
"review_url": "https://openreview.net/forum?id=eHxQc2Q0aw¬eId=QrhskCHEoU",
"license": "CC BY 4.0"
},
{
"id": "gtGlUB77Ze",
"forum": "eHxQc2Q0aw",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5065/Reviewer_kFyc",
"reviewer_name": "Reviewer_kFyc",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper analyzes the statistical behavior of Bellman Residual Minimization (BRM) for offline RL/IRL. Building on the recent optimization view that the bi-conjugate BRM objective induces a PL–strongly-concave minimax structure, the authors couple two SGDA runs on neighboring datasets via (i) a single Lyapunov potential that mixes primal suboptimality and primal–dual mismatch, and (ii) a “ghost-index” device to decouple sampling noise. They prove on-average argument stability of SGDA with an O(1/n) rate (under Robbins–Monro stepsizes), and transfer this to O(1/n) generalization and an excess-risk bound that cleanly decomposes optimization and estimation errors. The setup, assumptions (A1–A9), and the transfer to weak PD-gap follow the minimax stability framework.",
"strengths": "Originality\n- Closes a real theoretical gap: Prior minimax stability analyses (e.g., Wang–Lei–Ying–Zhou, NeurIPS 2022) deliver O(n^{-1/2}) rates under convex–concave assumptions. This paper’s O(1/n) stability and generalization results for SGDA in a PL–strongly-concave regime appear novel.\n- Combines multiple theoretical tools—bi-conjugate BRM formulation, PL geometry, a Lyapunov potential, and ghost-index coupling—into a coherent analysis without variance reduction or independence assumptions.\n- The unification of optimization and generalization analysis through a single Lyapunov potential is an elegant methodological contribution.\n\nQuality\n- The proofs are internally consistent and technically sound under the stated assumptions (A1–A9). The Lyapunov-based stability recursion is clearly constructed and all major theorems are proven in full.\n- The paper avoids dependence on variance-reduction or mixing assumptions, deriving O(1/n) bounds via standard SGDA under Robbins–Monro step sizes.\n- The key limitations lie in the strong assumptions—bounded per-sample gradients, uniform constants across neighboring datasets, and uniqueness of the saddle—that may not strictly hold for deep neural networks.\n\nClarity\n- The exposition is clear, particularly in articulating the problem gap (“optimization picture is clear; statistical picture remains open”).\n- The algorithmic setup, potential function, and contraction argument are well explained with intuitive justification for summability of noise terms.\n- Proof dependencies and structure are explicitly cross-referenced in the reproducibility statement, ensuring transparency.\n\nSignificance\n- The results provide the first O(1/n) generalization bound for Bellman Residual Minimization in offline reinforcement learning, doubling the exponent achieved in prior convex–concave analyses.\n- The theoretical framework may generalize to other PL-minimax problems beyond BRM, influencing theoretical and algorithmic directions in RL and IRL.\n- While the assumptions restrict direct practical application, the analysis sets a higher theoretical standard for understanding statistical generalization in nonconvex–concave RL objectives.",
"weaknesses": "1) Assumptions feel strong and under-motivated for neural BRM\nIssue: The analysis depends on assumptions such as bounded per-sample gradients, uniform constants across neighboring datasets, and uniqueness of the saddle. These are not linked to concrete architectural or data-level conditions.\nActionable Fixes:\n- Provide sufficient conditions (e.g., Lipschitz activations, spectral normalization, weight decay) ensuring these assumptions hold.\n- Add perturbation lemmas for small constant drift across neighboring datasets.\n- Explain how regularization ensures uniqueness of the saddle.\n\n2) Positioning vs. existing stability literature could be sharper\nIssue: The claimed novelty (O(1/n) vs O(1/√n)) relative to convex–concave minimax works (e.g., Wang et al., NeurIPS 2022) lacks a clear side-by-side comparison.\nActionable Fixes:\n- Include a comparison table contrasting assumptions, settings, and rates.\n- Explicitly highlight which steps rely on PL–strong concavity and would fail otherwise.\n\n3) Minibatch dependence not clearly quantified\nIssue: Theorems mention minibatch adaptation “verbatim” without giving explicit batch-size-dependent constants.\nActionable Fixes:\n- Add a corollary deriving ε_T(B) with explicit 1/B scaling and its impact on generalization and excess-risk bounds.\n- Provide practical guidance on choosing batch size B.\n\n4) Lack of empirical sanity checks\nIssue: The paper claims parametric O(1/n) scaling but shows no supporting experiment.\nActionable Fixes:\n- Include a toy experiment using linear BRM satisfying all assumptions to empirically verify slope ≈ –1 in log–log plots.\n- Compare against convex–concave baselines to show contrast.\n\n5) Clarity gaps in bi-conjugate BRM formulation\nIssue: The connection from the bi-conjugate Bellman residual to the minimax form is hard to follow for non-experts.\nActionable Fixes:\n- Add a concise boxed derivation linking the BRM objective to the dual variable.\n- Include a diagram illustrating shared-index coupling and “hit” events.\n\n6) Excess-risk decomposition underemphasized\nIssue: The clean decomposition between stability and optimization error appears late and without clear interpretation.\nActionable Fixes:\n- Promote the decomposition as a boxed equation in the main text.\n- Explain how tuning T and η_t balances the two error terms.\n\n7) Limited discussion beyond entropy-regularized BRM\nIssue: It is unclear whether the results extend to non-entropy (hard-max) BRM formulations.\nActionable Fixes:\n- Add remarks outlining when PL–strongly-concave structure persists under different smoothings (e.g., Moreau envelopes).\n\n8) Ambiguity in “one pass over n samples” phrasing\nIssue: The notion of “one pass” may be misread without clarifying total gradient calls or sampling scheme.\nActionable Fixes:\n- Specify whether T ≈ n steps correspond to one epoch and whether sampling is with or without replacement.\n\nOverall, the paper would improve by making its assumptions verifiable in practice, providing explicit batch-size scaling, and including minimal empirical verification. These additions would make the theory more credible, checkable, and actionable for the ICLR audience.",
"questions": "1. On Assumptions and Applicability\n- Could you provide explicit sufficient conditions on the neural-network architecture or data distribution that ensure assumptions (A5) and (A8) hold? For example, do ReLU or tanh activations satisfy the Lipschitz and gradient-boundedness assumptions under spectral normalization or weight clipping?\n- The analysis assumes a unique saddle point, yet neural networks are often overparameterized. Is uniqueness strictly necessary, or could the analysis extend to a set of equivalent saddles?\n\n2. On Novelty and Positioning\n- The claimed improvement from O(n^{-1/2}) to O(1/n) hinges on the PL–strongly-concave structure. Could you explicitly summarize which elements of your proof break down in purely convex–concave settings?\n- To what extent could your Lyapunov and ghost-index coupling analysis extend to other PL-minimax settings (e.g., actor–critic or distributional RL formulations)?\n\n3. On Practical Interpretability\n- You mention that the minibatch setting follows “verbatim” with rescaled constants. Could you please provide the explicit scaling law of ε_T(B) in terms of B and n?\n- When stating that you achieve the O(1/n) rate “after one pass over n samples,” do you mean T ≈ n SGDA steps, one epoch with sampling with or without replacement?\n\n4. On Theoretical Sharpness\n- Your current bounds are in expectation. Do you think similar rates could hold with high probability using martingale inequalities (e.g., Azuma or Freedman)? If so, how would the constants or rates degrade?\n- Could you comment on how sensitive your results are to the condition numbers L/μ_PL and L/ρ?\n\n5. On Empirical Verification\n- Would you be open to adding a toy experiment (e.g., linear-quadratic BRM under the assumptions you make) to confirm the slope of the generalization error versus sample size?\n- Even a small-scale plot could visually substantiate the theoretical rate and convince a broader ICLR audience.\n\n6. On Extensions and Generality\n- Your analysis focuses on the softmax (entropy-regularized) case. Could you clarify whether the PL–strongly-concave geometry and stability proof extend to hard-max or Moreau-smooth Bellman operators?\n- Would your argument still hold under Markovian dependence rather than i.i.d. samples? If not directly, what modifications would be necessary to handle the mixing-time dependence?\n\n7. On Presentation and Readability\n- Could you include a short boxed derivation showing how the Bellman residual minimization problem transforms into the minimax form involving the dual variable?\n- The final decomposition separating optimization and generalization errors is one of your most interpretable results. Consider moving it earlier into the main body with a brief intuitive discussion.\n\n8. On Possible Future Directions\n- How do you envision extending your analysis to policy-based or actor–critic settings, where the loss is not strictly bi-convex/bi-concave?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T14:02:03",
"modification_date": "2025-11-12T11:24:12",
"review_url": "https://openreview.net/forum?id=eHxQc2Q0aw¬eId=gtGlUB77Ze",
"license": "CC BY 4.0"
}
] |
0miO9v1jeC
|
https://openreview.net/forum?id=0miO9v1jeC
|
TAR: Token Adaptive Routing Framework for LLMs Token-level Semantic Correction Inspired by Neuro-Linguistic Pathways
| 3
| 3
|
[
2,
2,
4,
4
] |
[
4,
3,
3,
2
] | 4
|
[
"large language models; math reasoning; brain-inspired; adaptive routing; token semantic correction"
] |
Large language models (LLMs) often suffer from cascading errors in math reasoning due to token-level semantic defects. A key limitation is that the reliance on unidirectional feedforward pathways makes LLMs unable to dynamically correct token-level defects during reasoning. In contrast, neuro-linguistic pathways in the human brain—centered on Broca’s and Wernicke’s areas—operate as a closed loop, integrating semantics through feedforward pathways while leveraging feedback circuit for error correction and signal adaptation. The loop involves conflict detection in the anterior cingulate cortex (ACC), cross-regional error transmission via the arcuate fasciculus/IFOF, and compensatory reprocessing in the DLPFC–Broca circuit. Inspired by the functional architecture of neuro-linguistic pathways, we propose a Token Adaptive Routing (TAR) framework that establishes a brain-inspired self-correcting loop in LLMs without requiring parameter fine-tuning. TAR comprises three components: (1) \textbf{Semantic Defect Monitor}, analogous to the anterior cingulate cortex (ACC) for identifying tokens with semantic defects; (2) \textbf{Adaptive Router}, resembling the arcuate fasciculus/IFOF for routing defective tokens to the most compatible LLM functional block; and (3) Feedback-based Re-representation, inspired by the DLPFC–Broca circuit for correcting semantic defects. Experiments show that TAR improves accuracy and reduces the number of inference tokens. On the challenging AIME25 benchmark, TAR improves the accuracy of Qwen3-1.7B by +3.36% while reducing inference tokens by 13.7%. Furthermore, we reveal that maintaining high token confidence is essential for reasoning performance, and deeper blocks in LLMs play a crucial role in shortening reasoning depth. Our code is available at https://anonymous.4open.science/r/warehouse-25F5
|
We propose a brain-inspired Token Adaptive Routing framework that enables LLMs to self-correct token-level semantic errors, improving reasoning accuracy while reducing inference tokens.
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
|
https://openreview.net/pdf?id=0miO9v1jeC
| 2025-09-20T16:22:51
| 4
|
[
{
"id": "voMGPoVWiW",
"forum": "0miO9v1jeC",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24412/Reviewer_JgFs",
"reviewer_name": "Reviewer_JgFs",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 1,
"summary": "The paper presents TAR, Token Adaptive Routing, a method for self-correction of LLMs inspired by neuroscience, pathways in the brain that are non-directional and thus not only forward (one directional), for the task to correct incorrect LLM outputs.\n\nThe method is triggered when the output is of low confidence (threshold based), triggering an external adaptive router that looks at model-internal representations of layers ('ability vectors') to match best for so-called 'requirement vector' to match, to override the output layer ('to re-inject into block B_a() for feedback based re-presentation'). The adaptive router is trained with act as a router, by comparing SFT loss before and after routing to guide the router toward better policies. In an LLM with L layers, each re-representation offers L + 1 routing options, and the paper tests all L + 1 policies to find the best. Finally, a regularization term is added to stabilize training. \n\nThe method is tested on three math reasoning benchmarks, GSM8K, MATH500 and AIME25. LLMs tested are Qwen2.5/0.5B and Qwen3/1.7B and trained on math data generated by itself. Each adapter is trained for one epoch.",
"strengths": "- The paper presents a method for self-correction of an LLM in math domains.\n\n- The paper is quiet well written (see comment below) and the method and setup is clearly explained.",
"weaknesses": "- **Limited evaluation.** The evaluation is severely limited.\n\n - **Overly strong claim**. The paper motivates the method as \" token-level semantic defects\" detection method. However, and most importantly, the method is essentially a *self-correction method for math reasoning problems*, I strongly disagree with a claim for \"semantic defects\" when tested only on simple math problems. The title uses \"linguistic pathways\" which is too strong if tested in such a narrow domain where arguably linguistics is not really the key to solve the problem. Instead, the paper proposes a math error detection method. It would be stronger to compare and contrast to related work model-internal injections on natural language understanding (e.g. like the multi-hop natural language understanding problem in related work in Biran et al.). Moreover, the writing, especially in the introduction does not situate the method in math reasoning, but claims to provide a bigger human-inspired solution for \"semantic defects\", which I find misleading. There are no experiments beyond math problems, thus the claim of the current paper writeup is too strong and not supported by empirical validation. \n - **Small LLMs only**. The method tests only two small LLMs and these are from the same model family (Qwen). This severely limits generalization. The method should be tested in at least two different model families to test generalizability beyond small Qwen models.\n - **Lack of comparison to upper bound or other method** The method does not compare against any existing method. For example, the activation space like back-patching (Biran et al.'s method) could be applied by identifying a simple prompt that 'hints' at the solution by rephrasing the math problem (if the task is to solve 2 + 2 = 4, the test could be 1+3 = ?) and creating a probing classifier to identify the layer. This would also be a more lightweight approach and could help understand if the (quiet complex) method is useful. At least one comparison method should be included.\n - **Improvements** in Table 1 seem small (up to 3.2% accuracy). This raises again the question whether the routing method is useful.\n - **Lack of details of hyperparameter** The method relies on a confidence threshold (if the confidence if low, the routing fires. However, the paper does not provide any information of what threshold is used, nor how it was determined, nor how sensitive the method is to this threshold or how generalizable it is across the three math datasets. This is an important aspect left undermined.\n\n- **Repetitive text parts**: The neuroscience inspiration part is quiet long and repeated 3x in the paper (abstract, into, method section).\"TAR comprises three core components, each inspired by a function of the biological neuro-linguistic pathways ...\" \n\n\n- **Lack of complexity analysis** The method needs to identify which layer out of all layers for each token. This is very expensive. This is also perhaps why the paper only evaluates very small LLMs. The paper would be strengthened by providing a complexity analysis and judgement to what degree the method would scale up to larger LLMs.",
"questions": "- how did you determine the threshold? how sensitive is your approach to the threshold?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T22:07:05",
"modification_date": "2025-11-12T18:24:33",
"review_url": "https://openreview.net/forum?id=0miO9v1jeC¬eId=voMGPoVWiW",
"license": "CC BY 4.0"
},
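The reviews of this submission repeatedly ask how TAR's confidence threshold is chosen and when routing fires. As a minimal sketch of threshold-gated routing in general (every name, shape, and the cosine-matching rule below are hypothetical assumptions, not TAR's actual implementation), a token whose maximum softmax probability falls below a threshold tau is flagged and re-represented through the block whose 'ability vector' best matches its hidden state:

```python
# Hypothetical sketch of confidence-gated token routing (all names and the
# matching rule are assumptions for illustration, not the TAR implementation).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def route_token(logits, blocks, hidden, tau=0.5):
    """Keep the original logits if the token is confident; otherwise pick the
    block whose 'ability vector' best matches the token's hidden state and
    recompute the logits through that block."""
    if softmax(logits).max() >= tau:
        return logits                              # confident: routing does not fire
    sims = [a @ hidden / (np.linalg.norm(a) * np.linalg.norm(hidden))
            for a, _ in blocks]                    # cosine match vs. ability vectors
    _, block_fn = blocks[int(np.argmax(sims))]
    return block_fn(hidden)                        # feedback-based re-representation

# Toy usage: two "blocks", each an (ability vector, linear re-representation) pair.
rng = np.random.default_rng(1)
V, D = 8, 4                                        # vocab size, hidden dim
blocks = [(rng.normal(size=D),
           lambda h, W=rng.normal(size=(V, D)): W @ h) for _ in range(2)]
new_logits = route_token(rng.normal(size=V), blocks, rng.normal(size=D))
```

The sensitivity sweep the reviewers request would simply repeat this gate across a grid of thresholds and record how often routing fires and how accuracy moves.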
{
"id": "LnJYR39z81",
"forum": "0miO9v1jeC",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24412/Reviewer_4TYC",
"reviewer_name": "Reviewer_4TYC",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This paper introduces Token Adaptive Routing (TAR), a closed-loop mechanism that lets LLM detect and correct token-level semantic defects during inference without fine-tuning the backbone weights. The paper is motivated by how humans think and implement a \"which-where-how\" process. A semantic defect monitor to flag low-confidence tokens, an adaptive router that selects the most compatible model block, and lastly, a feedback-based representation that reinjects the token into the block to repair its semantics.",
"strengths": "1. The papers introduces a TAR which is insipred by neuro-linguistic pathways that are present in the human brain\n2. The method shows improved performance without fintuning the backbone weights.",
"weaknesses": "1. The methodology section is hard to follow. The authors introduce a lot of components, where the motivation of each of the components within the router is missing. For example, the role of the ability vector and the requirement vector is hard to follow.\n2. Does the method only select a single block? If yes, why is only a single block necessary for larger models, since multiple layers can be necessary to recalibrate a token\n3. How many times does the self-correction loop go on? Is it a single loop, or is there an exit based on the confidence of the tokens?\n4. Since the author introduces a lot of components in the training, ablation results are necessary to verify the importance of each component; however, most of the results have been moved to the appendix.\n5. The authors train the model on AIME2024 and then test it on AIME2025; there is a strong chance of contamination here.\n6. The model only focuses on math reasoning. Why was no experiment run on OOD tasks?\n7. The paper chooses very small model,s and the router might not work properly when the number of layers increases in the model",
"questions": "1. Do we require a separate router for each model?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T07:17:08",
"modification_date": "2025-11-12T18:24:33",
"review_url": "https://openreview.net/forum?id=0miO9v1jeC¬eId=LnJYR39z81",
"license": "CC BY 4.0"
},
{
"id": "0ugjulmRMx",
"forum": "0miO9v1jeC",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24412/Reviewer_5XsK",
"reviewer_name": "Reviewer_5XsK",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes a token-adaptive routing framework to enhance the performance of LLMs by providing a second opportunity to answer questions at the token level. Inspired by neuro-linguistic pathways, the framework establishes a brain-inspired self-correcting loop that integrates seamlessly with LLMs without requiring additional fine-tuning. Through experiments conducted on two LLMs and comparisons with baseline methods, the framework demonstrates significant performance improvements.",
"strengths": "1. The token-level framework does not require fine-tuning the LLMs, offering a self-correcting loop that enhances performance.\n\n2. The framework design is grounded in theories of human brain function, making it more natural and theoretically sound.\n\n3. The paper presents well-motivated research objectives, and the framework components effectively address these motivations.",
"weaknesses": "1. As mentioned in the strengths, the framework does not fine-tune the LLMs directly, but instead tunes a router to enhance their representations. However, I would argue that this approach resembles LoRA-based fine-tuning, which also avoids modifying the main model parameters but introduces additional trainable components. Although the router operates differently from LoRA, both approaches still require training. From the paper, it is unclear whether the baseline model (i.e., Qwen) was fine-tuned with LoRA or not, but the router clearly involves training. Therefore, comparing a LoRA-based version and a router-based version would provide a fairer evaluation than the current setup.\n\n2. The paper appears to draw inspiration from concepts in brain science, but the connections between these concepts and the proposed framework are not clearly established. Providing more details on how these ideas relate to the framework would help clarify the motivation and theoretical grounding.\n\n3. There has been extensive research on enhancing LLM performance by modifying or reinterpreting their representations in a plug-and-play manner, without training any additional modules. How does the proposed framework compare in this regard? Can it function as a plug-and-play method, or does it necessarily require additional training?",
"questions": "See Weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T00:19:54",
"modification_date": "2025-11-12T18:24:34",
"review_url": "https://openreview.net/forum?id=0miO9v1jeC¬eId=0ugjulmRMx",
"license": "CC BY 4.0"
},
{
"id": "QDeI6V99pb",
"forum": "0miO9v1jeC",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24412/Reviewer_vYqw",
"reviewer_name": "Reviewer_vYqw",
"rating": 4,
"confidence": 2,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "Inspired by the structure of the human brain, this paper proposes TAR, a framework for token-level semantic correction. Based on the authors’ experiments, the proposed framework appears to yield certain performance improvements.\n\nHowever, overall, I find the claimed connection between the method and biological brain structures somewhat tenuous. The paper does not provide sufficient justification or clear evidence for the claimed correspondence. This is my first time reviewing an AI paper that attempts to draw inspiration from biological structures, so I will lower my confidence accordingly.",
"strengths": "**[S1]** The paper attempts to establish a connection between the proposed AI method and human brain structures, reflecting an interesting biological inspiration in the design of artificial intelligence systems.",
"weaknesses": "**[W1]** The relationship between the proposed method and the biological brain structures needs to be strengthened. At present, the explanation feels rather superficial and unconvincing.\n\n**[W2]** The paper devotes a large amount of space (around 7 pages) to describing the method and its connection to the brain, but the experimental and analytical sections are very limited (less than 2 pages). The paper feels more narrative-driven than methodologically innovative.\n\n**[W3]** The experiments are insufficient. The evaluation is only conducted on small Qwen models, and several results are missing — e.g., GSM8K lacks Qwen3-1.7B evaluation, MATH500 lacks Qwen2.5-0.5B, and AIME25 lacks Qwen2.5-0.5B. Therefore, the results do not convincingly demonstrate the effectiveness of the proposed approach.\n\n**[W4]** Token length is not an ideal metric for measuring efficiency, since the introduction of a router modifies the model structure. Reporting inference latency would provide a more reasonable and fair comparison.\n\n**[W5]** The method description is not sufficiently clear. I spent a considerable amount of time trying to understand the proposed framework, and Figure 3 fails to clearly illustrate the core design.",
"questions": "Please refer to the weaknesses section above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T00:17:56",
"modification_date": "2025-11-12T18:24:34",
"review_url": "https://openreview.net/forum?id=0miO9v1jeC¬eId=QDeI6V99pb",
"license": "CC BY 4.0"
}
] |
WzLjwv8KAn
|
https://openreview.net/forum?id=WzLjwv8KAn
|
Which Cultural Lens Do Models Adopt? On Cultural Positioning Bias and Agentic Mitigation in LLMs
| 2.5
| 3.5
|
[
2,
2,
4,
2
] |
[
3,
4,
3,
4
] | 4
|
[
"Bias",
"Culture",
"LLM",
"Generation",
"Agent"
] |
Large language models (LLMs) have unlocked a wide range of downstream generative applications.
However, we found that they also risk perpetuating subtle fairness issues tied to culture, positioning their generations from the perspectives of the mainstream US culture while demonstrating salient externality towards non-mainstream ones.
In this work, we identify and systematically investigate this novel **culture positioning bias**, in which an LLM’s default generative stance aligns with a mainstream view and treats other cultures as "outsiders".
We propose the ***CultureLens*** benchmark with 4,000 generation prompts and 3 evaluation metrics for quantifying this bias through the lens of a *culturally situated interview script generation* task, in which an LLM is positioned as an on-site reporter interviewing local people across 10 diverse cultures.
Empirical evaluation on 5 state-of-the-art LLMs reveals a stark pattern: while models adopt insider tones in over 88\% US-contexted scripts on average, they disproportionately adopt mainly outsider stances for less dominant cultures.
To resolve these biases, we propose *2 inference-time mitigation methods*: a baseline prompt-based **Fairness Intervention Pillars (FIP)** method, and a structured **Mitigation via Fairness Agents (MFA)** framework consisting of 2 pipelines:
(1) **MFA-SA (Single-Agent)** introduces a self-reflection and rewriting loop based on fairness guidelines.
(2) **MFA-MA (Multi-Agent)** structures the process into a hierarchy of specialized agents: a Planner Agent (initial script generation), a Critique Agent (evaluates initial script against fairness pillars), and a Refinement Agent (incorporates feedback to produce a polished, unbiased script).
Empirical results demonstrate that agent-based MFA methods achieve outstanding and robust performance in mitigating the culture positioning bias:
For instance, on the CAG metric, *MFA-SA reduces bias in Llama model by 89.70 \% and MFA-MA mitigates bias in Qwen by 82.55\%*.
These findings showcase the effectiveness of agent-based methods as a promising direction for mitigating biases in generative LLMs.
|
alignment, fairness, safety, privacy, and societal considerations
|
https://openreview.net/pdf?id=WzLjwv8KAn
| 2025-09-20T15:05:18
| 5
|
[
{
"id": "Ks50sMgkwg",
"forum": "WzLjwv8KAn",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24027/Reviewer_N7qu",
"reviewer_name": "Reviewer_N7qu",
"rating": 2,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This work identifies a novel culture positioning bias in large language models (LLMs), where generations default to mainstream U.S. cultural perspectives and marginalize other cultures. To measure this bias, the authors introduce CultureLens, a benchmark with 4,000 prompts and 3 metrics that evaluate cultural stance through interview-style text generation across 10 global cultures. They further propose Fairness Intervention Pillars (FIP) and an agent-based Mitigation via Fairness Agents (MFA) framework, showing that MFA methods dramatically reduce cultural bias—by up to 89.7%—and offer a robust path toward fairer generative LLMs.",
"strengths": "1. The paper proposes CultureLen to evaluate culture positioning bias problem.\n2. It also proposes a baseline prompt-based Fairness Intervention Pillars (FIP) method, and a structured Mitigation via Fairness Agents (MFA) framework to mitigate culture positioning bias problem.",
"weaknesses": "1. I think this experiment in Sec 5.1 is not rigorous. There are lots of cultural knowledge, covering different aspects. The paper just did experiments on Reddit and Wikipedia and claims that culture-specific knowledge can't improve fairness performance. To get this conclusion, the authors need to do large-scale experiments.\n2. I don't think the paper proposes some novel findings. For the culture positioning bias, it seems not new.\n3. For the Fairness Intervention Pillars, I still don't think the method is novel.",
"questions": "See weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T06:06:55",
"modification_date": "2025-11-12T18:21:42",
"review_url": "https://openreview.net/forum?id=WzLjwv8KAn¬eId=Ks50sMgkwg",
"license": "CC BY 4.0"
},
{
"id": "y3EEXg7ckY",
"forum": "WzLjwv8KAn",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24027/Reviewer_ngpY",
"reviewer_name": "Reviewer_ngpY",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 3,
"summary": "This paper investigates how large language models reflect cultural bias by asking, \"through which cultural lens do these models see the world?\" Focusing on interview script generation, the authors present CULTURELENS, a benchmark of 4,000 prompts spanning ten culturally diverse contexts. It assesses whether LLMs take an insider or outsider stance when producing culturally grounded content. Three quantitative metrics: Cultural Externality Percentage, Cultural Perspective Deviation, and Cultural Alignment Gap, are used to measure bias systematically. Experiments with several leading LLMs show a clear US-centric tilt, with non-dominant cultures like Papua New Guinea often framed from an outsider view. To mitigate this, the paper introduces Fairness Intervention Pillars, a targeted strategy leveraging both single-agent and multi-agent setups to meaningfully narrow cultural positioning disparities.",
"strengths": "- **S1:** The paper introduces CULTURELENS, a well-designed benchmark addressing cultural positioning bias in depth for the first time.\n- **S2:** It uses three clear and interpretable metrics (CEP, CPD, CAG) that systematically measure cultural bias.\n- **S3:** The Fairness Intervention Pillar (FIP) offers a practical, effective way to reduce bias, making the work actionable.\n- **S4:** The Mitigation via Fairness Agents (MFA) framework is well-structured, with two pipelines: MFA-SA (Single-Agent) and MFA-MA (Multi-Agent).",
"weaknesses": "- **W1**: Only five models were tested, mostly small ones (7B), and major families like Gemini, Gemma, or Claude were missing. Also, using smaller models likely skews results, making the findings less representative of actual model capabilities, as larger models perform better most times. Overall, in this sense, some findings of the current evaluation results can be misleading and inappropriate in general; and the findings do not provide a clear picture in terms of evaluation.\n- **W2:** The evaluation focuses only on the interviewer’s (LLM-generated) questions and ignores the interviewee responses, which limits depth and misses key aspects of cultural reasoning. As cultural understanding is a very complex topic, in this case, this method doesn’t seem reliable (as shown in agreement scores) and it questions the credibility of the whole study.\n- **W3:** Why did the evaluation focus solely on the interviewer’s (LLM-generated) questions rather than also assessing interviewee responses, as it seems that judging from questions which do not have enough content to judge this reliably (even human agreements are low)? Without analyzing the interviewee role, the results feel surface-level and don’t truly assess cultural understanding. Including the interviewee role may provide a deeper or more accurate view of the models’ cultural reasoning abilities.\n- **W4:** The inter-annotator agreement (κ = 0.60) and human–AI agreement (κ = 0.53) indicate only moderate consistency, raising legitimate concerns about the reliability and clarity of human judgments. As noted in the literature [1], Fleiss’ kappa values in this range suggest moderate agreement on a scale from –1 to 1, where 1 represents perfect consensus and 0 corresponds to chance-level agreement. Given this, I find the use of an LLM as a judge questionable, especially considering the task’s design flaws, since the evaluation centers on the interviewer’s perspective (the questions only), the generated responses (interview questions) are unlikely to exhibit strong cultural nuances. Moreover, even human experts show considerable difficulty in maintaining consistency, with only 35–40% of the annotated data deemed reliable [2], as reported in lines 937–940. Together, these issues cast doubt on the robustness and reliability of the presented results.\n- **W5:** The criteria for “insider” or “outsider” cultural positioning are not well-defined, leaving the human evaluation guidelines vague. Were any written guidelines or examples provided? The prompt (C.3) says, \"Does the interviewer use language that suggests they are not part of the interviewee’s culture? Answer only in yes or no.\" There is no clear instruction about that in B.7 too.\n- **W6:** The claim that cultural nuances were “manually verified” is unclear—no mention of checklists, documentation, or how intra-cultural variation was handled. What process was used for the “manual verification” of cultural nuances mentioned in Line 494? Was there a checklist, documentation, or standardized approach?\n- **W7:** The lack of structured verification steps makes the manual validation process seem weak and unreliable. How did the authors ensure that intra-cultural variations (regional, ethnic, or social) were considered during the manual verification process?\n\n[1] Hartling L, Hamm M, Milne A, et al. Validity and Inter-Rater Reliability Testing of Quality Assessment Instruments [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2012 Mar. 
Available from: https://www.ncbi.nlm.nih.gov/books/NBK92293/ https://www.ncbi.nlm.nih.gov/books/NBK92287/table/executivesummary.t2/?report=objectonly\n\n[2] McHugh M. L. (2012). Interrater reliability: the kappa statistic. Biochemia medica, 22(3), 276–282.",
"questions": "Please address the above weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T22:39:23",
"modification_date": "2025-11-12T18:21:42",
"review_url": "https://openreview.net/forum?id=WzLjwv8KAn¬eId=y3EEXg7ckY",
"license": "CC BY 4.0"
},
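For reference on the agreement statistics debated in W4 of the preceding review, the kappa coefficient cited from [1] and [2] corrects raw agreement for chance; for two raters (Cohen's kappa) it is defined as:

```latex
\kappa = \frac{p_o - p_e}{1 - p_e},
\qquad
p_o = \text{observed agreement},
\qquad
p_e = \text{agreement expected by chance}.
```

For example, $p_o = 0.8$ with $p_e = 0.5$ gives $\kappa = (0.8 - 0.5)/(1 - 0.5) = 0.6$, which sits exactly in the "moderate" band the reviewer describes.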
{
"id": "cyxpqqWrUF",
"forum": "WzLjwv8KAn",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24027/Reviewer_G1gt",
"reviewer_name": "Reviewer_G1gt",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The authors identify the problem of culture positioning bias, in which LLMs default to adopting an insider lens for certain cultures, but an outsider lens for other, often not as well-resourced, cultures. They introduce an interview script generation task to evaluate this bias across different LLMs. They find that all LLMs are biased toward an American cultural lens, but that certain prompt-based mitigations can reduce this bias on the interview task.",
"strengths": "- The paper identifies an important direction in cultural alignment that has been relatively neglected. While lots of work has analyzed LLM default behavior in MCQ settings, as well as biases related to cultural steering in open-ended settings, they consider default behavior in an open-ended setting through the lens of culture positioning bias.\n- Extensive analyses reveal that culture positioning bias is an issue in language models, and effective prompt-based mitigations are identified to address this for the task in question.",
"weaknesses": "- The paper focuses on a very narrow set of interview script generation tasks, which are an uncommon use case for LLMs - due to the narrow task focus, it’s unclear whether the results shown would generalize to more realistic real-world tasks in ways that would perpetuate the representational or allocational harms discussed.\n- The analysis of the qualitative results in Section 4.3.2 don’t seem to be well-grounded in past work on stereotype mitigation. In particular, the rationale behind the color-coded labels in tables 2-3 is not explicitly given, and labels seem to be ad-hoc (e.g. it doesn’t seem problematic for “soviet” and “orthodox” to be associated with Russia, and “Punjab” is a region in Pakistan, so it’s unclear why it’s highlighted but American states are not).\n- The proposed mitigations, such as FIP, seem somewhat ungrounded as well - the FIP prompt given to the model is zero-shot GPT-4o output.",
"questions": "- To what extent are the results explained by the United States being the only country in the list of 10 where English is the most commonly used language? The Rystrøm 2025 work cited uses language as a cultural control - one hypothesis for the effect seen in this work is that LLMs form associations between the language used and the insider/outsider status of a country. For example, if we prompted in Urdu and the model switches to insider status when generating Pakistani transcripts, this might suggest that the effects seen are related to model inference of insider/outsider status based on the prompt given, rather than unawareness of task-specific norms.\n- What happens if you LLMs the ability to critique/reflect without any task-specific guidelines? It would be useful to know if the multi-agent gains are attributable to task-specific knowledge, or just the ability to reflect in general.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T04:11:37",
"modification_date": "2025-11-12T18:21:42",
"review_url": "https://openreview.net/forum?id=WzLjwv8KAn¬eId=cyxpqqWrUF",
"license": "CC BY 4.0"
},
{
"id": "Z2wDEviiDv",
"forum": "WzLjwv8KAn",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24027/Reviewer_waHg",
"reviewer_name": "Reviewer_waHg",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper analyzes culture bias for large language models (LLMs), focusing on an insider vs outsider stance when generating content about different cultures. The authors propose benchmarks with new metrics (CEP, CPD, CAG) to quantify this bias and present qualitative and quantitative analyses showing clear insider vs outsider asymmetries across multiple models. They further propose two prompt-based mitigation frameworks (FIP and MFA) to reduce such biases.\n\nSince culture bias in LLMs have been explored by several prior papers, the main contribution of this paper is the insider vs outsider framing and the associated metrics. Although it's great that the authors also proposed mitigation methods in addition to bias detection, the analysis is only done on the proposed metrics and hard to make comparison with prior work.",
"strengths": "Novel framing via the insider vs outsider idea:\nThe insider vs outsider distinction provides a clear and intuitive way to think about cross-cultural asymmetry. Even though conceptually simple, this framing could help future work reason about whose voice a model takes when discussing cultural contexts. Compared to detecting differences on same given small dataset, this approach theoretically has deeper implications to downstream tasks and model understanding in general. \n\nStrong execution and coverage: \nThe study evaluates multiple models across ten cultures, offering a broad snapshot of how cultural positioning manifests. The qualitative examples and lexical analyses are accessible and help ground abstract claims in concrete evidence.\n\nReasonable dataset and metric design: \nThe proposed benchmark and derived metrics (CEP, CPD, CAG) provide a structured way to quantify the insider vs outsider phenomenon. All of the ideas introduced are smart and well-designed despite not super technically innovative. \n\nBias mitigation in addition to bias detection: \nIn addition to introducing a challenge, the authors also made an attempt to solve the problem using the two prompt-based mitigation frameworks (FIP and MFA) and the results show decent improvements compared to the baseline.",
"weaknesses": "Too many different ideas and contributions but not enough depth: \nThe paper touches on three distinct areas: bias detection, metric design, and bias mitigation. As a result, the work reads as three partial contributions rather than one cohesive advance. The cultural bias framing, metric proposal, and agent-based mitigation could each justify a separate study, but none are developed deeply enough to stand alone at ICLR level in terms of standard of innovation or analytical rigor.\n\nLimited novelty: \nCultural bias and Western centrism in LLMs have been widely studied. The insider vs outsider framing adds rhetorical clarity but not a fundamentally new conceptual or analytical dimension. Prior work has already characterized similar problems and it's not obvious how the proposed framing compares or improves upon prior findings. Since the insider vs outsider is perhaps the most important contribution, deeper analysis would greatly help. For example, how it affects downstream applications, what we can learn from this, how likely is this going to transfer to existing harmful cases etc. \n\nWeak technical depth and comparisons in mitigation: \nThe mitigation section (FIP and MFA) is underdeveloped. Both are high-level prompting or agentic reformulations evaluated only against the authors’ own metrics, with no comparison to established debiasing or alignment baselines. The methods lack algorithmic substance and do not yield generalizable insights about how to mitigate cultural bias beyond prompt engineering.",
"questions": "Core contribution on insider vs outsider framing: \nHow do you see the insider vs outsider framing as conceptually distinct from prior discussions of cultural bias and Western centrism in LLMs? Can you articulate what new understanding this framing provides that was not already captured by “cultural alignment” or “representational disparity” studies?\nDid you conduct human evaluations to assess whether these quantitative scores correlate with human judgments of insider vs outsider stance?\nWhat drives the insider vs outsider asymmetry observed? Is it primarily data imbalance, instruction tuning bias, or cultural salience in the training corpus?\nHave you analyzed whether the same patterns hold for multilingual or region-specific models trained outside Western datasets?\n\nOn bias mitigation: \nThe mitigation strategies (FIP, MFA) are tested only on your benchmark. How do they perform on existing cultural or social bias benchmarks?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-20T11:36:59",
"modification_date": "2025-11-12T18:21:43",
"review_url": "https://openreview.net/forum?id=WzLjwv8KAn¬eId=Z2wDEviiDv",
"license": "CC BY 4.0"
}
] |
|
avdPTUXdPG
|
https://openreview.net/forum?id=avdPTUXdPG
|
Dissecting Demystifying Region-Based Representations in MLLMs
| 3
| 3
|
[
4,
4,
2,
2
] |
[
3,
3,
3,
3
] | 4
|
[
"Vision Language Models",
"Multimodal Models"
] |
Multimodal Large Language Models (MLLMs) typically process visual information as a flat sequence of image patch tokens, which is computationally expensive and lacks explicit semantic structure. This paper provides a systematic, vision-centric analysis of region-based representations, which group patches into semantically meaningful regions, as a more efficient and interpretable alternative. Our investigation is grounded in a key finding: MLLM performance is surprisingly robust to the input order of patch tokens, as the visual encoder already encodes spatial information within the patches. This insight provides a foundational justification for reorganizing patches into semantically coherent regions. We further identify that the success of region-based methods depends on the quality of the visual features, particularly their smoothness and locality. We systematically evaluate how to enhance these properties through vision backbone selection, feature normalization, and hybrid partitioning strategies. Through comprehensive evaluations, we demonstrate that optimized region-based representations are a competitive alternative to patch-based ones, offering a compelling path towards more efficient, interpretable, and performant MLLMs.
|
Dissecting Demystifying Region-Based Representations in MLLMs
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=avdPTUXdPG
| 2025-09-19T22:59:05
| 4
|
[
{
"id": "PY4oNc5j0b",
"forum": "avdPTUXdPG",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19156/Reviewer_xyPG",
"reviewer_name": "Reviewer_xyPG",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper aims to dissect the region-based representation in the mul-timodal large language models. The paper shows that region-based representations are robust to patch token order, and their effectiveness depends on smooth, localized visual features. The proposed in-sights are straightforward and easy to follow, though somewhat lacking in depth. The experiments and visualizations are clearly designed to illustrate the findings, though they sometimes lack quantitative support or deeper analysis.",
"strengths": "- The insights are straightforward and easy to follow.",
"weaknesses": "- Intuitive but Shallow Conclusions: The biggest strength of this paper is also its main weakness. The conclusions are intuitive and easy to follow, but their usefulness for guiding practical applications or informing further theoretical analysis may be limited. They lack underlying theoretical explanations, which makes the paper feel more like a report of experimental observations rather than a thorough dissection.\n\n- Lack of Quantitative Experimental Support: Some of the reasoning would be more convincing if supported by quantitative metrics. For example, in Finding 3, the authors mention that feature non-smoothness poses a challenge for region-based methods. Intuitively, the authors should provide a quantitative analysis of feature non-smoothness and establish its relationship with performance. The absence of such quantitative analyses reduces the depth and the inspirational value of the paper.",
"questions": "Please refer to the weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T15:43:21",
"modification_date": "2025-11-12T15:06:06",
"review_url": "https://openreview.net/forum?id=avdPTUXdPG¬eId=PY4oNc5j0b",
"license": "CC BY 4.0"
},
{
"id": "gIYq8bfUVa",
"forum": "avdPTUXdPG",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19156/Reviewer_Y2ak",
"reviewer_name": "Reviewer_Y2ak",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper study the region-based visual representation for MLLM input instead of traditional patch-based representations that have high computational costs (quadratic token growth) and lack semantic structure\nThe paper find that MLLM performance is robust to patch token order, as visual encoders encode spatial info into patch features—providing a key basis for region reorganization. It identifies raw visual feature incoherence as the main challenge for region-based representations aggregation . To tackle this, it proposes strategies: using agglomerative backbones (e.g., RADIOv2.5), adding normalization (e.g., RMSNorm), and hybrid regions (SAM segmentation + DBSCAN clustering) .\nExperiments show optimized region-based MLLM match patch-based MLLM in performance, while cutting visual tokens for efficiency and boosting interpretability via focused attention .",
"strengths": "- It is meaningful to investigate how to construct a region-based visual representation suitable for MLLM input—one that can facilitate LLM understanding while reducing training overhead.\n- The paper is highly accessible, featuring a well-organized structure that allows readers to easily grasp its core ideas.\n- Extensive experiments and visualizations provide solid support for the research findings. The paper conducts in-depth analyses and deduces/validates each finding through systematic reasoning.",
"weaknesses": "- The paper mainly discusses the region feature aggregation in the vision part. However, how the LLM attends to the patch features and region features remains under exploration. \n- The paper proposes a simple way to obtain the region-based representation, which is a post-processing step of the patch vision features. Yet, it does not study how to learn region features (suitable for MLLMs) within the ViT. \n- Lack of efficiency discussion: The paper proposes using SAM and clustering methods to extract region-based vision features, but it fails to analyze whether the use of SAM and clustering brings additional computational overhead compared to the patch-based method. \n- Missing results in several experiments raise doubts about the correctness of the findings. While Table 1 evaluates 7 benchmarks, Tables 2, 3, and 4 only evaluate on 2 or 4 benchmarks, which may cause confusion for readers.\n- Table 3: Configurations E, F, and G are missing results for POPE, OCRBench, and CV-Bench. \n- Figure 3 does not indicate which models the visualizations correspond to. \n- Typos: Line 256 has a missing reference (marked as `Table ??`).",
"questions": "- Visualization of the attention mask when altering the order of vision tokens is required. I am curious whether vision tokens in random order exhibit the same positional attention patterns as those in sequential order. \n- Many papers on token reduction or token pruning indicate that dropping 75% of tokens even above only slightly impacts performance. Does this mean LLMs do not need to capture all vision tokens and only need to attend to a few vital vision tokens?\n- The paper obtains the region-based representation based on SAM mask and token merging. It should compare to other token merging and token pruning methods, like PyramidDrop, (CVPR25), SparseVLM (ICML25), FasterVLM (ICCV25), VisionZip(CVPR2025)\n- Each region might be an object or the background with the same concept. Does the region representation for one element correspond to a single token or a set of tokens? Additionally, does the position of regions in the input sequence matter for LLMs? \n- As mentioned in the paper, DINOv2 exhibits better feature coherence due to its self-supervised learning (SSL) training. How about the performance of using the clustering results of DINOv2 features to aggregate CLIP or SigLIP features? \n- In Table 5, what does `768x` denote? If it refers to resolution, why does this setting differ from those in other ablation experiments?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T02:19:59",
"modification_date": "2025-11-12T15:19:05",
"review_url": "https://openreview.net/forum?id=avdPTUXdPG¬eId=gIYq8bfUVa",
"license": "CC BY 4.0"
},
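The reviews in this group discuss pooling patch features into region tokens via SAM masks plus DBSCAN clustering. As a minimal sketch under assumed shapes (an illustrative post-hoc averaging step, not the reviewed paper's exact pipeline), mask-based region pooling can look like:

```python
# Illustrative average-pooling of patch features into region tokens
# (assumed shapes and interface; not the reviewed paper's implementation).
import numpy as np

def pool_regions(patch_feats, region_ids):
    """patch_feats: (N, D) patch embeddings from a vision backbone.
    region_ids: (N,) integer region label per patch, e.g. from SAM masks
    rasterized to the patch grid, with leftover patches grouped by DBSCAN.
    Returns an (R, D) array with one pooled token per unique region."""
    return np.stack([patch_feats[region_ids == r].mean(axis=0)
                     for r in np.unique(region_ids)])

# Toy usage: 16 patches of dimension 8 collapse into 3 region tokens.
rng = np.random.default_rng(2)
feats = rng.normal(size=(16, 8))
ids = rng.integers(0, 3, size=16)
region_tokens = pool_regions(feats, ids)   # shape (3, 8): 16 tokens -> 3
```

The token-count reduction here (N patches down to R regions) is where the efficiency claims come from; the reviewers' efficiency concern is that producing region_ids with SAM and DBSCAN adds its own cost, which a fair comparison should account for.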
{
"id": "89esP1N32A",
"forum": "avdPTUXdPG",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19156/Reviewer_Hmy2",
"reviewer_name": "Reviewer_Hmy2",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper explores region-based representations as an efficient and interpretable alternative to patch-based representations. It builds on the observation that MLLM performance remains robust to the input order of visual tokens, implying that spatial information is already embedded within patch features. This insight motivates reorganizing patches into semantically coherent regions. The paper also provides comprehensive experimental evaluation and in-depth analysis to support its findings.",
"strengths": "The paper offers a comprehensive and systematic analysis of region-based representations. It identifies feature smoothness as a key factor underlying their effectiveness and proposes concrete strategies to leverage this property. The visualization and attention analyses are clear and compelling, demonstrating how region-based methods produce more structured and interpretable attention maps while significantly reducing the number of tokens.",
"weaknesses": "While the paper provides valuable analysis, its methodological novelty is limited. The work primarily examines existing components—such as segmentation, clustering, and normalization—rather than introducing new architectures or learning mechanisms. The performance improvements from region-based representations are moderate, and the exploration of aggregation strategies remains incomplete. In particular, the proposed cross-attention-based aggregation yields marginal gains, indicating that a more sophisticated design may be required. Additionally, the study relies on frozen visual encoders, which restricts insight into how region-based representations might interact with end-to-end optimization or benefit from joint training.",
"questions": "1. Would unfreezing the visual encoder during fine-tuning enhance feature coherence and potentially reduce the reliance on post-hoc normalization?\n2. How sensitive are the results to the number and granularity of regions? Could an adaptive region selection mechanism based on image complexity further improve performance?\n3. Is it feasible to integrate the hybrid segmentation–clustering approach into the training process itself, rather than using it solely as a preprocessing step?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T12:26:27",
"modification_date": "2025-11-12T15:06:06",
"review_url": "https://openreview.net/forum?id=avdPTUXdPG¬eId=89esP1N32A",
"license": "CC BY 4.0"
},
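The summary above notes that MLLM performance is robust to visual token order because spatial information is already embedded in the patch features. As a minimal sketch of why that is plausible (a toy of assumed shapes, not the paper's experiment): any permutation-invariant aggregator, here single-query attention without positional encodings, produces identical output on shuffled token sets, so order can only matter through information that is not in the features themselves.

```python
# Toy demonstration (not the paper's experiment): attention pooling over a
# token set is permutation-invariant, so if position is baked into the
# features, shuffling the tokens cannot change the pooled output.
import numpy as np

def attn_pool(query, tokens):
    scores = tokens @ query                  # one attention score per token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over the token set
    return weights @ tokens                  # weighted sum of token features

rng = np.random.default_rng(4)
tokens = rng.normal(size=(16, 8))            # 16 patch tokens of dim 8
query = rng.normal(size=8)
shuffled = tokens[rng.permutation(16)]

assert np.allclose(attn_pool(query, tokens), attn_pool(query, shuffled))
```

Real transformers add positional encodings, so exact invariance does not hold; the reviewed finding is the empirical claim that performance is nonetheless insensitive to token order once the encoder has embedded spatial cues into the features.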
{
"id": "TGX4Vgu3uT",
"forum": "avdPTUXdPG",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19156/Reviewer_8ZZf",
"reviewer_name": "Reviewer_8ZZf",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "This paper focuses on the effect of aggregating token-level visual representations into region-level visual representations in vision-language models on general multimodal tasks. Authors show how region-level representations improve the performance of MLLM across various multimodal tasks. The authors also conducted a comprehensive discussion and experiments on several related points: (1) how order non-sensitivity of MLLM guarantees that region-level representation won’t collapse the model, (2) how the quality of visual representations influences the effectiveness of region-level representation, (3) how normalization helps region-level representations, (4) how different approaches of forming the region and pooling features influence the effect of region-level representations.",
"strengths": "The paper transitions from region-based representations’ usage for vision-only tasks to multimodal tasks. The authors ask an interesting question related to “why region-based representations work on MLLM”. They use an analysis of how the order of the visual token influences the performance of MLLM.",
"weaknesses": "1. The explanation about the drop in OCR performance needs to be further justified by experiments, e.g., if one creates the region for each independent character, will the performance of region-level representation improve the performance on OCR tasks?\n2. The lack of sufficient explanation or label for particular figures makes some of the conclusions less convincing: \n - There is no explanation about the difference between “random order (trained)” and “random order (w/o training)” in Table 3. Does that mean the model is further fine-tuned after reordering the input tokens? Why are these two methods distinguished and compared only in patch-based RADIOv2.5, whereas for CLIP and region-based RADIOv2.5, there is only a single “random order” condition? \n - Why not test the pre-shuffle condition on CLIP and region-based RADIO v2.5?\n - Is there any explanation about why some of the entries are empty in Table 3?\n - Figure 3 does not have any label indicating which image belongs to which model, making it hard to tell anything from the figure here.\n3. As discussed in the paper, agglomerative visual encoders could offer better visual representation. To make the conclusion more solid, I would like to see more results from different agglomerative visual encoders and compare with the traditional visual encoder rather than only RADIOv2.5.\n4. The results from Table 4 are not sufficient enough to support the point that RMSNorm helps the region-based representation, as there is only an improvement on MMStar. Also, the format of the plot here is confusing, as there are three models for patch-based representation but only one model for region-based representation. Are these the results for RADIOv2.5? How does RMSNorm work on the region-based CLIP and SigLIP2?\n\nMinor:\n1. The location of the “G” letter in figure (b) is different from the “G” in ( c) and (d), also those three “G”s seem to have different luminance.\n2. Line 256, the link to the table is not working correctly (Table ??)\n3. Figure 6 in the appendix is not centered.",
"questions": "see weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T12:03:51",
"modification_date": "2025-11-12T15:06:07",
"review_url": "https://openreview.net/forum?id=avdPTUXdPG¬eId=TGX4Vgu3uT",
"license": "CC BY 4.0"
}
] |
vK6iDcs8SM
|
https://openreview.net/forum?id=vK6iDcs8SM
|
BulletGen: Improving 4D Reconstruction with Bullet-Time Generation
| 4
| 3.75
|
[
4,
6,
4,
2
] |
[
4,
4,
3,
4
] | 4
|
[
"4D reconstruction",
"bullet-time",
"generative models"
] |
Transforming casually captured, monocular videos into fully immersive dynamic experiences is a highly ill-posed task, and comes with significant challenges, e.g., reconstructing unseen regions, and dealing with the ambiguity in monocular depth estimation. In this work we introduce BulletGen, an approach that takes advantage of generative models to correct errors and complete missing information in a Gaussian-based dynamic scene representation. This is done by aligning the output of a diffusion-based video generation model with the 4D reconstruction at a single frozen "bullet-time" step. The generated frames are then used to supervise the optimization of the 4D Gaussian model. Our method seamlessly blends generative content with both static and dynamic scene components, achieving state-of-the-art results on both novel-view synthesis, and 2D/3D tracking tasks.
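As a schematic illustration only (our notation and weights, not the paper's formulation): the reviews below describe the supervision above as a robust loss matching rendered views against generated bullet-time frames with photometric, perceptual, semantic, and depth terms, i.e., something of the form

$$\mathcal{L} \;=\; \lambda_{\text{photo}}\,\big\lVert \hat{I} - I_{\text{gen}} \big\rVert_1 \;+\; \lambda_{\text{perc}}\,\mathcal{L}_{\text{LPIPS}}\big(\hat{I}, I_{\text{gen}}\big) \;+\; \lambda_{\text{sem}}\,\mathcal{L}_{\text{sem}}\big(\hat{I}, I_{\text{gen}}\big) \;+\; \lambda_{\text{depth}}\,\big\lVert \hat{D} - D_{\text{aligned}} \big\rVert_1,$$

where $\hat{I}$ and $\hat{D}$ are the rendered image and depth, $I_{\text{gen}}$ is a generated bullet-time frame, and $D_{\text{aligned}}$ is a depth-aligned estimate; all $\lambda$ weights and term definitions are placeholders for the paper's actual loss.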
|
We improve 4D reconstruction from monocular videos by augmenting with bullet-time reconstructions from a generative model.
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=vK6iDcs8SM
| 2025-09-18T23:08:10
| 4
|
[
{
"id": "oHmENQ7Rpb",
"forum": "vK6iDcs8SM",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12477/Reviewer_q93b",
"reviewer_name": "Reviewer_q93b",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper presents BulletGen, a method for 4D dynamic scene reconstruction from monocular videos. Its core contribution is the integration of static, diffusion-based bullet-time generation with dynamic 3D Gaussian Splatting to address under-constrained regions. The approach iteratively augments the scene representation at selected frozen timestamps. Evaluations on the DyCheck iPhone and Nvidia Dynamic datasets demonstrate state-of-the-art performance in novel view synthesis and 2D/3D tracking.",
"strengths": "1. the generation steps alternate with the training of a Gaussian-based global 4D representation.\n2. The paper provides thorough quantitative evaluations on multiple datasets, demonstrating improvements in metrics like PSNR, SSIM, LPIPS, and tracking accuracy. The ablation study effectively analyzes the contributions of key hyperparameters, such as the number of bullet-time stamps and generations.",
"weaknesses": "1. A similar idea has already been employed in 3D reconstruction works like VistaDream, which also involves initializing a 3D Gaussian Splatting (3DGS) reconstruction and then iteratively inpainting it. This paper should further discuss its relationship with such literature.\n2. The outputs of diffusion models often exhibit insufficient 3D coherence, which can complicate the alignment process and introduce visual artifacts, especially when handling complex dynamic scenarios.\n3. The description of key technical components lacks clarity. The explanation of the generative augmentation pipeline and the loss function, while detailed, suffers from unnecessary complexity. It often fails to provide a clear intuition for design choices. For example, the iterative optimization loop and the mechanism for integrating generated views into the global 4D representation are not explained in a coherent, step-by-step manner, making the core contribution difficult to follow.",
"questions": "My main consider are the novelty of involving the video gen model to 4d reconstruction and detailed technique contribuitions.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T03:38:09",
"modification_date": "2025-11-12T12:55:52",
"review_url": "https://openreview.net/forum?id=vK6iDcs8SM¬eId=oHmENQ7Rpb",
"license": "CC BY 4.0"
},
{
"id": "Bv6LyxFunj",
"forum": "vK6iDcs8SM",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12477/Reviewer_pJEB",
"reviewer_name": "Reviewer_pJEB",
"rating": 6,
"confidence": 4,
"soundness": 4,
"contribution": 2,
"presentation": 3,
"summary": "This paper presents BulletGen, a method for 4D dynamic scene reconstruction from a single monocular video. The core problem it addresses is the highly ill-posed nature of this task, particularly in reconstructing unseen regions and resolving depth ambiguities.\nThe authors demonstrate that this approach achieves state-of-the-art results on the DyCheck iPhone and Nvidia datasets, significantly improving novel-view synthesis quality and 2D/3D tracking accuracy.",
"strengths": "1. Strong Empirical Results: The method shows impressive quantitative and qualitative results. It achieves state-of-the-art performance on standard benchmarks for both novel-view synthesis and 2D/3D tracking, clearly outperforming its baseline (Shape-of-Motion) and other recent methods. The qualitative results for extreme novel views (e.g., Fig. 1 and Fig. 3) are particularly strong.\n\t2. Simple but effective strategy. A significant strength is the method's practicality. Instead of requiring a complex, computationally expensive 4D video diffusion model, the authors leverage a generator trained only on static scenes. This \"bullet-time static diffusion strategy\" is a clever way to augment a dynamic reconstruction while using more accessible and abundant static training data",
"weaknesses": "1. Limited Novelty of the Core Concept: The central idea of using diffusion models to generate novel views as supervision for a 3D/4D neural representation is not new. This concept has been substantially explored in the 3D reconstruction domain, particularly for sparse-view inputs (e.g., ReconFusion [1], Difix+[2], and others ). The paper's primary contribution is the application of this idea to the monocular 4D setting. While effective, this can be seen as an incremental, though logical, extension of existing work\n\t2. Unconvincing \"Bullet-Time\" Strategy and Temporal Consistency: My main concern lies with the \"bullet-time\" generation. The generative model is static and, more importantly, generates novel views for each time stamp independently. The paper's claim is that optimizing the global 4DGS representation (which uses shared motion bases ) is sufficient to enforce temporal consistency across these independently generated views.\n\tOther works in 4D object generation (e.g., EG4D [3], SV4D [4]) have proposed more principled solutions, such as attention-based mixing or latent-space temporal models, to explicitly enforce consistency during the generation phase. The paper lacks a discussion or comparison against such temporally-aware generative methods. The provided temporal slice (Fig. 5) only compares against the SoM baseline, which is insufficient to prove the temporal coherence of the generated content.\n\t\n\t[1] Wu et al. CVPR 2024.\n\t[2] Zhang et al. CVPR 2025.\n\t[3] Sun et al. ICLR 2025.\n\t[4] Xie et al. ICLR 2025.",
"questions": "1. The justification for using Shape-of-Motion (SoM) as the baseline is unclear. Given that SoM itself has notable limitations, could the authors elaborate on why this particular model was chosen over other, stronger 4D reconstruction methods?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T16:55:48",
"modification_date": "2025-11-12T12:55:53",
"review_url": "https://openreview.net/forum?id=vK6iDcs8SM¬eId=Bv6LyxFunj",
"license": "CC BY 4.0"
},
{
"id": "JZdoNkRcDm",
"forum": "vK6iDcs8SM",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12477/Reviewer_27FN",
"reviewer_name": "Reviewer_27FN",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "- BulletGen addresses the ill-posed problem of 4D reconstruction from monocular videos by leveraging a diffusion-based video generation model to correct errors and complete missing information in Gaussian-based dynamic scene representations.\n- The method aligns generated frames at specific \"bullet-time\" stamps with the initial 4D reconstruction to supervise and \"iteratively\" optimize the dynamic 4D Gaussian model using a robust loss incorporating photometric, perceptual, semantic, and depth err.",
"strengths": "- Generative augmentation for unobserved regions: The method employs a frozen diffusion-based image/video generator at \"bullet-time\" instants to hallucinate novel views (e.g., back sides, occluded areas), providing missing appearance and geometry cues. \n- Integration of 2D generative priors with global 4D scene representation: Generated 2D frames undergo pose-tracking and depth-alignment, then iteratively supervise a dynamic 3D Gaussian-splatting representation to maintain spatio-temporal consistency.",
"weaknesses": "- Since the method ultimately relies on diffusion loss, performance improvements are inevitably limited when dealing with complex objects or large motions. This can be confirmed by the modest gains observed in SoM. (Particularly, while there is some gain shown in Table 3, the 0.06dB improvement in Table 2 is too minimal to be considered significant.)\n- Furthermore, the use of CLIP loss tends to work better primarily on object-centric scenes. Consequently, as observed, the method performs better on object-centric and near-rigid scenes such as \"spin\" and \"paper-windmill,\" which represents a limitation.",
"questions": "- I'm curious about how the performance would change if this approach were applied to MoSca, a more recent method.\n- Since the generation ultimately depends on the prompt, which frame was used to select the prompt? For example, I understand that the prompt can vary quite significantly across frames when objects are moving.\n- Was the generative model not fine-tuned?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T15:05:18",
"modification_date": "2025-11-12T12:55:53",
"review_url": "https://openreview.net/forum?id=vK6iDcs8SM¬eId=JZdoNkRcDm",
"license": "CC BY 4.0"
},
{
"id": "nLprp5IoJ6",
"forum": "vK6iDcs8SM",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12477/Reviewer_EcZr",
"reviewer_name": "Reviewer_EcZr",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes BulletGen, a method for dynamic 3D scene reconstruction from monocular videos that leverages a generative diffusion model to enhance 4D reconstruction. The key innovation is using the static image-to-video diffusion models to generate novel views at a selected timestep, which are then aligned to the scene representation and used for supervision. The method benchmarks its performance on DyCheck and Nvidia datasets.",
"strengths": "- The main strength of the method is the proposed iterative approach that allows for iterative refinement of the Gaussian scene representation. \n- The use of a novel view generation diffusion model, given a static scene, is a nice way of using a static model for dynamic reconstruction.\n- The proposed alignment algorithm seems to be working well. Authors make a really good use of the state-of-the-art models for priors.\n- The provided evaluation shows consistent improvement over the baseline Shape-of-Motion.\n- The qualitative results align with the quantitative ones in terms of visible improvements with respect to Shape-of-Motion.",
"weaknesses": "- A big part of the paper's contribution is dependent on the 'internal controllable image-to-video diffusion model'. This raises several concerns. Firstly, the reproducibility of the method will be largely limited unless the model is released. Any further comparison with this method will highly likely not be feasible. While it is fair to use an internal model to achieve a good performance, in my opinion, it should be accompanied by a detailed comparison of the same pipeline but with a publicly available diffusion model. This is particularly important given the fact that authors use ViewCrafter in their experiments. There is nothing in the paper suggesting that using ViewCrafter would not be feasible in this setup. In the current state, it is not clear how much of the performance improvement comes from the diffusion model.\n- The novelty of the paper is more limited than the authors mention; several papers with a rather similar approach are not mentioned, and the evaluation is missing important comparisons.\n\t- Regarding novelty - I believe two important references are missing. Firstly, Difix3D+ [1] is an important work that proposed an iterative refinement process of the 3D scene with the use of a generalisable enhancement diffusion model. Not only does this work propose a significant method in part similar to this paper's contributions, but the diffusion model is available and could be used in an ablation study. Further, ViDAR [2] proposes a reconstruction method in which the novel views are generated and further enhanced with personalised diffusion to serve as the reconstruction supervision. This work uses a similar idea of generating an additional supervision signal in novel views and seems highly relevant as a related work. \n\t- Regarding evaluation - the authors cite MoSca in line 133; the evaluation should include MoSca as the compared method. It looks like MoSca would outperform BulletGen in some metrics (PSNR, SSIM). Further, the aforementioned ViDAR could be included as well (performing better in PSNR, SSIM, LPIPS). Whilst the work recently got accepted to NeurIPS, and it seems not to have released the code yet, the arXiv release includes the numerical results on DyCheck in the same setup as in this paper, which makes it the same comparison as the Vivid4D used here.\n- In terms of ablation, it would be good to see isolation of contributions, i.e., add contributions to Shape-of-Motion one by one to show the reader the importance of each. This would help with the previously raised point of not being able to isolate the impact of the internal model. \n- To strengthen the claim on the new synthesised plausible parts of the scene, one could show that both quantitatively and qualitatively. Namely, given that DyCheck provides covisibility masks, presenting results outside of such masks would effectively measure the performance of the approach in unseen parts of the scene.\n- CAT4D is currently way past the date of being a concurrent work; it was published at CVPR 2025 and released on arXiv even earlier. 
\n- It would be good to see examples of the captions and whether they differ between times and views.\n\t\n[1] Jay Zhangjie Wu, Yuxuan Zhang, Haithem Turki, Xuanchi Ren, Jun Gao, Mike Zheng Shou, Sanja Fidler, Zan Gojcic, Huan Ling, *Difix3D+: Improving 3D Reconstructions with Single-Step Diffusion Models*, CVPR 2025\n\n[2] Michal Nazarczuk, Sibi Catley-Chandar, Thomas Tanay, Zhensong Zhang, Gregory Slabaugh, Eduardo Pérez-Pellitero, *ViDAR: Video Diffusion-Aware 4D Reconstruction From Monocular Inputs*, NeurIPS 2025",
"questions": "- Could you generate all your novel views with the diffusion model from the input video? In this way, the prior provided to the diffusion model would be the strongest, as opposed to a degraded view produced by baseline reconstruction. It would be an interesting ablation.\n- Regarding one of the weaknesses, can any diffusion be used in the pipeline?\n- In higher numbers of n_g, what is the rationale of doing multiple generations the same way?\n- It sounds like, given a timestep, an extreme view among input poses is selected as the prior for generation. Therefore, for some timesteps, the view will be close to the input, and for some, very far. For the far views, the prior (i.e. novel view) will highly likely contain some artefacts. This would, in turn, introduce a likely noisy supervision to the reconstruction model. Did you observe anything like that? Is there a way to mitigate that?\n- Regarding time complexity, could you report a time for generating one sequence of novel views given the input image for the diffusion model?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T08:41:58",
"modification_date": "2025-11-12T12:55:53",
"review_url": "https://openreview.net/forum?id=vK6iDcs8SM¬eId=nLprp5IoJ6",
"license": "CC BY 4.0"
}
] |
lNcc1TypMd
|
https://openreview.net/forum?id=lNcc1TypMd
|
Beyond Log Likelihood: Probability-Based Objectives for Supervised Fine-Tuning across the Model Capability Continuum
| 5
| 3.75
|
[
4,
6,
6,
4
] |
[
3,
4,
4,
4
] | 4
|
[
"Post-Training",
"SFT",
"training objectives"
] |
Supervised fine-tuning (SFT) is the standard approach for post-training large language models (LLMs), yet it often shows limited generalization. We trace this limitation to its default training objective: negative log likelihood (NLL). While NLL is classically optimal when training from scratch, post-training operates in a different paradigm and could violate its optimality assumptions, where models already encode task-relevant priors and supervision can be long and noisy. To this end, we study a general family of probability-based objectives and characterize their effectiveness under different conditions. Through comprehensive experiments and extensive ablation studies across 7 model backbones, 14 benchmarks, and 3 domains, we uncover a critical dimension that governs objective behavior: the *model-capability continuum*. Near the *model-strong* end, prior-leaning objectives that downweight low-probability tokens (*e.g.,* $-p$, $-p^{10}$, thresholded variants) consistently outperform NLL; toward the *model-weak* end, NLL dominates; in between, no single objective prevails. Our theoretical analysis further elucidates how objectives trade places across the continuum, providing a principled foundation for adapting objectives to model capability.
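As an illustrative sketch (our own naming, not the authors' released code), the objective family above can be written token-wise as $-p^{\alpha}/\alpha$: the gradient recovers NLL as $\alpha \to 0$, while larger $\alpha$ increasingly downweights low-probability tokens.

```python
import torch
import torch.nn.functional as F

def probability_objective(logits: torch.Tensor, targets: torch.Tensor,
                          alpha: float = 0.0) -> torch.Tensor:
    """Token-level loss from the probability-based family sketched above.

    alpha == 0 gives standard NLL, -log p; alpha > 0 gives -(p**alpha)/alpha,
    whose gradient matches NLL in the alpha -> 0 limit and which increasingly
    downweights low-probability tokens as alpha grows (alpha = 1 is -p,
    alpha = 10 is -p^10 / 10). Shapes: logits (batch, seq, vocab),
    targets (batch, seq) of token ids. Illustrative sketch only.
    """
    log_p = F.log_softmax(logits, dim=-1)
    log_p_tok = log_p.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    if alpha == 0.0:
        return -log_p_tok.mean()                       # negative log likelihood
    return -log_p_tok.mul(alpha).exp().mean() / alpha  # prior-leaning variant

# Tiny usage example with random data.
logits = torch.randn(2, 5, 100)
targets = torch.randint(0, 100, (2, 5))
nll = probability_objective(logits, targets, alpha=0.0)
minus_p = probability_objective(logits, targets, alpha=1.0)  # the -p objective
```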
|
We revisit supervised fine-tuning (SFT) for large language models, introducing a model-capability continuum that shows negative log-likelihood is not universally optimal and characterizes when alternative objectives succeed or fail.
|
foundation or frontier models, including LLMs
|
https://openreview.net/pdf?id=lNcc1TypMd
| 2025-09-19T08:22:36
| 4
|
[
{
"id": "ZOgxijcz1o",
"forum": "lNcc1TypMd",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14671/Reviewer_FZpg",
"reviewer_name": "Reviewer_FZpg",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "LLMs are usually post trained using SFT, where the model is taught to reproduce a reference answer token by token using NLL loss. The authors argue that once a model has been pretrained, NLL is no longer universally optimal because the model already encodes strong priors and SFT supervision can be noisy or irrelevant. The paper introduces a general family of objective functions that work under different conditions (MS, MI, MW). They show that through this formulation, they improved performance across 14 benchmarks.",
"strengths": "* This paper introduces a general family of probability based objectives, it broadens the space of loss functions and connects NLL and accuracy as special cases.\n* The proposed idea of model capability continuum is neat, though the way to measure a models MS, MI and MW could be improved.",
"weaknesses": "* Experimental results focus on narrow domains (math, medical and puzzles) It would be good have results on some other general benchmarks (wild bench, arena hard, IF-eval, some code and agentic evals)\n* The continuum proposed by the paper relies on the mean predicted probability and pretraining coverage as proxies for prior strength. LLMs are often miscalibrated. Using a single scalar to rank tasks may overlook nuanced factors such as variance, entropy or distributional mismatch.\n* The paper does not study whether thresholding harms knowledge retention, fairness, or calibration.\n* The authors claim that RL‑inspired methods such as implicit reward learning, importance sampling and PPO‑style clipping are special cases of their prior leaning objectives. This should be backed with some empirical comparisons.",
"questions": "1. do you anticipate the same continuum behavior will hold for much larger LLMs (> 30B)? Could larger and more capable models potentially benefit even more from prior leaning objectives, or might new challenges (like optimization instability or diminished gains) arise at that scale?\n2. Have you considered using UQ methods as a more principled metric for assessing model capabilities. Such methods might better capture the epistemic vs aleatoric uncertainty and could help automate the classification of MS, MI and MW\n3. The experiments use a fixed threshold and show that training on the top 10 % of tokens yields strong improvements. How sensitive are the results to this choice?\n4. RL‑based methods such as RLHF, DPO, RPO and one‑token rollout also downweight low reward or low probability tokens by sampling. Could you provide a comparison between your probability based objectives and these RL approaches\n5. Does downweighting low‑probability tokens have any adverserial effects, like does it affect calibration or fairness?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T13:34:34",
"modification_date": "2025-11-12T13:24:01",
"review_url": "https://openreview.net/forum?id=lNcc1TypMd¬eId=ZOgxijcz1o",
"license": "CC BY 4.0"
},
{
"id": "v5EhTOET6P",
"forum": "lNcc1TypMd",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14671/Reviewer_ozBm",
"reviewer_name": "Reviewer_ozBm",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper argues that the standard negative log-likelihood (NLL) objective used in supervised fine-tuning (SFT) of large language models is not always the optimal choice during post-training. The authors observe that pretrained models already contain strong prior knowledge, and forcing them to imitate every supervision token can lead to overfitting and poor generalization. They introduce and study a broader family of probability-based training objectives that either emphasize or downweight low-probability tokens. Through experiments across multiple model sizes, datasets, and domains, they identify a “model-capability continuum”: in domains where the base model already has strong priors (e.g., math), objectives that downweight low-probability tokens (such as −p or thresholded −log p) outperform NLL. In domains where the model has weak priors (e.g., unseen puzzles), NLL performs better because it forces learning from unlikely tokens. In intermediate domains (e.g., medical reasoning), no objective clearly dominates. The paper further supports these findings with theoretical analysis showing how gradients and learning dynamics differ across capability regimes.",
"strengths": "The paper addresses an important and timely question in LLM post-training by re-examining the default SFT objective, which is usually taken for granted. The experimental evaluation is broad, covering multiple model families, diverse datasets, and capability levels, demonstrating the generality of the results. The conceptual introduction of a “model-capability continuum” provides an intuitive and practical framework for understanding when different objectives should be used. The empirical results are supported by gradient-based theoretical reasoning, which makes the findings more convincing. The paper has clear motivation, thorough ablations, and actionable insights for practitioners who want to improve fine-tuning outcomes.",
"weaknesses": "The classification of domains into “model-strong,” “model-intermediate,” and “model-weak” can feel somewhat heuristic and may not be straightforward to estimate for new tasks in practice. The proposed approach still requires manual selection of the objective based on the capability regime, and the paper does not yet provide an automated or adaptive method for doing this. While the theoretical explanation is suggestive, it relies on simplified assumptions and does not fully capture the complexity of real training dynamics. In some intermediate settings, the differences between objectives are small, which may limit the practical impact in many real-world SFT use cases. The paper also evaluates improvements mainly on reasoning-heavy tasks, so it is less clear how broadly the results generalize to conversational or stylistic alignment tasks.",
"questions": "The paper discusses a continuum from model-weak to model-strong domains, but the operationalization of this continuum is not fully specified. How should a practitioner determine where a new task sits on this continuum before training? Is there a quantitative diagnostic metric that can be computed prior to fine-tuning, rather than one derived from already trained or partially trained models?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T07:16:50",
"modification_date": "2025-11-12T13:24:02",
"review_url": "https://openreview.net/forum?id=lNcc1TypMd¬eId=v5EhTOET6P",
"license": "CC BY 4.0"
},
{
"id": "0Fbkn3XV6m",
"forum": "lNcc1TypMd",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14671/Reviewer_nHnp",
"reviewer_name": "Reviewer_nHnp",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper revisits the standard practice of supervised fine-tuning (SFT) for large language models by questioning the negative log-likelihood (NLL) training objective. Authors propose a family of probability-based objectives that generalize NLL (which is the limit as $α→0$). They find alternative objectives (example $-p$ or $-(p^{10})/10$, which downweight low probability tokens) can outperform NLL on certain tasks. The key contribution is identifying a \"model capability continuum\": when the base model is strong (already has high prior knowledge on the task), prior-leaning objectives (that trust the model's prior) yield better generalization than NLL. When the base model is weak, the NLL objective is better for learning from scratch. In intermediate capability settings, no single objective is consistently better. The authors run comprehensive experiments across 7 models, 14 benchmarks, and 3 domains, demonstrating up to 16% accuracy gains with prior-leaning losses on strong models, whereas NLL remains best on weaker models. A theoretical analysis is provided to explain the performance of objectives. The work provides a practical guidance to choose objectives based on current model capability to improve generalization.",
"strengths": "* **Novel Perspective:** The paper offers a new viewpoint by questioning the default use of NLL for fine-tuning large pre-trained models. It introduces the concept of a model capability continuum, which is a clear way to understand how a model's prior knowledge should influence training strategy.\n\n* **Thorough Empirical Validation:** The experimental evaluation is very comprehensive. The authors conduct tests on 7 different LLM backbones (of varying sizes and domains) and 14 benchmarks covering diverse tasks (math problem solving, medical question answering, logic puzzles, etc.). This breadth gives good credibility to the findings, the continuum pattern (prior-leaning losses excel with strong models, NLL excels with weak models) is consistently observed, not just a one off result. Significant performance gains (sometimes doubling accuracy) are achieved in few settings using the new proposed objectives.\n\n* **Theoretical Insight:** Beyond empirical results the paper provides a theoretical analysis that supports its claims. The authors derive conditions under which one objective will outperform another, and show that these conditions flip between the \"model strong\" and \"model weak\" ends of the spectrum. This adds a lot of weight to the work it's not just \"we tried this new loss and it worked\" but why it works is partly explained through a formal lens.\n\n* **Clarity and Context:** The paper is well written and not hard to follow. It motivates the problem clearly (highlighting how long chain-of-thought supervision and strong pretrained priors violate assumptions of NLL's optimality). It also contextualizes the work in the literature: for example it contrasts its approach with reinforcement learning from human feedback (RLHF) and other recent techniques like PPO-inspired fine-tuning, importance sampling in SFT, and selective data training.\n\n* **Significance:** The findings have notable implications for the community. If NLL is not universally optimal for post-training, this could prompt many researchers and practitioners to reconsider their fine-tuning procedures. The idea that one should \"lean on the model’s knowledge when it's strong, and override it when it’s weak\" is a valuable guideline.",
"weaknesses": "1. **Objective Adaptation in the Intermediate Regime.**\n The paper identifies that no single objective consistently works well in the model-intermediate regime, but does not propose a method to handle this case. This is a practical gap, since many real-world tasks likely fall in this zone.\n\n2. **Deciding Model Capability in Practice.**\n The framework relies on knowing whether a model is \"model-strong\" or \"model-weak\" on a task, but the paper does not provide a way to assess this ahead of time. The current categorization is done post hoc.\n\n3. **Forgetting on Prior Tasks.**\n The paper focuses on improving performance on new tasks during fine-tuning but does not study how different objectives affect retention of previously learned capabilities. This matters for applications where continual learning is important and accuracy needs to be high on the entire sequence of tasks being fine-tuned on.",
"questions": "1. Did the authors explore or consider adaptive objective schedules during training (example starting with NLL and transitioning to a prior-leaning loss)? If not what challenges do you expect to see in implementing such an approach?\n\n2. How should practitioners determine model capability before fine-tuning? Can simple metrics like zero-shot accuracy or mean token confidence be used reliably to choose the right objective?\n\n3. Did the authors measure or observe any differences in forgetting on prior capabilities when using prior-leaning objectives like $-p$ or $-p^{10}$ compared to NLL? Would you expect more or less forgetting in these cases?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T02:41:51",
"modification_date": "2025-11-12T13:24:02",
"review_url": "https://openreview.net/forum?id=lNcc1TypMd¬eId=0Fbkn3XV6m",
"license": "CC BY 4.0"
},
{
"id": "awZR5tdSWm",
"forum": "lNcc1TypMd",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14671/Reviewer_Fva5",
"reviewer_name": "Reviewer_Fva5",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The paper examines the effects of the negative log likelihood loss on various domains. The findings reveal that there is a relationship between the type of loss used and the model's priors on the task. Based on this, the paper proposes a model capability continuum as a way to formalize the spectrum of models' priors and their relationship to various probability-based learning priors. More specifically,y they find that models with weak priors on task tend to benefit more from NLL loss as compared to models with strong priors. This end of the spectrum benefits more from down-weighting low probability tokens. At the center, they find that no one objective function has a clear advantage.",
"strengths": "- The motivation is stated clearly in how SFT is applied to LLM alignment compared to classification training\n- Research questions are stated clearly \n- The paper lays out the theoretical background of the loss functions used for SFT and a generalized version of it\n\nMethod and Experiment:\n- Paper shows extensive experiments on the Model Strong and Model Moderate settings with a number of benchmarks\n- Ablation studies: The paper does extensive ablation on the high, low, and mid probability tokens' fine-tuning using various values of alpha",
"weaknesses": "Related work Depth:\n\t\n- The paper has not examined existing literature exploring alternatives to CE loss. [1, 2]\n\t\n- It would be great to get a comparison to this work and a more comprehensive literature review of the existing landscape of alternative loss functions\n\nMethod and Experiment:\n- The experiments done on mode weak are not extensive. The choice of benchmark for model weak is much more restrictive compared to the other 2 settings\n\n1. Entropic Distribution Matching in Supervised Fine‑tuning of LLMs: Less Over‑fitting and Better Diversity\n2. Computer Vision Losses for Large Language Model Fine‑Tuning",
"questions": "- Can the author discuss more about the connection with RL? This setting is similar to a policy with a strong prior setting in RL.\n- I would like to know if the results of model-weak still hold when using some other domain like coding, science, multi-lingual, etc (anyone could work).\n- For the model strong setting, it would be intresting to see results on a dataset that emphasizes knowledge memorization where the model has strong priors (Wikipedia, etc.).\n- Following on the previous question, does the proposed continuum still stand when we make a distinction of datasets that are reasoning/skills vs pure knowledge memorization? In other words is NLL loss sub-optimal choice for each class of the dataset when there strong prior and vice versa",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T11:59:46",
"modification_date": "2025-11-12T13:24:03",
"review_url": "https://openreview.net/forum?id=lNcc1TypMd¬eId=awZR5tdSWm",
"license": "CC BY 4.0"
}
] |
BZ1vutP53o
|
https://openreview.net/forum?id=BZ1vutP53o
|
TEN-DM: Topology-Enhanced Diffusion Model for Spatio-Temporal Event Prediction
| 4
| 3.666667
|
[
6,
2,
4
] |
[
3,
4,
4
] | 3
|
[
"Spatio-temporal point process",
"Diffusion model",
"Topological data analysis"
] |
Spatio-temporal point process (STPP) data appear in many domains. A natural way to model them is to describe how the instantaneous event rate varies over space and time given the observed history which enables interpretation, interaction detection, and forecasting. Traditional parametric kernel-based models, while historically dominant, struggle to capture complex nonlinear patterns. In contrast, deep learning methods leverage the representational power of neural networks to aggregate historical events and integrate spatio-temporal point processes. However, existing deep learning methods often process space and time independently, overlooking the spatio-temporal dependencies. To address this limitation, we propose a novel method called Topology-ENhanced Diffusion Model (TEN-DM), including two key components namely spatio-temporal graph construction and multimodal topological feature representation learning. Further, we use temporal query technique to effectively capture periodic temporal patterns for learning effective temporal representations. Extensive experiments show the effectiveness of TEN-DM on multiple STPP datasets compared to state-of-the-art methods.
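For concreteness, the "instantaneous event rate ... given the observed history" above is the standard conditional intensity of an STPP (a textbook definition, not notation specific to this paper):

$$\lambda\big(s, t \mid \mathcal{H}_t\big) \;=\; \lim_{\Delta t \to 0,\ \Delta s \to 0} \frac{\mathbb{E}\big[\,N\big([t, t+\Delta t) \times B(s, \Delta s)\big) \mid \mathcal{H}_t\,\big]}{\Delta t \,\lvert B(s, \Delta s)\rvert},$$

where $N(\cdot)$ counts events, $B(s, \Delta s)$ is a ball of radius $\Delta s$ around location $s$, and $\mathcal{H}_t$ is the history of events before time $t$.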
|
learning on time series and dynamical systems
|
https://openreview.net/pdf?id=BZ1vutP53o
| 2025-09-19T14:14:20
| 3
|
[
{
"id": "kl72bsgome",
"forum": "BZ1vutP53o",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16261/Reviewer_rwpn",
"reviewer_name": "Reviewer_rwpn",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper introduces Topology-ENhanced Diffusion Model (TEN-DM) for predicting future events in spatio-temporal point process (STPP) data.\n\nThe central problem the authors address is that existing methods often fail to capture the complex, higher-order dependencies between the spatial and temporal dimensions. Their solution is a conditional diffusion model where the conditioning signal is a sophisticated, fused representation of the event history. It fuses Spatio-Temporal Graph, Spatial information, temporal information and Topological Learning (converts the STPP data into a time-series of 2D images). This fused embedding guides the diffusion model’s denoising process to accurately predict both the time and location of the next event.\n\nExperiments on five real-world datasets (e.g., earthquakes, crime, COVID-19) show that TEN-DM achieves state-of-the-art performance",
"strengths": "The motivation is straightforward: conditioning a spatio-temporal generative model on more available information generally improves performance. The approach proves highly effective, achieving top results across five diverse real-world datasets and outperforming 17 baselines.",
"weaknesses": "The proposed pipeline is exceptionally complex. It involves GNN pre-training, data-to-image conversion, multi-scale cubical zigzag persistence computation (which is notoriously expensive), a CNN on persistence images, and a conditional diffusion model. This complexity may make the model impractical.\n\nThe conversion of event data to a 2D image (Section 3.2) is a critical step, but it is underspecified and potentially lossy. The paper states, \"we rasterize the events geo-coordinates onto the 2D image by recording as each pixel's value the associated temporal attribute.\" But What happens if two or more events fall into the same grid and same time patch? How to choose the grid resolution?\n\nThe GCL module feels less developed than the TTL module. The graph is described as a similarity-based $\\epsilon$-graph (using cosine similarity), which is known to be highly sensitive to the choice of the threshold $\\epsilon$ (or $\\mathbb{R}^r$ in the paper). The mechanism by which the aggregation weights $\\{\\alpha_r\\}$ are \"updated adaptively\" is not explained",
"questions": "Could the author please clarify the rasterization process in Section 3.2?\n\nWhy did you choose the image-based cubical complex representation?\n\nIn Section 3.1, how are the adjacency matrix aggregation weights $\\{\\alpha_r\\}$ \"updated adaptively\"? Are these learnable parameters optimized end-to-end, or set via a separate process?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T15:56:40",
"modification_date": "2025-11-12T13:46:50",
"review_url": "https://openreview.net/forum?id=BZ1vutP53o¬eId=kl72bsgome",
"license": "CC BY 4.0"
},
{
"id": "6bPYQbAd05",
"forum": "BZ1vutP53o",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16261/Reviewer_Wz4A",
"reviewer_name": "Reviewer_Wz4A",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper presents TEN-DM, a Topology-Enhanced Diffusion Model for spatio-temporal point process (STPP) prediction. The authors aim to address the limitations of existing deep learning models in capturing complex, non-stationary spatio-temporal dependencies, especially under sparse and noisy conditions. TEN-DM introduces three core components: (i) a graph construction and learning module to model event interactions, (ii) a temporal topological learning (TTL) framework based on zigzag persistence to extract dynamic topological features from time-series images, and (iii) a temporal query-guided self-attention (TQ-SA) mechanism to capture periodic patterns. The model is evaluated on five real-world datasets and outperforms the baselines in both spatial and temporal prediction tasks.",
"strengths": "- Originality: This is the first work to integrate zigzag persistence and diffusion models for STPP forecasting. The use of topological data analysis (TDA) in the form of zigzag persistence images (ZPI) to capture time-evolving shape patterns is novel and well-motivated. The idea of converting STPP data into image time-series and analyzing their topological evolution is creative and technically sound.\n\n- Technical Quality: The paper is mathematically rigorous, with formal definitions of cubical complexes, filtrations, and zigzag persistence. The stability theorem (Theorem 3.2) provides theoretical grounding for the robustness of ZPI under noise. The Lipschitz bound for the proposed attention mechanism (TST-MHA) further demonstrates the model’s theoretical controllability.\n\n- Clarity: Despite the complexity of the methodology, the paper is well-organized and clearly written. Each module is introduced with both intuition and formalization. Figures (e.g., Fig. 1, Fig. 2) effectively illustrate the pipeline and help readers understand the workflow.\n\n- Significance: The paper addresses a real-world problem, forecasting discrete events in space and time (e.g., earthquakes, crimes, disease outbreaks), and proposes a unified, interpretable, and robust solution. The integration of geometry, topology, and diffusion opens a new research direction in spatio-temporal modeling, especially for sparse and irregular data.",
"weaknesses": "- Scalability and Efficiency Concerns: While the model is effective on small-scale datasets (e.g., ~10K events), its scalability to large-scale urban data (e.g., millions of taxi trips or tweets) is unclear. The zigzag persistence computation and image rasterization steps may become prohibitively expensive for high-resolution or long-duration data. A complexity breakdown or runtime scaling analysis is missing.\n\n- Limited Ablation on Topological Hyperparameters: The paper does not thoroughly explore the sensitivity of ZPI parameters, such as filtration resolution, patch size, or zigzag directionality. The multi-scale mixing uses fixed weights (βq = 0.25), but adaptive or learned weighting could be more effective. An ablation on these choices would strengthen the contribution.\n\n- Generalization Across Domains: Although the model is tested on five datasets, they are all from the US or Japan, and mostly urban or seismic events. There is no evaluation on human mobility, social media, or climate events, which are also common STPP scenarios. A cross-domain generalization test would better support the claim of universality.\n\n- Baseline Diversity: While 17 baselines are included, few recent graph-based or transformer-based STPP models are missing. Also, no comparison with other TDA-based methods is provided. This limits the completeness of the empirical evaluation.",
"questions": "- Scalability: How does TEN-DM scale to larger datasets (e.g., >1M events)? What is the time and memory complexity of ZPI generation and TTL module with respect to image resolution and sequence length?\n\n- Topological Sensitivity: How does the model performance change with different filtration functions, patch sizes, or zigzag directions? Have you tried adaptive weighting for multi-scale ZPI fusion?\n\n- Cross-domain Evaluation: Have you tested TEN-DM on non-urban, non-seismic data, such as animal movement, social media check-ins, or climate anomalies? How general is the topological assumption?\n\n- Comparison with TDA Baselines: Why not compare with other TDA-enhanced models? This would better highlight the unique value of zigzag persistence.\n\n- Real-time Forecasting: Is TEN-DM suitable for real-time or online forecasting? Can the ZPI and TTL modules be incrementally updated as new events arrive?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T19:05:16",
"modification_date": "2025-11-12T13:46:51",
"review_url": "https://openreview.net/forum?id=BZ1vutP53o¬eId=6bPYQbAd05",
"license": "CC BY 4.0"
},
{
"id": "z3u9mQoHuk",
"forum": "BZ1vutP53o",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16261/Reviewer_UFd9",
"reviewer_name": "Reviewer_UFd9",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes a TEN-DM framework for modeling Spatio-Temporal Point Processes (STPPs). The method integrates three key components: a Graph Construction and Learning (GCL) module to represent STPP data as graphs, a Temporal Topological Learning (TTL) framework using zigzag persistence to capture evolving topological features, and a Temporal Query-guided Self-Attention (TQ-SA) mechanism. The authors combine these into a diffusion model to learn complex spatio-temporal dependencies, particularly under sparse and noisy regimes. Extensive experiments on five real-world datasets are presented with promising results.",
"strengths": "Strengths:\n\n- The paper proposes several advanced paradigms, such as the graph neural networks and diffusion models, into a single framework for STPP modeling.\n\n- The paper provides a thorough experimental section, benchmarking TEN-DM against a set of baselines across multiple real-world datasets. \n\n- The proposed model is decomposed into clear, modular components, making the architecture easy to understand.",
"weaknesses": "Weaknesses:\n\n- One of the main issues of this paper is the lack of motivation and problem analysis. Although there are some discussions for the choices of the key components, the justifications are somewhat insufficient. For example, the rationale for using graph abstraction is \"graph abstraction offers a flexible... framework\" and \"never been used\". This does not articulate what specific challenges or limitations exist in current STPP methods.\n\n- Similarly, the motivation for using diffusion models is that they are \"a new powerful machinery\" and have not been applied to STPPs before, which is somewhat insufficient. Thus, I suggest that to provide a theoretical analysis to show that under what conditions the proposed method is better than the previous methods.\n\n- Furthermore, it would be better if it could provide some case studies to illustrate the key increment and its intuition compared with other STTP methods.",
"questions": "See the weaknesses above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T10:05:17",
"modification_date": "2025-11-12T13:46:51",
"review_url": "https://openreview.net/forum?id=BZ1vutP53o¬eId=z3u9mQoHuk",
"license": "CC BY 4.0"
}
] |
|
yirunib8l8
|
https://openreview.net/forum?id=yirunib8l8
|
Depth Anything 3: Recovering the Visual Space from Any Views
| 7
| 3.5
|
[
8,
8,
6,
6
] |
[
3,
4,
4,
3
] | 4
|
[
"Depth Estimation"
] |
We present Depth Anything 3 (DA3), a model that predicts spatially consistent geometry from an arbitrary number of visual inputs, with or without known camera poses.
In pursuit of minimal modeling, DA3 yields two key insights:
a single plain transformer (e.g., vanilla DINOv2 encoder) is sufficient as a backbone without architectural specialization, and a singular depth-ray prediction target obviates the need for complex multi-task learning. Through our teacher-student training paradigm, the model achieves a level of detail and generalization on par with Depth Anything 2 (DA2).
We establish a new visual geometry benchmark covering camera pose estimation, any-view geometry and visual rendering. On this benchmark, DA3 sets a new state-of-the-art across all tasks, surpassing prior SOTA VGGT by an average of 35.7\% in camera pose accuracy and 23.6\% in geometric accuracy. Moreover, it outperforms DA2 in monocular depth estimation. All models are trained exclusively on public academic datasets.
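To make the depth-ray prediction target concrete, here is a minimal NumPy sketch (our variable names and pinhole conventions, not the paper's code) of how a per-pixel ray map together with a depth map determines a world-space point map:

```python
import numpy as np

def rays_from_camera(K, R, t, H, W):
    """Per-pixel rays for a pinhole camera with extrinsics (R, t) mapping
    world to camera: the shared origin is the camera center -R^T t, and the
    direction for pixel p = [u, v, 1]^T is R^T K^{-1} p (left unnormalized,
    so its camera-frame z-component is 1). Illustrative sketch only.
    """
    u, v = np.meshgrid(np.arange(W), np.arange(H))             # (H, W) each
    pix = np.stack([u, v, np.ones_like(u)], -1).astype(float)  # (H, W, 3)
    dirs_cam = pix @ np.linalg.inv(K).T                        # K^{-1} p per pixel
    dirs_world = dirs_cam @ R                                  # row-wise R^T d
    origin = np.broadcast_to(-R.T @ t, (H, W, 3))              # camera center
    return origin, dirs_world

def points_from_depth_and_rays(depth, origin, dirs):
    """World-space point map P = o + D * d, where D is per-pixel z-depth
    (valid because the ray directions are z-normalized in camera frame)."""
    return origin + depth[..., None] * dirs

# Tiny usage example: identity rotation, camera at the world origin.
K = np.array([[500.0, 0.0, 32.0], [0.0, 500.0, 24.0], [0.0, 0.0, 1.0]])
origin, dirs = rays_from_camera(K, np.eye(3), np.zeros(3), H=48, W=64)
points = points_from_depth_and_rays(np.full((48, 64), 2.0), origin, dirs)
```

This also makes explicit why a point map is essentially a combination of depth and ray maps, a point raised in Reviewer 4H8r's questions below.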
|
Depth Anything 3 uses a single vanilla DINOv2 transformer to take arbitrary input views and outputs consistent depth and ray maps, delivering leading pose, geometry, and visual rendering performance.
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=yirunib8l8
| 2025-09-12T02:22:07
| 4
|
[
{
"id": "88WiRwkmUt",
"forum": "yirunib8l8",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4157/Reviewer_xgar",
"reviewer_name": "Reviewer_xgar",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper presents Depth Anything 3, a single model that unifies geometric understanding across any number of views. The method jointly predicts a depth map and a ray map, using a ViT backbone with input adaptive cross view attention. A Dual DPT head shares reassembly modules and branches only at the final fusion stage to jointly infer depth and rays. Experiments across diverse benchmarks show consistent state of the art results in pose estimation, geometric reconstruction, and feed forward novel view synthesis, demonstrating strong accuracy and efficiency.",
"strengths": "1. This paper presents a thoughtful analysis of what modalities are truly necessary for strong vision understanding tasks. It argues that depth together with a ray map is a minimal and sufficient target set. The ablation in Table 5 convincingly supports this claim by outperforming alternatives. Although recent work MapAnything also discusses incorporating ray maps into a unified representation, it is a contemporaneous work and does not needed to be considered here. \n\n2. The Dual DPT head is well designed. By sharing reassembly modules and branching only at the final fusion stage, the approach enforces pixel level alignment while avoiding redundant representations, which benefits both accuracy and efficiency. \n\n3. The experimental study is extensive and persuasive. The method is validated across pose estimation, geometric reconstruction, and feed forward novel view synthesis, consistently achieving SOTA results.",
"weaknesses": "I did not find any major weaknesses. While I know recent advances in this area, I am not fully confident about all technical nuances and distinctions among closely related methods. I am open to perspectives from other reviewers and will continue to track the discussion.",
"questions": "I wonder whether Depth Anything 3 is the most suitable title. The previous work is called Depth Anything v2, so if the intention is to follow the series it would be better to use Depth Anything v3 for consistency. Additionally, both v1 and v2 focus primarily on depth prediction. The current title can easily be read as another improvement targeted at depth prediction. It may be worth considering an alternative title that more clearly conveys the contribution..",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T21:51:06",
"modification_date": "2025-11-12T11:13:22",
"review_url": "https://openreview.net/forum?id=yirunib8l8¬eId=88WiRwkmUt",
"license": "CC BY 4.0"
},
{
"id": "LVNsYgomgc",
"forum": "yirunib8l8",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4157/Reviewer_vvhC",
"reviewer_name": "Reviewer_vvhC",
"rating": 8,
"confidence": 4,
"soundness": 4,
"contribution": 3,
"presentation": 4,
"summary": "This paper introduces Depth Anything 3. Different from Depth Anything and Depth Anything v2 that can only work with single image, Depth Anything 3 is able to process any number of images. Different from previous methods such as DUSt3R and VGGT, Depth Anything 3 simplify the architecture, making it more scalable to numerous images. In addition, a teacher-student paradigm is used to provide high-quality data. Depth Anything 3 achieves state-of-the-art performance in various tasks, including pose estimation, 3D reconstruction, and rendering.",
"strengths": "1. The architecture of Depth Anything 3 is simpler than previous methods. Depth Anything 3 uses a single vision transformer, while previous methods typically use vision transformer and following self- & cross-attention. Input-adaptive self-attention is used in vision transformer to enable cross-view attention without introducing new attention layers. With a simpler structure, Depth Anything 3 is able to process more images, which is meaningful for the future research. \n\n2. Extensive and thorough evaluation. Performance of pose estimation, 3D reconstruction, and rendering are thoroughly evaluated, where Depth Anything 3 achieves state-of-the-art performance. \n\n3. Ablation study of depth-ray representation shows that it explicitly outperforms previous representations, e.g. depth+pcd+cam used by VGGT.",
"weaknesses": "1. In Table 1 and Table 2, I recommend adding some state-of-the-art methods that are not feed-forward models. This can help the readers have a better understanding of the performance difference between different methods. For example, classical pipelines generally outperform feed-forward models in 3D reconstruction.\n\n2. If the teacher is not used, would the performance degrade explicitly? Currently, I am not sure if the mainly improvement is from the powerful teacher.",
"questions": "1. L142: the equation looks wrong. $P$ denotes the 3D point in world coordinate frame, $D_i(u,v) K_i^{-1} p$ denotes the 3D point in camera local frame. To make the equation correct, $R_i, t_i$ should represent camera pose (transformation from camera to world), instead of extrinsics (world to camera). \n\n2. Sec. 2.4: Is GS-DPT head the only optimizable module, i.e. backbone is fixed?\n\n3. L967: On ETH3D, is the tolerance 0.25 meter? Could the authors provide individual performance on each scene since the scale of scene vary a lot?\n\n4. Typo:\n * L823: a “identity” camera -> an “identity” camera",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T20:00:34",
"modification_date": "2025-11-12T11:13:22",
"review_url": "https://openreview.net/forum?id=yirunib8l8¬eId=LVNsYgomgc",
"license": "CC BY 4.0"
},
{
"id": "lDUXJ3Wugl",
"forum": "yirunib8l8",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4157/Reviewer_4H8r",
"reviewer_name": "Reviewer_4H8r",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The authors address the problem of 3D geometry estimation. They argue that depth and ray predictions constitute the minimal set of 3D predictions necessary for rendering 3D geometry, demonstrating this is the optimal choice. They extend existing transformer architectures by introducing a cross-view interaction transformer layer to handle multi-view inputs. Their method achieves significantly superior performance compared to existing models in both pose estimation and geometry estimation.",
"strengths": "- The paper utilized depth and ray map representations to enable full 3D reconstruction from an arbitrary number of input images.\n- Discovered an effective architecture design that outperforms previous methods while requiring minimal modifications to DINOv2.\n- The paper demonstrates their model's effectiveness across various experimental settings.",
"weaknesses": "- **Unclear advantage of depth+ray over point map:** To my knowledge, point maps can effectively represent various 3D information such as depth and pose, and a point map is essentially a combination of depth and ray maps. However, Table 5 shows that point maps hurt pose accuracy. What is the reason for this performance degradation? This finding appears to contradict the ablation study in VGGT, which argues that point map accuracy increases with multimodal outputs. I would like to see a more comprehensive analysis explaining why the combination of ray and depth maps outperforms point maps.\n- **Missing ablation studies with point maps:** I am curious about additional experiments in Table 5 calculate the metrics using point map representations not using ray map and depth map. Specifically, what are the results when training with: (1) point maps only, and (2) point maps combined with ray maps and depth maps?\n- **Pixel-wise ray map origin justification:** I understand that the origin of the ray map is identical for each image. Is there a specific reason to set the ray map origin in a pixel-wise manner? What is the benefit of this design choice?\n- **Distinction between camera head and ray predictions:** What is the precise difference between the camera head and ray predictions? I am also curious about the performance gap between these two approaches. To my knowledge, camera parameters can generate ray maps, and conversely, ray maps can also be used to estimate camera parameters. Could you clarify the relationship and trade-offs between these representations?",
"questions": "Please see the weakness section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T18:13:14",
"modification_date": "2025-11-12T11:13:23",
"review_url": "https://openreview.net/forum?id=yirunib8l8¬eId=lDUXJ3Wugl",
"license": "CC BY 4.0"
},
{
"id": "DbbRKpAX0D",
"forum": "yirunib8l8",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4157/Reviewer_Yy63",
"reviewer_name": "Reviewer_Yy63",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper presents Depth Anything 3 (DA3), which learns to predict dense depth maps and ray maps for estimating 3D geometry and camera poses. DA3 focuses in simplifying geometric understanding both from its model architecture, and also by using depth ray as representation for prediction. The architecture simply adopts a pre-trained DINOv2, which has been powerful in various 2D and 3D tasks, and rearranges the tokens for certain layers to compute the full attention for image tokens across different frames, allowing the information exchange between views. Furthermore, DA3 predicts camera rays instead of point maps, which consists of the camera origin and the direction for each of the pixels, and thoroughly demonstrates that camera rays serve as better representations compared to point clouds.",
"strengths": "1. The paper shares an interesting finding that understanding 3D geometry can be done in a simplistic manner, especially without specialized architecture for incorporating multiple views or enforcing geometric constraints. Despite that it has been a recent trend for applying more and more generic architecture in 3D tasks, it is very intriguing to see that it could be done within a pre-trained DINOv2 by tweaking some of its attention layers.\n\n2. The paper formulates a novel framework for predicting camera rays, which is the ray connecting the camera origin and pixels in the image plane. The results and the ablations thoroughly show that the combination of depth and camera ray suffices for effectively understanding 3D geometry, \n\n3. The proposed method establishes state-of-the-art performance across various geometry tasks, while having similar parameter counts. Furthermore, the method also shows strength in efficiency thanks to its simplistic architecture with having only few layers for cross-view understanding.\n\n4. The paper proposes a benchmark for evaluating visual geometry, and introduces HiRoom dataset which the authors will release in the future.",
"weaknesses": "1. Some of the claims are not well justified, or seems to be overclaiming in some points.\n- L.157-158 (While point maps are insufficient to ensure consistency, redundant targets can improve pose accuracy but often introduce entanglement that compromises it.) What does the authors intend with \"entanglement\"? Does this mean that the results deduced from different heads could be problematic (e.g. pose from pose prediction head v. pose from point maps)? The authors should better elaborate the problems of \"redundant\" prediction to better establish their motivation for predicting camera rays.\n- The camera head, despite being optional, is specified to have 0.48B parameters for the Giant variant. Despite the authors show that the computation is negligible as it only has few tokens, having a camera head that has nearly half of the parameter count of the backbone does not seem \"lightweight\".\n\n2. It is a bit unclear on why depth+ray is more effective compared to point maps. To the reviewer's understanding, depth and ray actually seems like decoupling the point map prediction from Dust3r into two separate predictions. Why would this be better, asides from the empirical results?\n\n3. Although the teacher model seems crucial, the ablations seems to be missing.",
"questions": "1. Considering that all pixels from the same camera should have identical origins, it is interesting to see that all of the pixels are required to predict the origin in a dense manner, in addition to the ray direction. Is this simply a design choice, or does this have impact on training? It would also be interesting to see the variance of the predicted origin across each pixels within a single image, and study whether averaging all of the pixels for the pose estimation strategy is helpful.\n\n2. The idea for the architecture seems to share some ideas with ViTDet[1], as it modifies existing pre-trained ViTs to alternate between local/global attentions. However, it is interesting to see that limiting the blocks for cross-view attention was more effective for DA3, as opposed to the results from ViTDet, apart from the obvious efficiency gains. Could the authors provide further analysis on why Full Alt. performs worse? Could it be from the gap between the pre-training and the downstream task, where drastically modifying all of the layers within DINOv2 to all handle cross-view attention be problematic?\n\n[1] Li, Yanghao, et al. \"Exploring plain vision transformer backbones for object detection.\" European conference on computer vision. Cham: Springer Nature Switzerland, 2022.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T13:12:39",
"modification_date": "2025-11-12T11:13:23",
"review_url": "https://openreview.net/forum?id=yirunib8l8¬eId=DbbRKpAX0D",
"license": "CC BY 4.0"
}
] |
DDaaA4Uldp
|
https://openreview.net/forum?id=DDaaA4Uldp
|
XTransfer: Modality-Agnostic Few-Shot Model Transfer for Human Sensing at the Edge
| 4
| 3.5
|
[
4,
2,
6,
4
] |
[
3,
3,
4,
4
] | 4
|
[
"Human Sensing",
"Cross-Modality Few-Shot Model Transfer",
"Edge AI"
] |
Deep learning for human sensing on edge systems presents significant potential for smart applications. However, its training and development are hindered by the limited availability of sensor data and resource constraints of edge systems. While transferring pre-trained models to different sensing applications is promising, existing methods often require extensive sensor data and computational resources, resulting in high costs and poor adaptability in practice. In this paper, we propose XTransfer, a first-of-its-kind method enabling modality-agnostic, few-shot model transfer with resource-efficient design. XTransfer flexibly uses single or multiple pre-trained models and transfers knowledge across different modalities by (i) model repairing that safely mitigates modality shift by adapting pre-trained layers with only few sensor data, and (ii) layer recombining that efficiently searches and recombines layers of interest from source models in a layer-wise manner to create compact models. We benchmark various baselines across diverse human sensing datasets spanning different modalities. Comprehensive results demonstrate that XTransfer achieves state-of-the-art performance while significantly reducing the costs of sensor data collection, model training, and edge deployment.
|
This paper proposes a pioneering and scalable method that enables modality-agnostic few-shot model transfer for advancing human sensing on edge systems.
|
transfer learning, meta learning, and lifelong learning
|
https://openreview.net/pdf?id=DDaaA4Uldp
| 2025-09-18T22:42:35
| 4
|
[
{
"id": "R0qel6PfsU",
"forum": "DDaaA4Uldp",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12243/Reviewer_BTLu",
"reviewer_name": "Reviewer_BTLu",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes XTransfer, addressesing the data scarcity and resource constraints of human sensing on edge devices by enabling modality-agnostic, few-shot model transfer. It repurposes pre-trained models for diverse sensing modalities using very few labeled sensor samples. Its core pipeline includes two key components: (1) Model Repairing via a Splice-Repair-Removal (SRR) pipeline—aligning latent feature distributions across modalities; (2) Layer Recombining via Layer-Wise Search (LWS) control—selecting and recombining only useful repaired layers to build compact models. \nExperiments on 8 source datasets (image/text/audio/sensing) and 7 target datasets show XTransfer outperforms SOTA baselines",
"strengths": "- Novel Modality-Agnostic Paradigm: Unlike prior transfer methods (limited to same-modality or paired cross-modal data), XTransfer achieves transferring knowledge from image/text pre-trained models to sensing modalities with few labeled data. This setting resolves the high cost of sensing data collection and leverages public pre-trained models as \"free\" knowledge sources.\n\n- Theoretically Grounded and Empirically Valid Method Mechanism: The SRR pipeline’s design (PCA orthogonal space, anchor-based loss, class pairing) is justified by Transformer layer dynamics. It effectively mitigates modality shift as evidenced by experiments.\n\n- Resource-Efficient Design for Edge Deployment: LWS control’s layer selection and pre-search check reduce model size by 2.4–16.5× in FLOPs vs. source backbones, while maintaining SOTA accuracy. On edge devices, latency is cut by 1.4–21×, making it practical for resource-constrained human sensing.",
"weaknesses": "- Dependence on PCA for Feature Alignment: XTransfer relies on linear PCA to reduce dimensionality and align features. However, the concern is that PCA fails to capture non-linear relationships between source and target modalities (e.g., text embeddings vs. Doppler radar signals), which may limit performance in highly dissimilar cross-modality scenarios (e.g., text → ECG). \n\n- Brittleness in Extremely Low-Shot Settings: While XTransfer performs well in 5–10-shot scenarios, it struggles with 3-shot settings—e.g., accuracy lags the oracle baseline on HHAR/Gesture datasets. This raises concerns for ultra-scarcity sensing tasks (e.g., rare medical conditions).\n\n- Homogeneous Source Model Assumption: The framework assumes pre-trained source models have homogeneous architectures (e.g., all ResNet variants). Extending to heterogeneous backbones is not fully validated—layer recombination across structurally diverse models (e.g., CNN vs. Transformer) may break MMC shift estimation and layer-wise dependence, limiting scalability to multi-modal source pools.\n\n- the writing can be further improved for clarity. Too many abbreviated terms,such as MMC, may weakean readability.",
"questions": "- How would XTransfer perform with non-linear feature alignment methods? \n\n- Can XTransfer be extended to ultra-low-shot (1–2-shot) scenarios?\n\n- How does XTransfer handle heterogeneous source models?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T23:42:03",
"modification_date": "2025-11-12T12:53:03",
"review_url": "https://openreview.net/forum?id=DDaaA4Uldp¬eId=R0qel6PfsU",
"license": "CC BY 4.0"
},
{
"id": "mh8BNC8asi",
"forum": "DDaaA4Uldp",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12243/Reviewer_aQ89",
"reviewer_name": "Reviewer_aQ89",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This paper introduces XTransfer, a cross-modal adaptation framework bridging pretrained models and sensing applications through layer manipulations. The authors propose two components, a Spliece-Repair-Removal pipeline that adapts pretrained layers to new sensor modalities using limited sensor data, and a Layer Wise Search that recombines effective layers for a compact, efficient model. The paper conducts thorough evaluations on 8 datasets and shows improved performance and resource efficiency under limited-sample scenarios. The appendix also contains ablation studies to understand the significance of the components proposed.",
"strengths": "1. The attempt to reuse pre-trained models from heterogeneous modalities such as images and text to accelerate sensor-domain adaptation is novel. \n2. Extending few-shot learning to a modality-agnostic context is novel and has great applicability.\n3. Evaluation across multiple modalities and domains is comprehensive.\n4. Overall the motivation is strong and the proposed method outperforms the baselines",
"weaknesses": "- The paper needs significant work on the presentation for better clarity. Some examples below:\n - The term channels used during the removal and repair stages was not clarified and could be misleading, given its context in signal processing.\n - Preliminary motivation is unclear. Figure 3 shows relationships among MMC, accuracy, and other metrics, but fails to specify the details like the models and domains. Moreover, sensing as a modality remains underspecified. Baselines plotted in Figure 3 are never introduced until later sections.\n - The methods proposed (SRR and LWS) modules are described very densely with poor structures with little intuition or top-down explanation. Figures are overcrowded and fail to clearly depict information flow across stages and are very far away from where it is referenced.\n- The authors should also compare self-supervised methods. Current work seems to be evaluating on supervised pretrained source models. However, self-supervised models already show great generalizability and cross-domain transfer capabilities. This could improve the impact of the work.\n- The authors reported the training-time statistics but do not discuss the convergence rate, especially given the inclusion of a generator-based repair module. The paper does not compare convergence speed with standard SSL or linear-probe methods, which makes it uncertain whether the proposed system actually converges faster or simply trains less data per step.\n- The LWS module is described as a search process for selecting effective layers over NAS, but it lacks a comparison against any established search or pruning methods. So it is hard to determine the significance of prior search works.\n- The newest baselines, SemiCMT, seem to be a self-supervised cross-modal alignment framework that would require paired data. It is confusing how SemiCMT was trained given there is no cross-modal pairs between source and target domains. It is unclear how the baselines are trained for fair comparison\n- It is unclear on the exact number of samples used for each source dataset, reporting only the number of classes and input shapes. Since source data scale strongly affects transfer quality, it is unclear on the cross-modality transfer performance, since image source datasets usually have a larger scale and are likely to have higher transfer performance compared to other modality source datasets. So it i s unclear on the validity of the conclusion in 6.2 Impact of different sources.\n- Most of the baselines are relatively old (19 - 22), the most recent baselines are SemiCMT which was designed for cross-modal alignment that requires multimodal pair and GPT2 which is a generative model not suitable for the downstream classification.",
"questions": "Please see the weakness for most of the concerns. Some questions for authors to discuss are:\n- Differences between area A and area B trends (where MMC correlates differently with accuracy) are not explained. Can authors provide more clarification on this?\n- Since SemiCMT requires multimodal pairing, how is it adapted to the unpaired cross-domain case where source and target modalities differ completely?\n- What are the scales of the dataset in terms of number of samples?\n- Is there any established search or pruning algorithms (e.g., NAS, lottery-ticket, or L2-pruning) used for comparison?\n- Most of the time the target domain might have more than just 10 samples per class, what happens when there are more target domain samples, would XTransfer still have the competing performance?\n- Can authors elaborate more on comparison against SSL finetune with additional input head and downstream head for cross-modal and cross-domain adaptation?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T09:18:30",
"modification_date": "2025-11-12T12:53:04",
"review_url": "https://openreview.net/forum?id=DDaaA4Uldp¬eId=mh8BNC8asi",
"license": "CC BY 4.0"
},
{
"id": "ikrWWatQpM",
"forum": "DDaaA4Uldp",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12243/Reviewer_yVS9",
"reviewer_name": "Reviewer_yVS9",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This paper introduces a modality-agnostic few-shot transfer learning framework tailored for resource-constrained human-sensing applications on edge devices.\nThe method leverages pre-trained models as sources and combines layer repairing to mitigate modality shift with layer-wise recombination to select only beneficial layers, thereby producing compact, efficient models.\nThe authors evaluated the proposed method across several sensing datasets and showed state-of-the-art accuracy while reducing sensor data requirements, training cost, and deployment resource overhead.",
"strengths": "- The paper identifies a timely and well-motivated challenge in human-sensing systems and tackles few-shot cross-modality transfer on resource-constrained edge devices.\n- The proposed XTransfer framework integrates a structured SRR pipeline for modality repair with a principled layer-wise recombination strategy. The design addresses both representation alignment and parameter efficiency, demonstrating a thoughtful mechanism for reusing heterogeneous pre-trained models.\n- The study evaluates the approach across multiple sensing modalities, diverse benchmarks, and real edge-device settings. Results consistently show improvements in accuracy-resource trade-offs and training efficiency, providing convincing evidence of the method's scalability and practicality for deployment.",
"weaknesses": "- The paper would benefit from a discussion of failure cases, sensitivity to noisy or highly heterogeneous sensor data, and robustness under severe domain shifts\n- The SRR and layer-wise search procedures introduce methodological complexity, and the paper does not fully quantify the tuning burden, search overhead under diverse hardware constraints, or potential stability issues when scaling to larger sets of heterogeneous source models.\n- The evaluation focuses primarily on cross-modality human-sensing tasks, and comparison to broader transfer paradigms (e.g., recent foundation-model or prompt-based adaptation techniques) is limited.",
"questions": "See weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T04:18:26",
"modification_date": "2025-11-12T12:53:04",
"review_url": "https://openreview.net/forum?id=DDaaA4Uldp¬eId=ikrWWatQpM",
"license": "CC BY 4.0"
},
{
"id": "1Lv4JwoKes",
"forum": "DDaaA4Uldp",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12243/Reviewer_F1Vc",
"reviewer_name": "Reviewer_F1Vc",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces a framework for modality-agnostic, few-shot model transfer tailored for human sensing on edge devices. The core contributions are a splice repair removal pipeline that mitigates modality shift by aligning latent feature distributions of target sensor data with pre-trained source models using an anchor-based loss in a reduced PCA space, and a layer wise search mechanism that efficiently searches and recombines useful layers from single or multiple source models to construct a compact, high-performance target model.",
"strengths": "Tackles a practical problem at the intersection of few-shot learning, cross-modal transfer, and edge AI. The proposed method's ambition is to leverage readily available pre-trained models from vastly different modalities (e.g., image, text) for specialized sensing tasks. The quality of experimental evaluation is good.",
"weaknesses": "The method's reliance on mean magnitude of channels and its s-score as the primary metric for guiding layer repairing and selection feels under-justified. While it is presented as a lightweight metric, its suitability for capturing feature discriminability across drastically different modalities (e.g., vision to IMU) is not intuitively clear, and a more thorough justification or comparison against other feature distribution metrics (e.g., MMD) would strengthen this core design choice. \n\nThe proposed approach is also quite complex, involving multiple components, stages, and hyperparameters (e.g., PCA dimensionality, search parameters), which could pose challenges for reproducibility and practical implementation. \n\nFinally, LWS could face scalability issues as the number and depth of source models increase and the robustness of the proposed $rate^{est}$ model for the pre-search check is not fully explored, especially in highly dissimilar transfer settings.",
"questions": "The \"Model Repairing\" component centers on aligning feature spaces to minimize MMC shift. Could you elaborate on the intuition for using a channel-magnitude metric like MMC for cross-modality transfer, where feature representations are fundamentally different? Have you experimented with alternative feature alignment techniques, such as adversarial alignment or MMD, and how do they compare?\n\nThe LWS recombines layers sequentially from source models. How does this mechanism handle non-sequential architectural elements, such as the skip connections in ResNet architectures? Are these connections discarded, or does your method have a way to preserve or reconstruct them in the final compact model?\n\nThe pre-search check's efficiency depends on an exponential growth model for the repair rate. How was this specific model form chosen, and how robust is the search process if the actual repair rate for a given source-target pair deviates a lot from assumed exponential trend?\n\nTable 4 shows that in the challenging 3-shot setting, XTransfer's accuracy is slightly below the oracle baseline on some tasks. Does this point to a fundamental limit on the minimum data required for stable alignment, and could this gap be closed by integrating few-shot data augmentation techniques into the SRR pipeline?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T03:41:28",
"modification_date": "2025-11-12T12:53:05",
"review_url": "https://openreview.net/forum?id=DDaaA4Uldp¬eId=1Lv4JwoKes",
"license": "CC BY 4.0"
}
] |
YM3SskmtCE
|
https://openreview.net/forum?id=YM3SskmtCE
|
ATTS: Asynchronous Test-Time Scaling via Conformal Prediction
| 6
| 3
|
[
8,
8,
2
] |
[
3,
2,
4
] | 3
|
[
"Conformal Prediction",
"Test-Time Scaling",
"Speculative Decoding"
] |
Large language models (LLMs) benefit from test-time scaling but are often hampered by high inference latency. Speculative decoding is a natural way to accelerate the scaling process; however, scaling along both the parallel and sequential dimensions poses significant challenges, including substantial memory-bound execution and synchronization overhead. We introduce *ATTS* (Asynchronous Test-Time Scaling), a statistically guaranteed adaptive scaling framework that follows the hypothesis testing process to address these challenges. By revisiting arithmetic intensity, *ATTS* identifies synchronization as the primary bottleneck. It enables asynchronous inference through online calibration and proposes an ordinal classification algorithm that supports a three-stage rejection sampling pipeline, scaling along both the sequential and parallel axes. Across experiments on the MATH, AMC23, AIME24, and AIME25 datasets and across multiple draft–target model families, we show that *ATTS* delivers up to *56.7x* speedup in test-time scaling and a *4.14x* throughput improvement, while maintaining accurate control of the rejection rate, reducing latency and memory overhead, and incurring no accuracy loss. By scaling both in parallel and sequential dimensions, we enable the 1.5B/70B draft/target model combination to achieve the performance of the state-of-the-art reasoning model o3-mini (high) on the AIME dataset. We submit the anonymous repository: anonymous.4open.science/r/Asynchronous-Test-Time-Scaling-5940.
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=YM3SskmtCE
| 2025-09-18T14:52:30
| 10
|
[
{
"id": "HqbK5DjFzW",
"forum": "YM3SskmtCE",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10643/Reviewer_LsxV",
"reviewer_name": "Reviewer_LsxV",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The authors propose ATTS (Asynchronous Test-Time Scaling), designed to alleviate the memory peak and synchronization overhead issues in speculative decoding under parallel and sequential test-time scaling settings.\n\nThe authors characterize system bottlenecks using asynchronous arithmetic intensity, explicitly incorporating synchronization time into the performance metric. The authors find that they synchronous time is the main bottleneck. Therefore, ATTS adopts conformal prediction: it computes a p-value for each candidate and compares it with a threshold α to decide acceptance or rejection, instead of using global softmax and ranking which needs synchronous. This provides statistical control over the target model intervention rate without requiring distributional assumptions.\n\nBuilding on this principle, ATTS implements a three-stage rejection sampling pipeline that scales along both the parallel and sequential axes. It further employs online calibration and ordered decision-making to avoid global synchronization.\n\nExperiments on MATH, AMC23, AIME24/25, and various draft/target model combinations show that ATTS achieves up to 56.7× end-to-end acceleration and 4.14× throughput improvement without sacrificing accuracy, while significantly reducing peak memory usage and latency. Notably, on the AIME benchmark, a 1.5B draft / 70B target setup reaches performance close to that of strong closed-source reasoning models.\n\nThe main limitations lie in the sensitivity to online calibration quality and the choice of α, the lack of guarantees for global optimality in each round, and the evaluation focus on mathematical reasoning and specific inference engine configurations.",
"strengths": "1. This paper addresses a highly important and practical problem in inference infrastructure, which has significant implications for optimizing the efficiency of model deployment.\n2. The authors introduced a new performance metric to analyze specific bottlenecks, which guided the subsequent algorithm design.\n3. The paper is well-structured and easy to understand.\n4. The proposed method achieves a significant improvement in efficiency while maintaining stable performance.",
"weaknesses": "1. The proposed method heavily relies on the stability of online calibration. The rejection rate control and accuracy are quite sensitive to the calibration distribution, the number of parallel samples, and the value of α.\n2. The proposed method does not aim for globally optimal candidates and cannot guarantee obtaining the top-1 candidate in each round, which may be suboptimal for applications that strictly require optimal ranking.",
"questions": "How can online calibration maintain long-term stability of the rejection rate under non-stationary workloads?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T19:03:54",
"modification_date": "2025-11-12T12:31:36",
"review_url": "https://openreview.net/forum?id=YM3SskmtCE¬eId=HqbK5DjFzW",
"license": "CC BY 4.0"
},
{
"id": "UOgwHeUIyp",
"forum": "YM3SskmtCE",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10643/Reviewer_KbTN",
"reviewer_name": "Reviewer_KbTN",
"rating": 8,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes ATTS, an asynchronous test-time scaling framework for large language models (LLMs) that addresses high inference latency and memory bottlenecks in test-time scaling. By redefining asynchronous arithmetic intensity to identify synchronization as the key bottleneck, integrating online calibration with conformal prediction, and designing a three-stage rejection sampling pipeline, ATTS enables both sequential and parallel scaling. Experiments on MATH, AMC23, AIME24/25 datasets show up to 56.7x speedup, 4.14x throughput improvement, and no accuracy loss—even allowing 1.5B/70B draft/target models to match o3-mini (high) performance on AIME.",
"strengths": "1. The introduction of \"asynchronous arithmetic intensity\" effectively quantifies synchronization overhead, a critical but understudied issue in test-time scaling.\n2. Leveraging conformal prediction for ranking and rejection sampling ensures controlled rejection rates and coverage (marginal/conditional), adding theoretical rigor absent in many efficiency-focused works.\n3. Significant speedup and throughput gains across model families (e.g., Qwen, DeepSeek, Llama) and challenging math benchmarks demonstrate real-world utility.",
"weaknesses": "1. ATTS frames sequential scaling (increasing turns) as a strength, but experiments only test up to 10 turns. The paper fails to analyze when sequential scaling stops adding value—e.g., whether 20+ turns lead to accuracy plateaus while inflating token costs. \n2. The evaluation on more challnaging benchmarks would make the results more robust\n3. While conformal prediction enables statistical guarantees for rejection sampling, the paper overlooks a critical tension: its p-value calculation (Eq.13) requires comparing test samples to the full calibration set. For large batch sizes (128+), this comparison could introduce hidden per-sample latency—undermining ATTS’s core goal of reducing overhead.",
"questions": "See weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T17:34:40",
"modification_date": "2025-11-12T12:31:37",
"review_url": "https://openreview.net/forum?id=YM3SskmtCE¬eId=UOgwHeUIyp",
"license": "CC BY 4.0"
},
{
"id": "tZ5UXbkVkq",
"forum": "YM3SskmtCE",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10643/Reviewer_ct8q",
"reviewer_name": "Reviewer_ct8q",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes ATTS (Asynchronous Test-Time Scaling) to address the high latency problem in large model inference. The authors introduce the concept of asynchronous arithmetic intensity to measure the ratio between the time spent sampling candidates and the time waiting for the large model to verify them, showing that as the number of samples increases, the cost from waiting for verification becomes more significant. The method leverages ideas from conformal prediction to reduce the candidate set that requires verification by the large model, thereby lowering compute costs. The resulting ATTS framework applies conformal prediction for theoretically lossless speedup of test-time inference.",
"strengths": "The paper provides valuable insights into how to reduce the time cost of test-time scaling (TTS), especially through the lens of asynchronous arithmetic intensity.\n\nEffectively applies conformal prediction to minimize the workload of the large model during inference, and demonstrates both theoretical and practical speedup.\n\nThe experimental section covers several representative math reasoning tasks, and the results are fairly comprehensive for the considered setting.",
"weaknesses": "The method is not empirically compared against other prominent recent approaches in the field, such as SpecReason and Speculative Thinking, making it difficult to objectively evaluate the unique advantages of ATTS.\n\nExperiments suggest that when the small and large models are trained on data from different distributions, the benefits of ATTS decrease noticeably, highlighting a potential issue: the method does not fundamentally address the extra rejection cost caused by distribution mismatch between the small and large models.\n\nTheoretical guarantees rely on key assumptions from conformal prediction (e.g., exchangeability and normalization of scores). However, the implementation omits score normalization, which may break these assumptions and cause the rejection rate control to fail, especially in multi-task or distribution-shifted settings.",
"questions": "1.Validity of theoretical assumptions\nThe omission of normalization in the implementation may break the core assumptions of conformal prediction, potentially leading to excessive rejections for some tasks and too few for others in mixed or real-world scenarios. How does the method ensure fairness and validity of the rejection rate when handling multi-task or large-scale deployments?\n\n2.Lack of comparison with strong baselines\nThe experimental section does not compare ATTS with other competitive methods such as SpecReason or Speculative Thinking. What are the practical or theoretical advantages of ATTS over these methods, and is there any empirical evidence of its superiority or unique use cases?\n\n3.System/engineering considerations\nThe paper does not discuss the impact of modern inference engine features, such as supporting asynchronous verification requests. Can ATTS be integrated with inference backends that support asynchronous calls, and is there any experimental evidence for further gains from such integration?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T15:35:16",
"modification_date": "2025-11-12T12:31:37",
"review_url": "https://openreview.net/forum?id=YM3SskmtCE¬eId=tZ5UXbkVkq",
"license": "CC BY 4.0"
}
] |
|
MnQD69han5
|
https://openreview.net/forum?id=MnQD69han5
|
DFVEdit: Conditional Delta Flow Vector for Zero-shot Video Editing
| 4
| 3.5
|
[
4,
4,
4,
4
] |
[
3,
4,
3,
4
] | 4
|
[
"zeroshot",
"video editing",
"traning free",
"video transformer"
] |
The advent of Video Diffusion Transformers (Video DiTs) marks a milestone in video generation. However, directly applying existing video editing methods to Video DiTs often incurs substantial computational overhead, due to resource-intensive attention modification or fine-tuning. To alleviate this problem, we present DFVEdit, an efficient zero-shot video editing method tailored for Video DiTs. DFVEdit eliminates the need for both attention engineering and fine-tuning by directly operating on clean latents via flow transformation. To be more specific, we observe that editing and sampling can be unified under the continuous flow perspective. Building upon this foundation, we propose the Conditional Delta Flow Vector (CDFV) -- a theoretically unbiased estimation of DFV -- and integrate Implicit Cross Attention (ICA) guidance as well as Embedding Reinforcement (ER) to further enhance editing quality. DFVEdit excels in practical efficiency, offering at least 20x inference speed-up and 85% memory reduction on Video DiTs compared to attention-engineering-based editing methods. Extensive quantitative and qualitative experiments demonstrate that DFVEdit can be seamlessly applied to popular Video DiTs (e.g., CogVideoX and Wan2.1), attaining state-of-the-art performance on structural fidelity, spatial-temporal consistency, and editing quality.
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=MnQD69han5
| 2025-09-19T10:03:48
| 4
|
[
{
"id": "igcn3kldv7",
"forum": "MnQD69han5",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15070/Reviewer_eh2p",
"reviewer_name": "Reviewer_eh2p",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The authors provide DFVEdit for zero-shot video editing method for Video Diffusion Transformers that bypasses the computational overhead of attention modification and fine-tuning. It operates directly on latents by unifying editing and sampling through a continuous flow perspective and using the Conditional Delta Flow Vector (CDFV). The authors additionally provide Implicit Cross-Attention Guidance for masking latents and Target Embedding Reinforcement to amplify editing text embeddings.",
"strengths": "- The method is model-agnostic, model-training free and significantly outperforms existing methods in terms of computational efficiency.\n- The authors conduct comprehensive comparisons and ablation studies.",
"weaknesses": "- Please use the correct citation format (e.g., \\cite, \\citep).\n- Can the authors provide pseudo code or a simplified algorithm section for CONDITIONAL DELTA FLOW VECTOR?\n- Please clarify which parts of the 3.1 UNIFIED CONTINUOUS FLOW PERSPECTIVE ON SAMPLING AND EDITING are the contribution of the paper. For example, L192-194 are already widely known to the community (e.g., Diffusion Meets Flow Matching: Two Sides of the Same Coin, 2024).\n- Are the baseline methods employing all the same backbone model? It's not surprising that T2V methods are outperforming T2I-based video edting methods. \n- Except for SDEdit baseline, it appears only the proposed method is deployed on CogVideoX and Wan models, which are far better models than the baseline methods use. Can this be evaluated as a fair comparison? \n- The details on base model should appear in the main paper, no in supplementary sections.\n- If the method binarizes cross attention section from full-attention map, and applies masking to prevent editing unintended region, how does this enable flexible editing of global attribute or style edting? \n- In L299-300, the authors state 'We observe that in 3D Full-Attention, the effect of text embeddings diminishes as frame length increases.', can the authors show this phenomenon both qualitatively and quantitatively? Isn't it attributed to deviating from training distribution for the number of frames? I am quite not convinced the extensive number of frames actually have correlation with the effect of text embeddings. Can the authors provide theoretical ground for this?\n- Can the authors provide similar re-weighting methods for image/video editing that have similar philosphy as the 'Target Embedding Reinforcement'.",
"questions": "Please address the weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T09:58:54",
"modification_date": "2025-11-12T13:29:51",
"review_url": "https://openreview.net/forum?id=MnQD69han5¬eId=igcn3kldv7",
"license": "CC BY 4.0"
},
{
"id": "tyiUt8BjrP",
"forum": "MnQD69han5",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15070/Reviewer_KCoA",
"reviewer_name": "Reviewer_KCoA",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces DFVEdit, a novel zero-shot video editing method specifically tailored for modern Video Diffusion Transformers (Video DiTs). The core contribution is a flow-transformation framework that operates directly on latents, bypassing the need for computationally expensive attention modification or model finetuning. By unifying the editing and sampling processes under a continuous flow perspective, the method proposes a Conditional Delta Flow Vector (CDFV) to estimate the transformation from the source to the target video. This approach, enhanced by Implicit Cross-Attention (ICA) guidance and Embedding Reinforcement (ER), achieves state-of-the-art results in fidelity and consistency while offering speed-up and memory reduction.",
"strengths": "-The primary strength lies in its conceptual shift away from attention engineering. Instead of manipulating the internal query/key/value matrices, the method cleverly reframes editing as a continuous flow transformation in the latent space. Conditional Delta Flow Vector (CDFV) as a theoretically-backed estimate of the \"delta\" between the source and target latents is an clever thing that directly helps the use of Video DiTs. \n\n- The paper is well-written and clearly structured. The motivation for the work is well established in the first figure.\n\n- The authors provide strong motivation for their work, focusing on the critical need for an efficient editing solution for Video DiTs. The evaluation is thorough, using a well-chosen suite of metrics to measure distinct properties: CLIP-F for temporal consistency, E_warp for motion fidelity, M.PSNR and LPIPS for fidelity and background preservation, and CLIP-T for prompt alignment. This comprehensive quantitative and qualitative analysis, along with a user study, makes a convincing case for the method's superiority.",
"weaknesses": "- The reported CLIP-F score of 0.9924 (Table 1) is exceptionally high. While this is presented as a strength, a score this close to 1.0 could imply that the edited frames are almost identical to each other, suggesting the CLIP-F metric may not be sensitive enough to detect subtle, fine-grained changes or potential flickering. It seems unlikely for a video to be meaningfully edited and still retain this level of inter-frame similarity unless the video was already static. It would be beneficial for the authors to provide a more detailed, one-by-one results breakdown for the evaluation dataset, or at least discuss this high score and why it doesn't indicate a lack of meaningful editing.\n\n- The comparison setup could be strengthened. The paper's method (DFVEdit) is applied to Video DiT backbones (CogVideoX-5B, Wan2.1-14B). However, many of the baselines (e.g., FateZero, TokenFlow, VideoDirector) are evaluated on their original, often U-Net-based backbones like Stable Diffusion 1.5 (as noted in Appendix D.5 and Table T2). While the paper does test extending some baselines to CogVideoX (Fig 1b) to show they are computationally infeasible, the primary qualitative and quantitative comparisons are between methods running on different underlying generative models. Text-to-Video (T2V) models or methods specifically designed for Video DiTs would serve as a more direct and fair comparison group to truly isolate the contribution of the editing method (DFVEdit) versus the power of the backbone model (Video DiTs).\n\n- The paper does not clearly state the total size of the evaluation dataset used for the main quantitative results in Table 1. Appendix D.1 mentions using the public DAVIS 2017 dataset and Pexels videos. Appendix D.2 mentions \"10 DAVIS videos, 30 diverse prompts\" for the M.PSNR metric, and Appendix D.3 mentions \"80 video-prompt pairs\" for the user study. However, the total number of videos used to calculate CLIP-F, E_warp, LPIPS, and CLIP-T in Table 1 is not specified. Clarifying the scale of the automated evaluation would help in assessing the robustness of the reported quantitative claims.",
"questions": "Please see weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T04:17:09",
"modification_date": "2025-11-12T13:29:53",
"review_url": "https://openreview.net/forum?id=MnQD69han5¬eId=tyiUt8BjrP",
"license": "CC BY 4.0"
},
{
"id": "QuSW0nrXos",
"forum": "MnQD69han5",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15070/Reviewer_7GLJ",
"reviewer_name": "Reviewer_7GLJ",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper uses pre-trained diffusion or flow-matching-based video text-to-video models and introduces a new video-editing method. This is again done with a flow-based formulation, which is derived with similar methods to the original flow matching. Lots of qualitative and quantitative evaluations show the methods good performance.",
"strengths": "Paper gives a method for using both pre-trained diffusion and fm T2V-models for video editing that is well grounded in the theory on flow matching/generative modelling.\nThe resulting videos look to be good-looking and are competitive with other competing methods.\nTraining-free scheme is great for plug-n-play and comparison of different T2V models.\n\nWriting is, apart from the issue with the citations, mostly clear and concise and easy to follow.",
"weaknesses": "ISSUE SAM: Unclear when, if ever, SAM masks are used\n- Manuscript (Fig. 1) says that they can be used optionally\n- Section C.3 says that it may be used for multi-object editing, such as in Figure F10.\n- it never becomes clear, when exactly SAM masks are used. Ist just states them to be \"optional\"\n- SAM masks are never really compared in their effectiveness to the ICA approach\n- no principled experiment really evaluates their usage\n\n\nISSUE EMBEDRF: Difference between DFVEdit and DFVEdit w/o EmbedRF\n- only negligibly small gains are observed for EmbedRF. Does EmbedRF really help that much? Can we be sure the results are robust? Were several runs conducted?\n- together with table T4 it becomes apparent, that using EmbedRF does not do anything. The gains lie within the standard deviation for all reported metrics, except for CLIP-T\n- is the only real \"benefit\" to make the videos look more stylized? This can probably not be shown by any metric\n\n\n\nMajor Presentation Issue:\n- None of the citations (with \\cite) flow properly with the text and have been used with no regard to the ICLR template. Many if not most should have been put in parantheses. This destroys the flow of reading the paper in a major way!\n\nMore Minor Presentation Issues:\n - Figure F9: the prompt mentions \"a man doing moonwolk\", but the example given here is the video of the bear. The entire figure seems misplaced or wrong?\n\n - Typos:\n - line 185/186: Wrong verb conjugation: \"f(x,t) is the drift coefficient corresponds to...\"\n - wrong equation linked in line 211. Eq. 17 is in the appendix. Do you mean eq. 5?\n - wrong equation linked in line 217. Eq. 31 is in the appendix. Do you mean eq. 6?\n - line 248: sentence weird?: \"if we set winner process of Z_0 and \\hat{Z}_0 is equal, then ....\"\n - inconsistent references: some have no dates (see Cong et al. or Yang et al.)\n - line 1099: \"We thank the reviewer for this suggestion\". This is a first draft at ICLR. Any mentions of previous reviewing-iterations at other venues should not be in the manuscript anymore!",
"questions": "Regarding ISSUE SAM:\n1) When exactly are SAM masks used? Why was SAM not evaluated as an actual alternative masking scheme, when it is described as an important, yet optional, part of the method?\n\nRegarding ISSUE EMBEDRF:\n2) Why was this kept, when performance gains were easily within one standard deviation?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T18:10:57",
"modification_date": "2025-11-12T13:29:54",
"review_url": "https://openreview.net/forum?id=MnQD69han5¬eId=QuSW0nrXos",
"license": "CC BY 4.0"
},
{
"id": "zJQIjEVMEQ",
"forum": "MnQD69han5",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15070/Reviewer_sFW3",
"reviewer_name": "Reviewer_sFW3",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces DFVEdit, a training-free zero-shot video editing framework for Video Diffusion Transformers (Video DiTs). Unlike attention-engineering-based approaches that require high memory and computational cost, DFVEdit reformulates the editing process under a continuous flow perspective and directly manipulates latent representations instead of attention maps. The key innovation is the Conditional Delta Flow Vector (CDFV), an unbiased estimator of the latent flow difference between source and target prompts. The model further integrates Implicit Cross-Attention (ICA) and Embedding Reinforcement (ER) to enhance spatiotemporal coherence and prompt fidelity. Experiments on CogVideoX and Wan2.1 demonstrate strong editing fidelity, temporal consistency, and 20×–85% efficiency gains over previous zero-shot methods.",
"strengths": "1. Unification of diffusion sampling and video editing under continuous flow dynamics.\n2. Training-free and efficient, avoiding attention engineering with major gains in VRAM and latency.\n3. CDFV formulation provides theoretical grounding for latent editing dynamics.\n4. Extensive experiments on multiple base models (CogVideoX, Wan2.1) showing generalization and scalability.",
"weaknesses": "1. Theoretical assumptions remain unverified.\nThe paper claims the CDFV to be an “unbiased” flow estimator, yet does not empirically validate this property or its convergence stability. It would strengthen the work to provide quantitative analysis or synthetic experiments demonstrating the unbiasedness.\n\n2. Limited novelty beyond integration.\nAlthough the continuous-flow formulation is elegant, it largely reformulates existing diffusion mathematics rather than proposing fundamentally new theory. The contribution lies mainly in design efficiency and practical deployment.\n\n3. Partial dependence on implicit cross-attention.\nThe ICA module still indirectly relies on attention maps (albeit derived from full attention). Thus, the “attention-free” claim is somewhat overstated—memory reduction is significant but not absolute.\n\n4. Limited evaluation scope.\nThe dataset is mostly limited to DAVIS2017 and short Pexels clips; long videos or open-domain scenes are not tested. It remains unclear whether DFVEdit maintains temporal stability across longer durations or higher frame rates.\n\n5. Comparative fairness concerns.\nSome baselines (e.g., FateZero, TokenFlow) were directly re-implemented on CogVideoX without architecture adaptation, which might favor DFVEdit’s efficiency comparisons. A discussion of reimplementation fairness would improve transparency.\n\n6. Minor clarity issues.\nThe mathematical derivations (Eqs. 8–13) could benefit from clearer notation and better linking between theoretical constructs (e.g., Δvₜ vs ∇log P(·, t)).",
"questions": "1. How sensitive is DFVEdit to inaccuracies in flow estimation or text embedding alignment?\n2. Does ICA extraction from 3D full attention introduce any temporal artifacts when editing long sequences?\n3. Could DFVEdit extend naturally to multimodal or audio-conditioned video editing?\n4. What are the runtime and memory trade-offs between CogVideoX and Wan2.1 backbones?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-24T22:25:12",
"modification_date": "2025-11-12T13:29:55",
"review_url": "https://openreview.net/forum?id=MnQD69han5¬eId=zJQIjEVMEQ",
"license": "CC BY 4.0"
}
] |
|
bl9hFm04Lc
|
https://openreview.net/forum?id=bl9hFm04Lc
|
Can AI Truly Represent Your Voice in Deliberations? A Comprehensive Study of Large-Scale Opinion Aggregation with LLMs
| 5
| 3.5
|
[
6,
4,
4,
6
] |
[
4,
4,
3,
3
] | 4
|
[
"Human Study; Reliable LLM; Public Deliberation; Computational Social Science; Large-Scale Evaluation"
] |
Large-scale public deliberations generate thousands of free-form contributions that must be synthesized into representative and neutral summaries for policy use. While LLMs have been shown as a promising tool to generate summaries for large-scale deliberations, they also risk underrepresenting minority perspectives and exhibiting bias with respect to the input order, raising fairness concerns in high-stakes contexts. Studying and fixing these issues requires a comprehensive evaluation at a large scale, yet current practice often relies on LLMs as judges, which show weak alignment with human judgments. To address this, we present DeliberationBank, a large-scale human-grounded dataset with (1) opinion data spanning ten deliberation questions created by 3,000 participants and (2) summary judgment data annotated by 4,500 participants across four dimensions (representativeness, informativeness, neutrality, policy approval). Using these datasets, we train DeliberationJudge, a fine-tuned DeBERTa model that can rate deliberation summaries from individual perspectives. DeliberationJudge is more efficient and more aligned with human judgements compared to a wide range of LLM judges. With DeliberationJudge, we evaluate 15+ LLMs and reveal persistent weaknesses in deliberation summarization, especially underrepresentation of minority positions. Our framework provides a scalable and reliable way to evaluate deliberation summarization, helping ensure AI systems are more representative and equitable for policymaking.
|
alignment, fairness, safety, privacy, and societal considerations
|
https://openreview.net/pdf?id=bl9hFm04Lc
| 2025-09-04T00:20:50
| 4
|
[
{
"id": "RKsweVwgMJ",
"forum": "bl9hFm04Lc",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1768/Reviewer_GiWi",
"reviewer_name": "Reviewer_GiWi",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The authors address the problem of evaluating large-scale public deliberation summarization, in which LLMs tend to underrepresent minority opinions and exhibit biases. They introduce DELIBERATIONBANK, a large-scale, human-grounded benchmark comprising 3,000 free-form opinions on ten deliberation questions and 4,500 human annotations evaluating summaries on four dimensions: Representativeness, Informativeness, Neutrality, and Policy Approval. They fine-tune a DeBERTa-based model called DELIBERATIONJUDGE, which achieves much stronger alignment with human judgmentsUsing DELIBERATIONJUDGE, they benchmark 18 LLMs and conduct detailed analyses of performance factors.",
"strengths": "The paper is well-structured and adresses an important and underexplored area: fairness and representation in deliberative AI summarization for policy contexts. The authors provide a comprehensive benchmark which is large, well-structured, and grounded in human annotations. The fine-tuned DeBERTa model achieves impressive correlation with human ratings, outperforming all general-purpose LLM judges. The authors evaluate 18 diverse models across four human-centric criteria.",
"weaknesses": "Small Comments:\n- in Figure 1 under \"Judge Model Training\" it says \"Indivisual Opinion\" instead of \"Individual Opinion\"\n- 143: two periods at the end of the sentence",
"questions": "Did you try other judges (e.g., RoBERTa, Longformer, or Mistral 7B) to assess whether DeBERTa is uniquely suited?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T04:49:24",
"modification_date": "2025-11-12T10:51:23",
"review_url": "https://openreview.net/forum?id=bl9hFm04Lc¬eId=RKsweVwgMJ",
"license": "CC BY 4.0"
},
{
"id": "GL6vDrJKSB",
"forum": "bl9hFm04Lc",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1768/Reviewer_Dtym",
"reviewer_name": "Reviewer_Dtym",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 4,
"presentation": 2,
"summary": "The main contribution is to train DeliberationJudge, a fine-tuned DeBERTa model that, given a LLM-generated summary of opinions on a topic, and a specific individual's opinion on that topic, scores on various dimensions the extent to which that individual's opinion is captured by the summary. The training dataset is based on a large-scale dataset of human opinions and ratings for 10 different topics. In Section 3.3, they show that DeliberationJudge outperforms out-of-the-box LLM-based approaches. In Section 4, they use DeliberationJudge as a primitive to measure the abilities of different LLMs to write summaries of peoples' opinions on the various topics. Finally, Section 5 highlights that minority opinions may not always be captured by these summaries.",
"strengths": "S1. The dataset collected, DeliberationBank, is extremely comprehensive, involving thousands of participants' opinions, ratings, and comparisons. It's also valuable that a broad array of different topics are considered.\n\nS2. The general research approach of training and validating DeliberationJudge, and then using it as a primitive to evaluate LLM-generated summaries, is sound.",
"weaknesses": "W1. The list of the paper's contributions says (Ln 106-7): \"We conduct a rigorous and comprehensive study of LLM summarizers that surfaces systematic biases (e.g., minority-stance under-coverage, order/verbosity sensitivity)\" but I do see not any analysis of order or verbosity sensitivity in the paper. I can only find a mention in connection to Figure 25 in the appendix, but Figure 25 does not directly appear to establish any relevant claims regarding order or verbosity bias. \n\nW2. For the study on minority opinions, it is unclear the extent to which participants' self-assessment of their minority status is accurate. The analysis would be more convincing if it instead directly determined whether a participant holds a minority opinion (this should at least be possible for \"Tarrif Policy\", a binary question). \n\nW3. It would be helpful to put these contributions in context with the closely related literature on LLM-assisted deliberation that does not appear to be cited, most notably \"Fine-tuning language models to find agreement among humans with diverse preferences\" (2022) and \"Generative Social Choice\" (2023). Both papers build systems that fill a similar role to DeliberationJudge: in the first paper it is the reward model, and in the second paper it is the discriminative query. And then both papers use these systems to evaluate LLM-generated statements. \n\nW4. In terms of presentation, many aspects of the experimental design are explained using what I would view as an unnecessary amount of mathematical notation. To give one example, the equation on Line 141-2 strikes me as unnecessary. The prose would be clearer if things were described more directly.",
"questions": "Q1. Judge performance in Figure 3 and 4 is reported in terms of correlation coefficient, which can be difficult to interpret. (For example, systems that systematically give wildly over- or underestimates would still get perfect scores.) What do the results look like if instead L1 accuracy is reported? \n\nQ2. In Figure 23, since the order of summaries is randomized, why are the histograms not symmetric about the midpoint? Does this have implications about the reliability of the human annotaters? \n\nQ3. Does DeliberationJudge outperform a few-shot baseline (particularly with a strong LLM such as GPT-5)?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T04:31:05",
"modification_date": "2025-11-12T10:51:24",
"review_url": "https://openreview.net/forum?id=bl9hFm04Lc¬eId=GL6vDrJKSB",
"license": "CC BY 4.0"
},
{
"id": "EWfzATjmDI",
"forum": "bl9hFm04Lc",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1768/Reviewer_wcNP",
"reviewer_name": "Reviewer_wcNP",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This paper presents DELIBERATIONBANK, a large-scale dataset for studying large language models (LLMs) in the context of public deliberation summarisation, and introduces DELIBERATIONJUDGE, a fine-tuned DeBERTa model designed to evaluate summaries across four human-centered dimensions: representativeness, informativeness, neutrality, and policy approval. The dataset combines 3,000 free-form opinions from ten societal deliberation topics with 4,500 human annotations rating summaries generated by 18 different LLMs. Using this benchmark, the authors analyse systematic weaknesses in deliberation summarisation (e.g., underrepresentation of minority perspectives, input-order sensitivity) and demonstrate that their fine-tuned judge model achieves higher correlation with human judgments and greater efficiency compared to general-purpose LLM evaluators.",
"strengths": "* The creation of DELIBERATIONBANK fills an important gap in large-scale, human-grounded evaluation of deliberation summarisation, a topic with high social and policy relevance.\n* The evaluation across 18 LLMs, multiple topics, and controlled input scales provides a thorough empirical picture of current model capabilities and weaknesses.\n- The study explicitly examines minority opinion coverage, a dimension rarely addressed in summarisation benchmarks, adding ethical and societal depth.\n* The human evaluation design (rating + comparison tasks) is clear, systematic, and well-documented, yielding high-quality supervision data.\n- The paper is well-written and easy to follow, with clear figures and a well-structured argument.",
"weaknesses": "* The main modelling component, DELIBERATIONJUDGE, is a straightforward fine-tuning of DeBERTa with a regression head and Huber loss. While practical, it introduces no methodological innovation beyond supervised fine-tuning. It would be good if the authors could highlight generalisable technical components of their work that might be useful for the community.\n\n- The train/test split (random 80/20) likely leads to substantial overlap in topics, question types, and summarisation styles, so the model’s generalisation to unseen deliberations or unseen LLM summarisers is unclear. Other political debate datasets such as X-Stance (Vamvas et al. 2020) make clear distinctions between topics and questions, i.e., topics in train are not in test. This does not become apparent from this work.\n \n- The study identifies fairness and minority-representation gaps but does not probe why these biases arise or how to mitigate them, which limits the conceptual insight.\n\n- The use of a fine-tuned DeBERTa judge resembles prior “LLM-as-a-judge” work, differing mainly by domain rather than by technique.",
"questions": "1. How distinct are the train and test sets in terms of deliberation topics and summarisation models? Can the authors show that the topics handled in train and test are genuinely different and train is not leaking information to test?\n \n2. Did the authors evaluate performance on held-out questions or unseen summarisers to test generalisation beyond the training distribution?\n \n3. Could the approach be extended to explicitly model deliberative diversity (e.g., stance-conditioned evaluation or contrastive objectives)?\n \n4. What interpretability analyses, if any, were conducted to understand what linguistic or semantic cues DeliberationJudge relies on?\n \n5. Beyond efficiency, how does the judge perform when used for ranking or selection of summaries, rather than scoring them in isolation?\n\n\n\nI find this paper interesting and valuable as a dataset and benchmark contribution, but technically limited in terms of modelling innovation. The empirical analysis is careful and socially relevant, but the methodological core (a DeBERTa fine-tune) does not offer new ideas or generalisable techniques. I recognise the value of a deliberation dataset, therefore I would currently rate it as **4 (Borderline / Reject)** but would increase my score if the authors strengthen the framing as a dataset/benchmark paper and clarify the out-of-distribution evaluation.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T19:07:16",
"modification_date": "2025-11-12T10:51:24",
"review_url": "https://openreview.net/forum?id=bl9hFm04Lc¬eId=EWfzATjmDI",
"license": "CC BY 4.0"
},
{
"id": "QyWU3Sl5v3",
"forum": "bl9hFm04Lc",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1768/Reviewer_pDNN",
"reviewer_name": "Reviewer_pDNN",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "LLMs should theoretically be capable of supporting deliberation by summarizing discussions in ways that are appropriate for use by policymakers, but no large-scale evaluations exist to benchmark their ability to do so. In response to this, the authors introduce DeliberationBank, a large-scale human-grounded dataset of crowdworker opinions and summary evaluations. They then train automatic evaluators on this dataset, finding that they comfortably outperform zero-shot LLM-as-a-judge, and use it to assess LLM summarization performance.",
"strengths": "- The problem space is well-motivated, and their contributions (the dataset DeliberationBank and the DeliberationJudge model) are highly useful contributions to this subfield. In particular, the large-scale data collection will be very useful once published.\n- Well-designed experiments make a strong case that off-the-shelf LLMs are insufficient as-is for the summarization task due to limited neutrality and representativeness. These are supplemented by detailed order analysis.\n- Thorough ablations for the judge design - it’s interesting that DeBERTa does better even than more recent LLMs.",
"weaknesses": "- Limited discussion of interannotator agreement when collecting human judgments. Many of the annotation dimensions seem highly subjective, and it would be useful to verify that there is high IAA between crowdworker judgments to verify that their judgments can be used as ground-truth values for the dataset. For example, it seems plausible that crowdworkers would have limited understanding of what kind of summaries would be useful for policymakers. \n- The minority representation case study seems pretty limited - only running on two topics, given the high inter-topic variance cited in Section 4.2, seems like it may not give the full picture. I’m also unconvinced that self-reports are the best way to classify opinions as minority or non-minority, as participants may not be well-calibrated about others’ opinions. The authors note significant past work on extracting classes of opinions from deliberation datasets - why wouldn’t a similar automatic approach work here?",
"questions": "- How did you choose the full spectrum of topics in Appendix B1?\n- Why is Spearman’s used in some places and Pearson’s in others?\n- How do you envision others leveraging your dataset and model in future work?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-24T13:22:46",
"modification_date": "2025-11-12T10:51:24",
"review_url": "https://openreview.net/forum?id=bl9hFm04Lc¬eId=QyWU3Sl5v3",
"license": "CC BY 4.0"
}
] |
|
DzecbBEmud
|
https://openreview.net/forum?id=DzecbBEmud
|
Differentially and Integrally Attentive Convolutional-based Photoplethysmography Signal Quality Classification
| 2.5
| 4
|
[
2,
2,
4,
2
] |
[
4,
5,
3,
4
] | 4
|
[
"Differential Attention",
"Differential Inteh Attention",
"Signal Quality",
"Photoplethysmography",
"Wearables"
] |
Photoplethysmography (PPG) is a non-intrusive and cost-effective optical technology that detects changes in blood volume within tissues, providing insights into the body’s physiological dynamics over time. By analyzing PPG data as a time series, valuable information about cardiovascular health and other physiological parameters such as Heart Rate Variability (HRV), Peripheral Oxygen Saturation (SpO2), and sleep status can be estimated. With the ever-increasing user adoption of wearable devices like smartwatches, Health Monitoring Applications (HMA) are gaining popularity due to their ability to track various health metrics, including sleep patterns, heart rate, and activity tracking, by making use of PPG sensors to monitor different aspects of an individual’s health and wellness. However, reliable health indicators require high-quality PPG signals, which are often contaminated with noise and artifacts caused by movement when using wearables. Hence, Signal Quality Assessment (SQA) is crucial in determining the trustworthiness of PPG data for HMA applications. We present a new PPG SQA approach, leveraging recent advancements in differential and integral attention-based strategies coupled with a two-stage procedure for promptly discarding highly anomalous segments, as a means of enhancing the performance of Convolutional Neural Network (CNN)-based SQA classifiers, balancing storage size and classifier accuracy in resulting models of increased robustness across PPG signals from different devices. Our methods are capable of achieving F1-scores between 0.9194 and 0.9865 across four expert-annotated datasets from different wearable devices.
|
Improving signal quality classification in photoplethysmography-based health applications using differential and integral attention
|
other topics in machine learning (i.e., none of the above)
|
https://openreview.net/pdf?id=DzecbBEmud
| 2025-09-16T02:18:58
| 4
|
[
{
"id": "g59Nl6Pi7G",
"forum": "DzecbBEmud",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission6230/Reviewer_7LK8",
"reviewer_name": "Reviewer_7LK8",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes a two-stage PPG signal-quality classifier for 3-s windows: (i) a fast amplitude-range threshold to discard obvious low-quality segments, followed by (ii) a compact NAS-discovered CNN whose global pooling is replaced by Differential and Differential-Integral attention layers. Experiments are reported on four Samsung datasets (Galaxy Watch 5/6/7 and Galaxy Ring), labeled by a single cardiologist, with subject-wise splits; two-stage models with the new attentions outperform in-paper attention baselines and several classical descriptors.",
"strengths": "1. Practical framing for on-device use (tiny models + cheap first stage), with a clear architectural description and an ablation on attention scaling init and one- vs two-stage design.",
"weaknesses": "1. Incomplete and imbalanced baseline coverage. The paper omits several recent, closely related PPG signal-quality / artifact-detection approaches:\na. Chen, Guo, Ding, Hu, & Rudin (2024) — Sparse learned kernels for interpretable and efficient medical time series processing (Nature Machine Intelligence, 6, 1132–1144), doi:10.1038/s42256-024-00898-4.\nb. Kasaeyan Naeini, Sarhaddi, Azimi, Liljeberg, Dutt, & Rahmani (2023) — A deep learning–based PPG quality assessment approach for heart rate and heart rate variability (ACM Transactions on Computing for Healthcare, 4(4), Article 24), doi:10.1145/3616019.\n2. Four of the five “specific SQA algorithms” cited come from two closely connected publications and lack public code. Two baselines are variants from Lucafo et al., and two are from Garcia Freitas et al.. None of these four provide public code, and Garcia Freitas et al., 2025 is a patent, not peer-reviewed. This concentration raises concerns about diversity of comparisons and reproducibility of baselines.\n3. Proprietary, single-vendor datasets limit external validity and reproducibility. All four evaluation sets are non-public and collected solely on Samsung wearables; labels come from a single cardiologist. This prevents third-party reproduction, obscures cross-brand generalization (e.g., Apple/Fitbit/medical PPG), and leaves inter-rater reliability unquantified.",
"questions": "1. Are there plans to release (even a subset of) GW5/6/7/RING or to benchmark on public, motion-rich sets to support reproducibility?\n2. Any “train on GW5/6 → test on GW7/Ring” or cross-brand results to probe generalization beyond a single vendor?\n3. Please justify the choice of a 3-second window and report a sensitivity analysis (e.g., 2/3/5 s, with/without overlap).\n4. Please report on-device efficiency metrics (latency in ms/inference, energy per inference, and peak RAM/flash) on a representative wearable SoC, not just parameter count.",
"flag_for_ethics_review": [
"Yes, Responsible research practice (e.g., human subjects, annotator compensation, data release)"
],
"code_of_conduct": "Yes",
"review_date": "2025-11-05T15:15:04",
"modification_date": "2025-11-12T11:35:58",
"review_url": "https://openreview.net/forum?id=DzecbBEmud¬eId=g59Nl6Pi7G",
"license": "CC BY 4.0"
},
{
"id": "ZtYPPCNCZa",
"forum": "DzecbBEmud",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission6230/Reviewer_hrHY",
"reviewer_name": "Reviewer_hrHY",
"rating": 2,
"confidence": 5,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes a lightweight two-stage framework for PPG signal quality assessment on wearable devices. It first removes obviously corrupted segments using a simple amplitude threshold, then classifies the remaining signals with a compact CNN enhanced by Differential and Differential-Integral attention layers. The model architecture is discovered via Neural Architecture Search (NAS) to ensure efficiency. Experiments on four wearable PPG datasets show high accuracy and generalization, demonstrating suitability for embedded health monitoring applications.",
"strengths": "The paper adapts differential and integral attention mechanisms to 1D PPG signal quality assessment, the technical quality is solid, with clear architecture design, well-motivated ablations, and consistent results across multiple datasets. The paper is clearly written, presenting equations and experimental procedures in an accessible and organized manner. Its significance lies in demonstrating an efficient and accurate on-device SQA solution that can enhance the reliability of wearable health monitoring systems.",
"weaknesses": "1. Limited real-world validation: All datasets were collected under resting conditions; no tests were conducted under motion or exercise scenarios, which are crucial for practical wearable applications.\n\n2. Weak novelty in method composition: The DIFF and DINT attention mechanisms are borrowed from prior works; the main novelty lies in their application to PPG, which limits conceptual originality.\n\n3. Insufficient cross-device generalization analysis: Although the study includes four datasets from different Samsung devices, they share similar sensor designs and processing pipelines. Thus, the results mainly show intra-brand consistency rather than true cross-device generalization. Testing on PPG signals from other vendors (e.g., Apple, Fitbit or Huawei) would better validate the model’s robustness and practical applicability.\n\n4. Lack of demographic diversity reporting: The paper does not report participants’ demographic information, such as skin tone, ethnicity, or age distribution. Since optical PPG signals are known to vary with melanin levels, skin thickness, and vascular properties, omitting this information limits the assessment of model generalization across diverse populations and real-world users.\n\n5. Single-expert labeling limits reliability: All PPG segments appear to have been annotated by a single expert. Given the subjective nature of signal quality assessment, relying on one annotator raises concerns about label noise and inter-rater bias. Incorporating multiple experts and reporting inter-rater agreement metrics would provide stronger evidence of labeling reliability and improve the validity of model evaluation.\n\n6. Unclear training protocol: There is an inconsistency between the stated 100-epoch training and the 1000-epoch results table, raising reproducibility concerns.\n\n7. No runtime or deployment benchmarks: Although the method targets embedded devices, there are no latency, FLOPs, or power-consumption measurements to substantiate on-device feasibility.\n\n8. Lack of interpretability or feature visualization: The paper does not provide attention maps or qualitative examples to show what temporal patterns DIFF/DINT actually capture, limiting insight into model behavior.",
"questions": "Please refer to the weaknesses discussed above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T20:57:37",
"modification_date": "2025-11-12T11:35:58",
"review_url": "https://openreview.net/forum?id=DzecbBEmud¬eId=ZtYPPCNCZa",
"license": "CC BY 4.0"
},
{
"id": "3qdNEobUE4",
"forum": "DzecbBEmud",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission6230/Reviewer_6JFB",
"reviewer_name": "Reviewer_6JFB",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes a signal quality classification method based on convolutional neural network (CNN) and combined with differential and integral attention mechanism for the problem of photoplethysmography (PPG) signal quality assessment (SQA).",
"strengths": "1. The experimental design is well-designed and the dataset is rich. This experiment uses multiple datasets from different devices (Samsung Galaxy Watch 5/6/7 and Galaxy Ring), ensuring the generalizability of the model.\n2. Experimental results demonstrate that the proposed method demonstrates good performance in metrics such as accuracy, F1-score, and AUC, particularly demonstrating good adaptability across different hardware platforms.",
"weaknesses": "1. The introduction fails to clearly demonstrate the specific contributions of the proposed method or its similarities and differences with previous approaches.\n2. Differential and integral attention layers appear to be the core contributions of this study, but they are not original.\n3. The results section lacks qualitative analysis of the results, such as the addition of interpretability analysis of differential (DIFF) attention and differential integral (DINT) attention. It would be helpful to add discriminant analysis with latent feature visualization.",
"questions": "Same as weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T09:44:04",
"modification_date": "2025-11-12T11:35:59",
"review_url": "https://openreview.net/forum?id=DzecbBEmud¬eId=3qdNEobUE4",
"license": "CC BY 4.0"
},
{
"id": "WvwjhK7bNV",
"forum": "DzecbBEmud",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission6230/Reviewer_TDdY",
"reviewer_name": "Reviewer_TDdY",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper proposes a two-stage framework to enhance CNN-based PPG signal quality assessment using two attention mechanisms: differential attention and differential integral attention. These mechanisms aim to improve the model’s ability to focus on informative waveform features by minimizing discrepancies in the attention mechanism. The methods are evaluated on four devices/datasets, showing improved performance over selected baselines.",
"strengths": "- Signal quality of PPG measurements is an important problem that can affect performance of downstream tasks. This paper tackles this challenge by evaluating many different models. \n- The datasets used in the paper consists of different devices and the labels have been annotated by a cardiologist. As a result, both the dataset quality and the insights derived from the analysis have the potential to be highly meaningful and impactful.",
"weaknesses": "**Explanation of core contributions and related work**: The paper does not adequately explain the core contributions. In particular: \n- **(1) Difference from previous work:** While the related work describes prior work in PPG SQA, it does not clarify how the proposed work is different\n- **(2) Technical contribution:** The technical contribution of the paper remains unclear because core idea of this work and text focus on differential and differentially-integral attention. Both these approaches are proposed by other works [1, 2]. Instead the paper needs to clarify the core contribution, perhaps it lies in a novel integration, adaptation, or application of these methods.\n- **(3) Significance to ICLR:** I believe that paper's contribution to ICLR is fairly limited because of the following reasons: (a) The technical contributions are limited as explained above. (b) On the other hand, the application of the paper is limited to SQA using PPG. While application-oriented papers can fit well within ICLR, previous PPG-related studies [3, 4] have demonstrated stronger methodological and conceptual contributions.\n\n**Experimental Results**: The difference between using DPFAL and DINT is negligible (e.g., 0.8967 and 0.8952). Therefore, I wonder if this difference is meaningful in any way. Adding confidence intervals or some statistical measure will provide better information. \n\nFigure 1. indicates DINT Attentive layer and Differential Attentive Layer are used together. However, from the writing in Section 2.5, DFPAL refers to Differential Attentive Layer and DINTAL refers to DINT Attentive Layer. Is this a discrepancy?\n\nIt is challenging to interpret which model performs best for the task because of two reasons. (a) While I appreciate the implementation of several models to evaluate signal quality, the paper should include a more focused analysis and provide guidance on which models to choose. At the moment, it seems that the number of proposed models is more than the number of baselines. (b) Many of these metrics can be moved to the appendix and the main paper can only describe metrics that are relevant to the problem (e.g., AUC and MCC).\n\n**Data Provenance**: Several important details about the dataset are missing, including participant demographics, the total number of signal segments, and the ratio of reliable to unreliable signals.\n\n**Results, Discussion, and Writing**: The results section provides only a superficial overview without emphasizing the most important findings. Moreover, there is no discussion that draws meaningful insights or interprets the implications of the results. The writing in these sections needs significant improvement to better communicate the key takeaways and their relevance. Additionally, the paper should include a dedicated Limitations section to acknowledge potential shortcomings and outline directions for future work.\n\n**Efficiency Analysis**: One of the stated motivations of this work is its potential for deployment on resource-constrained devices. However, the evidence supporting this claim is limited. The paper only reports model sizes, which provides an incomplete picture of efficiency. Additional metrics such as latency and throughput are needed to thoroughly assess the model’s suitability for resource-limited environments.\n\n[1] Ye, T., Dong, L., Xia, Y., Sun, Y., Zhu, Y., Huang, G., & Wei, F. (2024). Differential transformer. arXiv preprint arXiv:2410.05258.\n\n[2] Cang, Y., Liu, Y., Zhang, X., Zhao, E., & Shi, L. (2025). 
Dint transformer. _arXiv preprint arXiv:2501.17486_.\n\n[3] Abbaspourazad, S., Elachqar, O., Miller, A. C., Emrani, S., Nallasamy, U., & Shapiro, I. (2023). Large-scale training of foundation models for wearable biosignals. _arXiv preprint arXiv:2312.05409_.\n\n[4] Pillai, A., Spathis, D., Kawsar, F., & Malekzadeh, M. (2024). Papagei: Open foundation models for optical physiological signals. _arXiv preprint arXiv:2410.20542_.",
"questions": "- **Data Availability**: Indicate if the dataset will be released with paper upon publication.\n- Please fix missing citations throughout the paper (related work question marks and missing citations for baselines).\n- Is $\\delta$ simply the subtraction the highest and lowest magnitude?\n- **Method**: Figure 1. indicates DINT Attentive layer and Differential Attentive Layer are used together. However, from the writing in Section 2.5, DFPAL refers to Differential Attentive Layer and DINTAL refers to DINT Attentive Layer. Is this a discrepancy?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T16:48:21",
"modification_date": "2025-11-12T11:35:59",
"review_url": "https://openreview.net/forum?id=DzecbBEmud¬eId=WvwjhK7bNV",
"license": "CC BY 4.0"
}
] |
bppDDqbO3V
|
https://openreview.net/forum?id=bppDDqbO3V
|
Dissecting the Role of Positional Encoding in Length Generalization
| 4.5
| 3
|
[
2,
4,
6,
6
] |
[
4,
2,
3,
3
] | 4
|
[
"Mechanistic Interpretation",
"Positional Encoding",
"Length Generalization",
"Iteration Head",
"Reasoning Tasks."
] |
Length generalization (LG) is a persistent challenge for Transformers. Despite recent studies improving the models' LG capability, its underlying mechanisms are still underexplored. To better understand LG, we propose that LG requires alignment of the model’s inductive bias with the task’s computational structure, and validate this view with experiments on Transformers. Focusing on iterative tasks (e.g., Polynomial Iteration, Parity, Binary Copy), we systematically analyze different PEs and find that the misalignment persists for Transformers: the structural bias of softmax attention and computational biases from PEs destabilize LG under extrapolation. Notably, Transformers without positional encoding (NoPE) could show partial LG capability, potentially because implicit position encoding through hidden-state statistics and contextual token distributions preserves the consistent computation in extrapolation, though these signals decay with length, leaving the encoding misaligned with the task. Building on this mechanistic analysis, we introduce a lightweight enhancement—value-side relative coding with logit rescaling—that better aligns inductive bias with task structure. This sustains iterative computation and improves LG, offering insights for future PE design.
|
Exploring the mechanism of Positional Encoding in Length Generalization on Reasoning Tasks
|
interpretability and explainable AI
|
https://openreview.net/pdf?id=bppDDqbO3V
| 2025-09-19T19:05:25
| 4
|
[
{
"id": "yvMFEH7FbZ",
"forum": "bppDDqbO3V",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17734/Reviewer_nP5W",
"reviewer_name": "Reviewer_nP5W",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper addresses an important question in Transformers — why models struggle to generalize to longer sequences than those seen during training. The authors propose that length generalization (LG) depends on how well the model’s inductive bias aligns with the computational structure of the task. The authors study this question on Polynomial Iteration task, which has clear recursive computational rules. The authors examine how various positional encodings (PEs) — including Absolute (APE), Relative (T5, RoPE), and No Positional Encoding (NoPE) — influence LG. The authors find that Transformers can simulate iterative computation when trained on Polynomial Iteration, but the alignment is fragile under extrapolation. The model's inductive biases - computational (from PEs) and structural (from attention) are misaligned with the inherent computational structure of the task. Interestingly, NoPE sometimes generalizes better than explicit encodings due to implicit positional information in hidden-state statistics and contextual token distributions. The authors aim to reduce the two sources of misalignment: (i) structural bias from softmax attention and (ii) computational bias from PEs. They propose ViPE (Value-side relative position encoding with logic rescaling), which stabilizes the computation and improves LG.",
"strengths": "1. The observation that LG emerges from alignment between a model’s inductive bias and the computational structure of the task — is both intuitive and interesting.\n\n2. The findings that NoPE models can exhibit partial LG by implicitly encoding positional information through hidden-state mean/variance and contextual token distributions is interesting.",
"weaknesses": "1. The evaluation is performed on synthetic tasks. It would be interesting to see how the approach (particularly ViPE) performs on realistic tasks. Overall, limited evaluation that does not cover a good range of tasks. \n\n2. Limited connections with other existing works on PE (e.g., spectral analysis of PEs). \n\n3. ViPE performance is reported only on polynomial iteration task. Since the authors aim to reduce the two sources of misalignment: (i) structural bias from softmax attention and (ii) computational bias from PEs and propose ViPE (Value-side relative position encoding with logic rescaling), it would be good to evaluate this comprehensively. \n\n4. The propositions presented in the paper rely on uniform attention, which is used as a simplifying assumption but it is not clear to what extent does this limit a realistic scenario.",
"questions": "How would the approach perform on other more realistic tasks, e.g., NLP tasks? \n\nHow does the approach connects to other PEs-based modeling?\n\nHow sensitive are the results to the assumption of uniform attention?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T07:54:27",
"modification_date": "2025-11-13T00:51:49",
"review_url": "https://openreview.net/forum?id=bppDDqbO3V¬eId=yvMFEH7FbZ",
"license": "CC BY 4.0"
},
{
"id": "1xn00MlY1d",
"forum": "bppDDqbO3V",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17734/Reviewer_ivHQ",
"reviewer_name": "Reviewer_ivHQ",
"rating": 4,
"confidence": 2,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper investigates the mechanisms behind length generalization in Transformers, proposing that LG depends on the alignment between a model’s inductive bias and the computational structure of the task. Through synthetic experiments on iterative reasoning tasks, the authors analyze various positional encodings, including RoPE, NoPE, and find that most fail to generalize to longer sequences. They show that NoPE can partially achieve LG via implicit positional signals emerging from hidden-state statistics and contextual token distributions, though these signals decay with length. Building on this analysis, the paper introduces ViPE combining value-side relative coding and logit rescaling, aligning model bias with task structure and substantially improving extrapolation performance.",
"strengths": "1. The paper provides a novel explanation of how NoPE implicitly encodes positional information through hidden-state statistics and contextual token distributions, contributing theoretical clarity to understanding Transformers without explicit positional encodings. The proposed method ViPE introduces value-side relative encoding and logit rescaling, significantly improving length extrapolation and demonstrating the practical value of aligning model inductive bias with task structure.\n\n2. The experiments are thorough and clearly presented, covering multiple positional encodings and iterative tasks. The visualization of attention maps and performance degradation effectively supports the paper’s main claims about misalignment and fragility in length extrapolation.\n\n3. The paper offers a fresh view by framing length generalization as an alignment problem between a model’s inductive bias and the computational structure of the task, providing an insightful analytical framework.",
"weaknesses": "1. All experiments are conducted solely on synthetic iterative tasks, leaving it unclear whether the conclusions generalize to natural language or more complex reasoning tasks. This considerably limits the paper’s practical value. For instance, in general length generalization settings using pretrained models (e.g., Qwen2.5), would the attention maps still exhibit such clear structural patterns? \n\n2. Since the paper focuses exclusively on synthetic tasks, and ViPE appears somewhat tailored to tasks with precise computational structures (is that correct?), I wonder how the authors envision its performance on more typical tasks, including natural language and general length generalization benchmarks. Given resource and time constraints, additional experiments are unnecessary, but I would appreciate the authors’ perspective on this point.\n\n3. The analysis of NoPE seems to show only that NoPE can use statistical information to distinguish positions, but not that it actually does so in practice. The experiments in Section 5.3 merely demonstrate that NoPE encodes absolute and relative positions. Should this be considered only a lower bound on NoPE’s capability? That said, the authors’ analysis is valuable in that it inspired the design of ViPE, which is a positive contribution.\n\nHowever, I’m not fully confident in my own judgment, and I'm willing to adjust my score after seeing other reviewers’ comments and the authors’ rebuttal.",
"questions": "See weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T17:37:20",
"modification_date": "2025-11-12T14:06:07",
"review_url": "https://openreview.net/forum?id=bppDDqbO3V¬eId=1xn00MlY1d",
"license": "CC BY 4.0"
},
{
"id": "G8iF2SvLsR",
"forum": "bppDDqbO3V",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17734/Reviewer_KVcN",
"reviewer_name": "Reviewer_KVcN",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper analyzes the role of different positional encoding for length generalization on synthetic tasks like parity, binary copy, and such (typically tasks that can be accomplished by iterative local updates). The paper arguments for the misalignment between inductive biase of position encoding and the task as the key factor harming performance - and tries to propose some fixes that would created a better aligment - e.g. logit control and rescaling.",
"strengths": "* Overall interesting analyses\n* Sound lightweight extensions (ViPE) that shows effective results.",
"weaknesses": "* Lack of benchmarking of ViPE on realistic benchamrks greatly undermines scope of the paper. \n* While the paper provides some valuable insights, part of it feels somewhat \"obvious\" -- of course, one would think that failure to length generalize is an issue of the lacking the right inductive biase; and adding more task-specific inductive bias, or better invariance-mainetance across length increase, length generalization can be improved. This does not feel like a substantively new insight- although the key strength that redeems the paper is in proposing a potential solution.\n* Similar ideas (in the context of RNNs - but the principles seem to translate) have been also explored here [1]. The benchmarks in [1] (including those from its appendix) could be have been also useful to evaluate on.\n* Even the proposed method still seems to disgracefully degrades around sequence length 43-48 -- suggesting that the generalization may not scale well. \n\n[1] Monotonic Location Attention for Length Generalization - Ray Chowdhury et al.",
"questions": "n/a",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T09:09:53",
"modification_date": "2025-11-12T14:06:08",
"review_url": "https://openreview.net/forum?id=bppDDqbO3V¬eId=G8iF2SvLsR",
"license": "CC BY 4.0"
},
{
"id": "WgwA9UZNVq",
"forum": "bppDDqbO3V",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17734/Reviewer_VJmt",
"reviewer_name": "Reviewer_VJmt",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper focuses on the impact of positional encoding strategies (ALiBi, APE, FIRE, NoPE, RoPE, T5, YARN) on length generalization of Transformers on iterative tasks (specifically Polynomial Iteration, Parity, Binary Copy). The authors propose that successful LG relies on alignment between the iterative task's computational structure and the inductive bias of the positional encoding. Their mechanistic analysis shows that many popular PEs are misaligned with iterative tasks, helping to explain why they often perform worse than NoPE. Finally, they propose modified PEs aimed at improving alignment with iterative tasks.",
"strengths": "The mechanistic analysis is clear, convincing, and interesting. The NoPE statistical analysis nicely complements the constructive argument in Kazemnejad.",
"weaknesses": "A limitation is studying only iterative tasks. In particular, the Logit controller and Value-side relative PE appear to be specifically designed to improve LG for iterative tasks, but their impact on LG for other types of tasks (such as the many others studied in Kazemnejad) is unclear. To be practically useful, we would hope for PEs that could improve LG on many kinds of tasks, not just a limited subset.\n\nSee also questions.",
"questions": "Can you more explicitly position the paper relative to Kazemnejad, noting the novel contributions w.r.t. Kazemnejad? Kazemnejad et al. (2023) show the failure of LG and the relative superiority of NoPE over other PEs over a range of tasks (Fig F.5 shows (lack of) LG for Parity for NoPE, T5, ALiBi, APE). Kazemnejad further prove that NoPE can theoretically represent both absolute and relative PEs, e.g. for a specific weight configuration in the first layer, and all subsequent layers, respectively. In my reading, the novelty of the current paper lies in: a specific study of *iterative tasks* only (adding the tasks Polynomial Iteration and Binary Copy to Kazemnejad which already studies Parity), a mechanistic explanation of the specific failure-modes of various PEs for this task, and a new statistical analysis of NoPE’s ability to encode position information (distinct from Kazemnejad’s proof which relies on constructing specific weight matrices). Is this accurate?\n\nStudying 2- and 3-layer Transformers makes sense for the mechanistic analysis where you are looking for particular expected attention patterns, but do you know whether training deeper Transformers (more layers) on the same tasks show the same behavior shown in Figure 3 (i.e. does length generalization still degrade relatively quickly OOD, with NoPE extrapolating better than other choices of PE)? The trend where the LG improves from 2- to 3-layer makes one wonder if it might continue to improve with more depth -- and whether the relative performance of the different PEs might change.\n\nWhat can the study of iterative tasks tell us about other classes of tasks for which LG is desired? Can we expect the Logit controller and Value-side relative PE to improve (or at least not harm!) LG for other classes of tasks with different structure (e.g. arithmetic, etc.)?\n\nMinor notes:\nL17. positional enconding (PE) (abbrev. never introduced)\nL175 Typo “Algins”",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-21T11:04:47",
"modification_date": "2025-11-12T14:06:08",
"review_url": "https://openreview.net/forum?id=bppDDqbO3V¬eId=WgwA9UZNVq",
"license": "CC BY 4.0"
}
] |
ehtVTpcjES
|
https://openreview.net/forum?id=ehtVTpcjES
|
T³: Test-Time Model Merging in VLMs for Zero-Shot Medical Imaging Analysis
| 3.5
| 3.5
|
[
2,
4,
2,
6
] |
[
4,
3,
3,
4
] | 4
|
[
"medical imaging",
"vision language models",
"zero-shot generalization",
"model merging",
"healthcare"
] |
In medical imaging, vision-language models face a critical duality: \textit{pretrained} networks offer broad robustness but lack subtle, modality-specific characteristics, while fine-tuned \textit{expert} models achieve high in-distribution accuracy yet falter under modality shift. Existing model-merging techniques, designed for natural-image benchmarks, are simple and efficient but fail to deliver consistent gains across diverse medical modalities; their static interpolation limits reliability in varied clinical tasks.
To address this, we introduce \textbf{T}est-\textbf{T}ime \textbf{T}ask adaptive merging ($\mathbb{T^{3}}$), a backpropagation-free framework that computes \textit{per-sample} interpolation coefficients via the Jensen–Shannon divergence between the two models’ output distributions. $\mathbb{T^{3}}$ dynamically preserves local precision when models agree and defers to generalist robustness under drift. To overcome the inference costs of sample-wise merging, we further propose a batch-wise extension, $\mathbb{T^{3}}_{\mathcal{B}}$, that computes the merging coefficient across a batch of samples, dramatically reducing the computational bottleneck.
Recognizing the lack of a standardized medical-merging benchmark, we present a rigorous cross-evaluation protocol spanning in-domain, base-to-novel, and corruptions across four modalities. Empirically, $\mathbb{T^{3}}$ sets new state-of-the-art in Top-1 accuracy and error reduction, outperforming strong baselines while maintaining efficiency, paving the way for adaptive MVLM deployment in clinical settings.
|
We propose sample-wise test-time model merging in vision-language models, validating enhanced performance across four medical imaging classification tasks on a practical cross-dataset medical evaluation benchmark.
|
applications to physical sciences (physics, chemistry, biology, etc.)
|
https://openreview.net/pdf?id=ehtVTpcjES
| 2025-09-19T00:00:27
| 5
|
[
{
"id": "NMIjszVuSH",
"forum": "ehtVTpcjES",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12886/Reviewer_VAaC",
"reviewer_name": "Reviewer_VAaC",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper presents a backpropagation-free framework for dynamically merging pretrained and fine-tuned medical vision–language models (MVLMs) to improve zero-shot robustness across diverse imaging modalities. The authors identify a gap between generalist pretrained models (robust but insensitive to modality nuances) and domain-expert fine-tuned models (accurate but overfitted). To bridge this, T3 introduces a per-sample interpolation coefficient derived from the Jensen–Shannon divergence between model output distributions, guiding adaptive parameter fusion at test time. A batch-wise variant averages interpolation weights across samples to reduce computational overhead. The paper further proposes a standardized cross-modality benchmark across MedMNIST, MediMeta, and MedMNIST-C for assessing model-merging methods under in-domain, base-to-novel, and corrupted conditions. Empirical results show that it consistently improves accuracy and while maintaining inference costs comparable to single-model baselines.",
"strengths": "- The use of mutual information for per-sample adaptive interpolation offers a sound and interpretable improvement over entropy-based schemes.\n- Four imaging modalities, multiple OOD shifts, and comparisons to static/dynamic baselines provide convincing empirical support.",
"weaknesses": "- Theoretical depth is limited: the paper lacks a formal justification for why JS divergence optimally balances confidence and disagreement beyond empirical correlation plots.\n- Some baseline comparisons (e.g., to modern multi-adapter or PEFT-based methods) are missing, which could contextualize the relative merit of merging.\n- The benchmark’s clinical realism could be enhanced with higher-resolution or multimodal (text + image) tasks.\n- Writing style, though clear, is verbose and occasionally repetitive; streamlining could improve focus.\n- No exploration of failure cases, e.g., when both models are wrong but confident, despite being central to the JS-based rationale.",
"questions": "- How sensitive is performance to the λ bounds (λₘᵢₙ, λₘₐₓ) and extrapolation factor δ? Have you tested adaptive scaling per modality?\n- Would using layer-wise or token-level mutual information improve merging granularity beyond global JS divergence?\n- How does T3 perform when merging more than two experts (e.g., multi-institution specialists)?\n- Could the authors release the benchmark splits and code to facilitate fair comparison and adoption?\n- How does the approach handle cases where both models are confidently incorrect (high agreement but low accuracy)?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-08T22:12:37",
"modification_date": "2025-11-12T13:01:09",
"review_url": "https://openreview.net/forum?id=ehtVTpcjES¬eId=NMIjszVuSH",
"license": "CC BY 4.0"
},
{
"id": "mqRFu40gZ1",
"forum": "ehtVTpcjES",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12886/Reviewer_GBht",
"reviewer_name": "Reviewer_GBht",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This work presents a backprop-free method for interpolating between specialist and generalist MVLM models in the medical data setting. To motivate the work and evaluate the method, they construct a benchmark and perform extensive experiments across datasets and comparing against many related work.",
"strengths": "Motivation is very clear that unlike some synthetic natural image tasks, medical data may have varying types of distribution shift within a modality that require finer-grained interpolation between MVLM models. Can work with single inputs or batches of inputs, which gives more controllability in how interpolation happens.\n\nIntuition for the construction of the method is also clearly discussed and figures are well formatted to explain. \n\nThe method itself, based on JSD metric, is also very simple and interpretable, but shows strong positive results against static merging techniques. Great to see lots of comparison results done to many related work.",
"weaknesses": "Unclear what happens if the specialist and generalist models have radically different architectures, since the method uses a simple linear interpolation between parameters. Seems like a huge downside of using this method in practical applications, despite the strong results. \n\nNeed error bars to show if this method of dynamic merging is indeed better than the related work DaWin across all modalities. \n\nTypo line 52 \"decisions to reach to an\"",
"questions": "Is there intuition for why such a simple linear interpolation would be sufficient for getting combined model weights?\n\nIs the assumption that the two models will have same architectures? Is this a reasonable assumption in practice? Why or why not? To me, this doesn't seem like the most realistic assumption given diversity of models, pace of development, etc.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T10:46:51",
"modification_date": "2025-11-12T13:01:10",
"review_url": "https://openreview.net/forum?id=ehtVTpcjES¬eId=mqRFu40gZ1",
"license": "CC BY 4.0"
},
{
"id": "xI20VZtyC7",
"forum": "ehtVTpcjES",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12886/Reviewer_omq4",
"reviewer_name": "Reviewer_omq4",
"rating": 2,
"confidence": 3,
"soundness": 1,
"contribution": 1,
"presentation": 2,
"summary": "The paper addresses the challenge of combining generalist, pretrained Vision-Language Models (VLMs) with specialized, fine-tuned \"expert\" models for medical imaging analysis. The authors propose $\\mathbb{T^3}$, a Test-Time Task-adaptive merging framework that is backpropagation-free. The core of the method is a novel interpolation coefficient, λ(x), computed per-sample (or per-batch) based on the Jensen-Shannon (JS) divergence between the output distributions of the two models. The framework also introduces a new benchmark for model merging in medical imaging, upon which the authors claim their method achieves state-of-the-art (SOTA) results in accuracy and robustness.",
"strengths": "**1. Test-Time Practicality:** The proposed $\\mathbb{T^3}$ framework is designed to be backpropagation-free. This makes it suitable for test-time adaptation, as it doesn't require optimization, gradient computation, or access to training labels during inference.\n\n**2. Novelty of Mechanism:** The use of Jensen-Shannon divergence as the mechanism for calculating the interpolation coefficient based on output distribution agreement is a novel idea in the model merging space.",
"weaknesses": "**1. Misleading Tables:** The results in Table 2 are misrepresented, with incorrect bolding favoring the authors' method over superior baselines (e.g., Fundoscopy mean accuracy).\n\n**2. Misleading Complexity Analysis:** The O(B) vs. O(N) claim is fundamentally incorrect. Both methods are O(N) as B=N/BS. This shows a lack of rigor in the theoretical analysis.\n\n**3. Poor Evaluation Strategy:** Creating a new benchmark and claiming SOTA on it is poor scientific practice. The paper fails to demonstrate its method's value on any existing, standardized model-merging benchmark.\n\n**4. Narrow Application:** The paper's contribution is not shown to be general. It is only tested on medical images, making it a poor fit for ICLR.\n\n**5. Unexplained Computational Cost:** The 3800s inference time for the $\\mathbb{T^3}$ method in Table 3 is anomalous and unexplained. It suggests the described cost of 3 forward passes is either wrong or the implementation is extremely inefficient, making it non-competitive with methods like DaWin (124.7s).\n\n**6. Weak Theoretical Justification:** The merging logic seems to function as a regularization technique, rather than an optimal selection mechanism. By defaulting to the generalist when predictions tend to align (low JS divergence), its primary effect is to pull the output toward the generalist in a scenario offering limited apparent gain.\n\n**7. Excessive use of analogies:** There are lots of analogies in the paper between two physicians. There are figures and boxes throughout the body and appendix. I suggest removing everything regarding those analogies from the body of the paper because they are loose representations of the proposed architecture that do not beolong in a scientific paper. If the authors want this loose explanation in the paper I strongly suggest for it to be only in an appendix.",
"questions": "1. Can you please justify the claim that the batch-wise method has O(B) complexity while the sample-wise has O(N)? Do you agree that since B=N/BS, both methods are asymptotically linear, O(N)?\n\n2. Can you please explain the 3800s inference time for $\\mathbb{T^3}$ in Table 3? Why is it over 30 times slower than DaWin (124.7s), when both are described as requiring 3 forward passes?\n\n3. Why use a model merging technique when most of the results where almost equal to the specialist model?\n\n4. Since the method has nothing that is specific to medical imaging in its architecture, why not test in other domains?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T02:07:35",
"modification_date": "2025-11-12T13:01:10",
"review_url": "https://openreview.net/forum?id=ehtVTpcjES¬eId=xI20VZtyC7",
"license": "CC BY 4.0"
},
{
"id": "qnbaS8FiqW",
"forum": "ehtVTpcjES",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12886/Reviewer_6j9S",
"reviewer_name": "Reviewer_6j9S",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper introduces a Test-Time Task adaptive model merging framework (T3) for medical vision-language models (MVLMs). The challenge in medical imaging is balancing the performance between generalist models, which are robust but not specialized, and expert models, which perform well in their trained domain but struggle with out-of-distribution (OOD) data. T3 dynamically merges these two models based on their predictive agreement, using Jensen-Shannon divergence to calculate per-sample interpolation coefficients. This method adapts efficiently at test time without the need for backpropagation, addressing various medical imaging tasks under conditions like distribution shifts and corruption.",
"strengths": "Strengths:\n1.The test-time adaptive merging of pretrained and expert models is novel and addresses real-world challenges in medical imaging, where different datasets and conditions can greatly vary.\n2.The backpropagation-free nature of the method makes it computationally efficient, which is crucial for medical applications that require quick and reliable inferences.\n3.The method outperforms several baselines across multiple medical modalities, demonstrating its robustness and generalization capabilities.\n4.The approach works across various imaging modalities, making it applicable in diverse medical settings.\n5.The manuscript details the experimental setup and provides clear pseudocode for the T3 algorithm, which should be beneficial for practitioners looking to replicate or build upon this work.",
"weaknesses": "Weaknesses:\n1.While the use of Jensen-Shannon (JS) divergence to guide model merging is well-motivated empirically, the manuscript could benefit from a more formal theoretical justification of why this divergence is the most suitable metric for this task, especially in comparison to other divergences like Kullback-Leibler divergence (KL) or Total Variation (TV). An in-depth theoretical analysis of the advantages and limitations of JS divergence in the context of model merging could provide a more robust foundation for the proposed method.\n2.The manuscript demonstrates excellent results across multiple medical modalities. However, it would be insightful to see how T3 performs in the context of more diverse real-world scenarios, such as multi-modal inputs (e.g., combining different imaging modalities such as CT scans and MRI) or more heterogeneous data sources (e.g., cross-institutional data).\n3.The authors briefly discuss how the interpolation coefficient λ(x) adapts based on mutual information. However, the interpretability of this process could be expanded. For instance, providing visualizations of how λ(x) varies across different input samples (such as under various corruption types or novel class scenarios) could help readers better understand how the model makes its adaptive decisions.\n4.The manuscript compares T3 to several static and dynamic merging methods, such as Model Ensemble, Task Arithmetic, and DaWin. While these comparisons are comprehensive, it would be helpful to also compare T3 against methods that explicitly focus on zero-shot or few-shot medical image classification, such as transfer learning-based approaches or methods that utilize large language models like MedCLIP[1].\n5.It would be useful to also consider additional evaluation metrics, such as F1-score, area under the curve (AUC), or confusion matrices, to give a more complete picture of the method's performance, especially when dealing with imbalanced datasets or novel classes.\n6.The method shows promise under typical domain shifts and data corruptions, but medical imaging data can be highly variable, and edge cases (e.g., images with extreme noise or artifacts, or rare diseases) might not be well-represented in the current experiments.",
"questions": "Questions:\n1.Why refer to JS divergence as “mutual information” (Eq. 5, Section 3.2)? This is technically incorrect—mutual information is between random variables, not two distributions over the same variable. Please explain.\n\n2.Why exclude TTA methods like TPT[2] or CoOp-based[3] adaptation as baselines? Even if they adapt a single model, they are strong zero-shot competitors in OOD settings. A comparison would better position T3’s value.\n3.The paper claims “zero-shot medical imaging analysis,” but the expert model is fine-tuned on labeled in-domain data. Please clarify that “zero-shot” refers only to test-time inference without task-specific adaptation, not the entire pipeline.\n4.The paper primarily focuses on classification tasks. How would the approach perform in other medical imaging tasks such as segmentation or detection? Are there plans to extend the framework to such tasks?\n5.While T3 is effective when merging pretrained and expert models, how would it perform in scenarios where pretrained models may not be available or where expert models are extremely specialized?\n\n[1]Wang Z, Wu Z, Agarwal D, et al. Medclip: Contrastive learning from unpaired medical images and text[C]//Proceedings of the Conference on Empirical Methods in Natural Language Processing. Conference on Empirical Methods in Natural Language Processing. 2022, 2022: 3876.\n[2]Shu Manli, Nie Weili, Huang De-An, Yu Zhiding, Goldstein Tom, Anandkumar Anima, and Xiao Chaowei. Testtime prompt tuning for zero-shot generalization in visionlanguage models. In NeurIPS, 2022.\n[3]Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. International Journal of Computer Vision (IJCV), 2022.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T23:01:47",
"modification_date": "2025-11-12T13:01:11",
"review_url": "https://openreview.net/forum?id=ehtVTpcjES¬eId=qnbaS8FiqW",
"license": "CC BY 4.0"
}
] |
XNk56rmmiy
|
https://openreview.net/forum?id=XNk56rmmiy
|
Towards Adaptive ML Benchmarks: Web-Agent-Driven Construction, Domain Expansion, and Metric Optimization
| 3.333333
| 3.333333
|
[
2,
2,
6
] |
[
4,
4,
2
] | 3
|
[
"Benchmark",
"Large Language Models",
"Language Agents",
"End-to-End Machine Learning",
"Evaluation Framework",
"Data Science Automation"
] |
Recent advances in large language models (LLMs) have enabled the emergence of general-purpose agents for automating end-to-end machine learning (ML) workflows, including data analysis, feature engineering, model training, and competition solving. However, existing benchmarks remain limited in task coverage, domain diversity, difficulty modeling, and evaluation rigor, failing to capture the full capabilities of such agents in realistic settings.
We present TAM Bench, a diverse, realistic, and structured benchmark for evaluating LLM-based agents on end-to-end ML tasks. TAM Bench features three key innovations:
(1) A browser automation and LLM-based task acquisition system that automatically collects and structures ML challenges from platforms such as Kaggle, AIcrowd, and Biendata, spanning multiple task types and data modalities (e.g., tabular, text, image, graph, audio);
(2) A leaderboard-driven difficulty modeling mechanism that estimates task complexity using participant counts and score dispersion, enabling scalable and objective task calibration;
(3) A multi-dimensional evaluation framework incorporating performance, format compliance, constraint adherence, and task generalization.
Based on 150 curated AutoML tasks, we construct three benchmark subsets of different sizes—Lite, Medium, and Full—designed for varying evaluation scenarios. The Lite version, with 18 tasks and balanced coverage across modalities and difficulty levels, serves as a practical testbed for daily benchmarking and comparative studies.
|
datasets and benchmarks
|
https://openreview.net/pdf?id=XNk56rmmiy
| 2025-09-18T21:19:00
| 3
|
[
{
"id": "IoNKb2h43G",
"forum": "XNk56rmmiy",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11551/Reviewer_HQoU",
"reviewer_name": "Reviewer_HQoU",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "The paper introduces TAM-Bench, a new benchmark designed to test how well LLM-based agents can handle end-to-end machine learning tasks. Instead of relying on manual curation, it automatically gathers and standardizes real competition tasks from sites like Kaggle using a web-agent system. It also estimates task difficulty from leaderboard data and evaluates agents across several aspects, including performance, constraint following, and output format correctness. In experiments with AIDE and OpenHands using GPT-4.1 and DeepSeek-V3, GPT-4.1 was generally more stable and reliable, while DeepSeek-V3 showed strong results on certain tasks. Overall, TAM-Bench aims to provide a more practical and scalable way to evaluate AutoML agents in realistic settings.",
"strengths": "1.\tThe paper presents an automated and scalable benchmark pipeline that reduces manual effort and ensures diverse task coverage.\n2.\tThe leaderboard-based difficulty modeling offers a more objective and reproducible way to assess task complexity.\n3.\tThe evaluation framework is comprehensive, considering both performance and practical constraints.",
"weaknesses": "1.\tThe experimental design is shallow. TAM-Bench evaluates two open-source AutoML agent frameworks, but each framework’s base language model includes only one open-source model (DeepSeek-V3) and one closed-source model (GPT-4.1). Evaluating only two models is far from comprehensive and cannot reflect the capability boundaries of diverse AutoML agents, offering limited value to the community.\n2.\tThe selection of base models is arbitrary. Excluding the Qwen series models simply because they “encountered JSON parsing errors during execution” is unreasonable, as this issue could be resolved through function calling or post-processing the responses. Furthermore, it is unclear why the authors chose DeepSeek-V3 instead of Llama-3 or other comparable language models.\n3.\tThe authors propose an automatic pipeline for benchmark construction, but they do not systematically discuss the quality of the synthesized data, nor do they conduct any manual quality inspection of the benchmark samples. I am seriously concerned about the reliability of the automatically generated data.\n4.\tThe writing is poor. For example, Figure 1 is never mentioned in the main text, and its caption fails to provide any meaningful information, which leaves readers confused.",
"questions": "1.\tTAM-Bench focuses on language model-based agents, so how does it handle inputs such as audio and images?\n2.\tThe evaluation metrics in TAM-Bench are all based on final submissions, yet in long-sequence agent tasks, assessing the intermediate process is also meaningful. Why does TAM-Bench only consider result-based metrics?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T19:14:29",
"modification_date": "2025-11-12T12:44:00",
"review_url": "https://openreview.net/forum?id=XNk56rmmiy¬eId=IoNKb2h43G",
"license": "CC BY 4.0"
},
{
"id": "kn7s2rIYuo",
"forum": "XNk56rmmiy",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11551/Reviewer_zzJz",
"reviewer_name": "Reviewer_zzJz",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "In this work, the authors aim to address several limitations of existing agent benchmarks—such as high manual annotation cost, imbalanced task distribution, and poorly calibrated task difficulty. To this end, they propose TAM-Bench, a diverse, realistic, and well-structured benchmark for evaluating LLM-based agents on end-to-end machine learning tasks. While the benchmark demonstrates clear advantages in terms of task diversity and scale, there remain notable shortcomings in the overall framework of its construction and evaluation methodology.",
"strengths": "1. The task scale is 150, much larger than existing benchmarks such as MLEBench (75 tasks).\n2. The proposed benchmark contains more task fields like commerce, which is important for real-scenarios.",
"weaknesses": "1. In this work, the authors propose a difficulty modeling method via leaderboard structure, with many details unclear and questionable.\n(1) Since they use the score from the participants to determine the task difficulty, is there any filter mechanism on the participants? If no, how to avoid the distribution shift led by the difference of participants?\n(2) Current inclusion of number of participants seems not reasonable. Is there any scene that one task is too difficult / heavy to run such that its number of participants would be 1/100 or even 1/1000 of other simple-to-run tasks? In such case, will the difficulty be influenced in a wrong way?\n(3) Given all factors except the \"mean score\" fixed in eq (3), we might conclude that the higher the mean score is, the more difficult the task is, which is not reasonable.\n\n2. While the format validity metric is reasonable to evaluate the performance of agents, I think previous benchmarks might in-explicitly consider it, i.e., if it does not follow to the format, its answer might not even be parsed. Furthermore, I would appreciate it if the authors would provide more details of the generation of format requirements: test_labels.csv. If it is inherit from the construction of the task, I wonder its validness and diversity to evaluate agents' capability on this.\n\n3. The evaluation of this benchmark is not sufficient. Only GPT-4.1 & Deepseek-V3 are tested, and their performance seems different from the common sense knowledge on these two models. Further analyses are expected.\n\n4. Please adjust the usage of \\cite, \\citep, \\citet in the latex.",
"questions": "See Weaknesses,",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-23T17:38:42",
"modification_date": "2025-11-12T12:44:01",
"review_url": "https://openreview.net/forum?id=XNk56rmmiy¬eId=kn7s2rIYuo",
"license": "CC BY 4.0"
},
{
"id": "t75Ur1iLwN",
"forum": "XNk56rmmiy",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11551/Reviewer_ghri",
"reviewer_name": "Reviewer_ghri",
"rating": 6,
"confidence": 2,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes TAM Bench, a diverse, realistic, and structured benchmark for evaluating LLM-based agents on end-to-end ML tasks. TAM Bench features three key innovations: (1) A browser automation and LLM-based task acquisition system that automatically collects and structures ML challenges; (2) A leaderboard-driven difficulty modeling mechanism that estimates task complexity using participant counts and score dispersion, enabling scalable and objective task calibration; (3) A multi-dimensional evaluation framework.",
"strengths": "1. Automation and Scalability: The Web-Agent-driven task acquisition method improves task collection efficiency.\n\n2. Objective Difficulty Modeling: The leaderboard-based difficulty assessment is more objective and scalable than previous manual time estimates.\n\n3. Enhanced Benchmark Diversity: The Full version offers significantly broader coverage across data modalities and application domains.\n\n4. Comprehensive Multi-Dimensional Evaluation: The inclusion of Constraint Adherence and Format Compliance metrics effectively addresses the limitations of single-metric evaluations in existing benchmarks.",
"weaknesses": "The evaluation relies on an LLM (e.g., GPT-4) as the judge. The paper, however, does not discuss whether LLM-based evaluation can faithfully and objectively reflect the true capabilities of the models. It is suggested that necessary experiments be added to demonstrate (1) the gap between LLM evaluation and human evaluation, (2) the reliability of different LLM judges, and (3) whether GPT-4 can be replaced by an open-source model, especially given the relatively high cost of calling the GPT-4 API.",
"questions": "Please see the weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-22T02:44:47",
"modification_date": "2025-11-12T12:44:01",
"review_url": "https://openreview.net/forum?id=XNk56rmmiy¬eId=t75Ur1iLwN",
"license": "CC BY 4.0"
}
] |
|
4T9ncuf08p
|
https://openreview.net/forum?id=4T9ncuf08p
|
Dataset Regeneration for Cross Domain Recommendation
| 6
| 3
|
[
6,
4,
8
] |
[
3,
3,
3
] | 3
|
[
"Recommender System",
"Cross-domain recommendation",
"Dataset Regeneration"
] |
Cross-domain recommendation (CDR) has emerged as an effective strategy to mitigate data sparsity and cold-start challenges by transferring knowledge from a source domain to a target domain. Despite recent progress, two key issues remain: (i) Sparse overlap. In real-world datasets such as Amazon, the proportion of users active in both domains is extremely low, significantly limiting the effectiveness of many state-of-the-art CDR approaches. (ii) Negative transfer. Existing methods primarily address this problem at the model level, often assuming that logged interactions are unbiased and noise-free. In practice, however, recommender data contain numerous spurious correlations, and this issue is exacerbated in CDR due to domain heterogeneity.
To address these challenges, we propose a dataset regeneration framework. First, we leverage a prediction model to generate a pool of high-confidence candidate interactions to link non-overlapping target-domain users and source-domain items. Second, inspired by causal inference, we introduce a filtering process designed to prune spurious interactions. This process identifies and removes not only noisy edges created during generation but also those from the original dataset, retaining only the interactions that have a positive causal effect on the target-domain performance. Through these two processes, we can regenerate a source-domain dataset that exhibits a tighter coupling and a more explicit causal connection with the target domain.
By integrating our method with three representative recommendation backbones—LightGCN, BiTGCF, and CUT—we show that it significantly boosts their predictive accuracy on the target domain, achieving substantial gains of up to 23.81\% in Recall@10 and 22.22\% in NDCG@10.
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=4T9ncuf08p
| 2025-09-19T14:22:37
| 3
|
[
{
"id": "LCnihkLKE7",
"forum": "4T9ncuf08p",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16305/Reviewer_p3jn",
"reviewer_name": "Reviewer_p3jn",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper proposes a dataset enhancement strategy for cross-domain recommendation models. It aims to address the sparse cross-domain user overlap and noisy cross-domain signal (negative transfer) issues. The proposed strategy takes two stages. The first stage generates more user-item connections to address the sparse cross-domain user overlap issue, by learning a model that reconstructs edges in the source domain user-item graph. The second stage learns to identify spurious edges in the source domain user-item graph which should be removed to mitigate the negative transfer issue. Experimental results on two commonly used datasets, Amazon and Douban, showed the effectiveness of the proposed strategy.",
"strengths": "S1. The paper is motivated well with a detailed example to illustrate issues of existing cross-domain recommendation solutions. \n\nS2. The proposed technique works on the dataset level and is orthogonal to cross-domain recommendation models, which has the potential to be applied to and strengthen different cross-domain recommendation models. \n\nS3. The proposed technique is shown to be effective on commonly used benchmark datasets.\n\nS4. Source code is available.",
"weaknesses": "W1. Technical details:\n\n- The synthetic edge set contains edges between every non-overlapping user and their top-$k$ relevant items in the source set. Even the top-$k$ items might not be very relevant for some of the users, and hence there may be false positives. Using a fix $k$ for all users might not be the most effective. How about using a score threshold to filter the items instead (or a combination of both)? Also, how is the value of $k$ chosen in the experiments, and how does its value impact overall accuracy? \n\n- The NP-hardness of Problem $\\overline{P}$ needs a proof. \n\n- How are the node embeddings in $\\mathcal{F}_\\theta^T$ initialized?\n\nW2. Experiments:\n\n- The performance gains obtained by using the proposed Gen/Del dataset preparation strategy is quite small as shown in Table 1 (noting the statistical significance test results). The second-best results in the two N columns of the Douban datasets didn't seem to be labeled correctly. \n\n- It would be interesting to see model running time results, model effectiveness results as $K$ (as in Recall/NDCG@$K$) varies, and model effectiveness results as the number of cross-domain overlapping users varies. \n\nW3. Presentation: \n\n- The preliminaries section should be moved to the main text to set up the context for the methodology section. Without it, the methodology section is difficult to follow. \n\n- Even with the preliminaries section, the paper needs a notation table to explain what the many symbols mean in the paper. \n\n- The final sentence in Appendix A, \"The next section details the optimization techniques used to implement this filtering, integrating the pre-trained prediction model with edge weight adjustments to achieve the desired causal pruning.\", seems to be disconnected from the subsequent section. \n\n- Typo: \"”science fiction”\" => \"``science fiction”\"; \"in the Appendix B\" => \"in Appendix B\"; \"The single-domain baselines, trained exclusively on the target dataset\" => \"The single-domain baselines are trained exclusively on the target dataset\"",
"questions": "Please refer to the Weaknesses section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T16:11:34",
"modification_date": "2025-11-12T13:47:24",
"review_url": "https://openreview.net/forum?id=4T9ncuf08p¬eId=LCnihkLKE7",
"license": "CC BY 4.0"
},
{
"id": "vY2fbWxQW9",
"forum": "4T9ncuf08p",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16305/Reviewer_eJuQ",
"reviewer_name": "Reviewer_eJuQ",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper focuses on the cross-domain recommendation (CDR) task and addresses key challenges, including sparse user overlap across domains and negative transfer caused by spurious correlations in heterogeneous data. To tackle these issues, the authors propose a dataset regeneration framework that (1) generates high-confidence candidate interactions to link non-overlapping users and items, and (2) applies a causal-inference-inspired filtering process to remove spurious interactions from both the generated and original data. This approach enhances the causal connection between source and target domains. When integrated with recommendation models such as LightGCN, BiTGCF, and CUT, it substantially improves target-domain performance, achieving up to 23.81% gain in Recall@10 and 22.22% in NDCG@10.",
"strengths": "1. This paper focuses on the cross-domain recommendation (CDR) task and addresses two major challenges: sparse user overlap across domains and negative transfer caused by spurious correlations in heterogeneous data.\n2. To tackle these challenges, the authors propose a dataset regeneration framework. This approach strengthens the causal connection between the source and target domains.\n3. The proposed framework, when integrated with recommendation models such as LightGCN, BiTGCF, and CUT, substantially improves target-domain performance.",
"weaknesses": "1. The core argument of this paper is that prior work primarily addresses sparse overlap and negative transfer at the model level, whereas this work tackles these challenges from a data-centric perspective. In fact, in the cross-domain recommendation (CDR) field, several studies have already explored data-centric solutions, such as [1][2][3]. The authors also provide a comparative analysis between their approach and these existing data-centric methods.\n\n[1]https://arxiv.org/pdf/2405.20710\n[2]https://arxiv.org/abs/2307.13910\n[3]https://dl.acm.org/doi/10.1145/3626772.3657902\n\n2. There is an inconsistency between the paper title in the main text and the title on OpenReview. The authors should ensure that the titles are consistent before submission.\n\n3. The proposed framework is divided into two stages: generation followed by filtering.\n- For the generation stage, the authors employ self-supervised pretraining, which is a common practice in graph learning, and therefore this stage lacks significant novelty.\n- For the filtering stage, the authors adopt counterfactual interaction filtering. It would be helpful to clarify the motivation for using this technique compared with existing filter-based methods. Are there unique challenges that the counterfactual approach specifically addresses?",
"questions": "see weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T17:16:41",
"modification_date": "2025-11-12T13:47:25",
"review_url": "https://openreview.net/forum?id=4T9ncuf08p¬eId=vY2fbWxQW9",
"license": "CC BY 4.0"
},
{
"id": "72YCS7Q83i",
"forum": "4T9ncuf08p",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16305/Reviewer_MGmq",
"reviewer_name": "Reviewer_MGmq",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes a data-centric framework, Generate-and-Filter (Gen/Del), for\ncross-domain recommendation (CDR). Instead of focusing on model-level transfer, the\nauthors address data sparsity and negative transfer by regenerating a causal and\ndenoised source-domain dataset. The framework consists of two stages:\n(1) Generation phase: A self-supervised model generates synthetic source-domain\ninteractions for users who exist only in the target domain, using masked-edge\nreconstruction and BPR loss.\n(2) Filtering phase: A counterfactual inference module assigns causal importance\nweights to each generated or existing edge and filters out non-causal or spurious ones.\nThe resulting regenerated dataset can be plugged into any backbone recommender\n(e.g., LightGCN, CUT, BiTGCF). Experiments on Douban and Amazon datasets show\nconsistent improvements across multiple backbones, with gains up to 23.8% in\nRecall@10.",
"strengths": "•\tOriginality: Presents a fresh, data-centric perspective on CDR, shifting focus from model-level transfer to dataset regeneration. The integration of causal counterfactual filtering with GNN-based representation learning is particularly innovative.\n\t•\tQuality: Methodology is sound and well-formulated, with strong empirical results across multiple datasets and backbone models. Ablation studies effectively demonstrate the framework’s ability to mitigate negative transfer.\n\t•\tClarity: The paper is clearly written and well-structured, with intuitive explanations and informative figures.\n\t•\tSignificance: The framework is model-agnostic and has broad applicability, offering a principled foundation for future research on causal data manipulation and transfer learning.\n\nOverall, the work is conceptually original, empirically convincing, and highly relevant to data-centric and causal learning in recommender systems.",
"weaknesses": "(1) Limited Analysis of Computational Cost and Scalability\nWhile the proposed Generate-and-Filter framework is conceptually appealing, the paper lacks a systematic evaluation of its computational overhead. The counterfactual filtering stage requires training an additional GNN and repeatedly assessing target-domain performance, which could be computationally intensive for large-scale datasets. However, the paper provides no quantitative analysis of runtime, memory consumption, or scaling behavior with respect to dataset size, leaving the practicality of the approach for industrial-scale recommender systems uncertain.\n\n(2) Incomplete Symbol Definitions in the Counterfactual Interaction Filtering Section\nSeveral key symbols in Section 2.3—such as F_t^s, y_i, E_t^s, and the mapping l(E_t)—are introduced without explicit definitions or consistent explanations. This lack of clarity makes the mathematical formulation difficult to follow and reproduce. A concise summary table of notations or explicit variable definitions would greatly enhance readability and reproducibility.\n\n(3) Lack of Qualitative Analysis and Interpretability of Filtering Results\nAlthough the paper presents quantitative improvements in metrics such as Recall@10 and NDCG@10, it lacks qualitative analysis of the filtering process. There are no examples or visualizations illustrating which user–item edges are pruned or retained by the counterfactual filtering stage. Without such interpretability analysis, it is difficult to understand what types of interactions the model identifies as causal versus spurious.\n\n(4) Unclear Contribution of the Generation Phase\nAblation results suggest that most of the performance gains arise from the counterfactual filtering module rather than the data generation phase. However, the paper does not analyze the characteristics or quality of the generated interactions—such as their distribution, overlap with observed data, or effect on coverage. Consequently, the empirical contribution and necessity of the generation component remain ambiguous.",
"questions": "(1) Address scalability and efficiency concerns\nProviding details on the model’s runtime, computational cost, and resource usage\nduring experiments would help readers better understand the practical feasibility and\nefficiency of the proposed framework.\n(2) Deeper analysis of generation and filtering behavior\nAnalyzing how the generation phase adds synthetic edges and how the\ncounterfactual filtering module removes or retains interactions in practice would help\nreaders better understand the model’s decision behavior and its contribution to\nperformance improvements.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-23T13:48:11",
"modification_date": "2025-11-12T13:47:25",
"review_url": "https://openreview.net/forum?id=4T9ncuf08p¬eId=72YCS7Q83i",
"license": "CC BY 4.0"
}
] |
|
xFdT63wm5e
|
https://openreview.net/forum?id=xFdT63wm5e
|
Unified Continuous Generative Models for Denoising-based Diffusion
| 5.5
| 3.5
|
[
4,
6,
6,
6
] |
[
3,
3,
3,
5
] | 4
|
[
"generative modeling",
"denoising diffusion",
"consistency model",
"image generation"
] |
Recent advances in continuous generative models, encompassing multi-step processes such as diffusion and flow matching (typically requiring $8$-$1000$ steps) and few-step methods such as consistency models (typically $1$-$8$ steps), have yielded impressive generative performance.
However, existing work often treats these approaches as distinct paradigms, leading to disparate training and sampling methodologies.
We propose a unified framework for the training, sampling, and analysis of diffusion, flow matching, and consistency models.
Within this framework, we derive a surrogate unified objective that, for the first time, theoretically shows that the few-step objective can be viewed as the multi-step objective plus a regularization term.
Building on this framework, we introduce the **U**nified **C**ontinuous **G**enerative **M**odels **T**rainer and **S**ampler (**UCGM**), which enables efficient and stable training of both multi-step and few-step models.
Empirically, our framework achieves state-of-the-art results.
On ImageNet $256\times256$ with a $675\text{M}$ diffusion transformer, UCGM-T trains a multi-step model achieving $1.30$ FID in $20$ steps, and a few-step model achieving $1.42$ FID in only $2$ steps.
Moreover, applying UCGM-S to REPA-E improves its FID from $1.26$ (at $250$ steps) to $1.06$ in only $40$ steps, without additional cost.
|
generative models
|
https://openreview.net/pdf?id=xFdT63wm5e
| 2025-09-20T17:57:38
| 4
|
[
{
"id": "c2rOfFPh8s",
"forum": "xFdT63wm5e",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24943/Reviewer_iL3K",
"reviewer_name": "Reviewer_iL3K",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This manuscript proposes a shared framework for many-step diffusion models and few-step consistency models. Specifically, starting from a consistency-style high-level objective, the authors demonstrate that this objective is equivalent to a flow matching plus a self-consistency regularization, which can be implemented to be diffusion models, consistency models, or interpolation between them. On top of the framework, the authors also propose a sampling procedure for this framework, as well as advanced training techniques and improvements, such as time distribution, CFG-enhanced score function, and high-performance autoencoders. Experimental results demonstrate that using the proposed pipeline improves the FID on both multi-step and few-step settings with high-resolution ImageNet benchmarks.",
"strengths": "* Training a strong few-step generative model from scratch is an important topic.\n* The writing is easy to follow.\n* The experimental analysis and ablation study are well executed.",
"weaknesses": "**The high-level objective**: \nI have concerns about the necessity of the proposed high-level objective in Eqn. (4). When beyond the case of $\\lambda = 0$ and $\\lambda \\to 1$, the behavior of the optimal solution of the objective, and how to leverage the learned quantity, remains unclear to me. While the authors discuss this point in a simple case in Appendix F.1.4, it remains unclear to me how to reasonably leverage the learned quantity $\\lambda \\in (0, 1)$ unless I have missed something. One possible scenario would be to have a closed-form relationship of the conditional expectation (the diffusion model), the pushforward operation (the consistency model), and the learned quantity, but the current presentation did not shed any light on this.\n\nEmpirically, according to Table 5, the main results are obtained from the $\\lambda = 0$ and $\\lambda \\to 1$, which further makes the $\\lambda \\in (0, 1)$ part unclear. If so, then the implementation would boil down to diffusion models and a consistency model (or a finite difference version of sCM [1]).\n\n**The sampling procedure**: The current sampling procedure needs more justification than provided, especially under the $\\lambda \\to 1$ case (again, unless I have missed something, in that case, this needs to be clarified explicitly *in the main text*). For example, consider the linear coupling case, the \"decomposition\" and \"reconstruction\" become one Euler discretization (or equivalently one DDIM step). So it is unclear whether using a pushforward $f_\\theta^x(x_t, t)$ could simulate a path that is marginal preserving in this way. A relevant discussion is in IMM [2], where the authors show that one solution of marginal preserving simulation path with DDIM needs the network to condition on two timesteps. Here, the sampling process is achieved by only conditioning on one timestep. This needs more clarification/discussion.\n\n**The comparison for samplers**: The proposed UCGM-S couples (narrow-sense) sampler, timestep selection, CFG scale, and stochasticity together. Could the author elaborate on the baselines used for comparing the sampler and provide insights on which part contributes the most to reducing the confounders?\n\nI am open to revising my rating if the above concerns are addressed.\n\n(Minor)\n* Could the author provide some results for the sampler, as well as the training recipe (may use fine-tuning) on larger-scale text-to-image tasks, preferably examining the hard cases such as detailed text rendering?\n\n## Reference\n[1] Lu, Cheng, and Yang Song. ‘Simplifying, Stabilizing and Scaling Continuous-Time Consistency Models’. (ICLR 2025)\n\n[2] Zhou, Linqi, et al. ‘Inductive Moment Matching’. (ICML 2025)",
"questions": "See weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T10:36:19",
"modification_date": "2025-11-12T18:27:32",
"review_url": "https://openreview.net/forum?id=xFdT63wm5e¬eId=c2rOfFPh8s",
"license": "CC BY 4.0"
},
{
"id": "yMExzAj96s",
"forum": "xFdT63wm5e",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24943/Reviewer_dJfh",
"reviewer_name": "Reviewer_dJfh",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes UCGM, a single theoretical and practical framework that unifieds diffusion, flow matching and consistency models by demonstrating that they are all special cases of one continous-time objective and sampler. The paper also introduced a unified trainer as well as a sampler, enabling one backbone to generate high-fidelity images efficiently",
"strengths": "•\tThe paper gives one continuous-time formulation (UCGM) that covers diffusion, flow matching, and consistency models, the derivation is clean and non-trivial. \n\n•\tThey also gives a single derivation that directly link multi-step diffusion-like training and few-step training by introducing a self-alignment term that forces the model to agree with its own predictions. While this term also provides insights for instability in few step model.\n\n•\tThe paper provides extensive experiments across models, resolutions, and sampling regimes: they show that a single training formulation (UCGM-T), controlled by a consistency ratio λ, can be used toward either the traditional high-step diffusion / flow-matching regime (small λ) or the ultra-low-step consistency-style regime (large λ), so one can explicitly optimize for different latency/quality tradeoffs without redesigning the whole training algorithm.",
"weaknesses": "•\tMy major concern came from the claims that provides a single “unified” generative framework covers both multi- and few-step sampling. However, in practice this is not realized as one universally deployable model: the authors actually train multiple separate checkpoints, each with a different value of the consistency ratio λ (they report training three models with λ ∈ {0.0, 0.5, 1.0}), and then show how those different checkpoints behave under different sampling budgets. This means the system is unified at the level of theory and loss design, but not yet unified at the level of a single set of weights that performs optimally across both the high-step and ultra-low-step regimes.\n\n•\tIt’s a minor concern but it would be better to include more implementation details. Especially relevant in the λ≈1 few-step regime, where stability depends on undocumented tricks (e.g., second-order estimator, clipping, Beta time sampling), making true reproducibility and stability claims hard to verify.",
"questions": "Please see weakness sections",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T21:55:05",
"modification_date": "2025-11-12T18:27:33",
"review_url": "https://openreview.net/forum?id=xFdT63wm5e¬eId=yMExzAj96s",
"license": "CC BY 4.0"
},
{
"id": "YPfKwTWW6c",
"forum": "xFdT63wm5e",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24943/Reviewer_YWsJ",
"reviewer_name": "Reviewer_YWsJ",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The paper proposes UCGM, a unified framework for continuous generative models that encompasses diffusion, flow matching, and consistency models under one set of transport coefficients and a unified objective. A key theoretical result derives an equivalent surrogate loss showing that the few-step objective = multi-step objective + a self-alignment regularizer, clarifying why few-step training can become unstable as the consistency ratio λ→1. Built on this, the authors introduce UCGM-T (trainer) and UCGM-S (sampler). UCGM-T smoothly interpolates between multi-step and few-step regimes, while UCGM-S acts as a plug-and-play sampler that can reduce NFEs and sometimes improve FID for existing pre-trained models. Experiments on ImageNet-1K (256² & 512²) with DiT-style backbones and multiple VAEs report SOTA/competitive FIDs in both regimes.",
"strengths": "- A principled formulation that subsumes diffusion, flow matching, and consistency models; provides shared notation, training, and sampling views. \n- The surrogate objective neatly decomposes few-step training into multi-step + regularization, offering an intuitive explanation of instability at high λ. \n- UCGM-T tunes one knob (λ) to target many NFE budgets; UCGM-S accelerates existing models without retraining. \n- Competitive/SOTA FIDs at both 256² and 512² across multiple autoencoders; graceful degradation as steps shrink; broad compatibility with DiT/UNet families.",
"weaknesses": "- Almost all results are on class-conditional ImageNet-1K at 256² and 512²; CIFAR-10 only appears for ablations. There are no text-to-image or multimodal tasks, so it’s unclear how the method behaves with language conditioning or other modalities. The paper itself states the primary datasets are ImageNet-1K (512×512, 256×256) and uses CIFAR-10 (32×32) just for ablations; training is in latent space with specific autoencoders (e.g., SD-VAE, VA-VAE, E2E-VAE at 256²; DC-AE or SD-VAE at 512²). This tight focus limits external validity to broader generative settings.\n- The main comparisons and ablations emphasize FID (and step count/NFEs). Even the “plug-and-play” sampler section frames gains largely as “same or better FID with fewer steps,” and the system-level tables report FID (with occasional IS), but there’s no precision/recall, density/coverage, CLIP-based faithfulness, or calibration/diversity measures. This narrow metric set makes it hard to judge mode coverage and semantic alignment beyond FID.",
"questions": "Most results are on ImageNet with certain VAEs/backbones. It’s unclear if this also works well for text-to-image.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T17:39:49",
"modification_date": "2025-11-12T18:27:33",
"review_url": "https://openreview.net/forum?id=xFdT63wm5e¬eId=YPfKwTWW6c",
"license": "CC BY 4.0"
},
{
"id": "dGEvXr9Tn1",
"forum": "xFdT63wm5e",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24943/Reviewer_qqse",
"reviewer_name": "Reviewer_qqse",
"rating": 6,
"confidence": 5,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes a unified framework for training, sampling, and analysis of diffusion, flow matching, and consistency models. The authors claim novelty in this unification. They introduce a unified trainer called **UCGM-T** and a unified sampler called **UCGM-S**. The sampler is compatible with pretrained diffusion, flow matching, and consistency models. They provide empirical validation of UCGM-{T, S} on ImageNet, as well as of UCGM-S applied to a pretrained REPA-E model, and report that their methods reach or surpass state-of-the-art performance in these evaluations.",
"strengths": "1. In Sec. 3.1, a unifying loss function (Eq. 4) is derived, and the assumptions required to recover the respective diffusion, flow matching, and consistency model instances are explicitly stated. In addition, a surrogate loss function (Eq. 5 / 13) is introduced and its equivalence to the original loss is formally proven in the appendix (though I did not check the proof). The surrogate loss provides additional conceptual insight and appears useful for analytical investigations.\n\n2. The parameter choices by which the diffusion, flow matching, and consistency model instances are obtained are explicitly listed (Tab. 1).\n\n3. In Sec. 3.3, the authors present UCGM-S, a sampler that (as claimed) generates samples for all model types — diffusion, flow matching, and consistency — in a unified algorithmic way. In particular, it is claimed that the underlying model does not need to have been trained with UCGM-T but can come from any existing diffusion, flow matching, or consistency model training data.\n\n4. In Sec. 4, extensive experiments on ImageNet-1K at 256×256 and 512×512 resolutions with various baselines are reported.",
"weaknesses": "1. The introduction of a third, equivalent loss function (Eq. 6) appears abrupt and entirely unmotivated. Simply referring to “previous studies” is insufficient — especially since this creates the impression that those prior works may already have introduced a loss function unifying the same model families considered here, which would render the proposed framework (at least for training) largely obsolete.\n\n2. A convergence or stability analysis of UCGM-S is entirely missing. Theorem 7 (in the appendix) only shows that the extrapolated step is consistent and locally of order $O(h^2)$. A proof of global convergence (and hence correctness) or of the convergence order is absent.\n\n3. Clarity is sometimes lacking. For example, $p$ in Eq. 1 is never defined, and the (experienced) reader must infer that $p(z,x)=p_{\\text{prior}}(z)p_{\\text{data}}(x)$ is intended. It is not clear — and if it is, it should be explicitly stated — whether $z$ and $x$ are meant to be dependent in the general setting. Moreover, it is mathematically questionable (strictly speaking incorrect) to denote both the data and prior distributions by the same symbol $p$, distinguishing them only by the argument ($x$ vs. $z$).\n\n4. In the experiments, image quality is evaluated solely using FID, computed on only 50k samples. It is well known that FID estimates with this sample size can be far from converged, undermining comparability — especially at the decimal level. It is also unclear whether baseline FIDs were re-evaluated or taken from the corresponding papers; in the latter case, implementation-dependent differences in FID magnitude can further distort comparisons. No additional perceptual or diversity-based metrics are provided, and neither training nor sampling time is reported. The only computational metric considered for sampling cost is NFE.",
"questions": "1. If (Lu & Song, 2024) already introduced the loss function in Eq. 6 and this formulation already encompasses diffusion, flow matching, and consistency models (as the introductory sentence of Sec. 3.2 suggests — though I did not verify this claim), then what additional contribution does the present paper make toward unifying the training of these models?\n\n2. When UCGM-S is applied to a pre-trained model that was *not* trained with UCGM-T but instead obtained from existing diffusion, flow matching, or consistency model training data, is any form of conversion required to ensure compatibility with UCGM-S? Or can UCGM-S truly be used in a plug-and-play fashion? If conversion is necessary, can a clear description or implementation provided to perform it?\n\n3. Further questions arise from the weaknesses listed above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T01:37:51",
"modification_date": "2025-11-12T18:27:33",
"review_url": "https://openreview.net/forum?id=xFdT63wm5e¬eId=dGEvXr9Tn1",
"license": "CC BY 4.0"
}
] |
|
RDAhLHEHDm
|
https://openreview.net/forum?id=RDAhLHEHDm
|
Lost in Tokenization: Context as the Key to Unlocking Biomolecular Understanding in Scientific LLMs
| 6.5
| 3.5
|
[
6,
6,
6,
8
] |
[
3,
3,
4,
4
] | 4
|
[
"Biomolecular learning",
"Protein sequence"
] |
Scientific Large Language Models (Sci-LLMs) have emerged as a promising frontier for accelerating biological discovery. However, these models face a fundamental challenge when processing raw biomolecular sequences: the tokenization dilemma. Whether treating sequences as a specialized language, risking the loss of functional motif information, or as a separate modality, introducing formidable alignment challenges, current strategies fundamentally limit their reasoning capacity. We challenge this sequence-centric paradigm by positing that a more effective strategy is to provide Sci-LLMs with high-level structured context derived from established bioinformatics tools, thereby bypassing the need to interpret low-level noisy sequence data directly. Through a systematic comparison of leading Sci-LLMs on biological reasoning tasks, we tested three input modes: sequence-only, context-only, and a combination of both. Our findings are striking: the context-only approach consistently and substantially outperforms all other modes. Even more revealing, the inclusion of the raw sequence alongside its high-level context consistently degrades performance, indicating that raw sequences act as informational noise, even for models with specialized tokenization schemes. These results suggest that the primary strength of existing Sci-LLMs lies not in their nascent ability to interpret biomolecular syntax from scratch, but in their profound capacity for reasoning over structured, human-readable knowledge. Therefore, we argue for reframing Sci-LLMs not as sequence decoders, but as powerful reasoning engines over expert knowledge. This work lays the foundation for a new class of hybrid scientific AI agents, repositioning the developmental focus from direct sequence interpretation towards high-level knowledge synthesis.
|
applications to physical sciences (physics, chemistry, biology, etc.)
|
https://openreview.net/pdf?id=RDAhLHEHDm
| 2025-09-16T23:39:46
| 4
|
[
{
"id": "mP2ddusOo8",
"forum": "RDAhLHEHDm",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7811/Reviewer_oqHE",
"reviewer_name": "Reviewer_oqHE",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper tackles how Sci-LLMs handle biomolecular sequences. It argues current methods are stuck in a \"tokenization dilemma\": either they treat sequences as language, breaking up important motifs, or as a separate modality, which creates an alignment gap. The authors propose a \"context-driven\" approach, skipping raw sequences entirely. Instead, they use bioinformatics tools (BLAST, Pfam) to create a text summary for the LLM . Their experiments show this context-only method works best, and that adding the raw sequence back in actually hurts performance, acting like noise.",
"strengths": "The paper's \"tokenization dilemma\" concept is a really clear and smart way to frame a major hurdle for Sci-LLMs. The main idea—that feeding LLMs text context from tools like BLAST is better than giving them the raw sequence—is surprising but backed up well by the experiments. The finding that raw sequences just add \"noise\" and make things worse is a big deal. The visualizations (like in Figure 3) showing how alignment fails are also very convincing . This work is important because it questions the push for end-to-end models and offers a practical, hybrid alternative.",
"weaknesses": "The main drawback, which the authors rightly point out, is that this method can't handle mutation effect prediction. The bio-tools (BLAST, etc.) used to create the context just aren't sensitive to tiny, single-point changes, so the context for a normal protein and its mutant look the same . This is a major limitation, as it rules out a big area of computational biology. Also, the claims about it working on DNA are mostly tucked away in the appendix, not fully explored in the main paper.",
"questions": "Given the issue with mutations, do you have ideas for how this context-driven method could be adapted for those tasks? Maybe by using different tools that are sensitive to mutations to generate the context?\n\nYou mention your method is efficient because it avoids retraining, but running tools like InterProScan and BLAST for every query isn't free. How does the real-world inference time/cost of your pipeline compare to running a big, end-to-end model?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T14:08:21",
"modification_date": "2025-11-12T11:57:29",
"review_url": "https://openreview.net/forum?id=RDAhLHEHDm¬eId=mP2ddusOo8",
"license": "CC BY 4.0"
},
{
"id": "DHv82GdNeg",
"forum": "RDAhLHEHDm",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7811/Reviewer_CSFM",
"reviewer_name": "Reviewer_CSFM",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes a “paradigm shift” for how Scientific Large Language Models (Sci-LLMs) are trained, leveraging context-centric approaches driven by high-level structured knowledge from bioinformatics tools (e.g., GeneOntology, ProTrek, BLASTp, etc.). The solution addresses two key tokenization “dilemmas” that have posed challenges on the Sci-LLM space: sequence-as-language and sequence-as-modality. This approach accounts for multiple levels of language used to describe biomolecular phenomena – from human-encoded knowledge to genetics/evolutionary-encoded knowledge. Strikingly, the context-only approach largely outperforms joint context + raw sequences, suggesting that raw sequences contribute more to information noise. The contribution suggests that Sci-LLMs don’t necessarily require solving complex biological “language” from scratch but can leverage decades of accumulated biological knowledge contained within structured databases.",
"strengths": "1.\tOverall: The paper and aims to address a novel challenge in the Sci-LLM space, making a case that Sci-LLMs are better served as “reasoning engines over expert knowledge”, rather than pure sequence decoders. While this is noted and there is some evidence that this is the case, it does raise some circular logic around the quality of the annotations derived from the bioinformatics knowledgebases (addressed below in the weaknesses).\n2.\tGeneralizability: The solution in generalizable, with applications ranging from known proteins to “novel” proteins, as well as different biomolecular types.\n3.\tPracticality: The solution as it is described is practical, as it allows to more easily keep models up to date with new biological knowledge with lower development costs. (Although it could be argued that most of the effort is derived from maintaining the bioinformatics knowledgebases).",
"weaknesses": "1.\tCircular Logic: The approach works well when high-quality annotations exist, yet the solution also exists to propose annotations to fill in knowledge gaps. This counter-intuitively raises a bit of a “Catch 22” scenario.\n2.\tCore Argument: The basis of the manuscript suggests that there is in fact valuable information encoded within the evolutionary language through sequence tokens, yet the results suggest the opposite – and that human context exclusively drives the value.",
"questions": "1.\tHow do you address the circular reasoning between the strengths of the approach (incorporating high-quality expert annotations) and using this approach to predict those annotations where they do not yet exist? Could tool-calling agents solve this rather than building directly into the LLM? What are the tradeoffs?\n2.\tAlong this line of questioning, does the core contribution put a focus on the LLMs, or are you simply demonstrating that tradition bioinformatics pipelines already solve most of the problems around understanding protein function?\n3.\tHave these results been validated against human expert annotators?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T03:54:14",
"modification_date": "2025-11-12T11:57:30",
"review_url": "https://openreview.net/forum?id=RDAhLHEHDm¬eId=DHv82GdNeg",
"license": "CC BY 4.0"
},
{
"id": "3XgS2SWa7e",
"forum": "RDAhLHEHDm",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7811/Reviewer_J4os",
"reviewer_name": "Reviewer_J4os",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper argues that Scientific Large Language Models face a \"tokenization dilemma,\" struggling to interpret raw biomolecular sequences, which are either broken down into meaningless components or difficult to align with natural language. Through systematic experiments, the authors demonstrate that a \"context-only\" approach, where models are given high-level, human-readable knowledge from bioinformatics tools (like BLAST or Pfam) , consistently and substantially outperforms models given the raw sequence.",
"strengths": "Pros:\n- The authors proposed a new “context-only” method, which achieved significantly \n- The context-driven approach achieve good performance.",
"weaknesses": "Cons:\n- Context-only approach sounds interesting. However, compared with raw biomolecular sequences input, an inevitable con of this approach would be significant information loss (by discarding too many detailed information).\n- The capability of this approach is capped by the bioinformatics tools being used, e.g., InterProScan and BLAST.\n- As the context-only model relies majority on prior, it may not be a good tool for exploring “novel” findings (which may be out of distribution a bit).\n- Why in Table 1, QWEN series of models are not considered, while in Figure 2, for “ours” model, the author choose to use Qwen-embedding. What about the embedding visualization for specialized language models [1] like ESM series\n\n\n[1] Zheng, Y., Koh, H. Y., Ju, J., Yang, M., May, L. T., Webb, G. I., ... & Church, G. (2025). Large language models for drug discovery and development. Patterns.",
"questions": "See Weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T01:06:44",
"modification_date": "2025-11-12T11:57:31",
"review_url": "https://openreview.net/forum?id=RDAhLHEHDm¬eId=3XgS2SWa7e",
"license": "CC BY 4.0"
},
{
"id": "b9E8GWVKK2",
"forum": "RDAhLHEHDm",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7811/Reviewer_2DYC",
"reviewer_name": "Reviewer_2DYC",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper identifies and investigates a fundamental challenge in Scientific Large Language Models (Sci-LLMs) for biomolecular understanding, which the authors term the \"tokenization dilemma.\" They argue that existing paradigms—\"sequence-as-language\" (tokenizing sequences into atomic units) and \"sequence-as-modality\" (encoding sequences via specialized encoders)—suffer from weak representation and semantic misalignment, respectively. As a solution, the authors propose a \"context-driven\" paradigm, which bypasses raw sequence input. Instead, it leverages established bioinformatics tools (e.g., InterProScan, BLASTp) to generate high-level, human-readable textual context (e.g., functional domains, GO terms) that is natively aligned with the LLM's linguistic space. The authors evaluated three input modes: sequence-only, context-only, and a combination of both. Through extensive empirical evaluation on protein QA, EC number prediction, and DNA mutation tasks, the authors demonstrate that the context-only approach consistently and substantially outperforms all other modes. They find that adding raw sequence information to context often degrades performance, acting as \"informational noise.\"",
"strengths": "- The paper clearly articulates the \"tokenization dilemma\" as a critical, yet overlooked, bottleneck in Sci-LLMs. The conceptual framing of the two existing paradigms and their respective weaknesses is compelling and well-supported by prior work.\n- The central claim—that raw sequences can be detrimental when combined with high-level context—is counter-intuitive and strongly supported by systematic experiments across multiple models (Intern-S1, Evolla, NatureLM, GPT-4o, etc.) and tasks (protein function, pathway, localization, EC prediction). The consistent performance drop in \"Sequence + Context\" settings is a powerful result.\n- The authors evaluate their method on a wide range of benchmarks, including their own reconstructed dataset, temporal splits, and sequence identity-based splits (Easy/Medium/Hard). The inclusion of DNA-based tasks also demonstrates generalizability beyond proteomics.\n- The paper goes beyond mere performance comparisons. The layer-wise analysis of Evolla (Section 5.3, Appendix F) convincingly shows how semantic alignment (via Q-Former) erases fine-grained mutation signals, providing a mechanistic explanation for the limitations of the sequence-as-modality approach.",
"weaknesses": "- The context-driven approach relies heavily on the quality and coverage of external tools (InterProScan, BLAST). While an ablation study is provided (Appendix E), it does not fully explore the performance ceiling—what happens when these tools fail completely on highly novel proteins? The method's performance is inherently tied to the underlying databases' completeness and timeliness.\n- The paper equates \"biomolecular understanding\" primarily with high-level functional annotation (GO terms, pathways). It does not assess whether the model gains *mechanistic* or *structural* insights that might require raw sequence analysis (e.g., predicting the effect of a point mutation). The limitation section (Appendix J) correctly notes this but underscores a fundamental constraint of the proposed paradigm.\n- The strong performance of general LLMs (Gemini, GPT) in the context-only setting raises questions about potential memorization of public protein annotations from their vast pre-training corpora. While the authors take care to prevent label leakage in their *context generation*, they do not explicitly audit whether the test proteins' annotations were already in the LLMs' training data.\n- The primary metric (LLM-Score) relies on another LLM (DeepSeek-V3) to judge answer quality. While this is a reasonable approach for open-ended QA, it introduces potential biases and lacks the objectivity of exact-match metrics used in tasks like EC prediction.\n- Code is not provided in the current submission, providing it would be helpful to make work reproducible.",
"questions": "- Given the high performance of general-purpose LLMs like Deepseek-v3, Gemini2.5 Pro and GPT-5, what steps did you take to ensure that the ground-truth annotations for your test proteins were not present in these models' pre-training data? Could the results be partly explained by memorization rather than reasoning?\n- Your approach depends on external tools. Can you provide a qualitative analysis or failure case study for proteins where InterProScan and BLASTp return no or incorrect hits? How does the performance of your method degrade in such \"orphan\" scenarios, and what are the potential remedies?\n- The paper convincingly shows that context is superior for *retrieving* known functional annotations. However, do you believe your paradigm can be extended to tasks that require *discovering* novel functions or reasoning about structure-sequence relationships that are not yet captured in existing databases?\n- You note your method is computationally efficient as it avoids Sci-LLM retraining. However, running InterProScan and BLASTp for every query in a real-time application could be costly and slow. Could you comment on the latency and scalability of the full context-generation pipeline compared to a single forward pass of a sequence-as-language and a sequence-as-modality model?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T14:07:29",
"modification_date": "2025-11-12T11:57:31",
"review_url": "https://openreview.net/forum?id=RDAhLHEHDm¬eId=b9E8GWVKK2",
"license": "CC BY 4.0"
}
] |
|
q6kXd8Gpfj
|
https://openreview.net/forum?id=q6kXd8Gpfj
|
LearNAT: Learning NL2SQL with AST-guided Task Decomposition for Large Language Models
| 6
| 4.333333
|
[
4,
6,
8
] |
[
5,
4,
4
] | 3
|
[
"Large Language Model",
"Text-to-SQL"
] |
Natural Language to SQL (NL2SQL) aims to translate natural language queries into executable SQL statements, offering non-expert users intuitive access to databases. While recent approaches leveraging large-scale private LLMs such as GPT-4 have achieved state-of-the-art results, they face two critical challenges: the lack of openness and reproducibility, and the prohibitive computational cost of test-time scaling. To address these issues, we explore improving the model-level performance of small-scale public LLMs in NL2SQL under resource-constrained settings. Our exploratory experiments reveal the potential of task decomposition for enhancing NL2SQL performance, but also highlight the difficulty of enabling LLMs to decompose queries effectively. Motivated by these findings, we propose LearNAT, a novel framework designed to enhance LLMs’ decomposition capabilities. LearNAT introduces (1) a Decomposition Synthesis Procedure, which leverages AST-guided search with pruning strategies to generate verifiable and efficient decompositions, and (2) Margin-Aware Reinforcement Learning, which provides fine-grained preference optimization for multi-step reasoning beyond standard DPO. Extensive experiments on benchmark datasets demonstrate that LearNAT significantly improves the performance of small-scale LLMs, achieving results comparable to GPT-4 with only a 7B parameter model. These results validate the effectiveness of verifiable decomposition and fine-grained preference learning in advancing NL2SQL towards openness, transparency, and efficiency.
Our code is publicly available at https://anonymous.4open.science/r/LearNAT.
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=q6kXd8Gpfj
| 2025-09-20T10:39:26
| 3
|
[
{
"id": "SLzmpRkouv",
"forum": "q6kXd8Gpfj",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22830/Reviewer_kPT4",
"reviewer_name": "Reviewer_kPT4",
"rating": 4,
"confidence": 5,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces LearNAT, a novel framework that starts from task decomposition for NL2SQL. The core innovation lies in leveraging an AST-guided Monte Carlo Tree Search (MCTS) reasoning framework for efficient reasoning and data synthesis, as well as integrating AST-based structural alignment into the optimization objective to enhance the DPO algorithm. These methods collectively boost the baseline model’s performance by more than 10%.",
"strengths": "1. Proposes an AST-guided Chain-of-Thought (CoT) task decomposition and verification mechanism, achieving high controllability and impressive success rates in intermediate process validation.\n2. Innovatively improves the DPO algorithm by incorporating AST skeleton contrast in the optimization target, enabling fine-grained supervision of multi-step reasoning.",
"weaknesses": "1. The writing lacks clarity, particularly regarding the model inference stage: implementation details, methods used, and specific parameters are not sufficiently described. It remains unclear whether Monte Carlo Tree Search (MCTS) or voting methods were employed during the inference process. Furthermore, the rationale behind the specific parameter settings is not discussed, nor is it specified whether hyperparameter analysis was conducted to optimize the inference performance.\n2. The baseline selection in this paper is notably insufficient and lacks relevance. Current comparisons fail to directly target key methods such as DPO [1] and MCTS [2,3] with similar model scales, making LearNAT's claimed advantages difficult to substantiate. Without rigorous and fair evaluations against established approaches, the performance improvements may be unconvincing. The authors must provide more targeted and transparent baseline comparisons to truly demonstrate the superiority of LearNAT.\n3. Although the abstract and introduction highlight the heavy test-time computational burden of existing methods, the paper does not explicitly quantify the inference efficiency gains brought by AST-pruned MCTS, nor provide detailed time cost statistics. Supplementary experiments in this regard are recommended.\n\n\n[1] Uncovering the Impact of Chain-of-Thought Reasoning for Direct Preference Optimization: Lessons from Text-to-SQL\n\n[2] SQL-o1: A Self-Reward Heuristic Dynamic Search Method for Text-to-SQL\n\n[3] Alpha-SQL: Zero-Shot Text-to-SQL using Monte Carlo Tree Search",
"questions": "1. Given the diversity of SQL queries—where different SQL skeletons result in varying AST structures—how does the AST-guided MCTS handle such cases during data synthesis? Are these instances treated as error trajectories?\n2. Can the authors provide a more detailed analysis of sample correctness to elucidate the intrinsic incentives of the improved DPO? Specifically, it would be helpful to demonstrate under what kinds of samples LearNAT’s margin-aware DPO exhibits advantages over vanilla DPO, rather than only presenting final aggregate metrics.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-06T14:29:46",
"modification_date": "2025-11-12T18:13:22",
"review_url": "https://openreview.net/forum?id=q6kXd8Gpfj¬eId=SLzmpRkouv",
"license": "CC BY 4.0"
},
{
"id": "r6zKKAW7bs",
"forum": "q6kXd8Gpfj",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22830/Reviewer_RUNy",
"reviewer_name": "Reviewer_RUNy",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper propose LearNAT, a framework that enhances LLMs' ability to decompose complex queries through decomposition synthesis procedure and margin-aware reinforcement learning. Decomposition synthesis procedure uses AST-guided search and pruning to precede efficient and verifiable decomposition on the BIRD-train dataset for the margin-aware reinforcement learning. Margin-aware reinforcement learning modified DPO's loss function by a AST-based reward distinction between samples. The experiment shows that LearNAT enables 7B-parameter models to reach performance close to GPT-4.",
"strengths": "1. The paper introduces a novel approach that leverages ASTs for task decomposition, enabling the synthesis of training data for reinforcement learning.\n2. This paper further proposes a modification to the DPO framework by incorporating an AST-distance-based reward to better estimate reward margins and enhance performance on BIRD and Spider datasets.",
"weaknesses": "1. The authors acknowledge that although LearNAT does not achieve state-of-the-art performance among system-level approaches, it consumes fewer tokens during inference. However, LearNAT should also be compared against model-level approaches of similar model size, such as Reasoning-SQL and OmniSQL, which demonstrate stronger performance on the BIRD leaderboard.",
"questions": "None.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T22:02:15",
"modification_date": "2025-11-12T18:13:22",
"review_url": "https://openreview.net/forum?id=q6kXd8Gpfj¬eId=r6zKKAW7bs",
"license": "CC BY 4.0"
},
{
"id": "V2zuLfQeii",
"forum": "q6kXd8Gpfj",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22830/Reviewer_HybD",
"reviewer_name": "Reviewer_HybD",
"rating": 8,
"confidence": 4,
"soundness": 4,
"contribution": 3,
"presentation": 4,
"summary": "This paper presents LearNAT, a framework designed to enhance the performance of small, open large language models (LLMs) on natural language to SQL (NL2SQL) tasks. The method combines two ideas:\n\n1. An AST-guided decomposition synthesis process that uses Monte Carlo Tree Search (MCTS) guided by the SQL Abstract Syntax Tree (AST) of gold queries to generate verifiable intermediate subtasks (sub-SQLs).\n\n2. A Margin-Aware Direct Preference Optimization (MDPO) objective that introduces AST-based structural margins between positive and negative steps, providing fine-grained reward signals without a learned reward model.\n\nExperiments on BIRD and Spider benchmarks demonstrate substantial accuracy improvements for open Qwen2.5-coder models (7B/14B/32B) and notable efficiency advantages over GPT-4-based system-level pipelines. The paper provides ablations, cost analyses, and code release, emphasizing openness and reproducibility.",
"strengths": "- **Motivated practical problem:** Tackles a highly relevant challenge — enabling small, public models to achieve competitive NL2SQL performance without expensive test-time pipelines.\n\n- **Strong methodological alignment:** AST-guided decomposition is both interpretable and efficient, providing verifiable supervision that directly matches SQL’s structural nature.\n\n- **Novel preference learning variant:** The margin-aware DPO objective elegantly integrates structured information into preference learning without requiring a learned reward model.\n\n- **Empirical results and cost analysis:** Large gains on BIRD and Spider, along with token-cost comparisons, demonstrate both effectiveness and efficiency.\n\n- **Reproducibility:** Clear method description, ablation studies, and commitment to open code release.",
"weaknesses": "### Offline Dependence on Gold SQL ASTs\nThe synthesis process relies on gold ASTs ($AT(Y)$) for search and reward computation, limiting scalability to unlabeled settings.\n\n### Baseline Comparison Fairness\nModel-level and system-level results (e.g., GPT-4 pipelines) are mixed without clear labels or cost normalization.\n\n### Limited Reward-Learning Baselines\nMDPO is compared only to vanilla DPO.\n\n### Compute and Cost Transparency\nSynthesis and fine-tuning costs are underreported.\n\n### Robustness and Variance Reporting\nMain results appear from single runs, which limits reliability.\n\n### Scope Limitation to Canonical ASTs\nThe method depends on well-defined SQL ASTs, which may not exist in less-structured domains.\n\n### Relation to Concurrent Structured-Reasoning Work\nThe paper should cite Struct-LLM (Stoisser et al., 2025), which also explores structured reasoning over SQL and Cypher using reinforcement learning. Briefly contrast LearNAT’s offline AST-guided preference learning with Struct-LLM’s online RL-based reasoning approach.",
"questions": "### Method Clarity & Assumptions\n\n1. **Gold AST availability** \n You mention that the decomposition synthesis uses the gold SQL AST to guide MCTS.\n - How does this affect scalability to datasets without gold SQLs?\n - Can LearNAT generate training data in a semi-supervised setting, or does it strictly rely on gold supervision?\n\n2. **Verification signal granularity** \n You mention “verifiable intermediate subtasks.”\n - Are these subtasks verified purely syntactically (AST match) or also semantically (execution match on DB)?\n - How do you handle equivalent but syntactically different SQL forms?\n\n3. **MDPO stability** \n - Did you observe training instability compared to vanilla DPO due to margin scaling or structural rewards?\n - Are the AST-based margins dynamically computed or fixed?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T18:29:25",
"modification_date": "2025-11-12T18:13:22",
"review_url": "https://openreview.net/forum?id=q6kXd8Gpfj¬eId=V2zuLfQeii",
"license": "CC BY 4.0"
}
] |
|
EbgCEd8gyN
|
https://openreview.net/forum?id=EbgCEd8gyN
|
Sysformer: Safeguarding Frozen Large Language Models with Adaptive System Prompts
| 5
| 3.25
|
[
6,
4,
6,
4
] |
[
3,
3,
3,
4
] | 4
|
[
"Large Language Models",
"AI Safety",
"Jailbreaks",
"Guardrails",
"Frozen Model adaptation"
] |
As large language models (LLMs) are deployed in safety-critical settings, it is essential to ensure that their responses comply with safety standards. Prior research has revealed that LLMs often fail to grasp the notion of safe behaviors, resulting in either unjustified refusals to harmless prompts or the generation of harmful content. While substantial efforts have been made to improve their robustness, existing defenses often rely on costly fine-tuning of model parameters or employ suboptimal heuristic techniques. In this work, we take a novel approach to safeguard LLMs by learning to adapt the system prompts in instruction-tuned LLMs. While LLMs are typically pre-trained to follow a fixed system prompt, we investigate the impact of tailoring the system prompt to each specific user input on the safety of the responses. To this end, we propose Sysformer, a transformer model that updates an initial system prompt to a more robust system prompt in the LLM input embedding space while attending to the user prompt. While keeping the LLM parameters frozen, the Sysformer is trained to refuse to respond to a set of harmful prompts while responding ideally to a set of safe ones. Through extensive experiments on 5 LLMs from different families and 2 recent benchmarks, we demonstrate that Sysformer can significantly enhance the robustness of LLMs, leading to up to 80% gain in the refusal rate on harmful prompts while enhancing compliance with safe prompts by up to 90%. Results also generalize well to sophisticated jailbreaking attacks, making LLMs up to 100% more robust against different attack strategies. We hope our findings lead to cheaper safeguarding of LLMs and motivate future investigations into designing variable system prompts.
|
We present Sysformer, a transformer-based mechanism to adapt system prompt based on the user prompts to boost the robustness of LLMs.
|
alignment, fairness, safety, privacy, and societal considerations
|
https://openreview.net/pdf?id=EbgCEd8gyN
| 2025-09-18T23:43:05
| 4
|
[
{
"id": "DnGSgQPPsM",
"forum": "EbgCEd8gyN",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12772/Reviewer_BNqo",
"reviewer_name": "Reviewer_BNqo",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper presents Sysformer, a lightweight transformer model that adapts the system prompt for a frozen LLM. By doing so, it significantly improves the model’s safety while maintaining compliance on safe prompts, and is deployable without full model retraining. It offers a practical step toward safer LLM deployment in real‐world settings.",
"strengths": "The paper uses multiple benchmarks—JailbreakBench and StrongReject—plus 16 jailbreak variants. The proposed method show a strong empirical performance on these benchmarks, shows that adaptive system prompts can meaningfully improve LLM safety and robustness without modifying model weights.",
"weaknesses": "While the paper includes solid ablation studies on loss components and demonstrates impressive generalization to unseen jailbreak attack types, it does not assess cross-benchmark transfer — e.g., training Sysformer on JailbreakBench and evaluating on StrongReject (or vice versa). As a result, it remains unclear how well the learned safety behavior generalizes to qualitatively different harmful-prompt distributions. Including such a cross-dataset evaluation (or at least reporting zero-shot transfer results) would strengthen the claim that Sysformer captures general safety principles rather than dataset-specific artifacts.",
"questions": "Since Sysformer is trained on labeled data from existing safety benchmarks, it is unclear how general the method is to new domains or harmful behaviors. It would be very helpful to see an ablation where Sysformer is trained on one benchmark and evaluated on another, with comparison to the baselines, to better assess its cross-benchmark generalization.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T18:10:01",
"modification_date": "2025-11-12T12:59:42",
"review_url": "https://openreview.net/forum?id=EbgCEd8gyN¬eId=DnGSgQPPsM",
"license": "CC BY 4.0"
},
{
"id": "TaXWa2W51z",
"forum": "EbgCEd8gyN",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12772/Reviewer_mrLV",
"reviewer_name": "Reviewer_mrLV",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper introduces Sysformer, a lightweight transformer module that enhances LLM safety by adapting the system prompt based on each user input instead of fine-tuning model parameters. The transformer module transforms the system prompt in embedding space to enforce refusals on harmful prompts and compliance on safe ones, offering an efficient, modular approach to safeguard frozen LLMs through adaptive system-prompt optimization.",
"strengths": "+ The paper reads smooth and clear.\n+ The baseline evaluation is rather comprehensive, containing efficienct fine-tuning (LoRA) and embedding space optimization. Dataset selection looks good.",
"weaknesses": "- The transformer component takes in user prompts, which means the embedding prompt is generated on every query. While the motivation statement criticized efficiency of prior defense methods, Sysformer also introduces overhead but not evaluated.\n- The traiing loss uses predefined fixed strings like \"I cannot help you\" as a signal of refusal, which restricts the flexibility of the training method. Not sure if the training pipeline is working on larger and more powerful models that do not answer fixed strings as refusal (like GPT-5). It is also a risk of overfitting.\n- The evaluation does not evaluate the quality of answers to safe questions. Does the injected embedding ever harm model performance in normal tasks like text comprehension, math, etc?\n- As demenstrated in the evaluation, Sysformer cannot defend unseen jailbreaking attacks. The dependence on he training data limits the usefulness of Sysformer, as data nowadays is a bottleneck of model development. Requiring the data of attack also opens the oppotunity of adaptive attacks.",
"questions": "* If a model does not have clear fixed string for refusal, how should the training loss be computed?\n* Will Sysformer affect the quality of answering normal prompts?\n* While Sysformer can defend jailbreaking attacks by augmenting the training with the corresponding data, can jailbreaking attacks also evolve to defeat Sysformer given the knowledge of the transformer component?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T08:03:10",
"modification_date": "2025-11-12T12:59:42",
"review_url": "https://openreview.net/forum?id=EbgCEd8gyN¬eId=TaXWa2W51z",
"license": "CC BY 4.0"
},
{
"id": "ZvjFSySDlj",
"forum": "EbgCEd8gyN",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12772/Reviewer_UKuL",
"reviewer_name": "Reviewer_UKuL",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper presents a method for LLMs Jailbreak defense by generating an input-dependent system prompt. The advantage of this method is that the target LLM to be protected can be freezed, in other words, there is no need to finetune it. Experiments on two datasets show the proposed method has good defense performance.",
"strengths": "1. The proposed method is novel to my knowledge.\n\n2. The defense effectiveness is good.\n\n3. This paper is well written.\n\n4. The defense method does not rely on finetuning the target LLM to be protected.",
"weaknesses": "1. The baseline methods compared in this paper are very scarce. Many prompt based especially system prompt based defense methods are not discussed or compared at all.\n\n2. The proposed method relies on an additional dataset for training the prompt generation model. It is not clear how the proposed method relies on the size and quality of the training data. In addition, it is unclear whether the proposed method can work for the new attacks which are not covered by the training data.\n\n3. The proposed method needs to generate a new prompt for each user request, which brings additional computation and delay.\n\n4. The experiments are conducted on small and weak LLMs. It is unclear whether the findings hold for frontier models.",
"questions": "see my above comments",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-24T17:27:31",
"modification_date": "2025-11-12T12:59:44",
"review_url": "https://openreview.net/forum?id=EbgCEd8gyN¬eId=ZvjFSySDlj",
"license": "CC BY 4.0"
},
{
"id": "BzlbVHYk7d",
"forum": "EbgCEd8gyN",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12772/Reviewer_TQAQ",
"reviewer_name": "Reviewer_TQAQ",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes SysFormer, an adaptive system prompt optimization framework designed to safeguard large language models (LLMs) against harmful prompts. The method introduces a trainable transformer that optimizes system prompts in the input embedding space, enabling the model to refuse harmful prompts while maintaining proper responses to safe ones. Experiments conducted on five LLMs and two benchmarks demonstrate the method’s effectiveness in improving safety without retraining model parameters.",
"strengths": "- The paper is well-written and generally easy to follow.\n- The idea of enhancing LLM safety via adaptive system prompt refinement is an important and timely research direction, especially as system prompts become a key component of deployed LLM systems.\n- The experimental results suggest that the proposed method can improve refusal behavior across multiple models.",
"weaknesses": "1.\tThreat model clarity:\nThe paper’s threat model needs clearer justification. If the goal is to protect models using developer-provided system prompts, then the possibility of double-jailbreak attacks should be considered. On the other hand, if attackers do not have access to the system prompt, the described threat scenario may not fully hold.\n2.\tTraining complexity and stability:\nThe proposed optimization involves multiple loss terms. It remains unclear how these losses are balanced and whether training is stable and convergent in practice. A discussion or ablation study on this would strengthen the paper.\n3.\tComparison with related work:\nA closely related approach, SOP (Adaptive Content Restriction for Large Language Models via Suffix Optimization, 2025), also optimizes system suffix components for output control. The paper should include a direct comparison or discussion to clarify the conceptual and empirical differences between SysFormer and SOP.\n4.\tAlternative design choices:\nWhy not use a smaller auxiliary model or a lightweight controller to enhance or rewrite the system prompt dynamically? Direct instruction-level modification could be more straightforward—please justify this design decision.\n5.\tOptimization domain and textual space:\nSince the optimization is performed in the embedding space, it is unclear whether similar effects can be achieved directly in the textual space (e.g., using gradient-guided optimization such as GCG). A comparison or reasoning would be valuable.\n6.\tComparison with decoding-based defenses:\nSome decoding-level defenses can repair or filter harmful outputs more efficiently without modifying inputs. The paper should provide comparisons or explain why SysFormer is preferable in terms of flexibility or deployment.\n7.\tTransferability across models:\nHow transferable is the learned system transformer? Can the same parameters generalize to different LLMs, or is separate optimization required for each model? This issue affects the scalability of the approach.\n8.\tBlack-box applicability:\nThe paper does not discuss performance on black-box models (e.g., GPT series). Since system prompt control is particularly relevant for black-box deployments, experiments or analysis in this setting would be important to demonstrate broader applicability.",
"questions": "Overall, this paper explores a promising and practically relevant idea—leveraging adaptive system prompts for improving LLM safety. However, several conceptual and empirical issues remain open, particularly regarding the threat model, training stability, and comparison with existing methods. Addressing these concerns would significantly strengthen the contribution. I encourage the authors to expand the analysis, include more comprehensive baselines (especially SOP and decoding-based defenses), and clarify the deployment assumptions. I would like to reconsider my rating after reading the authors' response.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-24T12:55:32",
"modification_date": "2025-11-12T12:59:45",
"review_url": "https://openreview.net/forum?id=EbgCEd8gyN¬eId=BzlbVHYk7d",
"license": "CC BY 4.0"
}
] |
VYQuICALXj
|
https://openreview.net/forum?id=VYQuICALXj
|
Cross-Modal Redundancy and the Geometry of Vision–Language Embeddings
| 5
| 3.5
|
[
8,
4,
6,
2
] |
[
3,
3,
3,
5
] | 4
|
[
"multimodal",
"concepts",
"sparse autoencoder",
"modality gap",
"applications of interpretability"
] |
Vision–language models (VLMs) align images and text with remarkable success, yet the geometry of their shared embedding space remains poorly understood.
To probe this geometry, we begin from the Iso-Energy Assumption, which exploits cross-modal redundancy: a concept that is truly shared should exhibit the same average energy across modalities.
We operationalize this assumption with an Aligned Sparse Autoencoder (SAE) that encourages energy consistency during training while preserving reconstruction.
We find that this inductive bias changes the SAE solution without harming reconstruction, giving us a representation that serves as a tool for geometric analysis.
Sanity checks on controlled data with known ground truth confirm that alignment improves when Iso-Energy holds and remains neutral when it does not.
Applied to foundational VLMs, our framework reveals a clear structure with practical consequences:
**(*i*)** sparse *bimodal* atoms carry the entire *cross-modal* alignment signal;
**(*ii*)** *unimodal* atoms act as *modality-specific* biases and fully explain the modality gap;
**(*iii*)** removing unimodal atoms collapses the gap without harming performance;
**(*iv*)** restricting vector arithmetic to the bimodal subspace yields in-distribution edits and improved retrieval.
These findings suggest that the right inductive bias can both preserve model fidelity and render the latent geometry interpretable and actionable.
|
Understanding the geometry of multimodality through a concept-based approach, leading to applications like semantic vector arithmetic and modality gap free embeddings.
|
interpretability and explainable AI
|
https://openreview.net/pdf?id=VYQuICALXj
| 2025-09-18T18:16:51
| 4
|
[
{
"id": "P2FGxaMJlL",
"forum": "VYQuICALXj",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11140/Reviewer_U7cA",
"reviewer_name": "Reviewer_U7cA",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This paper proposes an Iso-Energy prior for learning aligned sparse concept dictionaries on top of VLM embeddings. By mildly enforcing equal second-moment (“energy”) of a concept across image/text domains, the aligned SAE separates bimodal atoms (semantic carriers) from unimodal atoms (modality-specific bias). This yields two actionable interventions: (i) closing the modality gap by masking unimodal atoms without hurting retrieval, and (ii) performing robust semantic vector arithmetic within the bimodal subspace, reducing OOD drift.",
"strengths": "1. Clear and effective framing of a testable modeling intuition.\nThe paper presents a well-motivated and conceptually coherent formulation. It articulates a precise inductive bias: that shared cross-modal concepts should exhibit similar activation statistics across modalities. This idea is not only intuitively appealing but also operationalized in a mathematically minimal way through second-moment constraints. The writing and structural clarity further reinforce this framing, making the contribution accessible and theoretically grounded.\n\n2. Methodologically grounded execution with dual functionality.\nThe proposed method delivers more than conceptual framing. It constructs a sparse, interpretable bimodal subspace that supports both analysis and intervention. The same subspace allows for attribution-style interpretation as well as semantically coherent editing, demonstrating that the learned structure is not only intelligible but also functionally controllable. This dual capacity is rarely achieved in the interpretability literature and gives the method both analytical and practical value.",
"weaknesses": "1. Sufficiency versus necessity of the Iso-Energy criterion.\nEqual second moments across modalities can indicate shared concepts, but they are not required. Without invariance to modality-specific anisotropy or rescaling, genuinely shared factors may be labeled unimodal. It would be better to add invariance controls such as per-modality whitening or variance normalization, and to compare with covariance-aware baselines such as CCA or CORAL to verify that the findings are not driven by marginal variance.\n\n2. Sensitivity to pairing noise and frequency imbalance.\nThe alignment term relies on paired image and text data, where long-tail frequencies and noisy matches are common. Energy equality can be confounded by corpus artifacts rather than semantics. It would be better to add two controls: a frequency-matched subsample that balances concept prevalence across modalities, and a shuffled-pairs stress test to quantify robustness to misalignment noise.",
"questions": "1. To what extent do the conclusions generalize to more complex tasks and architectures, such as VQA on LLaVA-series models?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T16:23:18",
"modification_date": "2025-11-12T12:38:29",
"review_url": "https://openreview.net/forum?id=VYQuICALXj¬eId=P2FGxaMJlL",
"license": "CC BY 4.0"
},
{
"id": "Q6n5sx19o3",
"forum": "VYQuICALXj",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11140/Reviewer_Yfxj",
"reviewer_name": "Reviewer_Yfxj",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "The paper studies the geometry of vision–language embeddings through a proposed Iso-Energy assumption, which states that shared cross-modal concepts should have equal activation energy across modalities. To explore this, the authors introduce an aligned sparse autoencoder (SAE-A) that adds a cosine-similarity–based alignment loss to a standard sparse autoencoder. The numerical experiments on CLIP, OpenCLIP, and SigLIP embeddings show that the aligned SAE could improve cross-modal alignment metrics while maintaining reconstruction quality.",
"strengths": "1. The paper provides an interesting perspective on the geometry of vision–language embeddings by introducing the Iso-Energy assumption.\n\n2. The numerical results are consistent, showing that the aligned SAE can improve cross-modal alignment metrics without damaging reconstruction quality.",
"weaknesses": "1. The connection between the Iso-Energy Assumption in Definition 2 and the implemented loss in Equation (1) is not that clear. Definition 2 describes a population-level equality of per-coordinate activation energies across modalities, whereas the alignment loss in (1) simply quantifies the batch-level sum of cosine similarity between sample codes. The paper does not provide a derivation or justification showing that this cosine similarity sum term directly enforces or meaningfully approximates the Iso-Energy property.\n\n2. The alignment loss in Equation (1) effectively reduces to a vanilla sum of cosine similarities between the latent codes from two modalities. This formulation looks too simple and somewhat ad hoc, lacking a clear connection to encourage equalized energy statistics as defined by the Iso-Energy assumption.\n\n3. The paper introduces the aligned sparse autoencoder without providing sufficient background on the baseline SAE formulation, its reconstruction, and sparsity terms. This makes the method less self-contained and more difficult for readers less familiar with the SAE framework to follow.\n\n4. Some of the mathematical definitions, particularly in Definition 2, are not presented rigorously. The conditional expectation is written as if conditioned on the specific sample $X$, which collapses the expectation to the outcome for that given value of $X$ in the conditional expectation of (1).",
"questions": "1. Can the authors clarify the precise theoretical link between the Iso-Energy Assumption in Definition 2 and the cosine-similarity–based alignment loss in Equation (1)? \n\n2. As the alignment loss in (1) reduces to a simple sum of cosine similarities, did the authors experiment with other similar regularizers (e.g., the sum of the squared or absolute value of the inner products in (1)) or other regularizers that can more directly enforce the Iso-Energy property in Definition 2?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T13:59:11",
"modification_date": "2025-11-12T12:38:29",
"review_url": "https://openreview.net/forum?id=VYQuICALXj¬eId=Q6n5sx19o3",
"license": "CC BY 4.0"
},
{
"id": "vXdTMs94vx",
"forum": "VYQuICALXj",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11140/Reviewer_BMn2",
"reviewer_name": "Reviewer_BMn2",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper studies the geometry of VLM embedding spaces via the Iso‑Energy Assumption—shared concepts should have domain‑invariant average squared activation. The authors train an Aligned Matching‑Pursuit SAE with a small cross‑modal alignment regularizer, yielding a dictionary that separates bimodal atoms (which carry all cross‑modal alignment) from unimodal atoms (a few high‑energy modality‑specific biases explaining the modality gap). On synthetic data and CLIP/OpenCLIP/SigLIP variants, this preserves reconstruction while markedly improving multimodality metrics, and enables interventions such as removing unimodal atoms to close the modality gap without hurting retrieval and performing in‑distribution semantic arithmetic restricted to the bimodal subspace.",
"strengths": "1. **Clear Problem Formulation and Strong Motivation:** The paper articulates a pertinent and significant problem in VLM interpretability and manipulability. By focusing on the geometric underpinnings of cross-modal alignment and the \"modality gap,\" the work addresses a critical area for improving VLM transparency and control. \n2. **Novel and Intuitive Hypothesis:** The \"Iso-Energy Hypothesis\" offers an elegant and interpretable statistical prior for identifying shared concepts within a sparse dictionary. This hypothesis provides a concrete, measurable criterion that transforms the abstract notion of \"cross-modal redundancy\" into an actionable constraint for dictionary learning.\n3. **Demonstrated Practical Interventions:** The ability to close the modality gap by masking uni-modal atoms and to perform \"in-distribution\" semantic arithmetic within the bi-modal subspace represents a significant practical contribution. These interventions offer concrete pathways for improving the robustness and interpretability of VLM applications.",
"weaknesses": "1. **Reliance on Paired Data for Alignment Regularization:** Although lines 158-160 allude to the potential of leveraging \"cross-modal redundancy alone,\" the current formulation of the alignment regularizer explicitly requires instance-level image-text pairs. The robustness of the method to noisy or imperfect pairings, or its applicability in settings with weak or no explicit pairings (e.g., using only domain labels), remains unexplored. This dependency may limit its generality and practical scope.\n2. **Limited Assessment of Dictionary Stability and Generalizability:** While the paper aims to enhance SAE dictionary stability via the Iso-Energy Assumption and demonstrates improved recovery on synthetic data during \"Sanity check\", it lacks a systematic and multi-faceted analysis of this robustness on large-scale real-world VLM datasets. The reproducibility of the learned dictionary under varying conditions, such as different expansion ratios, sparsity targets, or subsets of training data, remains unexplored. Thus, the evaluation of this crucial aspect in practical scenarios is not yet comprehensive.\n3. **Scope of Evaluation and Downstream Task Relevance:** While the paper demonstrates strong results on retrieval-oriented metrics and interventions, the generalizability to other VLM tasks (e.g., visual question answering, image generation, localization, counting, spatial reasoning) is not fully explored. The claim that \"masking uni-modal atoms does not hurt performance\" might hold for certain tasks, but could be detrimental for tasks that rely on more modality-specific information.",
"questions": "**External Validation of Atomic Concepts:** While visualizations are provided, the \"semantic stability\" of the atoms is largely qualitative. Is it possible to introduce quantitative measures for concept purity, namability, or alignment with human annotations to further validate the interpretability and meaningfulness of the identified bi-modal and uni-modal atoms? This would provide stronger evidence that the method is indeed recovering genuine, human-understandable concepts.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T22:53:38",
"modification_date": "2025-11-12T12:38:30",
"review_url": "https://openreview.net/forum?id=VYQuICALXj¬eId=vXdTMs94vx",
"license": "CC BY 4.0"
},
{
"id": "DuOtipv5xc",
"forum": "VYQuICALXj",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11140/Reviewer_XysU",
"reviewer_name": "Reviewer_XysU",
"rating": 2,
"confidence": 5,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "The paper investigates the geometry of the embedding space of CLIP-like models using sparse autoencoders. The authors augment SAE training by an iso-energy regularization term that encourages SAE latents to have similar spreads (i.e., second moments) for both modalities (Def. 2). They show that only a small set of features explain the modality gap and the remaining features are sufficient for cross-modal alignment. Removing former features reduces modality gap while retaining performance and the latter allows for vector arithmetic.",
"strengths": "* S1: The energy penalty (Def. 2/ Eq. 1) is interesting and a simple addition to the SAE loss.\n\n* S2: Synthetic & real experiments confirm that the proposed aligned SAE better matches the geometry of CLIP-like models. Particularly, if the SAE features can distinguish between shared or modality-gap-specific.\n\n* S3: The paper introduced four metrics to evaluate whether the SAE variants capture the geometrical or functional properties of the VLMs.\n\n* S4: The proposed SAE allows semantic vector arithmetic.",
"weaknesses": "* W1: 3 out of the 4 key findings have been reported in previous work (see bullet points below). While the findings are reached using a different, more complex approach, the current paper seems to re-report these findings.\n\t* Few (unimodal) features fully explain the modality gap (Fig. 2 left, 3) ~> see Fig. 4 in [3] or Fig. 3 in [4]\n\t* Bimodal features carry the entire cross-modal alignment signal (Fig. 2 right, Fig. 3) ~> cross-modality transferability experiments in [2], e.g., Tab. 2.\n\t* Removing those modality-gap features reduces the modality gap without loss of performance (Fig. 4) ~> again, see cross-modality experiments in [2], e.g., Tab. 2.\n\n* W2: The paper provides little to no experimental details in the main text, making it hard to understand the results without searching the supplemental.\n\n* W3: It is assumed that bimodal atoms are semantically aligned across modalities (“bimodal atoms encode the shared conceptual backbone” l. 345) and few qualitative examples are provided in Appendix G. However, there is no quantitative evaluation for this claim.\n\n* W4: Only contrastive models are evaluated. For example, the modality gap has been also observed in multimodal LLMs. It’d be important to include such results.\n\n## Comment\n\n* C1: I’d encourage the authors to include discussions on missing relevant literature [1-4].\n\n* C2: This work’s proposition 1 seems closely related to [2]’s proposition A.1. The only difference seems to be that the modality information can be adaptive here.\n\n* C3: The caption of Fig. 4 is partially occluded from Fig. 5.\n\n---\n\n[1] https://www.mlmi.eng.cam.ac.uk/files/2021-2022_dissertations/understanding_and_fixing_the_modality_gap_in_vision-language_models_reduced.pdf \n\n[2] https://openreview.net/forum?id=D-zfUK7BR6c \n\n[3] https://openreview.net/forum?id=uAFHCZRmXk\n\n[4] https://openreview.net/forum?id=QGUju9B68Z",
"questions": "* Q1: Is the standard SAE (l. 176/177, 185) the MP-SAE or is it truly standard SAE?\n\n* Q2: How are unimodal or bimodal features separated?\n\n* Q3: Do the unimodal features approximate the modality gap vector? Related to that, does it explain why they all have such high cosine similarities (Fig. 16b)?\n\n* Q4: What is $\\mu$ in Fig. 2 left?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-20T00:37:53",
"modification_date": "2025-11-12T12:38:30",
"review_url": "https://openreview.net/forum?id=VYQuICALXj¬eId=DuOtipv5xc",
"license": "CC BY 4.0"
}
] |
32QQlzm9ft
|
https://openreview.net/forum?id=32QQlzm9ft
|
REFLEX-Med: Reinforcement for Label-Free Explainability in Unified Medical Reasoning
| 3.666667
| 3.5
|
[
4,
4,
2,
6,
2,
4
] |
[
4,
4,
4,
2,
3,
4
] | 6
|
[
"medical reasoning",
"large vision-language models",
"explainability"
] |
Clinicians urgently need explanations they can audit, not merely fluent chains. Yet prevailing practices conflate interpretability with subjective human/LLM rationales, with post-hoc visuals loosely aligned to answers, or with answer-rationale consistency. These proxies are annotation-hungry, bias-prone, and crucially do not certify process verifiability: where the model looked and why it looked there. Meanwhile, reinforcement learning from feedback excels at answer verifiability but offers little support for constraining the provenance of attention or penalizing visually ungrounded reasoning. We introduce REFLEX-Med, a reinforcement framework that instantiates label-free explainability through two verifiable prerequisites: (i) faithful visual grounding, that is, text-conditioned localization in the image, and (ii) bi-directional cross-modal provenance, that is, a cycle of mutual traceability across image-text and text-text semantics. REFLEX-Med couples curriculum GRPO with two frozen rewards computed by a medical vision-language encoder: a visual fidelity reward aligning text-conditioned saliency between the model's own answer and an anchor text, and a bi-modal provenance reward enforcing image-text and text-text consistency in embedding space. Together with standard format and semantic-matching rewards, REFLEX-Med resists large VLM hallucination and attention-think drift, improving both answer quality and auditable faithfulness on unified medical reasoning (open- and closed-ended VQA), all without human or LLM rationale annotations.
|
alignment, fairness, safety, privacy, and societal considerations
|
https://openreview.net/pdf?id=32QQlzm9ft
| 2025-09-13T17:07:44
| 6
|
[
{
"id": "7tRBIbg4Sf",
"forum": "32QQlzm9ft",
"review_number": 9,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4734/Reviewer_vYv4",
"reviewer_name": "Reviewer_vYv4",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces REFLEX-Med, a reinforcement learning framework designed to provide process verifiability for unified medical VQA without requiring human annotations. The authors argue that current explainability methods, such as post-hoc saliency maps or chain-of-thought approaches that demand extensive annotations, have inherent flaws. To address this, the paper proposes two core verifiable pre-conditions: \"faithful visual grounding\" and \"bi-directional cross-modal provenance.\" Specifically, the method employs a frozen medical vision-language model as a \"judge\" to compute two novel reward signals, a Visual Fidelity Reward (VFR) and Bi-modal Provenance Reward (BPR). These rewards are used to fine-tune a LVLM via curriculum learning and the GRPO algorithm. Experimental results demonstrate that the proposed method outperforms baselines on several in-domain and out-of-domain medical VQA benchmarks.",
"strengths": "1. The paper tackles a problem of critical importance and significant challenge in the medical AI domain. In high-stakes clinical scenarios, it is crucial for a model not only to provide the correct answer but also to offer a reasoning process that can be audited and trusted by physicians.\n2. The paper conducts extensive experiments across multiple standard medical VQA datasets, comparing the proposed method against a range of baselines and demonstrating its superiority on several metrics.\n3. The paper is generally clearly structured, and easy to follow. The authors effectively articulate the problem background, motivation, and the proposed methodology.",
"weaknesses": "Despite its strengths, I have several major concerns regarding the novelty of the methodology, the rigor of the experimental evaluation, and the completeness of the exposition.\n1. **Limited Novelty of the VFR**: The core idea of VFR, enforcing visual grounding consistency by comparing the IoU of attention maps, is not a new concept. This technique has been widely used in computer vision and multi-modal learning, for instance, in Grad-CAM [1] and its variants, as well as in conditional image-text embedding networks [2]. Furthermore, the idea of \"text-visual consistency\" as a regularization or reward mechanism has been explored in prior work [3-4]. Consequently, the contribution of this component feels more like an application of existing techniques rather than a fundamental innovation.\n2. **Lack of Comparative Experiments**: The authors mention that the GRPO algorithm has already been applied to medical VQA tasks. However, the experimental comparison section lacks a direct comparison with existing GRPO-based medical VQA methods [5-7]. This makes it difficult for readers to accurately assess the true performance gain of the proposed method over the most relevant state-of-the-art work. \n3. **Clarity Issues and Lack of Symbol Definitions**: The paper's clarity suffers in several key areas. Symbols such as $y_i$, $\\pi_\\theta$, $c_i$ on page 5, lines 231-232, and $r_i$ in Equation (10) appear to be used without clear prior definition, which hinders comprehension. And Equation (8) introduces a LoopTight term, but the paper fails to explain its specific function, design rationale, or how it is utilized within the algorithm.\n4. **Insufficient Justification and Support for Reward Design**: The designed rewards, VFR and BPR, all use indicator functions. The paper claims this \"stabilizes group-standardized advantages,\" but provides no theoretical derivation or experimental evidence to support this crucial assertion. Using continuous values like IoU or cosine similarity directly as rewards is a more natural choice. And the reward design introduces several key hyperparameters ($τ_{IoU}=0.5, τ_{tt}=0.8, τ_{it}=0.5$). The paper provides no basis for selecting these specific thresholds.\n5. **Incompleteness of Ablation Studies**: The current ablation study only tests the scenario where VFR and BPR are removed simultaneously. This is insufficient for understanding the individual contribution of each reward component. A more comprehensive ablation study should include: 1) Experiments where only VFR is removed, and only BPR is removed. 2) An ablation on the choice of the \"medical judge\" model. 3) An ablation on the curriculum learning strategy.\n\n**References**:\n\n[1] Grad-cam: Visual Explanations from Deep Networks via Gradient-based Localization, In ICCV 2017.\n\n[2] Conditional Image-text Embedding Networks, In CVPR 2018.\n\n[3] Learning from Observer Gaze: Zero-shot Attention Prediction Oriented by Human-object Interaction Recognition, In CVPR 2024.\n\n[4] Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions? Computer Vision and Image Understanding, 2017.\n\n[5] Medvlm-r1: Incentivizing Medical Reasoning Capability of Vision-language Models (vlms) via Reinforcement Learning, In MICCAI 2025.\n\n[6] Medreason: Eliciting Factual Medical Reasoning Steps in llms via Knowledge Graphs, Arxiv, 2025.\n\n[7] Med-r1: Reinforcement Learning for Generalizable Medical Reasoning in Vision-Language Models, Arxiv 2025.",
"questions": "The questions are provided above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-11T04:27:40",
"modification_date": "2025-11-12T11:18:59",
"review_url": "https://openreview.net/forum?id=32QQlzm9ft¬eId=7tRBIbg4Sf",
"license": "CC BY 4.0"
},
{
"id": "yGSLiotx0x",
"forum": "32QQlzm9ft",
"review_number": 8,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4734/Reviewer_oSrf",
"reviewer_name": "Reviewer_oSrf",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper presents an RL fine-tuning framework for medical VLMs that aims to make explanations auditable without rationale labels. The method converts each (question, answer) pair into a declarative statement and evaluates two frozen signals from a medical vision-language encoder (BioMedCLIP): (1) a Visual Fidelity Reward (VFR) that grants a bonus if text-conditioned saliency for the model’s statement sufficiently overlaps (IoU) with saliency for an “anchor” statement built from the dataset ground-truth answer; (2) a Bi-modal Provenance Reward (BPR) that requires both text-text and image-text cosine similarities to exceed margins. These rewards are added to the conventional format and answer rewards and optimized with curriculum GRPO (i.e., close-ended first, then open-ended QA). Experiments are performed on multiple datasets, including both in-domain and out-of-domain evaluation.",
"strengths": "1. The paper is well-organized and well-written. \n2. The motivation is sound. \n3. The experiments are conducted on six datasets.",
"weaknesses": "1. The comparison with previous works should be improved. \n- Label-Free RL has been widely explored [1][2][3][4]. \n- The core contributions of this work are the proposed faithful visual grounding and bi-directional cross-modal provenance rewards for RL training; however, these have already been introduced in previous studies [5][6][7].\n\n2. The definition of “Label-free” is not clear, as the ground-truth anchors are provided during training.\n\n3. The main claim of this paper is that the proposed method can provide auditable and faithful explanations. \n- However, the paper does not include experiments to support this claim. For example, for faithfulness, the authors should evaluate the saliency maps against human-annotated ROIs.\n- In addition, external benchmarks for hallucination and robustness are missing, which weakens the core anti-hallucination argument.\n\n4. The comparison with previous works should be improved. \n- Medical RL methods are not included.\n- The ablation study of the curriculum setting is missing.\n\n5. It is unclear how the saliency map 𝑆(𝐼,𝑡) is computed.\n\nRefs:\n\n[1] Absolute Zero: Reinforced Self-play Reasoning with Zero Data, ArXiv, 2025.\n\n[2] Learning to Reason without External Rewards, ArXiv, 2025.\n\n[3] Maximizing Confidence Alone Improves Reasoning, ArXiv, 2025.\n\n[4] Unsupervised Post-Training for Multi-Modal LLM Reasoning via GRPO, ArXiv, 2025.\n\n[5] Grounded Reinforcement Learning for Visual Reasoning, ArXiv, 2025.\n\n[6] X-VILA: Cross-Modality Alignment for Large Language Model, ArXiv, 2024.\n\n[7] Reinforced Cross-modal Alignment for Radiology Report Generation. ACL, 2022.",
"questions": "See weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-10T12:39:07",
"modification_date": "2025-11-12T11:19:00",
"review_url": "https://openreview.net/forum?id=32QQlzm9ft¬eId=yGSLiotx0x",
"license": "CC BY 4.0"
},
{
"id": "6Fb69CzXhL",
"forum": "32QQlzm9ft",
"review_number": 7,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4734/Reviewer_C48v",
"reviewer_name": "Reviewer_C48v",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "This paper proposed Reflex-Med as a new reinforcement learning framework for VLM. It proposed to use a CLIP model to compute (1) an image-text saliency map for fine-grained visual alignment reward, and (2) a text-text and text-image similarity score for cross-modal semantical reward. The paper has evaluated the proposed method on both in-domain and out-of-domain datasets and shows an improved overall performance against the existing baselines, demonstrating the effectiveness of the proposed method.",
"strengths": "1. The proposed method is proven to be effective via massive experiments. The evaluation includes as many as 7 datasets from different sources and different focuses. And the proposed method demonstrated a non-trivial overall improvement against a same-size baseline model with large-scale pre-training. This is still quite impressive, considering the simplicity of the proposed method (in a positive way).\n\n2. The proposed idea of using a frozen CLIP model for multi-modal reward is convincing. Different from a discrete accuracy reward or text-only reward, the proposed method takes the image input into consideration and measures the correlation between the model output and the image.\n\n3. The core code is provided in the supplement.",
"weaknesses": "First, and foremost, the paper has obviously modified the paper margin. Its bottom margin is increased, while the left and right margin is decreased. It is unclear whether the paper has gained or lost space from this modification, but I believe this is a clear violation of the conference paper requirement, which clearly describes the page margin. Given that, I think I have no choice but to reject this paper. Yet, I do have some more comments about the paper's weakness, listed below.\n\n1. While the experimental results are impressive, the paper seems to overstate its contribution, from the reviewer's point of view. The paper claims the proposed method can help avoid hallucination and attention-think drift, and further improve the faithfulness of the reasoning and explainability. \n\n However, the proposed rewards only rely on the text output (modal statement $t_1$), and it is computed via a stand-alone CLIP model. This leads to two problems: **(1)** Is the CLIP model as a judge reliable? All the proposed rewards are computed as semantic similarity in the CLIP model's embedding space, which could be error-prone from the first hand. The chosen BioMedCLIP is clearly not pre-trained for fine-grained text-image alignment, making the saliency map unreliable as well. There is also no fact-check or direct chain-of-thought quality assessment, which means the claim of avoiding hallucination is questionable. **(2)** The rewards are indirect quantities in the GRPO optimization, which means optimized rewards don't guarantee a better reasoning or explainability, but just higher semantic similarity between model output and input text-image pair. *Eventually, GRPO is optimizing the model in a direction that generates output more similar to the anchor text, rather than improving the reasoning.* This could be fine in terms of improving performance, but no proof or evaluation shows that this is helpful for explainability.\n\n2. The paper claims the proposed method is a **label-free** solution, but it is not that solid a point. Compared with all the baselines mentioned in the paper, the proposed method uses the same VQA data, where the ground-truth answer text is necessary. Of course, the proposed method does not need a fine-grained local corresponding map or heatmap, but none of the baselines or commonly used methods need these additional annotations. To better validate this point, one may want to compare with a baseline that requires such annotation.\n\n3. The proposed method also claims it can improve the visual grounding capability. However, from the limited visual example in Figure 6, it is not obvious that the model is really capable of visual grounding. Figure 3 looks nice, but it is the saliency map for the CLIP model, rather than the actual attention of the VLM. Moreover, optimizing the VFR reward only improves the quality of the saliency map, but not the internal attention of the VLM. Providing more visual examples and reasoning results could help answer this question.\n\n4. It might just be the problem of the reviewer, but it would be great if some of the points could be better clarified. For example, how is the saliency map computed? Also, it will be much easier to follow the paper if the frozen CLIP reward model could be clarified earlier in the paper, rather than just using a vague description.",
"questions": "1. Can you provide some more visualization, like Figure 3 and Figure 6? The reviewer is very interested in the quality of the visual grounding for both the CLIP model and the final VLM.\n\n2. When computing the text embedding for the model output, does it include the thinking part of the output? Or is it just about the answer part? \n\n3. Also, the reviewer wonders what will happen when the question asks about questions related to global information, e.g., imaging modality, and how the grounding reward will be helpful in this case. Also, for a yes/no question, if the question itself is wrong, what will happen?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-08T12:26:19",
"modification_date": "2025-11-12T11:19:00",
"review_url": "https://openreview.net/forum?id=32QQlzm9ft¬eId=6Fb69CzXhL",
"license": "CC BY 4.0"
},
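Two of the reviews above ask how the text-conditioned saliency map $S(I,t)$ is computed, and the paper reportedly does not say. For illustration only, one plausible gradient-based variant is sketched below; the unit-norm encoder outputs and the quantile binarization (a saliency-quantile hyperparameter is mentioned in a later review) are assumptions, not the paper's method.

```python
import torch

def text_conditioned_saliency(image, image_encoder, text_emb, q=0.9):
    # Gradient of the image-text cosine similarity w.r.t. the pixels,
    # aggregated over channels, then binarized at the q-th quantile.
    img = image.clone().requires_grad_(True)          # (3, H, W)
    sim = torch.nn.functional.cosine_similarity(
        image_encoder(img.unsqueeze(0)), text_emb.unsqueeze(0), dim=-1)
    sim.sum().backward()
    sal = img.grad.abs().sum(dim=0)                   # (H, W) saliency
    return sal >= torch.quantile(sal.flatten(), q)    # binary mask
```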
{
"id": "lC3li99ifa",
"forum": "32QQlzm9ft",
"review_number": 6,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4734/Reviewer_X1rV",
"reviewer_name": "Reviewer_X1rV",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "Motivation: medical VQA models often give plausible answers without image-grounded reasoning (“answer-right, look-wrong”), and existing explainability needs scarce rationale labels.\n\nProposal: REFLEX-Med, a GRPO-based RL framework with a Visual Fidelity Reward that aligns text-conditioned saliency with an anchor from the gold answer and a Bi-modal Provenance Reward that enforces text–text and image–text agreement using a frozen VLM.\n\nResults: consistent gains over vanilla GRPO on VQA-RAD, SLAKE, and PathVQA, improved cross-modality transfer, and qualitatively tighter attention maps, with ablations showing both rewards matter.",
"strengths": "1. The paper addresses a timely and important problem in medical vision–language modeling, namely improving answer grounding without extra process supervision.\n\n2. VFR optimizes IoU between text-conditioned saliency masks from a frozen medical VLM, and BPR enforces text–text and image–text agreement through explicit thresholds. The pipeline is straightforward to implement and uses only answer labels, avoiding rationales, region annotations, or segmentations by deriving anchors and embeddings from the gold answers.\n\n3. Declarativizing questions and answers into canonical statements yields a single interface for computing saliency and embeddings, which unifies close ended and open ended VQA under one policy.\n\n4. The paper reports results across multiple medical VQA benchmarks and modalities with in-domain and out-of-domain tests, includes cross-modality transfer analyses, and provides ablations that isolate the contribution of VFR and BPR.",
"weaknesses": "1. Faithfulness is assessed through a single frozen medical VLM judge for both saliency and embeddings. Improvements could reflect increased agreement with that judge rather than truthfulness to the image. There is no external grounding metric or human assessment of localization to break this circularity.\n\n2. The saliency maps come from the judge without calibration. IoU between two unvalidated masks might not correlate with clinical localization quality. The paper lacks any quantitative localization benchmark or sanity checks on the saliency mechanism.\n\n3. BLEU, ROUGE, and BERTScore are known to be poorly aligned with clinical correctness in free-form medical text. Without clinically grounded scoring or exactness checks on key entities, the reported open-ended gains may overstate clinical utility.",
"questions": "1. Since VFR and BPR use a single frozen judge for both saliency and embeddings, the policy may align to that judge rather than the image and evaluation can become circular. How do you demonstrate that the gains reflect real grounding? Do results persist when you replace the judge with a different model after training?\n\n2. The method uses fixed hyperparameters for the saliency quantile and thresholds. A robustness analysis to these choices would improve the contribution, as this could affect learning dynamics and reported gains.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-07T15:04:32",
"modification_date": "2025-11-12T11:19:01",
"review_url": "https://openreview.net/forum?id=32QQlzm9ft¬eId=lC3li99ifa",
"license": "CC BY 4.0"
},
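Several of these reviews probe the claim that indicator rewards "stabilize group-standardized advantages." For reference, a minimal sketch of the GRPO-style group normalization (standard GRPO practice; the epsilon and shapes are illustrative, not the authors' exact code):

```python
import torch

def group_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # GRPO-style advantages: standardize rewards within the group of G
    # rollouts sampled for one prompt. rewards: shape (G,)
    return (rewards - rewards.mean()) / (rewards.std() + eps)
```

With indicator rewards in {0, 1}, a near-unanimous group has a tiny standard deviation, so advantages can blow up unless the epsilon (or reward mixing) tempers them; this is one concrete reading of the contested stabilization claim.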
{
"id": "hKztxFWSVa",
"forum": "32QQlzm9ft",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4734/Reviewer_cpQL",
"reviewer_name": "Reviewer_cpQL",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "The authors propose to use GRPO to improve the visual grounding and cross-modal provenance (without labels for these) of medical MLLMs. They evaluate their method on the medical VQA task, showing improved answer quality and faithfulness, claiming reduced hallucination and attention drift. Experiments on cross-modal (medical image modality) and zero-shot generalization are also presented.",
"strengths": "1)Provides label label-free framework for improving visual grounding and provenance, thereby trying to reduce hallucination in large medical VLMs.\n2)The framework shows good results in answer utility, cross-modal, and zero-shot performance and could be easily transferable to different backbones and settings.",
"weaknesses": "1)The presentation of the paper could have been much better. It is hard to follow the text for various reasons. Some are listed below:\n a) The density of custom terminology is high; the core ideas are obscured by the constant use of these terms.\n b)The structure of the paper could be improved so that the reader can follow through easily without it being convoluted for no reason.\nc)Some of the mathematical notations are not defined. (Ex: c, G in line 231, etc.)\nd) More verbose captions for some figures (Figure 3, 6, 7) could help better understand the figure on its own. \ne)Redundant description of some of the techniques/processes of the framework throughout the paper. \n2)The paper claims to resist “attention-think drift” without providing any substantial evaluation. \n3)The paper claims about the reasoning capabilities of their framework; they briefly analyze this in a subsection through the <think> component in their model’s responses, the details of evaluating this <think> component quantitatively, which can show the reasoning of the model, are not presented. \n4)Some of the recent and relevant paper that employs GRPO for medical reasoning have been mentioned by the paper in the related work section (MedVLM-R1, MedReason), but they were not used as baselines by the paper. Justification as to why not use them as baselines was also not provided.",
"questions": "1)The paper uses many thresholding parameters. How sensitive is the framework to the selection of these parameters? Was there any such study performed?.\n2) Is there any analysis of the sensitivity of the framework to the choice and quality of the frozen judge?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T21:39:22",
"modification_date": "2025-11-12T11:19:01",
"review_url": "https://openreview.net/forum?id=32QQlzm9ft¬eId=hKztxFWSVa",
"license": "CC BY 4.0"
},
{
"id": "Ufn74bb4Up",
"forum": "32QQlzm9ft",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4734/Reviewer_JQbZ",
"reviewer_name": "Reviewer_JQbZ",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces REFLEX-Med, a RL framework designed to improve the explainability of medical VLM without relying on costly human-annotated rationales. The core idea is to instantiate \"label-free explainability\" through two verifiable prerequisites: faithful visual grounding (where the model looks) and bi-directional cross-modal provenance (why it looks there).\n\nTo achieve this, the authors propose two novel reward signals computed by a frozen, pre-trained medical VLM acting as a \"judge\":\n1. A **Visual Fidelity Reward ($R_{VFR}$)**, which encourages the policy's attention (saliency map) to align with the attention of an \"anchor\" (ground-truth) statement.\n2. A **Bi-modal Provenance Reward ($R_{BPR}$)**, which enforces semantic consistency in the embedding space between the model's generated answer, the anchor text, and the input image.",
"strengths": "1. **Originality & Significance:** The paper proposes a novel and important shift in medical VLM explainability, moving from subjective (and annotation-hungry) rationales to objective, \"label-free\" verifiable criteria. The core concept of using a frozen judge to reward faithful grounding ($R_{VFR}$) and semantic provenance ($R_{BPR}$) is a creative and promising approach.\n2. **Problem Formulation:** The work correctly identifies a critical failure mode of current VLMs (\"attention-think drift\") and proposes a concrete mechanism to penalize it. The goal of instantiating \"process verifiability\" (where and why the model looked) is highly relevant for high-stakes domains like medicine.\n3. **Methodology:** The design of the two reward functions is intuitive and directly maps to the stated goals. Using a frozen judge to provide a stationary reward signal and prevent reward hacking is a sound design choice within an RL framework.",
"weaknesses": "1. **Unexplained Catastrophic Performance on PathVQA:** The most glaring weakness is the model's performance on the PathVQA dataset, as shown in Table 1. The Qwen2.5-VL (SFT) baseline achieves 87.8% (c) and 79.3% (o). In contrast, REFLEX-Med-7B scores 80.9% (closed) and a shockingly low 30.3% (o). This is a massive performance degradation on an in-domain dataset. The paper fails to acknowledge, analyze, or explain this result. This strongly suggests that the proposed reward framework may be fundamentally flawed or, at best, highly detrimental to specific modalities like pathology. This single result undermines the paper's primary claims of improving answer quality.\n2. **Unvalidated Reward Signal Quality:** The methodology critically depends on the frozen BioMedCLIP judge providing accurate saliency maps and meaningful embeddings. The paper provides zero evidence that BioMedCLIP is a reliable judge, especially for saliency. Medical grounding models are notoriously unreliable outside of the domain they were trained on (e.g., CXR). If the judge produces low-quality masks for CT or pathology images, the $R_{VFR}$ signal is optimizing the policy for noise, which would explain the poor performance on PathVQA. The authors must validate the judge's performance before using it as a source of truth.\n3. **Outdated and Limited Experimental Setup:**\n * **Baselines:** The baselines are missing more recent, state-of-the-art VLM, such as those from the InternVL series or the newer Qwen-VL models and InternVL.\n * **Judge Model:** BioMedCLIP is an outdated choice. Newer, more powerful medical foundation models (e.g., BIOMEDICA) trained on far larger and more diverse datasets exist and would almost certainly provide a more reliable reward signal.\n * **Scale:** The experiments are limited to 3B and 7B models.\n4. **Marginal Improvements in Ablations:** As seen in Figure 4, the improvements from adding the $R_{VFR}$ and $R_{BPR}$ rewards are often minimal. For example, in the rightmost panel (Train on X-Ray), the test accuracy on MRI for the full model is 93.0%, while the ablation (w/o R-VFR + R-BPR) is 92.7%. A 0.3% gain is not a compelling argument for the added complexity of the method, especially given the catastrophic failure on PathVQA. The low gains on MRI data also raise questions about the judge's effectiveness on this modality.",
"questions": "1. Can you please provide a detailed explanation for the massive performance drop on the PathVQA dataset (Table 1) when applying REFLEX-Med, compared to the simple SFT baseline? Why does your method perform so much worse (87.8% -> 80.9% c, 79.3% -> 30.3% o)?\n2. How did you validate the quality of the saliency maps generated by the frozen BioMedCLIP judge? Can you provide quantitative or qualitative evidence that these masks are accurate, especially for the non-CXR modalities (PathVQA, CT, MRI)? Is it possible that your model is simply learning to match a *bad* set of saliency maps?\n3. The improvements in the cross-modal ablation (Figure 4) are very marginal, especially for the MRI modality (e.g., 0.3% gain in the right panel). Why do you think the gains are so small? Does this suggest the judge model is ineffective on MRI, or that the rewards themselves have limited impact?\n4. Could you clarify if the RL implementation is online or offline? The use of GRPO and sampling from the policy suggests an online setup, but this is not explicitly stated.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T19:50:28",
"modification_date": "2025-11-12T11:19:01",
"review_url": "https://openreview.net/forum?id=32QQlzm9ft¬eId=Ufn74bb4Up",
"license": "CC BY 4.0"
}
] |
|
wWkyL8D9xd
|
https://openreview.net/forum?id=wWkyL8D9xd
|
FastFlow: Accelerating The Generative Flow Matching Models with Bandit Inference
| 5.5
| 3.5
|
[
4,
6,
6,
6
] |
[
4,
3,
3,
4
] | 4
|
[
"generative modelling",
"faster inference."
] |
Flow-matching models deliver state-of-the-art fidelity in image and video generation, but the inherent sequential denoising process makes their inference slow. Existing acceleration methods like distillation, trajectory truncation, and consistency approaches are static, require retraining, and often fail to generalize across tasks. We propose FastFlow, a plug-and-play adaptive inference framework that accelerates generation in flow matching models. FastFlow identifies denoising steps that produce only minor adjustments to the denoising path and approximates them without using the full neural network models used for velocity predictions. The approximation utilizes finite-difference velocity estimates from prior predictions to efficiently extrapolate future states, enabling faster advancements along the denoising path at zero compute cost. This enables skipping computation at intermediate steps. We model the decision of how many steps to safely skip before requiring a full model computation as a multi-armed bandit problem. The bandit learns the optimal skips to balance speed with performance. FastFlow integrates seamlessly with existing pipelines and generalizes across image generation, video generation, and editing tasks. Experiments demonstrate a speedup of over $2.6\times$ while maintaining high-quality outputs.
|
Adaptive inference method for accelerating flow matching based visual generation.
|
generative models
|
https://openreview.net/pdf?id=wWkyL8D9xd
| 2025-09-20T18:17:51
| 4
|
[
{
"id": "AGajkCDJso",
"forum": "wWkyL8D9xd",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25044/Reviewer_D8Ff",
"reviewer_name": "Reviewer_D8Ff",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes a training free, plug and play acceleration scheme for flow matching Euler sampler at inference time using multi-arm bandit algorithms to choose the most relevant steps given a low-budget step sizes. More specifically, FastFlow aims to skip a variable number of intermediate time steps and approximate the missing velocities using a finite difference in time, and the choice of how many steps to skip is cast as a multiarmed bandit with reward. A theorem gives a bound on the terminal deviation between the approximated and full trajectories under smoothness assumptions, with uniform step size and a set $\\mathcal{S}$ of skipped steps. Experiments on image generation, image editing, and video generation claim speedups up to about 2.6 times while maintaining GenEval and CLIP based IQA metrics near full sampling. Qualitative examples are shown for BAGEL and FLUX models and HunyuanVideo.",
"strengths": "I think the most notable point is that the method is training-free and easy to integrate into existing flow matching pipelines. The speedup figures are plausible given the cost model of flow samplers where every velocity evaluation dominates wall time. I also like recasting the step-size selection as a bandit objective, whichh directly encodes the speed-accuracy tradeoff",
"weaknesses": "- I think the novelty is thinner than the paper suggests. The velocity extrapolator collapses to a two step Adams Bashforth style predictor in uniform time-step. The work should at least acknowledge this equivalence and position itself relative to, for example, PNDM [1], and other linear multistep sampling strategies already common in diffusion code bases. Empirically, a direct comparison to a simple two step predictor that still evaluates the model at checkpoints would be informative. Moreover, the statement that most alternative accelerators require retraining is not fully accurate. TeaCache and DeepCache are training free, and the recent adaptive skipping line is also training free in some variants. These should be acknowledged and compared.\n\n- The theory is reassuring but optimistic in scale. For example, with $T=50$ and $∣S∣=25$, the error upper bound term $O(|S|/T^3)$ suggests very small terminal deviations unless the bounding constants are large. However, in the empirical evaluation, the experiments do show quality drop at aggressive skip levels, so either the constants are large or the bound does not capture the dominant error channel. A local error monitor beyond the velocity mismatch would be more principled, for example, an embedded predictor-corrector or curvature proxy, as in adaptive time-stepping literature.\n\n- The experimental section omits several highly related baselines. AdaptiveDiffusion and AdaDiff are the most obvious, but there are also solver learning baselines such as Bespoke Solvers and S4S that reduce NFE without training the base generator. A comparison would help position FastFlow on the quality versus NFE Pareto.\n\n\n[1] Luping Liu, Yi Ren, Zhijie Lin, Zhou Zhao (2022); Pseudo Numerical Methods for Diffusion Models on Manifolds, ICLR 2022.\n\n[2] Neta Shaul, Juan Perez, Ricky T. Q. Chen, Ali Thabet, Albert Pumarola, Yaron Lipman (2023), Bespoke Solvers for Generative Flow Models, ICLR 2024.",
"questions": "- Please quantify compute precisely. Report average number of model calls per sample and the distribution of skip lengths $\\alpha_t$.\n- Please compare against AdaptiveDiffusion and AdaDiff under the same backbones and prompts, and include DeepCache on image tasks and TeaCache on both image and video. Use the same target speed levels and report NFE matched comparisons.\n- See also other remarks in Weaknesses on the theoretical bound.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T11:33:16",
"modification_date": "2025-11-12T18:28:15",
"review_url": "https://openreview.net/forum?id=wWkyL8D9xd¬eId=AGajkCDJso",
"license": "CC BY 4.0"
},
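A minimal sketch of the skip-and-extrapolate Euler loop the review above analyzes; the linear velocity extrapolation, the uniform grid, and the `skip` set are assumptions drawn from the review's Adams-Bashforth comparison, not FastFlow's released code:

```python
import torch

def euler_with_skips(x, v_model, ts, skip):
    # Euler sampler in which steps listed in `skip` reuse a finite-difference
    # extrapolation of the velocity instead of calling the network.
    v_prev, v_curr = None, None
    for i in range(len(ts) - 1):
        dt = ts[i + 1] - ts[i]
        if i in skip and v_prev is not None:
            # Linear extrapolation from the last two stored velocities; on a
            # uniform grid this mirrors the two-step Adams-Bashforth-style
            # predictor the review points out.
            v = 2 * v_curr - v_prev
        else:
            v = v_model(x, ts[i])  # full network evaluation
        v_prev, v_curr = v_curr, v
        x = x + dt * v
    return x
```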
{
"id": "xHNKVgfn43",
"forum": "wWkyL8D9xd",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25044/Reviewer_qEHL",
"reviewer_name": "Reviewer_qEHL",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes FastFlow, a plug-and-play adaptive inference framework that accelerates generation in flow matching models. FastFlow identifies denoising steps that produce only minor adjustments to the denoising path and approximates them without using the full neural network models used for velocity predictions. The approximation utilizes finite-difference velocity estimates from prior predictions to efficiently extrapolate future states, enabling faster advancements along the denoising path at zero compute cost.",
"strengths": "- the motivation to accelerate flow-matching models is reasonable.\n- the proposed method is plug-and-play and introduce negalectable extra costs",
"weaknesses": "- missing comparisons on ImageNet 256",
"questions": "can the method combined with modern fast samplers instead of Euler?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T17:19:21",
"modification_date": "2025-11-12T18:28:15",
"review_url": "https://openreview.net/forum?id=wWkyL8D9xd¬eId=xHNKVgfn43",
"license": "CC BY 4.0"
},
{
"id": "MIl0tkU0JZ",
"forum": "wWkyL8D9xd",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25044/Reviewer_Tpv2",
"reviewer_name": "Reviewer_Tpv2",
"rating": 6,
"confidence": 3,
"soundness": 4,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces FastFlow, a plug-and-play adaptive inference framework to accelerate generative flow-matching models by skipping redundant denoising steps. The key insight is that flow-matching generative trajectories are often approximately linear, so intermediate states can be extrapolated cheaply instead of recomputing every step. FastFlow uses a finite-difference (Taylor series) approximation of the model’s velocity field to predict future states, allowing the method to advance multiple steps at zero neural network cost. Crucially, the framework employs a multi-armed bandit (MAB) at each timestep to adaptively decide how many steps to skip before the next full model evaluation. The bandit’s reward balances two objectives: (i) speed (skipping more steps) and (ii) accuracy (penalizing deviation from the true model trajectory). By learning this trade-off online per sample, FastFlow dynamically skips only those steps that would have minimal effect on final output. The approach is model-agnostic (no retraining or extra networks required) and integrates seamlessly into existing flow-matching pipelines.\n\nThe paper provides a theoretical bound on the error induced by skipping steps, formulates the skip decision as an online bandit problem, and demonstrates various experiments on text-to-image generation, image editing, and text-to-video generation. Empirically, FastFlow achieves over 2.6× speedup in inference while maintaining output quality comparable to the full model across these tasks. This represents a significant improvement over prior static acceleration methods, which often require retraining or sacrifice fidelity.\n\nThough this method is effective when multiple steps are necessary, there are already many few-step or even one-step models (e.g., distillation or shortcut models) available today, so I am not sure whether this method is truly useful in practical scenarios.",
"strengths": "Unlike static acceleration schemes, FastFlow adapts to each sample’s complexity. The multi-armed bandit dynamically decides per timestep how many steps to skip, meaning simpler cases automatically run faster while complex cases get more compute. This adaptive inference is novel and ensures no one-size-fits-all schedule, leading to greater robustness across diverse inputs.\n\nThe proposed framework is model-agnostic and plug-and-play, so it can be applied to existing pretrained flow-matching models without any retraining or fine-tuning. There’s no need for distilling a new model or training an auxiliary network, which makes the method very practical. It can be integrated into current pipelines with minimal effort, offering immediate speed benefits.",
"weaknesses": "It requires some exploration to learn the optimal skipping policy. You acknowledge that the speedup may not fully materialize in the very first steps or first few samples due to this exploration phase. In practice, you mitigate this by seeding the bandit with one full generation, but if a user only generates a handful of samples, the adaptive policy might not have time to reach peak efficiency. In scenarios with very few inference runs, the benefit of FastFlow could be less pronounced.\n\nThe effectiveness of FastFlow rests on the assumption that the generative trajectories are locally smooth/linear enough to be extrapolated. While flow-matching models do encourage linear paths, there might be cases of highly non-linear or complex dynamics where the Taylor approximation could be less accurate. Therefore, essentially, FastFlow may be less effective if the model’s velocity field changes rapidly in unpredictable ways.\n\nTable 1 and Table 2 are overlapped. Please adjust the margin via \\vspace.\n\nThis paper contains some typos and grammatical issues. Here are the ones I found just by skimming through it:\n* L67: a a theoretical -> a theoretical\n* L95: We setup -> We set up\n* L170: a static criteria -> a static criterion\n* L364: is applied is as -> is applied as\n* L388: an the -> and the\n* L490: it’s content -> its content",
"questions": "How many samples or iterations does it typically take for the bandit policy to stabilize? In your experiments, after seeding with one full generation, does FastFlow achieve near-optimal skipping immediately on the next sample, or does it require a few generations to fully adapt? Clarifying this can help understand use-cases. Any insight into how the bandit’s learning curve looks would be helpful.\n\nDid you observe any failure cases or significantly reduced speedups for particular input types or prompts that might cause non-linear dynamics? Analyzing a case where FastFlow nearly defaults to the full model would illustrate its limits and robustness.\n\nTheorem 3.1 gives an error bound $O(|S|/T^3)$. Did you empirically measure how close the practical error comes to this bound? In other words, is the bound reasonably tight or very conservative? Some intuition or experiment on how the final output error grows with number of skips in practice.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T12:44:34",
"modification_date": "2025-11-12T18:28:16",
"review_url": "https://openreview.net/forum?id=wWkyL8D9xd¬eId=MIl0tkU0JZ",
"license": "CC BY 4.0"
},
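To illustrate the bandit layer that the surrounding reviews probe (exploration warm-up, the reward regularization weight $\mu$), here is a generic epsilon-greedy sketch over skip lengths; the arm set, epsilon, and reward shape are illustrative assumptions, not the paper's exact formulation:

```python
import random

class SkipBandit:
    # Epsilon-greedy bandit over candidate skip lengths. The reward trades
    # off speed (longer skips) against measured trajectory deviation.
    def __init__(self, arms=(0, 1, 2, 3), eps=0.1, mu=1.0):
        self.arms, self.eps, self.mu = arms, eps, mu
        self.counts = {a: 0 for a in arms}
        self.values = {a: 0.0 for a in arms}

    def select(self):
        if random.random() < self.eps:
            return random.choice(self.arms)         # explore
        return max(self.arms, key=self.values.get)  # exploit

    def update(self, arm, deviation):
        # Illustrative reward: steps saved minus a mu-weighted penalty for
        # deviating from the full-model trajectory at the next checkpoint.
        r = arm - self.mu * deviation
        self.counts[arm] += 1
        self.values[arm] += (r - self.values[arm]) / self.counts[arm]
```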
{
"id": "k5DCUl64mo",
"forum": "wWkyL8D9xd",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25044/Reviewer_r4c2",
"reviewer_name": "Reviewer_r4c2",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes FastFlow, a plug-and-play adaptive inference framework that accelerates flow-matching generative models without retraining. The key idea is to approximate redundant denoising steps using a finite-difference Taylor expansion of the model’s velocity field, thereby skipping expensive neural evaluations when the dynamics are locally smooth. To decide how many steps can be safely skipped, FastFlow formulates the process as a multi-armed bandit (MAB) problem that adaptively balances efficiency and fidelity during sampling. Overall, FastFlow achieves acceleration while maintaining perceptual and semantic fidelity across multiple generative domains.",
"strengths": "The paper is clearly written and easy to follow, with a well-motivated goal: accelerating flow-matching models to benefit the broader generative modeling ecosystem. I especially appreciate the inclusion of image editing, where real-time interaction is critical. While reusing the previous step’s velocity is not new, bandit-driven policy for adaptive step skipping is novel in this context to my knowledge and is presented in a concrete, convincing way.",
"weaknesses": "The proposed method relies on heuristic parameters such as $p$ for velocity approximation and $\\mu$ for the reward regularization term. It would be helpful to clarify how these parameters are chosen in practice and whether the method is robust to variations in their values.\n\nFig. 2–3 consistently show that FastFlow has higher latency than TeaCache (for comparable compute). Is this overhead coming from the multi-armed bandits? It seems that the overhead is not negligible. Can the authors clarify this behavior? Similarly, the result for FastFlow-10 in image generation (Table 1) shows only minor gains compared to Full-10. Additionally, the authors mention that the baselines follow the official hyperparameters. What are these parameters, and could this violate an apples-to-apples comparison? Overall, my concern is that the experiments either show marginal gains or may have presented in an unfair manner. \n\nLastly, the paper would be strengthened by including an ablation study on the contribution of the multi-armed bandit algorithm. For instance, comparing FastFlow against simpler alternatives such as uniform or piecewise-constant skipping schedules.",
"questions": "How does the method perform when coupled with quantization method? Also can this be utilized in flow models after reflow training? Ideally, reflow models have straight, non-crossing paths where the proposed method might not be as effective. I supposed experiment presented in Figure 4 using rectified flow models FLUX schnell can show a different trend.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-24T14:04:25",
"modification_date": "2025-11-12T18:28:17",
"review_url": "https://openreview.net/forum?id=wWkyL8D9xd¬eId=k5DCUl64mo",
"license": "CC BY 4.0"
}
] |
vv8EcCoBfr
|
https://openreview.net/forum?id=vv8EcCoBfr
|
Bilateral Information-aware Test-time Adaptation for Vision-Language Models
| 4.333333
| 4.166667
|
[
6,
6,
4,
4,
4,
2
] |
[
3,
5,
5,
4,
4,
4
] | 6
|
[
"Test-time Adaptation",
"Vision Language Model"
] |
Test-time adaptation (TTA) fine-tunes models using new data encountered during inference, which enables vision-language models to handle test data with covariate shifts. Unlike training-time adaptation, TTA does not require a test-distributed validation set or consider the worst-case distribution within a given tolerance. However, previous methods primarily focused on adaptation-objective design, while the data tend to be fully utilized or simply filtered through a fixed low-entropy selection criterion. In this paper, we analyze the weaknesses of previous selection criteria and find that selecting only a fixed proportion of low-entropy samples fails to ensure optimal performance across various datasets and can lead the model to become over-confident in wrongly classified samples, showing unexpected overfitting to atypical features and compromising effective adaptation. To improve upon them, we propose Bilateral Information-aware Test-Time Adaptation (BITTA), which simultaneously leverages two distinct parts of the test inputs during adaptation. Specifically, a dynamic proportion of low-entropy samples is used to learn the core representation under covariate shifts, while high-entropy samples are adopted to unlearn atypical features. This dual approach prevents the model from undesired memorization and ensures optimal performance across a broad range of settings. Comprehensive experiments validate the effectiveness on various datasets and model architectures.
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
|
https://openreview.net/pdf?id=vv8EcCoBfr
| 2025-09-17T09:29:50
| 6
|
[
{
"id": "dSxbk5YjnL",
"forum": "vv8EcCoBfr",
"review_number": 6,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8180/Reviewer_kaur",
"reviewer_name": "Reviewer_kaur",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The authors argue that previous methods have focused on improving the objective functions and have ignored the dataset perspective of test time adaptation. The standard paradigm revolves around picking up the most confident samples and applying TTA methods on these. This relies on an assumption that all low entropy samples are correct predictions. However, this can lead to memorization of atypical features as shown in figure 2 of their motivation where model becomes confident about its wrong predictions. Authors propose an interesting work around to this, they utilize high entropy samples and maximize the entropy on these samples. This leads to unlearning of such atypical features leading to higher performance across the board. Lastly, to improve the training process, authors propose that the optimal percentage for low entropy samples vary for each dataset.",
"strengths": "- Paper introduce a novel data centric perspective to the test time adaptation. \n- paper is well motivated. \n- presentation is well done. \n- Results are promising.",
"weaknesses": "It is unclear which features are atypical. could it also be because of high confident incorrect predictions being part of the training mix? maybe some sort of attention map visualizations would be nice here as well. \nHow does it compare to other regularization techniques such as weight decay.",
"questions": "- Comparison with other regularization techniques. \n- visualization of attention maps over low confident misclassifications before TTA and high confident misclassifications after TTA.\n- You could have high entropy correctly classified samples. Does something like this exist and if yes, then does it impact your method negatively?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-11T01:15:04",
"modification_date": "2025-11-12T12:02:18",
"review_url": "https://openreview.net/forum?id=vv8EcCoBfr¬eId=dSxbk5YjnL",
"license": "CC BY 4.0"
},
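A minimal sketch of the bilateral objective discussed throughout these reviews: entropy minimization on a (dynamically chosen) low-entropy fraction of the batch and entropy maximization on a small high-entropy fraction. The fixed 0.1 high-entropy ratio echoes the value questioned in a review below; `low_ratio` and the weight `lam` are illustrative placeholders, not the paper's exact settings:

```python
import torch

def bilateral_tta_loss(logits, low_ratio=0.5, high_ratio=0.1, lam=1.0):
    # Minimize entropy on the most confident fraction of the batch (learn),
    # maximize it on the least confident fraction (unlearn).
    probs = logits.softmax(dim=-1)
    ent = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)  # per-sample entropy
    order = ent.argsort()
    n = logits.size(0)
    low = order[: max(1, int(low_ratio * n))]                 # low-entropy samples
    high = order[n - max(1, int(high_ratio * n)):]            # high-entropy samples
    return ent[low].mean() - lam * ent[high].mean()
```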
{
"id": "mIOtRgqnmb",
"forum": "vv8EcCoBfr",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8180/Reviewer_c2Yw",
"reviewer_name": "Reviewer_c2Yw",
"rating": 6,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper addresses the problem in test-time adaptation of vision–language models whereby updating solely on low-entropy samples leads to inadvertent overfitting and sample-selection ratios that lack cross-domain robustness. The proposed method minimizes entropy for low-entropy samples while simultaneously strengthening image–text alignment and inter-class separability, and maximizes predictive entropy for high-entropy samples to suppress memorization of atypical features, thereby striking a balance between adaptation and robustness. The authors also provide theoretical support regarding the separability of hard samples and coverage guarantees for proportion prediction, and demonstrate consistent performance gains on CIFAR-10/100-C, ImageNet-C, and multiple cross-domain datasets, as well as in combination with methods such as TPT, CTPT, and BAT.",
"strengths": "1. Proposes a bilateral mechanism and uses dynamic proportion estimation to mitigate overfitting and sensitivity to sample selection.\n2. Provides an analysis of the separability of hard samples and coverage guarantees for proportion prediction, enhancing the method’s interpretability and robustness.\n3. Achieves consistent gains on CIFAR-10/100-C, ImageNet-C, and multiple cross-domain datasets.",
"weaknesses": "The paper needs additional experiments to further demonstrate the effectiveness of the method.",
"questions": "1. How is the linear relationship between the dynamic low-entropy proportion and the number of classes fitted? Which data points are used, what is the goodness of fit, and are scatter plots with regression lines on CIFAR-10-C, CIFAR-100-C and ImageNet-C, together with a small-scale sensitivity check, included?\n2. Why is the high-entropy proportion fixed at 0.1? Is a brief sweep at 0.05, 0.10, 0.15, 0.20 on CIFAR-10-C and ImageNet-C (severity 5), reporting both Top-1 and ECE?\n3. In the component ablations, how much gain is attributed to low-entropy learning only, high-entropy unlearning only, and both together? Are the independent contributions of each component in the bilateral mechanism quantified?\n4. Does unlearning improve model uncertainty and risk control? Are ECE, NLL, and rejection AUROC reported based on existing outputs and compared with baselines?\n5. On which corruption types are the gains primarily concentrated? Are results broken down by distortion type provided, in addition to the main table, to clarify performance differences across corruptions?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-10T02:55:00",
"modification_date": "2025-11-12T12:02:19",
"review_url": "https://openreview.net/forum?id=vv8EcCoBfr¬eId=mIOtRgqnmb",
"license": "CC BY 4.0"
},
{
"id": "Mvs42rP0Yo",
"forum": "vv8EcCoBfr",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8180/Reviewer_obJ8",
"reviewer_name": "Reviewer_obJ8",
"rating": 4,
"confidence": 5,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes BITTA, a method for test-time adaptation (TTA) that simultaneously performs learning on low-entropy (confident) samples and unlearning on high-entropy (uncertain) samples. Empirical evaluation is performed on several corruption and domain shift benchmarks, showing small but consistent improvements over prior TTA baselines.",
"strengths": "S1. **Conceptually simple but reasonable intuition.** The idea of balancing learn and unlearn at test time is intuitive, and connects nicely to existing observations that confident-only updates lead to confirmation bias. \n\nS2. **Dynamic adaptation ratio.** Instead of fixing the proportion of confident vs. uncertain samples, BITTA adjusts it adaptively based on batch entropy values. This is a lightweight heuristic that makes the algorithm more flexible. \n\nS3. **Well-written and structured.** The methodology section is easy to follow, and the figures illustrating the bilateral update flow are clear.",
"weaknesses": "W1. **Marginal improvement magnitude.** Most gains over strong baselines are within +0.3--1.0 percent points, often within the variance range reported in prior TTA studies. It would be great if the authors report the performance of average and variance of each experiment with multiple times. \n\nW2. **Comparison of previous unlearning methods.** The paper states that high-entropy samples trigger *unlearning* to mitigate the overconfidence. I believe that there are a number of research in terms of unlearning works, so it is important to compare them with the proposed method that authors proposed in this paper. Sorry for not referring several unlearning methods to compare due to lack of expertise about unlearning domain. \n\nW3. **No computational analysis.** BITTA claims to be lightweight, but no runtime or memory cost comparison is reported against other baselines. \n\nW4. **Incremental novelty.** I agree that the authors address an important problem in the TTA domain. However, the proposed method is conceptually simple and lacks substantial novelty. If the approach had demonstrated a larger performance gap over prior TTA methods, its impact could have been justified despite the simplicity. Unfortunately, the observed improvements are rather marginal (as described in W1), which limits the overall significance of the contribution.",
"questions": "BITTA is a clean and well-written paper that presents a modest yet reasonable enhancement to entropy-based test-time adaptation. The idea of learning from confident samples and unlearning uncertain ones is conceptually sound and practically implementable. However, the contribution remains incremental, and the performance gains are minor. I recommend **rejection** at this stage, but I believe the idea has potential. If the authors further refine their method and demonstrate a larger improvement over existing baselines, the work could be strong enough for acceptance in a future submission.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T10:57:50",
"modification_date": "2025-11-12T12:02:19",
"review_url": "https://openreview.net/forum?id=vv8EcCoBfr¬eId=Mvs42rP0Yo",
"license": "CC BY 4.0"
},
{
"id": "yNcj08Hsda",
"forum": "vv8EcCoBfr",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8180/Reviewer_q3aD",
"reviewer_name": "Reviewer_q3aD",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The article addresses the problem of test-time adaptation of a vision-language model given a stream of test data. In particular, the model selects samples with low entropy, assuming that the model's confidence reflects certainty in its predictions, and optimizes the model to further minimize entropy. Deviating from previous works, the approach selects samples of high entropy as well. For the latter, it is assumed that they have atypical features that need to be unlearned, something achieved by maximizing their entropy. The tradeoff between the two objectives aims to prevent overfitting and overconfidence while still learning from the stream of data. Experiments across domains and corruptions show the effectiveness of the approach (BITTA), outperforming recent competitors.",
"strengths": "1. The use of samples with high entropy/low confidence is sound and very interesting. It is interesting because it has not been explored by previous approaches, focusing mostly on learning from high-confidence samples. While it is unclear what type of atypical information is included in samples with high entropy, maximizing the entropy of the latter can still act as a regularizer for avoiding overconfidence, thus improving the results. \n\n2. The paper shows several analyses concerning multiple aspects of the approach, such as the impact of batch-size and update steps (Fig. 6, Fig. 8.a), hyperparameters (Fig. 8.b), selection module (Fig. 7), and changes in the entropy dynamic (Fig. 8.c). Moreover, experiments showcase the generality of the approach to other architectures/methods (e.g., Tab. 5 and Tab. 8). Overall, these analyses support the design choices and provide insights on the potential of the method, how the latter works, and the tradeoffs to keep in mind when applying it. \n\n3. The appendix contains several details regarding design choices (e.g., the threshold estimate module of Appendix E), and the supplementary material includes the code. All in all, these additions make the submission transparent and provide strong support for its reproducibility.",
"weaknesses": "1. The setting is very similar to that of Episodic TTA (e.g., [a,b,c]), where the model is updated as the stream of target data becomes available. Some of these competitors are missing in the experimental results, and including them would make the comparisons more comprehensive. \n\n2. Related to the previous point, these methods test with a batch size of 1 (i.e., [a,c]), assuming no priors on the batch constitution. On the other hand, BITTA is very sensitive to the batch-size (e.g., results of Fig. 8.a and the adaptive threshold of 294-306). Open questions are whether the model would be i) effective for extremely low batch sizes and ii) robust to non i.i.d. batches (e.g., samples of the same class, as in [d,e]). \n\n3. While it is intuitive that maximizing the entropy of low confident examples can both prevent overfitting and reduce overconfidence, the fact that high entropy samples share atypical features with low entropy ones (65-70) is not intuitive and not clarified with the qualitative examples of Fig. 3.a and H.3. It would be helpful to either clarify the meaning of atypical features or provide evidence of these shared spurious factors, or down weigh the related statements. \n\n4. Lines 270-274 indicate that minimizing entropy on high-confidence samples leads to overconfidence and maximizing entropy on low-confidence ones reduces overfitting. This is also suggested by Fig. 2, Fig. 3.b, and Fig. 8.c at the level of entropy. To further analyze the phenomenon of overconfidence, it could be interesting to show the expected calibration error [f], an analysis reported by related TTA works exploring overconfidence and potential solutions (e.g., [g,h]). \n\n5. While maximizing entropy is a good unlearning strategy, it would have been interesting to explore other alternatives (e.g., [i]) to justify this design choice. Note that different unlearning choices could lead to different effects in terms of overfitting and overconfidence. Moreover, related works do not discuss how the paper relates to the machine unlearning literature and related approaches that used unlearning for downstream tasks (e.g., debiasing [j]).\n\n\n**References** ([a,b,f] already in the manuscript):\n\n[a] Karmanov, Adilbek, et al. \"Efficient test-time adaptation of vision-language models.\" CVPR 2024.\\\n[b] Zhang, Ce, et al. \"Dual prototype evolving for test-time generalization of vision-language models.\" NeurIPS 2024.\\\n[c] Zhou, Lihua, et al. \"Bayesian test-time adaptation for vision-language models.\" CVPR 2025.\\\n[d] Gong, Taesik, et al. \"Note: Robust continual test-time adaptation against temporal correlation.\" NeurIPS 2022.\\\n[e] Niu, Shuaicheng, et al. \"Towards stable test-time adaptation in dynamic wild world.\" ICLR 2023.\\\n[f] Guo, Chuan, et al. \"On calibration of modern neural networks.\" ICML, 2017.\\\n[g] Yoon, Hee Suk, et al. \"C-TPT: Calibrated test-time prompt tuning for vision-language models via text-feature dispersion.\" ICLR 2024. \\\n[h] Farina, Matteo, et al. \"Frustratingly easy test-time adaptation of vision-language models.\" NeurIPS 2024.\\\n[i] Fan, Chongyu, et al. \"Salun: Empowering machine unlearning via gradient-based weight saliency in both image classification and generation.\" ICLR 2024.\\\n[j] Chen, Ruizhe, et al. \"Fast model debias with machine unlearning.\" NeurIPS 2023.",
"questions": "Following from the weaknesses above:\n1. How does the method compare with other TTA approaches working online?\n2. Is the method robust to the batch composition?\n3. Is it possible to clarify the meaning of atypical features?\n4. Is BITTA reducing the overconfidence of the model/improving its calibration?\n5. How does BITTA relate to approaches for machine unlearning?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T03:00:38",
"modification_date": "2025-11-12T12:02:19",
"review_url": "https://openreview.net/forum?id=vv8EcCoBfr¬eId=yNcj08Hsda",
"license": "CC BY 4.0"
},
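Since several of these reviews request calibration numbers, for reference a minimal expected calibration error computation in the sense of Guo et al. [f]; the equal-width binning and the bin count are common defaults, not choices made by the paper:

```python
import torch

def expected_calibration_error(confs, correct, n_bins=15):
    # Bin predictions by confidence; ECE is the bin-mass-weighted average
    # of |accuracy - mean confidence| over the bins.
    edges = torch.linspace(0, 1, n_bins + 1)
    ece = torch.zeros(())
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confs > lo) & (confs <= hi)
        if mask.any():
            gap = (correct[mask].float().mean() - confs[mask].mean()).abs()
            ece = ece + mask.float().mean() * gap
    return ece
```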
{
"id": "4SEinTlpMP",
"forum": "vv8EcCoBfr",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8180/Reviewer_uPAE",
"reviewer_name": "Reviewer_uPAE",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes BITTA, a bilateral information-aware TTA framework for VLMs (e.g., CLIP): low-entropy samples are used for learning (entropy minimization) and high-entropy samples for unlearning (entropy maximization), with a heuristic to dynamically set the low-entropy selection ratio. Experiments on CIFAR-10/100-C and ImageNet-C show consistent but modest gains and some compatibility with existing TTA methods.",
"strengths": "1. Clear identification of a failure mode of fixed low-entropy selection (overconfidence on errors).\n2. Simple, general plug-in that works with multiple TTA baselines and backbones.\n3. Sensible diagnostics (entropy dynamics, t-SNE) and reasonable ablations (batch size, steps, λ).\n3. Low computational overhead; easy to implement.",
"weaknesses": "1. Technical novelty is limited; the idea (minimize on confident, maximize on uncertain) is incremental, and the theory is high-level with strong assumptions.\n2. ImageNet generalization is under-evaluated: key variants (e.g., ImageNet-R/A/V2/Sketch) are not systematically included in the main results.\n3. Missing important baselines (e.g., DiffTPT [1], DMN-ZS [2]), weakening the empirical case.\n4. Gains are often modest with occasional regressions; clearer analysis of when it helps/hurts is needed.\n\n[1] Diverse data augmentation with diffusions for effective test-time prompt tuning\n[2] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models",
"questions": "See weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T02:20:09",
"modification_date": "2025-11-12T12:02:20",
"review_url": "https://openreview.net/forum?id=vv8EcCoBfr¬eId=4SEinTlpMP",
"license": "CC BY 4.0"
},
{
"id": "KrVJPQ8UZ0",
"forum": "vv8EcCoBfr",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8180/Reviewer_y1BL",
"reviewer_name": "Reviewer_y1BL",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 2,
"summary": "The paper tackles a known failure mode in VLM TTA: selecting only low-entropy samples for entropy minimization (EM) can overfit \"atypical features\" and amplify overconfidence on misclassified cases. The paper argues that the high-entropy samples which the current model is unconfident can serve as a potential candidate for regularization. Thus, BITTA proposes a bilateral strategy: (i) learn with low-entropy samples using a standard TTA objective, and (ii) unlearn with a small set of high-entropy samples by maximizing their predictive entropy. The intent is to curb memorization of atypical features while preserving core representations. BITTA is used as plug-in to multiple TTA algorithms, yielding gains on multiple benchmarks.",
"strengths": "- Clear diagnosis of a common TTA pitfall (confidence selection → overfitting to atypical cues).\n\n- Simple, compatible design: bilateral learning + unlearning that can wrap around existing TTA learners.\n\n- Dynamic selection ratio that reflects dataset/noise distribution rather than a fixed value.",
"weaknesses": "- What the method assumes. The paper assumes two things:\n(1) Low-entropy and high-entropy samples both carry spurious (atypical) cues; and\n(2) those cues are stronger or more frequent in the high-entropy group.\nThis is why the method adds an “unlearning” branch on high-entropy samples.\n\n- What is missing. The paper does not make this assumption clear; it does not define what counts as an “atypical feature,” and never measures how much atypicality exists in each entropy group (low / medium / high). \n\n- Why this matters. Without a clear definition and measurement regarding “atypical feature,” we cannot tell whether the gains of BITTA come from truly removing reliance on \"atypical\" features or from a generic regularization effect. That weakens the main motivation for the bilateral design and the authors' claims.\n\n - What evidence would resolve this. Provide a formal definition an operational test for “atypical features”, and report their level by entropy bin. Then show that the unlearning process for high-entropy samples reduces these atypical signals—especially for misclassified low-entropy cases.",
"questions": "- Regarding \"atypical feature\" assumption, what are \"atypical features\"? Would it be possible to provide an operational definition of “atypical features”?\n\n- Would it be possible to quantify their prevalence/strength of \"atypical features\" by entropy bins (low/medium/high) to substantiate the \"atypical\" feature assumption?\n\n- Would it be possible to show that the unlearning process for high-entropy samples can reduce these atypical signals—especially for misclassified low-entropy cases?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-21T16:31:55",
"modification_date": "2025-11-12T12:02:21",
"review_url": "https://openreview.net/forum?id=vv8EcCoBfr¬eId=KrVJPQ8UZ0",
"license": "CC BY 4.0"
}
] |
|
1AYy3T3Xjk
|
https://openreview.net/forum?id=1AYy3T3Xjk
|
A Process-Level Method for Creativity Evaluation in LLM-Assisted Learning
| 2.5
| 3.5
|
[
2,
2,
4,
2
] |
[
4,
3,
3,
4
] | 4
|
[
"LLM",
"Creativity assessment",
"Process-level evaluation"
] |
Interpretable creativity assessment remains challenging, and the adoption of large language models (LLMs) in education amplifies issues of subjectivity and opacity. This study presents a process-level evaluation approach for LLM-assisted learning that attributes learner-versus-model contributions from multi-turn student–LLM dialogues and scores four expert-elicited dimensions with rationale texts. Using 1,273 cleaned dialogues from 81 undergraduates across multiple domains, an auditable attribution protocol and an instruction-tuned evaluator are introduced to produce process-linked, interpretable rationales. Empirical evaluation with expert assessments indicates alignment with expert judgments. Claims are explicitly scoped to the studied tasks and domains, and code and evaluation scripts will be released for reproducibility.
|
other topics in machine learning (i.e., none of the above)
|
https://openreview.net/pdf?id=1AYy3T3Xjk
| 2025-09-20T07:12:15
| 4
|
[
{
"id": "WtYVSc2PGG",
"forum": "1AYy3T3Xjk",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21916/Reviewer_PNVN",
"reviewer_name": "Reviewer_PNVN",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper introduces a new framework called CREDO to assess creativity in human–LLM collaboration. Unlike traditional creativity assessments that focus on final outputs, CREDO emphasizes the process of idea generation and reasoning. It combines two key components: (1) the Innovation Traceability Atlas, which breaks down multi-turn student–LLM dialogues into cognitive steps (questioning, reframing, integrating, generating) and distinguishes between human and model contributions; and (2) an instruction-tuned evaluator, fine-tuned on the DeepSeek-32B model using LoRA and knowledge distillation, that produces interpretable creativity scores (1–5) along four new process-oriented dimensions: interdisciplinary innovation, problem reframing, risk-driven innovation, and resource integration efficiency. Experiments on 1,273 student–LLM dialogues show that the fine-tuned model achieves 90% of human-level agreement (QWK = 0.728) and can reliably distinguish student vs. model contributions (F1 = 0.84).",
"strengths": "1. Propose a new creativity evaluation method with the help of LLM which can assess creativity in real time and can be used in daily life.Traditional questionnaire-based methods cannot evaluated in real time but just offer snapshots, and recording transcript-based methods can hardly be used in daily life.\n2. An interesting application for the crucial dilemma of controlling students to use LLMs as assistants: strict control impedes students to use new tools while no control leads to creativity decay. While there are many tools like AI-generated text detection that try to prevent students from using AI too often, new AI models can always hack those detection algorithms as they are stronger. The proposed creativity evaluation method can be used in the communication between students and LLMs, in which we not only let the students use tools, but also get the creativity as a metrics to prevent students use LLMs too much.",
"weaknesses": "Limited Dataset and Generalizability – The dataset includes only 81 undergraduate students from STEM domains, restricting applicability to other disciplines or educational levels.\n\nImplementation Details Missing – The paper lacks practical training details such as hardware setup, fine-tuning time, and exact hyperparameter values needed for reproducibility.\n\nInconsistency in Methodological Description – The paper claims to use a “fully fine-tuned teacher model” for knowledge distillation but also states that full fine-tuning is “computationally prohibitive,” creating a logical contradiction.\n\nLack of Transparency in Review References – Mentioning “Area Chair comments” during the review phase is inappropriate for a double-blind submission, suggesting possible misunderstanding or template-based phrasing.\n\nOverly Polished and Synthetic Writing Style – The writing is highly formal, repetitive, and uniformly structured, which, combined with perfect consistency in technical phrasing and reference formatting, gives an impression of automated generation.",
"questions": "1. Is it possible to substitute human expert annotators with LLMs, that is to say, you can automate the whole data process pipeline and only need expert to check the data after finish process instead of annotate each data manually.\n2. In line 316, \"to address the core concern raised by an Area Chair regarding whether, XXXX\", who is the Area Chair? Why you can recieve comments from AC before the ICLR submission deadline?\n3. About Table A2, row of \"w/o LoRA (Full Fine-tuning)\", the author do not provide experiment here because of \"Computationally prohibitive\". I am consued, if you cannot full fine-tuning a LLM, where do the authors got the teacher model to conduct knowledge distillation. In line 312-313, The author claim that \"A Teacher is obtained via full-parameter FT on the same training set\", which seems contradictory with the authors' explanation of why do not get the experiment result of \"w/o LoRA\".\n4. The authors lack \"implementation details\" section. Readers need to know the size of the datasets, configuration of the server, and training time to determine if it is possible to reproduce the author's experiment on their own computer.\n\n\nTypos:\n1. Line 74: engi -neering => engineering\n2. Line 75: screen- ing => screening\n3. Line 78: overlook -ing => overlooking\n4. Line 81: meaning -ful => meaningful",
"flag_for_ethics_review": [
"Yes, Responsible research practice (e.g., human subjects, annotator compensation, data release)"
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T01:06:22",
"modification_date": "2025-11-12T18:06:19",
"review_url": "https://openreview.net/forum?id=1AYy3T3Xjk¬eId=WtYVSc2PGG",
"license": "CC BY 4.0"
},
{
"id": "NtjBZzJPUd",
"forum": "1AYy3T3Xjk",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21916/Reviewer_Cp81",
"reviewer_name": "Reviewer_Cp81",
"rating": 2,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "The paper proposes CREDO, a process-level framework for evaluating creativity in human–LLM collaborative learning. Instead of judging only final artifacts, the method analyzes multi-turn student–LLM dialogues to (i) attribute learner vs. model contributions via an Innovation Tracing Atlas (ITA), and (ii) score four process-centric dimensions—Interdisciplinary Innovation, Problem Reframing, Risk-Driven Innovation, and Resource Integration Efficiency—using an instruction-tuned evaluator that outputs 1–5 ratings with rationales. The study curates a dataset of 1,273 cleaned dialogues from 81 undergraduates across multiple domains, reports high inter-rater reliability for expert annotations (weighted κ=0.81; Cronbach’s α=0.86), and fine-tunes a DeepSeek-32B model with LoRA (plus knowledge distillation) to produce scores and concise explanations. On the held-out test set, the evaluator achieves QWK=0.728 (≈90% of human ceiling 0.81), r=0.811, and MAE=0.505; a targeted experiment suggests macro-F1=0.84 for learner–vs–LLM attribution categories. Claims are scoped to STEM-leaning academic inquiry contexts, and the authors plan code/evaluation release.",
"strengths": "1. Moves beyond outcome scoring by elevating dialogue trajectories as primary evidence and explicitly attributes human vs. LLM roles; defines four process dimensions tailored to collaboration (vs. Torrance-style outputs). \n\n2. Ethical data collection; multi-stage cleaning/standardization; double-blind expert annotation with arbitration; high IRR (κ=0.81, α=0.86); clear objectives and ablations; teacher–student KD + LoRA for practicality. \n\n3. Concrete workflow figure; CREDO vs. classical mapping; precise loss definitions; interpretable score+rationale outputs; helpful ITA visualization.",
"weaknesses": "1. The dataset (81 undergraduates; two universities; STEM-oriented tasks) constrains generalization to broader populations (K-12, humanities/arts, diverse cultures/languages). The paper acknowledges this but evaluation remains single-context. Actionable ask: run cross-institution and non-STEM validations (even small pilots) to probe transportability. \n\n2. Comparing only to GPT-4 zero-shot and untuned DeepSeek-32B underestimates strong alternatives (e.g., prompt-programmed judges, few-shot rubric-prompting, instruction-tuned evaluators without LoRA, calibrated ordinal regressors over handcrafted features). Actionable ask: add tuned LLM-judge baselines (few-shot rubric, chain-of-thought with rubric anchors) and a non-LLM baseline (e.g., logistic/ordinal regression over process features). \n\n3. The attribution experiment (macro-F1=0.84) relies on expert-labeled categories on the same type of data used to train the evaluator. This is valid for alignment to experts but leaves open whether attribution corresponds to causal contribution or downstream learning gains. Actionable ask: show that high attribution quality predicts independent outcomes (e.g., subsequent task performance, transfer, rubric-blind human judgments). \n\n4. Main metrics lack confidence intervals, per-dialogue variance, and significance tests between models. QWK and r are informative, but error analysis is thin (few failure cases, limited per-dimension uncertainty). Actionable ask: add bootstrap CIs, paired significance, and calibration metrics for ordinal predictions. \n\n5. The semantic drift filter (cosine <0.15 for three consecutive pairs), cluster-then-stratify split (k=50), and ITA node definitions could influence results; there is no sensitivity analysis. Actionable ask: report robustness to cleaning thresholds, k, and ITA labeling variations; include prompt perturbation tests for the evaluator. \n\n6. No breakdowns across demographics, domains, or dialogue lengths; potential bias if certain discourse styles are favored. Actionable ask: provide subgroup QWK/MAE and differential item functioning checks. \n\n7. The method presumes access to multi-turn logs and an evaluator pass; latency/compute and annotation cost (for gold standards) are not quantified. Actionable ask: report inference cost, throughput, and a human-in-the-loop review budget for classroom deployment.",
"questions": "See above",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T10:28:14",
"modification_date": "2025-11-12T18:06:19",
"review_url": "https://openreview.net/forum?id=1AYy3T3Xjk¬eId=NtjBZzJPUd",
"license": "CC BY 4.0"
},
{
"id": "6S5ki4DMQs",
"forum": "1AYy3T3Xjk",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21916/Reviewer_J6hi",
"reviewer_name": "Reviewer_J6hi",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper introduces CREDO, a process-level evaluation framework for assessing creativity in LLM-assisted learning. Unlike classical tests (TTCT, AUT), CREDO focuses on dialogue process traces, separating human vs. LLM contributions through an Innovation Tracing Atlas (ITA) and scoring four new creativity dimensions (Interdisciplinary Innovation, Problem Reframing, Risk-Driven Innovation, Resource Integration Efficiency).\n\nAn instruction-tuned DeepSeek-32B model is fine-tuned (using LoRA + Knowledge Distillation) to predict 1–5 scores and rationales, trained on 1,273 annotated dialogues. Empirical evaluation shows a Quadratic Weighted Kappa (QWK) of 0.728 (≈90% of human reliability ceiling), with an attribution F1 of 0.84. The dataset is claimed to be ethically collected and reproducible.",
"strengths": "1. Timely problem: \nThe paper tackles an increasingly critical question: how to assess human creativity in an era of pervasive LLM support.\n2. Conceptual novelty: \nThe process-level perspective (tracing human cognitive trajectories) is fresh and potentially impactful.\n3. Interpretable design: \nCombining score with rationale and transparent attribution mechanisms is commendable.\n4. Dataset collection validity:\nUsing ongoing course projects allows students to explore topics they already find relevant, reducing artificiality.",
"weaknesses": "### 1. Ambiguous Causal Claims\nThe paper makes a strong and central claim that its framework “traces the cognitive trajectory of creative thinking.” However, this assertion remains conceptually plausible but empirically unverified. The study presents correlational evidence, model outputs that align with expert judgments on dialogue data, but no temporal or causal validation that demonstrates the framework genuinely captures the evolution of creative cognition.\n\nTo substantiate the claim of tracing cognitive trajectories, one would expect to see process-causal analyses such as: (1) longitudinal correlations between CREDO-derived process indicators and subsequent creative achievements or outputs, or (2) human post-hoc interviews or think-aloud protocols to verify whether identified “origination” and “development” nodes correspond to participants’ subjective sense of idea generation.\n\nWithout such evidence, the results demonstrate correlation rather than causation. The framework successfully maps interactions and assigns attributions, but it does not yet prove that these attributions reflect the underlying causal mechanisms of creative thought. As it stands, the paper captures surface patterns of dialogue behavior rather than validating that those patterns cause or constitute creative cognition.\n\n\n### 2. Statistical Reporting Limitations\nThe paper’s statistical reporting lacks the depth necessary for confident interpretation of model performance. While mean metrics such as MSE, MAE, Pearson correlation, and Quadratic Weighted Kappa (QWK) are provided, the authors do not report confidence intervals, variance across cross-validation folds, or per-dimension error distributions.\n\nGiven the relatively small test set (128 samples), random variance could significantly influence the reported results. The observed improvement in QWK (0.728 for the fine-tuned model vs. 0.513 for GPT-4 and 0.342 for the baseline DeepSeek) appears substantial, yet the statistical significance of this improvement is not established. Bootstrapped confidence intervals or pairwise statistical tests (e.g., Fisher’s z-test for correlation or permutation tests for ordinal ratings) would be needed to determine whether these differences are meaningful rather than due to sampling noise.\n\nAdditionally, no error analysis by creativity dimension is included in the main text, even though later appendices reveal variability across dimensions. Reporting these results with appropriate variance measures and standardized effect sizes would clarify which creativity dimensions are reliably captured and which remain unstable.\n\n### 3. Potential Data Leakage\nA major methodological concern lies in the model ecosystem overlap between data generation and evaluation. Students interacted with the DeepSeek LLM during data collection, and the same model family (DeepSeek-32B) was later fine-tuned as the evaluator. This design introduces a significant risk of self-evaluation bias or data leakage at the stylistic level.\n\nThe evaluator may learn superficial linguistic or stylistic features characteristic of DeepSeek-generated text, enabling it to classify or score more accurately, not because it understands creativity, but because it recognizes its own generative patterns. 
For instance, DeepSeek’s distinctive discourse markers, lexical cohesion patterns, or turn-taking rhythms might act as unintended cues that correlate with specific CREDO scores.\n\nTo mitigate this concern, the authors should conduct cross-model generalization tests, evaluating dialogues generated using a different assistant model (e.g., GPT-4, Claude, or Mistral), to confirm that the evaluator’s performance persists beyond its native language patterns. Alternatively, a style-controlled or paraphrased dataset could assess whether performance drops when superficial linguistic features are normalized. Without such analyses, it remains unclear whether the system is genuinely assessing creativity or merely detecting DeepSeek’s conversational fingerprint.\n\n\n### 4. Dataset collection\n#### (1) Task framing\nThe data collection protocol emphasizes academic inquiry tasks in STEM fields (e.g., rock classification, carbon emission modeling). While suitable for studying analytical reasoning, this framing inherently biases the observed behavior toward convergent and knowledge-based reasoning rather than divergent or imaginative creation. Participants are more likely to synthesize or reformulate factual information than to produce novel conceptual constructs, limiting the ecological range of creativity being captured.\n\n#### (2) Absence of motivation manipulation\nParticipants were not explicitly instructed to “generate original ideas,” “take creative risks,” or “explore unconventional solutions.” In creativity research, such goal framing is critical: without motivational priming, individuals tend to default to task-completion strategies rather than expansive ideation. Consequently, much of the observed dialogue likely reflects problem-solving or academic reasoning, not genuine creative exploration.\n\n#### (3) Time constraint\nEach dialogue was capped at a maximum of 30 turns, with an average of fewer than 10. Creative cognition, however, often involves incubation and iterative recombination, requiring time for reflection and restructuring. A short interaction window may prematurely truncate these processes, reducing the opportunity for authentic creative leaps.\n\n### 5. Scope of creativity measured\nThe type of creativity captured by the study is best described as adaptive scientific or analytical creativity under LLM mediation, rather than open-ended or expressive creativity. Although the framework effectively documents how students interact with a large language model to refine and extend ideas, the creative behaviors observed remain bounded by the task structure and the cognitive affordances of the dialogue format.\n\nWithin this setting, students demonstrate certain forms of constructive and integrative thinking. For instance, they engage in problem reframing, such as transforming a classification question into a modeling or prediction challenge, or cross-domain linking, such as relating geological pattern recognition to convolutional neural networks in computer vision. These behaviors reflect valuable aspects of creative inquiry—they show flexibility, synthesis, and the ability to transfer knowledge across domains.\n\nHowever, such creativity is fundamentally situational and instrumental. The dialogues promote analytical exploration and knowledge integration, but they rarely foster divergent ideation or imaginative generation—the kind of creativity that involves proposing novel metaphors, aesthetic concepts, inventions, or speculative ideas that extend beyond the given problem space. 
The framework captures how effectively students navigate within known cognitive and disciplinary boundaries, not how they transcend them.\n\nA key factor limiting the expressive range of creativity lies in the dual role of the LLM itself. The model acts as both a creative amplifier and a creative filter. On one hand, it scaffolds ideation by providing examples, explanations, and domain connections that can inspire students to think more broadly. On the other hand, it constrains the conceptual search space to the statistical and semantic regularities of its own training data. Consequently, student–LLM interactions are guided toward plausible and conventional combinations rather than toward radical novelty or risk-taking. The result is a form of bounded creativity, oriented toward optimization and coherence rather than surprise or aesthetic invention.\n\nFrom this perspective, the creativity being measured is processual and pragmatic, focused on reasoning quality and interdisciplinary synthesis rather than on originality in the strong sense of the term. It reflects what might be called “creative inquiry competence”—the ability to collaborate productively with an AI system to reformulate problems, integrate evidence, and explore solution pathways—rather than “creative cognition” in its broader, generative, or expressive manifestations.\n\nIn this light, the data collection strategy and the resulting evaluation framework are methodologically sound but conceptually narrow. They provide valuable insight into how students co-develop ideas with LLMs and how such processes can be quantified, but they do not yet encompass the full spectrum of creative thought recognized in cognitive science, psychology, or the arts. Accordingly, the framework’s claims should be reframed from “creativity evaluation” to “creative inquiry evaluation.”\n\nFuture work should expand the empirical scope to include divergent and expressive tasks, for example, open-ended design problems, creative writing, or interdisciplinary invention challenges, where participants are encouraged to take conceptual risks, generate original constructs, and depart from established solution patterns. Only through such extensions can the framework legitimately claim to measure the broader construct of creativity rather than its current, narrower variant of collaborative analytical innovation.",
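A sketch of the paired permutation test suggested under point 2 above, assuming per-item absolute errors from two evaluators on the same test items (the permutation count is arbitrary):

```python
import numpy as np

def paired_permutation_test(err_a, err_b, n_perm=10_000, seed=0):
    """Two-sided permutation test on the mean difference of paired errors."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(err_a) - np.asarray(err_b)
    observed = abs(diff.mean())
    count = 0
    for _ in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=diff.shape)  # randomly swap pairs
        if abs((signs * diff).mean()) >= observed:
            count += 1
    return count / n_perm  # p-value
```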
"questions": "1. The paper refers to “creativity evaluation” in broad terms. Could you explicitly define whether CREDO targets general creativity, domain-specific creative inquiry, or LLM-mediated problem solving?\n\n2. How do you conceptualize the boundary between creative reasoning and effective analytical reasoning in your framework? What makes a response “creative” rather than simply “high-quality reasoning”?\n\n3. During data collection, were students given any specific prompts or instructions emphasizing originality, risk-taking, or novelty, or were they simply asked to pursue academic inquiries?\n\n4. The paper links CREDO to Bloom’s Taxonomy and the PISA framework.\nCould you briefly elaborate on how each of the four CREDO dimensions maps onto these established theories in concrete operational terms (e.g., specific cognitive operations or learning behaviors)?\n\n5. To what extent do you view the CREDO framework as model-agnostic?\nCould it, in principle, be applied to dialogues generated by other LLMs or even human–human collaborations without retraining?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T16:47:18",
"modification_date": "2025-11-12T18:06:19",
"review_url": "https://openreview.net/forum?id=1AYy3T3Xjk¬eId=6S5ki4DMQs",
"license": "CC BY 4.0"
},
{
"id": "0kWLKEUOvs",
"forum": "1AYy3T3Xjk",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21916/Reviewer_woVg",
"reviewer_name": "Reviewer_woVg",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This work proposes a process-level framework for evaluating the creativity processes of human–LLM collaboration. It consists mainly of three components: (1) redefined metrics—CREDO creativity dimensions, (2) ITA-based attributions, and (3) a fine-tuned evaluation model.\nTo validate their method, the authors curated a dialogue dataset and conducted expert annotations as ground-truth labels.\nThe results show that the fine-tuned model aligns more closely with expert scores than the baselines (GPT-4 zero-shot and non-tuned DeepSeek-32B).",
"strengths": "* This work curates a dataset with 1,273 expert-annotated dialogues covering multiple domains.\n* It provides both qualitative and quantitative analyses and employs multiple evaluation metrics (Pearson, MAE, QWK, etc.) as well as inter-rater agreement measures to ensure alignment and validity.",
"weaknesses": "* The proposed framework is heuristic. I do not see a clear correspondence between the classical four dimensions and the four CREDO dimensions in Table 1, and the authors do not provide strong theoretical foundations for constructing these new dimensions.\n* Similarly, the ITA deconstructs dialogues into origination nodes, development nodes, and scaffolding supports; however, the paper lacks detailed explanation of the logic and robustness behind this step-by-step construction. This process may heavily depend on the authors’ subjective interpretation, which could introduce bias.\n* The work only compares its method against two baselines and does not report the performance of state-of-the-art models. Even considering cost constraints, GPT-4o would have been a cheaper and more capable alternative than the GPT-4 model used in this study.\n* The paper provides insufficient details about the prompts for evaluation models and expert annotation instructions, limiting reproducibility.",
"questions": "* How were the four CREDO dimensions selected? Are they intended to be orthogonal and to comprehensively capture the dimensions of creativity? For instance, Interdisciplinary Innovation and Risk-Driven Innovation both appear to assess aspects of innovation and may overlap. Could the authors provide a concrete example that clearly illustrates how these four dimensions differ in practical evaluation?\n* What are the exact prompts, instructions, or criteria for the 1–5 scoring scale used by both the model and the experts? My understanding is that LLM judges are highly prompt-sensitive and often exhibit one-sided bias (e.g., tending to give scores of 3–5 while rarely assigning 1 or 2). How does this work address such issues?\n* What is the distribution of the “gold-standard” expert scores across the training, validation, and test subsets? To evaluate student involvement/contribution effectively, each level of involvement/contribution should contain a sufficient number of cases. If the distribution is too narrow, the positive results reported might simply reflect the model’s tendency to fit to certain frequent score ranges (similar to the bias issue mentioned above).\n* From a higher-level perspective, what practical scenarios can this framework be applied to, and how could it be extended further?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-22T22:01:08",
"modification_date": "2025-11-13T09:44:43",
"review_url": "https://openreview.net/forum?id=1AYy3T3Xjk¬eId=0kWLKEUOvs",
"license": "CC BY 4.0"
}
] |
|
IsOMU137M3
|
https://openreview.net/forum?id=IsOMU137M3
|
scCMIA: Self-supervised Dual Model for Mitigating Information Loss in Single-cell Cross-Modal Alignment
| 3
| 3.75
|
[
4,
2,
2,
4
] |
[
3,
4,
4,
4
] | 4
|
[
"Single-cell",
"Self-supervised",
"Alignment",
"Reconstruction",
"scRNA",
"scATAC"
] |
Recent technological advances in single-cell sequencing have enabled simultaneous profiling of multiple omics modalities within individual cells. Despite these advancements, challenges such as high noise levels and information loss during computational integration persist. While existing methods align different modalities, they often struggle to balance alignment accuracy with the preservation of modality-specific information needed for downstream biological discovery. In this paper, we introduce scCMIA, a novel framework guided by Mutual Information (MI) principles that leverages a VQ-VAE architecture. scCMIA achieves robust cross-modal alignment in a unified discrete latent space while enabling high-fidelity reconstruction of the original data modalities. Crucially, our framework transforms the learned discrete representations into a tool for tangible biological discovery, allowing for the quantification of regulatory programs and cross-modal relationships. Our extensive experiments demonstrate that scCMIA achieves state-of-the-art performance across multiple datasets. Our code is available at: https://anonymous.4open.science/r/scCMIA-77E3.
|
applications to physical sciences (physics, chemistry, biology, etc.)
|
https://openreview.net/pdf?id=IsOMU137M3
| 2025-09-19T00:19:32
| 4
|
[
{
"id": "jW5h2MOgxY",
"forum": "IsOMU137M3",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12979/Reviewer_JfMn",
"reviewer_name": "Reviewer_JfMn",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper proposes a deep learning framework that is designed to operate on multiple single-cell data modalities. It solves the problem of alignment of single cells across modalities as well as the problem of translating between modalities. The method uses an InfoNCE loss for alignment, and uses a discrete codebook to improve interpretability. Extensive empirical experiments suggest that the method outperforms various state-of-the-art competitors.",
"strengths": "The proposed model includes several components (the VQ module and the mutual information module) that are well motivated and seem to provide significant improvements relative to the state of the art.",
"weaknesses": "A substantive assessment of the weaknesses of the paper. Focus on constructive and actionable insights on how the work could improve towards its stated goals. Be specific, avoid generic remarks. For example, if you believe the contribution lacks novelty, provide references and an explanation as evidence; if you believe experiments are insufficient, explain why and exactly what is missing, etc.\n\nA major problem with this paper is that the exposition is difficult to follow. For example, the second paragraph of the introduction fails to clarify exactly what problem you are working on. Indeed, by describing multimodal protocols that assay multiple aspects of the same single cell, I was misled about what tasks you are interested in solving. What would help is a precise, formal description of the problems you are addressing. More generally, I found the text very difficult to follow. It would be better if you carefully defined terms before using them. Below I outline some of the questions that arose as I worked through the manuscript.\n\nIn general, I think a missing piece here is assessing how well these models generalize beyond the specific data set they are trained on. I think that each model is trained and validated on splits of the same data set (though I don't know for sure, because you don't tell us how this is done). So a reasonable question is whether you can apply the trained model to a new, independent dataset, generated from a different type of cell. The multimodal alignment methods mentioned at the start of Section 2 work directly in such a scenario, whereas a trained model like yours inherently has to worry about generalizability. In practice, to be useful your model has to generalize to single-modality data (i.e., I only measured scRNA-seq, and you tell me what the corresponding scATAC-seq would look like). A discussion of this issue, and some experimental characterization of it, would substantially strengthen the paper. \n\nI thought your description of the challenges associated with multi-modal data (lines 43-49) was imprecise and not very informative. For example, what does it mean to say that there are \"substantial discrepancies\" between scATAC-seq and scRNA-seq? They measure entirely different things. To my mind, the fact that there are differences in feature spaces is not a \"challenge\" per se; it's just definitional. You wouldn't say that multimodal analysis of text and images is \"challenging\" because pixels don't look like words, right?\n\nI don't actually believe your claim (line 55) that if you don't embed data into a shared space, then you \"cannot fully exploit potentially complementary information across modalities.\" This is a very bold claim that requires substantial evidence. Indeed, I don't know how you could conclusively prove such a claim.\n\nI am not convinced that *mean* FOSCTTM is the most useful measure. Have you considered computing a p-value for improvement of the FOSCTTM? You get a FOSCTTM score for each cell, so you could do something like a sign test.\n\nIn the related work section, the fact that alignment methods \"suffer from poor alignment robustness when handling noisy [data]\" is not a substantive critique, in my opinion. All methods degrade in performance in the presence of noise.\n\nI do not understand the critique (line 104) of methods that do multimodal reconstruction without relying on a shared embedding space. 
You say that \"their utility for tasks requiring direct cross-modal comparison, querying, and label transfer can be limited.\" Why? It's pretty straightforward to do, e.g., label transfer with an accurate multimodal reconstruction method: just reconstruct from one space to the other and then use nearest neighbors to transfer. There is no reason you have to do nearest neighbors in a latent space. I think this critique is misguided or needs to be explained much more carefully.\n\nI found the text in lines 144-149 difficult to understand. For example, what is the difference between \"modality-specific features\" and \"semantic characteristics\"? What do you mean by the \"bounds of MI\"? Similarly, the sentence at lines 162-164 is not grammatical. I'm also confused about what it means to be \"insufficient for effectively decoupling ... in a directed manner\" (lines 167-168).\n\nI wish you had introduced your assumption (line 184) earlier, since it seems to be important to understand the basis of much of this work. I guess this is what you were alluding to when you talked about \"modality-specific features\" versus \"semantic characteristics.\"\n\nIn the description of the datasets, you should indicate what previous papers used these datasets for benchmarking, and indicate what paper you extracted results from (unless you ran all the tools yourself, in which case indicate that).\n\nI was surprised that all the talk about mutual a bound on MI ultimately seems to boil down to just doing an InfoNCE alignment loss.\n\nMinor:\n\nline 192: uses -> use\n\nline 270: objection -> objective\n\nYou should delete the sentence at line 293 (\"Single-cell multi-omics data are often hindered by complex and sophisticated techniques, low throughput, and high noise levels.\"). Just say what data you used. It doesn't even make sense to say that data is hindered by something.\n\nIncidentally, I think calling cross-modal translation \"reconstruction\" is misleading, since reconstruction typically refers to starting and ending from the same place; e.g., reconstructing a scRNA-seq profile from a masked or compressed version thereof. I do recognize that other papers in the literature use \"reconstruction\" to mean \"translation.\"",
"questions": "Did you compute the performance measures in Tables 1-4, or were some of these taken from previous publications? If the latter, did you use the same cross-validation splits?\n\nHow was train/test splitting done for each dataset?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T08:55:33",
"modification_date": "2025-11-12T13:02:12",
"review_url": "https://openreview.net/forum?id=IsOMU137M3¬eId=jW5h2MOgxY",
"license": "CC BY 4.0"
},
{
"id": "n9sYzgMaUx",
"forum": "IsOMU137M3",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12979/Reviewer_uq18",
"reviewer_name": "Reviewer_uq18",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper propose a new method for cross modality integration and alignment. The methods focusing on scRNA and scATAC data integration are already well studied, and thus it is hard to figure out the main contributions of this paper to this field.",
"strengths": "The framework is clearly presented.",
"weaknesses": "I have several questions or concerns regarding the current model design and model performance. I think these challenges preclude the paper from publication in this conference, at least in this format.\n\n1. What is the unique contribution of this paper? Using the VQ-based method for multi-omic data integration or biological data learning has already been studied in several papers (https://www.nature.com/articles/s41540-020-00158-2, CVQVAE, or scBeacon). This method lacks innovation, and the training design is not very appealing.\n\n2. The motivation is not so well established. The central dogma only allows one-directional information flow, and thus, we do not need to model the bidirectional information. RNA can never come back to chromosomes, and thus, this method lacks biological interpretation.\n\n3. The benchmarking result is also very weird. Why can we find some baselines with variance reported, but others not? The authors should unify the presentation mode and provide variance for every model. Moreover, reconstruction in single-cell multi-omic data analysis is not a useful metric, as the expression profiles always have noise. The authors should consider one or two new tasks to perform the evaluation. I recommend the authors' reading: https://www.nature.com/articles/s41592-025-02856-3 for including more baseline methods.\n\n4. The comparison should be fair. The authors need to tune hyperparameters for all methods to ensure a fair comparison.\n\n5. I can not find the information about the data scale. Are all the testing data on a large scale or a small scale?\n\n6. How about applying the method to integrate proteomic data such as CITE-seq? Since the authors do not model noise, this framework should work well.",
"questions": "Please see the weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T21:19:27",
"modification_date": "2025-11-12T13:02:12",
"review_url": "https://openreview.net/forum?id=IsOMU137M3¬eId=n9sYzgMaUx",
"license": "CC BY 4.0"
},
{
"id": "MewHB8zfzW",
"forum": "IsOMU137M3",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12979/Reviewer_uR8P",
"reviewer_name": "Reviewer_uR8P",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces scCMIA, a self-supervised framework designed to address the challenges of integrating single-cell multi-omics data, particularly focusing on cross-modal alignment between scRNA-seq and scATAC-seq modalities. The key innovation lies in leveraging mutual information (MI) principles to decouple modality-specific and semantic features within a unified discrete latent space using a VQ-VAE architecture. The proposed method aims to mitigate information loss during integration by combining intra-modal decoupling (via CLUB-based MI minimization) and inter-modal alignment (via contrastive learning with InfoNCE loss).",
"strengths": "1. The integration of MI bounds for intra-modal decoupling and cross-modal alignment is theoretically grounded\n2. The paper provides a rigorous evaluation across multiple datasets and tasks (alignment, reconstruction, clustering, label transfer).",
"weaknesses": "1. My main concern is the novelty of this work. The proposed framework is a patchwork of existing techniques, and shows no insights or benefits for the community.\n2. While four datasets are used, they primarily focus on well-studied protocols (e.g., 10x Multiome). Broader validation on more complex tissues or rare cell types would strengthen generalizability.\n3. The paper lacks comparison with cutting-edge approaches like scButterfly or graph-based methods beyond GLUE. Including these would better contextualize scCMIA’s advancements.",
"questions": "Please see the weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T11:43:18",
"modification_date": "2025-11-12T13:02:13",
"review_url": "https://openreview.net/forum?id=IsOMU137M3¬eId=MewHB8zfzW",
"license": "CC BY 4.0"
},
{
"id": "8E9t0E4er2",
"forum": "IsOMU137M3",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12979/Reviewer_RPAg",
"reviewer_name": "Reviewer_RPAg",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces multi-modal alignment between scRNA (single-cell RNA-sequencing) and scATAC (single-cell Assay for Transposase-Accessible Chromatin using sequencing) data using a VQ-VAE (Vector Quantized Variational Autoencoder) architecture.",
"strengths": "The justification for the modeling based on Mutual Information is well-established.",
"weaknesses": "Limited Novelty\n- The justification based on Mutual Information has been thoroughly explored in previous research (e.g., the CLUB paper).\n- Techniques like VQ-VAE are all existing methods.\n- Are there specific challenges unique to single-cell data, and does the paper introduce a corresponding novel technique to address them?\n\nDecoupling Explanation: More explanation is needed regarding decoupling.\n- Why is decoupling necessary?\n- Consideration is needed on how the decoupled representations could be used independently if required.\n\nApplicability to Uni-modal Data: The method was only applied to single-cell multi-modal data. Does it have utility for uni-modal data as well? Showing that the method performs well even on uni-modal data through experiments could further justify the use of multi-modality in the model.",
"questions": "See weakness section",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T02:24:04",
"modification_date": "2025-11-12T13:02:13",
"review_url": "https://openreview.net/forum?id=IsOMU137M3¬eId=8E9t0E4er2",
"license": "CC BY 4.0"
}
] |
|
LwjUKEWAvt
|
https://openreview.net/forum?id=LwjUKEWAvt
|
SafetyChat: Learning to Generate Physical Safety Warnings in Instructional Assistants
| 4
| 3.5
|
[
4,
4,
6,
2
] |
[
3,
4,
3,
4
] | 4
|
[
"Physical Safety",
"Instructional AI Assistant",
"LLM"
] |
While large language models (LLMs) excel in language generation and conversational abilities, their broader utility hinges on meeting additional requirements to ensure reliability and safety. Recent research has explored areas such as minimizing hallucinations, grounding outputs in credible sources, and safeguarding user privacy. However, the critical aspect of physical safety has received limited attention—an oversight that becomes increasingly important as LLMs are integrated into multimodal voice assistants (e.g., smart glasses) that are capable of guiding users through complex, safety-critical tasks such as automotive repair. In this work, we investigate the limitations of current LLMs in generating effective and contextually appropriate safety warnings in the context of complex repair tasks. We introduce SafetyChat, a multi-domain dataset that can evaluate LLMs’ ability to model and prioritize safety awareness. We further enhance model alignment by post-training on this data, comparing the performance of various techniques. Through this process, we identify key challenges and establish robust baselines, paving the way for future research on integrating physical safety considerations into LLM-driven instructional systems. We will release data and code to reproduce our results on publication.
|
A new physical safety task for LLM chat assistant, a new dataset, and strong alignment results.
|
alignment, fairness, safety, privacy, and societal considerations
|
https://openreview.net/pdf?id=LwjUKEWAvt
| 2025-09-19T21:35:40
| 4
|
[
{
"id": "zyVNBiJZ57",
"forum": "LwjUKEWAvt",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18539/Reviewer_6je4",
"reviewer_name": "Reviewer_6je4",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper addresses an important and overlooked problem: the inability of Large Language Models (LLMs) to generate context-aware safety warnings when acting as instructional assistants for complex tasks with physical risks (e.g., automotive or electronics repair). The authors point out that even advanced models like GPT-4o often fail to provide critical physical safety instructions.\n\nTo tackle this issue, the paper makes two main contributions:\n1. **The SAFETYCHAT Dataset**: A novel, multi-domain (automotive and electronics repair) large-scale conversational dataset. It is built upon real-world repair guides (such as iFixit, wikiHow, and TSBs) and collected through multi-turn, role-playing dialogues between human annotators and GPT-4o.\n2. **Safety Alignment Experiments**: The authors performed Supervised Finetuning (SFT) and Direct Preference Optimization (DPO) on open-source models (e.g., Llama-3.1-8B) using SAFETYCHAT.\n\nExperimental results show that models trained on SAFETYCHAT significantly outperform (and even surpass) GPT-4o on tasks involving the classification and generation of physical safety warnings. This demonstrates that alignment with high-quality, domain-specific data can effectively enhance the physical safety awareness of LLMs.",
"strengths": "1. **Importance and Novelty of the Problem**: The paper addresses a critical and under-researched area: the **physical safety** of LLMs. As models are increasingly integrated into smart glasses, AR/VR, or embodied agents, the ability to foresee and warn against physical hazards during instructional tasks is paramount.\n2. **High-Quality Dataset Construction**: The SAFETYCHAT dataset is a core contribution of this work. It is built on authoritative, real-world repair guides (including professional TSBs) and employs a rigorous collection methodology. Notably, having human annotators **rewrite** GPT-4o's responses that missed safety warnings provides an exceptionally high-quality training signal for SFT and DPO.\n3. **Effective Alignment Strategy**: The experiments demonstrate that alignment using SFT and DPO on a domain-specific dataset is extremely effective. It is noteworthy that the finetuned 8B model surpasses GPT-4o on physical safety tasks, suggesting that for specific safety concerns, targeted data and alignment are more effective than larger, general-purpose models.",
"weaknesses": "1. **The Inherent Paradox of LLM-as-a-Judge Evaluation**: A fundamental weakness lies in the evaluation methodology. The paper first establishes that GPT-4o is deficient in identifying physical safety hazards, yet it paradoxically relies on this same \"incapable\" model as the primary \"judge\" for evaluating the safety generation tasks. This contradiction undermines the validity of the results, as a small-scale human verification is insufficient to resolve the concern that the judge model has the very blind spots it is supposed to be evaluating.\n\n2. **Lack of a Multimodal Evaluation**: Despite introductory scenarios like \"smart glasses\" and the use of images during data collection, all experiments remain purely textual. The evaluation framework fails to assess the model's ability to perceive physical danger from visual context, which is a critical component of the very problem the paper aims to solve.\n\n3. **The Dataset Name \"SafetyChat\" is a Significant Overclaim**: The name implies a general-purpose safety model, whereas the work is narrowly focused only on procedural physical safety for automotive and electronics repair. This is misleading and potentially dangerous, as it falsely suggests the model is suitable for other safety domains (like social, privacy, or even medical safety) where it has no training.",
"questions": "How do the authors view the generalization capabilities of models trained on SAFETYCHAT? For instance, could a model trained on automotive repair data handle a physical safety warning for \"bicycle repair,\" which it has never seen? Has the model learned specific textual patterns for \"jack safety\" and \"battery safety,\" or has it grasped a more general concept of \"physical harm avoidance\"?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T22:05:23",
"modification_date": "2025-11-12T14:17:15",
"review_url": "https://openreview.net/forum?id=LwjUKEWAvt¬eId=zyVNBiJZ57",
"license": "CC BY 4.0"
},
{
"id": "wwwDNbg6HU",
"forum": "LwjUKEWAvt",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18539/Reviewer_RX83",
"reviewer_name": "Reviewer_RX83",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "The paper introduces SafetyChat, a multi-domain, multi-turn benchmark for teaching/evaluating LLMs to insert physical safety warnings into instructional dialogues, mainly for automotive and electronics repair. It also shows that small open models fine-tuned on this data can beat prompted GPT-4o on warning identification and on generating safety-aware rewrites.",
"strengths": "- The paper introduces an interesting dataset in a space that is still underexplored - task-oriented, physical safety in instructional dialogues (automotive/electronics). This is a useful complement to the more common policy/content-safety benchmarks.\n\n- Using iFixit/wikiHow/TSBs plus AR-style chat simulation keeps the dialogues realistic (images, long steps, workshop vs DIY differences). The Ford TSBs in particular justify the “high-stakes” framing; few safety datasets actually contain OEM technical bulletins.\n\n- The task formulation is good: Separating (i) safety-warning identification from (ii) safety-aware response generation mirrors how production systems are actually built.",
"weaknesses": "- Everything is repair-like: automotive, electronics, all from 3 sources. It’s plausible that the model is just learning “car-repair warning priors” (always tell them to park & cool down) and “electronics warning priors” (unplug, discharge capacitor) rather than contextual reasoning. The paper claims “realistic, multi-turn” but never shows cross-domain or out-of-domain transfer to kitchen, DIY home improvement. A cross-domain evaluation would make the “first step toward physically safe assistants” claim much stronger.\n\n- The paper does not clearly describe the safety/technical expertise of the annotators (e.g., whether they had automotive/electrical background or were trained annotators following a rubric). Since the task is about physical risk, clarifying annotator qualification and quality control is important.\n\n- On the modeling side, there is limited methodological novelty - mainly SFT and DPO on the collected data. This is fine for a dataset paper, but the work would be stronger with richer baselines (e.g. retrieval-augmented warning injection). The authors optionally can compare the approach of this paper: AURA: Affordance-Understanding and Risk-aware Alignment Technique for Large Language Models. https://arxiv.org/abs/2508.06124\n\n- The evaluation relies heavily on LLM-as-a-judge. For a physical-safety benchmark, a small human study (even 15–20 domain-informed mechanics/DIYers/technicians) to validate that the model’s warnings are appropriate and non-redundant would significantly strengthen the empirical claims.\n\n**Minor Comments**\n\n- Image usage. Procedures often include images; but the generation task seems text-only. Say explicitly whether images are in the public release (some iFixit assets aren’t).\n\n- A very intuitive baseline is “prepend domain-specific boilerplate chosen by IR from the taxonomy” (retrieve top-k warnings by BM25 over the step).",
"questions": "See weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T17:52:24",
"modification_date": "2025-11-12T14:17:16",
"review_url": "https://openreview.net/forum?id=LwjUKEWAvt¬eId=wwwDNbg6HU",
"license": "CC BY 4.0"
},
{
"id": "7rK7M34DIB",
"forum": "LwjUKEWAvt",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18539/Reviewer_DUDk",
"reviewer_name": "Reviewer_DUDk",
"rating": 6,
"confidence": 3,
"soundness": 4,
"contribution": 3,
"presentation": 4,
"summary": "This paper introduces SafetyChat, a novel, multi-domain dataset designed to enhance Large Language Models (LLMs) in generating accurate and contextually relevant physical safety warnings for complex, multi-step instructional tasks like automotive and electronics repair. The authors argue that existing LLMs often fail to adequately address real-world physical hazards during conversational guidance, an oversight that becomes critical as these models integrate into voice assistants. SafetyChat consists of conversational benchmarks grounded in authentic repair procedures, with human annotators providing gold-standard safety rewrites for LLM responses that missed necessary warnings. Experiments confirm that while off-the-shelf LLMs perform poorly in hazard identification and warning generation, post-training on SafetyChat significantly improves their safety awareness, demonstrating a clear path toward developing safer instructional AI assistants.",
"strengths": "- The paper is well written and well structured\n- The authors present a good analysis of related work and datasets, as well as a good understanding of the state-of-the-art\n- The paper introduces a potentially useful dataset of considerable size (6,391 annotated turns across 528 repair procedures) for evaluating AI assistants that provide instructions with greater safety awareness\n- The SafetyChat dataset includes 1077 human-authored rewrites to address cases of missing warnings from GPT-4o responses.",
"weaknesses": "- The experimental procedure could eventually be improved with some simple baseline approaches (e.g., see question 1). Essentially, since the main contribution is a benchmark dataset, demonstrating the performance of a more considerable number of base models and techniques would make up for a much more informative experiment.\n- Given the nature of the contribution of the paper, having access to the dataset/anonymized code repository would be quite useful. For this reason, I can't comment on reproducibility.\n- The experimental setting evaluates the models on a hold-out set. However, in this setting, out-of-distribution performance can be quite important (i.e., how do models perform on safety-sensitive tasks unrelated to any of the categories included in the train/validation subsets?). Something as simple as using one of the categories as a hold-out set could give readers an idea of how generalizable fine-tuning a model with this dataset could be (or employing something like a k-fold validation, where a fold would be a different category, for example).",
"questions": "1. How does a fine-tuned model on this dataset perform (safety-wise) on out-of-distribution tasks, comparatively to its base version? Similarly, how would a prompting approach (i.e., using a model with specific instructions to be extra conscious on safety procedures, which I believe was the authors' approach with GPT-4o?) perform, comparatively to the fine-tuning and the base model approach, without such instructions? How would a model perform if, instead of fine-tuning, one employs retrieval-augmented generation instead (using the train set partition you defined)?\n2. How many annotators were used? What was the selection criteria? In my opinion, this type of information should be better detailed (if not in the main body, at least in the appendix). I couldn't find much information on the annotation phase other than the interface developed.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T00:51:39",
"modification_date": "2025-11-12T14:17:16",
"review_url": "https://openreview.net/forum?id=LwjUKEWAvt¬eId=7rK7M34DIB",
"license": "CC BY 4.0"
},
{
"id": "J7fH6RPPHQ",
"forum": "LwjUKEWAvt",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18539/Reviewer_Lqwh",
"reviewer_name": "Reviewer_Lqwh",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 3,
"summary": "The paper introduces SAFETYCHAT, a real-world, multi-turn benchmark for physical safety warnings in repair tasks and shows that post-training on this data materially improves LLMs’ ability to anticipate and communicate relevant hazards, surpassing several baselines and setting foundations for safer instructional assistants.",
"strengths": "Novel dataset. Proposes a multi-turn benchmark focused on physical safety in automotive and electronics repair scenarios.\n\nClear task formulation. Formulate the problem as warning classification and warning generation, enabling systematic evaluation.\n\nConvincing empirical evidence. Shows that popular LLMs underperform on these tasks, while simple post-training such as SFT and DPO yields substantial performance gains.",
"weaknesses": "Beyond the two works cited, there are several recent datasets/benchmarks for physical-scenario safety in embodied/agent settings (e.g., SafeAgentBench; Agent-SafetyBench; robot constitutions/semantic safety; task-planning safety frameworks)[1,2,3,4]. Even if tasks differ, the paper should clarify what is new or harder here and why the proposed two tasks are distinctly more important than existing benchmarks.\n\nThe dataset is text-only, yet the source corpora (iFixit-Auto, wikiHow, TSB, iFixit-Elec) and many related benchmarks are multimodal (include images). For repair scenarios, users often rely on photos to communicate context. The paper should justify the text-only choice, discuss what is lost without images, and consider a multimodal extension.\n\nParts of the dataset are generated/evaluated with GPT-4o. To avoid model-specific bias, the pipeline should incorporate multiple diverse LLMs (and/or human adjudication) and report agreement and robustness across models.\n\nMissing baselines likely to be strong. (1) RAG baseline: Given the availability of procedural documents (iFixit/wikiHow/TSB), a retrieval-augmented approach (retrieve instructions → summarize applicable warnings) is natural and may perform well; if no relevant instruction is found, classify as safe. (2) Reasoning models: Since the tasks require deciding whether to warn and what to warn about, chain-of-thought / reasoning models (or reasoning-enabled decoding) are appropriate baselines.\n\nThe post-training relies on standard SFT/DPO; there is no algorithmic contribution. Consider framing physical safety as hard/soft constraints during training or decoding (e.g., constrained optimization, safety filters, or control-theoretic constraints) to increase novelty and rigor.\n\nEvaluation clarity issues. (1) Table 4 caption: It says “GPT-4-as-judge,” but the table has two columns: “GPT-4o Judge” and “Claude-3.7 Judge.” Please reconcile the caption with the content and clearly describe how GPT-4 (vs. GPT-4o) is used (and where it is first introduced). (2) Formatting: In §4.2, spacing differs between Query and Resp; ensure consistent English spaces throughout formulas and text.\n\n[1]Yin, Sheng, et al. \"Safeagentbench: A benchmark for safe task planning of embodied llm agents.\" arXiv preprint arXiv:2412.13178 (2024). \n[2]Sermanet, Pierre, et al. \"Generating robot constitutions & benchmarks for semantic safety.\" arXiv preprint arXiv:2503.08663 (2025). [3]Zhang, Zhexin, et al. \"Agent-safetybench: Evaluating the safety of llm agents.\" arXiv preprint arXiv:2412.14470 (2024).\n[4]Huang, Yuting, et al. \"A Framework for Benchmarking and Aligning Task-Planning Safety in LLM-Based Embodied Agents.\" arXiv preprint arXiv:2504.14650 (2025).",
"questions": "Did the authors encounter convergence or stability issues when training DPO given that preferred responses are human-authored while non-preferred responses are GPT-4o–generated (i.e., a clear source/distribution mismatch)? With a relatively small fine-tuning set, such shift can cause the model to learn source cues rather than preference signals and may hinder convergence.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-23T11:17:16",
"modification_date": "2025-11-12T14:17:17",
"review_url": "https://openreview.net/forum?id=LwjUKEWAvt¬eId=J7fH6RPPHQ",
"license": "CC BY 4.0"
}
] |
6L3yCjx9s3
|
https://openreview.net/forum?id=6L3yCjx9s3
|
Dimension-Adaptive MCTS: Optimal Sample Complexity for Continuous Action Planning
| 4.5
| 3
|
[
6,
4,
4,
4
] |
[
3,
4,
3,
2
] | 4
|
[
"Monte-Carlo Tree Search; Continuous Reinforcement Learning Planning"
] |
We study continuous-action Monte Carlo Tree Search (MCTS) in a $d$-dimensional action space when the
optimal action-value function $Q^*(s,\cdot)$ is $\beta$-Hölder continuous with constant~$L$. We show that a
dimension-adaptive $\varepsilon$-net schedule combined with power-mean backups and a polynomial exploration
bonus finds an $\varepsilon$-optimal action in $ \tilde{O}\left(\sigma^2 L^{d/\beta} \varepsilon^{-(d/\beta+2)}\right) $
simulations, matching standard continuum-armed lower bounds up to logs while remaining practical
via on-demand, capped random nets. We further demonstrate that our method significantly outperforms
baseline methods on continuous control planning problems. Our work bridges the gap between theoretical
reinforcement learning and practical planning algorithms, providing a principled approach to
high-dimensional continuous action space exploration.
|
reinforcement learning
|
https://openreview.net/pdf?id=6L3yCjx9s3
| 2025-09-19T23:10:22
| 4
|
[
{
"id": "7HT5wVygqT",
"forum": "6L3yCjx9s3",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19231/Reviewer_pCZE",
"reviewer_name": "Reviewer_pCZE",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes PM-DA-MCTS, a planning algorithm for continuous-action reinforcement learning based on Monte-Carlo Tree Search (MCTS) under the assumption that the optimal value function is β-Hölder smooth. The method adaptively discretizes the action space, uses power-mean backups to trade off between optimistic and average value estimates, and employs polynomial-confidence exploration bonuses to manage noisy returns. The authors prove high-probability sample-complexity bounds that match minimax rates for continuum-armed bandits up to logarithmic factors, and present experiments on MuJoCo environments showing improvements over progressive widening, HOOT, and other MCTS baselines.",
"strengths": "The work analyzes continuous-action MCTS and achieves optimal dimension-dependent sample-complexity rates under β-Hölder smoothness. Extending non-asymptotic MCTS analysis to adaptive discretization with stochastic returns is a meaningful contribution.Combining dimension-adaptive grids, power-mean backup operators, and polynomial concentration bonuses is conceptually interesting and grounded in existing theory. Experiments on MuJoCo tasks demonstrate tangible improvements over established MCTS methods, and ablations highlight the contribution of each algorithmic component.",
"weaknesses": "I do not have many complaints. \n1. While technically sound, the exposition could be improved somewhat to convey the necessity behind the power-mean operator, and the role of polynomial concentration. \n2. Regarding the empirical evaluations, the comparison to MCTS baselines is appropriate for validating the planning approach. However, since continuous control is often handled with policy-gradient methods (e.g., PPO, SAC), benchmarking against such baselines research would better contextualize practical usefulness.",
"questions": "If we consider the finite action set case (which is trivially embeddable in d=|A| dimensions), it seems that we recover a sample complexity that is exponential in |A|. Is this the correct rate for MCTS in the finite-action setting?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-05T07:09:38",
"modification_date": "2025-11-12T15:06:51",
"review_url": "https://openreview.net/forum?id=6L3yCjx9s3¬eId=7HT5wVygqT",
"license": "CC BY 4.0"
},
{
"id": "nx7uN8NdzA",
"forum": "6L3yCjx9s3",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19231/Reviewer_TSvv",
"reviewer_name": "Reviewer_TSvv",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "The authors address planning on continuous action spaces via Monte-Carlo tree search. Under a standard Holder smoothness assumption, they propose an adaptive discretization schedule, and incorporate it into other tweaks proposed within recent literature. The resulting procedure achieves the minimax lower bound. Empirically, numerical experiments justify performance.",
"strengths": "- The paper offers a theoretically sound contribution for continuous action planning with MCTS. This is especially nice since planning has been underaddressed within recent literature. \n- The authors provide a discretization strategy, and analyze its performance along a host of other tweaks present in the literature to obtain rigorous convergence guarantees. \n - Said discretization strategy, in particular, could indeed be valuable in practice, though a user is likely to randomly sample along the lines of the scale suggested in the paper in practice instead. \n- The method achieves the minimax lower bound, up to a logarithmic term. \n- The experiments are not bad at all for a theory paper.",
"weaknesses": "1. **Limited novelty.** The algorithm amounts to MCTS with a confidence bonus, plus a clever discretization. It is nice that it accommodates power mean Bellman backups, but this does not seem to be necessary for the convergence of the MCTS procedure. The polynomial exploration bonus is not new either, following Shah et al. (2022).\n- As such, the contribution is sound, albeit limited -- the only novelty in algorithmic design appears to be in the choice of discretization. Accordingly, the analysis appears to follow from the analysis of Dam et al. (2024), Dam et al. (2025).\n2. **Clarity and formatting.** \n- While the power mean backups are attributed to Dam et al. (2019) and the polynomial bonus to Shah et al. (2022) within the paper, it is not clearly stated within the section on key contributions (to be fair, it is stated before it, but not during it) and the algorithm overview. An inattentive reader could mistakenly attribute the contribution to this paper.\n- There are quite a few issues with the formatting. \n - In the section on key contributions, there is quite a bit of ``\\vspace{}`` abuse present to make the bullet points more condensed. This is not necessarily a problem in itself, but the settings are far too aggressive to not be noticed. \n - The authors should have used ``\\citep{}`` and not ``\\cite{}`` in many places.",
"questions": "1. Is there novelty in the theoretical analysis beyond Dann et al. (many papers), Shah et al. (2022), and other recent literature, beyond incorporating the discretization error into the convergence bound?\n2. Methodologically, is the adaptive discretization the only new methodological contribution?\n\nMy concerns mainly relate to novelty, and to some lesser degree clarity. At the moment, I am on the fence between a 4 and a 6, and am willing to increase my score if I am proven wrong or my concerns are addressed.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T12:07:05",
"modification_date": "2025-11-12T15:06:52",
"review_url": "https://openreview.net/forum?id=6L3yCjx9s3¬eId=nx7uN8NdzA",
"license": "CC BY 4.0"
},
{
"id": "xAUBamoOaa",
"forum": "6L3yCjx9s3",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19231/Reviewer_xF1x",
"reviewer_name": "Reviewer_xF1x",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper investigates the Monte Carlo Tree Search (MCTS) problem in a random environment with a $d$-dimensional continuous action space. The authors propose a new algorithm named \"Power Mean Dimension Adaptive MCTS\" (PM-DA-MCTS), which combines three new techniques: 1) an adaptive $\\epsilon$ based on the $\\epsilon$-net discretization strategy of dimension $d$ and the $\\beta$-Holder continuity of the value function; 2) a power mean backtracking operator for the random environment; 3) a polynomial exploration reward. The core contribution of this work is that it theoretically proves that the algorithm can find an $\\epsilon$-optimal action with an optimal sample complexity of $\\tilde{\\mathcal{O}}(\\epsilon^{-(d/\\beta+2)})$, and it significantly outperforms existing continuous action MCTS baseline methods in empirical evidence.",
"strengths": "1. The paper presented the theoretically grounded MCTS algorithm for high-dimensional continuous action spaces that\nachieves optimal sample complexity bounds while using power mean backup in stochastic\nenvironments.\n2. This method overcomes the problem of lack of theoretical guarantee in existing work and successfully extends the recent theoretical progress on power averaging estimators (originally limited to discrete settings) to the continuous action space.\n3. The experiment results show that PM-DA-MCTS outperforms multiple baselines in both mean and variance on several low-dimensional and high-dimensional tasks in the style of MuJoCo, supporting the theoretical claims.",
"weaknesses": "1. The paper explains in detail that its adaptive discretization is based on \"uniform grid discretization\". All theoretical analysis strictly depend on this structured uniform grid. However, this discretization is exponential to $d$: $\\mathcal{N}_k = O((1/\\epsilon_k)^d)$. The paper does not fully explain how a uniform grid can be \"lazily\" loaded in high-dimensional space while maintaining computational feasibility.\n2. In abstract, the author mentioned that :\"matching standard continuum-armed lower bounds up to logs while remaining practical via on-demand, capped random nets\". What does \"capped random nets\" mean? It seems that this paper did not detailedly discuss this issues.\n3. I wonder if the experiments for HUMANOID ($d = 17$) involves such capped method to maintain practical efficiency. If directly apply the algorithm in Section 4, does the algorithm still maintain computationally feasible in the experiement settings?\n\nI would like to raise my score if authors could well address my concerns.",
"questions": "See weakness part.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T10:53:23",
"modification_date": "2025-11-12T15:06:52",
"review_url": "https://openreview.net/forum?id=6L3yCjx9s3¬eId=xAUBamoOaa",
"license": "CC BY 4.0"
},
{
"id": "ewJHOcBoB6",
"forum": "6L3yCjx9s3",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19231/Reviewer_azdV",
"reviewer_name": "Reviewer_azdV",
"rating": 4,
"confidence": 2,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper discusses the challenge of extending Monte Carlo Tree Search (MCTS) to high-dimensional continuous action spaces, where existing methods suffer from exponential sample complexity and lack theoretical guarantees. The authors highlight that while prior works like progressive widening offer empirical success, they fail to capture the relationship between dimensionality, smoothness, and efficiency. The paper introduces Power-Mean Dimension-Adaptive MCTS (PM-DA-MCTS) — an algorithm that combines adaptive discretization of the action space with power-mean backups and polynomial exploration bonuses. The proposed method achieves provably optimal sample complexity and good empirical performance in stochastic, high-dimensional environments.\n\n.",
"strengths": "1- well-written paper and easy to read.\n2- Correct and solid theorems and theoretical results.",
"weaknesses": "1- The theoretical contribution is incremental. The idea is to extend an established convergence analysis in discrete-action spaces to continuous action spaces by discretizing the continuous space with a dimension-adaptive scheme, effectively reducing the problem to the discrete-action setting.\n\n2- The use of power-mean operators and a polynomial exploration bonus is not novel; both have been used in prior planning methods.\n\n3- Regarding the results, UCT is a very basic baseline, yet it performs comparably to PM-DA-MCTS in most domains. The only domain showing a significant difference is MountainCar, which I believe is due to the environment’s characteristics and the paper’s exploration strategy.\n\n4- To better evaluate the discretization strategy, more high-dimensional environments are needed; only two are provided.",
"questions": "1- Is POLY-HOOT (2020) the SOTA performance of MCTS in continuous-action domains?\n\n2- How does your method perform in high-dimensional domains with sparse feedback, such as goal-conditioned mazes?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T04:30:17",
"modification_date": "2025-11-12T15:06:53",
"review_url": "https://openreview.net/forum?id=6L3yCjx9s3¬eId=ewJHOcBoB6",
"license": "CC BY 4.0"
}
] |
PRHNKeaZpP
|
https://openreview.net/forum?id=PRHNKeaZpP
|
Human-in-the-Loop Policy Optimization for Preference-Based Multi-Objective Reinforcement Learning
| 4
| 3.75
|
[
4,
4,
4,
4
] |
[
4,
4,
3,
4
] | 4
|
[
"Multi-objective reinforcement learning",
"human-in-the-loop",
"preference learning"
] |
Multi-objective reinforcement learning (MORL) seeks policies that effectively balance conflicting objectives. However, presenting many diverse policies without accounting for the decision maker’s (DM’s) preferences can overwhelm the decision-making process. On the other hand, accurately specifying preferences in advance is often unrealistic. To address these challenges, we introduce a human-in-the-loop MORL framework that interactively discovers preferred policies during optimization. Our approach proactively learns the DM’s implicit preferences in real time, requiring no a priori knowledge. Importantly, we integrate this preference learning directly into a parallel optimization framework, balancing exploration and exploitation to identify high-quality policies aligned with the DM's preferences. Evaluations on a complex quadrupedal robot simulation environment demonstrate that, with only
interactions, our proposed method can identify policies aligned with human preferences, e.g., running like a dog. Further experiments on seven MuJoCo tasks and a multi-microgrid system design task against eight state-of-the-art MORL algorithms fully demonstrate the effectiveness of our proposed framework. Demonstrations and full experiments are available at https://sites.google.com/view/pbmorl/home.
|
We propose PBMORL, a human-in-the-loop MORL framework that learns preferences from limited feedback to efficiently discover high-quality, preference-aligned policies.
|
reinforcement learning
|
https://openreview.net/pdf?id=PRHNKeaZpP
| 2025-09-18T22:48:56
| 4
|
[
{
"id": "6bI8VtVUo8",
"forum": "PRHNKeaZpP",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12302/Reviewer_yZQN",
"reviewer_name": "Reviewer_yZQN",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper proposes a framework for MORL that interactively incorporates DM's preferences during policy optimization. The paper consists of 3 stages: 1. The Seeding stage initializes diverse policies; 2. The Preference Elicitation stage queries DM and translates preferences into weight vectors 3. The Policy Optimization stage refines policies in parallel. Evaluation shows it achieves superior alignment performance compared to baselines.",
"strengths": "1. The idea of incorporating querying the DM's preferences into the MORL learning pipeline is novel. The Preference Elicitation stage actively queries the DM via pairwise policy comparisons and translates these responses into weight vectors. The approach avoids the need to design scalarization. \n\n2. Extensive empirical evaluation covers a wide range, showing that it outperforms the baselines.",
"weaknesses": "1. A key concern is that, despite presenting itself as a multi-objective RL framework, the method effectively reduces the problem to a form of preference-weighted single-objective optimization after the elicitation stage. While the Preference Elicitation stage is novel, the final policy optimization is conducted in a scalarized utility space. Although the algorithm remains multi-policy in structure (e.g., using multiple weighted tasks in MOPPO), the search is guided toward a narrow vector space, ignoring global trade-offs. This raises questions about the \"multi-objective\" claim, especially in settings where a full Pareto frontier might be desired.\n2. While the framework is claimed as \"human-in-the-loop\", all user preferences are simulated with predefined vectors. An empirical study with actual humans would strengthen the paper. \n3. The writing tends to be verbose and redundant. A tighter structure and more concise explanations would improve readability.",
"questions": "1. The proposed method narrows the optimization to the learned preference region, whereas baselines (e.g., scalarized PPO) attempt to cover the full objective space. This may make the comparison unbalanced, especially since the method benefits from more targeted exploration. Could the authors clarify whether the baselines were also given any access to preference vectors or if they were evaluated using the same simulated DM? How do you ensure a fair evaluation?\n2. How does PBMORL perform when queried with test-time preferences that differ from the training-time preference region? For example, if a DM changes their mind or reveals a new preference vector after training, can the method generalize to this unseen region? Or is it limited to the narrow region learned during interaction?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T10:09:07",
"modification_date": "2025-11-12T12:53:50",
"review_url": "https://openreview.net/forum?id=PRHNKeaZpP¬eId=6bI8VtVUo8",
"license": "CC BY 4.0"
},
{
"id": "lWdf0yHJxl",
"forum": "PRHNKeaZpP",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12302/Reviewer_sKsk",
"reviewer_name": "Reviewer_sKsk",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "In multi-objective reinforcement learning (MORL), the utility function of the decision maker is often unknown. In this context, MORL algorithms typically learn the whole coverage set of optimal trade-offs, so that the decision maker can select their preferred solution a posteriori. This can be impractical, as the coverage set scales exponentially with the number of objectives, and requires thorough exploration of the state-space to learn many different policies. This work proposes PBMORL, an interactive MORL approach that learns the utility function over time to reduce the search space. PBMORL keeps a population of policies optimized for different utility functions, and uses a Gaussian Process (GP) to estimate the utility function. It refines the GP using pairwise preference elicitation, and then updates the population of policies with a 50/50 split between the top performing policies according to the current estimate of the utility function, and policies optimizing on new utility functions (that are biased towards the utility functions of the top performing policies). This process is then repeated for a fixed number of iterations.",
"strengths": "Overall, the authors make a laudable effort towards interactively learning the optimal policy wrt the decision maker's preferences, a setting that has been mostly investigated in multi-armed bandits [1-3] and not RL. This is an important topic, which could be realistic to put in practice, as they show that only 10 to 40 interactions with the user are enough to accurately model their preferences. I find the approach interesting, as it could drastically improve sample efficiency compared to learning the Pareto front. However, I found it very hard to assess the results of the diverse experiments, as they use different metrics, different baselines, with many of the details and comparisons spread across the appendix.",
"weaknesses": "My main concerns are as follows:\n\na) Many of the design choices of the algorithm seem ad-hoc, and would benefit from some clarifications, e.g.:\n 1. PBMORL uses a population of independent PPO agents, each optimized on their own scalarization weights. This is quite inefficient, as many of the experiences could be shared across policies. Existing methods, such as [4,5], instead condition the policies on scalarization weights, such that the same network can be used regarless of the utility function. Is there a reason for not doing the same here? Also, even though the authors mention a multi-objective variant of PPO (Algo3), it is unclear what is different from running independent PPO instances.\n 2. The setting focuses on a weighted sum over objectives as scalarization function. In that case, there is no need to model the utility function as a GP, which would handle non-linear utility functions. I am curious as to why methods from prior work in the interactive multi-objective bandits literature were not used, such as Bayesian logistic regression [1] or particle filtering [3]. Incorporating prior knowledge (i.e., the fact the the utility function is linear) into the belief should reduce the number of queries required to accurately predict the decision maker's utility function.\n 3. During Step 4 of preference translation (line 235), weights are sampled evenly then biased towards promising weights. Since the GP naturally encompasses uncertainty, why not sample weights from the belief posterior? This would seem like a more principled approach to weight selection.\n\nb) I found it hard to assess the results of the diverse experiments:\n 1. The authors propose an \"approximation accuracy\" metric $\\epsilon^\\star$ and \"average accuracy\" metric $\\bar{\\epsilon}$. I don't understand the insights that can be gained from $\\bar{\\epsilon}$, as the baselines approximate the whole Pareto front, which of course will result in worse $\\bar{\\epsilon}$ values than PBMORL, that focuses on one region of interest. Moreover, the optimal policy used to compute both $\\epsilon^*$ and $\\bar{\\epsilon}$ is called the \"golden policy\" (Appendix C5). But the values of the golden policy are defined by arbitraty, extreme values that are not guaranteed to be attainable in practice.\n 2. The scalarized PPO baseline (Section 4.1.1) is run with 5 arbritrary weight combinations, who may not correlate with the optimal policy. As such, it us unsurprising that the learned policies may underperform compared to PBMORL, which actually learn the weights of the decision maker.\n 3. Appendix D5 shows fuzzy preferences, and shows coverage sets of solution where \"$f_1$ is weakly preferred\". This seems to indicate that the GP does a good job at learning weights, but it does not show how close the learned weights are to the true weights, nor does it show how close the corresponding policy is to the optimal policy (the policy trained on the ground-truth weights).\n 4. From my understanding, except D5 and Section 4.1.1, all the other experiments focus on optimizing one of the objectives (eg, \"$f_1$ is preferred\"). And so, the vast majority of the experiments involving multi-objective baselines focus on learning extreme policies that maximize a single objective, not trade-offs. I believe this simplifies the core MORL problem of balancing conflicting objectives.\n 5. Section 4.1.1 contains extensive reward engineering, trying to fit basic and complex reward functions to optimize the objectives. 
I would appreciate comments on the reason why this has been done, since the advantage of using MORL is that you can optimize multiple, interpretable objectives, and should not have to perform reward engineering.\n \nI believe a more principled way to evaluate PBMORL would be to:\n\na) sample random weights that are considered the (a priori unknwon) decision maker's utility function\n\nb)\n 1. train PBMORL, where the queries to the decision maker are resolved by saying that $\\pi_1$ is preferred over $\\pi_2$ if the utility of $\\pi_1$ is higher than the one of $\\pi_2$ (using the ground-truth weights)\n 2. train an oracle (e.g., PPO) directly using the ground-truth weights that serves as an upper bound\n 3. Similar to the spirit of Appendix D3, use the whole query budget a priori (e.g., on random returns sampled from the objective space), then train an agent (e.g., PPO) on the weight from the belief\n 4. train preference-based baselines and MORL baselines (this is already done in the paper)\n\nc) compare the utility of all the trained policies (using the ground-truth weights), and compare the expected utility loss [luiza] wrt the optimal policy (i.e., the oracle).\n\nI apologize for the long review, and hope the authors are not discouraged by all the comments, as overall I like the proposed approach, and will gladly change my score depending on the discussion with the authors and the clarifications they can provide.\n\n[1] Roijers, D. M., Zintgraf, L. M., & Nowé, A. (2017). Interactive thompson sampling for multi-objective multi-armed bandits. In International conference on algorithmic decision theory (pp. 18-34). Cham: Springer International Publishing.\n\n[2] Roijers, D. M., Zintgraf, L. M., Libin, P., Reymond, M., Bargiacchi, E., & Nowé, A. (2020). Interactive multi-objective reinforcement learning in multi-armed bandits with gaussian process utility models. In Joint European conference on machine learning and knowledge discovery in databases (pp. 463-478). Cham: Springer International Publishing.\n\n[3] Reymond, M., Bargiacchi, E., Roijers, D. M., & Nowé, A. (2024). Interactively Learning the User's Utility for Best-Arm Identification in Multi-Objective Multi-Armed Bandits. In Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (pp. 1611-1620).\n\n[4] Abels, A., Roijers, D., Lenaerts, T., Nowé, A., & Steckelmacher, D. (2019). Dynamic weights in multi-objective deep reinforcement learning. In International conference on machine learning (pp. 11-20). PMLR.\n\n[5] Yang, R., Sun, X., & Narasimhan, K. (2019). A generalized algorithm for multi-objective reinforcement learning and policy adaptation. Advances in neural information processing systems, 32.",
"questions": "please see the comments above",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T05:17:52",
"modification_date": "2025-11-12T12:53:51",
"review_url": "https://openreview.net/forum?id=PRHNKeaZpP¬eId=lWdf0yHJxl",
"license": "CC BY 4.0"
},
{
"id": "zBc6ZeyESC",
"forum": "PRHNKeaZpP",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12302/Reviewer_1tU6",
"reviewer_name": "Reviewer_1tU6",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This work proposes a human-in-the-loop multi-objective reinforcement learning (MORL) framework that learns preferences from human feedback. The preferences are modeled using a Gaussian process, and the method is evaluated across various robotic environments.",
"strengths": "1. The proposed method is conceptually straightforward and easy to follow.\n2. The evaluation on realistic Unitree robotic environments adds practical relevance and credibility to the study.\n3. The work is well-presented, with clear figures and informative videos that effectively illustrate the results.",
"weaknesses": "1. Limited technical novelty:\nWhile the proposed framework is well-structured and clearly presented, it lacks substantial technical innovation. Each component builds on existing techniques, and the overall contribution may be more suitable for a strong course project rather than meeting the bar for a top-tier venue like ICLR.\n\n2. Potential limitations of the preference model:\nThe method models preferences via a latent utility function over objective values. However, it is unclear whether this formulation can always capture nuanced human preferences. For instance, two policies with similar objective values might still differ significantly in terms of perceived desirability by the DM, which this model may fail to distinguish.\n\n3. Assumption of stationary preferences:\nThe experiments assume that the human’s preferences remain fixed throughout the optimization process. In realistic human-in-the-loop scenarios, preferences often evolve as users observe different policies or as external conditions change. The proposed framework may therefore struggle to handle non-stationary or dynamically shifting preferences.",
"questions": "Are there other ways to design the high level MDP",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T21:45:31",
"modification_date": "2025-11-12T12:53:51",
"review_url": "https://openreview.net/forum?id=PRHNKeaZpP¬eId=zBc6ZeyESC",
"license": "CC BY 4.0"
},
{
"id": "V4JVXsEbLv",
"forum": "PRHNKeaZpP",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12302/Reviewer_rpgj",
"reviewer_name": "Reviewer_rpgj",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper investigates an interesting problem — exploring the decision maker’s preferred regions on the Pareto front in MORL. The authors propose a human-in-the-loop framework that interactively infers implicit preferences and guides policy optimization. However, the experimental analysis is relatively weak and requires further clarification and justification.",
"strengths": "1. The paper addresses a timely topic, integrating human feedback into MORL with clear motivation.\n2. The proposed pipeline is clearly presented and supported by detailed engineering implementation.\n3. Experiments on the Unitree robot are technically solid and visually demonstrate distinct preference-aligned behaviors.\n4. The overall idea of focusing learning on human-desired regions of the Pareto front is interesting and potentially useful.",
"weaknesses": "1. Unfair baseline setup. Under different DM preference profiles, PBMORL explores specific regions on the Pareto front, while baselines still aim to cover the entire front.\n2. Questionable evaluation metric. The use of a hand-crafted “golden policy” as reference is under-explained and may not align with the DM preferences used during training, thus potentially biasing results toward extreme objectives.\n3. Scalability and practicality of human feedback. The approach assumes frequent pairwise preference queries; it is unclear how feasible this remains when the number of objectives grows or in realistic human-in-the-loop settings.",
"questions": "1. The paper states that the DM provides pairwise preference feedback, but it is not specified how this feedback is generated in simulation. Could the authors provide the exact rule or scoring function used by the oracle to decide which policy is preferred in each preference profile?\n2. In the benchmark experiments, you evaluate using “golden policies.” Do these golden policies correspond to (or approximate) the same utility models that the DM uses during training to choose between policies? If not, how should we interpret the reported alignment scores?\n3. Can you provide baselines that are constrained to the preference-aligned region of the Pareto front as PBMORL, rather than baselines that attempt to cover the entire Pareto front uniformly? For example, if you are considering the evolutionary methods, such as PGMORL, maybe you can conduct a rough selection of policies for specific regions? If this is impractical, please provide a detailed explanation.\n4. How are training steps allocated to PBMORL and the baselines? If PBMORL is allowed to concentrate its entire budget in a small region of interest while the baselines are required to cover the full front under the same total budget, this seems structurally unfair. Please clarify.\n5. In the UNITREE experiments, how many distinct policies are actually trained per preference profile for the scalarized PPO baseline? The paper notes that PBMORL returns multiple candidate policies per preference profile — what is the corresponding number allowed for scalarized PPO under each profile?\n6. In the UNITREE experiments, how are the scalarization weights for the PPO baseline chosen? For example, why are extreme weightings such as [1, 0] and [0, 1] not included, and how sensitive are the baseline results to this choice?\n7. The “approximation accuracy” metric equation is unclear. The text suggests an L2 distance of policies, but is that distance computed in policy parameter space, or in objective space? Please clarify the intended interpretation.\n8. The paper claims that most MORL work considers only 2–3 objectives, and that work with more than 3 objectives exists only in toy settings. This is incorrect. There are recent methods that explicitly tackle higher-dimensional objective spaces (4+ objectives, and up to 9) in realistic domains such as robotics, transport planning, and scheduling [1-3]. (Since the paper primarily emphasises the human-in-the-loop framework, I don’t expect extensive many-objective experiments. However, the authors should correct the misleading statement.)\n\nOverall, I find the problem important and the design interesting, but the experimental setup leaves several open questions. I would be happy to raise my score if the authors can clarify these issues.\n\n[1] Huang, Bo-Kai. Q-pensieve: Boosting sample efficiency of multi-objective RL through memory sharing of q-snapshots. MS thesis. National Yang Ming Chiao Tung University, 2022. (their arXiv version includes experiments up to 5 objectives)\n\n[2] Liu, Ruohong, et al. \"Efficient Discovery of Pareto Front for Multi-Objective Reinforcement Learning.\" The Thirteenth International Conference on Learning Representations.\n\n[3] Michailidis, Dimitris, et al. \"Scalable multi-objective reinforcement learning with fairness guarantees using lorenz dominance.\" arXiv preprint arXiv:2411.18195 (2024).",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T23:49:54",
"modification_date": "2025-11-12T12:53:51",
"review_url": "https://openreview.net/forum?id=PRHNKeaZpP¬eId=V4JVXsEbLv",
"license": "CC BY 4.0"
}
] |
G3uNHQpP7J
|
https://openreview.net/forum?id=G3uNHQpP7J
|
Multi-Domain Transferable Graph Gluing for Building Graph Foundation Models
| 6
| 3.5
|
[
4,
8,
6,
6
] |
[
2,
4,
4,
4
] | 4
|
[
"Multi-domain graph pre-training",
"graph neural network",
"graph foundation model",
"Riemannian geometry"
] |
Multi-domain graph pre-training integrates knowledge from diverse domains to enhance performance in the target domains, which is crucial for building graph foundation models. Despite initial success, existing solutions often fall short of answering a fundamental question: how is knowledge integrated or transferred across domains? This theoretical limitation motivates us to rethink the consistency and transferability between the pre-trained model and target domains. In this paper, we propose a fresh differential geometry perspective, whose core idea is to merge any graph dataset into a unified, smooth Riemannian manifold, enabling a systematic understanding of knowledge integration and transfer. To achieve this, our key contribution is the theoretical establishment of neural manifold gluing,
which first characterizes local geometry using an adaptive orthogonal frame and then “glues” the local pieces together into a coherent whole. Building on this theory, we present the GraphGlue framework, which supports batched pre-training with EMA prototyping and provides a transferability measure based on geometric consistency. Extensive experiments demonstrate its superior performance across diverse graph domains. Moreover, we empirically validate GraphGlue’s geometric scaling law, showing that larger quantities of datasets improve model transferability by producing a smoother manifold.
|
From a differential geometry perspective, we present a novel framework that merges multi-domain graphs into a unified, smooth manifold with geometric consistency, enabling quantifiable transferability and geometric scaling behavior.
|
learning on graphs and other geometries & topologies
|
https://openreview.net/pdf?id=G3uNHQpP7J
| 2025-09-15T23:32:49
| 4
|
[
{
"id": "Zw6xvN1iuH",
"forum": "G3uNHQpP7J",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission6004/Reviewer_w9Kq",
"reviewer_name": "Reviewer_w9Kq",
"rating": 4,
"confidence": 2,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This paper focuses on how knowledge is theoretically integrated and transferred during multi-domain graph pre-training, which is a fundamental and underexplored problem in the field of graph foundation models (GFMs). Authors propose a novel perspective from differential geometry, with the core idea of gluing diverse graph datasets into a unified Riemannian manifold. Moreover, authors empirically validate GRAPHGLUE’s geometric scaling law and show that larger quantities of datasets could improve the transferability of models by producing a smoother manifold.",
"strengths": "- This paper is good in originality and introduces a principled and powerful theoretical framework from differential geometry. This Neural Manifold Gluing concept provides a new, systematic, and theoretically sound language to model knowledge integration in graphs, which is a conceptual advance for the GFM field.\n- The experiments provide strong support for the theory. The Geometric Scaling Law experiment shows that 1-shot accuracy improves and transfer loss decreases as more datasets are added, which is a powerful validation of the smoother manifold hypothesis. Furthermore, the case study in Figure 5 shows that GRAPHGLUE benefits from adding semantically distinct domains in mitigating negative transfer is a strong practical result.",
"weaknesses": "- The proposed theory assumes that all source and target domains can be glued into a single smooth manifold. However, it is unclear how the model would perform if a new domain is fundamentally geometrically incompatible. For example, what if a new domain possesses a markedly different intrinsic dimensionality? Therefore, further discussion on the limitations and potential failure cases would be valuable for practical use. \n- Several key design choices in the GRAPHGLUE framework are not ablated. In particular, what is the impact of the EMA prototyping and the L_proto loss? How critical is the Riemannian MoE during adaptation compared to a simpler prompting scheme? I expect a more comprehensive ablation that provides deeper comprehension regarding the impact of individual components within the full framework. \n- The proposed framework involves several non-trivial operations. For instance, (k, M)-sparse perturbation, maintaining an Adaptive Orthogonal Frame (AOF), and calculating gluing losses (L_holo, L_curv). I am curious about the computational cost and memory complexity of the overall pre-training and adaptation process.",
"questions": "Please see the weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-07T10:38:36",
"modification_date": "2025-11-12T11:33:28",
"review_url": "https://openreview.net/forum?id=G3uNHQpP7J¬eId=Zw6xvN1iuH",
"license": "CC BY 4.0"
},
{
"id": "psws3K42SH",
"forum": "G3uNHQpP7J",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission6004/Reviewer_RdxQ",
"reviewer_name": "Reviewer_RdxQ",
"rating": 8,
"confidence": 4,
"soundness": 4,
"contribution": 4,
"presentation": 3,
"summary": "This paper introduces GraphGlue, a novel framework for multi-domain graph pre-training that leverages principles from differential geometry to address the long-standing challenge of knowledge transfer across diverse graph domains. By conceptualizing graphs as pieces of a unified Riemannian manifold, the authors provide a systematic approach to ensure that these domains are seamlessly “glued” together, preserving both local and global geometric consistency. The framework employs an Adaptive Orthogonal Frame (AOF) to model local graph structure, then uses holonomy-based regularization to smoothly align these pieces into a coherent whole.\n\nOne of the key contributions of this work is the Geometric Transfer Metric (GTM), which quantifies the transfer difficulty between pre-trained models and target domains. The experiments show that GraphGlue achieves superior transferability and scalability across multiple graph domains, outperforming existing methods. The paper also introduces a new empirical insight: geometric scaling law, which suggests that increasing the number of domains during pre-training leads to a smoother manifold and better generalization.",
"strengths": "1. The paper introduces a novel approach to multi-domain graph pre-training by treating the graphs as local pieces of a larger, unified Riemannian manifold. This fresh perspective allows for a more systematic understanding of knowledge transfer across domains, which is a critical issue in graph foundation models.\n2. The concept of “neural manifold gluing” is well-formulated, using differential geometry to tie together multiple domains. The method employs an Adaptive Orthogonal Frame (AOF) to model local geometry and uses holonomy and curvature constraints to ensure smooth global alignment. This provides both theoretical depth and practical utility.\n3. The experiments cover a range of graph domains and demonstrate that the proposed method outperforms previous models in terms of transferability. The geometric scaling law showing that increasing the number of domains improves transferability is both a novel and insightful contribution to the field.\n4. Convincing theoretical analyses.",
"weaknesses": "1. The method heavily relies on triangle-based holonomy for graph gluing. However, in sparse graphs or those with few cycles, this assumption may not hold, limiting the approach's applicability. Further analysis of the method’s behavior on sparse or acyclic graphs is needed.\n2. The paper introduces the AOF for local geometry estimation, but does not provide sufficient analysis on how sensitive the results are to the choice of hyperparameters like perturbation strength and neighborhood size.\n3. Some parts of the paper writing needs to be polished.",
"questions": "1. See weaknesses.\n2. Could you provide a clear logical chain explaining the connections and roles of the numerous theorems and lemmas? **This is my main concern**, and I hope the authors can primarily address this question within a limited number of characters.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T16:03:08",
"modification_date": "2025-11-12T11:33:29",
"review_url": "https://openreview.net/forum?id=G3uNHQpP7J¬eId=psws3K42SH",
"license": "CC BY 4.0"
},
{
"id": "8TzRodaKm8",
"forum": "G3uNHQpP7J",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission6004/Reviewer_9yab",
"reviewer_name": "Reviewer_9yab",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper presents a novel differential-geometry view of multi-domain graph pretraining and domain adaptation, which the authors call a neural manifold gluing framework. Locally the model learns tangent-space bases and local metrics via an Adaptive Orthogonal Frame (AOF) and a (k, M)-sparse perturbation; globally the framework “glues” local pieces using edge tangent translation and holonomy/curvature-based smoothing to induce a smooth Riemannian metric across domains. The method includes practical engineering (EMA-based prototyping and a Riemannian MoE) to scale to large multi-graph settings. Experiments on multiple datasets and leave-one-domain-out few-shot transfer show consistent improvements over baselines and an empirical “geometric scaling law.”",
"strengths": "1. Strong theoretical foundation and unification. The paper provides a mathematically solid perspective, casting multi-domain pretraining and domain adaptation as a manifold-gluing problem; this is novel and helps unify several previously disparate ideas about metric compatibility, holonomy, and transferability. Theorem-level results and carefully defined operators (e.g., edge tangent translation) make the theoretical contribution convincing.\n\n2. Practical scaling via an EMA strategy. The use of EMA prototypes and the design choices to make the framework operate on large graphs are valuable. These engineering choices materially improve the applicability of multi-domain graph learning to realistic, large-scale settings.\n\n3. Comprehensive experiments and diverse datasets. The authors evaluate across a variety of datasets and tasks (leave-one-domain-out few-shot settings, ablations, scaling-law experiments). The empirical section is thorough and supports the main claims.",
"weaknesses": "1. The idea of “manifold gluing” may depend on the number of domains and on how many pairwise/collective gluing operations are needed. The paper briefly remarks that a QR-based subroutine reduces complexity, but it lacks a clear, end-to-end complexity and empirical runtime/memory analysis showing how cost scales with the number of domains, graph size, and manifold dimension.\n2. The paper states (line ~1532) “For pretraining, we extract the 2-hop ego-graph with 10 neighbors each hop for single graph datasets and adopt a 2-layer GCN...” Please clarify: how are those 10 neighbors chosen? Are they random uniform samples among neighbors, the top-10 by degree/score, or chosen by some importance sampling / structural heuristic? This will inform whether the pretraining pipeline’s effectiveness depends on a particular sampling heuristic.\n3. Many of the node-classification datasets used appear to be homophilic. The manifold gluing assumptions may rely implicitly on local coherence of labels/structure; it is unclear whether the proposed framework adapts to heterophilic graphs. We suggest that the authors either include experiments on heterophilic benchmarks (or a small toy study) or provide a discussion of expected behavior if heterophilic graphs are included in pretraining.",
"questions": "See weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T22:12:00",
"modification_date": "2025-11-12T11:33:30",
"review_url": "https://openreview.net/forum?id=G3uNHQpP7J¬eId=8TzRodaKm8",
"license": "CC BY 4.0"
},
{
"id": "X08xAlpCxp",
"forum": "G3uNHQpP7J",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission6004/Reviewer_5PYX",
"reviewer_name": "Reviewer_5PYX",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes a differential-geometry view of multi-domain graph pre-training: neural manifold gluing that merges graphs from diverse domains into a unified smooth Riemannian manifold. Locally, geometry is estimated via a (k, M)-sparse perturbation and an Adaptive Orthogonal Frame (AOF) to form tangent-space bases and local metrics. Globally, pieces are “glued” by edge tangent translation (isometries) with holonomy (triangle) triviality and further curvature-based smoothing to promote a smooth global metric. Experiments over six domains show strong few-shot transfer and a geometric scaling law: adding more pre-training datasets yields smoother manifolds and better transfer.",
"strengths": "1. Conceptual originality & unification: Frames multi-domain GFM pre-training/adaptation in a single geometric framework with local-to-global gluing, tying transferability to metric compatibility, holonomy, and Ricci-related smoothness—offering interpretable levers rather than only heuristics.\n2. Experiments span diverse graph domains and demonstrate solid performance and scalability. The observed geometric scaling law is an insightful and interpretable empirical finding that supports the theoretical motivation.\n3. The proposed framework is well structured, moving clearly from local metric learning (AOF) to global manifold construction (holonomy and curvature smoothing) and then to practical pre-training and adaptation.",
"weaknesses": "1. Although the method is conceptually rich, the explanation is dense, with long mathematical expressions and limited intuition. The main framework diagram is visually cluttered and could better emphasize the data flow between modules. Simplifying the narrative and improving the visuals would greatly enhance clarity.\n2. The paper lacks a clear analysis of computational cost and training scalability — given that manifold operations such as matrix logarithm and curvature regularization can be expensive, a runtime or memory comparison would help readers assess practicality for large-scale deployment.\n3. The method relies on geometric assumptions such as triangle-based holonomy and curvature smoothness, but many real graphs are sparse or irregular, with very few closed loops. It remains unclear how well the model behaves when these geometric constraints cannot be fully satisfied. Adding experiments or visualizations on sparse or highly heterogeneous graphs would make the theory–practice connection more convincing.",
"questions": "1. Triangle coverage & sparsity: How does performance degrade when the underlying graph (or inter-domain scaffold) has few triangles, so triangle holonomy regularization is weak or inapplicable? Do you back off to cycle-based approximations or add synthetic motifs? (Additionally, some theorems are based on the assumption that every edge belongs to at least one triangle, e.g., Theorem 4.8)\n2. How sensitive is the learned manifold geometry to the hyperparameters of the AOF module (e.g., k and perturbation scale)?\n3. Could you provide more intuition or examples to help readers understand how the “manifold gluing” process works in practice?\n4. From which previously introduced equations or lemmas is **equation (30)** derived?\n5. See weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T06:10:43",
"modification_date": "2025-11-12T11:33:31",
"review_url": "https://openreview.net/forum?id=G3uNHQpP7J¬eId=X08xAlpCxp",
"license": "CC BY 4.0"
}
] |
7AXP2RYw2N
|
https://openreview.net/forum?id=7AXP2RYw2N
|
Video-MTR: Reinforced Multi-Turn Reasoning for Long Video Understanding
| 4.666667
| 4
|
[
6,
4,
4
] |
[
3,
5,
4
] | 3
|
[
"Long-form video understanding;MLLM; multi-turn reasoning"
] |
Long-form video understanding, characterized by long-range temporal dependencies and multiple events, remains a challenge. Existing methods often rely on static reasoning or external visual-language models (VLMs), which face issues like complexity and sub-optimal performance due to the lack of end-to-end training. In this paper, we propose Video-MTR, a reinforced multi-turn reasoning framework designed to enable iterative key video segment selection and question comprehension. Unlike traditional video reasoning pipelines, which generate predictions in a single turn, Video-MTR performs reasoning in multiple turns, selecting video segments progressively based on the evolving understanding of previously processed segments and the current question. This iterative process allows for a more refined and contextually aware analysis of the video. To ensure the quality of the intermediate reasoning process, we introduce a novel gated bi-level reward system, combining trajectory-level rewards based on answer correctness and turn-level rewards emphasizing frame-query relevance. This system optimizes both video segment selection and question comprehension, eliminating the need for external VLMs and allowing end-to-end training. Extensive experiments on benchmarks like VideoMME, MLVU, and EgoSchema demonstrate that Video-MTR outperforms existing methods in both accuracy and efficiency, advancing the state-of-the-art in long-form video understanding.
|
Leveraging end-to-end RL to enable MLLMs to perform multi-turn reasoning.
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=7AXP2RYw2N
| 2025-09-17T11:49:28
| 3
|
[
{
"id": "oVe9T3jgzW",
"forum": "7AXP2RYw2N",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8384/Reviewer_StqD",
"reviewer_name": "Reviewer_StqD",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes Video-MTR, a reinforcement-learning based framework for multi-turn reasoning in long video understanding. The main idea is to start with uniform frame sampling, and then use an MLLM to iteratively decide whether to retrieve additional frames or answer the question. The policy is optimized using PPO with a gated bi-level reward that combines final-answer correctness and turn-level frame-query relevance. Experiments on VideoMME, MLVU, and EgoSchema show improvements over existing open-source baselines on long videos while using relatively few training examples.",
"strengths": "- The paper studies the under-explored idea of combining reinforcement learning with long-video understanding, and the proposed method is novel. The empirical results are quite strong and Video-MTR beats a number of competitive baselines.\n- The paper is mostly clearly written and easy to read. The proposed method is well-motivated.\n- The paper conducts a number of ablation experiments, and in addition to QA accuracy it also measures latency as an additional metric in some experiments in the appendix.",
"weaknesses": "- The design of the reward function seems to be a bit ad-hoc. It would be useful to know how sensitive the method is to hyper‐parameters (thresholds and bonus amounts in each stage).\n- The paper is lacking some error analysis and discussion on failure modes, e.g. when wrong segments are retrieved.\n\nThere are a few occasions in the paper where there might be ambiguities or inaccurate claims, and I would hope that they can be clarified in the paper:\n- Around line 78 the authors claim that \"this is the first attempt to incorporate multi-turn reasoning in the context of long video understanding.\" This claim is a bit inaccurate since one can argue that the large body of existing works in \"agentic\" video models (such as \"VCA: Video Curious Agent for Long Video Understanding\", Arxiv 2412.10471) are also doing multi-turn exploration and refinement of the selected video frames within a long video.\n- Around line 335 the authors claim that \"Most open-source long-video methods operate with ≤ 128 frames.\" It used to be the case but today more and more models support longer context. For example the Qwen2.5-VL-7B model that the paper refers to officially supports a context length of 131072 and 768 frames when processing videos.",
"questions": "I would like the authors to discuss the following concerns I have on the methodology and the experiments. They are not necessarily weaknesses of the paper, but rather questions I would like to gather more information on from the authors:\n- The dataset the authors curated has a size of only 8K, which might be a bit too small for long videos. Do the author agree that it is a valid concern that policy may rely on heuristics tied to those training datasets rather than truly understand how to retrieve, and might not work well on new video domains?\n- For ultra long videos for questions that require complex reasoning and fine-grained understanding at multiple locations of the long video, retrieving only 32/64 frames and only using 3 turns might not be sufficient. Do you have a sense on why increasing these numbers did not lead to uniform increase in performance in your experiments?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T10:56:02",
"modification_date": "2025-11-12T12:04:48",
"review_url": "https://openreview.net/forum?id=7AXP2RYw2N¬eId=oVe9T3jgzW",
"license": "CC BY 4.0"
},
{
"id": "EamHWpfgHe",
"forum": "7AXP2RYw2N",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8384/Reviewer_R4Fn",
"reviewer_name": "Reviewer_R4Fn",
"rating": 4,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper proposes Video-MTR, a reinforced multi-turn reasoning framework designed to enable iterative key video segment selection and question comprehension. Video-MTR performs reasoning in multiple turns, selecting video segments progressively based on the evolving understanding of previously processed segments and the current question. Extensive experiments on benchmarks like VideoMME, MLVU, LVBench, and EgoSchema demonstrate that VideoMTR outperforms existing methods in both accuracy and efficiency, advancing the state-of-the-art in long video understanding.",
"strengths": "1. As claimed by the authors, this could be the first attempt to incorporate multi-turn reasoning in the context of long video understanding.\n2. The proposed method can adaptively select important frames for the question.\n3. Experiments on multiple long-video benchmarks show that Video-MTR outperforms its backbone Qwen2.5-VL with the same number of frames.",
"weaknesses": "1. Qwen2.5-VL can support up to 768 frames and outperforms the proposed Video-MTR with the input of 64 frames. More experiments should be conducted to investigate whether Video-MTR can outperform Qwen2.5-VL with more frames.\n2. It's weired that the QA accuracy in Figure 4 exceed 1. Some explanations should be provided.\n3. The information of compared baseline models is not given, especially their backbone models.\n4. While introducing multi-turn reasoning, the efficiency compared to baselines should be analyzed.\n5. The training framework is complicated with various tricks, which may be unstable in other scenarios.",
"questions": "Please reply to Weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T01:29:33",
"modification_date": "2025-11-12T12:04:49",
"review_url": "https://openreview.net/forum?id=7AXP2RYw2N¬eId=EamHWpfgHe",
"license": "CC BY 4.0"
},
{
"id": "cunFvGiNqq",
"forum": "7AXP2RYw2N",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8384/Reviewer_JQqn",
"reviewer_name": "Reviewer_JQqn",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "Video-MTR is a reinforced multi-turn reasoning framework for long-form video understanding, addressing challenges like long-range temporal dependencies and multi-event complexity. Unlike existing methods (static single-turn reasoning or external VLM-reliant agentic paradigms), it enables iterative key segment selection and end-to-end training via reinforcement learning (RL). Built on Qwen2.5-VL-7B, it uses a gated bi-level reward system (trajectory-level: answer correctness; turn-level: frame-query relevance via IoU) to guide multi-turn reasoning. It starts with uniform frame sampling, then retrieves relevant segments iteratively until confident or reaching a 3-turn limit. The proposed model is trained on only 8K curated samples (from NExT-GQA and QVHighlights), far fewer than existing methods (256K–4.4M).\nExperimental Results: Outperforms baselines on 4 benchmarks (VideoMME: 62.2%, MLVU: 49.8%, LVBench: 41.8%, EgoSchema: 63.4%) with 7B parameters and ≤64 frames, matching large proprietary models (e.g., Gemini-1.5-Pro) with fewer resources.",
"strengths": "Strong Long-Video Reasoning: Multi-turn iteration avoids missing critical info in long videos, with larger accuracy gains for longer videos (+6.3% on long vs. +4.6% on short in VideoMME) and complex tasks (+8.1% on multi-detail tasks in MLVU).\n\nHigh Efficiency:\nData-efficient (8K samples vs. hundreds of thousands/millions).\nCompute-efficient (7B parameters, ≤64 frames) vs. large models or high-frame-budget methods.\nBalanced latency (427.2 ms at 3 turns) vs. accuracy.\n\nEnd-to-End & Tool-Independent: Eliminates external VLMs, avoiding heterogeneous component complexity and enabling unified optimization of segment selection and question comprehension.\n\nRobust & Generalizable: Performs consistently across benchmarks; works for smaller models (Qwen2.5-VL-3B) with accuracy gains.",
"weaknesses": "1. The idea of first grounding/localization and then answering questions is not novel.\n\n2 .Limitations in Complex Tasks: Struggles with multi-event (e.g., action-order) tasks (early stopping due to training bias) and fine-grained perception (coarse frames blur micro-actions like \"brush dipping vs. mixing\").\n\n3. Turn-level rewards need high-quality temporal annotations, making scaling to new domains costly.",
"questions": "No",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-15T16:15:19",
"modification_date": "2025-11-12T12:04:50",
"review_url": "https://openreview.net/forum?id=7AXP2RYw2N¬eId=cunFvGiNqq",
"license": "CC BY 4.0"
}
] |
khBHJz2wcV
|
https://openreview.net/forum?id=khBHJz2wcV
|
Physics-Constrained Fine-Tuning of Flow-Matching Models for Generation and Inverse Problems
| 3
| 3.75
|
[
4,
6,
2,
0
] |
[
4,
3,
4,
4
] | 4
|
[
"Generative Modeling",
"Physics‑Informed Machine Learning",
"Inverse Problems",
"Parameter Identification"
] |
We present a framework for fine-tuning flow-matching generative models to enforce physical constraints and solve inverse problems in scientific systems. Starting from a model trained on low-fidelity or observational data, we apply a differentiable post-training procedure that minimizes weak-form residuals of governing partial differential equations (PDEs), promoting physical consistency and adherence to boundary conditions without distorting the underlying learned distribution. To infer unknown physical inputs, such as source terms, material parameters, or boundary data, we augment the generative process with a learnable latent parameter predictor and propose a joint optimization strategy. The resulting model produces physically valid field solutions alongside plausible estimates of hidden parameters, effectively addressing ill-posed inverse problems in a data-driven yet physics-aware manner. We validate our method on canonical PDE problems, demonstrating improved satisfaction of physical constraints and accurate recovery of latent coefficients. Further, we confirm cross-domain utility through fine-tuning of natural-image models. Our approach bridges generative modelling and scientific inference, opening new avenues for simulation-augmented discovery and data-efficient modelling of physical systems.
|
generative models
|
https://openreview.net/pdf?id=khBHJz2wcV
| 2025-09-19T19:19:43
| 4
|
[
{
"id": "150xxQEo77",
"forum": "khBHJz2wcV",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17809/Reviewer_Mbyw",
"reviewer_name": "Reviewer_Mbyw",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper presents a post-training scheme that takes a pretrained flow-matching generator and tilts its distribution toward PDE-consistent samples by minimizing weak-form PDE residuals—so you can enforce physics (and boundary conditions) without retraining from scratch or having paired (state, parameter) data. Fine-tuning is posed as Adjoint Matching (a memoryless stochastic optimal control) with a small theoretical extension that scales the noise schedule for stability.",
"strengths": "-The key selling point is the enforcement of PDEs via weak-form residuals on a pretrained flow-matching model—no paired data or full retraining while keeping the base model’s inference cost.\n\n-The model jointly evolves state and latent parameters with an inverse predictor, enabling guided sampling from sparse parameter observations and adaptation under model misspecification.",
"weaknesses": "The proposed method is practical for post-training physics enforcement for flow matching (no paired data or full retraining). It is useful and timely, but quite incremental rather than foundational.\n\nPhysics is imposed via a weak-form residual penalty added to the flow-matching objective; it aligns the denoiser with PDE residuals but does not guarantee exact constraint satisfaction.\n\nThe method relies on several heuristics and hyperparameters, such as scaled “memoryless” noise with factor κ, time-grid tilting (q = 0.9), and computing loss on only a subset of late steps (K_last, K). These improve stability but introduce tuning sensitivity without a comprehensive robustness study.",
"questions": "Since physics is enforced via weak-form residual penalties (not hard projections), could you report post–fine-tuning feasibility diagnostics—e.g., distributions of PDE residual norms, boundary-condition violations, and conservation drift?\n\nCan you provide sensitivity curves (accuracy and residuals) versus the hyperparameters across datasets, along with any principled guidance or conditions that ensure stability without ad-hoc tuning? \n\nBecause dense weak-residual evaluation is costly, you use compact test functions and patch-based subsampling. Could you quantify coverage (e.g., how many test centers are required to achieve a target residual error) and explore adaptive sampling that targets high-residual regions?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-05T15:42:06",
"modification_date": "2025-11-12T14:07:31",
"review_url": "https://openreview.net/forum?id=khBHJz2wcV¬eId=150xxQEo77",
"license": "CC BY 4.0"
},
{
"id": "3t7QZxiKLP",
"forum": "khBHJz2wcV",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17809/Reviewer_ZYUG",
"reviewer_name": "Reviewer_ZYUG",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "The paper introduces a physics-constrained fine-tuning framework for pretrained flow-matching generative models, enabling them to satisfy PDE-based physical laws and jointly infer latent parameters without retraining from scratch. It casts the physics-constrained fine-tuning as Adjoint-Matching loss for distribution-level correction and a lightweight architectural extension with residual heads for joint state–parameter evolution. Overall, it positions itself as a general and data-efficient bridge between physics-informed learning and modern generative modeling.",
"strengths": "**S1.** The paper recasts physics-based simulation as an adjoint-matching control framework, elegantly linking preference-aligned generative fine-tuning with physics-constrained inference. This bridges simulation-augmented modeling and stochastic optimal control, enabling physically consistent generative trajectories.\n\n**S2.** The method’s joint treatment of state and latent parameters allows simultaneous forward generation and inverse recovery within a unified flow-matching model.\n\n**S3.** The experimental evaluation is broad and convincing, demonstrating consistent improvements in physics residuals and inverse reconstruction accuracy across multiple PDE benchmarks, while maintaining generative fidelity and efficiency.",
"weaknesses": "**W1**: A core conceptual weakness of the paper is that it does not truly model the joint state–parameter distribution $(x, α)$. The latent variable $α$ is introduced post hoc through a frozen inverse predictor $\\phi(x_1)$, which breaks end-to-end coupling between physical states and governing parameters. I would be interested know what tradeoffs do this approach have. An ablation where the base model jointly learns $(x, α)$ during pretraining would help clarify the effectiveness of the two-stage setup or if a single joint flow $v_t(x, α)$ could achieve stronger physical coherence and lower residuals.\n\n\n**W2:** The mathematical notation is dense and inconsistent, making the exposition difficult to follow even for technically skilled readers. Symbols like ($v_t$), ($b_t$), and ($u_t$) are overloaded across base, fine-tuned, and control flows, while stochastic and deterministic forms are interleaved without clear separation. Algorithm 1 is not self consistent, missing how for e.g. $\\phi$ is used. As a result, the formalism obscures theoretical contributions and could benefit from a clearer hierarchy of variables (e.g., consistently distinguishing state vs. parameter flows) and unified notation across sections.\n\n**W3:** The theoretical novelty of the paper is incremental over the original Adjoint Matching framework (Domingo-Enrich et al., 2025). The core stochastic optimal control formulation, adjoint dynamics, and lean-adjoint optimization are directly inherited, with the main contribution being their adaptation to PDE-constrained fine-tuning. While this is a valuable and well-motivated extension, it constitutes more of an application-level adaptation than a fundamentally new methodological advance.",
"questions": "Q1: How does the proposed fine-tuning framework scale to nonlinearity where PDE constraints become non-analytic or high-dimensional? Are there stability or performance guarantees in such regimes?\n\nQ2: Could the authors clarify which aspects of their method represent core methodological contributions beyond the existing Adjoint Matching framework (Domingo-Enrich et al., 2025)?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T16:34:36",
"modification_date": "2025-11-12T14:07:31",
"review_url": "https://openreview.net/forum?id=khBHJz2wcV¬eId=3t7QZxiKLP",
"license": "CC BY 4.0"
},
{
"id": "zOIoPgVHjN",
"forum": "khBHJz2wcV",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17809/Reviewer_Pud7",
"reviewer_name": "Reviewer_Pud7",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes a post-training framework that fine-tunes flow-matching generators using randomized weak-form PDE residuals and a joint latent-parameter pathway, so the model produces physics-consistent fields while simultaneously inferring hidden coefficients; experiments on canonical PDE tasks indicate reduced residuals with limited impact on sample diversity.\n\nContributions.\n1) Post-training physics alignment: turns an already trained flow-matching model into a physics-respecting generator by minimizing weak-form residuals with compact test functions, avoiding unstable high-order derivatives and limiting drift from the base distribution.\n2) Joint state–parameter generation: augments the generator with a learned evolution for latent physical parameters and an inverse predictor, and fine-tunes both under an adjoint-matching objective (with a scaled memoryless schedule) to couple solutions and parameters.\n3) Practical control and coverage: demonstrates denoising, sparse-observation guidance, and boundary-condition adaptation, and exposes simple knobs to trade off constraint strength versus fidelity/diversity, with lightweight fine-tuning overhead.",
"strengths": "1) Proposes a post-training route to impose physics on pretrained flow-matching models via weak-form PDE residuals, coupled with joint latent-parameter evolution for inverse problems without paired labels; also introduces a scaled memoryless noise schedule within adjoint matching.\n\n2) Grounds the method in adjoint matching and implements randomized local test functions for stable weak residuals; experiments span Darcy denoising, sparse-observation guidance, linear-elasticity boundary adaptation, and a small natural-image recoloring case, with ablations showing a residual–diversity trade-off.\n\n3) Clearly states goals and contributions, provides a pipeline diagram, derives the weak forms and test-function design, includes a full training algorithm and detailed dataset/backbone specs, and offers a reproducibility statement.",
"weaknesses": "1) Diversity objective may be misaligned for PDE solvers. For well-posed forward/inverse PDEs the target is a single solution; promoting output “diversity” is not desirable, and when partial observations make the task ill-posed, diversity stems from the problem, not the pipeline. The paper treats diversity as a knob/metric (SSIM-based) and studies its trade-off against residuals (Fig.\\ 3), which can conflict with PDE goals.\n\n2) Test problems are not comprehensive. Evaluations focus on Darcy denoising, sparse-obs guidance, and a linear-elasticity boundary change, plus a small image recoloring demo; there is no coverage of more challenging PDEs such as Poisson, Navier–Stokes, or Helmholtz, nor larger-scale or multi-physics settings.\n\n3) Limited baselines and quantitative recovery metrics. Beyond an ECI comparison in the elasticity case, there is no systematic head-to-head with alternative physics-constrained generative methods, and the paper provides little quantitative evaluation of latent-parameter recovery accuracy or real-data tests (most results are residual reductions and visuals).",
"questions": "1) Diversity vs. PDE correctness. For well-posed forward/inverse PDEs, please justify when output diversity is desirable; otherwise, replace or augment SSIM-based diversity with task metrics (solution error L2/H1, weak/strong residual distributions, boundary-violation rates) and, for partial-observation settings, include posterior calibration (coverage vs. nominal).\n\n2) Scope of test problems. Add at least one oscillatory elliptic case (Poisson/Helmholtz) and one basic incompressible flow (e.g., lid-driven cavity or cylinder shedding); if new runs are infeasible, provide higher-resolution or 3D variants or a brief scaling analysis (compute, stability bottlenecks).\n\n3) Baselines and recovery metrics. Include matched-compute head-to-head with (i) training-time physics-regularized flow matching, (ii) inference-time projection/ECI, and (iii) a classical PDE-constrained inversion baseline. Report solution error, boundary violations, weak/strong residuals, latent-parameter MAE/RMSE, and wall-clock.\n\n4) Ablations for method choices. Provide a small ablation comparing weak vs. strong residuals (stability, final residuals) and sensitivity to test-function sampling; show how the scaled memoryless parameter kappa affects stability, residual reduction, and drift from the base distribution.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T15:30:03",
"modification_date": "2025-11-12T14:07:32",
"review_url": "https://openreview.net/forum?id=khBHJz2wcV¬eId=zOIoPgVHjN",
"license": "CC BY 4.0"
},
{
"id": "voHngDO1tb",
"forum": "khBHJz2wcV",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17809/Reviewer_n7dD",
"reviewer_name": "Reviewer_n7dD",
"rating": 0,
"confidence": 4,
"soundness": 1,
"contribution": 1,
"presentation": 3,
"summary": "This paper introduces a physics-consrtained post-training scheme for pre-trained flow-matching generators that enforces PDE consistency and simultaneously infers latent physical parameters for inverse problems without paired (state, parameter) supervision. The key idea is to treat weak-form PDE residuals as a reward and fine-tune the generator via Adjoint Matching. Experiments on Darcy flow, linear elasticity and natural images demonstrate PDE residual reduction and parameter recovery.",
"strengths": "## Originality\n- Novel problem formulation. Enforcing parameter-dependent PDE constraints without paired (parameter, solution) training data.\n- Creative architecture design. Joint state-parameter evolution with surrogate base flows constructed via inverse predictor.\n- Technical contribution. Scaled memoryless noise schedule (Lemma 1) provides useful numerical stabilization.\n- Weak-form residual approach.Stochastic sampling of Wendland-wavelet test functions is practical and avoids instabilities of strong-form residuals\n\n## Quality\n- solid theoretical grounding via adjoint matching framework.\n- thoughtful multi-faced regularization with clear ablations.\n- diverse experimental scenarios covering denoising, sparse conditioning and boundary misspecification\n- extensive reproducibility details \n\n## Clarity\nGood visual explanation, well-motivated problem setup\n\n## Significance\nPost-training paradigm is more practical than training from scratch. Proof-of-concept establishes feasibility of joint parameter-solution generation post-training, potentially inspiring future work.",
"weaknesses": "## Unacceptable Absence of Quantitive Results\nThe paper is almost entirely based on qualitative visualizations with virtually no quantitative evaluation. Only Figure 3 quantifies how the hyperparameters mediate the trade-off between staying close to the base model and reducing the (weak) PDE residual. Everyting else is visualizations of cherry picked samples. No quantitive parameter recovery metrics despite solving \"inverse problems\". No numerical residual comparisons with other methods despite claiming constraint enforcement. No baseline comparison tables of even one numerical metric despite comparing with ECI. \nThe absence of quantitive results makes it impossible to assess\n- Whether the method actually works reliably?\n- How it compares to alternatives (I am not talking about a comprehensive benchmarking agains other SOTA methods)\n- When it succeeds vs fails\n- what design choice actually matter.\n\nThe authors are encouraged to conduct thorough quantitative experiments for future submissions of this work.",
"questions": "Please refer to weaknesses section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T05:22:34",
"modification_date": "2025-11-12T14:07:32",
"review_url": "https://openreview.net/forum?id=khBHJz2wcV¬eId=voHngDO1tb",
"license": "CC BY 4.0"
}
] |
|
O4Oy7NsSwG
|
https://openreview.net/forum?id=O4Oy7NsSwG
|
Topology and geometry of the learning space of ReLU networks: connectivity and singularities
| 5.5
| 3.25
|
[
4,
6,
6,
6
] |
[
4,
3,
4,
2
] | 4
|
[
"learning dynamics",
"topology",
"neural networks",
"ReLU networks",
"geometry",
"symmetry",
"loss landscape",
"gradient",
"singularity",
"connectedness"
] |
Understanding the properties of the parameter space in feed-forward ReLU networks is critical for effectively analyzing and guiding training dynamics. After initialization, training under gradient flow decisively restricts the parameter space to an algebraic variety that emerges from the homogeneous nature of the ReLU activation function. In this study, we examine two key challenges associated with feed-forward ReLU networks built on general directed acyclic graph (DAG) architectures: the (dis)connectedness of the parameter space and the existence of singularities within it. We extend previous results by providing a thorough characterization of connectedness, highlighting the roles of bottleneck nodes and balance conditions associated with specific subsets of the network. Our findings clearly demonstrate that singularities are intricately connected to the topology of the underlying DAG and its induced sub-networks. We discuss the reachability of these singularities and establish a principled connection with differentiable pruning. We validate our theory with simple numerical experiments.
|
learning theory
|
https://openreview.net/pdf?id=O4Oy7NsSwG
| 2025-09-13T19:39:01
| 4
|
[
{
"id": "QDl0LSwCbp",
"forum": "O4Oy7NsSwG",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4772/Reviewer_pzFJ",
"reviewer_name": "Reviewer_pzFJ",
"rating": 4,
"confidence": 4,
"soundness": 4,
"contribution": 2,
"presentation": 4,
"summary": "The paper studies invariant sets for gradient flow training of DAG-based ReLU architectures and singularities within those invariant sets.",
"strengths": "The paper is very well written and provides some insights on properties of the training dynamics.",
"weaknesses": "To me the results seem to be relatively minor and easy extensions of previous results. The authors suggest that formulating these conservation laws with the use of the incidence matrix of the DAG gives significant new insight. But as far as I can see, the main insight is that there a singularities when parts of the graph become disconnected, which does not seem to be surprising.",
"questions": "On one hand, singularities could be a concern, because they cannot be escaped once reached. On the other hand you suggest that they may be desirable in the sense that they can be seen as the model performing some automatic pruning during training (and indeed you suggest that one may want to induce singularities intentionally). May question is whether there could not be a worry that inducing singularities prematurely limits the model and prevents it from later converging to more favorable solutions which require all neurons (or at least some of the prematurely pruned ones).",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T20:30:37",
"modification_date": "2025-11-12T11:19:22",
"review_url": "https://openreview.net/forum?id=O4Oy7NsSwG¬eId=QDl0LSwCbp",
"license": "CC BY 4.0"
},
{
"id": "8LyWSQ16cV",
"forum": "O4Oy7NsSwG",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4772/Reviewer_j6Nf",
"reviewer_name": "Reviewer_j6Nf",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper studies the properties of the parameter space of ReLU networks, notably in order to decide whether this space is connected and/or contains singularities, which are relevant questions to consider when targeting an optimally trained network, or to prune the network without losing performance/expressiveness, respectively.\n\nThe authors consider the framework of Directed Acyclic Graphs (DAGs), which is more general than layered architectures,and focus in this paper on properties of homogeneous activation functions, in particular here the ReLU activation function.\n\nThey show a complete characterization of the connectedness of the learning space under GF with any given initialization, analysing to this end the role of bottleneck vertices in the network (that is, vertices with only one out-going arc, or only one in-coming arc) and the balance conditions (invariant under GF once the initialization is done) on related sets of vertices.\n\nMoreover, the authors study singularities, namely parts of the learning space where part of the network stops contributing to the computation. They prove a link between the existence of such singularities and the already mentioned balance conditions, and that even when the conditions are gathered, a GF algorithm will not stumble upon a singularity in finite time. \nThe authors circumvent this impossibility to favor \"self-pruning\" by using regularization, and provide numerical experiments showing which regularization helps driving the model towards singularities.",
"strengths": "Provides a sound and thorough theoretical analysis of the connectivity of learning space for ReLU-activated DAGs Networks trained under GF after arbitrary initialization.\n\nTheoretical analysis of the conditions of existence of singularities, and of the possibility to reach them, complemented with experimental results on tools to reach these singularities in practice.",
"weaknesses": "The results on connectivity might be achievable with simpler tools and less technicality.\n\nThe experimental part on connectivity does not bring anything to the discussion. \n\nThe introduction of some notions and symbols is lacking.",
"questions": "p3, discussing on re-scaling: Do you assume here and in the rest of the paper that all biases are 0?\n\np4, top of the page: do you have any other requirement on $\\\\ell$, other than it being differentiable? For instance $\\\\ell(x,x)=0$ ?\n\np5, Definition 1: $\\\\theta^2$ is the vector obtained from $\\\\theta$ by squaring each individual element? Or do you here implicitly use some other product?\n\np5, Proposition 3: the point of view of network flows can be obtained in a simpler way as what is done in Appendix A.3. Indeed, since the source and sinks have unconstrained flows, it would suffice to initialize all edge weights with 1, and then correct the balance for each node $u$ with a simple edition of the weights along a path from an arbitrary source to an arbitrary sink going through $u$. Does the algebraic point of view give, in some way, more insight for this paper?\n\np6, Theorem 1: I think the proof could take a shortcut (following the idea of the precedent remark, and the intuition-providing text at the beginning of page 7: first prove that if the conditions are not satisfied, it is unfeasible to satisfy the responsible set of vertices, and if it is, show that fixing first the edge weights incident to $Anc(v)$ (or $Desc(v)$) to satisfy the local constraints, and then construct the rest of the solution greedily without editing any edge incident to $Anc(v)$ (which is then possible by definition of this set, anything outside is on at least one path from source to sink avoiding $Anc(v)$). \nFor the trivial case where the deleted $e$ does not correspond to a bottleneck, the result is immediate.\nThen, the Proposition 4 directly yields the result.\nIs there a reason for taking the long and more technical way?\n\np6, Figure 2d: the figure is a good illustration of the proved theorem. I don't understand however, what the additional experiments on real data (Appendix A.9.1) bring to the paper, since it needs no further empirical demonstration that the space is disconnected. As I am less acquainted with the experimental side, could you indicate what I am missing here?\n\np8, Proposition 6: is the converse known to be true/false? \n\np9: When and why is self-pruning interesting to have? I understand why one wants to self-prune when the initialization was made such that some singularity exist, but is there an advantage in how expressive the network can be when initialized with a reachable singularity, versus when initialized such that none can be attained by GF?\n\n\nTypos and Suggestions:\n\np3, Symmetries of ReLU networks: $\\\\sigma$ is not introduced, which could be done by adding \"the activation function\" in front. Moreover, the formulae for ReLU and Leaky ReLU are both wrong: ReLU: $\\\\sigma(z)=\\\\max\\\\{z, 0\\\\}$ and Leaky ReLU: $\\\\sigma(z)=\\\\max\\\\{z, \\\\gamma z\\\\}$.\n\np3, Local conservation laws under gradient flow: the variables $d$ and $e$ are clear from context but should be defined nonetheless. \n\np6, Definition 2: prefer the use of \"...with $V^-_B$, $V^+_B$ the sets..., respectively.\"\n\np6, Figure 2c: (ii) this case is misleading, since it is not obvious without the text that the case (iii) can \"override\" it. It also is technically not true, since there could be a completely independent vertex $v'$ in the network making the space disconnected. Maybe find an alternate formulation meaning roughly \"a priori connected\".\n\np7, Corollary 2: unless some intermediate layer has a single neuron! 
It might not be an interesting case, but the soundness of the corollary requires excluding it.\n\np7: \"Concretely, it means that the balance condition ... will forbid sign switches ...\" This is the important intuition behind this section, it would be valuable to highlight it more, and potentially to merge it with the previous paragraph. \n\n\n\nI am open to updating my rating of the paper, depending on the answers provided by the authors.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T00:01:51",
"modification_date": "2025-11-12T11:19:23",
"review_url": "https://openreview.net/forum?id=O4Oy7NsSwG¬eId=8LyWSQ16cV",
"license": "CC BY 4.0"
},
{
"id": "L6BjbXTkBb",
"forum": "O4Oy7NsSwG",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4772/Reviewer_Fprd",
"reviewer_name": "Reviewer_Fprd",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper studies the feed-forward ReLU networks defined over directed acyclic graphs, examining the (dis)connectedness of the parameter space and the existence of singularities within it. The conservation laws under gradient flow are identified. Due to the disconnectedness of certain parameter configurations, certain singularities are unreachable, reducing the expressivity of ReLU networks at initialization.",
"strengths": "- The paper is relatively well-written and polished. Illustrative figures are provided to accompany the theoretical results and aid understanding.\n- The theoretical formulation is clean.\n- The result on the disconnectedness of the parameter space is somewhat surprising. The implication of losing expressivity at initialization seems interesting.\n- Some numerical experiments are conducted to validate theoretical results.",
"weaknesses": "- I am wondering whether the disconnected case occurs in fully-connected ReLU networks or not, since the example network given in Figure 2(d.1) does not look like a fully-connected network. If the disconnection only occurs in networks that are not fully connected, then the statement in line 358 may be inaccurate: \"the expressivity can be reduced to the extent that they lose their universal approximation capability\"; because ReLU networks that are not fully connected are not universal approximators to begin with. Please feel free to correct me if I have misunderstood your results.\n- In line 423, the authors state that: \"given a random initialization, the probability of $\\mathcal H_G(c)$ having singularities is zero.\" I trust that this statement itself is correct. However, it doesn't necessarily mean that the gradient flow/descent cannot go near singularities. It's quite common that ReLU networks can have saddle-to-saddle dynamics, in which the gradient flow path passes near a sequence of fixed points [1]. In those cases, even though the dynamics from random initialization never puts the parameters exactly in an invariant set, going near those fixed point is still a very prominent, if not the most prominent, trait of the learning dynamics. If I didn't misunderstand the result, the paragraph \"singularities are rare\" should probably come with more nuance or caveat -- \"probability of having singularities being zero\" doesn't mean that learning dynamics doesn't go near singularities.\n- The conservation laws arising from symmetries are also studied in [2]. I am wondering how their results relate to yours results in \"local conservation laws under gradient flow\" in line 160.\n- It might be useful to also discuss the limitation of studying gradient flow in place of SGD. Because the quantities that obey conservation laws under gradient flow can actually be time-varying in SGD [3,4].\n\n[1] Boursier et al. \"Gradient flow dynamics of shallow relu networks for square loss and orthogonal inputs.\" NeurIPS 2022.\n\n[2] Ziyin. \"Symmetry induces structure and constraint of learning.\" ICML 2024.\n\n[3] Liu et al. \"Noise and fluctuation of finite learning rate stochastic gradient descent.\" ICML 2021.\n\n[4] Chen et al. \"Stochastic collapse: How gradient noise attracts sgd dynamics towards simpler subnetworks.\" NeurIPS 2023.",
"questions": "Is there a particular reason to use the uncommon notation of double angle brackets $《》$? I struggled to understand it from a short inline definition given in line 177. I also didn't know if this notation is essential for reading and understanding the main results.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T22:13:08",
"modification_date": "2025-11-12T11:19:23",
"review_url": "https://openreview.net/forum?id=O4Oy7NsSwG¬eId=L6BjbXTkBb",
"license": "CC BY 4.0"
},
{
"id": "4bYzYMcFjn",
"forum": "O4Oy7NsSwG",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4772/Reviewer_aTae",
"reviewer_name": "Reviewer_aTae",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper presents their study on the function spaces parameterized by polynomial neural networks (i.e., those whose activation functions are polynomial). There are two main contributions: identifiability and singularity of functions in the neuromanifold (i.e., functions representable by neural networks). For the former, the authors show that for generic functions in neuromanifold, the set of parameters realizing these functions is at most finitely many or singleton, for Multi-Layer Perceptrons (MLP) and Convolutional Neural Networks (CNN) architectures respectively. For the latter, they characterize singularities as functions realized by sparse subnetworks and links this discovery to the sparsity bias of MLPs.",
"strengths": "The paper are generally well-written and the results are well-presented. While I do not dive into the proof, their results look sound to me. Two contributions are mathematically interesting and suggest further following work.",
"weaknesses": "Several points deserves to be further polished:\n1. Since most architectures use ReLU, I find that it is better to connect the current results to the ReLU cases (authors did admit this limitation in section 5).\n2. The bound on the degree of the activation in Theorem 4.1 is vacuous in the dimensions of the neural network architecture. Hence, I am not sure if this result reflects what we truly observe in practice.\n3. If I understand it correctly, the definition of critically exposed implies that there exists a positive probability that mappings $u$ admit a weight in a critically exposed set as critical points of the training dynamics (provided that we have sufficiently data). However, since we are unable to quantify this probability, they might be negligible and might vanish when dimension increases. I am not sure if we can use this notion to explain the so-called ``bias towards sparse subnetworks'' as in the paper.",
"questions": "1. In section 3.2, the link between optimization on the parameter space and on the neuromanifold is rather hand-waving. I wonder if there is a real relation between these two (under suitable conditions).",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T18:43:19",
"modification_date": "2025-11-12T11:19:23",
"review_url": "https://openreview.net/forum?id=O4Oy7NsSwG¬eId=4bYzYMcFjn",
"license": "CC BY 4.0"
}
] |
|
iITycdPaOd
|
https://openreview.net/forum?id=iITycdPaOd
|
Structure before the Machine: Input Space is the Prerequisite for Concepts
| 3
| 3.5
|
[
4,
2,
4,
2
] |
[
3,
4,
4,
3
] | 4
|
[
"Spectral Principal Paths",
"Linear Representation Hypothesis",
"Representation Learning"
] |
High-level representations have become a central focus in enhancing AI transparency and control, shifting attention from individual neurons or circuits to structured semantic directions that align with human-interpretable concepts. Motivated by the Linear Representation Hypothesis (LRH), we propose the Input-Space Linearity Hypothesis (ISLH), which posits that concept-aligned directions originate in the input space and are selectively amplified with increasing depth. We then introduce the Spectral Principal Path (SPP) framework, which formalizes how deep networks progressively distill linear representations along a small set of dominant spectral directions. Building on this framework, we further demonstrate the multimodal robustness of these representations in Vision-Language Models (VLMs). By bridging theoretical insights with empirical validation, this work advances a structured theory of representation formation in deep networks, paving the way for improving AI robustness, fairness, and transparency.
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
|
https://openreview.net/pdf?id=iITycdPaOd
| 2025-09-19T02:08:45
| 4
|
[
{
"id": "3KtAkpoZn3",
"forum": "iITycdPaOd",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13528/Reviewer_UAP6",
"reviewer_name": "Reviewer_UAP6",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "The paper develops a theoretical framework aimed at understanding how linear representations of high-level concepts emerge in deep neural networks. The authors hypothesize that observations in the ambient space are a linear combination of the high-level concept and spurious linear directions. This is coined the Input Space Linearity Hypothesis (ISLH). \n\nBuilding on this, the authors derive the Spectral Principal Path (SPP) for an affine network without non-linearities. The SPP is the path through projection matrices for which output-input singular vectors of adjacent matrices remain mostly co-linear, whilst corresponding singular values are large. Through Theorem 4.1, the ILHR and SPP framework is linked to the Linear Representation Hypothesis, connecting the work with prior research.",
"strengths": "The paper is well-written, quite self-contained, and has well-crafted illustrations.\n\nThe proposed method is both creative and original. The authors honestly point out that “our current framework is subject to several limitations.” (conclusion). \n\nWhile the developed method rests on a few key assumptions, it is elegant and well thought through. Given that the assumptions hold in practice, which the presented results suggest, the framework is a notable step towards better understanding how linear representations of high-level concepts emerge and propagate through deep neural networks.",
"weaknesses": "1. Introduction: covers the core concepts; however, readers unfamiliar with the LRH paper (Park et al. 2023) have considerably less context for understanding the setting.\n\n1. The theoretical derivation of the SPP method hinges on having a generalized network, like equation 5, without non-linearities. Yet, little discussion is provided on the limitations that come with this assumption.\n\n1. It wasn’t immediately clear to me why the second term in equation 8 is inserted. Based on the model specification in equation 5, this term would be zero. It would be good to state more explicitly for which case(s) this term is non-zero, e.g., referring to Appendix A.2.1.\n\n1. While the authors state in their reproducibility statement that they are committed to releasing the code, no private repository was provided as part of the review. This complicates disambiguating parts of the work, specifically related to these points:\n\n 1. Complexity of computing the SPP (equation 13): Finding the principal path is exponential in the depth of the model. Hence, for a $L$ layer network where the layer-wise jacobians have ranks r_1, …, r_L, the possible paths are: $\\prod_{l=1}^L r_l$. While this could become prohibitive for deep networks, it is not something that the authors discuss nor provide details on how to compute in practice.\n 1. It is not immediately clear how the results in Figures 2 and 4 were computed. Please see the questions below for more details on where the confusion lies. Without further elaboration, it raises questions about the robustness and generality of the proposed framework.\n\n 1. In line with the above, I am left with unanswered questions in terms of how rigorously the proposed method has been evaluated. At the outset, the results seem mostly qualitative, which weakens the support for ISLH and SPP.",
"questions": "1. Line 51: Maybe it would be useful to introduce the notion of “unembedding space”, this concept is not really a “standard” standard concept.\n\n1. Line 139: “recent advances in interpretability have shifted the focus toward analyzing representation”, please give references to this statement.\n\n1. Line 148: “co ncept” -> “concept”\n\n1. Eq. 3+4, should $x$ not be $\\bf{x}$?\n\n1. Figure 2: It is unclear how the cosine similarities in Figure 2 were computed; was a single sample $x$ used, or were cosine similarities between principal singular vector(s) and multiple samples for the same and/or distinct concepts used? \n\n1. Figure 2: How should the polar plot be interpreted exactly? Is it the components of $f_l(x)$ that are plotted as the dashed lines, or how exactly was this plot constructed?\n\n1. Figure 3: You write in the context of the figure (Line 323), “only a very small subset of singular values are amplified”. This interpretation seems a bit forced, considering that several layers have many non-zero magnitude singular values. Also, the relative difference between the highest singular value remains quite small compared to other non-zero singular values e.g., layers 19-21. Could you elaborate on Line 323 in relation to Figure 3?\n\n1. Figure 4: How did you compute the concept direction $\\bar{\\lambda}_W$ for the results? Was a single concept used for this computation, or are the cosine similarities across layers an average for multiple concept directions?\n\n1. Figure 5: What constitutes a “low” and a “high score?\n\n1. Section 5.5: Experiments in sec 5.5, this is quite qualitative, can this not be made more quantitative?\n\n1. Conclusion: Should this be written in past tense?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T04:13:03",
"modification_date": "2025-11-12T13:08:52",
"review_url": "https://openreview.net/forum?id=iITycdPaOd¬eId=3KtAkpoZn3",
"license": "CC BY 4.0"
},
{
"id": "RbxGXSy0KN",
"forum": "iITycdPaOd",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13528/Reviewer_MB12",
"reviewer_name": "Reviewer_MB12",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 1,
"summary": "The paper claims that concept directions already exist in the input space, and neural networks propagate these directions along a few dominant paths layer by layer. Assuming this structure (ISLH), the authors argue that the LRH naturally emerges. The paper expands layer-wise Jacobians via SVD and defines a \"path gain\" as the product of singular values and inter-layer alignments. A few experiments are conducted on a VLM to probe intermediate-layer concept alignment.",
"strengths": "1. Analysis of the LRH and the inquiry for the reason of its emergence is interesting, analysing it through intermediate layers is sound.",
"weaknesses": "1. LRH has mainly been discussed in text-only contexts, and the ISLH is also formulated that way. Yet the experiments use a VLM, and all actual manipulations seem to be on the language side. It’s unclear what the vision modality contributes here, this effectively invalidates the claimed \"raw-space\" perspective.\n2. The exposition is confusing (see Questions).\n3. Theoretical claims are made for linear networks. Since a purely linear network can be collapsed into a single matrix, it’s unclear how this is informative for real nonlinear architectures.\n4. Section 5.2 largely repeats Section 4.3 without adding new content.\n5. Experimental evaluation is weak and mostly qualitative.\n6. 5.5.1 repeats contents of 5.5.2.",
"questions": "1. Eq 12: is G(\\mathcal{P}) appears to not be a scalar that can be maximized - can the authors clarify?\n2. How to interpret Fig 2: what do the angles represent?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T19:11:34",
"modification_date": "2025-11-12T13:08:52",
"review_url": "https://openreview.net/forum?id=iITycdPaOd¬eId=RbxGXSy0KN",
"license": "CC BY 4.0"
},
{
"id": "p92E49Z2GU",
"forum": "iITycdPaOd",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13528/Reviewer_XQ8K",
"reviewer_name": "Reviewer_XQ8K",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces the Spectral Principal Path (SPP) framework to explain the emergence and stability of linear representations in deep networks. The authors propose the Input-Space Linearity Hypothesis (ISLH)—an extension of the Linear Representation Hypothesis (LRH)—which suggests that concept-aligned directions originate in the input space and are selectively amplified through network depth.\nThe authors conside a derivation based on the Jacobian SVD decomposition paired with cumulative gain maximization, arguing that representations found in the input space propagate along a few dominant spectral paths aligned with the largest singular values.\n\nThe experiments show that principal singular vectors stabilize in deeper layers, singular values exhibit selective growth, and concept directions become increasingly concentrated and stable, supporting the theoretical claims.",
"strengths": "The work bridges theory and interpretability, offering an appealing spectral perspective on how neural networks encode and stabilize concepts.\n\nThe proposed Spectral Principal Path framework provides a unified spectral mechanism that connects the input-space structure to the linear separability observed in deep representations, while this link is well articulated in the main text. \n\nMoreover, the provided evidence of singular vector stabilization and selective singular value growth provide some important insights into the observed representational coherence.",
"weaknesses": "My main concerns revolve around some experimental choices of the authors and some restricting assumptions considered. Specifically:\n\n*Simplified architecture assumption*: The SPP derivation assumes a purely stacked linear model, which may not capture non-linearities or normalization effects critical in deep networks. How robust is the theory under more realistic settings? Would the insights provided in the manuscript extend to more common architectures?\n\n*Residual and attention extensions*: In the main text, line 214, the authors mention \"we show our extension to residual connections and attention mechanisms\". I find this to be a bit misleading; while residual connections are somewhat discussed theoretically (on a single paragraph), the attention mechanism is only empirically justified. Could the authors provide a more formal link between the residual connections/attention operators and the spectral path analysis? If not, is the only way to validate if the findings hold through a questionable in terms of interpretation empirical evaluation?\n\n*Limited model diversity*: The experiments rely on a single model (Idefics2-8B). How general are the observed SPP behaviors across other architectures (e.g., LLMs, CNNs, or smaller-scale models)?\n\n*Evaluation Setup*: Testing SPP behavior in binary “concept flip” or contrastive settings (inspired by RePe) seems restrictive. Would the same spectral alignment hold in continuous or open-ended concept spaces?\n\n*Interpretation of multimodal results*: While the LAT scans are visually compelling, it’s unclear how they directly confirm spectral path propagation rather than post-hoc representational clustering. Similar results where provided in the original LAT paper, that does not consider SPP.",
"questions": "Please see the Weaknesses section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T22:10:16",
"modification_date": "2025-11-12T13:08:53",
"review_url": "https://openreview.net/forum?id=iITycdPaOd¬eId=p92E49Z2GU",
"license": "CC BY 4.0"
},
{
"id": "yxP27BUIdx",
"forum": "iITycdPaOd",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13528/Reviewer_SVti",
"reviewer_name": "Reviewer_SVti",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes Input-Space Linearity Hypothesis (ISLH), extending the Linear Representation Hypothesis (LRH) to the input space where each sample is treated as an entangled mixture. The paper then proposes a Spectral Principal Path (SPP) framework, which provides an explanation for LRH based on ISLH, that there exists spectral paths along certain singular directions to amplify and propagate the linear direction from inputs to outputs. It then runs experiments from a reference paper, RepE, on VLMs, and obtains similar results to the reference.",
"strengths": "The spectral path viewpoint is new and intriguing.\n\nThe paper writes clearly and is easy to read.",
"weaknesses": "Unclear or missing description of experimental setup. For example, what is the data used to plot Figures 2, 3, and 4? What is x here? What is the concept used (W) to obtain Figure 4? Further, how is the conclusion “only a very small subset of singular values are amplified; the remainder stay close to their initial scale” obtained from Figure 3?\n\nDisconnection between the framework and experiments. (1) The framework builds upon inputs in a vector space propagated through a generalized network, while the experiments are launched on Idefics2-8B, which consists of two encoders for text and image modalities, respectively. Though these two modalities converge into a single embedding space, it is unclear whether the framework is useful for visual inputs, since later (contracting concept) experiments are all based on text inputs. Still, for text inputs, it is unclear whether they fulfill the basic assumption that “each sample is an entangled mixture” in a vector space. (2) Section 5.5 is detached from the framework. According to the description, they are literally a replication of what has been done in [1], on VLM instead of LLM. The plots do not provide sufficient support to the proposed framework or help claim its validity.\n\n[1] Representation Engineering: A Top-Down Approach to AI Transparency",
"questions": "See weaknesses. Besides the above points:\n\nWhy do you call Section 5.5.1 (line 373) your method? What did you modify compared to the original algorithm in [1], used to plot Figure 9 in their Section 4.3.2? \n\nAlso, which layer in which module (vision or text) did you choose? Notice in [1], the authors clearly stated that they did a sweep to identify one of the strongest layers for concept reading and picked that one. This is related to your description in Line 424 (“layer-agnostic”), which is incorrect.\n\nMisc: (typos) in line 148 concept; line 237 singular.\n\n[1] Representation Engineering: A Top-Down Approach to AI Transparency",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T23:53:36",
"modification_date": "2025-11-12T13:08:53",
"review_url": "https://openreview.net/forum?id=iITycdPaOd¬eId=yxP27BUIdx",
"license": "CC BY 4.0"
}
] |
|
oRmo4p1KEE
|
https://openreview.net/forum?id=oRmo4p1KEE
|
QuadGPT: Native Quadrilateral Mesh Generation with Autoregressive Models
| 5.5
| 3.75
|
[
8,
4,
4,
6
] |
[
4,
3,
4,
4
] | 4
|
[
"Autoregressive Quad Mesh Generation",
"Reinforcement Learning",
"Topology Optimization"
] |
The generation of quadrilateral-dominant meshes is a cornerstone of professional 3D content creation.
However, existing generative models generate quad meshes by first generating triangle meshes and then merging triangles into quadrilaterals with some specific rules, which typically produces quad meshes with poor topology.
In this paper, we introduce QuadGPT, the first autoregressive framework for generating quadrilateral meshes in an end-to-end manner.
QuadGPT formulates this as a sequence prediction paradigm, distinguished by two key innovations: a unified tokenization method to handle mixed topologies of triangles and quadrilaterals, and a specialized Reinforcement Learning fine-tuning method tDPO for better generation quality.
Extensive experiments demonstrate that QuadGPT significantly surpasses previous triangle-to-quad conversion pipelines in both geometric accuracy and topological quality.
Our work establishes a new benchmark for native quad-mesh generation and showcases the power of combining large-scale autoregressive models with topology-aware RL refinement for creating structured 3D assets.
|
A novel method that directly generates quad-dominant meshes with superior topology, overcoming the limitations of conversion-based approaches.
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=oRmo4p1KEE
| 2025-09-01T20:54:56
| 4
|
[
{
"id": "w9Icudi0Iw",
"forum": "oRmo4p1KEE",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission213/Reviewer_87rP",
"reviewer_name": "Reviewer_87rP",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 4,
"presentation": 3,
"summary": "This paper presents an auto-regressive mesh generation model that can produce both triangular meshes and quadrilateral meshes. The model is trained with a multi-stage training strategy: 1) pre-training on triangular meshes only, 2) fine-tuning on a mixture of triangular and quadrilateral meshes, 3) reinforcement learning post-training to enhance the topology of the generated meshes.",
"strengths": "1. This paper represents the first work on quadrilateral mesh generation. It proposes a hybrid representation for triangles and quadrilaterals, enabling the generation of artist-like 3D meshes input point clouds.\n\n2. The paper presents a multi-stage end-to-end training framework, which incorporates both traditional next-token-prediction and sequence-level reinforcement learning supervision. The generation capability is improved in a targeted manner.\n\n3. As observed in the metrics and qualitative study, the model achieves superior results. The paper also provides many potential applications of the generated meshes, highlighting its practical usefulness.",
"weaknesses": "1. As the model supports both triangular and quadrilateral mesh generation, it would be better if treated triangular mesh generation as an evaluation task. It is interesting to study: \n - whether training on the mixed representation helps the generation of triangular meshes, \n - whether the proposed reinforcement learning also improves the generation of triangular meshes.\n\n It is also a natural request since previous works mainly focus on generating triangular meshes.\n\n2. How well model follows the `quad-dominance parameter`? Now that we have mixed representations, *i.e.* triangular and mixed triangular and quadrilateral representations. Is the `quad-dominance parameter` sufficient for conditioning the model to generate the desired mesh representation? It would be better to show different generation results (triangular and mixed representations) for the same geometry.",
"questions": "1. As one of the advantages of generating quadrilateral meshes is to save tokens (representing two triangles ~2 x 9 coords with one quadrilatero ~12 coords) It is also interesting to study how much the current model can represent complex geometries with less tokens compared to previous methods.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T17:31:43",
"modification_date": "2025-11-12T10:44:30",
"review_url": "https://openreview.net/forum?id=oRmo4p1KEE¬eId=w9Icudi0Iw",
"license": "CC BY 4.0"
},
{
"id": "ZtOtzPpcft",
"forum": "oRmo4p1KEE",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission213/Reviewer_cNgg",
"reviewer_name": "Reviewer_cNgg",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "1 The paper extends prior triangle-based mesh generation frameworks to quadrilateral mesh generation by introducing an additional vertex in the sequence representation.\n2 The paper propose a topology-aware reinforcement learning fine-tuning method (tDPO) to enhance QuadMesh quality",
"strengths": "1. Indeed, this paper extends existing triangle-based mesh generation methods to quadrilateral mesh generation.\n\n2. The paper introduces a reinforcement learning strategy with one reward function that encourages long continuous edges and one penalty function that discourages fractures, aiming to improve mesh quality.",
"weaknesses": "Although this paper is the first to extend autoregressive mesh generation to quadrilateral meshes, the extension—essentially adding one additional vertex—feels rather trivial and not strongly innovative.\n\nThe training process is also computationally expensive, requiring 64 A100 GPUs for 7 days, which makes it difficult for other researchers to reproduce the results. If the authors do not plan to release the code and weights, the paper’s academic contribution will be quite limited. Furthermore, the model is trained on proprietary licensed assets, which further reduces reproducibility and makes independent verification challenging.\n\nOverall, this paper follows a typical data-driven approach—collect/label/clean a large dataset and train a large model. The methodology itself is not particularly innovative or inspiring, and it is unclear how future researchers could benefit from it.",
"questions": "I could like the author to report NC and |NC| similar to other papers.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T17:30:29",
"modification_date": "2025-11-12T10:44:31",
"review_url": "https://openreview.net/forum?id=oRmo4p1KEE¬eId=ZtOtzPpcft",
"license": "CC BY 4.0"
},
{
"id": "2s0WtO1Mhj",
"forum": "oRmo4p1KEE",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission213/Reviewer_SA17",
"reviewer_name": "Reviewer_SA17",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 4,
"summary": "This work introduces QuadGPT, an autoregressive framework for direct generation of quadrilateral meshes. The input condition is a point cloud with normals. The authors propose a unified representation through padding, supporting mixed-element topologies (triangles + quads). The model is built upon an hourglass architecture. The pretraining loss is a standard cross-entropy loss. The model is further finetuned by a reinforcement learning approach with truncated direct preference optimization, rewarding coherent edge loops. Experiments show that QuadGPT generates higher quality quad meshes when compared to prior methods, both quantitatively and qualitatively.",
"strengths": "- This work introduces an end-to-end learning-based framework for direct quad mesh generation from point clouds. This is challenging as meshes have complex structures with significantly large numbers of face and vertex elements, and forming coherent edge loops as in professional-crafted quad meshes is difficult.\n- To promote clean topologies in the generated quad meshes, the authors introduce a reinforcement learning stage, optimizing a direct preference optimization objective rewarding long, coherent edge loops. To handle long sequences, the authors use a truncated, local window-based approach.",
"weaknesses": "- The novelty of the proposed method is somewhat limited: the straightforward padding in the sequence representation to support triangles + quads, the hierarchical hourglass architecture from MeshTron [Hao et al. 2024], the direct preference optimization already proposed for mesh generation in DeepMesh [Zhao et al. 2025]. Overall, the proposed method seems to be a simple extension of those existing works to quad mesh generation.\n- The experimental evaluation is less comprehensive. Comparisons to the triangle mesh generation baselines (e.g., MeshAnything, DeepMesh) may not be fair, due to the difference in training data and model capacity for long sequences (Fig 4).\n- The proposed model has less controllability over the ratio of triangles and quads in the output mesh, though a conditioning mechanism with a quad-dominance parameter is introduced.",
"questions": "- As mentioned in the weaknesses, the comparisons to triangle mesh generation baselines need to be strengthened. For example, in the training strategy, the authors already pretrained a model exclusively on triangle meshes (L247-248). Combining this model with triangle-to-quadrilateral conversion could be a strong baseline, reducing the difference of training data and model capacity used in other triangle mesh generation baselines.\n- The authors introduced a learnable embedding for a quad-dominance parameter to control the target ratio of face types. However, L990 seems to indicate that this is not effective in practice, and there is no corresponding quantitative analysis.\n- In L111, the authors claim that QuadGPT bridges the gap between text/image inputs and production-ready 3D artist meshes. However, the majority of the results in the main text are generated from point clouds.\n- There is no promise of code and data release. Though the authors mention that a public API will be provided, access to the full model weights and training data is important for reproducibility and follow-up research.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T05:00:23",
"modification_date": "2025-11-12T10:44:31",
"review_url": "https://openreview.net/forum?id=oRmo4p1KEE¬eId=2s0WtO1Mhj",
"license": "CC BY 4.0"
},
{
"id": "DXSXC2gCoN",
"forum": "oRmo4p1KEE",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission213/Reviewer_N2bc",
"reviewer_name": "Reviewer_N2bc",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper introduces QuadGPT, an end-to-end autoregressive framework for generating native mixed quadrilateral and triangular meshes directly from a point cloud input. QuadGPT proposes a unified serialization scheme that handles both triangle and quad faces using a padding-based tokenization, along with an Hourglass Transformer architecture for efficient processing of long sequences. Furthermore, the model is refined using a reinforcement learning approach, which uses a topological reward function to encourage the formation of clean, production-ready edge loops. The authors demonstrate that this approach significantly surpasses prior state-of-the-art methods in both geometric fidelity and topological coherence on a large, curated dataset.",
"strengths": "- The paper is clearly written and easy to follow. \n- The experimental results are impressive, demonstrating strong generalization capability on a wide range of meshes.",
"weaknesses": "- The method itself is a combination of previous efforts (e.g., hourglass transformer and quad-dominance control form Meshtron, direct mesh tokenization from MeshXL, point cloud encoder from MeshAnything), with the biggest difference as the introduction of a mixed quad-triangle setting, which is a rather simple extension.\n- There is no ablation on the dataset. It's hard to tell if the performance boost is mainly from better data quality, and the comparison with previous works trained on public datasets is unfair. \n- Missing references and discussions on the triangle-to-quad conversion algorithm, for example, Blossom-Quad [1] and Blender's built in algorithm.\n\n[1] Blossom-Quad: A non-uniform quadrilateral mesh generator using a minimum-cost perfect-matching algorithm; Remacle, J‐F., et al.",
"questions": "- The padding-based serialization seems pretty plain and inefficient for meshes with many triangles. Have the authors considered about using token compression techniques like BPT?\n- I wonder if the authors have tried experiments on openly available dataset? It's unfair to compare with other models trained on different datasets (of potentially worse quality). It's understandable to use proprietary datasets for the best performance, but the author should at least do some ablation study on the dataset quality, which can provide some insights for future research (e.g., the TripoSG paper).\n- How effective is the quad-dominance parameter for controlling the ratio of quad vs triangular faces? It also sounds pretty empirical to gradually anneal the data distribution with r from 0 to 1.\n- What's the insight for native quad mesh generation? Especially, I wonder its difference from first generating pure triangular meshes with the same model (e.g. use r=0), and then apply the proposed triangle-to-quad algorithm.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-24T21:46:19",
"modification_date": "2025-11-12T10:44:31",
"review_url": "https://openreview.net/forum?id=oRmo4p1KEE¬eId=DXSXC2gCoN",
"license": "CC BY 4.0"
}
] |
0aBAAS0rRT
|
https://openreview.net/forum?id=0aBAAS0rRT
|
Map as a Prompt: Learning Multi-Modal Spatial-Signal Foundation Models for Cross-scenario Wireless Localization
| 5.333333
| 2.666667
|
[
6,
4,
6
] |
[
2,
3,
3
] | 3
|
[
"Wireless Localization",
"Foundation Models",
"Self-Supervised Learning",
"Fine-Tuning",
"6G Networks"
] |
Accurate and robust wireless localization is a critical enabler for emerging 5G/6G applications, including autonomous driving, extended reality, and smart manufacturing. Despite its importance, achieving precise localization across diverse environments remains challenging due to the complex nature of wireless signals and their sensitivity to environmental changes. Existing data-driven approaches often suffer from limited generalization capability, requiring extensive labeled data and struggling to adapt to new scenarios. To address these limitations, we propose SigMap, a multimodal foundation model that introduces two key innovations: (1) A cycle-adaptive masking strategy that dynamically adjusts masking patterns based on channel periodicity characteristics to learn robust wireless representations; (2) A novel "map-as-prompt" framework that integrates 3D geographic information through lightweight soft prompts for effective cross-scenario adaptation. Extensive experiments demonstrate that our model achieves state-of-the-art performance across multiple localization tasks while exhibiting strong zero-shot generalization in unseen environments, significantly outperforming both supervised and self-supervised baselines by considerable margins.
|
We propose SigMap, a foundation model that uses self-supervised learning with cycle-adaptive masking and map-conditioned prompting to achieve accurate and generalizable wireless localization across diverse scenarios.
|
applications to physical sciences (physics, chemistry, biology, etc.)
|
https://openreview.net/pdf?id=0aBAAS0rRT
| 2025-09-17T17:40:02
| 3
|
[
{
"id": "otZhJeUNq0",
"forum": "0aBAAS0rRT",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8908/Reviewer_ct8k",
"reviewer_name": "Reviewer_ct8k",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper presents SigMap, a prompt-based architecture for cross-scenario wireless localization that integrates masked autoencoding with geographic and topological maps serving as soft prompts. The model introduces a cycle-adaptive masking mechanism designed to align with the cyclic nature of Channel State Information (CSI) signals, thereby improving feature learning during pretraining. Evaluated within simulated DeepMIMO environments, SigMap demonstrates strong generalization capability and achieves parameter-efficient few-shot adaptation. The approach aims to bridge the gap between environment-specific training and scalable localization across diverse wireless scenarios.",
"strengths": "(1) The idea of using maps as prompts is both innovative and practical. By embedding spatial priors directly into the learning framework, the model can better understand geographic context without requiring explicit supervision or heavy parameterization. This approach provides a lightweight yet effective way to integrate domain knowledge into data-driven models.\n\n(2) The proposed cycle-adaptive masking strategy effectively leverages the inherent periodic and structural characteristics of CSI signals. This allows the pretraining process to focus on more informative segments of the data, improving robustness and representation quality, especially when dealing with noisy or incomplete measurements.\n\n(3) The demonstration of few-shot adaptation using a frozen backbone is impressive, as it highlights the model’s ability to generalize with minimal retraining. This efficiency in adapting to new environments or conditions suggests that SigMap could serve as a versatile foundation for scalable wireless localization systems, reducing computational and data requirements during deployment.",
"weaknesses": "(1) The absence of real-world evaluation limits the impact of the results. Without validation on empirical datasets or publicly available benchmarks such as CSI-Bench, it is difficult to assess how well the approach generalizes beyond simulation. This gap weakens the practical relevance of the presented findings.\n\n(2) The paper’s claim of developing a “foundation model” for wireless localization appears overstated. While the architecture shows potential for generalization within simulated settings, it lacks evidence of robustness across devices, propagation environments, or hardware variations, all of which are critical for real-world applicability.\n\n(3) Although the system integrates several established components—masked autoencoders, vision transformers, and graph-based prompting—the overall architectural contribution feels incremental. The novelty lies more in the combination and application context rather than in introducing fundamentally new mechanisms or model designs.\n\n(4) The work asserts interpretability through the use of map prompts but does not provide supporting analysis. Visual or quantitative evaluation of how the prompts influence model predictions would strengthen the paper’s interpretability claims and offer deeper insights into model behavior.\n\n(5) The scalability of the proposed approach remains uncertain. The paper does not explore how the framework performs when applied to large-scale or densely connected map graphs, which are common in real-world urban deployments. Understanding such scalability constraints is important for practical use in complex environments.\n\n(6) While the paper relies on ray-tracing–based wireless simulation, this approach—though widely used—offers limited novelty unless extended with advanced modeling such as diffuse scattering, dynamic environments, or hybrid physics–ML calibration. The current setup would benefit from stronger validation or augmentation to better capture real-world propagation complexity.",
"questions": "Can the authors report scalability experiments by evaluating SigMap on larger or denser map graphs, or by simulating more complex urban propagation conditions, to objectively assess how the method performs in real-world large-scale deployments and justify its practical robustness?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T08:09:42",
"modification_date": "2025-11-12T12:10:59",
"review_url": "https://openreview.net/forum?id=0aBAAS0rRT¬eId=otZhJeUNq0",
"license": "CC BY 4.0"
},
{
"id": "aW6Y6rt9A6",
"forum": "0aBAAS0rRT",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8908/Reviewer_uDeu",
"reviewer_name": "Reviewer_uDeu",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes SigMap, a multimodal foundation-model framework for wireless localization featuring (1) a periodicity-aware adaptive masking pretraining scheme tailored to CSI, and (2) a “map-as-prompt” mechanism that encodes 3D maps as geometric prompts for parameter-efficient finetuning. Experiments on DeepMIMO (O1-3p5) show gains for single/multi-BS localization and some few-shot cross-scenario transfer.",
"strengths": "I think how the authors use GNN to generate the Prompt is a great innovation. It cleverly borrows the idea of Prompt-Tuning from LLMs, encoding 3D map information into lightweight soft prompts used to guide a large signal foundation model. This fundamentally solves the problem of model adaptation in new environments.",
"weaknesses": "Weakness\n1.\tThe introduction has too many paragraphs, although it compares with many existing works, the logic is not clear. It cannot effectively introduce the work done in this article from existing works. In addition, the shortcomings of many existing studies, such as the inability to capture high-dimensional features for description, are not sufficient to demonstrate the inadequacy of these work.\n2.\tThe experimental setup is relatively single and the validation depth is insufficient. Evidence is mostly from a single ray-tracing world, and there is only one cross scenario experiment. The existing experiments are difficult to fully demonstrate the universal applicability of the proposed model. Apart from error metrics, are there any other experiments that demonstrate the effectiveness of the proposed model?\n3.\tHow to better reflect the mentioned advantages limited labeled samples, efficient parameters, interpretability? For example, the model proposed parameters efficient, but the comparison of training time, memory usage, inference complexity is insufficient.\n4.\tInsufficient ablation experiments.",
"questions": "All of the paper's training and testing are based on DeepMIMO, a simulation dataset. It lacks validation on data collected in the real world.\nReal-world signals are filled with noise, dynamic interference, and complex propagation effects that simulators cannot fully replicate. It is a significant unknown whether the clean physical laws learned from the simulator can maintain high performance in a dirty real-world environment.\nthe current \"Map-as-prompt\" primarily encodes the environment's geometry by processing 3D coordinates with a GNN. It does not encode material information. The model doesn't know if it's facing a concrete wall that absorbs\"signals or a glass curtain wall that reflects them.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T14:59:14",
"modification_date": "2025-11-12T12:11:00",
"review_url": "https://openreview.net/forum?id=0aBAAS0rRT¬eId=aW6Y6rt9A6",
"license": "CC BY 4.0"
},
{
"id": "MwuDGJ6B86",
"forum": "0aBAAS0rRT",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8908/Reviewer_GjCa",
"reviewer_name": "Reviewer_GjCa",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes SIGMAP, a transformer backbone pre-trained with cycle-adaptive masked modeling on Channel State Information, then fine-tuned with a learned geographic prompt from a 3D map via a GNN. The paper claims three main contributions: (1) cycle-adaptive masking to break periodic shortcuts in CSI; (2) map-as-prompt conditioning using 3D geometry; (3) parameter-efficient adaptation with strong cross-scenario generalization.\n\nThe experiments demonstrate substantial improvements over other baselines, on both single- and Multi-BS localization, as well as generalization performance.",
"strengths": "1. The self-adaptive masking and GNN map-as-prompt strategies are novel and meaningful combinations for indoor localization task. The experimental results show significant advantages over other baselines.\n\n2. During fine-tuning, only prompt GNN and projection head are trained, while the backbone is kept frozen. This makes the model efficient and handy for deployment.\n\n3. The algorithm achieves consistent metric gains in different tasks. And the improvements are substantial.",
"weaknesses": "1. The paper asserts good generalization abilities, but it’s not intuitively clear why the algorithm achieves this. The model isn’t trained using meta-learning or transfer learning techniques. The paper also lacks of experimental comparisons to modern baselines that target at generalization in indoor localization, e.g., [1].\n\n2. The paper doesn't mention how the quality or degradation of the 3D Map could adversely affect the performance of the model. Illustrations of the 3D Map used are needed. More ablation studies on the qualities of the 3D Map are desirable.\n\n\n[1] Gao, Jun, et al. \"MetaLoc: Learning to learn wireless localization.\" IEEE Journal on Selected Areas in Communications 41.12 (2023): 3831-3847.",
"questions": "1. Could the authors explain why the model achieves good generalization abilities to new environments? Since the algorithm is not trained using meta-learning or transfer learning, I am curious about how the model learns to generalize.\n\n2. Could the authors give an example of the 3D Map used in the paper? Can the authors discuss how the quality of the 3D map would affect the model’s performance?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-17T03:18:10",
"modification_date": "2025-11-12T12:11:00",
"review_url": "https://openreview.net/forum?id=0aBAAS0rRT¬eId=MwuDGJ6B86",
"license": "CC BY 4.0"
}
] |
Zz2gtWX8wn
|
https://openreview.net/forum?id=Zz2gtWX8wn
|
ReviewScore: Misinformed Peer Review Detection with Large Language Models
| 4.5
| 3
|
[
8,
2,
4,
4
] |
[
3,
3,
3,
3
] | 4
|
[
"Peer Review Evaluation",
"Argument Evaluation",
"Critical Thinking",
"Logic",
"Large Language Models",
"Neurosymbolic Approaches"
] |
Peer review serves as a backbone of academic research, but in most AI conferences, the review quality is degrading as the number of submissions explodes. To reliably detect low-quality reviews, we define misinformed review points as either "weaknesses" in a review that contain incorrect premises, or "questions" in a review that can be already answered by the paper. We verify that 15.2% of weaknesses and 26.4% of questions are misinformed and introduce ReviewScore, indicating whether a review point is misinformed. To evaluate the factuality of each premise of weaknesses, we propose an automated engine that reconstructs every explicit and implicit premise from a weakness. We build a human expert-annotated ReviewScore dataset to check the ability of LLMs to automate ReviewScore evaluation. Then, we measure human-model agreements on ReviewScore using eight current state-of-the-art LLMs and verify moderate agreements. We also prove that evaluating premise-level factuality shows significantly higher agreements than evaluating weakness-level factuality. A thorough disagreement analysis further supports the potential of fully automated ReviewScore evaluation.
|
We introduce ReviewScore, a new evaluation of peer review quality, focusing on detecting misinformed review points.
|
datasets and benchmarks
|
https://openreview.net/pdf?id=Zz2gtWX8wn
| 2025-09-18T06:55:12
| 4
|
[
{
"id": "42tS1loj06",
"forum": "Zz2gtWX8wn",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9929/Reviewer_GJmX",
"reviewer_name": "Reviewer_GJmX",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper introduces ReviewScore, a novel framework and dataset for detecting misinformed peer review points in academic paper reviews. They are defined as (1) questions already answered in the paper or (2) weaknesses based on incorrect premises. \n\nThe authors verify that a significant fraction of review points (15.2% of weaknesses, 26.4% of questions) are misinformed. \n\nTo automate this detection, they proposes a two-stage evaluation approach:\n1. Base ReviewScore, measuring unanswerability and factuality on a 5-point scale.\n2. Advanced ReviewScoreE, which decomposes review points into arguments and reconstructs their premises via an automatic argument reconstruction engine, checked for validity (using a SAT solver) and faithfulness (via LLM feedback loops).\n\nA human-annotated dataset of 657 review points (ICLR 2021–2023) is created to benchmark human–model agreement across eight modern LLMs (Claude, GPT-4/5, Gemini, LLaMA-3, etc.). Experiments show moderate agreement (F1≈0.4–0.5, κ≈0.3–0.4), and the paper demonstrates that premise-level analysis significantly improves over direct weakness-level evaluation.",
"strengths": "1. Novel Problem Definition: the paper identifies misinformed reviews as a measurable and impactful aspect of review quality — a previously underexplored issue. The operational definitions (answered question / incorrect premise) are specific yet widely applicable.\n\n2. Methodological Innovation: The automatic argument reconstruction engine combining LLMs, formal logic translation, and a SAT solver is a creative and rigorous integration of symbolic reasoning and LLM capabilities. The validity–faithfulness feedback loops are well-motivated and thoughtfully designed.\n\n3. High-quality Dataset: The authors build a trustworthy expert-annotated dataset (657 review points, 1,748 premises) with strong documentation and process controls (cross-annotation, consensus, training sessions).",
"weaknesses": "1. Limited Agreement and Practical Readiness: Despite the substantial improvement over the baseline, the overall human-model agreement remains moderate (F1 ~0.45, Kappa ~0.35). This level of reliability is likely insufficient for fully autonomous deployment and would require human oversight, potentially reducing the net efficiency gains.\n\n2. Subjectivity in Annotations: Inter-annotator α=0.30–0.43 is low, suggesting high ambiguity in human judgments of factuality. The paper’s conclusions about “moderate agreement” might partly reflect human inconsistency rather than LLM capability.\n\n3. Dataset Scope and Diversity: Only ICLR reviews are used; the framework’s generality to NLP, CVPR, or smaller venues is unclear. Annotators are graduate students rather than senior researchers — potentially limiting domain depth.\n\n4. Evaluation Simplifications Models were provided only the main paper text, not figures or appendices, which could substantially affect factual verification. The binary misinformed/not-misinformed threshold is somewhat arbitrary; alternative calibration curves could provide richer insight.\n\n5. Argument Reconstruction Validation: While impressive, the argument reconstruction quality is mainly assessed via one model (Claude 3.7). Broader ablations (e.g., GPT-5 vs open models) or human inspection metrics could strengthen confidence.",
"questions": "1. How consistent are human labels for the same review points across groups (e.g., claim vs argument disagreements)?\n2. Could the reconstruction engine generalize to other reasoning tasks (e.g., debate analysis, scientific claim verification)?\n3. Could incorporating citation graphs or author responses systematically (rather than as an optional experiment) raise reliability?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-06T16:15:29",
"modification_date": "2025-11-12T12:23:01",
"review_url": "https://openreview.net/forum?id=Zz2gtWX8wn¬eId=42tS1loj06",
"license": "CC BY 4.0"
},
{
"id": "ThNoEqv9IN",
"forum": "Zz2gtWX8wn",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9929/Reviewer_TFcU",
"reviewer_name": "Reviewer_TFcU",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 1,
"presentation": 3,
"summary": "This paper attempts to detect low-quality reviews by defining a misinformed review point, i.e., ReviewScore. Specifically, the misinformed review point includes a question stated in a review that can already be answered by the paper, or a weakness stated in a review is incorrect or contains incorrect premises regarding the paper. After that, the authors build a human expert-annotated benchmark to evaluate the ability of current SOTA LLMs.",
"strengths": "1. The problem studied in this paper is valuable. \n2. The paper is well-organized.",
"weaknesses": "1. My main concern is the proposed definition 2 (Misinformed Review Point), which is the core assumption of this paper. The claim \"a question stated in a review can already be answered by the paper\" may be caused by many other cases, not just the misinformed point. For example, maybe the paper is poorly written, the clarity is misleading, or the reviewer is not familiar with this area, and so on. If this paper is poorly written\n2. We believe that, in the peer-review system, scholars evaluate their academic level rankings based on the \"consistency assumption\", i.e., scholars with stronger abilities usually have stronger persuasiveness for evaluating others, and these scholars can also obtain higher achievements [1]. Therefore, we believe the \"review quality\" is mainly determined by the human/expert preference. And the \"low-quality review problem\" is a system problem. \n\n\n[1] PICO: PEER REVIEW IN LLMS BASED ON CONSISTENCY OPTIMIZATION. ICLR'2025.",
"questions": "See the weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T21:44:49",
"modification_date": "2025-11-12T12:23:01",
"review_url": "https://openreview.net/forum?id=Zz2gtWX8wn¬eId=ThNoEqv9IN",
"license": "CC BY 4.0"
},
{
"id": "0Mxuu8siD8",
"forum": "Zz2gtWX8wn",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9929/Reviewer_ht6d",
"reviewer_name": "Reviewer_ht6d",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces REVIEWSCORE, a framework to automatically detect misinformed review points in peer reviews from AI conferences. A misinformed review point is defined as either (1) a question already answered by the paper or (2) a weakness based on incorrect or unsupported claims. The authors develop two scoring systems: BASE REVIEWSCORE and ADVANCED REVIEWSCORE\n\nThe authors construct a human-annotated dataset of review points from 40 ICLR 2021–2023 papers, labeled by 15 graduate students. They then evaluate agreement between these human annotations and predictions from 8 SOTA LLMs (GPT-4, Claude 2, PaLM 2, Gemini, etc.), finding that ADVANCED REVIEWSCORE achieves higher alignment than BASE, especially in weakness detection.",
"strengths": "* Well-defined problem: The concept of “misinformed review points” is clearly formalized and reflects real concerns authors face, such as reviewers asking questions that are already answered in the paper.\n* Argument reconstruction engine: The paper presents a novel approach that extracts both explicit and implicit premises from weaknesses, enabling premise-level factuality evaluation—a step beyond most prior work.\n* Empirical study with LLMs: Eight LLMs are evaluated on the REVIEWSCORE dataset, with quantitative results showing moderate human-model agreement, and ADVANCED scoring improves over BASE.",
"weaknesses": "* Insufficient dataset size: The dataset only covers 40 ICLR submissions across three years (~0.4% of the total), which is far too small to support general conclusions or train reliable automated systems. Most findings should be considered preliminary.\n* Limited diversity and experience of annotators: All annotations are from graduate students. There’s no evidence that experienced reviewers, ACs, or SACs were consulted—this may bias the labeling toward an author-centric view of review quality.\n* No practical deployment pathway: The paper does not explore how REVIEWSCORE could be integrated into real conference workflows (e.g., as part of rebuttal, AC dashboards, or review filtering), nor does it simulate such use cases.",
"questions": "* How well does your argument reconstruction engine perform on an external dataset or human-reconstructed arguments?\n* Have you tested REVIEWSCORE on reviews from other venues to see how it generalizes beyond the training data?\n* Could REVIEWSCORE be used as part of a rebuttal assistant or reviewer feedback system? If yes, what would that pipeline look like?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T05:27:07",
"modification_date": "2025-11-12T12:23:01",
"review_url": "https://openreview.net/forum?id=Zz2gtWX8wn¬eId=0Mxuu8siD8",
"license": "CC BY 4.0"
},
{
"id": "7xc4pJPceE",
"forum": "Zz2gtWX8wn",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9929/Reviewer_ywZt",
"reviewer_name": "Reviewer_ywZt",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper addresses the degrading quality of peer review in AI conferences, a problem driven by the explosive growth in paper submissions. The authors aim to reliably detect low-quality review by defining \"misinformed review points\" as either reviewer \"questions\" that are already answerable by the paper or \"weaknesses\" that are incorrect or based on incorrect premises. The primary contribution is ReviewScore, a new evaluation criterion to automatically identify these misinformed points. The paper introduces base ReviewScore for foundational scoring and a more effective advanced ReviewScore, which breaks arguments down to evaluate the factuality of each individual premise. To enable this, the authors also propose an automatic argument reconstruction engine that uses LLMs to extract all explicit and implicit premises from a review. To validate this automated system, the authors built a human expert-annotated ReviewScore dataset and measured the performance of eight state-of-the-art LLMs against it. The results showed moderate human-model agreement and confirmed that evaluating premise-level factuality is significantly more reliable than evaluating weakness-level factuality.",
"strengths": "* This paper introduces a refined approach to reviewing AI conference papers by providing a more detailed examination of each review point. \n* The paper proposes the ADVANCED REVIEWSCORE, which evaluates the factuality of individual premises within arguments, proving to be more effective than simply evaluating weaknesses at a high level.",
"weaknesses": "1. While the paper introduces the REVIEWSCORE system to detect misinformed review points, its classification of errors into two categories: questions and weaknesses, appears too simplistic. There are likely more nuanced types of errors in peer reviews that this system does not currently address. \n2. Although the paper presents REVIEWSCORE to detecting misinformed review points, it fails to adequately compare its method with existing peer review evaluation systems.\n3. The paper’s experimental evaluation relies on a small and homogeneous dataset consisting of 40 ICLR submissions. While this serves as a proof of concept, the limited size and scope of the dataset restrict its generalizability. The dataset's lack of diversity in terms of conferences and academic domains may not reflect the wide range of review styles and subjects encountered across various fields.",
"questions": "1. What is the rationale behind limiting the classification to just these two categories? Are there other types of misinformed review points, such as misunderstandings of methodologies, misinterpretation of results, or reviewer biases, that could be considered?\n2. Could you provide a more in-depth comparative analysis to highlight the strengths and limitations of REVIEWSCORE in relation to other state-of-the-art review quality assessment tools? How does REVIEWSCORE perform in comparison to other AI-based approaches for identifying misinformed review points?\n3. How do you plan to address the generalizability of your findings given the limited scope of the dataset? Additionally, there could be biases introduced during the human annotation process. Could you expand the dataset to include a wider range of conferences and academic domains to ensure robustness? Also, could the inclusion of external data sources (e.g., citation counts, feedback from future users) strengthen the accuracy and completeness of the dataset?",
"flag_for_ethics_review": [
"Yes, Responsible research practice (e.g., human subjects, annotator compensation, data release)"
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T11:38:53",
"modification_date": "2025-11-12T12:23:02",
"review_url": "https://openreview.net/forum?id=Zz2gtWX8wn¬eId=7xc4pJPceE",
"license": "CC BY 4.0"
}
] |
zwLpUxiqSE
|
https://openreview.net/forum?id=zwLpUxiqSE
|
Space Filling Curves as Spatial Priors for Small or Data-Scarce Vision Transformers
| 4.5
| 3.5
|
[
6,
6,
2,
4
] |
[
3,
4,
4,
3
] | 4
|
[
"space filling curves",
"ViT",
"spatial priors"
] |
Vision Transformers (ViTs) have become a dominant backbone in computer vision, yet their attention mechanism lacks inherent spatial inductive biases, which are especially crucial in small models and low-data regimes. Inspired by the masking in Linear Transformers and the scanning patterns of Vision SSMs, we propose VIOLIN, a lightweight masked attention mechanism that integrates Space Filling Curves (SFCs) to enhance spatial awareness with negligible computational overhead. VIOLIN scans the input image with multiple SFCs to build curve-specific decay masks, which are averaged and multiplied with the attention matrix to encode spatial relationships. It yields notable gains in data-scarce settings: when fine-tuning on VTAB-1K, VIOLIN improves accuracy by up to 8.7% on the Structured group, and it can be combined with parameter-efficient tuning methods such as LoRA. Beyond fine-tuning, VIOLIN consistently improves various tiny or small-scale ViT architectures (e.g., DeiT, DINO) during pretraining on ImageNet-1K, achieving gains of up to 0.9\% on ImageNet-1K and 7.2\% on pixel-level CIFAR-100. Overall, VIOLIN offers a computationally efficient yet effective way to inject spatial inductive bias into ViTs, particularly benefiting small models and data-scarce scenarios.
|
A new attention mechanism for vision backbones using Space Filling Curves improving both fine-tuning and pre-training of ViTs.
|
other topics in machine learning (i.e., none of the above)
|
https://openreview.net/pdf?id=zwLpUxiqSE
| 2025-09-20T01:56:33
| 4
|
[
{
"id": "HX5jH4abzP",
"forum": "zwLpUxiqSE",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20303/Reviewer_8zFr",
"reviewer_name": "Reviewer_8zFr",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The manuscript proposes a lightweight masked attention mechanism named VIOLIN that integrates Space Filling Curves (SFCs) to enhance spatial awareness in smaller visual transformers (ViT). By better filling the space in 2D images through specifically designed curves, a better neighborhood representation is achieved when applying ViTs. VIOLIN scans the input image with multiple SFCs to build curve specific decay masks which are averaged and then weighted with the attention matrix to encode spatial relationships.\n\nAs SFCs the authors use Snake, Zig-zag, Peano, and Hilbert curves together with their transposed variants to capture diverse scanning patterns in both row and column major order.",
"strengths": "- The author propose an approach to represent better the neighbourhoods through Space filling curves (SFC) in order to enhance the processing of the image with ViT networks.\n- The manuscript concludes that by using SFCs improves the performance in performance in small models and limited-data settings.\n- Extensive experimental results are provided.\n- The approach requires only limited extra computational demands\n- Extending the application of SFCs to video understanding is also assessed.",
"weaknesses": "- There is no systematic or any theoretical study about what the space filling curves are useful for in ViTs\n- It is not clear what applications can be used for such SPCs based representations in ViTs except for some particular filtering. In the manuscript it is indicated that it can be applied for classification, semantic segmentation or object detection.\n- It is not clear how such SPCs can be used to some other ViT models.",
"questions": "Could the multiple SFC scans be combined in a more efficient way than by simply averaging?\n\nHow would the proposed multiple SFC work in the case of other transformer networks than those tested in the manuscript? \nFor example how would they work for the Swin transformer proposed in the paper:\nZ. Liu et al., Swin Transformer: Hierarchical Vision Transformer using Shifted Windows, ICCV 2021.\n\nHow it would work, when applied on videos, on some video transformers, like for:\nLimin, W. el al, VideoAME V2: Scaling}video masked autoencoders with dual masking, CVPR 2023.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T17:46:35",
"modification_date": "2025-11-12T15:49:29",
"review_url": "https://openreview.net/forum?id=zwLpUxiqSE¬eId=HX5jH4abzP",
"license": "CC BY 4.0"
},
{
"id": "W9hsPjgYqK",
"forum": "zwLpUxiqSE",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20303/Reviewer_cmWr",
"reviewer_name": "Reviewer_cmWr",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper introduces the use of space filling curves as a way to introduce spatial priors to vision transformers. It extends upon the use of decay masks with image flattening as determined by different space filling curves. The use of different curves effectively reorders the patches of the image in different spatially meaningful ways as compared to a single zig-zag line scan used in transformer architectures. The proposed method improves upon previous data efficient methods under similar settings and can also be applied solely in the fine-tuning stage.",
"strengths": "The authors proposed a novel way to include hand designed spatial priors thru the use of SFCs and proposed an efficient and effective way to incorporate into ViT architectures. Their proposed method can also be included into pretrained models with fine-tuning only. The proposed method improves on previous data-efficient methods like DeiT. Well designed ablation studies were also included to show the effects of each of their proposed changes to the attention mechanism. The authors also include a rather commendable and substantial appendix with important key prior art.",
"weaknesses": "- Training flow is not immediately clear in the main paper. Since there are multiple stages to train a ViT with VIOLIN masks, it would be good to recap on the stages even though DeiT’s training recipe was followed. This would make the experiment section and the ablation studies clearer. \n- The authors proposed the use of different hand selected SFCs, it would be interesting to see how a separately learned patch ordering, e.g. from Kutscher 2025, compares. After all, the mask decay method can take in any form of ordering. \n- Minor issue: Typo in Figure 2\\. In the center block, VIOLIN is misspelled.",
"questions": "- DieT uses CNN as a teacher network. Is a CNN also used in this case? \n- Since CNNs have the strongest spatial prior, could the authors also include a similarly size SOTA CNN? Especially if, similar to DieT training recipe, a CNN is used as a teacher network",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T03:10:56",
"modification_date": "2025-11-12T15:49:30",
"review_url": "https://openreview.net/forum?id=zwLpUxiqSE¬eId=W9hsPjgYqK",
"license": "CC BY 4.0"
},
{
"id": "uxpdFscEPD",
"forum": "zwLpUxiqSE",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20303/Reviewer_nX45",
"reviewer_name": "Reviewer_nX45",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "The paper proposes VIOLIN, a masked attention mechanism for Vision Transformers (ViTs) that incorporates Space Filling Curves (SFCs) to improve spatial inductive biases. Standard ViTs suffer from the lack of spatial awareness due to the permutation-equivalent nature of self-attention. Inspired by linear attention and SSMs, VIOLIN constructs curve-specific decay masks that model the relative spatial distance between image patches. These masks are averaged and applied to the attention matrix, introducing spatial priors without modifying the core ViT architecture.",
"strengths": "- Extensive empirical validation: tested on diverse model scales (5M–86M parameters) and training setups (supervised and self-supervised).\n\n- The paper is easy to follow.\n\n- The proposed method shows some improvment.",
"weaknesses": "- **Limited Contribution from the Core Method**: [6] has shown that average pooling can boost the DeiT's performance. Tab. 14 suggests that **the performance gain mainly comes from the average pooling**. The VIOLIN only provide marginal improvement for small models, and **even harms the performance of the large model ViT-B**.\n\n- **Limited Generalization**: Based on the the results in Tab. 8, **the improvements on Swin-T and Swin-S are below 0.2%**, which is likely within run-to-run variance and not statistically significant. This suggests that the proposed method is rather an engineering optimization technique, which does not generalize well to different models.\n\n- **Limited Comparison**. The baselines used for comparison primarily rely on absolute positional embeddings, which are known to be suboptimal. Relative positional encodings, widely adopted in modern architectures [1-5], are simpler, more flexible, and have been shown to outperform absolute encodings in multiple settings. **It is not clear that space-filling curves offer any meaningful advantage over such approaches**. Without direct comparisons to relative positional encoding, the benefits of VIOLIN are difficult to justify.\n\n[1] Wu, Kan, et al. \"Rethinking and improving relative position encoding for vision transformer.\" Proceedings of the IEEE/CVF international conference on computer vision. 2021.\n\n[2] Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. In ACL, 2019. 1, 3, 7, 8\n\n[3] Liu, Ze, et al. \"Swin transformer: Hierarchical vision transformer using shifted windows.\" Proceedings of the IEEE/CVF international conference on computer vision. 2021.\n\n[4] Liu, Ze, et al. \"Swin transformer v2: Scaling up capacity and resolution.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.\n\n[5] Zhou, Yuxuan, et al. \"SP-ViT: Learning 2D Spatial Priors for Vision Transformers.\" 33rd British Machine Vision Conference. BMVA Press, 2022.\n\n[6] Conditional Positional Encodings for Vision Transformers, ICLR2023.",
"questions": "Could the authors also compare VIOLIN to other methods related to spatial prior, such as relative positional encoding/bias in [1-5] ?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T13:15:13",
"modification_date": "2025-11-12T15:49:30",
"review_url": "https://openreview.net/forum?id=zwLpUxiqSE¬eId=uxpdFscEPD",
"license": "CC BY 4.0"
},
{
"id": "EQUAdPMCbf",
"forum": "zwLpUxiqSE",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20303/Reviewer_sQfG",
"reviewer_name": "Reviewer_sQfG",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes **VIOLIN**, a simple and plug-and-play spatial prior module for Vision Transformers (ViTs). \nThe method introduces *Space Filling Curves (SFCs)* (e.g., Snake, Zig-zag, Peano, Hilbert) to define alternative scanning orders of image patches. \nFor each curve \\(c\\), a decaying mask \\(M_c[i,j] = \\gamma_c^{|i-j|}\\) is constructed to encourage locality in attention. \nAfter aligning these masks back to the standard patch order and averaging, the resulting mask \\(M_{\\text{VIOLIN}}\\) is multiplied with the attention score matrix before softmax.\n\nThe approach is extremely lightweight (+0.0002% params, +0.64% FLOPs) and can be applied to pretrained or finetuned ViTs without architectural changes. \nExtensive experiments on **VTAB-1K**, **ImageNet-1K**, **DINO**, **pixel-level CIFAR-100**, and dense tasks (ADE20K / COCO) show consistent gains, especially on “Structured” VTAB tasks (+8.7%).",
"strengths": "- **Well-defined target problem:** Focuses on *small models and data-scarce regimes* where ViTs lack spatial inductive bias — a meaningful and under-explored setting. \n- **Simplicity and generality:** VIOLIN requires no retraining or re-architecture changes, making it truly plug-and-play. \n- **Elegant formulation:** The SFC-based decaying masks are clearly derived; the permutation and averaging operations are well explained. \n- **Strong empirical results:** Significant improvement on VTAB-1K (Structured group +8.7%) and pixel-level CIFAR-100 (+7.2%) convincingly show the benefit of spatial priors. \n- **Low computational cost:** The added overhead is negligible, suitable for real-world low-resource finetuning. \n- **Broad applicability:** Small but consistent gains on segmentation and detection tasks further validate its generality.",
"weaknesses": "1. **Novelty is limited.** \n The core idea—distance-decayed attention weights—is reminiscent of *linear attention*, *RMT*, and *RetNet*–style exponential decay mechanisms. \n The use of multiple SFCs and their averaged mask is an incremental extension rather than a fundamentally new concept.\n\n2. **Missing comparisons with strong baselines.** \n The paper compares mainly to vanilla DeiT/DeiT-III/DINO backbones. \n It lacks direct comparisons with existing locality-enforcing methods, such as:\n - Relative positional bias (Swin / ViT-RPB),\n - Convolutional stems or LocalViT,\n - Manhattan-distance masks (RMT),\n - Single-curve or random-curve baselines. \n Without these, it is unclear whether the large Structured-task gains stem from the proposed multi-SFC averaging or from any reasonable local bias.\n\n3. **Training details and fairness are under-specified.** \n VTAB-1K finetuning recipes (learning rate, γ initialization, α sharing) are buried in the appendix. \n It remains unclear whether baselines were tuned equivalently. \n The surprising claim that *untrained masks outperform pretrained ones* needs stronger justification.\n\n4. **Questionable mask effectiveness.** \n Figure 7 shows most γ₍c₎ values approach 1, suggesting the mask becomes nearly uniform. \n If so, why does the Structured group improve so dramatically? \n More analysis of per-head γ values and locality visualization is needed.\n\n5. **Computational overhead claim is not empirically verified.** \n Only theoretical FLOPs/parameter ratios are reported. \n Actual GPU memory and runtime increase (especially on dense tasks) should be measured.\n\n6. **Overstated framing.** \n The paper sometimes overclaims by calling VIOLIN a *principled spatial prior via SFCs*. \n In fact, the method does not exploit the geometric guarantees of SFCs; it only uses index distance \\(|i-j|\\) with exponential decay. \n Theoretical justification for averaging multiple SFC-induced metrics is weak.",
"questions": "Please refer to Weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T04:37:18",
"modification_date": "2025-11-12T15:49:31",
"review_url": "https://openreview.net/forum?id=zwLpUxiqSE¬eId=EQUAdPMCbf",
"license": "CC BY 4.0"
}
] |
jeTiBeW3iZ
|
https://openreview.net/forum?id=jeTiBeW3iZ
|
Memorization Through the Lens of Sample Gradients
| 5
| 3.75
|
[
6,
6,
2,
6
] |
[
3,
3,
5,
4
] | 4
|
[
"Memorization",
"Sample Gradients"
] |
Deep neural networks are known to often memorize underrepresented, hard examples, with implications for generalization and privacy. Feldman & Zhang (2020) defined a rigorous notion of memorization.
However, it is prohibitively expensive to compute at scale because it requires training models both with and without the data point of interest in order to calculate the memorization score.
We observe that samples that are less memorized tend to be learned earlier in training, whereas highly memorized samples are learned later.
Motivated by this observation, we introduce Cumulative Sample Gradient (CSG), a computationally efficient proxy for memorization. CSG is the gradient of the loss with respect to input samples, accumulated over the course of training.
The advantage of using input gradients is that per-sample gradients can be obtained with negligible overhead during training. The accumulation over training also reduces per-epoch variance and enables a formal link to memorization. Theoretically, we show that CSG is bounded by memorization and by learning time.
Tracking these gradients during training reveals a characteristic rise–peak–decline trajectory whose timing is mirrored by the model’s weight norm. This yields an early-stopping criterion that does not require a validation set: stop at the peak of the weight norm. This early stopping also enables our memorization proxy, CSG, to be up to five orders of magnitude more efficient than the memorization score from Feldman & Zhang (2020). It is also approximately 140$\times$ and 10$\times$ faster than the prior state-of-the-art memorization proxies, input curvature and cumulative sample loss, while still aligning closely with the memorization score, exhibiting high correlation. Further, we develop Sample Gradient Assisted Loss (SGAL), a proxy that further improves alignment with memorization and is highly efficient to compute. Finally, we show that CSG attains state-of-the-art performance on practical dataset diagnostics, such as mislabeled-sample detection, and enables bias discovery, providing a theoretically grounded toolbox for studying memorization in deep networks.
|
alignment, fairness, safety, privacy, and societal considerations
|
https://openreview.net/pdf?id=jeTiBeW3iZ
| 2025-09-18T23:24:19
| 4
|
[
{
"id": "z3GZqYPWjF",
"forum": "jeTiBeW3iZ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12621/Reviewer_P7vQ",
"reviewer_name": "Reviewer_P7vQ",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes Cumulative Sample Gradient (CSG) as a theoretically motivated and computationally efficient proxy for memorization in deep neural networks. The authors define CSG as the gradient of the loss with respect to input samples, accumulated across training, and claim it correlates strongly with Feldman & Zhang’s formal memorization score while being orders of magnitude cheaper to compute. They theoretically show that CSG is bounded by both learning time and memorization, then empirically validate this relation across CIFAR-100 and ImageNet. Moreover, they propose that the peak of the model’s weight norm corresponds to the optimal early-stopping point, eliminating the need for a validation set. The paper also introduces Sample Gradient Assisted Loss (SGAL) as an efficiency improvement, and reports strong performance on tasks such as mislabeled sample detection and dataset bias discovery",
"strengths": "1. The work links input-space gradients to memorization and learning dynamics through formal theorems, extending prior work that primarily focused on weight gradients or loss-based proxies.\n\n2. The idea of accumulating input gradients incurs minimal additional computation during training and is potentially useful for large-scale data auditing, noisy-label detection, and privacy diagnostics.\n\n3. The observed “rise–peak–decline” trajectory in sample gradients and weight norms provides an intuitive link between optimization dynamics and generalization behavior.",
"weaknesses": "1. The main claim that “Cumulative Sample Gradient” represents a gradient of loss with respect to the input, accumulated over training, is conceptually questionable. The proposed CSG is essentially an aggregated gradient norm trajectory rather than a true differentiable functional of the loss. Treating it as a gradient object conflates sensitivity analysis (∇ₓℓ) with memorization, which lacks theoretical grounding in generalization theory. The derivations (Theorems 4.2–4.3) merely establish loose proportionality bounds without proving causality or sufficiency.\n\n2. The assertion that the peak of weight norm universally coincides with the minimum validation loss is overstated. This correspondence may depend on architecture, optimizer, and regularization strength, and may fail under strong augmentation or non-stationary data.\n\n3. While the authors claim that CSG generalizes across tasks, they only test standard supervised image classification. The theoretical link assumes uniform β-stability of SGD and bounded loss, which rarely holds in modern deep nets. It remains unclear whether CSG maintains predictive utility in other regimes such as self-supervised, generative, or multi-label settings.",
"questions": "1. Since CSG is defined as the accumulated input gradient norm, not the derivative of a loss functional over training trajectories, do we have a rigorous reason to treat it as a “gradient of loss with respect to input samples, accumulated over training”? How does this differ from simply tracking cumulative sensitivity?\n\n2. The paper asserts that stopping at the peak weight norm matches the minimum validation loss. Can this be proven under general conditions? How robust is this correspondence across architectures, datasets, or optimizers (e.g., Adam, Adagrad, adaptive schedulers)?\n\n3. Do you think the CSG–memorization relationship could generalize to regression, contrastive, or generative models, where accuracy or label definitions differ? You don’t need to perform new experiments—rather, please share your intuition on whether and why such generalization might hold.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T14:41:23",
"modification_date": "2025-11-12T12:57:48",
"review_url": "https://openreview.net/forum?id=jeTiBeW3iZ¬eId=z3GZqYPWjF",
"license": "CC BY 4.0"
},
{
"id": "dR4LtmlQfC",
"forum": "jeTiBeW3iZ",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12621/Reviewer_gxif",
"reviewer_name": "Reviewer_gxif",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes Cumulative Sample Gradient (CSG)—the loss gradient w.r.t. the input, accumulated over training—as a computationally cheap proxy for stability-based memorization (Feldman & Zhang, 2020). The authors provide theory that (i) expected CSG is upper-bounded by learning time (Theorem 4.2) and (ii) linearly bounded by memorization (Theorem 4.3). Empirically, they observe a characteristic rise–peak–decline trajectory for average per-sample input gradients that aligns with a peak in weight norm and the first minimum in validation loss (double-descent boundary), enabling validation-free early stopping. They also introduce SGAL (a loss accumulated only until the gradient-based stopping point) for further efficiency. Across CIFAR-100/ImageNet, CSG/SGAL correlates well with memorization scores, is substantially faster than curvature and CSL proxies, supports mislabeled-sample detection, and helps surface dataset biases.",
"strengths": "•\tOriginality & clarity: Using input gradients accumulated over training as a memorization proxy is elegant; the rise–peak–decline alignment with the weight norm and double-descent boundary is compelling and clearly presented. \n•\tQuality (theory): Theorems relating CSG to learning time and memorization provide formal grounding absent in many proxies; assumptions and proof sketches are transparent. \n•\tQuality (empirics): Consistent binned linear trends; strong correlation with F&Z scores; broad comparisons (CSL, curvature, forgetting events, loss sensitivity) on CIFAR-100/ImageNet; MIA and adversarial-distance analyses support the privacy link. \n•\tSignificance & practicality: Large speedups (0.1–0.3× of standard training vs. 3.6–14.3× for curvature; orders of magnitude vs. F&Z) lower the barrier to dataset diagnostics at scale; mislabeled-sample AUROCs are SOTA or competitive at all noise levels.",
"weaknesses": "•\tAssumption sensitivity and constant opacity. The theoretical bounds rely on β-stability, Lipschitz continuity, L-bounded losses, and learning-rate conditions, and exclude first-layer skip connections. Constants involving the pseudo-inverse of batch matrices (κ terms) may be large/ill-conditioned, making the bounds hard to interpret quantitatively. \n•\tCalibration claim is mixed. Table 2 shows lower ECE for the last epoch (0.1017) than for gradient-based stopping (0.1382), contradicting the blanket statement that early-stopped checkpoints have lower calibration errors; other metrics (MCE/MSCE/UCE) favor earlier stopping, so the narrative should be nuanced. \n•\tScope of datasets/models. Results are primarily on CIFAR-100/ImageNet with ResNet/Inception. It would strengthen generality to include modern architectures (e.g., ViT) and tasks beyond image classification, since input-gradient behavior and training dynamics may differ. (The Adam experiment is a useful first step.) \n•\tComparative coverage. While CSL/curvature/forgetting/loss-sensitivity are included, some adjacent proxies (e.g., EL2N, GraNd / importance-sampling-style difficulty measures) and influence-based approximations (e.g., TracIn) are not compared; these could provide a more complete picture of trade-offs.\n•\tTraining-access requirement. Like many proxies, CSG needs access to per-sample gradients during training; this limits pure post-hoc auditing scenarios (the limitation is acknowledged). \n•\tQualitative bias analysis. The bias discovery examples are informative but largely qualitative; quantitative fairness metrics (e.g., subgroup error rates) would make the case stronger.",
"questions": "1.\tHow sensitive is the weight-norm peak rule to weight decay, label smoothing, data augmentation strength, and optimizer hyperparameters? A small ablation across these knobs would clarify robustness. \n2.\tPractically, do you compute input gradients every iteration for all samples, or on a schedule/subset? Please quantify wall-clock overhead vs. vanilla training across model sizes. \n3.\tHave you tested CSG/SGAL on ViTs or transformers for NLP? If not, what obstacles (e.g., tokenization, augmentation) do you anticipate?\n4.\tCan you empirically estimate the constants in Lemma 4.1 (e.g., behavior of κ through training) to illustrate why the linear trends emerge despite potential ill-conditioning? \n5.\tGiven Table 2 shows mixed results across ECE/MCE/MSCE/UCE, can you reconcile the claim “lower calibration errors than last epoch” and specify which metrics you prioritize and why? \n6.\tSince you use precomputed F&Z scores, how sensitive are your correlations to training recipe variations (architectures different from F&Z)?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T06:07:10",
"modification_date": "2025-11-12T12:57:48",
"review_url": "https://openreview.net/forum?id=jeTiBeW3iZ¬eId=dR4LtmlQfC",
"license": "CC BY 4.0"
},
{
"id": "JygZB8rteV",
"forum": "jeTiBeW3iZ",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12621/Reviewer_BKpC",
"reviewer_name": "Reviewer_BKpC",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper proposes CSG, a fast, theoretically grounded proxy for measuring memorization in deep networks by accumulating input loss gradients during training. CSG correlates strongly with true memorization scores while being up to five orders of magnitude more efficient, enabling validation-free early stopping, mislabeled-sample detection, and bias discovery.",
"strengths": "CSG offers a theoretically grounded and computationally efficient way to estimate memorization, achieving near-perfect correlation with true scores at a fraction of the cost.\n\nIt enables validation-free early stopping and state-of-the-art mislabeled data detection, making it both practical and interpretable for large-scale deep learning.",
"weaknesses": "Novelty:\nMy primary concern lies in the novelty of the work. The authors’ main observation that memorization tends to occur in the later stages of training is well established and has been extensively documented in prior studies [1,2]. Likewise, leveraging gradients to approximate or track memorization has been explored before [3,4], making the core idea appear incremental rather than groundbreaking. Can the authors show how their work performs in relation to [1,3,4].\n\nComputational Cost:\nWhile the proposed approach claims efficiency, computing cumulative sample gradients still requires forward and backward passes for each sample at every epoch, which can be prohibitive for large models and datasets. Prior work [3] already proposes strategies to reduce this overhead.\n\nLimited Optimizer Evaluation:\nThe experiments rely solely on the Adam optimizer. To demonstrate broader applicability, results should be validated across multiple optimizers such as SGD, RMSProp, and AdamW. Can authors provide more info on how their method would behave across optimizers?\n\n\n[1]. Agiollo, Andrea, Young In Kim, and Rajiv Khanna. \"Approximating Memorization Using Loss Surface Geometry for Dataset Pruning and Summarization.\" Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024.\n[2] https://aclanthology.org/2024.blackboxnlp-1.4\n[3] https://arxiv.org/abs/2008.11600\n[4] https://arxiv.org/pdf/2002.08484",
"questions": "Above",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T01:10:58",
"modification_date": "2025-11-12T12:57:49",
"review_url": "https://openreview.net/forum?id=jeTiBeW3iZ¬eId=JygZB8rteV",
"license": "CC BY 4.0"
},
{
"id": "AcLjSfGWdo",
"forum": "jeTiBeW3iZ",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12621/Reviewer_maBN",
"reviewer_name": "Reviewer_maBN",
"rating": 6,
"confidence": 4,
"soundness": 4,
"contribution": 3,
"presentation": 3,
"summary": "This work proposes a computationally efficient way to approximate the degree of memorization in deep neural nets. Based on the observation that memorized samples tend to have longer training time, this work proposes cumulative sample gradient (CSG) as a proxy for memorization. Theoretical results show the relation between CSG and learning time & memorization. Empirical evaluations corroborates the findings and shows superior computational performance over previous state-of-the-art.",
"strengths": "Overall, this work is well-written and organized. It motivates the problem settings and draws fairly clear connection with previous work. The contribution is pertinent to the current challenge of ML. By providing a more efficient probe for the phenomenon of memorization, this work can accelerate future research in this field. The theoretical formulations are solid and with clear purpose: they do shed light into the construction of the practical proxy. The empirical improvements are encouraging. No major flaw with experiment design.",
"weaknesses": "The work doesn't have apparent weaknesses that might lead to clear reject. I do have a few questions about the assumption and the source of computation edge over previous work. Having them clarified can better help the reader understand the contribution and use the tool with confidence.\n\nThere are a few grammatic glitches here and there. For example, \"is plays\" at Line 310 and \"it's roots\" at Line 258. Can be fixed by proof reading.",
"questions": "1) What makes the computation of CSG faster than CSL? Seems that both metrics are cumulative and the computation of loss/gradient are not too different in general. Could you tell us more about the source of speedup?\n\n2) The theoretical results show that CSG is upper bounded by learning time and memorization. Does that mean high CSG -> high memorization? What about the opposite direction, does low CSG -> low memorization?\n\n3) The theories are formulated against pure SGD. Does CSG reliably detect memorization/mislabeled samples for different optimizers? If time allows, could you show some evidence of CSG's success for other optimizer? \n\n4) Are previous SOTA's performance dependent on the choice of optimizer?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T15:15:48",
"modification_date": "2025-11-12T12:57:49",
"review_url": "https://openreview.net/forum?id=jeTiBeW3iZ¬eId=AcLjSfGWdo",
"license": "CC BY 4.0"
}
] |
eWBu4tY9ta
|
https://openreview.net/forum?id=eWBu4tY9ta
|
Safeguarding Multimodal Knowledge Copyright in the RAG-as-a-Service Environment
| 4.666667
| 3.333333
|
[
4,
4,
6
] |
[
3,
3,
4
] | 3
|
[
"Watermark",
"VLM",
"Dataset Copyright Protection"
] |
As Retrieval-Augmented Generation (RAG) evolves into service-oriented platforms (RAG-as-a-Service) with shared knowledge bases, protecting the copyright of contributed data becomes essential. Existing watermarking methods in RAG focus solely on textual knowledge, leaving image knowledge unprotected. In this work, we propose \textit{AQUA}, the first watermark framework for image knowledge protection in Multimodal RAG systems. \textit{AQUA} embeds semantic signals into synthetic images using two complementary methods: acronym-based triggers and spatial relationship cues. These techniques ensure watermark signals survive indirect watermark propagation from the image retriever to the textual generator while remaining efficient, effective, and imperceptible. Experiments across diverse models and datasets show that \textit{AQUA} enables robust, stealthy, and reliable copyright tracing, filling a key gap in multimodal RAG protection.
|
An effective watermarking framework for protecting the copyright of multimodal knowledge, especially image knowledge, in RaaS.
|
alignment, fairness, safety, privacy, and societal considerations
|
https://openreview.net/pdf?id=eWBu4tY9ta
| 2025-09-19T16:34:38
| 3
|
[
{
"id": "38wcTQEGqV",
"forum": "eWBu4tY9ta",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16979/Reviewer_k6AT",
"reviewer_name": "Reviewer_k6AT",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "This paper introduces AQUA, a novel watermarking framework designed to safeguard image knowledge copyrights in Multimodal Retrieval-Augmented Generation (RAG) systems. With the rise of RAG-as-a-Service (RaaS) platforms, where data providers contribute knowledge to a shared pool used by external services, the need for protecting copyright has become critical. Existing watermarking methods have largely focused on text-based RAG systems, leaving image knowledge unprotected. AQUA addresses this gap by embedding semantic signals into images through two complementary watermarking methods: AQUAacronym (embedding uncommon acronyms and their full names) and AQUAspatial (using spatial relationships in the image). These techniques ensure that the watermarks survive indirect propagation from image retrievers to textual generators, making them efficient, effective, and imperceptible. Experiments demonstrate that AQUA is robust, stealthy, and effective in tracing copyright, even in the face of attacks like image transformations and regeneration.",
"strengths": "1. AQUA introduces a groundbreaking watermarking method for Multimodal RAG systems, focusing on the protection of image knowledge, an area previously neglected in watermarking research. By using semantic-based signals (acronyms and spatial relationships), it provides a new approach to watermark embedding that spans both image and text modalities.\n\n2. The watermarking techniques, particularly AQUAacronym and AQUAspatial, are shown to be robust against various image transformations and attacks, including rescaling, rotation, compression, and regeneration. They maintain their imperceptibility to end-users and cannot be detected by unauthorized filtering mechanisms.\n\n3. The framework has been extensively tested across different RAG models and multimodal datasets (MMQA and WebQA). The paper provides a thorough evaluation of AQUA’s effectiveness, harmlessness, stealthiness, and robustness, with results indicating that AQUA outperforms baseline methods and maintains high retrieval success and generation success rates.\n\n4. AQUA is adaptable for both black-box and white-box scenarios, meaning it can be used in various real-world RAG systems without requiring direct access to the internal model or dataset. Its design also ensures easy deployment and provides a solid baseline for future research in the protection of multimodal datasets in RaaS environments.",
"weaknesses": "1. Focuses on 7B-scale VLMs (LLaVA-NeXT, InternVL3, etc.) without assessing performance on larger models (e.g., 32B+ VLMs) or lightweight models for edge deployments.\n\n2. Does not assess how watermark detection performance degrades over time with retriever/generator updates, fine-tuning, or dataset drift.\n\n3. While mentioning a reference distribution for practical verification, it provides only a single example without guiding how to adapt it to diverse dataset characteristics or RAG system configurations.",
"questions": "Please refer to the weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T14:59:04",
"modification_date": "2025-11-12T13:56:33",
"review_url": "https://openreview.net/forum?id=eWBu4tY9ta¬eId=38wcTQEGqV",
"license": "CC BY 4.0"
},
{
"id": "bGJ02WO4LU",
"forum": "eWBu4tY9ta",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16979/Reviewer_DCH9",
"reviewer_name": "Reviewer_DCH9",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces AQUA, a framework to address copyright protection for image knowledge in multimodal Retrieval-Augmented Generation (RAG) services. The authors identify novel challenges, such as indirect watermark propagation (embedding a watermark in an image that is detected in generated text) and the need for an explicitly retrievable watermark that doesn't cause an obvious data distribution shift. It introduces two variants: AQUA_acronym: Embeds rare acronyms and their full names into synthetic images, leveraging a VLM's OCR capability for verification. AQUA_spatial: Generates images with unusual spatial relationships for models with limited OCR, leveraging spatial reasoning for verification. A comprehensive evaluation demonstrates that AQUA is effective with high detection rates, harmless, stealthy, and robust against image attacks.",
"strengths": "- This is the first work to formally tackle image copyright protection in multimodal RAG. The problem formulation, particularly identifying \"indirect watermark propagation\" as a core challenge, is a novel and significant contribution. The two proposed methods are creative and well-designed solutions.\n- This work fills a critical, unaddressed gap in AI data governance as RAG services increasingly rely on proprietary multimodal data. AQUA provides a practical solution and sets a strong baseline for an important new research area.\n- The paper is exceptionally clear. Figures 1, 2, and 3 provide excellent visualizations of the RaaS problem, the core challenges, and the AQUA methodology. Key concepts, like the \"Trigger\" and \"Instruction\" components of a probe query, are precisely defined and aid understanding.",
"weaknesses": "- The paper's threat model, which only considers one defender and one adversary, overlooks the multi-tenant nature of RaaS platforms. It's unclear how AQUA would prevent \"collisions\" where multiple providers independently create the same watermark (e.g., the same acronym or spatial concept), which could lead to false accusations of misuse.\n- The methods' reliance on VLM capabilities (OCR, spatial reasoning) is also a potential fragility. A future, more advanced VLM might identify AQUA_spatial images as \"unnatural\" and refuse to answer. Conversely, an adversary could fine-tune their model to specifically ignore text overlays or unusual object pairings, defeating the watermark. The robustness tests focus on image transformations, not model-level adaptations.\n- For AQUA_spatial, the semantic trigger must be as rare as the image content. While the paper shows 0% retrieval for 10000 benign queries, it's unclear if this query set was stress-tested with queries semantically similar to the triggers. A benign user could accidentally issue a query that matches the trigger, retrieving the watermark.",
"questions": "I got several questions for this paper:\n- How does AQUA prevent watermark collisions in a RaaS platform with hundreds of data providers? Does this framework require a centralized \"watermark registry\" managed by the platform?\n- Have you considered failure cases where a VLM's safety or \"common sense\" guardrails cause it to identify AQUA_spatial images as \"unnatural\" and refuse the probe query? How robust is AQUA against an adversary who fine-tunes their model to ignore these specific watermark types?\n- How do you guarantee the semantic uniqueness of the AQUA_spatial trigger? Was the benign query set in Section 5.4 specifically tested for queries that are semantically similar, though not identical, to your triggers?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T21:19:42",
"modification_date": "2025-11-12T13:56:34",
"review_url": "https://openreview.net/forum?id=eWBu4tY9ta¬eId=bGJ02WO4LU",
"license": "CC BY 4.0"
},
{
"id": "JqedpfCwae",
"forum": "eWBu4tY9ta",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16979/Reviewer_vGro",
"reviewer_name": "Reviewer_vGro",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes **AQUA**, the first watermarking framework dedicated to safeguarding **image knowledge copyright** in **Multimodal Retrieval-Augmented Generation (RAG)** systems, filling a critical gap left by existing text-only methods. AQUA addresses the challenge of *indirect watermark propagation* (image input to textual output) and *unapparent distribution shifts* through two complementary methods: **$AQUA_{acronym}$**, which embeds uncommon acronyms into images, and **$AQUA_{spatial}$**, which uses synthetic images with unusual spatial relationships, both leading to textual verification signals in the RAG output. Experiments across various Multimodal RAG models and datasets demonstrate that AQUA is highly effective, harmless, stealthy, and robust against common attacks, enabling reliable copyright tracing with high efficiency and statistical significance.",
"strengths": "1. This problem and the proposed method is novel.\n2. This paper is well-structured and easy to follow.\n3. The evaluation consider multiple attack methods.",
"weaknesses": "1. The space between each paragraph seems small.\n2. It seems no adapative attacks are considered.\n3. There are only a few baselines to compare.",
"questions": "1. According to Table 4, it seems $AQUA_{acronym}$ consistently outperforms $AQUA_{spatial}$. Are there specific scenarios where only $AQUA_{spatial}$ is applicable or can those two varients be combined?\n\n2. Are image watermarking methods totally inapplicable in this problem?\n\n3. Apart from copyright detection, can AQUA be extended for the attribution of copyright as well [1]?\n\n[1] Watermark-based Attribution of AI-Generated Content.\n\n4. What value of Rank can be considered as good, as it ranges from 1 to 10 in table 2?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T23:20:21",
"modification_date": "2025-11-12T13:56:34",
"review_url": "https://openreview.net/forum?id=eWBu4tY9ta¬eId=JqedpfCwae",
"license": "CC BY 4.0"
}
] |
T9ikO8tXfY
|
https://openreview.net/forum?id=T9ikO8tXfY
|
Breaking Down and Building Up: Mixture of Skill-Based Vision-and-Language Navigation Agents
| 4
| 4
|
[
4,
4,
4
] |
[
4,
3,
5
] | 3
|
[
"Vision-and-Language Navigation",
"Skill-Based Agents",
"Mixture-of-Experts"
] |
Vision-and-Language Navigation (VLN) poses significant challenges for agents to interpret natural language instructions and navigate complex 3D environments. While recent progress has been driven by large-scale pre-training and data augmentation, current methods still struggle to generalize to unseen scenarios, particularly when complex spatial and temporal reasoning is required. In this work, we propose SkillNav, a modular framework that introduces structured, skill-based reasoning into Transformer-based VLN agents. Our method decomposes navigation into a set of interpretable atomic skills (e.g., Vertical Movement, Area and Region Identification, Stop and Pause), each handled by a specialized agent. To support targeted skill training without manual data annotation, we construct a synthetic dataset pipeline that generates diverse, linguistically natural, skill-specific instruction-trajectory pairs. We then introduce a novel training-free Vision-Language Model (VLM)-based router, which dynamically selects the most suitable agent at each time step by aligning sub-goals with visual observations and historical actions. SkillNav obtains competitive results on commonly used benchmarks and establishes state-of-the-art generalization on GSA-R2R, a benchmark with novel instruction styles and unseen environments.
|
We propose SkillNav, a modular framework that decomposes navigation into interpretable atomic skills and uses a vision-language model router to achieve state-of-the-art generalization in vision-and-language navigation.
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=T9ikO8tXfY
| 2025-09-19T07:48:19
| 3
|
[
{
"id": "nXQ34dRGRC",
"forum": "T9ikO8tXfY",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14576/Reviewer_WcPc",
"reviewer_name": "Reviewer_WcPc",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes SkillNav, a mixture of skill-based framework for Vision-and-Language Navigation (VLN). It decomposes instructions with an LLM-based Temporal Reordering module into ordered sub-goals, then uses a VLM-based Action Router to select among specialized skill agents at each step. The approach aims to improve compositionality, interpretability, and OOD generalization on R2R and GSA-R2R.",
"strengths": "1. Clear, modular design with interpretability: Temporal reordering → sub-goal localization → skill routing → action, making intermediate reasoning explicit and auditable.\n2. Thoughtful skill taxonomy and expansion: Builds on NavNuances (Direction Adjustment, Vertical Movement, Landmark Detection, Area/Region ID) and adds Stop & Pause and Temporal Order Planning to address frequent failure modes.\n3. Strong empirical results under distribution shift: On R2R (Val-Unseen/Test-Unseen) and GSA-R2R (R/N; Basic/Scene), SkillNav achieves competitive to SOTA performance; notably, prior SRDF is strong on R2R but generalizes poorly to GSA-R2R, whereas SkillNav holds up better.\n4. Two-stage training that encourages reusable skills: Agents share a DUET-based, skill-agnostic backbone (trained on R2R + ScaleVLN + Temporal synthetic data) before skill-specific fine-tuning—clean separation of training for reuse and specialization.\n5. Ablations that isolate key components: Experiments vary reordering on/off and router choices (Random, Qwen2.5-VL-7B-Instruct, GLM-4.1V-9B), showing consistent gains from both Temporal Reordering and VLM routing",
"weaknesses": "1. Router dependence on external VLMs: The action router relies on large VLMs in a zero-shot fashion. This raises cost, stability, and reproducibility concerns (model drift, API dependence). Can the authors quantify runtime/cost trade-offs and variance across VLM choices?\n2. Synthetic, skill-directed instructions may bias learning: Since skills are trained on tightly targeted synthetic data, do agents overfit to linguistic “triggers”? Consider small-scale human validation or cross-style tests beyond the current splits. (Design suggests targeted datasets per skill.)\n3. Coverage and interaction of skills: The taxonomy is compelling, but there is limited quantitative analysis of coverage (how often each skill is needed) and inter-skill interference. A distribution/co-occurrence and error-attribution study would clarify completeness.\n4. Reproducibility details: Training/fine-tuning hyperparameters, routing prompts, and fallback policies are not fully specified; reproducing the full stack (especially router behavior) may be challenging.\n5. Reproducibility details: Training/fine-tuning hyperparameters, routing prompts, and fallback policies are not fully specified; reproducing the full stack (especially router behavior) may be challenging.",
"questions": "See weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T16:36:34",
"modification_date": "2025-11-12T13:22:42",
"review_url": "https://openreview.net/forum?id=T9ikO8tXfY¬eId=nXQ34dRGRC",
"license": "CC BY 4.0"
},
{
"id": "M2ixLzmHYy",
"forum": "T9ikO8tXfY",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14576/Reviewer_EJ2y",
"reviewer_name": "Reviewer_EJ2y",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "SkillNav is a modular VLN framework that decomposes navigation into atomic skills, uses an LLM to reorder instructions into subgoals, and employs a VLM-based router to pick the right skill at each step. Each skill has a synthetic, skill-focused dataset and a specialized agent fine-tuned on a DUET backbone, then all agents are integrated for execution. The method attains strong R2R results and state-of-the-art generalization on GSA-R2R, and shows skill-wise gains on NavNuances.",
"strengths": "1. Clear modular design with interpretable skills, temporal reordering, and a VLM router that localizes subgoals and selects a single best skill. \n2. Practical synthetic data pipeline that enables skill-specific supervision without human annotation.\n3. Reasonable empirical results on GSA-R2R with competitive R2R performance and skill-level improvements on NavNuances.",
"weaknesses": "1. It would be great to see if the method can generalize to a broader setting, such as real-world robotic settings or computer-use agent settings. The current evaluations on VLN tasks are somewhat limited and artificial.\n2. Compared with other baselines such as SRDF, the model still falls behind and there is a significant performance gap between their model and other baselines on benchmarks.\n3. Router effectiveness depends on the chosen VLM, and ablations show nontrivial variance across routers.",
"questions": "1. Can the model fit into the current MLLM-based paradigm?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T07:30:30",
"modification_date": "2025-11-12T13:22:42",
"review_url": "https://openreview.net/forum?id=T9ikO8tXfY¬eId=M2ixLzmHYy",
"license": "CC BY 4.0"
},
{
"id": "WjzeyD2ABr",
"forum": "T9ikO8tXfY",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14576/Reviewer_WmQk",
"reviewer_name": "Reviewer_WmQk",
"rating": 4,
"confidence": 5,
"soundness": 3,
"contribution": 2,
"presentation": 4,
"summary": "This paper proposes SkillNav, a mixture-of-experts framework designed to decompose vision-and-language navigation (VLN) into atomic skills and route-level reasoning. The approach aims to improve compositional generalization by assigning navigation instructions to specialized experts and reordering sub-instructions using LLMs. The authors also introduce several skill-based datasets to support the design and use of VLMs as the router for different experts. Experiments on R2R and GSA-R2R demonstrate the effectiveness of the proposed SkillNav especially in GSA-R2R with styled instructions.",
"strengths": "1. Interesting motivation and idea. The paper presents a clear and appealing idea of decomposing navigation into modular “skills” and routing instructions through specialized experts. This formulation aligns well with the broader goal of enhancing compositional reasoning in VLN. The authors also provide distinct datasets for training each expert, which adds practical value and can facilitate future research in this direction.\n2. The proposed SkillNav framework effectively combines the generalization ability of LLM-based methods with the strong task-specific performance of supervised VLN models through a hierarchical structure. This hybrid design is promising and represents a good balance between accuracy and efficiency.\n3. Clear and well-organized presentation. The paper is well-written and easy to follow. The motivation, methodology, and experiments are presented in a coherent and logical flow, making the main contributions easy to understand and evaluate.",
"weaknesses": "1. Misalignment between motivation and method. The motivation and the proposed method don’t quite line up. While the paper claims to tackle compositional reasoning through an MoE setup, the approach feels more like a form of data augmentation. The results in Table 4 also suggest that the MoE component doesn’t really make a difference — the no-router variant performs almost the same. The largest improvement happens in the Test-N-Scene split of GSA-R2R, but the method doesn’t include any mechanism specifically designed for handling scene-style instructions. This improvement might actually come from the reordering module, where the LLM helps better interpret the instructions, rather than from the MoE design itself.\n2. Modest experimental gains and potential data leakage. The experimental results are not very convincing. On the standard R2R benchmark, the method falls short compared to SRDF, which also augments instructions in a similar way. The claimed improvement mainly comes from GSA-R2R, but that dataset includes scenes from HM3D, which are also used in training by ScaleVLN and the proposed SkillNav. This overlap raises a concern about possible data leakage, which could partly explain the performance gains.\n3. Missing ablations and unclear MoE behavior. The paper would benefit from more ablation studies to verify whether the MoE setup actually works as intended. Right now, the model uses VLMs to directly predict the skill without any fine-tuning, which is odd given that the paper also introduces a skill-labeled dataset. It’s unclear why that dataset wasn’t used to train or adapt the experts. In addition, there’s no analysis showing how the routing mechanism selects the right expert for a given instruction. Without this, it’s hard to tell whether the model is really leveraging multiple experts or just behaving like a black box.",
"questions": "1. In Lines 48–52, the authors state that existing methods tend to memorize examples, limiting their effectiveness in unseen environments. Could the authors provide evidence for this claim? Some recent works, such as ScaleVLN and SRDF, have already narrowed the performance gap between val-seen and val-unseen, which seems to contradict that statement.\n2. More explanation is needed regarding the “atomic skills” derived from NavNuances. Since these form the foundation of the proposed method, readers would benefit from a clearer understanding of what each skill represents and how they are defined or annotated.\n3. In Table 3, the landmark detection performance is noticeably lower than the other subtasks. Could the authors elaborate on why this happens? Does it relate to data imbalance, annotation difficulty, or inherent limitations of the current detection pipeline?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T21:43:31",
"modification_date": "2025-11-12T13:22:43",
"review_url": "https://openreview.net/forum?id=T9ikO8tXfY¬eId=WjzeyD2ABr",
"license": "CC BY 4.0"
}
] |