Dataset schema (one row per column: name, dtype, min/max of length or value):

id                   stringlengths      10 to 10
url                  stringlengths      42 to 42
title                stringlengths      5 to 214
average_rating       float64            -1 to 8.5
average_confidence   float64            -1 to 5
ratings              listlengths        0 to 9
confidences          listlengths        0 to 9
reviewers_num        int64              0 to 9
keywords             listlengths        1 to 42
abstract             stringlengths      26 to 4.31k
tldr                 stringlengths      0 to 250
primary_area         stringclasses      21 values
pdf_url              stringlengths      40 to 40
submission_date      timestamp[s]date   2025-09-01 19:59:51 to 2025-09-20 20:18:08
total_reviews        int64              0 to 18
reviews              listlengths        0 to 9
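The schema above can be checked mechanically. Below is a minimal sketch, assuming records are plain Python dicts keyed by the column names; the bounds come from the schema (the 4.31k abstract maximum is the viewer's rounded display, approximated as 4310 characters here), but the `check` helper itself is hypothetical and not part of any dataset tooling.

```python
# Hypothetical validator for records shaped like the schema above.
# For string/list columns the bounds constrain the length; for numeric
# columns they constrain the value itself. Optional columns (e.g. tldr,
# which may be absent from a record) are simply skipped when missing.

SCHEMA = {
    "id": (str, 10, 10),                  # stringlengths: min, max
    "url": (str, 42, 42),
    "title": (str, 5, 214),
    "average_rating": (float, -1, 8.5),   # float64: value range
    "average_confidence": (float, -1, 5),
    "ratings": (list, 0, 9),              # listlengths: min, max
    "confidences": (list, 0, 9),
    "reviewers_num": (int, 0, 9),
    "keywords": (list, 1, 42),
    "abstract": (str, 26, 4310),          # 4.31k, rounded by the viewer
}

def check(record: dict) -> list[str]:
    """Return a list of schema violations (empty if the record conforms)."""
    errors = []
    for field, (typ, lo, hi) in SCHEMA.items():
        if field not in record:
            continue  # optional / absent field
        value = record[field]
        if not isinstance(value, typ):
            errors.append(f"{field}: expected {typ.__name__}")
            continue
        measure = len(value) if typ in (str, list) else value
        if not (lo <= measure <= hi):
            errors.append(f"{field}: {measure} outside [{lo}, {hi}]")
    return errors
```

For example, a record carrying only `id`, `average_rating`, and `ratings` from the first row below passes cleanly, while a 5-character `id` is flagged as outside `[10, 10]`.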

id: KV9hrBIqA9
url: https://openreview.net/forum?id=KV9hrBIqA9
title: Think-on-Graph 3.0: Efficient and Adaptive LLM Reasoning on Heterogeneous Graphs via Multi-Agent Dual-Evolving Context Retrieval
average_rating: 3
average_confidence: 4.5
ratings: [ 2, 2, 2, 6 ]
confidences: [ 5, 4, 5, 4 ]
reviewers_num: 4
keywords: [ "Retrieval-Augmented Generation (RAG)", "Multi-Agent", "Dual-Evolving", "Heterogeneous Graph" ]
abstract: Retrieval-Augmented Generation (RAG) and Graph-based RAG have become important paradigms for enhancing Large Language Models (LLMs) with external knowledge. However, existing approaches face a fundamental trade-off. While graph-based methods are inherently dependent on high-quality graph structures, they face significant practical constraints: manually constructed knowledge graphs are prohibitively expensive to scale, while automatically extracted graphs from corpora are limited by the performance of the underlying LLM extractors, especially when using smaller, locally deployed models. This paper presents Think-on-Graph 3.0 (ToG-3), a novel framework that introduces a Multi-Agent Context Evolution and Retrieval (MACER) mechanism to overcome these limitations. Our core innovation is the dynamic construction and refinement of a Chunk-Triplets-Community heterogeneous graph index, which pioneers a dual-evolution mechanism of Evolving Query and Evolving Sub-Graph for precise evidence retrieval. This approach addresses a critical limitation of prior Graph-based RAG methods, which typically construct a static graph index in a single pass without adapting to the actual query. A multi-agent system, comprising Constructor, Retriever, Reflector, and Responser agents, collaboratively engages in an iterative process of evidence retrieval, answer generation, sufficiency reflection, and, crucially, query and subgraph evolution. This dual-evolving multi-agent system allows ToG-3 to adaptively build a targeted graph index during reasoning, mitigating the inherent drawbacks of static, one-time graph construction and enabling deep, precise reasoning even with lightweight LLMs. Extensive experiments demonstrate that ToG-3 outperforms competing baselines on both deep and broad reasoning benchmarks, and ablation studies confirm the efficacy of the components of the MACER framework.
tldr: We introduce Think-on-Graph 3.0 (ToG-3), which provides a unified, efficient, and adaptive solution for complex knowledge reasoning tasks (including deep reasoning and broad reasoning tasks) via a Multi-Agent Dual-Evolving Context Retrieval Loop.
primary_area: foundation or frontier models, including LLMs
pdf_url: https://openreview.net/pdf?id=KV9hrBIqA9
submission_date: 2025-09-12T21:21:53
total_reviews: 4
reviews: [ { "id": "kDTRB3yb1C", "forum": "KV9hrBIqA9", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission4456/Reviewer_ZyKa", "reviewer_name": "Reviewer_ZyKa", "rating": 2, "confidence": 5, "soundness": 1, "contribution": 1, "presentation": 3, "summary": "The pa...

id: noLMXTqgCp
url: https://openreview.net/forum?id=noLMXTqgCp
title: Decoupled-Value Attention for Prior-Data Fitted Networks: GP-Inference for Physical Equations
average_rating: 4
average_confidence: 2.75
ratings: [ 6, 4, 4, 2 ]
confidences: [ 3, 3, 4, 1 ]
reviewers_num: 4
keywords: [ "Gaussian Process", "Meta-Learning", "Prior-data Fitted Networks", "Learning of Physics" ]
abstract: Prior-data fitted networks (PFNs) are a promising alternative to time-consuming Gaussian process (GP) inference for creating fast surrogates of physical systems. PFNs reduce the computational burden of GP training by replacing Bayesian inference in GP with a single forward pass of a learned prediction model. However, with standard Transformer attention, PFNs show limited effectiveness on high-dimensional regression tasks. We introduce Decoupled-Value Attention (DVA), motivated by the GP property that the function space is fully characterized by the kernel over inputs and the predictive mean is a weighted sum of training targets. DVA computes similarities from inputs only and propagates labels solely through values. Thus, the proposed DVA mirrors the GP update while remaining kernel-free. We demonstrate that the crucial factor for scaling PFNs is the attention rule rather than the architecture itself. Specifically, our results demonstrate that (a) localized attention consistently reduces out-of-sample validation loss in PFNs across different dimensional settings, with validation loss reduced by more than 50\% in five- and ten-dimensional cases, and (b) the role of attention is more decisive than the choice of backbone architecture, showing that CNN-based PFNs can perform on par with their Transformer-based counterparts. The proposed PFNs provide 64-dimensional power flow equation approximations with a mean absolute error of the order of $10^{-3}$, while being over $80\times$ faster than exact GP inference.
tldr: Decoupled-Value Attention (DVA) separates input similarity from label propagation, mirroring Gaussian process updates and enabling scalable, kernel-free PFNs. This achieves architecture-agnostic and scalable PFNs.
primary_area: transfer learning, meta learning, and lifelong learning
pdf_url: https://openreview.net/pdf?id=noLMXTqgCp
submission_date: 2025-09-19T11:28:34
total_reviews: 4
reviews: [ { "id": "2ZehxxBZnR", "forum": "noLMXTqgCp", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission15541/Reviewer_YKgM", "reviewer_name": "Reviewer_YKgM", "rating": 6, "confidence": 3, "soundness": 2, "contribution": 3, "presentation": 2, "summary": "Prior...

id: Ba5hOI2SkF
url: https://openreview.net/forum?id=Ba5hOI2SkF
title: Learning Deep Modality-Shared Self-Expressiveness for Image Clustering with Textual Information
average_rating: 5
average_confidence: 4
ratings: [ 6, 6, 4, 4 ]
confidences: [ 4, 4, 4, 4 ]
reviewers_num: 4
keywords: [ "deep clustering", "self-expressive model", "multimodal" ]
abstract: Leveraging textual information for image clustering has emerged as a promising direction, driven by the powerful representations of vision-language models. However, existing approaches usually leverage modality alignment, which merely shapes the representations implicitly, failing to preserve and exploit modality-specific structures, and leaving the overall representation distribution unclear. In this paper, we propose a simple but principled approach, termed deep modality-shared self-expressive model (DeepMORSE), which simultaneously learns structured representations that conform to the union of modality-specific subspace structures and, via a modality-shared self-expressive model, discovers structures shared across modalities. We evaluate our DeepMORSE approach on seven widely used image clustering benchmarks and observe performance improvements exceeding 4\% on the UCF-101, DTD-47, and ImageNet-Dogs datasets. In addition, we demonstrate the strong transferability of the learned representations by achieving state-of-the-art performance on downstream tasks such as image retrieval and zero-shot classification, without requiring any task-specific losses or post-processing.
primary_area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
pdf_url: https://openreview.net/pdf?id=Ba5hOI2SkF
submission_date: 2025-09-18T21:23:43
total_reviews: 4
reviews: [ { "id": "qLvkuymQcJ", "forum": "Ba5hOI2SkF", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission11568/Reviewer_JHm3", "reviewer_name": "Reviewer_JHm3", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...

id: aX3E6LirK5
url: https://openreview.net/forum?id=aX3E6LirK5
title: pFedMMA: Personalized Federated Fine-Tuning with Multi-Modal Adapter for Vision-Language Models
average_rating: 4.5
average_confidence: 3.5
ratings: [ 6, 4, 4, 4 ]
confidences: [ 4, 4, 4, 2 ]
reviewers_num: 4
keywords: [ "Multi-Modal Adapter", "Personalized Federated Fine-Tuning", "Few-Shot Learning of Vision Language Models" ]
abstract: Vision-Language Models (VLMs) like CLIP have demonstrated remarkable generalization in zero- and few-shot settings, but adapting them efficiently to decentralized, heterogeneous data remains a challenge. While prompt tuning has emerged as a popular parameter-efficient approach in personalized federated learning, existing methods often sacrifice generalization in favor of personalization, struggling particularly on unseen classes or domains. In this work, we propose pFedMMA, the first personalized federated learning framework that leverages multi-modal adapters for vision-language tasks. Each adapter contains modality-specific up- and down-projection layers alongside a globally shared projection that aligns cross-modal features. Our optimization strategy allows clients to locally adapt to personalized data distributions while collaboratively training the shared projection to improve global generalization. This design is also communication-efficient, as only the shared component is exchanged during communication rounds. Through extensive experiments across eleven datasets, including domain- and label-shift scenarios, we show that pFedMMA achieves state-of-the-art trade-offs between personalization and generalization, outperforming recent federated prompt tuning methods.
primary_area: foundation or frontier models, including LLMs
pdf_url: https://openreview.net/pdf?id=aX3E6LirK5
submission_date: 2025-09-18T17:43:15
total_reviews: 4
reviews: [ { "id": "NILAgvFwdI", "forum": "aX3E6LirK5", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission11070/Reviewer_VEYb", "reviewer_name": "Reviewer_VEYb", "rating": 6, "confidence": 4, "soundness": 4, "contribution": 2, "presentation": 3, "summary": "The p...

id: ZmGfCj1n2P
url: https://openreview.net/forum?id=ZmGfCj1n2P
title: A robust PPG foundation model using multimodal physiological supervision
average_rating: 4
average_confidence: 4
ratings: [ 4, 4, 4 ]
confidences: [ 4, 4, 4 ]
reviewers_num: 3
keywords: [ "Photoplethysmography (PPG)", "health", "ubiquitous computing", "foundation model", "wearables", "representation learning", "multimodal", "self-supervised learning", "time series", "physiology" ]
abstract: Photoplethysmography (PPG), a non-invasive measure of changes in blood volume, is widely used in both wearable devices and clinical settings. Although recent work has explored PPG foundation models using large-scale intensive care unit (ICU) datasets, these efforts often assume the need for clean and high-quality signals. In contrast, we argue that the inherent noise and variability in ICU datasets can be harnessed to build more robust and generalizable representations. To address this, we propose a PPG foundation model that leverages accompanying electrocardiogram and respiratory signals in ICU datasets to select contrastive samples during pretraining. Our approach allows the model to retain and learn from noisy PPG segments, improving robustness without requiring multimodal inputs at inference. Our model, pretrained on 3x fewer subjects than existing state-of-the-art approaches, achieves performance improvements of up to 36\% in classification and 42\% in regression on 14 out of 15 diverse downstream tasks, including stress and heart rate prediction. Our results demonstrate that multimodal supervision can leverage clinical data to enable the development of robust, unimodal foundation models for both clinical and consumer-level data.
primary_area: foundation or frontier models, including LLMs
pdf_url: https://openreview.net/pdf?id=ZmGfCj1n2P
submission_date: 2025-09-19T22:51:20
total_reviews: 3
reviews: [ { "id": "xvsg1yN8mg", "forum": "ZmGfCj1n2P", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission19085/Reviewer_ye6c", "reviewer_name": "Reviewer_ye6c", "rating": 4, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "The p...

id: YyPZPrPjQD
url: https://openreview.net/forum?id=YyPZPrPjQD
title: TableMaster: A Recipe to Advance Table Understanding with Language Models
average_rating: 4.666667
average_confidence: 3
ratings: [ 6, 4, 4 ]
confidences: [ 3, 3, 3 ]
reviewers_num: 3
keywords: [ "Table Understanding", "Table Reasoning", "Large Language Model", "Natural Language Processing" ]
abstract: Tables serve as a fundamental format for representing structured relational data. While current language models (LMs) excel at many text-based tasks, they still face challenges in table understanding due to the complex characteristics of tabular data, such as their structured nature. In this paper, we aim to enhance LMs for improved table understanding. We identify four key challenges: 1) difficulty in locating target data, 2) deficiency in table semantics, 3) numerical inaccuracies in textual reasoning, and 4) semantic inflexibility in symbolic reasoning. To address these issues, we propose TableMaster, a recipe and comprehensive framework that integrates multiple solutions to overcome these obstacles. TableMaster first extracts relevant table content and verbalizes it with enriched semantic context. Additionally, we introduce adaptive reasoning, a flexible approach that dynamically adjusts between textual and symbolic reasoning, tailoring the reasoning process to each query. Extensive analyses and experiments demonstrate our findings and the effectiveness of TableMaster. On the WikiTQ dataset, TableMaster achieves an accuracy of 78.13% using GPT-4o-mini, surpassing existing baselines.
tldr: TableMaster analyzes the challenges of table understanding with language models and provides a comprehensive recipe and framework to address them.
primary_area: applications to computer vision, audio, language, and other modalities
pdf_url: https://openreview.net/pdf?id=YyPZPrPjQD
submission_date: 2025-09-19T11:42:08
total_reviews: 3
reviews: [ { "id": "0KGpPCYL1a", "forum": "YyPZPrPjQD", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission15626/Reviewer_MV4i", "reviewer_name": "Reviewer_MV4i", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "This ...

id: IFsvqHlMPq
url: https://openreview.net/forum?id=IFsvqHlMPq
title: Multimodal Masked Polymer Autoencoder for Unified Polymer Informatics
average_rating: 3
average_confidence: 3.5
ratings: [ 6, 2, 2, 2 ]
confidences: [ 3, 4, 3, 4 ]
reviewers_num: 4
keywords: [ "Polymer Informatics", "Multimodal Learning", "Scientific discovery", "Data-driven polymer development", "Multi-view representation learning" ]
abstract: Recent advances in large-scale sequence modeling have opened new opportunities for polymer informatics, enabling both property prediction from structures and inverse design of structures from desired properties. Most existing approaches, however, model these tasks as separate mappings, limiting their flexibility and robustness. We propose a multimodal representation learning framework that unifies diverse polymer informatics tasks within a single model. Our approach treats each property or structural element as an individual submodality and introduces an information-theoretic objective that balances informativeness across arbitrary subsets of modalities. The resulting Multimodal Masked Polymer Autoencoder (MMPAE) serves as an end-to-end foundation model, supporting both cross-modal generation and retrieval. Extensive experiments on large polymer datasets show that MMPAE not only surpasses strong task-specific baselines under realistic missing-value conditions, but also provides a flexible platform for diverse downstream applications within a unified architecture.
primary_area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
pdf_url: https://openreview.net/pdf?id=IFsvqHlMPq
submission_date: 2025-09-20T15:20:29
total_reviews: 4
reviews: [ { "id": "KU9uUdjusD", "forum": "IFsvqHlMPq", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission24109/Reviewer_VuYc", "reviewer_name": "Reviewer_VuYc", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "The a...

id: sfe6KFGRlD
url: https://openreview.net/forum?id=sfe6KFGRlD
title: Dual-Stage Frequency-based Denoising for Generative Recommendation
average_rating: 4
average_confidence: 4
ratings: [ 2, 4, 6, 4 ]
confidences: [ 4, 5, 3, 4 ]
reviewers_num: 4
keywords: [ "Generative Recommendation", "Frequency-Domain Modeling", "Denoising", "Attention Mechanism" ]
abstract: Generative recommendation has emerged as a promising frontier in modeling the complex and continuously evolving nature of user preferences. However, its practical effectiveness is often undermined by a fundamental yet overlooked vulnerability: its sensitivity to the pervasive high-frequency sequential noise inherent in raw user interaction data from accidental clicks or transient interests. This paper introduces a paradigm shift that explicitly performs frequency-domain modeling to effectively isolate and suppress sequential noise, while further addressing the challenge of frequency-domain sparsity. Specifically, we propose TONE (Two-stage Optimized deNoising for gEnerative recommendation), a generative framework built around a principled two-stage denoising strategy. In the first stage of item codebook construction, we apply ResGMM (Residual Gaussian Mixture Model) to better fit clustering boundaries, thereby alleviating semantic noise and establishing a robust foundation. In the second stage, on the generative model side, we employ a learnable Gaussian kernel to filter context-specific noise. Furthermore, we redesign the residual frequency-domain attention mechanism with explicit separation of real and imaginary components, and introduce a learnable matrix to counteract attention collapse induced by Fourier energy concentration, while preserving expressiveness. Empirical results demonstrate that TONE achieves new state-of-the-art performance over strong baselines on three widely used benchmarks, achieving notable improvements on the Amazon Beauty dataset, with gains of 8.93\% in Recall@20 and 8.33\% in NDCG@20. Extensive experiments confirm that explicit frequency-domain denoising is key to unlocking a new level of performance and robustness in generative recommendation. The source code is available at \url{https://anonymous.4open.science/r/TONE-9E07/}.
primary_area: generative models
pdf_url: https://openreview.net/pdf?id=sfe6KFGRlD
submission_date: 2025-09-19T00:41:57
total_reviews: 4
reviews: [ { "id": "SW8kOeLN56", "forum": "sfe6KFGRlD", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission13114/Reviewer_2L5c", "reviewer_name": "Reviewer_2L5c", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "This ...

id: nFTdyfz4fC
url: https://openreview.net/forum?id=nFTdyfz4fC
title: Exploring Aleatoric Uncertainty in Object Detection via Vision Foundation Models
average_rating: 3.333333
average_confidence: 3
ratings: [ 4, 4, 2 ]
confidences: [ 3, 3, 3 ]
reviewers_num: 3
keywords: [ "Aleatoric uncertainty", "Data uncertainty", "Object detection", "Data-centric learning" ]
abstract: Datasets collected from the open world unavoidably suffer from various forms of randomness or noise, leading to the ubiquity of aleatoric (data) uncertainty. Quantifying such uncertainty is particularly pivotal for object detection, where images contain multi-scale objects with occlusion, obscurity, and even noisy annotations, in contrast to images with centric and similar-scale objects in classification. This paper suggests modeling and exploiting the uncertainty inherent in object detection data with vision foundation models and develops a data-centric reliable training paradigm. Technically, we propose to estimate the data uncertainty of each object instance based on the feature space of vision foundation models, which are trained on ultra-large-scale datasets and able to exhibit universal data representation. In particular, we assume a mixture-of-Gaussian structure of the object features and devise Mahalanobis distance-based measures to quantify the data uncertainty. Furthermore, we suggest two crucial and practical uses of the estimated uncertainty: 1) defining an uncertainty-aware sample filter to abandon noisy and redundant instances and avoid over-fitting, and 2) defining a sample-adaptive regularizer to balance easy/hard samples for adaptive training. The estimated aleatoric uncertainty serves as an extra level of annotation of the dataset, so it can be utilized in a plug-and-play manner with any model. Extensive empirical studies verify the effectiveness of the proposed aleatoric uncertainty measure on various advanced detection models and challenging benchmarks.
primary_area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
pdf_url: https://openreview.net/pdf?id=nFTdyfz4fC
submission_date: 2025-09-18T23:27:22
total_reviews: 3
reviews: [ { "id": "LNR2MC82PW", "forum": "nFTdyfz4fC", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission12647/Reviewer_ahvk", "reviewer_name": "Reviewer_ahvk", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "This ...

id: ZOV3697bZZ
url: https://openreview.net/forum?id=ZOV3697bZZ
title: Towards Generalizable Implicit In-Context Learning with Attention Routing
average_rating: 5
average_confidence: 3.25
ratings: [ 6, 4, 4, 6 ]
confidences: [ 3, 3, 3, 4 ]
reviewers_num: 4
keywords: [ "In-context Learning", "Large Language Model", "Transfer Learning" ]
abstract: Implicit in-context learning (ICL) has newly emerged as a promising paradigm that simulates ICL behaviors in the representation space of Large Language Models (LLMs), aiming to attain few-shot performance at zero-shot cost. However, existing approaches largely rely on injecting shift vectors into residual flows, which are typically constructed from labeled demonstrations or task-specific alignment. Such designs fall short of utilizing the structural mechanisms underlying ICL and suffer from limited generalizability. To address this, we propose In-Context Routing (ICR), a novel implicit ICL method that internalizes generalizable ICL patterns at the attention logits level. It extracts reusable structural directions that emerge during ICL and employs a learnable input-conditioned router to modulate attention logits accordingly, enabling a train-once-and-reuse framework. We evaluate ICR on 12 real-world datasets spanning diverse domains and multiple LLMs. The results show that ICR consistently outperforms prior implicit ICL methods that require task-specific retrieval or training, while demonstrating robust generalization to out-of-domain tasks where existing methods struggle. These findings position ICR to push the boundary of ICL's practical value.
tldr: We propose In-Context Routing, an implicit ICL method that steers attention logits for robust, generalizable few-shot performance at zero-shot cost.
primary_area: foundation or frontier models, including LLMs
pdf_url: https://openreview.net/pdf?id=ZOV3697bZZ
submission_date: 2025-09-18T05:29:13
total_reviews: 4
reviews: [ { "id": "eikM4XmkYy", "forum": "ZOV3697bZZ", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission9862/Reviewer_sqZY", "reviewer_name": "Reviewer_sqZY", "rating": 6, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "This p...

id: 8r3oMjN06W
url: https://openreview.net/forum?id=8r3oMjN06W
title: A COLLUSION ATTACK ON STABLE SIGNATURE AND A DEFENSE USING DOMAIN-BASED SIGNATURE ASSIGNMENT
average_rating: 2.5
average_confidence: 4.25
ratings: [ 2, 2, 2, 4 ]
confidences: [ 4, 5, 4, 4 ]
reviewers_num: 4
keywords: [ "Image watermarking", "Stable Signature", "Collusion Attack", "domain-based signature assignment" ]
abstract: Stable Signature is a recent watermarking framework based on latent diffusion models, which generates images with embedded signatures by fine-tuning the decoder. While prior work has shown that watermarks can be removed while maintaining visual quality by retraining the watermarked decoder with clean images, we demonstrate that collusion among multiple users poses a practical and severe threat. Our attack begins by averaging watermarked decoders, which already provides a strong initialization for watermark removal. With encoder access, this initialization can be further fine-tuned to significantly suppress the watermark signal. Even when the encoder is not available, colluders can expand their group size to achieve comparable effectiveness, highlighting the scalability of the attack. To defend against this threat, we propose a domain-based signature assignment mechanism. In this strategy, the watermarking service provider (e.g., one using Stable Signature) partitions the signature space into domains, requiring all users in the same domain to share a fixed set of domain-index bits in their signatures. Experiments show that the domain-index bits remain robust under the collusion attack when the encoder is not available. Our studies suggest that adopting the domain-based signature assignment and keeping the encoder confidential will be good practices when Stable Signature is used as a watermarking solution.
primary_area: applications to computer vision, audio, language, and other modalities
pdf_url: https://openreview.net/pdf?id=8r3oMjN06W
submission_date: 2025-09-12T08:07:05
total_reviews: 5
reviews: [ { "id": "hbKFurAspx", "forum": "8r3oMjN06W", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission4208/Reviewer_Z4NL", "reviewer_name": "Reviewer_Z4NL", "rating": 2, "confidence": 4, "soundness": 1, "contribution": 1, "presentation": 3, "summary": "The pa...

id: QAwGkFD8ES
url: https://openreview.net/forum?id=QAwGkFD8ES
title: SpectrumKD: Dynamic Dataset Curation for Distribution-Aware Knowledge Distillation of Large Language Models
average_rating: 2.666667
average_confidence: 3.666667
ratings: [ 2, 4, 2 ]
confidences: [ 4, 3, 4 ]
reviewers_num: 3
keywords: [ "Large Language Models", "Knowledge Distillation", "Data Curation" ]
abstract: Knowledge Distillation (KD) is a critical technique for compressing large language models (LLMs) into efficient student models while preserving performance, yet its efficacy remains highly sensitive to training data quality. Current dataset curation approaches mainly focus on quality and information at the instance level, neglecting the global distribution characteristics of the entire training dataset. This oversight often results in suboptimal data selection that degrades distillation outcomes. To address this limitation, we propose SpectrumKD, a principled data curation framework that dynamically refines training datasets across epochs by leveraging the global distribution of instance difficulty. SpectrumKD constructs a difficulty spectrum over the training corpus by ranking instances based on student model evaluation, partitioning them into four distinct learning phases: Early Learning, Continuous Learning, Late Learning, and No Learning. A sliding window segmentation strategy then selects epoch-specific subsets by adaptively shifting a fixed window across the spectrum from low to high difficulty, to ensure a uniform increase in subset difficulty across training epochs. As a plug-and-play module, SpectrumKD enhances diverse white-box KD methods and model architectures with minor computational cost. Extensive experiments across multiple language model benchmarks demonstrate consistent performance gains in distilled models, with improvements observed under varied KD approaches and model families. Crucially, SpectrumKD achieves these gains without modifying core distillation algorithms, highlighting the pivotal role of dataset distribution features and data compatibility in effective LLM distillation. Our work establishes a data-centric paradigm for KD, providing both insights and tools to advance the efficiency and capability of compressed language models.
primary_area: foundation or frontier models, including LLMs
pdf_url: https://openreview.net/pdf?id=QAwGkFD8ES
submission_date: 2025-09-17T21:51:14
total_reviews: 3
reviews: [ { "id": "djJ1u6Hfkj", "forum": "QAwGkFD8ES", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission9274/Reviewer_UAZg", "reviewer_name": "Reviewer_UAZg", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "This p...

id: ER7zDJXtRI
url: https://openreview.net/forum?id=ER7zDJXtRI
title: ComPhy: Composing Physical Models with end-to-end Alignment
average_rating: 4.5
average_confidence: 3.25
ratings: [ 6, 4, 6, 2 ]
confidences: [ 3, 3, 3, 4 ]
reviewers_num: 4
keywords: [ "Learning physics", "Physical systems", "Partial differential equations", "Systems of PDEs" ]
abstract: Real-world phenomena typically involve multiple, interwoven dynamics that can be elegantly captured by systems of Partial Differential Equations (PDEs). However, accurately solving such systems remains a challenge. In this paper, we introduce ComPhy (CP), a novel modular framework designed to leverage the inherent physical structure of the problem to solve systems of PDEs. CP assigns each PDE to a dedicated learning module, each capable of incorporating state-of-the-art methodologies such as Physics-Informed Neural Networks or Neural Conservation Laws. Crucially, CP introduces an end-to-end alignment mechanism, explicitly designed around the physical interplay of shared variables, enabling knowledge transfer between modules, and promoting solutions that are the result of the collective effort of all modules. CP is the first approach specifically designed to tackle systems of PDEs, and our results show that it outperforms state-of-the-art approaches where a single model is trained on all PDEs at once.
tldr: We introduce ComPhy, a multi-module approach to learn systems of PDEs by assigning one equation to each module. An alignment mechanism ensures the networks share information to solve the system together.
primary_area: applications to physical sciences (physics, chemistry, biology, etc.)
pdf_url: https://openreview.net/pdf?id=ER7zDJXtRI
submission_date: 2025-09-17T23:59:15
total_reviews: 4
reviews: [ { "id": "L8fe3MWOgP", "forum": "ER7zDJXtRI", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission9495/Reviewer_amcN", "reviewer_name": "Reviewer_amcN", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This p...

id: gqkayvdfM7
url: https://openreview.net/forum?id=gqkayvdfM7
title: Power of Sign: High Probability Bounds Under $(L_0, L_1)$-smoothness and Heavy-Tailed Noise
average_rating: 3
average_confidence: 4.25
ratings: [ 4, 2, 4, 2 ]
confidences: [ 3, 5, 5, 4 ]
reviewers_num: 4
keywords: [ "Heavy-tailed noise", "SignSGD", "High Probability bounds", "Generalized Smoothness" ]
abstract: In recent years, non-convex optimization problems have more often been described by the generalized $(L_0, L_1)$-smoothness assumption rather than the standard one. Meanwhile, severely corrupted data used in these problems has increased the demand for methods capable of handling heavy-tailed noise, i.e., noise with bounded $\kappa$-th moment. Motivated by these real-world trends and challenges, we explore sign-based methods in this setup and demonstrate their effectiveness in comparison with other popular solutions like clipping or normalization. In theory, we prove the first-known high probability convergence bounds under $(L_0, L_1)$-smoothness and heavy-tailed noise with mild parameter dependencies. In the case of standard smoothness, these bounds are novel for sign-based methods as well. In particular, $\texttt{SignSGD}$ with batching achieves sample complexity $\tilde{O}\left(\left(\frac{\Delta L_0}{\varepsilon^2} + \frac{\Delta L_1}{\varepsilon}\right)\left[1 + \left(\frac{\sigma}{\varepsilon}\right)^\frac{\kappa}{\kappa-1}\right]\right), \kappa \in (1,2]$. Under the assumption of symmetric noise, $\texttt{SignSGD}$ with Majority Voting can robustly work on the whole range of $\kappa \in (0,2]$ with complexity $\tilde{O}\left(\left(\frac{\Delta L_0}{\varepsilon^2} + \frac{\Delta L_1}{\varepsilon}\right)\left[\frac{1}{\kappa^2} + \frac{\sigma^2}{\varepsilon^2}\right]\right)$. We also obtain results for parameter-free methods, Polyak-Lojasiewicz functions and momentum-based methods (in expectation). Our theoretical findings are supported by the superior performance of sign-based methods in training Large Language Models compared to clipping and normalization.
primary_area: optimization
pdf_url: https://openreview.net/pdf?id=gqkayvdfM7
submission_date: 2025-09-20T01:49:35
total_reviews: 4
reviews: [ { "id": "W2xSrjfEcU", "forum": "gqkayvdfM7", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission20251/Reviewer_e67G", "reviewer_name": "Reviewer_e67G", "rating": 4, "confidence": 3, "soundness": 4, "contribution": 3, "presentation": 4, "summary": "This ...

id: WxTlAbRUE6
url: https://openreview.net/forum?id=WxTlAbRUE6
title: Benchmarking Compositional generalisation for Learning Inter-atomic Potentials
average_rating: 2.5
average_confidence: 4.25
ratings: [ 2, 2, 4, 2 ]
confidences: [ 4, 5, 3, 5 ]
reviewers_num: 4
keywords: [ "neural networks", "Graph Neural Networks", "Transformers", "compositional generalization", "benchmark tasks" ]
abstract: Inter-atomic potentials play an important role for modelling molecular dynamics. Unfortunately, traditional methods for computing such potentials are computationally heavy. In recent years, the idea of using neural networks to approximate these computations has gained in popularity, and a variety of Graph Neural Networks and Transformer based methods have been proposed for this purpose. Recent approaches provide highly accurate estimates, but they are typically trained and tested on the same molecules. It thus remains unclear whether these models mostly learn to interpolate the training labels, or whether their physically-informed designs actually allow them to capture the underlying principles. To address this gap, we propose a benchmark consisting of four tasks that each require some form of compositional generalisation. Training and testing involves separate molecules, but the training data is chosen such that generalisation to the test examples should be feasible for models that learn the physical principles. Our empirical analysis shows that the considered tasks are highly challenging for state-of-the-art models, with errors for out-of-distribution examples often being orders of magnitude higher than for in-distribution examples.
primary_area: datasets and benchmarks
pdf_url: https://openreview.net/pdf?id=WxTlAbRUE6
submission_date: 2025-09-19T17:09:52
total_reviews: 4
reviews: [ { "id": "9BGJrcogz5", "forum": "WxTlAbRUE6", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission17168/Reviewer_rZiy", "reviewer_name": "Reviewer_rZiy", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "The p...
fArR5qngYw
https://openreview.net/forum?id=fArR5qngYw
From Moments to Models: Graphon Mixture-Aware Mixup and Contrastive Learning
4
3
[ 2, 6, 4 ]
[ 3, 4, 2 ]
3
[ "Graphon", "Graphon mixture", "Moment", "Graph Contrastive Learning", "Graph Mixup" ]
Real-world graph datasets often consist of mixtures of populations, where graphs are generated from multiple distinct underlying distributions. However, modern representation learning approaches, such as graph contrastive learning (GCL) and augmentation methods like Mixup, typically overlook this mixture structure. In this work, we propose a unified framework that explicitly models data as a mixture of underlying probabilistic graph generative models represented by graphons. To characterize these graphons, we leverage graph moments (motif densities) to cluster graphs arising from the same model. This enables us to disentangle the mixture components and identify their distinct generative mechanisms. This model-aware partitioning benefits two key graph learning tasks: 1) It enables a graphon-mixture-aware mixup (GMAM), a data augmentation technique that interpolates in a semantically valid space guided by the estimated graphons, instead of assuming a single graphon per class. 2) For GCL, it enables model-adaptive and principled augmentations. Additionally, by introducing a new model-aware objective, our proposed approach (termed MGCL) improves negative sampling by restricting negatives to graphs from other models. We establish a key theoretical guarantee: a novel, tighter bound showing that graphs sampled from graphons with small cut distance will have similar motif densities with high probability. Extensive experiments on benchmark datasets demonstrate strong empirical performance. In unsupervised learning, MGCL achieves state-of-the-art results, obtaining the top average rank across eight datasets. In supervised learning, GMAM consistently outperforms existing strategies, achieving new state-of-the-art accuracy in 6 out of 7 datasets.
We model graph datasets as a mixture of underlying generative graphons, identified via motif-based clustering, to create superior data augmentation and contrastive learning frameworks.
learning on graphs and other geometries & topologies
https://openreview.net/pdf?id=fArR5qngYw
2025-09-20T05:26:29
3
[ { "id": "iiUL2lMRwx", "forum": "fArR5qngYw", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission21404/Reviewer_z9jy", "reviewer_name": "Reviewer_z9jy", "rating": 2, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "This ...
hJvcbkf2nO
https://openreview.net/forum?id=hJvcbkf2nO
Model Stitching by Invariance-aware Functional Latent Alignment
3.5
3.75
[ 2, 2, 4, 6 ]
[ 3, 4, 3, 5 ]
4
[ "Functional Similarity", "Representation Learning", "Model stitching" ]
In deep learning, functional similarity evaluation quantifies the extent to which independently trained models learn similar input-output relationships. A related concept, representation compatibility, is investigated via model stitching, where an affine transformation aligns two models to solve a task. However, recent studies highlight a critical limitation: models trained on different information cues can still produce compatible representations, making them appear functionally similar \cite{smithfunctional}. To address this, we pose two requirements for similarity under model stitching, probing both forward and backward compatibility. To realize this, we introduce invariance-aware Functional Latent Alignment (I-FuLA), a novel model stitching setting. Experiments across convolutional and transformer architectures demonstrate that invariance-aware stitching settings provide a more meaningful measure of functional similarity, with the combination of invariance-aware stitching and FuLA (i.e., I-FuLA) emerging as the optimal setting for convolution-based models.
Invariance-aware functional latent alignment can make for a reliable functional similarity metric.
unsupervised, self-supervised, semi-supervised, and supervised representation learning
https://openreview.net/pdf?id=hJvcbkf2nO
2025-09-18T22:52:43
4
[ { "id": "SKZ46o30wd", "forum": "hJvcbkf2nO", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission12340/Reviewer_hyyX", "reviewer_name": "Reviewer_hyyX", "rating": 2, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "This ...
oijKOpfSmX
https://openreview.net/forum?id=oijKOpfSmX
KeyVID: Keyframe-Aware Video Diffusion for Audio-Synchronized Visual Animation
5.5
3.5
[ 6, 4, 6, 6 ]
[ 3, 4, 4, 3 ]
4
[ "Audio to Video Generation", "Keyframe Generation", "Video Generation" ]
Generating video from various conditions, such as text, image, and audio, enables precise spatial and temporal control, leading to high-quality generation results. Most existing audio-to-visual animation models rely on uniformly sampled frames from video clips. Such a uniform sampling strategy often fails to capture key audio-visual moments in videos with dramatic motions, causing unsmooth motion transitions and audio-visual misalignment. To address these limitations, we introduce KeyVID, a keyframe-aware audio-to-visual animation framework that adaptively prioritizes the generation of keyframes in audio signals to improve the generation quality. Guided by the input audio signals, KeyVID first localizes and generates the corresponding visual keyframes that contain highly dynamic motions. The remaining frames are then synthesized using a motion interpolation module, effectively reconstructing the full video sequence. This design enables the generation of high frame-rate videos that faithfully align with audio dynamics, while avoiding the cost of directly training with all frames at a high frame rate. Through extensive experiments, we demonstrate that KeyVID significantly improves audio-video synchronization and video quality across multiple datasets, particularly for highly dynamic motions.
generative models
https://openreview.net/pdf?id=oijKOpfSmX
2025-09-11T09:43:22
4
[ { "id": "ITEL8ceibs", "forum": "oijKOpfSmX", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission3854/Reviewer_aTg2", "reviewer_name": "Reviewer_aTg2", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "This p...
8HH9dBOxwu
https://openreview.net/forum?id=8HH9dBOxwu
Unified Biomolecular Trajectory Generation via Pretrained Variational Bridge
6
4
[ 4, 8, 4, 8 ]
[ 4, 4, 4, 4 ]
4
[ "deep generative model", "molecular dynamics", "trajectory generation", "augmented bridge matching", "adjoint matching" ]
Molecular Dynamics (MD) simulations provide a fundamental tool for characterizing molecular behavior at full atomic resolution, but their applicability is severely constrained by computational inefficiency. To address this, a surge of deep generative models has recently emerged to learn dynamics at coarsened timesteps for efficient trajectory generation. Nevertheless, most of these methods suffer from two main issues: (i) Non-pretrained models are limited to single-domain simulation; (ii) Pretrained approaches, while tailored for cross-domain scenarios, fail to leverage the structural information learned during pretraining in the generative process due to misaligned training objectives. Here, we propose the Pretrained Variational Bridge (PVB), which first maps the initial state into a noised latent space and then projects it to stage-specific target states using a decoder based on augmented bridge matching. This unifies training for both single-structure and paired trajectory data, ensuring the consistent utilization of extensive cross-domain structural knowledge across stages. Moreover, we incorporate RL optimization for protein-ligand complexes using adjoint matching, which enables the model to rapidly evolve toward the holo state within short simulations, showcasing the potential for efficient post-optimization of docking poses. Experiments on proteins and protein-ligand complexes demonstrate that PVB accurately reproduces thermodynamic and kinetic observables measured in MD simulations, while achieving remarkable generative stability compared with baselines.
applications to physical sciences (physics, chemistry, biology, etc.)
https://openreview.net/pdf?id=8HH9dBOxwu
2025-09-18T13:32:43
4
[ { "id": "rqowkUu439", "forum": "8HH9dBOxwu", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission10490/Reviewer_i298", "reviewer_name": "Reviewer_i298", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 2, "summary": "The p...
neTgHJlQch
https://openreview.net/forum?id=neTgHJlQch
Mind the gap: A method for evaluating and comparing regional knowledge in LLMs
2.666667
3
[ 2, 4, 2 ]
[ 3, 3, 3 ]
3
[ "benchmark", "nlp", "LLMs", "evaluation", "entity linking", "knowledge graph", "cultural entities" ]
Large Language Models (LLMs) achieve strong results on general knowledge benchmarks, yet their coverage of region-specific entities—particularly from Latin America—remains limited. To address this gap, we propose CHOCLO, an entity-centric methodology for evaluating LLM knowledge of culturally relevant entities in Latin America. The methodology extracts structured facts from domain-specific resources and organizes them into knowledge graphs spanning nine categories, resulting in more than 44,000 entities and 130,000 questions. Evaluation is carried out through two complementary strategies. The first computes factual scores using token overlap, embedding similarity, LLM-as-a-judge, and multiple-choice accuracy. The second trains probing models that predict these scores directly from LLM embeddings, enabling generation-free evaluation. Results consistently show a regional disparity: GPT-5 and GPT-3.5 score markedly lower on Latin American entities compared to the U.S. and Europe, while models such as Mistral, DeepSeek, and QWEN underperform across all regions. Category-level analysis further reveals that fauna, flora, and traditions are comparatively better represented, whereas public figures and objects show the largest deficits. CHOCLO thus exposes systematic disparities in how LLMs encode Latin American knowledge and provides a step toward culturally inclusive benchmarks that support fairer global evaluation.
This work introduces a benchmark and evaluation framework to measure how well LLMs understand Latin American entities using knowledge graphs and probing methods, revealing consistent performance gaps compared to other regions.
datasets and benchmarks
https://openreview.net/pdf?id=neTgHJlQch
2025-09-05T22:34:14
3
[ { "id": "Ve1oPYtQlt", "forum": "neTgHJlQch", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission2426/Reviewer_1sck", "reviewer_name": "Reviewer_1sck", "rating": 2, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "The pa...
GiItKTlJIB
https://openreview.net/forum?id=GiItKTlJIB
How Much Chain-of-Thought Do LLMs Really Need for Physics?
3
3.5
[ 4, 2, 4, 2 ]
[ 2, 4, 4, 4 ]
4
[ "chain-of-thought", "reasoning", "evaluation" ]
Reasoning-focused language models are increasingly applied to AI for science, but evaluation has not kept pace: benchmarks largely measure end-task accuracy while ignoring whether models genuinely depend on their own reasoning traces. This gap is critical in domains like physics problem solving, where equations, units, and structured terminology make reasoning reliability both essential and testable. We introduce a systematic deletion framework that intercepts chain-of-thought (CoT) mid-generation, removes tokens, and measures downstream effects. Applied to three open-source models—Magistral, Phi-4, and Qwen-A3B—across multiple physics benchmarks, our method shows that models remain accurate under heavy deletions (40–60\%) by “cramming” reconstructed steps into final answers. Overlap analyses reveal that deleted equations and facts often reappear, but inconsistently across strategies, exposing shallow and opportunistic reliance on CoT. These findings underscore that current accuracy-based evaluations are insufficient for scientific domains, and point toward the need for methods that assess reasoning faithfulness as a core requirement for advancing AI for science.
LLMs can solve physics problems by patching gaps in heavily deleted CoT reasoning traces, but without true faithfulness.
applications to physical sciences (physics, chemistry, biology, etc.)
https://openreview.net/pdf?id=GiItKTlJIB
2025-09-13T01:29:38
4
[ { "id": "1ZfO851Ikv", "forum": "GiItKTlJIB", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission4526/Reviewer_iLnu", "reviewer_name": "Reviewer_iLnu", "rating": 4, "confidence": 2, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "The pa...
FKEHiHU4bN
https://openreview.net/forum?id=FKEHiHU4bN
Revisiting Matrix Sketching in Linear Bandits: Achieving Sublinear Regret via Dyadic Block Sketching
6.5
2.75
[ 8, 6, 6, 6 ]
[ 2, 4, 2, 3 ]
4
[ "Linear Bandits", "Matrix Sketching", "Multi-scale Sketching" ]
Linear bandits have become a cornerstone of online learning and sequential decision-making, providing solid theoretical foundations for balancing exploration and exploitation. Within this domain, matrix sketching serves as a critical component for achieving computational efficiency, especially when confronting high-dimensional problem instances. The sketch-based approaches reduce per-round complexity from $\Omega(d^2)$ to $O(d)$, where $d$ is the dimension. However, this computational efficiency comes with a fundamental pitfall: when the streaming matrix exhibits heavy spectral tails, such algorithms can incur vacuous *linear regret*. In this paper, we revisit the regret bounds and algorithmic design for sketch-based linear bandits. Our analysis reveals that inappropriate sketch sizes can lead to substantial spectral error, severely undermining regret guarantees. To overcome this issue, we propose Dyadic Block Sketching, a novel multi-scale matrix sketching approach that dynamically adjusts the sketch size during the learning process. We apply this technique to linear bandits and demonstrate that the new algorithm achieves *sublinear regret* bounds without requiring prior knowledge of the streaming matrix properties. It establishes a general framework for efficient sketch-based linear bandits, which can be integrated with any matrix sketching method that provides covariance guarantees. Comprehensive experimental evaluation demonstrates the superior utility-efficiency trade-off achieved by our approach.
We propose a framework for efficient sketch-based linear bandits to address the issue of linear regret that may arise with matrix sketching.
reinforcement learning
https://openreview.net/pdf?id=FKEHiHU4bN
2025-09-19T09:30:09
4
[ { "id": "k9YOMWQZaP", "forum": "FKEHiHU4bN", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission14906/Reviewer_NX8x", "reviewer_name": "Reviewer_NX8x", "rating": 8, "confidence": 2, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
LxkfjD81xB
https://openreview.net/forum?id=LxkfjD81xB
Mending synthetic data with MAPS: Model Agnostic Post-hoc Synthetic Data Refinement Framework
3.5
3.75
[ 2, 4, 6, 2 ]
[ 4, 4, 3, 4 ]
4
[ "Generative modeling", "Synthetic data", "Post-hoc refinement", "Privacy-Fidelity tradeoff" ]
Generating high-quality synthetic data with privacy protections remains a challenging ad-hoc process, requiring careful model design and training often tailored to the characteristics of a targeted dataset. We present MAPS, a model-agnostic post-hoc framework that improves synthetic data quality for any pre-trained generative model while ensuring sample-level privacy standards are met. Our two-stage approach first removes synthetic samples that violate privacy by being too close to real data, achieving 0-identifiability guarantees. Second, we employ importance weighting via a binary classifier to resample the remaining synthetic data according to estimated density ratios. We evaluate MAPS across two healthcare datasets (TCGA-metadata, GOSSIS-1-eICU-cardiovascular) and four generative models (TVAE, CTGAN, TabDiffusion, DGD), demonstrating significant improvements in fidelity and utility while maintaining privacy. Notably, MAPS achieves substantial improvements in fidelity metrics, with 40 out of 48 statistical tests demonstrating significant improvements in marginal distributional measures and notable enhancements in correlation structure preservation and joint distribution similarity. For example, Joint Jensen-Shannon Distance reduced from ranges of 0.7888-0.8278 to 0.5434-0.5961 on TCGA-metadata and 0.6192-0.7902 to 0.3633-0.4503 on GOSSIS-1-eICU-cardiovascular. Utility improvements are equally impressive, with classification F1 scores improving from ranges of 0.0866-0.2400 to 0.3043-0.3848 on TCGA-metadata and 0.1287-0.2085 to 0.2104-0.2497 on GOSSIS-1-eICU-cardiovascular across different model-dataset combinations. Additionally, uncertainty quantification analysis via split conformal prediction demonstrates that MAPS considerably improves calibration quality, reducing average prediction set sizes by 55-77\% while maintaining target coverage on TCGA-metadata. The code of this project is available at https://anonymous.4open.science/r/MAPS-EBF8.
MAPS refines synthetic data via identifiability filtering and importance-weighted resampling, improving fidelity and utility while ensuring 0-identifiability guarantees.
generative models
https://openreview.net/pdf?id=LxkfjD81xB
2025-09-20T00:11:46
4
[ { "id": "IFtuqR5R3D", "forum": "LxkfjD81xB", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission19681/Reviewer_KA2J", "reviewer_name": "Reviewer_KA2J", "rating": 2, "confidence": 4, "soundness": 1, "contribution": 2, "presentation": 2, "summary": "The p...
dYaIotpCiK
https://openreview.net/forum?id=dYaIotpCiK
Self-Guided Plan Extraction for Instruction-Following Tasks with Goal-Conditional Reinforcement Learning
4
3.5
[ 2, 6, 4, 4 ]
[ 4, 4, 3, 3 ]
4
[ "Instruction Following", "Reinforcement Learning", "Multimodal RL" ]
We introduce a framework for instruction-following tasks. Unlike prior methods that rely on predefined subtasks, our approach enables a language model to generate and refine high-level plans through a self-learning mechanism, reducing the need for manual dataset annotation. The method involves iterative co-training: an RL agent is trained to follow the generated plans, while the language model adapts and modifies these plans based on RL feedback and preferences. This creates a feedback loop where both the agent and the planner improve jointly. We validate the framework in environments with rich dynamics and stochasticity. Results show that our agents adhere to instructions more strictly than baseline methods, while also demonstrating strong generalization to previously unseen instructions.
A self-improving framework couples language-model plan generation with reinforcement learning feedback to achieve robust, generalizable instruction following without predefined subtasks.
applications to robotics, autonomy, planning
https://openreview.net/pdf?id=dYaIotpCiK
2025-09-20T17:50:58
4
[ { "id": "eaK9zHutdA", "forum": "dYaIotpCiK", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission24913/Reviewer_6zoq", "reviewer_name": "Reviewer_6zoq", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "The p...
Fn2rSOnpNf
https://openreview.net/forum?id=Fn2rSOnpNf
SlotGCG: Exploiting the Positional Vulnerability in LLMs for Jailbreak Attacks
5
3.5
[ 4, 4, 6, 6 ]
[ 4, 3, 4, 3 ]
4
[ "LLM", "Jailbreak", "Adversarial Attack", "Safe AI" ]
As large language models (LLMs) are widely deployed, identifying their vulnerability through jailbreak attacks becomes increasingly critical. Optimization-based attacks like Greedy Coordinate Gradient (GCG) have focused on inserting adversarial tokens at the end of prompts. However, GCG restricts adversarial tokens to a fixed insertion point (typically the prompt suffix), leaving the effect of inserting tokens at other positions unexplored. In this paper, we empirically investigate slots, i.e., candidate positions within a prompt where tokens can be inserted. We find that vulnerability to jailbreaking is highly related to the selection of the slots. Based on these findings, we introduce the Vulnerable Slot Score (VSS) to quantify the positional vulnerability to jailbreaking. We then propose SlotGCG, which evaluates all slots with VSS, selects the most vulnerable slots for insertion, and runs a targeted optimization attack at those slots. Our approach provides a position-search mechanism that is attack-agnostic and can be plugged into any optimization-based attack, adding only 200ms of preprocessing time. Experiments across multiple models demonstrate that SlotGCG significantly outperforms existing methods. Specifically, it achieves 14% higher Attack Success Rates (ASR) over GCG-based attacks, converges faster, and shows superior robustness against defense methods with 42% higher ASR than baseline approaches.
alignment, fairness, safety, privacy, and societal considerations
https://openreview.net/pdf?id=Fn2rSOnpNf
2025-09-18T14:58:38
4
[ { "id": "Ge5elwQBYJ", "forum": "Fn2rSOnpNf", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission10655/Reviewer_ELoX", "reviewer_name": "Reviewer_ELoX", "rating": 4, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "This ...
0TkMmzQdwd
https://openreview.net/forum?id=0TkMmzQdwd
ReTimeCausal: A Consistent EM Framework for Causal Discovery in Irregular Time Series
5.333333
3.666667
[ 6, 8, 2 ]
[ 3, 4, 4 ]
3
[ "Causal Discovery", "Expectation-Maximization (EM)", "Additive Noise Model (ANM)", "Irregular Sampling", "Time Series" ]
This paper studies causal discovery in irregularly sampled time series—a pivotal challenge in high-stakes domains like finance, healthcare, and climate science, where missing data and inconsistent sampling frequencies distort causal mechanisms. The core challenge arises from the interdependence between missing data imputation and causal structure recovery: an error in either component can cascade into the other, ultimately distorting the inferred causal graph. Existing methods either impute first and then discover, or jointly optimize both via neural representation learning, but lack explicit mechanisms to ensure mutual consistency of imputation and structure learning. We address this challenge with ReTimeCausal, an EM-based framework that alternates between imputation and structure learning, promoting structural consistency throughout the optimization process. Our framework emphasizes theoretical consistency guarantees for structure recovery, extending classical results to settings with irregular sampling and high missingness. Through kernelized sparse regression and structural constraints, ReTimeCausal iteratively refines missing values (E-step) and causal graphs (M-step), resolving cross-frequency dependencies and missing data issues. Extensive experiments on synthetic and real-world datasets demonstrate that ReTimeCausal outperforms existing state-of-the-art methods under challenging irregular sampling and missing data conditions.
ReTimeCausal is a robust method for causal discovery in multivariate time series with missing and irregular data, employing an EM-style framework grounded in Additive Noise Models to ensure accurate structure recovery across varying conditions.
causal reasoning
https://openreview.net/pdf?id=0TkMmzQdwd
2025-09-12T18:46:58
3
[ { "id": "P8qLdi8QKU", "forum": "0TkMmzQdwd", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission4401/Reviewer_eYxy", "reviewer_name": "Reviewer_eYxy", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This p...
Ozh7G5h7Ce
https://openreview.net/forum?id=Ozh7G5h7Ce
Shared Modular Recurrence for Universal Morphology Control
4
3.333333
[ 2, 6, 4 ]
[ 4, 3, 3 ]
3
[ "Deep Reinforcement Learning", "Robotic Control", "Generalization" ]
A universal controller for any robot morphology would greatly improve computational and data efficiency. By utilizing contextual information about the properties of individual robots and exploiting their modular structure in the architecture of deep reinforcement learning agents, steps have been made towards multi-robot control. When the robots have highly dissimilar morphologies, this becomes a challenging problem, especially when the agent must generalize to new, unseen robots. In this paper, we hypothesize that the relevant contextual information can be partially observable, but that it can be inferred through interactions for better multi-robot control and generalization to contexts that are not seen during training. To this end, we implement a modular recurrent transformer-based architecture and evaluate its (generalization) performance on a large set of MuJoCo robots. The results show a substantial improved performance on robots with unseen dynamics, kinematics, and topologies, in four different environments.
Introduction of modular recurrence in the architecture of deep reinforcement learning agents for improved (zero-shot generalization) performance in robotic control.
reinforcement learning
https://openreview.net/pdf?id=Ozh7G5h7Ce
2025-09-19T22:33:18
3
[ { "id": "kKtm2Gnlq3", "forum": "Ozh7G5h7Ce", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission18962/Reviewer_yQpS", "reviewer_name": "Reviewer_yQpS", "rating": 2, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "This ...
9aaiQbIUND
https://openreview.net/forum?id=9aaiQbIUND
Leveraging Generative Trajectory Mismatch for Cross-Domain Policy Adaptation
4.5
3.75
[ 6, 2, 4, 6 ]
[ 4, 4, 3, 4 ]
4
[ "Reinforcement Learning", "Domain Adaptation", "Online Dynamics Adaptation" ]
Transferring policies across domains poses a vital challenge in reinforcement learning, due to the dynamics mismatch between the source and target domains. In this paper, we consider the setting of online dynamics adaptation, where policies are trained in the source domain with sufficient data, while only limited interactions with the target domain are allowed. There are a few existing works that address the dynamics mismatch by employing domain classifiers, value-guided data filtering, or representation learning. Instead, we study the domain adaptation problem from a generative modeling perspective. Specifically, we introduce DADiff, a diffusion-based framework that leverages the discrepancy between source and target domain generative trajectories in the generation process of the next state to estimate the dynamics mismatch. Both reward modification and data selection variants are developed to adapt the policy to the target domain. We also provide a theoretical analysis to show that the performance difference of a given policy between the two domains is bounded by the generative trajectory deviation. More discussions on the applicability of the variants and the connection between our theoretical analysis and the prior work are further provided. We conduct extensive experiments in environments with kinematic and morphology shifts to validate the effectiveness of our method. The results demonstrate that our method provides superior performance compared to existing approaches, effectively addressing the dynamics mismatch. We provide the code of our method at https://anonymous.4open.science/r/DADiff-release-83D5.
We introduce DADiff, a dynamics adaptation method designed from the perspective of diffusion models, and establish a provable performance bound.
reinforcement learning
https://openreview.net/pdf?id=9aaiQbIUND
2025-09-17T21:43:10
4
[ { "id": "UklN0o5tvR", "forum": "9aaiQbIUND", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission9260/Reviewer_G9sM", "reviewer_name": "Reviewer_G9sM", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This p...
3cM4CCoFpe
https://openreview.net/forum?id=3cM4CCoFpe
MSAVQ: Multi-dimensional Sensitivity-Aware Vector Quantization for Ultra-Low-Bit Vision-Language Models
4
3.666667
[ 4, 2, 6 ]
[ 5, 3, 3 ]
3
[ "vector quantization", "llm", "vlm" ]
Vision-Language Models (VLMs) have achieved remarkable progress, but their massive scale severely limits deployment in resource-constrained settings. Among existing compression strategies, vector quantization (VQ) stands out for its strong representational power under ultra-low bitwidths. VQ achieves this by constructing a compact codebook, where weight vectors are mapped to their closest discrete codewords, thereby reducing storage and memory bandwidth requirements while retaining expressive capacity. However, applying VQ directly to VLMs faces two fundamental challenges: (1) Modality-induced weight heterogeneity. In VLMs, image and text inputs induce divergent weight distributions, which a unified codebook fails to capture. (2) Error compensation mismatch from ignoring first-order gradients. In VLMs, first-order gradients significantly contribute to quantization error, yet conventional VQ methods neglect them, causing biased compensation and accuracy loss. To this end, we propose \textbf{MSAVQ} (Multi-dimensional Sensitivity-Aware Vector Quantization), a framework that addresses these issues with two key components: (1) Sensitivity-driven structured mixed-precision quantization, a mixed-precision scheme that allocates bit-widths based on channel sensitivity, combining global and local saliency metrics for fine-grained and interpretable resource distribution. (2) Gradient-aware error compensation, a compensation method that explicitly incorporates first-order gradients to address their non-negligible role in VLM quantization errors, with efficient computation enabled by Kronecker and Block-LDL decompositions. We evaluate MSAVQ on representative VLMs, including LLaVA-onevision, InternVL2, and Qwen2-VL. In 2-bit settings, it consistently surpasses state-of-the-art PTQ methods, achieving up to \textbf{+4.9} higher accuracy (71.4\% vs. 67.0\% on InternVL2-26B).
These results demonstrate that MSAVQ provides a simple and effective solution for ultra-low-bit quantization of multimodal foundation models, enabling practical deployment under strict resource budgets.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=3cM4CCoFpe
2025-09-02T20:40:42
3
[ { "id": "YrxfGohs1v", "forum": "3cM4CCoFpe", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission751/Reviewer_ZaY5", "reviewer_name": "Reviewer_ZaY5", "rating": 4, "confidence": 5, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "Authors...
7EYYMXYDkL
https://openreview.net/forum?id=7EYYMXYDkL
PLaID++: A Preference Aligned Language Model for Targeted Inorganic Materials Design
4
3.666667
[ 4, 4, 4 ]
[ 4, 3, 4 ]
3
[ "Generative models", "Large Language Models", "Materials Generation", "Symmetry", "Space Group" ]
Reinforcement Learning from Verifiable Rewards (RLVR) has emerged as a promising approach to improve correctness in LLMs; however, in many scientific problems, the objective is not necessarily to produce *the* correct answer, but instead to produce a diverse array of candidates which satisfy a set of constraints. We study this challenge in the context of materials generation. To this end, we introduce PLaID++, an LLM post-trained for stable and property-guided crystal generation. We find that performance hinges on our crystallographic representation and reward formulation. First, we introduce a compact, symmetry-informed Wyckoff text representation which improves computational efficiency and encourages generalization from physical priors. Second, we demonstrate that temperature scaling acts as an entropy regularizer which counteracts mode collapse and encourages exploration. By encoding symmetry constraints directly into text and guiding model outputs towards desirable chemical space, PLaID++ generates structures that are thermodynamically stable, unique, and novel at a $\sim$ 50\% greater rate than prior methods and conditionally generates structures with desired space group properties. Our work demonstrates the potential of adapting post-training techniques from natural language processing to materials design, paving the way for targeted and efficient discovery of novel materials.
We demonstrate the generalizability of a novel symmetry encoding scheme and iterative preference alignment for crystal generation
applications to physical sciences (physics, chemistry, biology, etc.)
https://openreview.net/pdf?id=7EYYMXYDkL
2025-09-20T11:39:23
3
[ { "id": "gn1YQJQmv2", "forum": "7EYYMXYDkL", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission23098/Reviewer_W4wS", "reviewer_name": "Reviewer_W4wS", "rating": 4, "confidence": 4, "soundness": 4, "contribution": 3, "presentation": 4, "summary": "The p...
ndrUH7IF3L
https://openreview.net/forum?id=ndrUH7IF3L
Optimizing Mixture of Block Attention
4.666667
3
[ 4, 4, 6 ]
[ 3, 4, 2 ]
3
[ "LLM", "Efficiency", "Attention" ]
Mixture of Block Attention (MoBA) is a promising building block for efficiently processing long contexts in LLMs by enabling queries to sparsely attend to a small subset of key-value blocks, drastically reducing computational cost. However, the design principles governing MoBA's performance are poorly understood, and it lacks an efficient GPU implementation, hindering its practical adoption. In this paper, we first develop a statistical model to analyze MoBA's underlying mechanics. Our model reveals that performance critically depends on the router's ability to accurately distinguish relevant from irrelevant blocks based on query-key affinities. We derive a signal-to-noise ratio that formally connects architectural parameters to this retrieval accuracy. Guided by our analysis, we identify three key pathways for improvement: using smaller block sizes, increasing head dimensions, and applying a short convolution on keys to cluster relevant signals, which enhances routing accuracy. While theoretically better, small block sizes are inefficient on GPUs. To bridge this gap, we introduce FlashMoBA, a hardware-aware CUDA kernel that enables efficient MoBA execution even with the small block sizes our theory recommends. We validate our insights by training LLMs from scratch, showing that our improved MoBA models match the performance of dense attention baselines. FlashMoBA achieves up to 9× speedup over FlashAttention-2 for small blocks, making our theoretically-grounded improvements practical. Code will be released upon publication.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=ndrUH7IF3L
2025-09-18T10:42:36
3
[ { "id": "okk1t7B4NS", "forum": "ndrUH7IF3L", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission10187/Reviewer_yXa6", "reviewer_name": "Reviewer_yXa6", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 3, "presentation": 2, "summary": "1. Th...
IrGJvFKuX2
https://openreview.net/forum?id=IrGJvFKuX2
Multi-Agent Game Generation and Evaluation via Audio-Visual Recordings
4
2.666667
[ 4, 2, 6 ]
[ 3, 2, 3 ]
3
[ "video-game", "llms", "multi-agent", "agent", "animations" ]
Generating novel video games is a challenging problem. Large Language Models (LLMs) can generate games and animations, but lack automated evaluation metrics and struggle with complex content. To tackle these issues, we built a new metric and multi-agent system. First, we propose AVR-Eval, a metric for multimedia content where a model compares the Audio-Visual Recordings (AVRs) of two contents and determines which one is better. We show that AVR-Eval reliably distinguishes good content from broken or mismatched content. Second, we built AVR-Agent, a multi-agent system to generate JavaScript code from a bank of multimedia assets (audio, images, 3D models) using AVR feedback. We show that AVR-Agent achieves higher AVR-Eval scores than one-shot prompting. However, while humans benefit from high-quality assets and audio-visual feedback, these do not significantly increase AVR-Eval scores for LLMs. This reveals a gap between human and AI content creation.
New metric for multimedia evaluation and multi-agent framework for video game generation
generative models
https://openreview.net/pdf?id=IrGJvFKuX2
2025-09-19T23:38:09
3
[ { "id": "o6kBJrUadO", "forum": "IrGJvFKuX2", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission19440/Reviewer_uKvp", "reviewer_name": "Reviewer_uKvp", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "This ...
bgdTK6cniJ
https://openreview.net/forum?id=bgdTK6cniJ
CaNDiCE: Causal Discovery of Nonlinear Dynamics Through Counterfactual Explanations
4
4
[ 6, 2, 4, 4 ]
[ 4, 5, 4, 3 ]
4
[ "governing equations", "nonlinear dynamics", "causality", "counterfactuals" ]
The problem of discovering governing equations from noisy observational data has broad applications in scientific discovery, control, and prediction of complex systems. However, existing approaches that infer dynamics directly from data—whether symbolic regression (e.g., tree-based methods) or sparse identification with pre-defined basis functions—often suffer from poor generalizability, sensitivity to noise, and the inclusion of spurious terms. In this work, we present a causality-preserving counterfactual explanations framework for discovering governing equations in dynamical systems. Counterfactuals in this setting are hypothetical governing equations obtained by minimally perturbing basis function coefficients to induce out-of-distribution trajectories. By penalizing counterfactuals that deviate from the observed topological causality, a measure of directed effective influence between state variables, the resulting trajectories remain consistent with the causal structure of the true dynamics inferred from observed data. As such, resulting counterfactuals are obtained only by perturbing causal terms in the governing equation, while spurious terms are naturally suppressed since their perturbations violate causal consistency. We evaluate our approach across a range of dynamical system benchmarks and show that it outperforms state-of-the-art methods, including symbolic regression, library-based sparse regression, and deep learning models, in identifying robust and parsimonious governing equations.
learning on time series and dynamical systems
https://openreview.net/pdf?id=bgdTK6cniJ
2025-09-20T14:23:10
4
[ { "id": "2ueZVyc81s", "forum": "bgdTK6cniJ", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission23857/Reviewer_CSxc", "reviewer_name": "Reviewer_CSxc", "rating": 6, "confidence": 4, "soundness": 4, "contribution": 3, "presentation": 4, "summary": "This ...
tAM9SGoEmD
https://openreview.net/forum?id=tAM9SGoEmD
SafeMVDrive: Multi-view Safety-Critical Driving Video Generation in the Real World Domain
6
3.75
[ 4, 6, 8, 6 ]
[ 3, 4, 4, 4 ]
4
[ "Autonomous driving testing", "safety-critical scenario", "video generation", "safety" ]
Safety-critical scenarios are essential for evaluating autonomous driving (AD) systems, yet they are rare in practice. Existing generators produce trajectories, simulations, or single-view videos—but they don’t meet what modern AD systems actually consume: realistic multi-view video. We present SafeMVDrive, the first framework for generating multi-view safety-critical driving videos in the real-world domain. SafeMVDrive couples a safety-critical trajectory engine with a diffusion-based multi-view video generator through three design choices. First, we pick the right adversary: a GRPO-fine-tuned vision-language model (VLM) that understands multi-camera context and selects vehicles most likely to induce hazards. Second, we generate the right motion: a two-stage trajectory process that (i) produces collisions, then (ii) transforms them into natural evasion trajectories—preserving risk while staying within what current video generators can faithfully render. Third, we synthesize the right data: a diffusion model that turns these trajectories into multi-view videos suitable for end-to-end planners. On a strong end-to-end planner, our videos substantially increase collision rate, exposing brittle behavior and providing targeted stress tests for planning modules. Our code and video examples are available at: https://iclr-1.github.io/SMD/.
applications to computer vision, audio, language, and other modalities
https://openreview.net/pdf?id=tAM9SGoEmD
2025-09-18T23:01:12
4
[ { "id": "zP8cbBqs24", "forum": "tAM9SGoEmD", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission12421/Reviewer_izyK", "reviewer_name": "Reviewer_izyK", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 3, "presentation": 3, "summary": "This ...
TxJfywuHSA
https://openreview.net/forum?id=TxJfywuHSA
Fourier Features Let Agents Learn High Precision Policies with Imitation Learning
4
4
[ 2, 4, 6 ]
[ 4, 4, 4 ]
3
[ "imitation learning", "robotics", "point clouds", "point maps" ]
Various 3D modalities have been proposed for high-precision imitation learning tasks to compensate for the shortcomings of RGB-only policies. Modalities that explicitly represent positions in Cartesian space have an inherent advantage over purely image-based ones, since they allow policies to reason about geometry. Point clouds are a common way to represent geometric information, and have several benefits such as permutation invariance and flexible observation size. Despite their effectiveness, a number of hybrid 2D/3D architectures have been proposed in the literature, indicating that their performance can often be task-dependent. We hypothesize that this may be due to the spectral bias of neural networks towards learning low-frequency functions, which especially affects models conditioned on slow-moving Cartesian features. Building on prior work that uses a parametric projection from Cartesian space into high-dimensional Fourier space to overcome the innate low-pass filtering characteristic of neural networks, we apply Fourier features to several representative point cloud encoder architectures. We validate this approach on challenging manipulation tasks from the RoboCasa and ManiSkill3 benchmarks, and find that adding Fourier feature projections provides benefits across diverse encoder architectures and tasks, with meaningful improvements seen in the vast majority of tasks. We show that Fourier features are a general-purpose tool for point cloud-based imitation learning, which consistently improves performance by enabling policies to leverage geometric details more effectively than models conditioned on Cartesian features.
Fourier feature projections improve all 3D modalities for diffusion imitation learning of high-precision tasks, but are especially beneficial for point cloud policies.
applications to robotics, autonomy, planning
https://openreview.net/pdf?id=TxJfywuHSA
2025-09-19T02:38:14
3
[ { "id": "dOcckRymBp", "forum": "TxJfywuHSA", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission13670/Reviewer_9Bk2", "reviewer_name": "Reviewer_9Bk2", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 1, "summary": "This ...
S8bmkHXqgT
https://openreview.net/forum?id=S8bmkHXqgT
Interpretable Preference Elicitation: Aligning User Intent with Controllable Long-tailed Learning
2.666667
3
[ 4, 0, 4 ]
[ 3, 2, 4 ]
3
[ "Long-tail learning" ]
Long-tailed recognition remains a significant challenge, where models often struggle with tail class performance and adaptability to diverse user preferences. While recent controllable paradigms leveraging hypernetworks allow numerical specification of head-tail trade-offs, defining these multi-dimensional preference vectors can be unintuitive for users. This paper introduces a novel framework that bridges this gap by enabling users to articulate their preferences through natural language. We propose a two-stage approach: first, optimal numerical preference vectors are identified for canonical distribution scenarios, and a rich corpus of corresponding textual descriptions is generated. Subsequently, a lightweight neural network learns to map sentence embeddings of these textual descriptions to the underlying 3D preference vectors controlling the expert ensemble. Our method significantly enhances the usability and interpretability of controllable long-tailed learning systems without compromising, and even slightly improving, their performance on benchmark datasets. This work facilitates more accessible and practical adaptation of long-tailed models to specific real-world requirements.
unsupervised, self-supervised, semi-supervised, and supervised representation learning
https://openreview.net/pdf?id=S8bmkHXqgT
2025-09-20T17:56:10
3
[ { "id": "UfgO5MyTMG", "forum": "S8bmkHXqgT", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission24934/Reviewer_DK75", "reviewer_name": "Reviewer_DK75", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 4, "summary": "This ...
s9cqFuiD2v
https://openreview.net/forum?id=s9cqFuiD2v
GraDA: Gradient-Guided Knowledge Distillation for Domain Adaptation
4.5
3.75
[ 6, 6, 4, 2 ]
[ 3, 3, 4, 5 ]
4
[ "Unsupervised learning", "Semi-supervised learning", "Domain adaptation", "Knowledge distillation", "Graph learning" ]
In this paper, we explore $\textbf{how to enhance student network performance in knowledge distillation (KD) for domain adaptation (DA)}$. We identify two key factors impacting student performance under domain shift: $\textbf{(1) the capability of the teacher network}$ and $\textbf{(2) the effectiveness of the knowledge distillation strategy}$. For the first factor, we integrate a Vision Transformer (ViT) as the feature extractor and our proposed Category-level Aggregation (CA) module as the classifier to construct the ViT+CA teacher network. This architecture leverages ViT's ability to capture detailed representations of individual images. Additionally, the CA module employs the message-passing mechanism of a graph convolutional network to promote intra-class relations and mitigate domain shift by grouping samples with similar class information. For the second factor, we leverage pseudo labels generated by the ViT+CA teacher to guide the gradient updates of the student network's parameters, aligning the student's behavior with that of the teacher. To optimize for efficient inference and reduced computational cost, we use a convolutional neural network (CNN) for feature extraction and a multilayer perceptron (MLP) as the classifier to build the CNN+MLP student network. Extensive experiments on various DA datasets demonstrate that our method significantly surpasses current state-of-the-art approaches. Our code will be available soon.
transfer learning, meta learning, and lifelong learning
https://openreview.net/pdf?id=s9cqFuiD2v
2025-09-17T14:35:43
4
[ { "id": "pJnj2Up3oW", "forum": "s9cqFuiD2v", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission8559/Reviewer_GLAa", "reviewer_name": "Reviewer_GLAa", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "This p...
tDdeW2puHW
https://openreview.net/forum?id=tDdeW2puHW
From Real to Synthetic: A Fine-grained Dataset and High-fidelity Biomechanical Model for Animal Behavior Understanding
4
4
[ 8, 2, 6, 2, 2 ]
[ 4, 4, 5, 4, 3 ]
5
[ "Animal dataset", "Biomechanical model", "Synthetic data generation", "Behavioral uncertainty quantification", "Video understanding" ]
Rat behavior research contributes to the exploration of human disease mechanisms. However, existing datasets are scarce and cover limited behavior types, hindering the analysis and modeling of complex behavior patterns. We constructed ActionRat, a new multi-view rat behavior dataset that, for the first time, captures diverse actions during free exploration and brain-computer interface (BCI) control. It combines real and synthetic sequences with fine-grained keypoint annotations and atomic action sequences, supporting broader behavior analysis tasks. To efficiently generate synthetic data for dataset expansion, we developed OpenRatEngine, a high-fidelity 3D virtual biomechanical model. This model integrates anatomical priors from computed tomography (CT) scans, kinematic constraints, and lifelike appearance, reducing the domain gap between synthetic and real data. Equipped with pose control, OpenRatEngine generates synthetic sequences with accurate 3D keypoint annotations. We evaluated behavioral uncertainty quantification and animal pose estimation tasks on the ActionRat dataset, and demonstrated the outstanding synthetic data generation capability and realism of OpenRatEngine. Extensive experiments across deep learning models confirmed the effectiveness and value of both real and synthetic data.
A Fine-grained Benchmark Dataset and High-fidelity Biomechanical Model for Animal Behavior Understanding
datasets and benchmarks
https://openreview.net/pdf?id=tDdeW2puHW
2025-09-17T10:28:39
5
[ { "id": "FHoGNf4wbM", "forum": "tDdeW2puHW", "review_number": 5, "reviewer_id": "ICLR.cc/2026/Conference/Submission8245/Reviewer_gBuZ", "reviewer_name": "Reviewer_gBuZ", "rating": 8, "confidence": 4, "soundness": 3, "contribution": 4, "presentation": 3, "summary": "The pa...
isBH8kP5AX
https://openreview.net/forum?id=isBH8kP5AX
BMAttn: Block-Aligned Mixed-Precision Attention Quantization for LLM Inference
3.5
3.25
[ 2, 2, 6, 4 ]
[ 3, 3, 4, 3 ]
4
[ "LLM", "Quantization", "Pruning" ]
The proliferation of Large Language Models (LLMs) with extended context windows is severely hampered by the quadratic complexity of the self-attention mechanism. Existing acceleration methods, such as sparse attention and quantization, often employ uniform compression strategies that are misaligned with the non-uniform distribution of information importance within attention maps. This leads to a suboptimal trade-off between computational efficiency and model accuracy. To address this, we introduce Block-based Mixed-precision Attention (BMAttn), a novel framework that enables fine-grained, importance-aware precision while maintaining a hardware-friendly structure. BMAttn partitions each attention head into high-precision, low-precision, and sparse regions. To ensure computational regularity, these regions are block-aligned. To adapt to varying input lengths, their boundaries are dynamically adjusted using a lightweight affine windowing mechanism. We further propose a saliency-weighted calibration method and a layer-adaptive regularizer to automatically determine the optimal parameters, achieving a superior accuracy-efficiency balance. BMAttn achieves a speedup of up to 3.3× without any accuracy degradation, and a 5× speedup with only a 1\% accuracy loss.
other topics in machine learning (i.e., none of the above)
https://openreview.net/pdf?id=isBH8kP5AX
2025-09-04T14:59:06
4
[ { "id": "l0bkVLeGKD", "forum": "isBH8kP5AX", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission1941/Reviewer_9Diq", "reviewer_name": "Reviewer_9Diq", "rating": 2, "confidence": 3, "soundness": 1, "contribution": 3, "presentation": 2, "summary": "The pa...
ZYVhh51UlM
https://openreview.net/forum?id=ZYVhh51UlM
Perturbation Guided Drug Molecule Design via Latent Rectified Flow
2
4
[ 2, 2, 2 ]
[ 4, 3, 5 ]
3
[ "Multi-modal generation", "Perturbation biology", "Molecular generation" ]
Phenotypic drug discovery generates rich multi-modal biological data, yet translating complex cellular responses into molecular design remains a computational bottleneck. Existing generative methods operate on single modalities (transcriptomic or morphological alone) and condition on post-treatment measurements without leveraging paired control-treatment dynamics. We present **Pert2Mol**, the first framework for multi-modal phenotype-to-structure generation that integrates transcriptomic and morphological features from paired control-treatment experiments. Pert2Mol employs separate ResNet and cross-attention encoders for microscopy images and gene expression profiles, with bidirectional cross-attention between control and treatment states to capture perturbation dynamics rather than simple differential measurements. These multi-modal embeddings condition a rectified flow transformer that learns velocity fields along straight-line trajectories from noise to molecular structures, enabling deterministic generation with superior efficiency over diffusion models. We introduce Student-Teacher Self-Representation (SERE) learning where an exponential moving average teacher supervises student representations across network depths, stabilizing training in high-dimensional multi-modal spaces. Unlike previous approaches that require preprocessed differential expression vectors, Pert2Mol learns perturbation effects directly from raw paired experimental data. Experiments on large-scale datasets demonstrate the first successful multi-modal framework for phenotype-driven molecular generation.
applications to physical sciences (physics, chemistry, biology, etc.)
https://openreview.net/pdf?id=ZYVhh51UlM
2025-09-20T01:44:14
3
[ { "id": "jY3sZYpELK", "forum": "ZYVhh51UlM", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission20223/Reviewer_EBao", "reviewer_name": "Reviewer_EBao", "rating": 2, "confidence": 4, "soundness": 3, "contribution": 1, "presentation": 2, "summary": "The p...
owpU8gxnkM
https://openreview.net/forum?id=owpU8gxnkM
ENCOURAGING CRITICAL THINKING FOR MULTIAGENT DEBATE
3.5
3.5
[ 4, 2, 2, 6 ]
[ 3, 4, 3, 4 ]
4
[ "Debate", "Critical Thinking", "Self-reflection" ]
Large language models (LLMs) have demonstrated remarkable performance across a wide range of tasks in recent years. While prior work has explored leveraging LLMs to generate synthetic data for self-improvement, repeated iterations often suffer from diminishing returns due to the reliance on homogeneous reasoning patterns and limited exploration of alternative perspectives. In this paper, we introduce a novel framework that enriches the reasoning process by encouraging critical thinking among multiple agents. Rather than deploying an ensemble of models with identical prompts, we propose a strategy generator that produces customized instructions tailored to each individual LLM. Acting as a critical thinking agent, the generator is iteratively fine-tuned using carefully selected strategies that are both diverse and effective. This approach fosters specialization within each model while promoting diversity across reasoning paths, enabling the system to maintain varied solution trajectories and achieve sustained performance gains through iterative refinement. We demonstrate the effectiveness of our method across a variety of agentic frameworks and complex reasoning tasks.
We propose a framework with optimizable strategies to guide LLM solvers in solving different questions.
unsupervised, self-supervised, semi-supervised, and supervised representation learning
https://openreview.net/pdf?id=owpU8gxnkM
2025-09-05T02:34:10
4
[ { "id": "Ql0MRofdjN", "forum": "owpU8gxnkM", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission2172/Reviewer_hHRL", "reviewer_name": "Reviewer_hHRL", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "LLMs c...
lBBtmSu5Q2
https://openreview.net/forum?id=lBBtmSu5Q2
On Fine-Grained I/O Complexity of Attention Backward Passes
5
3.25
[ 6, 8, 4, 2 ]
[ 3, 3, 2, 5 ]
4
[ "Attention", "I/O Complexity", "Backward Passes." ]
Large Language Models (LLMs) have demonstrated remarkable capabilities in processing long-context information. However, the quadratic complexity of attention computation with respect to sequence length poses significant computational challenges, and I/O-aware algorithms have been proposed. This paper presents a comprehensive analysis of the I/O complexity for attention mechanisms, focusing on backward passes by categorizing them into small- and large-cache scenarios. Using the red-blue pebble game framework, we establish tight bounds on I/O complexity across all cache sizes. We confirm that the de facto standard I/O-aware algorithm FlashAttention is optimal for both forward and backward passes in the large-cache scenario. For small cache sizes, we provide an algorithm that improves over existing methods and achieves tight bounds. Additionally, we extend our analysis to sparse attention, a mainstream acceleration approach, deriving fine-grained lower bounds for both forward and backward passes and both small and large caches. Our findings complete the theoretical foundation for I/O complexity in attention mechanisms, offering insights for designing efficient algorithms for LLM training and inference.
optimization
https://openreview.net/pdf?id=lBBtmSu5Q2
2025-09-19T13:01:55
4
[ { "id": "XeVdhPAtSV", "forum": "lBBtmSu5Q2", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission15988/Reviewer_gi5z", "reviewer_name": "Reviewer_gi5z", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 1, "presentation": 3, "summary": "The o...
UAZCKdd4R7
https://openreview.net/forum?id=UAZCKdd4R7
Koopman-Assisted Trajectory Synthesis: A Data Augmentation Framework for Offline Imitation Learning
6.5
3.25
[ 4, 6, 8, 8 ]
[ 3, 4, 3, 3 ]
4
[ "Offline Imitation Learning", "Offline Reinforcement Learning", "Data Augmentation" ]
Data augmentation plays a pivotal role in offline imitation learning (IL) by alleviating covariate shift, yet existing methods remain constrained. Single-step techniques frequently violate underlying system dynamics, whereas trajectory-level approaches are plagued by compounding errors or scalability limitations. Even recent Koopman-based methods typically function at the single-step level, encountering computational bottlenecks due to action-equivariance requirements and vulnerability to approximation errors. To overcome these challenges, we introduce Koopman-Assisted Trajectory Synthesis (KATS), a novel framework for generating complete, multi-step trajectories. By operating at the trajectory level, KATS effectively mitigates compounding errors. It leverages a state-equivariant assumption to ensure computational efficiency and scalability, while incorporating a refined generator matrix to bolster robustness against Koopman approximation errors. This approach enables a more direct and efficacious mechanism for distribution matching in offline IL. Extensive experiments demonstrate that KATS substantially enhances policy performance and achieves state-of-the-art (SOTA) results, especially in demanding scenarios with narrow expert data distributions.
reinforcement learning
https://openreview.net/pdf?id=UAZCKdd4R7
2025-09-19T23:31:52
4
[ { "id": "xe11B2ewvK", "forum": "UAZCKdd4R7", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission19398/Reviewer_cuKL", "reviewer_name": "Reviewer_cuKL", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "This ...
PVooP3d7cI
https://openreview.net/forum?id=PVooP3d7cI
The Price of a Second Thought: On the Evaluation of Reasoning Efficiency in Large Language Models
3.5
3.5
[ 6, 4, 2, 2 ]
[ 3, 4, 4, 3 ]
4
[ "Reasoning Efficiency", "Test-time Scaling", "Large Language Models", "Chain-of-Thought" ]
Recent thinking models trained with reinforcement learning and backward-checking CoT often suffer from overthinking: they produce excessively long outputs even on simple problems, wasting computation. Existing evaluations, based on token efficiency, give an incomplete view as they neglect problem difficulty and intermediate computation costs. We formalize reasoning efficiency as a relative measure between thinking and instruct models, treating instruct models as the minimal-effort baseline. A systematic study across four thinking models and multiple benchmarks reveals two consistent patterns: (i) instruct models achieve higher efficiency overall, and (ii) problem difficulty affects efficiency, with thinking models wasting computation on easy problems but providing value on harder ones. Building on this insight, we propose COTHINK, a simple two-stage pipeline: an instruct model drafts a brief outline, and a thinking model expands it. On GSM8K, MATH500, and AIME24, COTHINK cuts token usage by 21.1% while keeping accuracy on four thinking models, and remains competitive with strong efficiency baselines.
We formalize reasoning efficiency to evaluate thinking models, discover potential scaling laws showing systematic overthinking on simple problems, and propose CoThink to adaptively scale computation with problem complexity.
applications to computer vision, audio, language, and other modalities
https://openreview.net/pdf?id=PVooP3d7cI
2025-09-18T19:32:11
4
[ { "id": "dQoy9XTgh6", "forum": "PVooP3d7cI", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission11280/Reviewer_fcAM", "reviewer_name": "Reviewer_fcAM", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "The a...
mbu8EEnp3a
https://openreview.net/forum?id=mbu8EEnp3a
Do LLMs Signal When They’re Right? Evidence from Neuron Agreement
4.5
4
[ 6, 6, 4, 2 ]
[ 4, 4, 4, 4 ]
4
[ "Neuron-Agreement Decoding (NAD)", "Neuron activation patterns", "Unsupervised answer selection", "Chain-of-thought ensembling", "Token efficiency" ]
Large language models (LLMs) commonly boost reasoning via sample-evaluate-ensemble decoders (e.g., majority voting), achieving label-free gains without ground truth. However, prevailing strategies score candidates using only external outputs such as token probabilities, entropies, or self-evaluations, and these signals can be poorly calibrated after post-training. We instead analyze internal behavior based on neuron activations and uncover three findings: (1) external signals are low-dimensional projections of richer internal dynamics; (2) correct responses activate substantially fewer unique neurons than incorrect ones throughout generation; and (3) activations from correct responses exhibit stronger cross-sample agreement, whereas incorrect ones diverge. Motivated by these observations, we propose Neuron Agreement Decoding (NAD), an unsupervised best-of-N method that selects candidates using activation sparsity and cross-sample neuron agreement, operating solely on internal signals and without requiring comparable textual outputs. NAD enables early correctness prediction within the first 32 generated tokens and supports aggressive early stopping. Across math and science benchmarks with verifiable answers, NAD matches majority voting; on open-ended coding benchmarks where majority voting is inapplicable, NAD consistently outperforms Avg@64. By pruning unpromising trajectories early, NAD reduces token usage by 99\% with minimal loss in generation quality, showing that internal signals provide reliable, scalable, and efficient guidance for label-free ensemble decoding.
interpretability and explainable AI
https://openreview.net/pdf?id=mbu8EEnp3a
2025-09-20T18:18:27
4
[ { "id": "q2QF9xWDWT", "forum": "mbu8EEnp3a", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission25050/Reviewer_gDoM", "reviewer_name": "Reviewer_gDoM", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
M7eWB695jp
https://openreview.net/forum?id=M7eWB695jp
Purifying Generative LLMs from Backdoors without Prior Knowledge or Clean Reference
4.5
4
[ 2, 2, 6, 8 ]
[ 4, 3, 5, 4 ]
4
[ "LLM", "Backdoor attack", "Backdoor Elimination" ]
Backdoor attacks pose severe security threats to large language models (LLMs), where a model behaves normally under benign inputs but produces malicious outputs when a hidden trigger appears. Existing backdoor removal methods typically assume prior knowledge of triggers, access to a clean reference model, or rely on aggressive finetuning configurations, and are often limited to classification tasks. However, such assumptions fall apart in real-world generative LLM settings. In this work, we propose a new framework for purifying **generative LLM** without any prior trigger knowledge or clean references. Through systematic sanity checks, we find that backdoor associations are redundantly encoded across MLP layers, while attention modules primarily amplify trigger signals without establishing the behavior. Leveraging this insight, we shift the focus from isolating specific backdoor triggers to cutting off the trigger–behavior associations, and design an immunization-inspired elimination approach: by constructing multiple synthetic backdoored variants of the given suspicious model, each trained with different malicious trigger–behavior pairs, and contrasting them with their clean counterparts. The recurring modifications across variants reveal a shared **"backdoor signature"**—analogous to antigens in a virus. Guided by this signature, we neutralize highly suspicious components in LLM and apply lightweight finetuning to restore its fluency, producing purified models that withstand diverse backdoor attacks and threat models while preserving generative capability.
generative models
https://openreview.net/pdf?id=M7eWB695jp
2025-09-01T23:24:25
4
[ { "id": "vAWWZyAmIC", "forum": "M7eWB695jp", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission451/Reviewer_rkEg", "reviewer_name": "Reviewer_rkEg", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 1, "presentation": 3, "summary": "The sub...
B4mu5A3wVN
https://openreview.net/forum?id=B4mu5A3wVN
e-HC: Adaptive Sequential Higher Criticism Test for Sparse Mixtures
4
3
[ 8, 2, 2, 4 ]
[ 2, 3, 4, 3 ]
4
[ "higher criticism", "sequential test", "supermartingale", "sparse mixture", "Ville's inequality" ]
We propose e-HC, an adaptive sequential test for detecting sparse and weak signals in a stream of p-values. Unlike existing approaches that rely on asymptotic approximations or require knowledge of alternative parameters, e-HC constructs exact test-martingales using moment-generating function compensators, ensuring anytime-valid Type I error control through Ville's inequality. The method adapts to unknown sparsity and signal strength by maintaining exponential weights across multiple detection thresholds, effectively learning the optimal threshold online. We establish non-asymptotic power guarantees for sparse Gaussian mixture alternatives and derive the expected stopping-time scaling for weak-signal regimes. The same martingale machinery naturally yields anytime-valid confidence sequences for the proportion of significant p-values. Simulations demonstrate that e-HC maintains robust performance under model misspecification, substantially outperforming sequential likelihood ratio tests when the true alternative differs from assumptions.
other topics in machine learning (i.e., none of the above)
https://openreview.net/pdf?id=B4mu5A3wVN
2025-09-18T23:11:05
4
[ { "id": "sFHLMYTsxS", "forum": "B4mu5A3wVN", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission12503/Reviewer_PYho", "reviewer_name": "Reviewer_PYho", "rating": 8, "confidence": 2, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
fQIE4NJOVm
https://openreview.net/forum?id=fQIE4NJOVm
Tight Bounds and Achievable Upper Bounds of Minimal Dimensions for Embedding-based Retrieval
5.2
4
[ 6, 8, 6, 2, 4 ]
[ 4, 4, 4, 4, 4 ]
5
[ "representation learning", "embedding-based retrieval" ]
This paper studies the minimal dimension required to embed subset memberships into vector spaces. The lower and upper bounds are derived theoretically and supported empirically for various notions of "distances" or "similarities", including $\ell_2$ metric, inner product, and cosine similarity. Our results suggest no fundamental differences between those metrics in terms of Minimal Embeddable Dimension (MED). In addition, we conduct experiments in the achievable setting, where we find that we can easily realize the logarithmic dependency between the MED and the number of objects to embed. Our results also align well with existing practices in large language models, vector databases, and other related fields.
unsupervised, self-supervised, semi-supervised, and supervised representation learning
https://openreview.net/pdf?id=fQIE4NJOVm
2025-09-12T16:03:15
5
[ { "id": "CBy4xBy2FE", "forum": "fQIE4NJOVm", "review_number": 6, "reviewer_id": "ICLR.cc/2026/Conference/Submission4333/Reviewer_KFwt", "reviewer_name": "Reviewer_KFwt", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 4, "presentation": 2, "summary": "This s...
0KFQ4F9YEH
https://openreview.net/forum?id=0KFQ4F9YEH
LoC-Decomp: LLM Autoformalization via Logical Concept Decomposition and Iterative Feedback Correction
4
4
[ 2, 4, 6 ]
[ 4, 3, 5 ]
3
[ "Autoformalization", "Automated theorem proving", "Large language model" ]
Automated formalization—the process of converting natural language mathematical statements into machine-verifiable formal code—plays a critical role in ensuring the reliability of mathematical reasoning generated by large language models (LLMs). Recent studies show that LLMs exhibit strong potential in automating this process, producing formal code for systems such as Lean4, Coq, and Isabelle. Despite prominent advances, existing LLM-based autoformalization methods remain limited: they lack the ability to provide reliable semantic consistency checks to ensure that the formal code accurately preserves the meaning of the original statement. Furthermore, such methods are unable to support iterative improvement through corrective feedback. To address these limitations, we propose Loc-Decomp, a novel framework that integrates an automatic semantic consistency checker and the Lean4 compiler to iteratively refine LLM-generated formalizations, ensuring both semantic consistency and syntactic correctness. Our approach introduces three key innovations: (1) A structured formalization template that decomposes complex formalization tasks into modular, foundational components, and systematically assembles them—like building blocks—into a complete formal expression. (2) A semantic self-checking mechanism based on a divide-conquer-merge strategy to detect subtle inconsistencies between the formalization and the original statement. (3) An iterative feedback-driven refinement loop that leverages both semantic and syntactic error signals to guide the LLM in progressively improving the formal output. By integrating these innovations, Loc-Decomp significantly enhances the accuracy of LLM-driven formalization, reduces reliance on human intervention, and moves closer to truly reliable automated reasoning. Extensive experiments on the MATH and miniF2F datasets demonstrate that our approach achieves a significantly higher formalization success rate compared to baseline methods and previous state-of-the-art (SOTA) approaches. On the miniF2F dataset, for instance, our method attains a success rate of 91.16%, substantially outperforming the previous SOTA result of 46.70%.
Loc-Decomp is a novel framework that enhances LLM-based autoformalization by integrating semantic consistency checks and iterative refinement, achieving a 91.16% success rate on the miniF2F dataset.
neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
https://openreview.net/pdf?id=0KFQ4F9YEH
2025-09-19T18:44:48
3
[ { "id": "hCHKc73ab9", "forum": "0KFQ4F9YEH", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission17639/Reviewer_8Wwq", "reviewer_name": "Reviewer_8Wwq", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 1, "presentation": 1, "summary": "This ...
MlQ0goJG9U
https://openreview.net/forum?id=MlQ0goJG9U
DiRA: Nuclear Norm Dynamic Rank Adaptation for Large Language Models
3.5
3.75
[ 4, 4, 4, 2 ]
[ 4, 4, 3, 4 ]
4
[ "LLM", "Fine-Tuning", "Nuclear-Norm" ]
Parameter-Efficient Fine-Tuning (PEFT) methods, particularly Low-Rank Adaptation (LoRA), have become a standard paradigm for adapting Large Language Models (LLMs) to specific tasks. However, standard LoRA implementations use a fixed, uniform adaptation rank across all layers, a static allocation that fails to capture the varying contributions of different layers. In this work, we introduce DiRA, which learns layer-adaptive ranks by penalizing the nuclear norm of the weight update matrix $\Delta W$ for each layer. While extensive experiments show that DiRA matches or surpasses fixed-rank LoRA baselines across tasks, its primary contribution is methodological and scientific. Using DiRA as a probe, we uncover a mechanism of catastrophic forgetting in continual learning: forgetting is frequently accompanied by pronounced changes in the rank landscape. Building on this insight, we propose a new strategy that treats the previously learned rank landscape as a prior and, with only a small amount of data, regularizes current updates to retain newly acquired knowledge while recovering old-task memory, thereby mitigating forgetting. Taken together, these results position DiRA both as an efficient PEFT method and as a principled approach for understanding—and mitigating—forgetting in LLMs.
We introduce a new PEFT method DiRA, which not only improves model performance but also reveals changes in the rank landscape associated with catastrophic forgetting.
other topics in machine learning (i.e., none of the above)
https://openreview.net/pdf?id=MlQ0goJG9U
2025-09-16T21:27:36
5
[ { "id": "p6EeGeXG7I", "forum": "MlQ0goJG9U", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission7607/Reviewer_tpMu", "reviewer_name": "Reviewer_tpMu", "rating": 4, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "This p...
B9iMn59jFE
https://openreview.net/forum?id=B9iMn59jFE
OmniEval: A Benchmark for Evaluating Omni-modal Models with Visual, Auditory, and Textual Inputs
4
4.25
[ 6, 6, 2, 2 ]
[ 4, 4, 4, 5 ]
4
[ "Omni models", "Benchmark", "Multimodality" ]
In this paper, we introduce OmniEval, a benchmark for evaluating multimodal Chinese and English video understanding, which encompasses visual, auditory, and textual inputs. Compared with existing benchmarks, our OmniEval has several distinctive features: (i) Full-modal collaboration: We design evaluation tasks that highlight the strong coupling between audio and video, requiring models to effectively leverage the collaborative perception of all modalities; (ii) Diversity of videos: OmniEval includes 1,000 audio-visual synchronized videos, with 307 Chinese videos and 558 English videos, systematically categorized into four major domains. (iii) Diversity and granularity of tasks: OmniEval contains 2783 question-answer pairs, comprising 1412 open-ended questions and 1371 multiple-choice questions. These questions are divided into four major task types and 12 subtask types to achieve comprehensive evaluation. Among them, we introduce a more granular video localization task, named Grounding. Based on our OmniEval, we have extensively evaluated a variety of state-of-the-art models. The experimental results indicate that existing models face significant challenges in understanding the real world, with the best accuracy rate being only 10%. We hope that our OmniEval can provide a platform for evaluating the ability to construct and understand coherence from the context of all modalities.
datasets and benchmarks
https://openreview.net/pdf?id=B9iMn59jFE
2025-09-19T15:16:38
5
[ { "id": "1I41nebHsn", "forum": "B9iMn59jFE", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission16567/Reviewer_3gEp", "reviewer_name": "Reviewer_3gEp", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
NQsdnYkCar
https://openreview.net/forum?id=NQsdnYkCar
Arbitrary-Order Block SignSGD for Memory-Efficient LLM Fine-Tuning
6
3.75
[ 4, 8, 6, 6 ]
[ 3, 4, 4, 4 ]
4
[ "Block-Coordinate Optimization", "SignSGD", "Large Language Models (LLMs)", "Memory-Efficient Fine-Tuning" ]
We propose \textbf{ABSignSGD}, a block‑coordinate variant of sign-based descent with flexible block selection that enables memory‑ and runtime‑efficient full‑parameter fine‑tuning of large language models. We present a unified convergence analysis under mild conditions, covering both the base method and a \textit{majority‑vote} extension for distributed training. The latter improves communication efficiency by aggregating only gradient signs rather than averaging full gradients. Experiments on Qwen3‑8B and Llama3-8B, spanning mathematical reasoning and general instruction‑following tasks, show that ABSignSGD converges faster per iteration and delivers superior downstream performance while reducing both runtime and memory usage compared to existing methods. Ablation studies further indicate that the memoryless sign-based update naturally complements block‑wise updates, explaining the method’s strong empirical performance.
optimization
https://openreview.net/pdf?id=NQsdnYkCar
2025-09-18T09:36:34
4
[ { "id": "iB5Ox799f4", "forum": "NQsdnYkCar", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission10064/Reviewer_LQY1", "reviewer_name": "Reviewer_LQY1", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "The p...
7y9IKjl8dt
https://openreview.net/forum?id=7y9IKjl8dt
SPARK: Stepwise Process-Aware Rewards for Reference-Free Reinforcement Learning
4
3.666667
[ 2, 4, 6 ]
[ 4, 3, 4 ]
3
[ "Process Reward Models", "Inference-time Scaling", "Reference-free Reinforcement Learning", "Mathematical Reasoning", "Synthetic Verification" ]
Process reward models (PRMs) that provide dense, step-level feedback have shown promise for reinforcement learning, yet their adoption remains limited by the need for expensive step-level annotations or ground truth references. We propose SPARK--a three-stage framework where in the first stage a generator model produces diverse solutions and a verifier model evaluates them using parallel scaling (self-consistency) and sequential scaling (meta-critique). In the second stage, we use these verification outputs as synthetic training data to fine-tune generative process reward models, which subsequently serve as reward signals during training. We show that aggregating multiple independent verifications at the step level produces training data for process reward models that surpass ground-truth outcome supervision—achieving 67.5 F1 on ProcessBench (a benchmark for identifying erroneous steps in mathematical reasoning) compared to 66.4 for reference-guided training and 61.9 for GPT-4o. In the final stage, we apply our generative PRM with chain-of-thought verification (PRM-CoT) as the reward model in RL experiments on mathematical reasoning, and introduce format constraints to prevent reward hacking. Using Qwen2.5-Math-7B, we achieve 47.4\% average accuracy across six mathematical reasoning benchmarks, outperforming ground-truth-based RLVR (43.9\%). Our work enables reference-free RL training that exceeds ground-truth methods, opening new possibilities for domains lacking verifiable answers or accessible ground truth.
We train process reward models without ground truth by aggregating multiple verification attempts through inference-time scaling, achieving better performance than ground-truth-based approaches.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=7y9IKjl8dt
2025-09-19T07:33:11
3
[ { "id": "87ws3DfBoV", "forum": "7y9IKjl8dt", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission14538/Reviewer_YhFA", "reviewer_name": "Reviewer_YhFA", "rating": 2, "confidence": 4, "soundness": 1, "contribution": 2, "presentation": 3, "summary": "This ...
fdp7klHmnn
https://openreview.net/forum?id=fdp7klHmnn
Robust Learning of Diffusion Models with Extremely Noisy Conditions
4
3.25
[ 4, 4, 2, 6 ]
[ 3, 3, 4, 3 ]
4
[ "diffusion models", "noisy conditions", "generation controllability" ]
Conditional diffusion models achieve generative controllability by incorporating external conditions. However, their performance degrades significantly with noisy conditions, such as corrupted labels in image generation or unreliable observations or states in control policy generation. This paper introduces a robust learning framework to address extremely noisy conditions in conditional diffusion models. We empirically demonstrate that existing noise-robust methods fail when the noise level is high. To overcome this, we propose learning pseudo conditions as surrogates for clean conditions and refining them progressively via temporal ensembling. Additionally, we develop a Reverse-time Diffusion Condition (RDC) technique, which diffuses pseudo conditions to reinforce the \textit{memorization effect} and further facilitate the refinement of the pseudo conditions. Experimentally, our approach achieves state-of-the-art performance across a range of noise levels on both class-conditional image generation and visuomotor policy generation tasks.
generative models
https://openreview.net/pdf?id=fdp7klHmnn
2025-09-06T12:39:37
4
[ { "id": "M0Cw5LZNwX", "forum": "fdp7klHmnn", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission2525/Reviewer_hdc3", "reviewer_name": "Reviewer_hdc3", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 2, "summary": "This p...
InOz43jIVI
https://openreview.net/forum?id=InOz43jIVI
CoSteer: Collaborative Decoding-Time Personalization via Local Delta Steering
5
3.5
[ 4, 6, 6, 4 ]
[ 4, 3, 4, 3 ]
4
[ "Decoding-time personalization", "Collaborative text generation", "Privacy-preserving" ]
Personalized text generation has become crucial for adapting language models to diverse and evolving users' personal context across cultural, temporal, and contextual dimensions. While existing methods often rely on centralized fine-tuning or static preference alignment, they struggle to achieve real-time adaptation under resource constraints inherent to personal devices. This limitation creates a dilemma: large cloud-based models lack access to localized user-specific information, while small on-device models cannot match the generation quality of their cloud counterparts. To address this dichotomy, we present **CoSteer**, a novel collaborative framework that enables decoding-time personalization through localized delta steering. Our key insight lies in leveraging the logits difference between personal context-aware and -agnostic outputs from local small models as steering signals for cloud-based LLMs. Specifically, we formulate token-level optimization as an online learning problem, where local delta vectors dynamically adjust the remote LLM's logits within the on-device environment. This approach preserves privacy by transmitting only the final steered tokens rather than raw data or intermediate vectors, while maintaining cloud-based LLMs' general capabilities without fine-tuning. Through comprehensive experiments on various personalized generation tasks, we demonstrate that CoSteer effectively assists LLMs in generating personalized content by leveraging locally stored user profiles and histories, ensuring privacy preservation through on-device data processing while maintaining acceptable computational overhead. Our anonymized code and data are available at https://anonymous.4open.science/r/Costeer-4977
CoSteer: a framework for private LLM personalization. A local SLM uses on-device data to compute a delta signal that steers a cloud LLM, achieving high-quality personalized output without transmitting user data.
alignment, fairness, safety, privacy, and societal considerations
https://openreview.net/pdf?id=InOz43jIVI
2025-09-18T23:18:11
4
[ { "id": "atK2BFM0OS", "forum": "InOz43jIVI", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission12572/Reviewer_WxRf", "reviewer_name": "Reviewer_WxRf", "rating": 4, "confidence": 4, "soundness": 2, "contribution": 3, "presentation": 3, "summary": "This ...
qzgro4i3sg
https://openreview.net/forum?id=qzgro4i3sg
Efficient numeracy in language models through single-token number embeddings
4.5
3.5
[ 4, 4, 6, 4 ]
[ 3, 4, 3, 4 ]
4
[ "language model", "LLM", "arithmetic", "numeracy", "benchmark", "single-token number embedding", "tokenization" ]
To drive progress in science and engineering, large language models (LLMs) must be able to process large amounts of numerical data and solve long calculations efficiently. This is currently only possible through the use of external tools or extensive reasoning chains, either limiting the numerical intuition of LLMs or limiting the length of problems they can solve. We show that frontier LLMs require excessive amounts of reasoning tokens to solve even basic calculations, which is exacerbated by their tokenization strategies that split single numbers into multiple tokens. This motivates the need for efficient and effective single-token number encodings. We introduce a set of desiderata for such encodings and show that existing approaches fail to fulfill them. To address these shortcomings, we propose BitTokens, a novel tokenization strategy that embeds any number into a single token using its IEEE 754 binary floating-point representation. Through extensive experiments we show that our BitTokens allow even small language models to learn algorithms that solve basic arithmetic operations nearly perfectly. This newly gained efficiency could expand the length and complexity of problems language models can solve.
We propose BitTokens, a novel tokenization strategy for LLMs that embeds numbers using their IEEE 754 binary floating-point representation, which allows for efficient numeracy in language models
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=qzgro4i3sg
2025-09-15T19:43:56
4
[ { "id": "350r2qrCTi", "forum": "qzgro4i3sg", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission5809/Reviewer_bJvs", "reviewer_name": "Reviewer_bJvs", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "The ma...
PBIHh6ibal
https://openreview.net/forum?id=PBIHh6ibal
PRPO: Paragraph-level Policy Optimization for Vision-Language Deepfake Detection
5.5
3
[ 6, 6, 4, 6 ]
[ 2, 4, 3, 3 ]
4
[ "deepfake detection", "vision language models", "deepfake reasoning" ]
The rapid rise of synthetic media has made deepfake detection a critical challenge for online safety and trust. Progress remains constrained by the scarcity of large, high-quality datasets. Although multimodal large language models (LLMs) exhibit strong reasoning capabilities, their performance on deepfake detection is poor, often producing explanations that are misaligned with visual evidence or hallucinatory. To address this limitation, we introduce a reasoning-annotated dataset for deepfake detection and propose Paragraph-level Relative Policy Optimization (PRPO), a reinforcement learning algorithm that aligns LLM reasoning with image content at the paragraph level. Experiments show that PRPO improves detection accuracy by a wide margin and achieves the highest reasoning score of 4.55/5.0. Ablation studies further demonstrate that PRPO significantly outperforms GRPO under test-time conditions. These results underscore the importance of grounding multimodal reasoning in visual evidence to enable more reliable and interpretable deepfake detection.
reinforcement learning
https://openreview.net/pdf?id=PBIHh6ibal
2025-09-17T17:33:46
4
[ { "id": "AP9sIuSZ2Z", "forum": "PBIHh6ibal", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission8895/Reviewer_24DX", "reviewer_name": "Reviewer_24DX", "rating": 6, "confidence": 2, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "The pa...
FOrVEtwixO
https://openreview.net/forum?id=FOrVEtwixO
LangMedSAM: Scalable Adaptation of Medical Segment Anything Model (MedSAM) for Language-Prompted Medical Image Segmentation
2
4.5
[ 2, 2, 0, 4 ]
[ 4, 5, 5, 4 ]
4
[ "Medical Image Computing", "Image Segmentation", "Foundational Model" ]
Image segmentation is a crucial component of medical imaging, facilitating precise analysis and diagnosis by identifying anomalies and structures across various imaging modalities. Recent advancements have led to the development of foundational medical image segmentation models such as MedSAM. Trained on a large corpus of medical images, MedSAM generates segmentation masks based on user prompts such as bounding boxes and points. For faster inference, LiteMedSAM, a lightweight variant of MedSAM, offers a computationally more practical solution, while maintaining comparable performance. However, manually providing bounding boxes for each 2D slice in volumetric imaging remains cumbersome and hinders the automatic processing of large datasets. To address this, we introduce LangMedSAM, a multi-modal text-based segmentation model that leverages natural language prompts for mask generation in radiological images. LangMedSAM is trained on 20 publicly available medical datasets and evaluated both on these datasets and on 4 additional external datasets to assess generalizability. Building on LiteMedSAM’s architecture, it supports segmentation via both text-based prompts and conventional inputs such as bounding boxes. Our results show that text-based prompts provide a scalable and effective solution for multi-modal and multi-region medical image segmentation, offering a practical alternative to conventional prompting methods in MedSAM—particularly for the automated processing of large collections of scans.
We propose LangMedSAM, a multi-modal segmentation model that uses natural language prompts to generate anatomical and pathological masks, reducing dependence on manual bounding boxes while maintaining strong CT and MR performance.
applications to physical sciences (physics, chemistry, biology, etc.)
https://openreview.net/pdf?id=FOrVEtwixO
2025-09-20T05:16:56
4
[ { "id": "2RRtjc6g1M", "forum": "FOrVEtwixO", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission21360/Reviewer_pMMq", "reviewer_name": "Reviewer_pMMq", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 1, "summary": "The p...
INwNHRWN2o
https://openreview.net/forum?id=INwNHRWN2o
Structural Error Patterns Matter: Towards More Structure-aware GNN Evaluation and Training
2
4.25
[ 2, 2, 4, 0 ]
[ 4, 5, 3, 5 ]
4
[ "GNN", "Model Evaluation", "Error Pattern" ]
Graph Neural Networks (GNNs) are a specialized family of neural networks designed to handle graph-structured data, enabling the modeling of complex relationships within graphs. Despite significant algorithmic improvements, the issue of performance evaluation for GNNs has largely been overlooked in the literature. A crucial but underexplored aspect of GNN evaluation is understanding how errors are distributed across the graph structure, which we refer to as the "structural error pattern." To the best of our knowledge, this paper is among the first to highlight the importance of paying attention to these error patterns, which are essential not only for model selection—especially in spatial applications where localized or clustered errors can signal critical issues—but also for providing algorithmic insights into the model’s performance. In this work, we introduce a novel mathematical framework that analyzes and differentiates evaluation metrics based on their sensitivity to structural error patterns. Through a thorough theoretical analysis, we identify the limitations of traditional metrics—such as accuracy and mean squared error—that fail to capture the complexity of these error distributions. To address these shortcomings, we propose a new evaluation metric explicitly designed to detect and quantify structural error patterns, offering deeper insights into GNN performance. Our extensive empirical experiments demonstrate that this metric enhances model selection and improves robustness. Furthermore, we show that it can be incorporated as a regularization method during training, leading to more reliable GNN predictions in real-world applications.
This paper studies the structural error distribution of GNNs
learning on graphs and other geometries & topologies
https://openreview.net/pdf?id=INwNHRWN2o
2025-09-10T22:43:17
4
[ { "id": "KlNLFjea5P", "forum": "INwNHRWN2o", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission3761/Reviewer_ozky", "reviewer_name": "Reviewer_ozky", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "This p...
LtTuAVkKoM
https://openreview.net/forum?id=LtTuAVkKoM
Focusing by Contrastive Attention: Enhancing VLMs' Visual Reasoning
4
3.4
[ 4, 4, 4, 4, 4 ]
[ 3, 3, 4, 4, 3 ]
5
[ "Vision-Language Models", "Visual Reasoning", "Large Language Model", "LLM", "VLM", "Reasoning" ]
Vision-Language Models (VLMs) have demonstrated remarkable success across diverse visual tasks, yet their performance degrades in complex visual environments. While existing enhancement approaches require additional training, rely on external segmentation tools, or operate at coarse-grained levels, they overlook the innate ability within VLMs. To bridge this gap, we investigate VLMs' attention patterns and discover that: (1) visual complexity strongly correlates with attention entropy, negatively impacting reasoning performance; (2) attention progressively refines from global scanning in shallow layers to focused convergence in deeper layers, with convergence degree determined by visual complexity. (3) Theoretically, we prove that the contrast of attention maps between general queries and task-specific queries enables the decomposition of visual signal into semantic signals and visual noise components. Building on these insights, we propose Contrastive Attention Refinement for Visual Enhancement (CARVE), a training-free method that extracts task-relevant visual signals through attention contrasting at the pixel level. Extensive experiments demonstrate that CARVE consistently enhances performance, achieving up to 75% improvement on open-source models. Our work provides critical insights into the interplay between visual complexity and attention mechanisms, offering an efficient pathway for improving visual reasoning with contrasting attention.
We propose Contrastive Attention Refinement for Visual Enhancement (CARVE), a training-free method that extracts task-relevant visual signals through attention contrasting at the pixel level.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=LtTuAVkKoM
2025-09-12T23:20:56
9
[ { "id": "fe0598m4vt", "forum": "LtTuAVkKoM", "review_number": 5, "reviewer_id": "ICLR.cc/2026/Conference/Submission4502/Reviewer_WNpL", "reviewer_name": "Reviewer_WNpL", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 4, "summary": "The pa...
8UZpmrxoLG
https://openreview.net/forum?id=8UZpmrxoLG
Astra: General Interactive World Model with Autoregressive Denoising
5
3
[ 6, 4, 6, 4 ]
[ 2, 3, 3, 4 ]
4
[ "world model", "video generation" ]
Recent advances in diffusion transformers have empowered video generation models to generate high-quality video clips from texts or images. However, world models with the ability to predict long-horizon futures from past observations and actions remain underexplored, especially for general-purpose scenarios and various forms of actions. To bridge this gap, we introduce Astra, an interactive general world model that generates real-world futures for diverse scenarios (e.g., autonomous driving, robot grasping) with precise action interactions (e.g., camera motion, robot action). We propose an autoregressive denoising architecture and use temporal causal attention to aggregate past observations and support streaming outputs. We use a noise-augmented history memory to avoid over-reliance on past frames, balancing responsiveness with temporal coherence. For precise action control, we introduce an action-aware adapter that directly injects action signals into the denoising process. We further develop a mixture of action experts that dynamically routes heterogeneous action modalities, enhancing versatility across diverse real-world tasks such as exploration, manipulation, and camera control. Astra achieves interactive, consistent, and general long-term video prediction and supports various forms of interactions. Experiments across multiple datasets demonstrate the improvements of Astra in fidelity, long-range prediction, and action alignment over existing state-of-the-art world models.
generative models
https://openreview.net/pdf?id=8UZpmrxoLG
2025-09-17T23:16:34
4
[ { "id": "FS3A0rAUuF", "forum": "8UZpmrxoLG", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission9431/Reviewer_toMU", "reviewer_name": "Reviewer_toMU", "rating": 6, "confidence": 2, "soundness": 2, "contribution": 3, "presentation": 2, "summary": "In thi...
2EQPpEZtEK
https://openreview.net/forum?id=2EQPpEZtEK
DiSTAR: Diffusion over a Scalable Token Autoregressive Representation for Speech Generation
3.333333
3.666667
[ 4, 4, 2 ]
[ 3, 4, 4 ]
3
[ "text-to-speech", "residual vector quantization", "masked diffusion model", "autoregressive language model" ]
Recent attempts to interleave autoregressive (AR) sketchers with diffusion-based refiners over continuous speech representations have shown promise, but they remain brittle under distribution shift and offer limited levers for controllability. We introduce DiSTAR, a zero-shot text-to-speech framework that operates entirely in a discrete residual vector quantization (RVQ) code space and tightly couples an AR language model with a masked diffusion model, without forced alignment or a duration predictor. Concretely, DiSTAR drafts block-level RVQ tokens with an AR language model and then performs parallel masked-diffusion infilling conditioned on the draft to complete the next block, yielding long-form synthesis with blockwise parallelism while mitigating classic AR exposure bias. The discrete code space affords explicit control at inference: DiSTAR produces high-quality audio under both greedy and sample-based decoding using classifier-free guidance, supports trade-offs between robustness and diversity, and enables variable bit-rate and controllable computation via RVQ layer pruning at test time. Extensive experiments and ablations demonstrate that DiSTAR surpasses state-of-the-art zero-shot TTS systems in robustness, naturalness, and speaker/style consistency, while maintaining rich output diversity. Audio samples are provided on \url{https://anonymous.4open.science/w/DiSTAR_demo}.
applications to computer vision, audio, language, and other modalities
https://openreview.net/pdf?id=2EQPpEZtEK
2025-09-19T15:56:23
3
[ { "id": "oBwROuronm", "forum": "2EQPpEZtEK", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission16779/Reviewer_mwhZ", "reviewer_name": "Reviewer_mwhZ", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 3, "presentation": 3, "summary": "DiSTA...
M84KJx6oCx
https://openreview.net/forum?id=M84KJx6oCx
SPARK: Synergistic Policy And Reward Co-Evolving Framework
4
4
[ 6, 2, 4, 4 ]
[ 4, 4, 4, 4 ]
4
[ "RLVR", "RLHF", "LLM", "LVLM" ]
Recent Large Language Models (LLMs) and Large Vision-Language Models (LVLMs) increasingly use Reinforcement Learning (RL) for post-pretraining, such as RL with Verifiable Rewards (RLVR) for objective tasks and RL from Human Feedback (RLHF) for subjective tasks. However, RLHF incurs high costs and potential reward–policy mismatch due to reliance on human preferences, while RLVR still wastes supervision by discarding rollouts and correctness signals after each update. To address these challenges, we introduce the Synergistic Policy And Reward Co-Evolving Framework (SPARK), an efficient, on-policy, and stable method that builds on RLVR. Instead of discarding rollouts and correctness data, SPARK recycles this valuable information to simultaneously train the model itself as a generative reward model. This auxiliary training uses a mix of objectives, such as pointwise reward score, pairwise comparison, and evaluation conditioned on further-reflection responses, to teach the model to evaluate and improve its own responses. Our process eliminates the need for a separate reward model and costly human preference data. SPARK creates a positive co-evolving feedback loop: improved reward accuracy yields better policy gradients, which in turn produce higher-quality rollouts that further refine the reward model. Our unified framework supports test-time scaling via self-reflection without external reward models and their associated costs. We show that SPARK achieves significant performance gains on multiple LLM and LVLM models and multiple reasoning, reward models, and general benchmarks. For example, SPARK-VL-7B achieves an average 9.7\% gain on 7 reasoning benchmarks, 12.1\% on 2 reward benchmarks, and 1.5\% on 8 general benchmarks over the baselines, demonstrating robustness and broad generalization.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=M84KJx6oCx
2025-09-04T12:49:17
4
[ { "id": "ffSL10iEQi", "forum": "M84KJx6oCx", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission1899/Reviewer_R8hV", "reviewer_name": "Reviewer_R8hV", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "This p...
1EyqJNvVlh
https://openreview.net/forum?id=1EyqJNvVlh
Anchors Aweigh! Sail for Optimal Unified Multi-Modal Representations
3
3.75
[ 2, 4, 4, 2 ]
[ 3, 4, 4, 4 ]
4
[ "Multimodal alignment", "mutual information" ]
A unified representation space in multi-modal learning is essential for effectively integrating diverse data sources, such as text, images, and audio, to enhance efficiency and performance across various downstream tasks. Recent binding methods, such as ImageBind, typically rely on a single, fixed anchor modality for aligning multi-modal data. We mathematically analyze these fixed anchor binding methods and uncover significant limitations: (1) over-reliance on the choice of the anchor modality, (2) inadequate capture of intra-modal information, and (3) failure to account for cross-modal correlation among non-anchored modalities. To address these issues, we propose the need for adaptive anchor binding methods, exemplified by our framework CentroBind. The proposed method uses adaptively adjustable centroid-based anchors generated from all available modalities, leading to a balanced and rich representation space. We theoretically demonstrate that our approach captures three critical properties of multi-modal learning---intra-modal learning, inter-modal learning, and multi-modal alignment---while constructing a unified representation that spans all modalities. Experiments on both synthetic and real-world datasets show that adaptive anchor methods such as CentroBind consistently outperform fixed anchor binding methods, verifying our analysis.
unsupervised, self-supervised, semi-supervised, and supervised representation learning
https://openreview.net/pdf?id=1EyqJNvVlh
2025-09-20T13:27:21
4
[ { "id": "MJVLlUkv3p", "forum": "1EyqJNvVlh", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission23589/Reviewer_Yya1", "reviewer_name": "Reviewer_Yya1", "rating": 2, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "This ...
qR2TjMZ10B
https://openreview.net/forum?id=qR2TjMZ10B
On the Representation Degradation in Vision-Language-Action Models
5
3.75
[ 4, 6, 6, 4 ]
[ 4, 4, 3, 4 ]
4
[ "robot policy learning", "vision-language-action models", "representation learning" ]
Vision-Language-Action (VLA) models have become a promising paradigm for robotic decision-making, yet their application remains limited by generalization bottlenecks. In this paper, we conduct a layer-wise representation analysis and uncover a previously overlooked phenomenon of representation degradation: deeper layers tasked with action generation exhibit diminished generalization to both semantic information and environmental dynamics. To mitigate this issue, we introduce hidden Space WOrld modeLing (SWOL), a lightweight but efficient approach that aligns degraded deep-layer features with more generalizable mid-layer representations extrapolated from future observations. SWOL enforces temporally consistent, action-grounded representations without modifying model architecture or inference procedures. Extensive experiments in simulation and real-world settings demonstrate that SWOL alleviates representation degradation, leading to improved policy effectiveness and stronger generalization across modalities of vision, language, and dynamics.
applications to robotics, autonomy, planning
https://openreview.net/pdf?id=qR2TjMZ10B
2025-09-19T21:18:36
4
[ { "id": "NRspVBjDft", "forum": "qR2TjMZ10B", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission18448/Reviewer_iHK4", "reviewer_name": "Reviewer_iHK4", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "The p...
XU2STJa1Fi
https://openreview.net/forum?id=XU2STJa1Fi
Mechanistic Detection and Mitigation of Hallucination in Large Reasoning Models
4
3.5
[ 6, 4, 2, 4 ]
[ 3, 4, 4, 3 ]
4
[ "Reasoning", "Hallucination", "Mechanistic Interpretability" ]
Large Reasoning Models (LRMs) have shown impressive capabilities in multi-step reasoning tasks. However, alongside these successes, a more deceptive form of model error has emerged—**Reasoning Hallucination**—where logically coherent but factually incorrect reasoning traces lead to persuasive yet faulty conclusions. Unlike traditional hallucinations, these errors are embedded within structured reasoning, making them more difficult to detect and potentially more harmful. In this work, we investigate reasoning hallucinations from a mechanistic perspective. We propose the **Reasoning Score**, which quantifies the depth of reasoning by measuring the divergence between logits obtained from projecting late layers of LRMs to the vocabulary space, effectively distinguishing shallow pattern-matching from genuine deep reasoning. Using this score, we conduct an in-depth analysis on the ReTruthQA dataset and identify two key reasoning hallucination patterns: early-stage fluctuation in reasoning depth and incorrect backtracking to flawed prior steps. These insights motivate our **R**easoning **H**allucination **D**etection (**RHD**) framework, which achieves state-of-the-art performance across multiple domains. To mitigate reasoning hallucinations, we further introduce **GRPO-R**, an enhanced reinforcement learning algorithm that incorporates step-level deep reasoning rewards via potential-based shaping. Our theoretical analysis establishes stronger generalization guarantees, and experiments demonstrate improved reasoning quality and reduced hallucination rates.
We propose a Reasoning Score grounded in mechanistic interpretability to detect and mitigate reasoning hallucinations in LRMs, introducing RHD for detection and GRPO-R for mitigation via step-level rewards.
alignment, fairness, safety, privacy, and societal considerations
https://openreview.net/pdf?id=XU2STJa1Fi
2025-09-11T13:08:37
4
[ { "id": "59VyOqWit5", "forum": "XU2STJa1Fi", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission3914/Reviewer_umeb", "reviewer_name": "Reviewer_umeb", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This p...
SLhLUdlaqc
https://openreview.net/forum?id=SLhLUdlaqc
Parameter-Efficient Reinforcement Learning using Prefix Optimization
4.5
3.75
[ 4, 4, 8, 2 ]
[ 4, 4, 3, 4 ]
4
[ "reinforcement learning with verifiable rewards", "parameter efficient tuning" ]
Reinforcement Learning with Verifiable Rewards (RLVR) is a leading approach for tuning language models on mathematical reasoning tasks. However, it remains unclear whether RLVR's gains stem from genuine reasoning improvements or simply from steering the model toward answer formats that already appear in the reference distribution. Inspired by recent evidence \citep{zhao2025echo,yue2025does}, we study this question by optimizing only the first $k$ tokens (e.g. $k=32$) of each solution, generating the remainder of the response from the reference model. We study two methods for prefix optimization, using a naive algorithm that clusters prefixes and selects the best prefix (Prefix Clustering), and a method that optimizes the prefix by finetuning a lightweight adapter model with RL (Prefix-RL). We show that tuning only the first $k$ tokens can significantly improve the accuracy on math, suggesting that at least some of the gains from RL are due to upweighting a preferable solution strategy. Our results suggest that simple prefix optimization methods can provide an efficient alternative to RL, delivering substantial improvements across different models and benchmarks for a tiny fraction of the compute required for standard RL.
Optimizing just the first k tokens with a small RL-tuned adapter (“Prefix-RL”) or a Prefix Clustering approach steers a frozen LLM’s solution strategy, recovering much of full RL’s math gains at a tiny compute cost.
reinforcement learning
https://openreview.net/pdf?id=SLhLUdlaqc
2025-09-20T04:04:25
4
[ { "id": "krSR5K41rl", "forum": "SLhLUdlaqc", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission20975/Reviewer_hLfX", "reviewer_name": "Reviewer_hLfX", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "This ...
O02qsgSUtY
https://openreview.net/forum?id=O02qsgSUtY
STEDiff: Revealing the Spatial and Temporal Redundancy of Backdoor Attacks in Text-to-Image Diffusion Models
5
3.5
[ 4, 6, 4, 6 ]
[ 4, 3, 3, 4 ]
4
[ "Diffusion Models", "Backdoor Attacks", "Backdoor Defense", "AI Security" ]
Recently, diffusion models have been recognized as state-of-the-art models for image generation due to their ability to produce high-quality images. However, recent studies have shown that diffusion models are susceptible to backdoor attacks, where an attacker can activate hidden biases using a specific trigger pattern, causing the model to generate a predefined target. Fortunately, executing backdoor attacks is still challenging, as they typically require substantial time and memory to perform parameter-based fine-tuning. In this paper, we are the first to reveal the **spatio-temporal redundancy** in backdoor attacks on diffusion models. **Regarding spatial redundancy**, we observed the *enrichment phenomenon*, which reflects the abnormal gradient accumulation induced by backdoor injection. **Regarding temporal redundancy**, we observed a marginal effect associated with specific time steps, indicating that only a limited subset of time steps plays a critical role in backdoor injection. Building on these findings, we present a novel framework, *STEDiff*, comprising two key components: *STEBA* and *STEDF*. *STEBA* is a spatio-temporally efficient accelerated attack strategy that achieves up to **15.07×** speedup in backdoor injection while reducing video memory usage by **82%**. *STEDF* is a detection framework leveraging spatio-temporal features, by modeling the enrichment phenomenon in weights and anisotropy across time steps, which achieves a backdoor detection rate of up to **99.8%**. Our code is available at: [https://anonymous.4open.science/r/STEDiff-9E9F/](https://anonymous.4open.science/r/STEDiff-9E9F/).
In this paper, we are the first to reveal the spatio-temporal redundancy in backdoor attacks on diffusion models. We present a novel framework, STEDiff, including a novel backdoor attack strategy and a reliable backdoor defense framework.
alignment, fairness, safety, privacy, and societal considerations
https://openreview.net/pdf?id=O02qsgSUtY
2025-09-17T15:11:39
4
[ { "id": "gQtRdjksSv", "forum": "O02qsgSUtY", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission8636/Reviewer_ct6g", "reviewer_name": "Reviewer_ct6g", "rating": 4, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "The pa...
82IUMx3yRJ
https://openreview.net/forum?id=82IUMx3yRJ
Equivariant Flow Matching for Point Cloud Assembly
5
3.5
[ 2, 6, 4, 8 ]
[ 4, 3, 4, 3 ]
4
[ "flow matching", "point cloud assembly", "equivariant model" ]
The goal of point cloud assembly is to reconstruct a complete 3D shape by aligning multiple point cloud pieces. This work presents a novel equivariant solver for assembly tasks based on flow matching models. We first theoretically show that the key to learning equivariant distributions via flow matching is to learn related vector fields. Based on this result, we propose an assembly model, called equivariant diffusion assembly (Eda), which learns related vector fields conditioned on the input pieces. We further construct an equivariant path for Eda, which guarantees high data efficiency of the training process. Our numerical results show that Eda is highly competitive on practical datasets, and it can even handle the challenging situation where the input pieces are non-overlapped.
learning on graphs and other geometries & topologies
https://openreview.net/pdf?id=82IUMx3yRJ
2025-09-19T21:20:28
4
[ { "id": "HTIQIVkmkk", "forum": "82IUMx3yRJ", "review_number": 5, "reviewer_id": "ICLR.cc/2026/Conference/Submission18461/Reviewer_9ojc", "reviewer_name": "Reviewer_9ojc", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "This ...
MYqAKKsjF9
https://openreview.net/forum?id=MYqAKKsjF9
LifelongAgentBench: Evaluating LLM Agents as Lifelong Learners
2
3.666667
[ 2, 2, 2 ]
[ 5, 3, 3 ]
3
[ "lifelong learning", "continual learning", "incremental learning", "LLM agent" ]
Lifelong learning is essential for intelligent agents operating in dynamic environments. Current large language model (LLM)-based agents, however, remain stateless and unable to accumulate or transfer knowledge over time. Existing benchmarks treat agents as static systems and fail to evaluate lifelong learning capabilities. We present LifelongAgentBench, the first unified benchmark designed to systematically assess the lifelong learning ability of LLM agents. It provides skill-grounded, interdependent tasks across three interactive environments—Database, Operating System, and Knowledge Graph—with automatic label verification, reproducibility, and modular extensibility. Extensive experiments reveal that conventional experience replay has limited effectiveness for LLM agents due to irrelevant information and context length constraints. We further introduce a group self-consistency mechanism that significantly improves lifelong learning performance. We hope LifelongAgentBench will advance the development of adaptive, memory-capable LLM agents.
We propose a unified benchmark to evaluate the lifelong learning ability of LLM-based agents under diverse environments.
datasets and benchmarks
https://openreview.net/pdf?id=MYqAKKsjF9
2025-09-20T15:33:34
3
[ { "id": "49lwiGbZjH", "forum": "MYqAKKsjF9", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission24174/Reviewer_eERx", "reviewer_name": "Reviewer_eERx", "rating": 2, "confidence": 5, "soundness": 2, "contribution": 1, "presentation": 1, "summary": "This ...
5F2XfLe7An
https://openreview.net/forum?id=5F2XfLe7An
SINQ: Sinkhorn-Normalized Quantization for Calibration-Free Low-Precision LLM Weights
4
4.333333
[ 6, 4, 2 ]
[ 5, 4, 4 ]
3
[ "LLM", "quantization", "integer", "INT4", "inference", "W4A16", "uniform quantization", "calibration-free quantization" ]
Post-training quantization has emerged as the most widely used strategy for deploying large language models at low precision. Still, current methods show perplexity degradation at bit-widths $\leq 4$, partly because representing outliers causes precision issues in parameters that share the same scales as these outliers. This problem is especially pronounced for calibration-free, uniform quantization methods. We introduce SINQ to augment existing post-training quantizers with an additional second-axis scale factor and a fast Sinkhorn–Knopp–style algorithm that finds scales to normalize per-row and per-column variances, thereby minimizing a novel per-matrix proxy target for quantization: the matrix imbalance. Our method has no interactions between layers and can be trivially applied to new architectures to quantize any linear layers. We evaluate our method on the Qwen3 model family and DeepSeek-V2.5. SINQ improves WikiText2 and C4 perplexity significantly against uncalibrated uniform quantization baselines and can be further enhanced by combining it with calibration and non-uniform quantization levels. Code is available in the supplementary.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=5F2XfLe7An
2025-09-19T19:57:51
3
[ { "id": "SZ8EsdGIeu", "forum": "5F2XfLe7An", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission18032/Reviewer_uY9X", "reviewer_name": "Reviewer_uY9X", "rating": 6, "confidence": 5, "soundness": 2, "contribution": 3, "presentation": 3, "summary": "This ...
DKOIADzbtM
https://openreview.net/forum?id=DKOIADzbtM
EchoVLM: Measurement-Grounded Multimodal Learning for Echocardiography
5
3.75
[ 6, 6, 4, 4 ]
[ 4, 5, 3, 3 ]
4
[ "Echocardiography", "vision-language model", "ultrasound" ]
Echocardiography is the most widely used imaging modality in cardiology, yet its interpretation remains labor-intensive and inherently multimodal, requiring view recognition, quantitative measurements, qualitative assessments, and guideline-based reasoning. While recent vision–language models (VLMs) have achieved broad success in natural images and certain medical domains, their potential in echocardiography has been limited by the lack of large-scale, clinically grounded image–text datasets and the absence of measurement-based reasoning central to echo interpretation. We introduce EchoGround-MIMIC, the first measurement-grounded multimodal echocardiography dataset, comprising 19,065 image–text pairs from 1,572 patients with standardized views, structured measurements, measurement-grounded captions, and guideline-derived disease labels. Building on this resource, we propose EchoVLM, a vision–language model that incorporates two novel pretraining objectives: (i) a view-informed contrastive loss that encodes the view-dependent structure of echocardiographic imaging, and (ii) a negation-aware contrastive loss that distinguishes clinically critical negative from positive findings. Across five types of clinical applications with 36 tasks spanning multimodal disease classification, image–text retrieval, view classification, chamber segmentation, and landmark detection, EchoVLM achieves state-of-the-art performance (86.5\% AUC in zero-shot disease classification and 95.1\% accuracy in view classification). We demonstrate that clinically grounded multimodal pretraining yields transferable visual representations and establish EchoVLM as a foundation model for end-to-end echocardiography interpretation. We will release EchoGround-MIMIC and data curation code, enabling reproducibility and further research in multimodal echocardiography interpretation.
applications to physical sciences (physics, chemistry, biology, etc.)
https://openreview.net/pdf?id=DKOIADzbtM
2025-09-04T11:49:15
4
[ { "id": "uqTActq3Nq", "forum": "DKOIADzbtM", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission1886/Reviewer_zREj", "reviewer_name": "Reviewer_zREj", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 2, "summary": "The pa...
J04D9xBUCi
https://openreview.net/forum?id=J04D9xBUCi
Bridging the Preference Gap: Post-Training Input Rewriting with Large Language Models
3
3.75
[ 4, 2, 4, 2 ]
[ 3, 5, 3, 4 ]
4
[ "textual entailment", "natural language inference" ]
Pre-trained language models, such as BERT and RoBERTa, have achieved remarkable performance in semantic classification tasks. Yet, their effectiveness varies with different textual expressions due to inherent preferences developed during training. To address this limitation, we propose a framework that leverages large language models (LLMs) to rewrite input texts in ways that better align with a target classifier's preferences, thereby enhancing its performance. To achieve this, we introduce a training process for the LLM and an automated method for constructing training data that encapsulates the classifier-specific preferences. Furthermore, we present a multi-sampling and filtering strategy to address instability in LLM outputs. Empirical evaluations on semantic classification datasets demonstrate that our framework significantly improves the classifier's performance.
other topics in machine learning (i.e., none of the above)
https://openreview.net/pdf?id=J04D9xBUCi
2025-09-20T18:52:14
4
[ { "id": "USRJ9ysNgw", "forum": "J04D9xBUCi", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission25233/Reviewer_oBL7", "reviewer_name": "Reviewer_oBL7", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "The p...
RAs8XzpNzQ
https://openreview.net/forum?id=RAs8XzpNzQ
A solvable model of inference-time scaling
3
3
[ 2, 4, 4, 2 ]
[ 3, 2, 3, 4 ]
4
[ "test time compute", "inference time compute", "scaling law", "higher dimensional statistics" ]
Recent developments in large language models have shown advantages in reallocating a notable share of computational resource from training time to inference time. However, the principles behind inference time scaling are not well understood. In this paper, we introduce an analytically tractable model of inference-time scaling: Bayesian linear regression with a reward-weighted sampler. We study this problem in the high-dimensional regime, where the deterministic equivalents dictate a closed-form expression for the posterior predictive mean and variance. We analyze the generalization error when training data are sampled from a teacher model. We draw $k$ inference-time samples and select via softmax at a temperature applied to a quadratic reward. When the reward is not too different from the teacher, the generalization error decreases monotonically with increasing inference time samples $k$. However, the specific reward that optimizes inference-time selection generally differs from the teacher. In contrast, substantial reward misspecification induces a finite optimal $k$ beyond which more sampling can increase the generalization error, consistent with recent empirical observations. Furthermore, for fixed $k$ there exists an optimal sampling temperature. In the “best-of-$k$” limit with the teacher as reward, we prove that the generalization error decays as $\Theta(1/k^2)$ and determine the leading coefficient via extreme value theory. These formulas delineate domains where scaling inference-time computation is provably preferable to collecting more data. Finally, we demonstrate that when task difficulty increases, the previously mentioned advantage of inference-time compute degrades.
We propose a solvable model of inference-time scaling.
learning theory
https://openreview.net/pdf?id=RAs8XzpNzQ
2025-09-19T19:30:47
4
[ { "id": "oTyMnGi4GT", "forum": "RAs8XzpNzQ", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission17868/Reviewer_k4BJ", "reviewer_name": "Reviewer_k4BJ", "rating": 2, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "The p...
nU4Fv2yXN1
https://openreview.net/forum?id=nU4Fv2yXN1
Understanding Subpopulation Shifts through a Unified Lens of Separability
5
3.5
[ 4, 4, 6, 6 ]
[ 4, 4, 3, 3 ]
4
[ "Subpopulation shift", "distribution shift", "spurious correlation" ]
Subpopulation shifts have been a major challenge for deploying machine learning algorithms. The shift in subgroup proportions between training and test data always leads to a significant performance drop or suboptimal performance in certain groups, therefore limiting the broader or more reliable usage of machine learning methods. We present a unified theoretical framework to characterize a broad range of subpopulation shifts, including but not limited to well-studied shifts such as spurious correlation, under-representation, and class imbalance. Within this framework, we derive the performance of the Bayesian optimal classifier fitted on skewed data. The evaluation of thorough subpopulation shifts provides a quantitative tool to guide dataset collection. Our analysis further highlights the critical role of the feature separability assumption in our modeling, which explains the effectiveness of recent shift-mitigation methods and enabled principled comparison of encoders. Overall, this framework offers a unified perspective on evaluating subpopulation shifts and provides practical guidance on future work in both data collection and training strategies.
alignment, fairness, safety, privacy, and societal considerations
https://openreview.net/pdf?id=nU4Fv2yXN1
2025-09-18T22:31:48
4
[ { "id": "Bk91danVFQ", "forum": "nU4Fv2yXN1", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission12139/Reviewer_tAiR", "reviewer_name": "Reviewer_tAiR", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "The a...
Eu25AOvORb
https://openreview.net/forum?id=Eu25AOvORb
UniOD: A Universal Model for Outlier Detection across Diverse Domains
6
3.5
[ 6, 6, 6, 6 ]
[ 4, 2, 4, 4 ]
4
[ "outlier detection" ]
Outlier detection (OD), distinguishing inliers and outliers in completely unlabeled datasets, plays a vital role in science and engineering. Although there have been many insightful OD methods, most of them require troublesome hyperparameter tuning (a challenge in unsupervised learning) and costly model training for every task or dataset. In this work, we propose UniOD, a universal OD framework that leverages labeled datasets to train a single model capable of detecting outliers of datasets with different feature dimensions and heterogeneous feature spaces from diverse domains. Specifically, UniOD extracts uniform and comparable features across different datasets by constructing and factorizing multi-scale point-wise similarity matrices. It then employs graph neural networks to capture comprehensive within-dataset and between-dataset information simultaneously, and formulates outlier detection tasks as node classification tasks. As a result, once the training is complete, UniOD can identify outliers in datasets from diverse domains without any further model/hyperparameter selection and parameter optimization, which greatly improves convenience and accuracy in real applications. More importantly, we provide theoretical guarantees for the effectiveness of UniOD, consistent with our numerical results. We evaluate UniOD on 30 benchmark OD datasets against 17 baselines, demonstrating its effectiveness and superiority.
A universal model that can be used for outlier detection on datasets with different feature dimension and heterogeneous feature space across diverse domains.
unsupervised, self-supervised, semi-supervised, and supervised representation learning
https://openreview.net/pdf?id=Eu25AOvORb
2025-09-11T22:43:56
4
[ { "id": "dCs9aqAEXN", "forum": "Eu25AOvORb", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission4106/Reviewer_tKHD", "reviewer_name": "Reviewer_tKHD", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 4, "summary": "This p...
oHEaIwPv9s
https://openreview.net/forum?id=oHEaIwPv9s
Build-Bench: Benchmarking LLM Agents on Compiling Real-World Open Source Software
4.5
3
[ 4, 4, 6, 4 ]
[ 3, 3, 3, 3 ]
4
[ "Agent", "Benchmark", "Compilation", "LLM" ]
Automatically compiling open-source software (OSS) projects is a vital, labor-intensive, and complex task, which makes it a good challenge for LLM Agents. Existing methods rely on manually curated rules and workflows, which cannot adapt to OSS that requires customized configuration or environment setup. Recent attempts using Large Language Models (LLMs) used selective evaluation on a subset of highly rated OSS, a practice that underestimates the realistic challenges of OSS compilation. In practice, compilation instructions are often absent, dependencies are undocumented, and successful builds may even require patching source files or modifying build scripts. We propose a more challenging and realistic benchmark, BUILD-BENCH, comprising OSS that are more diverse in quality, scale, and characteristics. Furthermore, we propose a strong baseline LLM-based agent, OSS-BUILD-AGENT, an effective system with an enhanced build instruction retrieval module that achieves state-of-the-art performance on BUILD-BENCH and is adaptable to heterogeneous OSS characteristics. We also provide detailed analysis regarding different compilation method design choices and their influence on the whole task, offering insights to guide future advances. We believe performance on BUILD-BENCH can faithfully reflect an agent's ability to tackle compilation as a complex software engineering task, and, as such, our benchmark will spur innovation with a significant impact on downstream applications in the fields of software development and software security.
datasets and benchmarks
https://openreview.net/pdf?id=oHEaIwPv9s
2025-09-20T06:36:19
4
[ { "id": "mcFejxteVT", "forum": "oHEaIwPv9s", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission21758/Reviewer_y24p", "reviewer_name": "Reviewer_y24p", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "The p...
DuPYSaCiep
https://openreview.net/forum?id=DuPYSaCiep
UDDETTS: Unifying Discrete and Dimensional Emotions for Controllable Emotional Text-to-Speech
4.5
4
[ 6, 4, 2, 6 ]
[ 4, 4, 5, 3 ]
4
[ "text-to-speech", "LLM", "dimensional emotion", "ADV space", "semi-supervised" ]
Recent large language models (LLMs) have made great progress in the field of text-to-speech (TTS), but they still face major challenges in synthesizing fine-grained emotional speech in an interpretable manner. Traditional methods rely on discrete emotion labels to control emotion categories and intensities, which cannot capture the complexity and continuity of human emotional perception and expression. The lack of large-scale emotional speech datasets with balanced emotion distributions and fine-grained emotional annotations often causes overfitting in synthesis models and impedes effective emotion control. To address these issues, we propose UDDETTS, a universal LLM framework unifying discrete and dimensional emotions for controllable emotional TTS. This model introduces the interpretable Arousal-Dominance-Valence (ADV) space for dimensional emotion description and supports emotion control driven by either discrete emotion labels or nonlinearly quantified ADV values. Furthermore, a semi-supervised training strategy is designed to comprehensively utilize diverse speech datasets with different types of emotional annotations to train the UDDETTS. Experiments show that UDDETTS achieves linear emotion control along three interpretable dimensions, and exhibits superior end-to-end emotional speech synthesis capabilities. Code and demos are available at: https://anonymous.4open.science/w/UDDETTS.
applications to computer vision, audio, language, and other modalities
https://openreview.net/pdf?id=DuPYSaCiep
2025-09-19T03:40:27
4
[ { "id": "no6NHFG7Jj", "forum": "DuPYSaCiep", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission13886/Reviewer_bqm5", "reviewer_name": "Reviewer_bqm5", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
pBTXsu1i77
https://openreview.net/forum?id=pBTXsu1i77
VLM-SubtleBench: How Far Are VLMs from Human-Level Subtle Comparative Reasoning?
5.5
3.25
[ 2, 6, 6, 8 ]
[ 3, 3, 3, 4 ]
4
[ "Vision-language Models", "Multimodal Large Language Models", "Comparative Reasoning", "Benchmark", "Visual Question Answering" ]
The ability to distinguish subtle differences between visually similar images is essential for diverse domains such as industrial anomaly detection, medical imaging, and aerial surveillance. While comparative reasoning benchmarks for vision-language models (VLMs) have recently emerged, they primarily focus on images with large, salient differences and fail to capture the nuanced reasoning required for real-world applications. In this work, we introduce **VLM-SubtleBench**, a benchmark designed to evaluate VLMs on *subtle comparative reasoning*. Our benchmark covers ten difference types—Attribute, State, Emotion, Temporal, Spatial, Existence, Quantity, Quality, Viewpoint, and Action—and curate paired question–image sets reflecting these fine-grained variations. Unlike prior benchmarks restricted to natural image datasets, our benchmark spans diverse domains, including industrial, aerial, and medical imagery. Through extensive evaluation of both proprietary and open-source VLMs, we reveal systematic gaps between model and human performance across difference types and domains, and provide controlled analyses highlighting where VLMs’ reasoning sharply deteriorates. Together, our benchmark and findings establish a foundation for advancing VLMs toward human-level comparative reasoning.
datasets and benchmarks
https://openreview.net/pdf?id=pBTXsu1i77
2025-09-08T15:33:07
4
[ { "id": "ac3qBXAB6O", "forum": "pBTXsu1i77", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission3016/Reviewer_Waq7", "reviewer_name": "Reviewer_Waq7", "rating": 2, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "In thi...
3PECod4ieb
https://openreview.net/forum?id=3PECod4ieb
Chronoberg: Capturing Language Evolution And Temporal Awareness In Foundation Models
5
3.5
[ 6, 8, 4, 2 ]
[ 3, 3, 3, 5 ]
4
[ "Large Language Models", "Temporal Generalization", "Lexical Semantic Change", "Continual Learning", "VAD Lexicons" ]
Large language models (LLMs) excel at operating at scale by leveraging social media and various data crawled from the web. While existing corpora are diverse, their frequent lack of long-term temporal structure may limit an LLM's ability to contextualize the semantic and normative evolution of language and to capture diachronic variation. To support analysis and training for the latter, we introduce Chronoberg, a temporally structured corpus of English book texts spanning 250 years, curated from Project Gutenberg and enriched with a variety of temporal annotations. First, the edited nature of books enables us to quantify lexical semantic change through time-sensitive Valence-Arousal-Dominance (VAD) analysis and to construct historically calibrated affective lexicons to support temporally grounded interpretation. With the lexicons at hand, we demonstrate a need for modern LLM-based tools to better situate their detection of discriminatory language and contextualization of sentiment across various time periods. In fact, we show how language models trained sequentially on Chronoberg struggle to encode diachronic shifts in meaning, emphasizing the need for temporally aware training and evaluation pipelines, and positioning Chronoberg as a scalable resource for the study of linguistic change and temporal generalization. $\textcolor{red}{Disclaimer:}$ This paper includes language and display of samples that could be offensive to readers. $\textcolor{blue}{Open Access:}$ Chronoberg will be available publicly on HuggingFace.
datasets and benchmarks
https://openreview.net/pdf?id=3PECod4ieb
2025-09-18T16:47:06
4
[ { "id": "Ek69LlwRXx", "forum": "3PECod4ieb", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission10946/Reviewer_GhzC", "reviewer_name": "Reviewer_GhzC", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "The p...
skR7tTT32C
https://openreview.net/forum?id=skR7tTT32C
KITINet: Kinetics Theory Inspired Network Architectures with PDE Simulation Approaches
3.5
2.75
[ 2, 4, 4, 4 ]
[ 3, 2, 3, 3 ]
4
[ "Physics Inspired Neural Network", "Kinetic Theory" ]
Despite the widely recognized success of residual connections in modern neural networks, their design principles remain largely heuristic. This paper introduces KITINet (KInetics Theory Inspired Network), a framework that reinterprets feature propagation through the lens of non-equilibrium particle dynamics and partial differential equation (PDE) simulation. We propose a new residual module that models feature updates as the stochastic evolution of a particle system, numerically simulated via a discretized solver for the Boltzmann transport equation (BTE). This formulation mimics particle collisions, enabling additional neuron-wise information propagation via physical interactions. Additionally, we reveal that this mechanism is an implicit regularization approach that induces network parameter condensation during training, where parameters progressively concentrate into a sparse subset of dominant channels. Experiments on large language modeling, image classification, scientific computation, and text classification show consistent improvements over classic network baselines, without additional inference cost.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=skR7tTT32C
2025-09-15T16:43:38
4
[ { "id": "HVwTU3UemV", "forum": "skR7tTT32C", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission5656/Reviewer_gGkH", "reviewer_name": "Reviewer_gGkH", "rating": 2, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 1, "summary": "This p...
BS0PhDOaJ7
https://openreview.net/forum?id=BS0PhDOaJ7
CUARewardBench: Benchmark for Evaluating Reward Models on Computer-using Agent Trajectories
4.5
3.5
[ 4, 4, 2, 8 ]
[ 3, 3, 4, 4 ]
4
[ "Computer-using Agent; Reward Models; Benchmark" ]
Computer-using agents (CUAs) enable task completion through natural interaction with operating systems and software interfaces. While script-based verifiers are widely adopted for evaluation, they suffer from limited scalability and inability to provide step-wise assessment. Reward models offer promising alternatives, but their effectiveness on CUA evaluation remains largely underexplored. To address this gap, we present CUARewardBench, comprising four key contributions: (1) First-ever Comprehensive CUA Reward Benchmark: We introduce the first benchmark for evaluating both outcome reward models (ORM) and process reward models (PRM) on CUA tasks, enabling systematic assessment across trajectory-level and step-level evaluation. (2) Diverse and Representative Dataset: Our benchmark encompasses trajectories spanning 10 software categories and collected from 7 agent architectures with varying performance levels (25.9%-50.8% success rates), ensuring comprehensive coverage of CUA decision-making patterns. (3) Expert-Validated Annotations: All trajectories undergo rigorous expert annotation through carefully designed trajectory selection criteria, key step identification protocols, and systematic annotation standards. Expert annotations are validated through comprehensive cross-checking and quality control processes to ensure benchmark reliability and practical applicability. (4) Comprehensive Analysis and Insights: Through extensive experiments across 7 vision-language models and 3 prompt templates, we reveal critical limitations of current CUA RMs, including insufficient visual reasoning capabilities, knowledge deficiencies, and the superiority of general VLMs over specialized CUA models for reward evaluation. Our findings provide practical guidance for future CUA RM development and highlight the potential for advancing evaluation of CUA models.
CUARewardBench: A Benchmark for Evaluating Reward Models on Computer-using Agent
datasets and benchmarks
https://openreview.net/pdf?id=BS0PhDOaJ7
2025-09-18T20:51:56
4
[ { "id": "sD519JfvYn", "forum": "BS0PhDOaJ7", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission11472/Reviewer_R33o", "reviewer_name": "Reviewer_R33o", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "This ...
LNilmuJmF0
https://openreview.net/forum?id=LNilmuJmF0
HEART-ViT: HESSIAN-GUIDED EFFICIENT DYNAMIC ATTENTION AND TOKEN PRUNING IN VISION TRANSFORMERS
2.5
4.75
[ 4, 2, 2, 2 ]
[ 4, 5, 5, 5 ]
4
[ "Vision Transformers (ViTs)", "Dynamic pruning", "Hessian-based sensitivity", "Token and head pruning", "Edge-efficient inference" ]
Vision Transformers (ViTs) deliver state-of-the-art accuracy, but their quadratic attention cost and redundant computations severely hinder deployment on latency- and resource-constrained platforms. Existing pruning approaches treat either tokens or heads in isolation, relying on heuristics or first-order signals, which often sacrifice accuracy or fail to generalize across inputs. We introduce HEART-ViT, a Hessian-guided efficient dynamic attention and token pruning framework for vision transformers, which, to the best of our knowledge, is the first unified, second-order, input-adaptive framework for ViT optimization. HEART-ViT estimates curvature-weighted sensitivities of both tokens and attention heads using efficient Hessian–vector products, enabling principled pruning decisions under explicit loss budgets. This dual-view sensitivity reveals an important structural insight: token pruning dominates computational savings, while head pruning provides fine-grained redundancy removal, and their combination achieves a superior trade-off. On ImageNet-100 and ImageNet-1K with ViT-B/16 and DeiT-B/16, HEART-ViT achieves up to 49.4% FLOPs reduction, 36% lower latency, and 46% higher throughput, while consistently matching or even surpassing baseline accuracy after fine-tuning (e.g., +4.7% recovery at 40% token pruning). Beyond theoretical benchmarks, we deploy HEART-ViT on different edge devices, like AGX Orin, demonstrating that our reductions in FLOPs and latency translate directly into real-world gains in inference speed and energy efficiency. HEART-ViT bridges the gap between theory and practice, delivering the first unified, curvature-driven pruning framework that is both accuracy-preserving and edge-efficient.
optimization
https://openreview.net/pdf?id=LNilmuJmF0
2025-09-20T09:46:54
4
[ { "id": "d3WNWJJmSh", "forum": "LNilmuJmF0", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission22556/Reviewer_8ymd", "reviewer_name": "Reviewer_8ymd", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
W8bKDPf1Ko
https://openreview.net/forum?id=W8bKDPf1Ko
Graph-Theoretic Intrinsic Reward: Guiding RL with Effective Resistance
4.666667
2.666667
[ 8, 4, 2 ]
[ 2, 2, 4 ]
3
[ "Reinforcement Learning", "Intrinsic Motivation", "Goal Conditioned RL", "Effective Resistance" ]
Exploration of dynamic environments with sparse rewards is a significant challenge in Reinforcement Learning, often leading to inefficient exploration and brittle policies. To address this, we introduce a novel graph-based intrinsic reward using Effective Resistance, a metric from spectral graph theory. This reward formulation guides the agent to seek configurations that are directly correlated with successful goal-reaching states. We provide theoretical guarantees, proving that our method not only learns a robust policy but also achieves faster convergence by serving as a variance-reduction baseline to the standard discounted reward formulation. We perform extensive empirical analysis across several challenging environments to show that our approach significantly outperforms state-of-the-art baselines, with improvements of up to 59% in success rate, 56% in timesteps taken to reach the goal, and 4 times more accumulated reward. We augment all of the supporting lemmas and theoretically motivated hyperparameter choices with corresponding experiments.
We propose an intrinsic reward formulation using the notion of Effective Resistance based on spectral graph theory, for learning robust policies in sparse environments.
reinforcement learning
https://openreview.net/pdf?id=W8bKDPf1Ko
2025-09-18T16:01:38
3
[ { "id": "21sJGJg7pw", "forum": "W8bKDPf1Ko", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission10814/Reviewer_FfQG", "reviewer_name": "Reviewer_FfQG", "rating": 8, "confidence": 2, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
kHhMs642rR
https://openreview.net/forum?id=kHhMs642rR
Evaluating SAE interpretability without generating explanations
3.5
3.75
[ 2, 4, 4, 4 ]
[ 3, 5, 3, 4 ]
4
[ "interpretability", "explanation", "sae", "transcoder" ]
Sparse autoencoders (SAEs) and transcoders have become important tools for machine learning interpretability. However, measuring the quality of the features they uncover remains challenging, and there is no consensus in the community about which benchmarks to use. Most evaluation procedures start by producing a single-sentence explanation for each feature in the sparse coder. These explanations are then evaluated based on how well they enable an LLM to predict the activation of a feature in new contexts. This method makes it difficult to disentangle the explanation generation and evaluation process from the actual interpretability of the features in the sparse coder. In this work, we adapt existing methods to assess the interpretability of sparse coders, with the advantage that they do not require generating natural language explanations as an intermediate step. This enables a more direct and potentially standardized assessment of interpretability. Furthermore, we compare the scores produced by our interpretability metrics with human evaluations across similar tasks and varying setups, offering suggestions for the community on improving the evaluation of these techniques.
Instead of evaluating whether explanations match activating contexts, we evaluate how much are activating contexts similar between themselves.
interpretability and explainable AI
https://openreview.net/pdf?id=kHhMs642rR
2025-09-19T01:24:48
4
[ { "id": "UhdnwsXEao", "forum": "kHhMs642rR", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission13305/Reviewer_VHFv", "reviewer_name": "Reviewer_VHFv", "rating": 2, "confidence": 3, "soundness": 2, "contribution": 1, "presentation": 2, "summary": "The a...
gyXfJUcR72
https://openreview.net/forum?id=gyXfJUcR72
Memory-Efficient LLM Pretraining via Minimalist Optimizer Design
5.5
3.75
[ 6, 4, 4, 8 ]
[ 4, 5, 3, 3 ]
4
[ "LLM Training", "Optimizer", "Efficiency" ]
Training large language models (LLMs) typically relies on adaptive optimizers such as Adam, which introduce extra operations and require significantly more memory than SGD to maintain first- and second-order moments. While recent works such as GaLore, Fira and APOLLO have proposed state-compressed variants to reduce memory consumption, a fundamental question remains: *What are the minimum modifications to plain SGD needed to match state-of-the-art pretraining performance?* We systematically investigate this question using a bottom-up approach, and identify two simple yet highly (memory- and compute-) efficient techniques: (1) column-wise gradient normalization (normalizing the gradient along the output dimension), which boosts SGD performance without momentum; and (2) applying first-order momentum only to the output layer, where gradient variance is highest. Combining these two techniques leads to SCALE (Stochastic Column-normAlized Last-layer momEntum), a simple optimizer for memory-efficient pretraining. Across multiple LLaMA models (60M–1B), SCALE matches or exceeds the performance of Adam while using only 35–45\% of the total memory. It also consistently outperforms memory-efficient optimizers such as GaLore, Fira and APOLLO, making it a strong candidate for large-scale pretraining under memory constraints. For the LLaMA 7B model, SCALE outperforms the state-of-the-art memory-efficient methods APOLLO and Muon, in terms of both perplexity and memory consumption.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=gyXfJUcR72
2025-09-18T23:20:00
4
[ { "id": "fYToZ1qWt1", "forum": "gyXfJUcR72", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission12587/Reviewer_Cu6E", "reviewer_name": "Reviewer_Cu6E", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
roYDAg8Hve
https://openreview.net/forum?id=roYDAg8Hve
How private is diffusion-based sampling?
4
3.333333
[ 6, 4, 2 ]
[ 4, 3, 3 ]
3
[ "differential privacy", "diffusion-based sampling", "gaussian differential privacy", "EDM" ]
Diffusion models have emerged as the foundation of modern generative systems, yet their high memorization capacity raises privacy concerns. While differentially private (DP) training provides formal guarantees, it remains impractical for large-scale diffusion models. In this work, we take a different route by analyzing privacy leakage during the sampling process. We introduce an empirical denoiser that enables tractable computation of per-step sensitivities, allowing each denoising step to be interpreted as a Gaussian mechanism. Building on this perspective, we apply Gaussian Differential Privacy (GDP) to derive tight privacy bounds. Furthermore, we identify critical windows in the denoising trajectory—time steps where salient semantic features emerge—and quantify how privacy loss depends on stopping relative to these windows. Our study provides the first systematic characterization of privacy guarantees in diffusion sampling, offering a principled foundation for designing privacy-preserving generative pipelines beyond DP training.
We provide a systematic privacy analysis of diffusion sampling by modeling each step with Gaussian DP and analyzing their total privacy composition.
alignment, fairness, safety, privacy, and societal considerations
https://openreview.net/pdf?id=roYDAg8Hve
2025-09-13T05:38:07
3
[ { "id": "K9Ka8N3sG6", "forum": "roYDAg8Hve", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission4582/Reviewer_Ln5k", "reviewer_name": "Reviewer_Ln5k", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "The pa...
3NLF20wthr
https://openreview.net/forum?id=3NLF20wthr
3S-Attack: Spatial, Spectral and Semantic Invisible Backdoor Attack Against DNN Models
4
4
[ 6, 4, 4, 2 ]
[ 4, 4, 3, 5 ]
4
[ "Artificial intelligence Security", "Backdoor attack", "Deep neural network", "DCT transform" ]
Backdoor attacks implant hidden behaviors into models by poisoning training data or modifying the model directly. These attacks aim to maintain high accuracy on benign inputs while causing misclassification when a specific trigger is present. While existing studies have explored stealthy triggers in spatial and spectral domains, few incorporate the semantic domain. In this paper, we propose 3S-attack, a novel backdoor attack that is stealthy across the spatial, spectral, and semantic domains. The key idea is to exploit the semantic features of benign samples as triggers, using Gradient-weighted Class Activation Mapping (Grad-CAM) and a preliminary model for extraction. We then embed the trigger in the spectral domain, followed by pixel-level restrictions in the spatial domain. This process minimizes the distance between poisoned and benign samples, making the attack harder to detect by existing defenses and human inspection. It also exposes a vulnerability at the intersection of robustness and semantic interpretability, revealing that models can be manipulated to act in semantically consistent yet malicious ways. Extensive experiments on various datasets, along with theoretical analysis, demonstrate the stealthiness of 3S-attack and highlight the need for stronger defenses to ensure AI security.
This paper proposes a novel backdoor attack that is stealthy in spatial, spectral, and semantic domains against DNN models
alignment, fairness, safety, privacy, and societal considerations
https://openreview.net/pdf?id=3NLF20wthr
2025-09-01T23:35:06
4
[ { "id": "8PoeupbHwC", "forum": "3NLF20wthr", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission461/Reviewer_2KfJ", "reviewer_name": "Reviewer_2KfJ", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 1, "summary": "This pa...
yhEi1aeWCQ
https://openreview.net/forum?id=yhEi1aeWCQ
NUMBER REPRESENTATIONS IN LLMS: A COMPUTATIONAL PARALLEL TO HUMAN PERCEPTION
5.333333
3.333333
[ 4, 8, 4 ]
[ 3, 4, 3 ]
3
[ "Natural Logarithmic", "Number line", "LLM", "representations", "embeddings" ]
Humans are believed to perceive numbers on a logarithmic mental number line, where smaller values are represented with greater resolution than larger ones. This cognitive bias, supported by neuroscience and behavioral studies, suggests that numerical magnitudes are processed in a sublinear fashion rather than on a uniform linear scale. Inspired by this hypothesis, we investigate whether large language models (LLMs) exhibit a similar logarithmic-like structure in their internal numerical representations. By analyzing how numerical values are encoded across different layers of LLMs, we apply dimensionality reduction techniques such as PCA and PLS followed by geometric regression to uncover latent structures in the learned embeddings. Our findings reveal that the model’s numerical representations exhibit sublinear spacing, with distances between values aligning with a logarithmic scale. This suggests that LLMs, much like humans, may encode numbers in a compressed, non-uniform manner.
interpretability and explainable AI
https://openreview.net/pdf?id=yhEi1aeWCQ
2025-09-16T14:58:47
3
[ { "id": "sogLT89O9T", "forum": "yhEi1aeWCQ", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission7044/Reviewer_W9st", "reviewer_name": "Reviewer_W9st", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "This p...
4MTFyYOsWJ
https://openreview.net/forum?id=4MTFyYOsWJ
High Probability Streaming Lower Bounds for $F_2$ Estimation
4
3.75
[ 4, 4, 4, 4 ]
[ 3, 4, 4, 4 ]
4
[ "sketching", "streaming", "dimensionality reduction" ]
A recent paper of Braverman and Zamir [BZ'24] gave a lower bound of $\Omega(\frac{1}{\epsilon^2}\log n)$ for estimating the $F_2$ moment of a stream to within $1 \pm \epsilon$ multiplicative error, resolving the complexity of $F_2$ estimation for constant failure probability $\delta$ in the insertion-only model. We show that their argument can be adapted to achieve tight dependence on the failure probability $\delta$. Our key step is to replace the "Exam Set Disjointness" problem used in [BZ'24] with a robust version that we call "Exam Mostly Frequency" (EMostlyFreq). This is the exam version of the communication problem underlying the high-probability analysis introduced in [Kamath, Price, Woodruff '21]. We prove a tight lower bound of $\Omega(\frac{1}{\epsilon^2} \log(\frac{\epsilon\sqrt{n}}{\log(1/\delta)}) \log(1/\delta))$ for $F_2$ estimation.
other topics in machine learning (i.e., none of the above)
https://openreview.net/pdf?id=4MTFyYOsWJ
2025-09-20T02:05:58
4
[ { "id": "namxX3MRJo", "forum": "4MTFyYOsWJ", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission20365/Reviewer_zBJb", "reviewer_name": "Reviewer_zBJb", "rating": 4, "confidence": 3, "soundness": 4, "contribution": 3, "presentation": 2, "summary": "This ...
wVBVa09JVV
https://openreview.net/forum?id=wVBVa09JVV
Don't Guess the Future, Find the Bottleneck: Spectral Subgoals for Offline Goal-Conditioned RL
4.5
4
[ 4, 6, 4, 4 ]
[ 5, 3, 4, 4 ]
4
[ "offline goal conditional reinforcement learning" ]
Offline goal-conditioned RL (OGCRL) learns to reach arbitrary goals from an offline dataset, but long-horizon performance hinges on crossing a handful of hard-to-cross bottlenecks. These bottlenecks not only dictate the feasible paths toward the goal but also act as critical keypoints, marking the transitions between adjacent regions and providing the agent with essential directional guidance. Prior hierarchical methods pick subgoals by time or short-horizon value heuristics, which do not localize the bottleneck; as a result, the agent loses the clear guidance that bottlenecks could provide about where to pass next. We instead model long-horizon planning as “cross the next bottleneck”: we apply Laplacian spectral clustering to the offline dataset to expose bottlenecks, identify trajectories from the offline dataset that cross these boundaries, and define the intersections as keypoints (KPs). The most representative KPs are then automatically selected, and a directed KP reachability graph $\mathcal G_{\mathrm{KP}}$ is constructed from the selected KPs. We then restrict high-level choices to these bottleneck states and use a pluggable low-level controller to execute the short transitions between them. We provide theory showing that the next bottleneck is the optimal one-step subgoal and that Laplacian spectra recover bottlenecks with high overlap. Thus, Laplacian spectral clustering can discover approximately optimal subgoals. Empirically, the same pattern holds: across D4RL and OGBench, our method achieves state-of-the-art results on a broad set of navigation and manipulation tasks and across diverse dataset regimes, for example, **96.5\%** on **AntMaze** and **84.5\%** on **Franka-Kitchen**.
reinforcement learning
https://openreview.net/pdf?id=wVBVa09JVV
2025-09-19T14:50:51
4
[ { "id": "eqyrqwTqSG", "forum": "wVBVa09JVV", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission16443/Reviewer_PGk3", "reviewer_name": "Reviewer_PGk3", "rating": 4, "confidence": 5, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "* Thi...
yzwSzhqLpH
https://openreview.net/forum?id=yzwSzhqLpH
Entropy-Guided Dynamic Tokens for Graph-LLM Alignment in Molecular Understanding
4
4
[ 2, 4, 6, 4 ]
[ 4, 4, 5, 3 ]
4
[ "Multimodal Modeling", "Graph–LLM Alignment", "Molecule Understanding", "Backbone-Free Tuning" ]
Molecular understanding is central to advancing areas such as science and drug discovery, yet large language models (LLMs) struggle to understand molecular graphs effectively. Existing graph–LLM bridges often adapt a Q-Former–style connector with fixed-length static tokens originally designed for vision tasks. These designs overlook stereochemistry and substructural context and typically require costly LLM-backbone fine-tuning, limiting efficiency and generalization. We introduce EDT-Former, an Entropy-guided Dynamic Token Transformer that generates tokens aligned with informative molecular patches, preserving both local and global structural features for molecular graph understanding. Beyond prior approaches, EDT-Former enables alignment between frozen graph encoders and LLMs without tuning the LLM backbone, resulting in computationally efficient fine-tuning, and it achieves state-of-the-art results on the MoleculeQA and Mol-Instructions benchmarks, underscoring its effectiveness for scalable and generalizable multimodal molecular understanding.
EDT-Former: entropy-guided dynamic query tokens map molecular graphs to LLMs, capturing local and global structure features for comprehensive understanding and reasoning with backbone-free, connector-only training.
applications to physical sciences (physics, chemistry, biology, etc.)
https://openreview.net/pdf?id=yzwSzhqLpH
2025-09-19T12:09:39
4
[ { "id": "N3GDvggGsY", "forum": "yzwSzhqLpH", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission15761/Reviewer_y4JF", "reviewer_name": "Reviewer_y4JF", "rating": 2, "confidence": 4, "soundness": 1, "contribution": 2, "presentation": 2, "summary": "The p...
57YfUhcYXd
https://openreview.net/forum?id=57YfUhcYXd
Eliminating Inductive Bias in Reward Models with Information-Theoretic Guidance
5.5
3.5
[ 6, 6, 4, 6 ]
[ 4, 3, 3, 4 ]
4
[ "LLM", "RLHF", "Reward Hacking", "Debias" ]
Reward models (RMs) are crucial in reinforcement learning from human feedback (RLHF) to align large language models (LLMs) with human values. However, RM training data is commonly recognized as low-quality, always containing preference conflicts and inductive biases, such as response length or speaking style, which can easily lead to reward overfitting and hacking. A few recent RM debiasing methods either target merely a single specific type of preference bias or only address simple linear bias relations such as Pearson coefficients. To mitigate more complicated inductive bias of reward modeling, inspired by the information bottleneck, we introduce a novel information-theoretic debiasing method called **D**ebiasing via **I**nformation optimization for **R**M (DIR). More specifically, our method trains RMs by maximizing the mutual information (MI) between preference prediction and input response pairs, while minimizing the MI between RM outputs and biased attributes of preference inputs. With the theoretical justification of information theory, DIR can handle different types of bias with more comprehensive non-linear correlations, enlarging its real-world application scenarios. In experiments, we verify the effectiveness of DIR with three types of inductive biases: response length, sycophancy, and format. Based on the numerical results, we discover that DIR can not only effectively diminish target inductive biases but also improve RLHF performances on various benchmarks with better generalization abilities.
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
https://openreview.net/pdf?id=57YfUhcYXd
2025-09-19T10:52:36
4
[ { "id": "EGtAa7csX8", "forum": "57YfUhcYXd", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission15332/Reviewer_cNct", "reviewer_name": "Reviewer_cNct", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
ky5iqwZSXI
https://openreview.net/forum?id=ky5iqwZSXI
Reliable Fine-Grained Evaluation of Natural Language Math Proofs
5
4
[ 4, 6, 6, 4 ]
[ 3, 4, 4, 5 ]
4
[ "automated proof evaluation", "LLM-as-a-judge", "LLM-generated math proofs", "rubric-guided grading", "prompt optimization", "expert-annotated proof dataset", "evaluator reliability", "reward modeling" ]
Recent advances in large language models (LLMs) for mathematical reasoning have largely focused on tasks with easily verifiable final answers while generating and verifying natural language math proofs remains an open challenge. We identify the absence of a reliable, fine-grained evaluator for LLM-generated math proofs as a critical gap. To address this, we propose a systematic methodology for developing and validating evaluators that assign fine-grained scores on a 0–7 scale. Our approach first constructs a carefully designed, problem-specific marking scheme, and then uses it as a foundation to systematically study other key design choices, including the backbone model, additional context, instruction sets, and evaluation workflows. To enable this study, we introduce ProofBench, the first expert-annotated dataset of fine-grained proof ratings, spanning 131 problems from major math competitions and 393 LLM-generated solutions (from o3, Gemini 2.5 Pro, and DeepSeek-R1) with expert gradings. Our evaluation shows that a strong reasoning backbone, a detailed marking scheme, and simple ensembling are crucial for high performance. This leads to our best evaluator, ProofGrader, which achieves an RMSE of 1.093 compared to expert grading, significantly outperforming simpler baselines. Furthermore, to demonstrate its practical utility, we test ProofGrader as a reward model in a best-of-$n$ selection task. At $n=8$, it achieves an average score of 4.05/7, bridging more than 90\% of the performance gap between a naive binary evaluator (2.59) and the human oracle (4.21), underscoring its potential to improve downstream proof generation.
LLMs lack reliable proof evaluators. We introduce ProofBench and a 0–7 methodology; our ProofGrader (marking schemes + ensembling) hits RMSE 1.093 vs experts and lifts best-of-8 to 4.05/7, closing >90% of the gap to a human oracle.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=ky5iqwZSXI
2025-09-20T16:56:24
4
[ { "id": "TAq6T2iuEB", "forum": "ky5iqwZSXI", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission24615/Reviewer_HNs6", "reviewer_name": "Reviewer_HNs6", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 3, "presentation": 2, "summary": "This ...
kYkfCs4ZAH
https://openreview.net/forum?id=kYkfCs4ZAH
FlexiCodec: A Dynamic Neural Audio Codec for Low Frame Rates
5.666667
3.833333
[ 6, 8, 2, 4, 6, 8 ]
[ 5, 3, 4, 3, 4, 4 ]
6
[ "Audio coding", "neural audio codecs", "speech language model" ]
Neural audio codecs are foundational to speech language models. An ideal codec has a low frame rate and decouples semantic from acoustic information. A lower frame rate codec can reduce the computational cost of speech language models by shortening the sequence length. Recent studies have developed 12.5Hz low-frame-rate audio codecs, but even lower frame rate codecs remain underexplored. We find that a major challenge for very low frame rate tokens is missing semantic information. This paper introduces **FlexiCodec** to address this limitation. FlexiCodec improves semantic preservation with a **dynamic frame rate** approach and introduces a novel architecture featuring an **ASR feature-assisted dual stream** encoding and Transformer bottlenecks. With dynamic frame rates, it uses fewer frames in information-sparse regions by adaptively merging semantically similar frames. A dynamic frame rate also allows FlexiCodec to support inference-time **controllable frame rates** between 3Hz and 12.5Hz. Experiments on **6.25Hz, 8.3Hz and 12.5Hz** average frame rates confirm that FlexiCodec outperforms baseline systems in semantic information preservation and delivers high audio reconstruction quality. We also validate the effectiveness of FlexiCodec in language model-based TTS. Demos are available at: https://flexicodec.github.io.
generative models
https://openreview.net/pdf?id=kYkfCs4ZAH
2025-09-17T14:34:23
6
[ { "id": "tLZkJL4mwe", "forum": "kYkfCs4ZAH", "review_number": 6, "reviewer_id": "ICLR.cc/2026/Conference/Submission8557/Reviewer_PDhk", "reviewer_name": "Reviewer_PDhk", "rating": 6, "confidence": 5, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "This p...
t9cOXsdpKg
https://openreview.net/forum?id=t9cOXsdpKg
What Matters in Deep Learning for Time Series Forecasting?
3.333333
3.333333
[ 4, 2, 4 ]
[ 4, 3, 3 ]
3
[ "time series forecasting", "architecture design", "deep learning" ]
Deep learning models have grown increasingly popular in time series applications. However, the large quantity of newly proposed architectures, together with often contradictory empirical results, makes it difficult to assess which components contribute significantly to final performance. We aim to make sense of the current design space of deep learning architectures for time series forecasting by discussing the design dimensions and trade-offs that can explain, often unexpected, observed results. We discuss the necessity of grounding model design on principles for forecasting groups of time series and how such principles can be applied to current models. In particular, we assess how concepts such as locality and globality apply to recent forecasting architectures. We show that accounting for these aspects can be more relevant for achieving accurate results than adopting specific sequence modeling layers and that simple, well-designed forecasting architectures can often match the state of the art. We discuss how overlooked implementation details in existing architectures (1) fundamentally change the class of the resulting forecasting method and (2) drastically affect the observed empirical results. Our results call for rethinking current faulty benchmarking practices and for the need to focus on the foundational aspects of the forecasting problem when designing neural network architectures. As a step in this direction, we also propose an auxiliary forecasting model card, i.e., a template with a set of fields to characterize existing and new forecasting architectures based on key design choices.
learning on time series and dynamical systems
https://openreview.net/pdf?id=t9cOXsdpKg
2025-09-19T22:25:22
3
[ { "id": "GTcWDCw15M", "forum": "t9cOXsdpKg", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission18911/Reviewer_eXVU", "reviewer_name": "Reviewer_eXVU", "rating": 4, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "The p...
c1jWNZ1Zqg
https://openreview.net/forum?id=c1jWNZ1Zqg
Variational Inference for Cyclic Learning
6.666667
3.333333
[ 4, 8, 8 ]
[ 4, 2, 4 ]
3
[ "Cyclic Learning", "Self-supervised Learning" ]
Cyclic learning, which involves training with pairs of inverse tasks and utilizes cycle-consistency in the design of loss functions, has emerged as a powerful paradigm for weakly-supervised learning. However, its potential remains under-explored due to the current methods’ narrow focus on domain-specific implementations. In this work, we develop generalized solutions for both pairwise cycle-consistent tasks and self-cycle-consistent tasks. By formulating cross-domain mappings as conditional probability functions, we reformulate the cycle-consistency objective as an evidence lower bound optimization problem via variational inference. Based on this formulation, we further propose two training strategies for arbitrary cyclic learning tasks: single-step optimization and alternating optimization. Our framework demonstrates broad applicability across diverse tasks. In unpaired image translation, it not only provides a theoretical justification for CycleGAN but also leads to CycleGN—a competitive GAN-free alternative. For unsupervised tracking, CycleTrack and CycleTrack-EM achieve state-of-the-art performance on multiple benchmarks. This work establishes the theoretical foundations of cyclic learning and offers a general paradigm for future research.
learning theory
https://openreview.net/pdf?id=c1jWNZ1Zqg
2025-09-20T00:01:14
3
[ { "id": "YSmWdo0wia", "forum": "c1jWNZ1Zqg", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission19614/Reviewer_uEwd", "reviewer_name": "Reviewer_uEwd", "rating": 4, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "This ...
t7wIerUT2E
https://openreview.net/forum?id=t7wIerUT2E
Controllable diffusion-based generation for multi-channel biological data
3.5
3.25
[ 8, 0, 2, 4 ]
[ 3, 4, 3, 3 ]
4
[ "diffusion model", "conditional imputation", "channel attention", "random-masking guidance", "imaging mass cytometry" ]
Biological profiling technologies, such as imaging mass cytometry (IMC) and spatial transcriptomics (ST), generate multi-channel data with strong spatial alignment and complex inter-channel relationships. Modeling such data requires generative frameworks that can jointly model spatial structure and channel relationships, while also generalizing across arbitrary combinations of observed and missing channels for practical applications. Existing generative models typically assume low-dimensional inputs (e.g., RGB images) and rely on simple conditioning mechanisms that break spatial correspondence and overlook inter-channel dependencies. This work proposes a unified multi-channel diffusion (MCD) framework for controllable generation of structured biological data with intricate inter-channel relationships. Our model introduces two key innovations: (1) a hierarchical feature injection mechanism that enables multi-resolution conditioning on spatially aligned observed channels, and (2) two complementary channel attention modules to capture inter-channel relationships and recalibrate latent features. To support flexible conditioning and generalization to arbitrary sets of observed channels, we train the model using a random channel masking strategy, enabling it to reconstruct missing channels from any combination of observed channels as the spatial condition. We demonstrate state-of-the-art performance across both spatial and non-spatial biological data generation tasks, including imputation in spatial proteomics and clinical imaging, as well as gene-to-protein prediction in single-cell datasets, and show strong generalizability to unseen conditional configurations.
applications to physical sciences (physics, chemistry, biology, etc.)
https://openreview.net/pdf?id=t7wIerUT2E
2025-09-19T07:18:46
4
[ { "id": "KhQmNvvvYH", "forum": "t7wIerUT2E", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission14506/Reviewer_WNmC", "reviewer_name": "Reviewer_WNmC", "rating": 8, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "The a...
lnTX3GoeTY
https://openreview.net/forum?id=lnTX3GoeTY
Feature segregation by signed weights in artificial vision systems and biological models
4.5
3.5
[ 6, 4, 6, 2 ]
[ 3, 4, 4, 3 ]
4
[ "ventral stream", "circuit mechanisms", "interpretability", "deep learning", "visual system", "excitation inhibition", "neuroscience", "closed-loop optimization", "ablation" ]
A core principle in both artificial and biological intelligence is the use of signed connections: positive and negative weights in artificial networks, and excitatory and inhibitory synapses in the brain. While both systems develop representations for diverse tasks, it is unclear whether positive and negative signals serve distinct representational roles or whether all representations require a balanced mixture of both. This is a fundamental question for mechanistic interpretability in neuroscience and AI. Here, we investigate how signed weights shape visual representations in artificial and biological systems involved in object recognition. In ImageNet-trained neural networks, ablation and feature visualization reveal that removing positive inputs disrupts object features, while removing negative inputs preserves foreground representations but affects background textures. This segregation is more pronounced in adversarially robust models, persists with unsupervised learning, and vanishes with non-rectified activations. To better approximate the excitation versus inhibition segregation observed in biology (Dale’s law), we identified channels that projected predominantly positive or negative weights to the next layer. In early and intermediate layers, positive-projecting channels encode localized, object-like features, while negative-projecting channels encode more dispersed, background-like features. Motivated by these findings, we performed feature visualization in vivo in neurons in monkey visual cortex, across the ventral stream (V1, V4, and IT). We also fitted linear models using the input layer to classification units studied in ANNs that contained features akin to those preferred by the biological neurons. We replicated ablation experiments in these model neuron units and found, as with class units, that removing positive inputs altered representations more than removing negative ones.
Notably, some units closely approached Dale's law: the positively projecting units exhibited localized features, while the negatively projecting units showed larger, more dispersed features. Furthermore, we increased in vivo neuron responses by clearing the image background around the preferred feature, likely by reducing inhibitory inputs, providing concrete predictions for circuit neuroscientists to test. Our results demonstrate that both artificial and biological vision systems segregate features by weight sign: positive weights emphasize objects, negative weights encode context. This emergent organization offers a new perspective on interpretability and the convergence of representational strategies in brains and machines, with important predictions for visual neuroscience.
Neural networks trained on ImageNet segregate the object/foreground features of their output layer to the positive input weights, with similar behavior in visual neurons.
interpretability and explainable AI
https://openreview.net/pdf?id=lnTX3GoeTY
2025-09-20T05:45:11
4
[ { "id": "TXYCdLoNHd", "forum": "lnTX3GoeTY", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission21502/Reviewer_f5Cb", "reviewer_name": "Reviewer_f5Cb", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "Acros...
6lobo2PXdl
https://openreview.net/forum?id=6lobo2PXdl
The Power of Small Initialization in Noisy Low-Tubal-Rank Tensor Recovery
4.5
3.5
[ 4, 6, 2, 6 ]
[ 3, 4, 4, 3 ]
4
[ "low-tubal-rank tensor recovery", "t-SVD", "t-product", "over-parameterization", "non-convex" ]
We study the problem of recovering a low-tubal-rank tensor $\mathcal{X}_\star \in \mathbb{R}^{n \times n \times k}$ from noisy linear measurements under the t-product framework. A widely adopted strategy involves factorizing the optimization variable as $\mathcal{U} * \mathcal{U}^\top$, where $\mathcal{U} \in \mathbb{R}^{n \times R \times k}$, followed by applying factorized gradient descent (FGD) to solve the resulting optimization problem. Since the tubal-rank $r$ of the underlying tensor $\mathcal{X}_\star$ is typically unknown, this method often assumes $r < R \le n$, a regime known as over-parameterization. However, when the measurements are corrupted by some dense noise (e.g., sub-Gaussian noise), FGD with the commonly used spectral initialization yields a recovery error that grows linearly with the over-estimated tubal-rank $R$. To address this issue, we show that using a small initialization enables FGD to achieve a nearly minimax optimal recovery error, even when the tubal-rank $R$ is significantly overestimated. Using a four-stage analytic framework, we analyze this phenomenon and establish the sharpest known error bound to date, which is independent of the overestimated tubal-rank $R$. Furthermore, we provide a theoretical guarantee showing that an easy-to-use early stopping strategy can achieve the best known result in practice. All these theoretical findings are validated through a series of simulations and real-data experiments.
For the noisy low-tubal-rank tensor recovery problem, we show that factorized gradient descent with small initialization converges to nearly the minimax optimal error.
optimization
https://openreview.net/pdf?id=6lobo2PXdl
2025-09-15T23:18:03
4
[ { "id": "KTtam0RrgL", "forum": "6lobo2PXdl", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission5985/Reviewer_Zuao", "reviewer_name": "Reviewer_Zuao", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This p...