Volume 280: Conference on Parsimony and Learning, 24-27 March 2025, Stanford University, USA

Editors: Beidi Chen, Shijia Liu, Mert Pilanci, Weijie Su, Jeremias Sulam, Yuxiang Wang, Zhihui Zhu

Approximate Nullspace Augmented Finetuning for Robust Vision Transformers

Haoyang Liu, Aditya Singh, Yijiang Li, Haohan Wang; Conference on Parsimony and Learning, PMLR 280:1-23

Fast John Ellipsoid Computation with Differential Privacy Optimization

Xiaoyu Li, Yingyu Liang, Zhenmei Shi, Zhao Song, Junwei Yu; Conference on Parsimony and Learning, PMLR 280:24-64

Large-Scale Multiway Clustering with Seeded Clustering

Jiaxin Hu; Conference on Parsimony and Learning, PMLR 280:65-88

Learning of Patch-Based Smooth-Plus-Sparse Models for Image Reconstruction

Stanislas Ducotterd, Sebastian Neumayer, Michael Unser; Conference on Parsimony and Learning, PMLR 280:89-104

HSR-Enhanced Sparse Attention Acceleration

Bo Chen, Yingyu Liang, Zhizhou Sha, Zhenmei Shi, Zhao Song; Conference on Parsimony and Learning, PMLR 280:105-133

AdaProx: A Novel Method for Bilevel Optimization under Pessimistic Framework

Ziwei Guan, Daouda Sow, Sen Lin, Yingbin Liang; Conference on Parsimony and Learning, PMLR 280:134-164

A Case Study of Low Ranked Self-Expressive Structures in Neural Network Representations

Uday Singh Saini, William Shiao, Yahya Sattar, Yogesh Dahiya, Samet Oymak, Evangelos E. Papalexakis; Conference on Parsimony and Learning, PMLR 280:165-236

Do Global and Local Perform Cooperatively or Adversarially in Heterogeneous Federated Learning?

Huiwen Wu, Shuo Zhang; Conference on Parsimony and Learning, PMLR 280:237-254

Heterogeneous Decision Making in Mixed Traffic: Uncertainty-aware Planning and Bounded Rationality

Hang Wang, Qiaoyi Fang, Junshan Zhang; Conference on Parsimony and Learning, PMLR 280:255-277

Adaptive Batch Size Schedules for Distributed Training of Language Models with Data and Model Parallelism

Tim Tsz-Kit Lau, Weijian Li, Chenwei Xu, Han Liu, Mladen Kolar; Conference on Parsimony and Learning, PMLR 280:278-304

Revisiting the Initial Steps in Adaptive Gradient Descent Optimization

Abulikemu Abuduweili, Changliu Liu; Conference on Parsimony and Learning, PMLR 280:305-322

A Validation Approach to Over-parameterized Matrix and Image Recovery

Lijun Ding, Zhen Qin, Liwei Jiang, Jinxin Zhou, Zhihui Zhu; Conference on Parsimony and Learning, PMLR 280:323-350

Dual Reasoning: A GNN-LLM Collaborative Framework for Knowledge Graph Question Answering

Guangyi Liu, Yongqi Zhang, Yong Li, Quanming Yao; Conference on Parsimony and Learning, PMLR 280:351-372

Dimension Mixer: Group Mixing of Input Dimensions for Efficient Function Approximation

Suman Sapkota, Binod Bhattarai; Conference on Parsimony and Learning, PMLR 280:373-391

Provable Model-Parallel Distributed Principal Component Analysis with Parallel Deflation

Fangshuo Liao, Wenyi Su, Anastasios Kyrillidis; Conference on Parsimony and Learning, PMLR 280:392-416

Meta ControlNet: Enhancing Task Adaptation via Meta Learning

Junjie Yang, Jinze Zhao, Peihao Wang, Zhangyang Wang, Yingbin Liang; Conference on Parsimony and Learning, PMLR 280:417-432

Concept Bottleneck Model with Zero Performance Loss

Zhenzhen Wang, Aleksander Popel, Jeremias Sulam; Conference on Parsimony and Learning, PMLR 280:433-461

FedPeWS: Personalized Warmup via Subnetworks for Enhanced Heterogeneous Federated Learning

Nurbek Tastan, Samuel Horváth, Martin Takáč, Karthik Nandakumar; Conference on Parsimony and Learning, PMLR 280:462-483

A unified framework for Sparse plus Low-Rank Matrix Decomposition for LLMs

Mehdi Makni, Kayhan Behdin, Zheng Xu, Natalia Ponomareva, Rahul Mazumder; Conference on Parsimony and Learning, PMLR 280:484-499

Greedy Output Approximation: Towards Efficient Structured Pruning for LLMs Without Retraining

Jianwei Li, Yijun Dong, Qi Lei; Conference on Parsimony and Learning, PMLR 280:500-520

MoXCo: How I learned to stop exploring and love my local minima?

Esha Singh, Shoham Sabach, Yu-Xiang Wang; Conference on Parsimony and Learning, PMLR 280:521-544

Unlock the Theory behind Scaling 1-bit Neural Networks

Majid Daliri, Zhao Song, Chiwun Yang; Conference on Parsimony and Learning, PMLR 280:545-598

Bridging Domain Adaptation and Graph Neural Networks: A Tensor-Based Framework for Effective Label Propagation

Tao Wen, Elynn Chen, Yuzhou Chen, Qi Lei; Conference on Parsimony and Learning, PMLR 280:599-614

Theoretical and Empirical Advances in Forest Pruning

Albert Dorador; Conference on Parsimony and Learning, PMLR 280:615-651

Asymptotic Behavior of the Coordinate Ascent Variational Inference in Singular Models

Sean C Plummer, Anirban Bhattacharya, Debdeep Pati, Yun Yang; Conference on Parsimony and Learning, PMLR 280:652-674

Curse of Attention: A Kernel-Based Perspective for Why Transformers Fail to Generalize on Time Series Forecasting and Beyond

Yekun Ke, Yingyu Liang, Zhenmei Shi, Zhao Song, Chiwun Yang; Conference on Parsimony and Learning, PMLR 280:675-738

The Computational Limits of State-Space Models and Mamba via the Lens of Circuit Complexity

Yifang Chen, Xiaoyu Li, Yingyu Liang, Zhenmei Shi, Zhao Song; Conference on Parsimony and Learning, PMLR 280:739-767

Grouped Sequential Optimization Strategy - the Application of Hyperparameter Importance Assessment in Deep Learning

Ruinan Wang, Ian T. Nabney, Mohammad Golbabaee; Conference on Parsimony and Learning, PMLR 280:768-779

You Only Debias Once: Towards Flexible Accuracy-Fairness Trade-offs at Inference Time

Xiaotian Han, Tianlong Chen, Kaixiong Zhou, Zhimeng Jiang, Zhangyang Wang, Xia Hu; Conference on Parsimony and Learning, PMLR 280:780-809

Improving Neuron-level Interpretability with White-box Language Models

Hao Bai, Yi Ma; Conference on Parsimony and Learning, PMLR 280:810-836

Quantum EigenGame for excited state calculation

David A. Quiroga, Jason Han, Anastasios Kyrillidis; Conference on Parsimony and Learning, PMLR 280:837-864

Adversarially Robust Spiking Neural Networks with Sparse Connectivity

Mathias Schmolli, Maximilian Baronig, Robert Legenstein, Ozan Ozdenizci; Conference on Parsimony and Learning, PMLR 280:865-883

Taming Sensitive Weights: Noise Perturbation Fine-tuning for Robust LLM Quantization

Dongwei Wang, Huanrui Yang; Conference on Parsimony and Learning, PMLR 280:884-896

RecCrysFormer: Refined Protein Structural Prediction from 3D Patterson Maps via Recycling Training Runs

Tom Pan, Evan Dramko, Mitchell D. Miller, George N Phillips Jr., Anastasios Kyrillidis; Conference on Parsimony and Learning, PMLR 280:897-912

Learning Effective Dynamics across Spatio-Temporal Scales of Complex Flows

Han Gao, Sebastian Kaltenbach, Petros Koumoutsakos; Conference on Parsimony and Learning, PMLR 280:913-931

Fast and Efficient Matching Algorithm with Deadline Instances

Zhao Song, Weixin Wang, Chenbo Yin, Junze Yin; Conference on Parsimony and Learning, PMLR 280:932-959

Closure Discovery for Coarse-Grained Partial Differential Equations Using Grid-based Reinforcement Learning

Jan-Philipp von Bassewitz, Sebastian Kaltenbach, Petros Koumoutsakos; Conference on Parsimony and Learning, PMLR 280:960-984

FedOSAA: Improving Federated Learning with One-Step Anderson Acceleration

Xue Feng, M. Paul Laiu, Thomas Strohmer; Conference on Parsimony and Learning, PMLR 280:985-1006

Enhancing Video Representation Learning with Temporal Differentiation

Siyi Chen, Minkyu Choi, Zesen Zhao, Kuan Han, Qing Qu, Zhongming Liu; Conference on Parsimony and Learning, PMLR 280:1007-1034

Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients

Zhenyu Zhang, Ajay Kumar Jaiswal, Lu Yin, Shiwei Liu, Jiawei Zhao, Yuandong Tian, Zhangyang Wang; Conference on Parsimony and Learning, PMLR 280:1035-1050

Vanishing Feature: Diagnosing Model Merging and Beyond

Xingyu Qu, Samuel Horváth; Conference on Parsimony and Learning, PMLR 280:1051-1086

Exact and Rich Feature Learning Dynamics of Two-Layer Linear Networks

Wei Huang, Wuyang Chen, Zhiqiang Xu, Zhangyang Wang, Taiji Suzuki; Conference on Parsimony and Learning, PMLR 280:1087-1111

Sparse MoE as a New Treatment: Addressing Forgetting, Fitting, Learning Issues in Multi-Modal Multi-Task Learning

Jie Peng, Sukwon Yun, Kaixiong Zhou, Ruida Zhou, Thomas Hartvigsen, Yanyong Zhang, Zhangyang Wang, Tianlong Chen; Conference on Parsimony and Learning, PMLR 280:1112-1145

AgentHPO: Large Language Model Agent for Hyper-Parameter Optimization

Siyi Liu, Chen Gao, Yong Li; Conference on Parsimony and Learning, PMLR 280:1146-1169

Progressive Gradient Flow for Robust N:M Sparsity Training in Transformers

Abhimanyu Rajeshkumar Bambhaniya, Amir Yazdanbakhsh, Suvinay Subramanian, Sheng-Chun Kao, Shivani Agrawal, Utku Evci, Tushar Krishna; Conference on Parsimony and Learning, PMLR 280:1170-1190

Sufficient and Necessary Explanations (and What Lies in Between)

Beepul Bharti, Paul Yi, Jeremias Sulam; Conference on Parsimony and Learning, PMLR 280:1191-1215

Streaming Kernel PCA Algorithm With Small Space

Yichuan Deng, Jiangxuan Long, Zhao Song, Zifan Wang, Han Zhang; Conference on Parsimony and Learning, PMLR 280:1216-1254

Hamiltonian Mechanics of Feature Learning: Bottleneck Structure in Leaky ResNets

Arthur Jacot, Alexandre Kaiser; Conference on Parsimony and Learning, PMLR 280:1255-1273

How Iterative Magnitude Pruning Discovers Local Receptive Fields in Fully Connected Neural Networks

William T Redman, Zhangyang Wang, Alessandro Ingrosso, Sebastian Goldt; Conference on Parsimony and Learning, PMLR 280:1274-1291

White-box Error Correction Code Transformer

Ziyan Zheng, Chin Wa Lau, Nian Guo, Xiang Shi, Shao-Lun Huang; Conference on Parsimony and Learning, PMLR 280:1292-1306

Are all layers created equal: A neural collapse perspective

Jinxin Zhou, Jiachen Jiang, Zhihui Zhu; Conference on Parsimony and Learning, PMLR 280:1307-1327

Collaborative and Efficient Personalization with Mixtures of Adaptors

Abdulla Jasem Almansoori, Samuel Horváth, Martin Takáč; Conference on Parsimony and Learning, PMLR 280:1328-1364

Explaining and Mitigating the Modality Gap in Contrastive Multimodal Learning

Can Yaras, Siyi Chen, Peng Wang, Qing Qu; Conference on Parsimony and Learning, PMLR 280:1365-1387

SGD with Weight Decay Secretly Minimizes the Ranks of Your Neural Networks

Tomer Galanti, Zachary S Siegel, Aparna Gupte, Tomaso A Poggio; Conference on Parsimony and Learning, PMLR 280:1388-1412

Towards Vector Optimization on Low-Dimensional Vector Symbolic Architecture

Shijin Duan, Yejia Liu, Gaowen Liu, Ramana Rao Kompella, Shaolei Ren, Xiaolin Xu; Conference on Parsimony and Learning, PMLR 280:1413-1432