Volume 328: Third Conference on Parsimony and Learning, 23-26 March 2026, Tübingen, Germany

Editors: Rebekka Burkholz, Shiwei Liu, Saiprasad Ravishankar, William Redman, Wei Huang, Weijie Su, Zhihui Zhu

Semantic Homogeneity As Demonstration: Batch-Structured Semi-Supervised In-Context Learning for Natural Language Understanding

Cheng Chen, Yuangang Pan, Ivor Tsang; Conference on Parsimony and Learning, PMLR 328:1-23

Improving Medical Visual Reinforcement Fine-Tuning via Perception and Reasoning Augmentation

Guangjing Yang, ZhangYuan Yu, Ziyuan Qin, Xinyuan Song, Huahui Yi, Qingbo Kang, Jun Gao, Yiyue Li, Chenlin Du, Qicheng Lao; Conference on Parsimony and Learning, PMLR 328:24-41

ROSE: Reordered SparseGPT for More Accurate One-Shot Large Language Models Pruning

Mingluo Su, Huan Wang; Conference on Parsimony and Learning, PMLR 328:42-60

AlphaFormer: End-to-End Symbolic Regression of Alpha Factors with Transformers

Haotong Huang, Jie Peng, Zezhen Ding, Pingzhi Li, Tianlong Chen; Conference on Parsimony and Learning, PMLR 328:61-82

From sparse recovery to plug-and-play priors, understanding trade-offs for stable recovery with generalized projected gradient descent

Ali Joundi, Yann Traonmilin, Jean-François Aujol; Conference on Parsimony and Learning, PMLR 328:83-103

Panza: Investigating the Feasibility of Fully-Local Personalized Text Generation

Armand Mihai Nicolicioiu, Eugenia Iofinova, Andrej Jovanovic, Eldar Kurtic, Mahdi Nikdan, Andrei Panferov, Ilia Markov, Nir N Shavit, Dan Alistarh; Conference on Parsimony and Learning, PMLR 328:104-130

Enhancing Low-Cost Video Editing with Lightweight Adaptors and Temporal-Aware Inversion

Yangfan He, Sida Li, Jianhui Wang, Xinyuan Song, Kun Li, Xinhang Yuan, Kuan Lu, Menghao Huo, Jingqun Tang, Yi Xin, Jiaqi Chen, Keqin Li, Miao Zhang, Xueqian Wang; Conference on Parsimony and Learning, PMLR 328:131-163

SPIKE: Sparse Koopman Regularization for Physics-Informed Neural Networks

Jose Marie Antonio Miñoza; Conference on Parsimony and Learning, PMLR 328:164-191

Cannistraci-Hebb Training with N:M Semi-Structured Sparsity for Pre-Training and Re-Training

Jiaqing Lyu, Ruijie Wang, Kangyou Bao, Yingtao Zhang, Carlo Vittorio Cannistraci; Conference on Parsimony and Learning, PMLR 328:192-217

Lattice-Based Vector Quantization for Low-Bit Quantization-Aware Training

Rishika Kohli, Soma S Dhavala, Shaifu Gupta, Manoj Singh Gaur; Conference on Parsimony and Learning, PMLR 328:218-241

ShapLoRA: Allocation of Low-rank Adaption on Large Language Models via Shapley Value Inspired Importance Estimation

Colin Zhao, Qinghua Yao, Xinyuan Song, Wei Zhu; Conference on Parsimony and Learning, PMLR 328:242-264

LLMQ: Efficient Lower-Precision LLM Training for Consumer GPUs

Erik Schultheis, Dan Alistarh; Conference on Parsimony and Learning, PMLR 328:265-284

Parameter-Efficient Distributional RL via Normalizing Flows and a Geometry-Aware Cramér Surrogate

Simo Alami Chehboune, Rim Kaddah, Marie-Paule Cani, Jesse Read; Conference on Parsimony and Learning, PMLR 328:285-313

Analyzing and Mitigating Model Collapse in Reflow Methods

Huminhao Zhu, Fangyikang Wang, Tianyu Ding, Qing Qu, Zhihui Zhu; Conference on Parsimony and Learning, PMLR 328:314-340

Stochastic Unrolled Neural Networks

Samar Hadou, Navid NaderiAlizadeh, Alejandro Ribeiro; Conference on Parsimony and Learning, PMLR 328:341-359

Prompt Stability Matters: Evaluating and Optimizing Auto-Generated Prompt in General-Purpose Systems

Ke Chen, Xucheng Yu, Yufei Zhou, Haohan Wang; Conference on Parsimony and Learning, PMLR 328:360-374

Symbiotic Cooperation for Web Agents: Harnessing Complementary Strengths of Large and Small LLMs

Ruichen Zhang, Mufan Qiu, Zhen Tan, Mohan Zhang, Xiaopeng Lu, Jie Peng, Kaidi Xu, Leandro Z. Agudelo, Peter Zhenghao Qian, Tianlong Chen; Conference on Parsimony and Learning, PMLR 328:375-427

Matrix Sensing with Kernel Optimal Loss: Robustness and Optimization Landscape

Xinyuan Song, Ziye Ma; Conference on Parsimony and Learning, PMLR 328:428-500

Pruned Adaptation Modules: A Simple yet Strong Baseline for Continual Foundation Models

Elif Ceren Gok Yildirim, Murat Onur Yildirim, Joaquin Vanschoren; Conference on Parsimony and Learning, PMLR 328:501-515

Token-Aware Representation Augmentation for Fine-Grained Semi-Supervised Learning

Hongyang He, Yan Zhong, Xinyuan Song, Daizong Liu, Victor Sanchez; Conference on Parsimony and Learning, PMLR 328:516-528

MMA: Benchmarking Multi-Modal Large Language Models in Ambiguity Contexts

Ru Wang, Selena Song, Yuquan Wang, Liang Ding, Mingming Gong, Yusuke Iwasawa, Yutaka Matsuo, Jiaxian Guo; Conference on Parsimony and Learning, PMLR 328:529-551

Optimal $k$-Discretization Learning

Tong Wang, Zhangyang Wang; Conference on Parsimony and Learning, PMLR 328:552-564

Deep Neural Regression Collapse

Akshay Rangamani, Altay Unal; Conference on Parsimony and Learning, PMLR 328:565-581

Beyond In-Distribution Success: Scaling Curves of CoT Granularity for Language Model Generalization

Ru Wang, Wei Huang, Selena Song, Haoyu Zhang, Qian Niu, Yusuke Iwasawa, Yutaka Matsuo, Jiaxian Guo; Conference on Parsimony and Learning, PMLR 328:582-611

Learning in the Null Space: Small Singular Values for Continual Learning

Cuong Anh Pham, Praneeth Vepakomma, Samuel Horváth; Conference on Parsimony and Learning, PMLR 328:612-628

(PASS) Visual Prompt Locates Good Structure Sparsity through a Recurrent HyperNetwork

Tianjin Huang, Yong Tao, Meng Fang, Li Shen, Fan Liu, Yulong Pei, Mykola Pechenizkiy, Tianlong Chen; Conference on Parsimony and Learning, PMLR 328:629-643

Sparsity-Aware Prompt Tuning: A Simple and Effective Way to Fine-tune High-Sparsity LLMs

Yuxin Zhang, Weizhong Huang, Yuexiao Ma, Yunshan Zhong, Xiawu Zheng, Rongrong Ji; Conference on Parsimony and Learning, PMLR 328:644-657

Scalable LLM Reasoning Acceleration with Low-rank Distillation

Harry Dong, Bilge Acun, Beidi Chen, Yuejie Chi; Conference on Parsimony and Learning, PMLR 328:658-675

FocusDC: Real-World Scene Infusion for Robust Dataset Condensation

Youbing Hu, Yun Cheng, Olga Saukh, Firat Ozdemir, Anqi Lu, Zhiqiang Cao, Min Zhang, Zhijun Li; Conference on Parsimony and Learning, PMLR 328:676-697

ERC-SVD: Error-Controlled SVD for Large Language Model Compression

Haolei Bai, Siyong Jian, Tuo Liang, Yu Yin, Huan Wang; Conference on Parsimony and Learning, PMLR 328:698-719

Can Less Be More? Benchmarking Lightweight Models Against State-of-the-Art Deep Learning Architectures for Deployable Seizure Detection

Isaiah Essien, Donna-lee Ginsberg, Jesse Thornburg; Conference on Parsimony and Learning, PMLR 328:720-734

Beyond Greedy Decoding: Model-Specific Strategy Selection via Multi-faceted Uncertainty Decomposition

Kwangje Baeg, Yubin Lim; Conference on Parsimony and Learning, PMLR 328:735-755

Superclass-Guided Representation Disentanglement for Spurious Correlation Mitigation

Chenruo Liu, Hongjun Liu, Zeyu Lai, Yiqiu Shen, Chen Zhao, Qi Lei; Conference on Parsimony and Learning, PMLR 328:756-794

Dynamic SFT with Structured Measurements: Fast Queries, Fast Updates, Provable Guarantees

Yang Cao, Zhao Song; Conference on Parsimony and Learning, PMLR 328:795-825

Byzantine-Robust Optimization under $(L_0,L_1)$-Smoothness

Arman Bolatov, Samuel Horváth, Martin Takáč, Eduard Gorbunov; Conference on Parsimony and Learning, PMLR 328:826-854

Effective Learning for Small Reasoning Models: An Empirical Study on 0.5B Reasoning LLMs

Xialie Zhuang, Peixian Ma, Zhikai Jia, Zane Cao, Shiwei Liu; Conference on Parsimony and Learning, PMLR 328:855-869

Learning of Discretized LSTMs

Nikolaus Kopp, Franz Pernkopf; Conference on Parsimony and Learning, PMLR 328:870-880

GRAIL: Post-hoc Compensation by Linear Reconstruction for Compressed Networks

Wenwu Tang, Dong Wang, Lothar Thiele, Olga Saukh; Conference on Parsimony and Learning, PMLR 328:881-895

Trainable Bitwise Soft Quantization for Input Feature Compression

Karsten Schrödter, Jan Stenkamp, Nina Herrmann, Fabian Gieseke; Conference on Parsimony and Learning, PMLR 328:896-920

A Stein identity for $q$-Gaussians with bounded support

Sophia Sklaviadis, Thomas Möllenhoff, Mario A. T. Figueiredo, Andre Martins, Mohammad Emtiyaz Khan; Conference on Parsimony and Learning, PMLR 328:921-939

Concept based Ambiguity Resolution in LLMs

Zhibo Hu, Chen Wang, Yanfeng Shu, Hye-young Paik, Liming Zhu; Conference on Parsimony and Learning, PMLR 328:940-956

Simplex Deep Linear Discriminant Analysis

Maxat Tezekbayev, Arman Bolatov, Zhenisbek Assylbekov; Conference on Parsimony and Learning, PMLR 328:957-967

Emergence of Auditory Receptive Fields based on Surprise

Yashaswin Yashaswini, Sneha Dash, Sharba Bandyopadhyay; Conference on Parsimony and Learning, PMLR 328:968-988

KNIGHT: Knowledge Graph-Driven Multiple-Choice Question Generation with Adaptive Hardness Calibration

Mohammad Amanlou, Erfan Shafiee Moghaddam, Mahdi Nouri, Yasaman Amou Jafary, Farhan Farsi, Behnam Bahrak; Conference on Parsimony and Learning, PMLR 328:989-1024

Teaching LLMs According to Their Aptitude: Adaptive Switching Between CoT and TIR for Mathematical Problem Solving

Xin Xu, Yan Xu, Tianhao Chen, Yuchen Yan, Chengwu Liu, Zaoyu Chen, Yufei Wang, Yichun Yin, Yasheng Wang, Qun Liu, Lu Yin; Conference on Parsimony and Learning, PMLR 328:1025-1048

Sparse Mixture-of-Experts for Compositional Generalization: Empirical Evidence and Theoretical Foundations of Optimal Sparsity

Jinze Zhao, Peihao Wang, Junjie Yang, Ruisi Cai, Gaowen Liu, Jayanth Srinivasa, Ramana Rao Kompella, Yingbin Liang, Zhangyang Wang; Conference on Parsimony and Learning, PMLR 328:1049-1071

Data-Efficient and Robust Trajectory Generation through Pathlet Dictionary Learning

Yuanbo Tang, Yan Tang, Zihui Zhao, Zixuan Zhang, Yang Li; Conference on Parsimony and Learning, PMLR 328:1072-1089

SonoEdit: Null-Space Constrained Knowledge Editing for Pronunciation Correction in LLM-Based TTS

Ayush Pratap Singh, Harshit Singh, Nityanand Mathur, Akshat Mandloi, Sudarshan Kamath; Conference on Parsimony and Learning, PMLR 328:1090-1100

FLIPR: FLexible and Interpretable Prediction Regions for time series

Eshant English, Christoph Lippert; Conference on Parsimony and Learning, PMLR 328:1101-1111

Enhancing Long-Context Inference with Context-Position Duo-Mixture

Zhenyu Zhang, Sharath Nittur Sridhar, Zhangyang Wang, Souvik Kundu; Conference on Parsimony and Learning, PMLR 328:1112-1124

Generalized Radius and Integrated Codebook Transforms for Differentiable Vector Quantization

Haochen You, Heng Zhang, Hongyang He, Yuqi Li, Baojing Liu; Conference on Parsimony and Learning, PMLR 328:1125-1160

Selective Collaboration for Robust Federated Learning

Nazarii Tupitsa, Samuel Horváth, Martin Takáč, Eduard Gorbunov; Conference on Parsimony and Learning, PMLR 328:1161-1194

What Scalable Second-Order Information Knows for Pruning at Initialization

Ivo Gollini Navarrete, Nicolas Mauricio Cuadrado, Martin Takáč, Samuel Horváth; Conference on Parsimony and Learning, PMLR 328:1195-1227

Efficient Temporal Consistency in Diffusion-Based Video Editing with Adaptor Modules: A Theoretical Framework

Xinyuan Song, Yangfan He, Sida Li, Jianhui Wang, Hongyang He, Xinhang Yuan, Ruoyu Wang, Jiaqi Chen, Keqin Li, Kuan Lu, Menghao Huo, Ziqian Bi, Binxu Li, Pei Liu; Conference on Parsimony and Learning, PMLR 328:1228-1250