Volume 328: Third Conference on Parsimony and Learning, 23-26 March 2026, Tübingen, Germany
Editors: Rebekka Burkholz, Shiwei Liu, Saiprasad Ravishankar, William Redman, Wei Huang, Weijie Su, Zhihui Zhu
Semantic Homogeneity As Demonstration: Batch-Structured Semi-Supervised In-Context Learning for Natural Language Understanding
; Conference on Parsimony and Learning, PMLR 328:1-23
Improving Medical Visual Reinforcement Fine-Tuning via Perception and Reasoning Augmentation
; Conference on Parsimony and Learning, PMLR 328:24-41
ROSE: Reordered SparseGPT for More Accurate One-Shot Large Language Models Pruning
; Conference on Parsimony and Learning, PMLR 328:42-60
AlphaFormer: End-to-End Symbolic Regression of Alpha Factors with Transformers
; Conference on Parsimony and Learning, PMLR 328:61-82
From sparse recovery to plug-and-play priors, understanding trade-offs for stable recovery with generalized projected gradient descent
; Conference on Parsimony and Learning, PMLR 328:83-103
Panza: Investigating the Feasibility of Fully-Local Personalized Text Generation
; Conference on Parsimony and Learning, PMLR 328:104-130
Enhancing Low-Cost Video Editing with Lightweight Adaptors and Temporal-Aware Inversion
; Conference on Parsimony and Learning, PMLR 328:131-163
SPIKE: Sparse Koopman Regularization for Physics-Informed Neural Networks
; Conference on Parsimony and Learning, PMLR 328:164-191
Cannistraci-Hebb Training with N:M Semi-Structured Sparsity for Pre-Training and Re-Training
; Conference on Parsimony and Learning, PMLR 328:192-217
Lattice-Based Vector Quantization for Low-Bit Quantization-Aware Training
; Conference on Parsimony and Learning, PMLR 328:218-241
ShapLoRA: Allocation of Low-rank Adaption on Large Language Models via Shapley Value Inspired Importance Estimation
; Conference on Parsimony and Learning, PMLR 328:242-264
LLMQ: Efficient Lower-Precision LLM Training for Consumer GPUs
; Conference on Parsimony and Learning, PMLR 328:265-284
Parameter-Efficient Distributional RL via Normalizing Flows and a Geometry-Aware Cramér Surrogate
; Conference on Parsimony and Learning, PMLR 328:285-313
Analyzing and Mitigating Model Collapse in Reflow Methods
; Conference on Parsimony and Learning, PMLR 328:314-340
Stochastic Unrolled Neural Networks
; Conference on Parsimony and Learning, PMLR 328:341-359
Prompt Stability Matters: Evaluating and Optimizing Auto-Generated Prompt in General-Purpose Systems
; Conference on Parsimony and Learning, PMLR 328:360-374
Symbiotic Cooperation for Web Agents: Harnessing Complementary Strengths of Large and Small LLMs
; Conference on Parsimony and Learning, PMLR 328:375-427
Matrix Sensing with Kernel Optimal Loss: Robustness and Optimization Landscape
; Conference on Parsimony and Learning, PMLR 328:428-500
Pruned Adaptation Modules: A Simple yet Strong Baseline for Continual Foundation Models
; Conference on Parsimony and Learning, PMLR 328:501-515
Token-Aware Representation Augmentation for Fine-Grained Semi-Supervised Learning
; Conference on Parsimony and Learning, PMLR 328:516-528
MMA: Benchmarking Multi-Modal Large Language Models in Ambiguity Contexts
; Conference on Parsimony and Learning, PMLR 328:529-551
Optimal $k$-Discretization Learning
; Conference on Parsimony and Learning, PMLR 328:552-564
Deep Neural Regression Collapse
; Conference on Parsimony and Learning, PMLR 328:565-581
Beyond In-Distribution Success: Scaling Curves of CoT Granularity for Language Model Generalization
; Conference on Parsimony and Learning, PMLR 328:582-611
Learning in the Null Space: Small Singular Values for Continual Learning
; Conference on Parsimony and Learning, PMLR 328:612-628
(PASS) Visual Prompt Locates Good Structure Sparsity through a Recurrent HyperNetwork
; Conference on Parsimony and Learning, PMLR 328:629-643
Sparsity-Aware Prompt Tuning: A Simple and Effective Way to Fine-tune High-Sparsity LLMs
; Conference on Parsimony and Learning, PMLR 328:644-657
Scalable LLM Reasoning Acceleration with Low-rank Distillation
; Conference on Parsimony and Learning, PMLR 328:658-675
FocusDC: Real-World Scene Infusion for Robust Dataset Condensation
; Conference on Parsimony and Learning, PMLR 328:676-697
ERC-SVD: Error-Controlled SVD for Large Language Model Compression
; Conference on Parsimony and Learning, PMLR 328:698-719
Can Less Be More? Benchmarking Lightweight Models Against State-of-the-Art Deep Learning Architectures for Deployable Seizure Detection
; Conference on Parsimony and Learning, PMLR 328:720-734
Beyond Greedy Decoding: Model-Specific Strategy Selection via Multi-faceted Uncertainty Decomposition
; Conference on Parsimony and Learning, PMLR 328:735-755
Superclass-Guided Representation Disentanglement for Spurious Correlation Mitigation
; Conference on Parsimony and Learning, PMLR 328:756-794
Dynamic SFT with Structured Measurements: Fast Queries, Fast Updates, Provable Guarantees
; Conference on Parsimony and Learning, PMLR 328:795-825
Byzantine-Robust Optimization under $(L_0,L_1)$-Smoothness
; Conference on Parsimony and Learning, PMLR 328:826-854
Effective Learning for Small Reasoning Models: An Empirical Study on 0.5B Reasoning LLMs
; Conference on Parsimony and Learning, PMLR 328:855-869
Learning of Discretized LSTMs
; Conference on Parsimony and Learning, PMLR 328:870-880
GRAIL: Post-hoc Compensation by Linear Reconstruction for Compressed Networks
; Conference on Parsimony and Learning, PMLR 328:881-895
Trainable Bitwise Soft Quantization for Input Feature Compression
; Conference on Parsimony and Learning, PMLR 328:896-920
A Stein identity for $q$-Gaussians with bounded support
; Conference on Parsimony and Learning, PMLR 328:921-939
Concept based Ambiguity Resolution in LLMs
; Conference on Parsimony and Learning, PMLR 328:940-956
Simplex Deep Linear Discriminant Analysis
; Conference on Parsimony and Learning, PMLR 328:957-967
Emergence of Auditory Receptive Fields based on Surprise
; Conference on Parsimony and Learning, PMLR 328:968-988
KNIGHT: Knowledge Graph-Driven Multiple-Choice Question Generation with Adaptive Hardness Calibration
; Conference on Parsimony and Learning, PMLR 328:989-1024
Teaching LLMs According to Their Aptitude: Adaptive Switching Between CoT and TIR for Mathematical Problem Solving
; Conference on Parsimony and Learning, PMLR 328:1025-1048
Sparse Mixture-of-Experts for Compositional Generalization: Empirical Evidence and Theoretical Foundations of Optimal Sparsity
; Conference on Parsimony and Learning, PMLR 328:1049-1071
Data-Efficient and Robust Trajectory Generation through Pathlet Dictionary Learning
; Conference on Parsimony and Learning, PMLR 328:1072-1089
SonoEdit: Null-Space Constrained Knowledge Editing for Pronunciation Correction in LLM-Based TTS
; Conference on Parsimony and Learning, PMLR 328:1090-1100
FLIPR: FLexible and Interpretable Prediction Regions for time series
; Conference on Parsimony and Learning, PMLR 328:1101-1111
Enhancing Long-Context Inference with Context-Position Duo-Mixture
; Conference on Parsimony and Learning, PMLR 328:1112-1124
Generalized Radius and Integrated Codebook Transforms for Differentiable Vector Quantization
; Conference on Parsimony and Learning, PMLR 328:1125-1160
Selective Collaboration for Robust Federated Learning
; Conference on Parsimony and Learning, PMLR 328:1161-1194
What Scalable Second-Order Information Knows for Pruning at Initialization
; Conference on Parsimony and Learning, PMLR 328:1195-1227
Efficient Temporal Consistency in Diffusion-Based Video Editing with Adaptor Modules: A Theoretical Framework
; Conference on Parsimony and Learning, PMLR 328:1228-1250