Volume 234: Conference on Parsimony and Learning, 3-6 January 2024, Hong Kong, China

Editors: Yuejie Chi, Gintare Karolina Dziugaite, Qing Qu, Atlas Wang, Zhihui Zhu

PC-X: Profound Clustering via Slow Exemplars

Yuangang Pan, Yinghua Yao, Ivor Tsang; Conference on Parsimony and Learning, PMLR 234:1-19

WS-iFSD: Weakly Supervised Incremental Few-shot Object Detection Without Forgetting

Xinyu Gong, Li Yin, Juan-Manuel Perez-Rua, Zhangyang Wang, Zhicheng Yan; Conference on Parsimony and Learning, PMLR 234:20-38

Sparse Fréchet sufficient dimension reduction via nonconvex optimization

Jiaying Weng, Chenlu Ke, Pei Wang; Conference on Parsimony and Learning, PMLR 234:39-53

Efficiently Disentangle Causal Representations

Yuanpeng Li, Joel Hestness, Mohamed Elhoseiny, Liang Zhao, Kenneth Church; Conference on Parsimony and Learning, PMLR 234:54-71

Emergence of Segmentation with Minimalistic White-Box Transformers

Yaodong Yu, Tianzhe Chu, Shengbang Tong, Ziyang Wu, Druv Pai, Sam Buchanan, Yi Ma; Conference on Parsimony and Learning, PMLR 234:72-93

Continual Learning with Dynamic Sparse Training: Exploring Algorithms for Effective Model Updates

Murat Onur Yildirim, Elif Ceren Gok, Ghada Sokar, Decebal Constantin Mocanu, Joaquin Vanschoren; Conference on Parsimony and Learning, PMLR 234:94-107

Decoding Micromotion in Low-dimensional Latent Spaces from StyleGAN

Qiucheng Wu, Yifan Jiang, Junru Wu, Kai Wang, Eric Zhang, Humphrey Shi, Zhangyang Wang, Shiyu Chang; Conference on Parsimony and Learning, PMLR 234:108-133

HARD: Hyperplane ARrangement Descent

Tianjiao Ding, Liangzu Peng, Rene Vidal; Conference on Parsimony and Learning, PMLR 234:134-158

FIXED: Frustratingly Easy Domain Generalization with Mixup

Wang Lu, Jindong Wang, Han Yu, Lei Huang, Xiang Zhang, Yiqiang Chen, Xing Xie; Conference on Parsimony and Learning, PMLR 234:159-178

Domain Generalization via Nuclear Norm Regularization

Zhenmei Shi, Yifei Ming, Ying Fan, Frederic Sala, Yingyu Liang; Conference on Parsimony and Learning, PMLR 234:179-201

Investigating the Catastrophic Forgetting in Multimodal Large Language Model Fine-Tuning

Yuexiang Zhai, Shengbang Tong, Xiao Li, Mu Cai, Qing Qu, Yong Jae Lee, Yi Ma; Conference on Parsimony and Learning, PMLR 234:202-227

Deep Self-expressive Learning

Chen Zhao, Chun-Guang Li, Wei He, Chong You; Conference on Parsimony and Learning, PMLR 234:228-247

Sparse Activations with Correlated Weights in Cortex-Inspired Neural Networks

Chanwoo Chun, Daniel Lee; Conference on Parsimony and Learning, PMLR 234:248-268

Piecewise-Linear Manifolds for Deep Metric Learning

Shubhang Bhatnagar, Narendra Ahuja; Conference on Parsimony and Learning, PMLR 234:269-281

HRBP: Hardware-friendly Regrouping towards Block-based Pruning for Sparse CNN Training

Haoyu Ma, Chengming Zhang, Lizhi Xiang, Xiaolong Ma, Geng Yuan, Wenkai Zhang, Shiwei Liu, Tianlong Chen, Dingwen Tao, Yanzhi Wang, Zhangyang Wang, Xiaohui Xie; Conference on Parsimony and Learning, PMLR 234:282-301

Cross-Quality Few-Shot Transfer for Alloy Yield Strength Prediction: A New Materials Science Benchmark and A Sparsity-Oriented Optimization Framework

Xuxi Chen, Tianlong Chen, Everardo Yeriel Olivares, Kate Elder, Scott McCall, Aurelien Perron, Joseph McKeown, Bhavya Kailkhura, Zhangyang Wang, Brian Gallagher; Conference on Parsimony and Learning, PMLR 234:302-323

Deep Leakage from Model in Federated Learning

Zihao Zhao, Mengen Luo, Wenbo Ding; Conference on Parsimony and Learning, PMLR 234:324-340

Balance is Essence: Accelerating Sparse Training via Adaptive Gradient Correction

Bowen Lei, Dongkuan Xu, Ruqi Zhang, Shuren He, Bani Mallick; Conference on Parsimony and Learning, PMLR 234:341-378

An Adaptive Tangent Feature Perspective of Neural Networks

Daniel LeJeune, Sina Alemohammad; Conference on Parsimony and Learning, PMLR 234:379-394

Probing Biological and Artificial Neural Networks with Task-dependent Neural Manifolds

Michael Kuoch, Chi-Ning Chou, Nikhil Parthasarathy, Joel Dapello, James J. DiCarlo, Haim Sompolinsky, SueYeon Chung; Conference on Parsimony and Learning, PMLR 234:395-418

Exploring Minimally Sufficient Representation in Active Learning through Label-Irrelevant Patch Augmentation

Zhiyu Xue, Yinlong Dai, Qi Lei; Conference on Parsimony and Learning, PMLR 234:419-439

Unsupervised Learning of Structured Representation via Closed-Loop Transcription

Shengbang Tong, Xili Dai, Yubei Chen, Mingyang Li, Zengyi Li, Brent Yi, Yann LeCun, Yi Ma; Conference on Parsimony and Learning, PMLR 234:440-457

Algorithm Design for Online Meta-Learning with Task Boundary Detection

Daouda Sow, Sen Lin, Yingbin Liang, Junshan Zhang; Conference on Parsimony and Learning, PMLR 234:458-479

NeuroMixGDP: A Neural Collapse-Inspired Random Mixup for Private Data Release

Donghao Li, Yang Cao, Yuan Yao; Conference on Parsimony and Learning, PMLR 234:480-514

Jaxpruner: A Concise Library for Sparsity Research

Joo Hyung Lee, Wonpyo Park, Nicole Elyse Mitchell, Jonathan Pilault, Johan Samir Obando Ceron, Han-Byul Kim, Namhoon Lee, Elias Frantar, Yun Long, Amir Yazdanbakhsh, Woohyun Han, Shivani Agrawal, Suvinay Subramanian, Xin Wang, Sheng-Chun Kao, Xingyao Zhang, Trevor Gale, Aart J.C. Bik, Milen Ferev, Zhonglin Han, Hong-Seok Kim, Yann Dauphin, Gintare Karolina Dziugaite, Pablo Samuel Castro, Utku Evci; Conference on Parsimony and Learning, PMLR 234:515-528

Image Quality Assessment: Integrating Model-centric and Data-centric Approaches

Peibei Cao, Dingquan Li, Kede Ma; Conference on Parsimony and Learning, PMLR 234:529-541

How to Prune Your Language Model: Recovering Accuracy on the “Sparsity May Cry” Benchmark

Eldar Kurtic, Torsten Hoefler, Dan Alistarh; Conference on Parsimony and Learning, PMLR 234:542-553

Leveraging Sparse Input and Sparse Models: Efficient Distributed Learning in Resource-Constrained Environments

Emmanouil Kariotakis, Grigorios Tsagkatakis, Panagiotis Tsakalides, Anastasios Kyrillidis; Conference on Parsimony and Learning, PMLR 234:554-569

Closed-Loop Transcription via Convolutional Sparse Coding

Xili Dai, Ke Chen, Shengbang Tong, Jingyuan Zhang, Xingjian Gao, Mingyang Li, Druv Pai, Yuexiang Zhai, Xiaojun Yuan, Heung-Yeung Shum, Lionel Ni, Yi Ma; Conference on Parsimony and Learning, PMLR 234:570-589

Less is More – Towards parsimonious multi-task models using structured sparsity

Richa Upadhyay, Ronald Phlypo, Rajkumar Saini, Marcus Liwicki; Conference on Parsimony and Learning, PMLR 234:590-601