Volume 234: Conference on Parsimony and Learning, 3-6 January 2024, Hong Kong, China
Editors: Yuejie Chi, Gintare Karolina Dziugaite, Qing Qu, Atlas Wang, Zhihui Zhu
PC-X: Profound Clustering via Slow Exemplars
Conference on Parsimony and Learning, PMLR 234:1-19
WS-iFSD: Weakly Supervised Incremental Few-shot Object Detection Without Forgetting
Conference on Parsimony and Learning, PMLR 234:20-38
Sparse Fréchet sufficient dimension reduction via nonconvex optimization
Conference on Parsimony and Learning, PMLR 234:39-53
Efficiently Disentangle Causal Representations
Conference on Parsimony and Learning, PMLR 234:54-71
Emergence of Segmentation with Minimalistic White-Box Transformers
Conference on Parsimony and Learning, PMLR 234:72-93
Continual Learning with Dynamic Sparse Training: Exploring Algorithms for Effective Model Updates
Conference on Parsimony and Learning, PMLR 234:94-107
Decoding Micromotion in Low-dimensional Latent Spaces from StyleGAN
Conference on Parsimony and Learning, PMLR 234:108-133
HARD: Hyperplane ARrangement Descent
Conference on Parsimony and Learning, PMLR 234:134-158
FIXED: Frustratingly Easy Domain Generalization with Mixup
Conference on Parsimony and Learning, PMLR 234:159-178
Domain Generalization via Nuclear Norm Regularization
Conference on Parsimony and Learning, PMLR 234:179-201
Investigating the Catastrophic Forgetting in Multimodal Large Language Model Fine-Tuning
Conference on Parsimony and Learning, PMLR 234:202-227
Deep Self-expressive Learning
Conference on Parsimony and Learning, PMLR 234:228-247
Sparse Activations with Correlated Weights in Cortex-Inspired Neural Networks
Conference on Parsimony and Learning, PMLR 234:248-268
Piecewise-Linear Manifolds for Deep Metric Learning
Conference on Parsimony and Learning, PMLR 234:269-281
HRBP: Hardware-friendly Regrouping towards Block-based Pruning for Sparse CNN Training
Conference on Parsimony and Learning, PMLR 234:282-301
Cross-Quality Few-Shot Transfer for Alloy Yield Strength Prediction: A New Materials Science Benchmark and A Sparsity-Oriented Optimization Framework
Conference on Parsimony and Learning, PMLR 234:302-323
Deep Leakage from Model in Federated Learning
Conference on Parsimony and Learning, PMLR 234:324-340
Balance is Essence: Accelerating Sparse Training via Adaptive Gradient Correction
Conference on Parsimony and Learning, PMLR 234:341-378
An Adaptive Tangent Feature Perspective of Neural Networks
Conference on Parsimony and Learning, PMLR 234:379-394
Probing Biological and Artificial Neural Networks with Task-dependent Neural Manifolds
Conference on Parsimony and Learning, PMLR 234:395-418
Exploring Minimally Sufficient Representation in Active Learning through Label-Irrelevant Patch Augmentation
Conference on Parsimony and Learning, PMLR 234:419-439
Unsupervised Learning of Structured Representation via Closed-Loop Transcription
Conference on Parsimony and Learning, PMLR 234:440-457
Algorithm Design for Online Meta-Learning with Task Boundary Detection
Conference on Parsimony and Learning, PMLR 234:458-479
NeuroMixGDP: A Neural Collapse-Inspired Random Mixup for Private Data Release
Conference on Parsimony and Learning, PMLR 234:480-514
Jaxpruner: A Concise Library for Sparsity Research
Conference on Parsimony and Learning, PMLR 234:515-528
Image Quality Assessment: Integrating Model-centric and Data-centric Approaches
Conference on Parsimony and Learning, PMLR 234:529-541
How to Prune Your Language Model: Recovering Accuracy on the “Sparsity May Cry” Benchmark
Conference on Parsimony and Learning, PMLR 234:542-553
Leveraging Sparse Input and Sparse Models: Efficient Distributed Learning in Resource-Constrained Environments
Conference on Parsimony and Learning, PMLR 234:554-569
Closed-Loop Transcription via Convolutional Sparse Coding
Conference on Parsimony and Learning, PMLR 234:570-589
Less is More – Towards parsimonious multi-task models using structured sparsity
Conference on Parsimony and Learning, PMLR 234:590-601