Volume 178: Conference on Learning Theory, 2-5 July 2022, London, UK

Editors: Po-Ling Loh, Maxim Raginsky

Conference on Learning Theory 2022: Preface

Po-Ling Loh, Maxim Raginsky; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:i-ii

Analysis of Langevin Monte Carlo from Poincare to Log-Sobolev

Sinho Chewi, Murat A Erdogdu, Mufan Li, Ruoqi Shen, Shunshi Zhang; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1-2

Optimization-Based Separations for Neural Networks

Itay Safran, Jason Lee; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3-64

Mirror Descent Strikes Again: Optimal Stochastic Convex Optimization under Infinite Noise Variance

Nuri Mert Vural, Lu Yu, Krishna Balasubramanian, Stanislav Volgushev, Murat A Erdogdu; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:65-102

Wasserstein GANs with Gradient Penalty Compute Congested Transport

Tristan Milne, Adrian I Nachman; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:103-129

Robust Estimation for Random Graphs

Jayadev Acharya, Ayush Jain, Gautam Kamath, Ananda Theertha Suresh, Huanyu Zhang; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:130-166

Tight query complexity bounds for learning graph partitions

Xizhi Liu, Sayan Mukherjee; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:167-181

Pushing the Efficiency-Regret Pareto Frontier for Online Learning of Portfolios and Quantum States

Julian Zimmert, Naman Agarwal, Satyen Kale; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:182-226

Risk bounds for aggregated shallow neural networks using Gaussian priors

Laura Tinsi, Arnak Dalalyan; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:227-253

On the Benefits of Large Learning Rates for Kernel Methods

Gaspard Beugnot, Julien Mairal, Alessandro Rudi; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:254-282

Near-Optimal Statistical Query Lower Bounds for Agnostically Learning Intersections of Halfspaces with Gaussian Marginals

Daniel J Hsu, Clayton H Sanford, Rocco Servedio, Emmanouil Vasileios Vlatakis-Gkaragkounis; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:283-312

The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance

Matthew Faw, Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari, Sanjay Shakkottai, Rachel Ward; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:313-355

Optimal Mean Estimation without a Variance

Yeshwanth Cherapanamjeri, Nilesh Tripuraneni, Peter Bartlett, Michael Jordan; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:356-357

Beyond No Regret: Instance-Dependent PAC Reinforcement Learning

Andrew J Wagenmaker, Max Simchowitz, Kevin Jamieson; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:358-418

Learning Low Degree Hypergraphs

Eric Balkanski, Oussama Hanguir, Shatian Wang; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:419-420

Depth and Feature Learning are Provably Beneficial for Neural Network Discriminators

Carles Domingo-Enrich; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:421-447

The Implicit Bias of Benign Overfitting

Ohad Shamir; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:448-478

Universal Online Learning with Bounded Loss: Reduction to Binary Classification

Moise Blanchard, Romain Cosson; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:479-495

Negative curvature obstructs acceleration for strongly geodesically convex optimization, even with exact first-order oracles

Christopher Criscitiello, Nicolas Boumal; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:496-542

Multi-Agent Learning for Iterative Dominance Elimination: Formal Barriers and New Algorithms

Jibang Wu, Haifeng Xu, Fan Yao; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:543-543

A Private and Computationally-Efficient Estimator for Unbounded Gaussians

Gautam Kamath, Argyris Mouzakis, Vikrant Singhal, Thomas Steinke, Jonathan Ullman; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:544-572

The Price of Tolerance in Distribution Testing

Clement L Canonne, Ayush Jain, Gautam Kamath, Jerry Li; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:573-624

A bounded-noise mechanism for differential privacy

Yuval Dagan, Gil Kur; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:625-661

Learning with metric losses

Dan Tsir Cohen, Aryeh Kontorovich; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:662-700

Rate of Convergence of Polynomial Networks to Gaussian Processes

Adam Klukowski; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:701-722

Private Robust Estimation by Stabilizing Convex Relaxations

Pravesh Kothari, Pasin Manurangsi, Ameya Velingker; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:723-777

Stochastic Variance Reduction for Variational Inequality Methods

Ahmet Alacaoglu, Yura Malitsky; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:778-816

Self-Consistency of the Fokker Planck Equation

Zebang Shen, Zhenfu Wang, Satyen Kale, Alejandro Ribeiro, Amin Karbasi, Hamed Hassani; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:817-841

Monotone Learning

Olivier J Bousquet, Amit Daniely, Haim Kaplan, Yishay Mansour, Shay Moran, Uri Stemmer; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:842-866

Chasing Convex Bodies and Functions with Black-Box Advice

Nicolas Christianson, Tinashe Handina, Adam Wierman; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:867-908

ROOT-SGD: Sharp Nonasymptotics and Asymptotic Efficiency in a Single Algorithm

Chris Junchi Li, Wenlong Mou, Martin Wainwright, Michael Jordan; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:909-981

Policy Optimization for Stochastic Shortest Path

Liyu Chen, Haipeng Luo, Aviv Rosenberg; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:982-1046

Optimal SQ Lower Bounds for Learning Halfspaces with Massart Noise

Rajai Nasser, Stefan Tiegel; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1047-1074

Private and polynomial time algorithms for learning Gaussians and beyond

Hassan Ashtiani, Christopher Liaw; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1075-1076

Universal Online Learning: an Optimistically Universal Learning Rule

Moise Blanchard; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1077-1125

(Nearly) Optimal Private Linear Regression for Sub-Gaussian Data via Adaptive Clipping

Prateek Varshney, Abhradeep Thakurta, Prateek Jain; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1126-1166

Differential privacy and robust statistics in high dimensions

Xiyang Liu, Weihao Kong, Sewoong Oh; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1167-1246

Lattice-Based Methods Surpass Sum-of-Squares in Clustering

Ilias Zadik, Min Jae Song, Alexander S Wein, Joan Bruna; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1247-1248

Width is Less Important than Depth in ReLU Neural Networks

Gal Vardi, Gilad Yehudai, Ohad Shamir; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1249-1281

Computational-Statistical Gap in Reinforcement Learning

Daniel Kane, Sihan Liu, Shachar Lovett, Gaurav Mahajan; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1282-1302

Trace norm regularization for multi-task learning with scarce data

Etienne Boursier, Mikhail Konobeev, Nicolas Flammarion; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1303-1327

The Role of Interactivity in Structured Estimation

Jayadev Acharya, Clement L. Canonne, Ziteng Sun, Himanshu Tyagi; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1328-1355

Dimension-free convergence rates for gradient Langevin dynamics in RKHS

Boris Muzellec, Kanji Sato, Mathurin Massias, Taiji Suzuki; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1356-1420

Adversarially Robust Multi-Armed Bandit Algorithm with Variance-Dependent Regret Bounds

Shinji Ito, Taira Tsuchiya, Junya Honda; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1421-1422

A Sharp Memory-Regret Trade-off for Multi-Pass Streaming Bandits

Arpit Agarwal, Sanjeev Khanna, Prathamesh Patil; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1423-1462

Approximate Cluster Recovery from Noisy Labels

Buddhima Gamlath, Silvio Lattanzi, Ashkan Norouzi-Fard, Ola Svensson; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1463-1509

An Efficient Minimax Optimal Estimator For Multivariate Convex Regression

Gil Kur, Eli Putterman; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1510-1546

Minimax Regret for Partial Monitoring: Infinite Outcomes and Rustichini’s Regret

Tor Lattimore; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1547-1575

Adaptive Bandit Convex Optimization with Heterogeneous Curvature

Haipeng Luo, Mengxiao Zhang, Peng Zhao; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1576-1612

Statistical Estimation and Online Inference via Local SGD

Xiang Li, Jiadong Liang, Xiangyu Chang, Zhihua Zhang; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1613-1661

Community Recovery in the Degree-Heterogeneous Stochastic Block Model

Vincent Cohen-Addad, Frederik Mallmann-Trenn, David Saulpic; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1662-1692

Strong Gaussian Approximation for the Sum of Random Vectors

Nazar Buzun, Nikolay Shvetsov, Dmitry V. Dylov; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1693-1715

Smoothed Online Learning is as Easy as Statistical Learning

Adam Block, Yuval Dagan, Noah Golowich, Alexander Rakhlin; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1716-1786

Gardner formula for Ising perceptron models at small densities

Erwin Bolthausen, Shuta Nakajima, Nike Sun, Changji Xu; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1787-1911

Derivatives and residual distribution of regularized M-estimators with application to adaptive tuning

Pierre C Bellec, Yiwei Shen; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1912-1947

Private Convex Optimization via Exponential Mechanism

Sivakanth Gopi, Yin Tat Lee, Daogao Liu; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1948-1989

Towards Optimal Algorithms for Multi-Player Bandits without Collision Sensing Information

Wei Huang, Richard Combes, Cindy Trinh; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1990-2012

Generalization Bounds for Data-Driven Numerical Linear Algebra

Peter Bartlett, Piotr Indyk, Tal Wagner; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2013-2040

The query complexity of sampling from strongly log-concave distributions in one dimension

Sinho Chewi, Patrik R Gerber, Chen Lu, Thibaut Le Gouic, Philippe Rigollet; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2041-2059

Optimal and instance-dependent guarantees for Markovian linear stochastic approximation

Wenlong Mou, Ashwin Pananjady, Martin Wainwright, Peter Bartlett; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2060-2061

Accelerated SGD for Non-Strongly-Convex Least Squares

Aditya Varre, Nicolas Flammarion; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2062-2126

Label noise (stochastic) gradient descent implicitly solves the Lasso for quadratic parametrisation

Loucas Pillaud Vivien, Julien Reygner, Nicolas Flammarion; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2127-2159

Tracking Most Significant Arm Switches in Bandits

Joe Suk, Samory Kpotufe; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2160-2182

Exact Community Recovery in Correlated Stochastic Block Models

Julia Gaudio, Miklos Z. Racz, Anirudh Sridhar; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2183-2241

Mean-field nonparametric estimation of interacting particle systems

Rentian Yao, Xiaohui Chen, Yun Yang; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2242-2275

Inductive Bias of Multi-Channel Linear Convolutional Networks with Bounded Weight Norm

Meena Jagadeesan, Ilya Razenshteyn, Suriya Gunasekar; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2276-2325

New Projection-free Algorithms for Online Convex Optimization with Adaptive Regret Guarantees

Dan Garber, Ben Kretzu; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2326-2359

Making SGD Parameter-Free

Yair Carmon, Oliver Hinder; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2360-2389

Efficient Convex Optimization Requires Superlinear Memory

Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2390-2430

Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales

Jonathan Kelner, Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant, Honglin Yuan; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2431-2540

Toward Instance-Optimal State Certification With Incoherent Measurements

Sitan Chen, Jerry Li, Ryan O’Donnell; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2541-2596

EM’s Convergence in Gaussian Latent Tree Models

Yuval Dagan, Vardis Kandiros, Constantinos Daskalakis; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2597-2667

Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data

Spencer Frei, Niladri S Chatterji, Peter Bartlett; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2668-2703

Minimax Regret Optimization for Robust Machine Learning under Distribution Shift

Alekh Agarwal, Tong Zhang; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2704-2729

Offline Reinforcement Learning with Realizability and Single-policy Concentrability

Wenhao Zhan, Baihe Huang, Audrey Huang, Nan Jiang, Jason Lee; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2730-2775

Non-Linear Reinforcement Learning in Large Action Spaces: Structural Conditions and Sample-efficiency of Posterior Sampling

Alekh Agarwal, Tong Zhang; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2776-2814

Learning GMMs with Nearly Optimal Robustness Guarantees

Allen Liu, Ankur Moitra; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2815-2895

Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo

Krishna Balasubramanian, Sinho Chewi, Murat A Erdogdu, Adil Salim, Shunshi Zhang; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2896-2923

Understanding Riemannian Acceleration via a Proximal Extragradient Framework

Jikai Jin, Suvrit Sra; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2924-2962

On Almost Sure Convergence Rates of Stochastic Gradient Methods

Jun Liu, Ye Yuan; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2963-2983

Improved analysis for a proximal algorithm for sampling

Yongxin Chen, Sinho Chewi, Adil Salim, Andre Wibisono; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2984-3014

Realizable Learning is All You Need

Max Hopkins, Daniel M. Kane, Shachar Lovett, Gaurav Mahajan; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3015-3069

Streaming Algorithms for Ellipsoidal Approximation of Convex Polytopes

Yury Makarychev, Naren Sarayu Manoj, Max Ovsiankin; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3070-3093

The Pareto Frontier of Instance-Dependent Guarantees in Multi-Player Multi-Armed Bandits with no Communication

Allen X Liu, Mark Sellke; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3094-3094

Minimax Regret on Patterns Using Kullback-Leibler Divergence Covering

Jennifer Tang; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3095-3112

Sharp Constants in Uniformity Testing via the Huber Statistic

Shivam Gupta, Eric Price; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3113-3192

Low-Degree Multicalibration

Parikshit Gopalan, Michael P Kim, Mihir A Singhal, Shengjia Zhao; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3193-3234

Thompson Sampling Achieves $\tilde{O}(\sqrt{T})$ Regret in Linear Quadratic Control

Taylan Kargin, Sahin Lale, Kamyar Azizzadenesheli, Animashree Anandkumar, Babak Hassibi; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3235-3284

Return of the bias: Almost minimax optimal high probability bounds for adversarial linear bandits

Julian Zimmert, Tor Lattimore; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3285-3312

Uniform Stability for First-Order Empirical Risk Minimization

Amit Attia, Tomer Koren; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3313-3332

Single Trajectory Nonparametric Learning of Nonlinear Dynamics

Ingvar M Ziemann, Henrik Sandberg, Nikolai Matni; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3333-3364

On characterizations of learnability with computable learners

Tom F. Sterkenburg; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3365-3379

Stability vs Implicit Bias of Gradient Methods on Separable Data and Beyond

Matan Schliserman, Tomer Koren; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3380-3394

Near optimal efficient decoding from pooled data

Max Hahn-Klimroth, Noela Müller; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3395-3409

Kernel interpolation in Sobolev spaces is not consistent in low dimensions

Simon Buchholz; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3410-3440

Random Graph Matching in Geometric Models: the Case of Complete Graphs

Haoyu Wang, Yihong Wu, Jiaming Xu, Israel Yolou; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3441-3488

Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation

Dylan J Foster, Akshay Krishnamurthy, David Simchi-Levi, Yunzong Xu; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3489-3489

Improved Parallel Algorithm for Minimum Cost Submodular Cover Problem

Yingli Ran, Zhao Zhang, Shaojie Tang; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3490-3502

The Dynamics of Riemannian Robbins-Monro Algorithms

Mohammad Reza Karimi, Ya-Ping Hsieh, Panayotis Mertikopoulos, Andreas Krause; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3503-3503

Corruption-Robust Contextual Search through Density Updates

Renato Paes Leme, Chara Podimata, Jon Schneider; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3504-3505

On The Memory Complexity of Uniformity Testing

Tomer Berg, Or Ordentlich, Ofer Shayevitz; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3506-3523

Generalization Bounds via Convex Analysis

Gabor Lugosi, Gergely Neu; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3524-3546

Private Matrix Approximation and Geometry of Unitary Orbits

Oren Mangoubi, Yikai Wu, Satyen Kale, Abhradeep Thakurta, Nisheeth K. Vishnoi; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3547-3588

Efficient Online Linear Control with Stochastic Convex Costs and Unknown Dynamics

Asaf B Cassel, Alon Cohen, Tomer Koren; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3589-3604

Two-Sided Weak Submodularity for Matroid Constrained Optimization and Regression

Theophile Thiery, Justin Ward; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3605-3634

Corralling a Larger Band of Bandits: A Case Study on Switching Regret for Linear Bandits

Haipeng Luo, Mengxiao Zhang, Peng Zhao, Zhi-Hua Zhou; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3635-3684

Assemblies of neurons learn to classify well-separated distributions

Max Dabagia, Santosh S Vempala, Christos Papadimitriou; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3685-3717

The Structured Abstain Problem and the Lovász Hinge

Jessica J Finocchiaro, Rafael Frongillo, Enrique B Nueve; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3718-3740

Fast algorithm for overcomplete order-3 tensor decomposition

Jingqiu Ding, Tommaso d’Orsi, Chih-Hung Liu, David Steurer, Stefan Tiegel; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3741-3799

Hardness of Maximum Likelihood Learning of DPPs

Elena Grigorescu, Brendan Juba, Karl Wimmer, Ning Xie; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3800-3819

Learning to Control Linear Systems can be Hard

Anastasios Tsiamis, Ingvar M Ziemann, Manfred Morari, Nikolai Matni, George J. Pappas; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3820-3857

Horizon-Free Reinforcement Learning in Polynomial Time: the Power of Stationary Policies

Zihan Zhang, Xiangyang Ji, Simon Du; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3858-3904

On the well-spread property and its relation to linear regression

Hongjie Chen, Tommaso d’Orsi; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3905-3935

Optimal SQ Lower Bounds for Robustly Learning Discrete Product Distributions and Ising Models

Ilias Diakonikolas, Daniel M. Kane, Yuxin Sun; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3936-3978

Private High-Dimensional Hypothesis Testing

Shyam Narayanan; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3979-4027

How catastrophic can catastrophic forgetting be in linear regression?

Itay Evron, Edward Moroshko, Rachel Ward, Nathan Srebro, Daniel Soudry; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4028-4079

Efficient decentralized multi-agent learning in asymmetric queuing systems

Daniel Freund, Thodoris Lykouris, Wentao Weng; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4080-4084

Online Learning to Transport via the Minimal Selection Principle

Wenxuan Guo, YoonHaeng Hur, Tengyuan Liang, Chris Ryan; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4085-4109

On the Role of Channel Capacity in Learning Gaussian Mixture Models

Elad Romanov, Tamir Bendory, Or Ordentlich; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4110-4159

Parameter-free Mirror Descent

Andrew Jacobsen, Ashok Cutkosky; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4160-4211

Chained generalisation bounds

Eugenio Clerico, Amitis Shidani, George Deligiannidis, Arnaud Doucet; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4212-4257

Near-Optimal Statistical Query Hardness of Learning Halfspaces with Massart Noise

Ilias Diakonikolas, Daniel Kane; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4258-4282

Faster online calibration without randomization: interval forecasts and the power of two choices

Chirag Gupta, Aaditya Ramdas; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4283-4309

Universality of empirical risk minimization

Andrea Montanari, Basil N. Saeed; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4310-4312

Learning a Single Neuron with Adversarial Label Noise via Gradient Descent

Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4313-4361

Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Methods

Yujia Jin, Aaron Sidford, Kevin Tian; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4362-4415

Rate-Distortion Theoretic Generalization Bounds for Stochastic Learning Algorithms

Milad Sefidgaran, Amin Gohari, Gaël Richard, Umut Simsekli; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4416-4463

Scale-free Unconstrained Online Learning for Curved Losses

Jack J. Mayo, Hedi Hadiji, Tim van Erven; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4464-4497

Robustly-reliable learners under poisoning attacks

Maria-Florina Balcan, Avrim Blum, Steve Hanneke, Dravyansh Sharma; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4498-4534

Non-Gaussian Component Analysis via Lattice Basis Reduction

Ilias Diakonikolas, Daniel Kane; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4535-4547

Can Q-learning be Improved with Advice?

Noah Golowich, Ankur Moitra; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4548-4619

Non-Convex Optimization with Certificates and Fast Rates Through Kernel Sums of Squares

Blake Woodworth, Francis Bach, Alessandro Rudi; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4620-4642

Hierarchical Clustering in Graph Streams: Single-Pass Algorithms and Space Lower Bounds

Sepehr Assadi, Vaggos Chatziafratis, Jakub Łącki, Vahab Mirrokni, Chen Wang; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4643-4702

Robust Sparse Mean Estimation via Sum of Squares

Ilias Diakonikolas, Daniel M. Kane, Sushrut Karmalkar, Ankit Pensia, Thanasis Pittas; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4703-4763

Statistical and Computational Phase Transitions in Group Testing

Amin Coja-Oghlan, Oliver Gebhard, Max Hahn-Klimroth, Alexander S Wein, Ilias Zadik; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4764-4781

The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks

Emmanuel Abbe, Enric Boix Adsera, Theodor Misiakiewicz; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4782-4887

Eigenspace Restructuring: A Principle of Space and Frequency in Neural Networks

Lechao Xiao; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4888-4944

Sampling Approximately Low-Rank Ising Models: MCMC meets Variational Methods

Frederic Koehler, Holden Lee, Andrej Risteski; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4945-4988

Strong Memory Lower Bounds for Learning Natural Models

Gavin Brown, Mark Bun, Adam Smith; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4989-5029

On the power of adaptivity in statistical adversaries

Guy Blanc, Jane Lange, Ali Malik, Li-Yang Tan; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5030-5061

Sample-Efficient Reinforcement Learning in the Presence of Exogenous Information

Yonathan Efroni, Dylan J Foster, Dipendra Misra, Akshay Krishnamurthy, John Langford; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5062-5127

The Query Complexity of Local Search and Brouwer in Rounds

Simina Branzei, Jiawei Li; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5128-5145

Complete Policy Regret Bounds for Tallying Bandits

Dhruv Malik, Yuanzhi Li, Aarti Singh; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5146-5174

When Is Partially Observable Reinforcement Learning Not Scary?

Qinghua Liu, Alan Chung, Csaba Szepesvari, Chi Jin; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5175-5220

Strategizing against Learners in Bayesian Games

Yishay Mansour, Mehryar Mohri, Jon Schneider, Balasubramanian Sivan; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5221-5252

Orthogonal Statistical Learning with Self-Concordant Loss

Lang Liu, Carlos Cinelli, Zaid Harchaoui; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5253-5277

Clustering with Queries under Semi-Random Noise

Alberto Del Pia, Mingchen Ma, Christos Tzamos; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5278-5313

Efficient Projection-Free Online Convex Optimization with Membership Oracle

Zakaria Mhammedi; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5314-5390

Better Private Algorithms for Correlation Clustering

Daogao Liu; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5391-5412

Neural Networks can Learn Representations with Gradient Descent

Alexandru Damian, Jason Lee, Mahdi Soltanolkotabi; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5413-5452

Stochastic linear optimization never overfits with quadratically-bounded losses on general data

Matus Telgarsky; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5453-5488

Multilevel Optimization for Inverse Problems

Simon Weissmann, Ashia Wilson, Jakob Zech; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5489-5524

High-Dimensional Projection Pursuit: Outer Bounds and Applications to Interpolation in Neural Networks

Kangjie Zhou, Andrea Montanari; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5525-5527

Memorize to generalize: on the necessity of interpolation in high dimensional linear regression

Chen Cheng, John Duchi, Rohith Kuditipudi; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5528-5560

Damped Online Newton Step for Portfolio Selection

Zakaria Mhammedi, Alexander Rakhlin; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5561-5595

From Sampling to Optimization on Discrete Domains with Applications to Determinant Maximization

Nima Anari, Thuy-Duong Vuong; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5596-5618

Open Problem: Properly learning decision trees in polynomial time?

Guy Blanc, Jane Lange, Mingda Qiao, Li-Yang Tan; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5619-5623

Open Problem: Regret Bounds for Noise-Free Kernel-Based Bandits

Sattar Vakili; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5624-5629

Open Problem: Running time complexity of accelerated $\ell_1$-regularized PageRank

Kimon Fountoulakis, Shenghao Yang; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5630-5632

Open Problem: Do you pay for Privacy in Online learning?

Amartya Sanyal, Giorgia Ramponi; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5633-5637

Open Problem: Better Differentially Private Learning Algorithms with Margin Guarantees

Raef Bassily, Mehryar Mohri, Ananda Theertha Suresh; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5638-5643

Open Problem: Finite-Time Instance Dependent Optimality for Stochastic Online Learning with Feedback Graphs

Teodor Vanislavov Marinov, Mehryar Mohri, Julian Zimmert; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5644-5649

Open Problem: Optimal Best Arm Identification with Fixed-Budget

Chao Qin; Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5650-5654
