Volume 247: The Thirty Seventh Annual Conference on Learning Theory, 30 June-3 July 2024, Edmonton, Canada

Editors: Shipra Agrawal, Aaron Roth

Contents:

Preface

Conference on Learning Theory 2024: Preface

Shipra Agrawal, Aaron Roth; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:i-i

Original Papers

Limits of Approximating the Median Treatment Effect

Raghavendra Addanki, Siddharth Bhandari; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1-21

Majority-of-Three: The Simplest Optimal Learner?

Ishaq Aden-Ali, Mikael Møller Høgsgaard, Kasper Green Larsen, Nikita Zhivotovskiy; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:22-45

Metalearning with Very Few Samples Per Task

Maryam Aliakbarpour, Konstantina Bairaktari, Gavin Brown, Adam Smith, Nathan Srebro, Jonathan Ullman; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:46-93

A Unified Characterization of Private Learnability via Graph Theory

Noga Alon, Shay Moran, Hilla Schefler, Amir Yehudayoff; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:94-129

Mitigating Covariate Shift in Misspecified Regression with Applications to Reinforcement Learning

Philip Amortila, Tongyi Cao, Akshay Krishnamurthy; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:130-160

Fast parallel sampling under isoperimetry

Nima Anari, Sinho Chewi, Thuy-Duong Vuong; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:161-185

Two fundamental limits for uncertainty quantification in predictive inference

Felipe Areces, Chen Cheng, John Duchi, Rohith Kuditipudi; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:186-218

Mode Estimation with Partial Feedback

Charles Arnal, Vivien Cabannes, Vianney Perchet; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:219-220

Universally Instance-Optimal Mechanisms for Private Statistical Estimation

Hilal Asi, John C. Duchi, Saminul Haque, Zewei Li, Feng Ruan; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:221-259

Regularization and Optimal Multiclass Learning

Julian Asilis, Siddartha Devic, Shaddin Dughmi, Vatsal Sharan, Shang-Hua Teng; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:260-310

The Best Arm Evades: Near-optimal Multi-pass Streaming Lower Bounds for Pure Exploration in Multi-armed Bandits

Sepehr Assadi, Chen Wang; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:311-358

Universal Rates for Regression: Separations between Cut-Off and Absolute Loss

Idan Attias, Steve Hanneke, Alkis Kalavasis, Amin Karbasi, Grigoris Velegkas; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:359-405

Learning Neural Networks with Sparse Activations

Pranjal Awasthi, Nishanth Dikkala, Pritish Kamath, Raghu Meka; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:406-425

The SMART approach to instance-optimal online learning

Siddhartha Banerjee, Alankrita Bhatt, Christina Lee Yu; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:426-426

Detection of $L_\infty$ Geometry in Random Geometric Graphs: Suboptimality of Triangles and Cluster Expansion

Kiril Bangachev, Guy Bresler; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:427-497

Metric Clustering and MST with Strong and Weak Distance Oracles

MohammadHossein Bateni, Prathamesh Dharangutte, Rajesh Jayaram, Chen Wang; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:498-550

Correlated Binomial Process

Moïse Blanchard, Doron Cohen, Aryeh Kontorovich; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:551-595

On the Performance of Empirical Risk Minimization with Smoothed Data

Adam Block, Alexander Rakhlin, Abhishek Shetty; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:596-629

Errors are Robustly Tamed in Cumulative Knowledge Processes

Anna Brandenberger, Cassandra Marcussen, Elchanan Mossel, Madhu Sudan; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:630-631

Thresholds for Reconstruction of Random Hypergraphs From Graph Projections

Guy Bresler, Chenghao Guo, Yury Polyanskiy; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:632-647

A Theory of Interpretable Approximations

Marco Bressan, Nicolò Cesa-Bianchi, Emmanuel Esposito, Yishay Mansour, Shay Moran, Maximilian Thiessen; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:648-668

Efficient Algorithms for Learning Monophonic Halfspaces in Graphs

Marco Bressan, Emmanuel Esposito, Maximilian Thiessen; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:669-696

Online Stackelberg Optimization via Nonlinear Control

William Brown, Christos Papadimitriou, Tim Roughgarden; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:697-749

Insufficient Statistics Perturbation: Stable Estimators for Private Least Squares (Extended Abstract)

Gavin Brown, Jonathan Hayase, Samuel Hopkins, Weihao Kong, Xiyang Liu, Sewoong Oh, Juan C Perdomo, Adam Smith; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:750-751

Computational-Statistical Gaps for Improper Learning in Sparse Linear Regression

Rares-Darius Buhai, Jingqiu Ding, Stefan Tiegel; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:752-771

The Price of Adaptivity in Stochastic Convex Optimization

Yair Carmon, Oliver Hinder; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:772-774

Information-theoretic generalization bounds for learning from quantum data

Matthias C. Caro, Tom Gur, Cambyse Rouzé, Daniel Stilck França, Sathyawageeswar Subramanian; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:775-839

Non-Clashing Teaching Maps for Balls in Graphs

Jérémie Chalopin, Victor Chepoi, Fionn Mc Inerney, Sébastien Ratel; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:840-875

Smoothed Analysis for Learning Concepts with Low Intrinsic Dimension

Gautam Chandrasekaran, Adam Klivans, Vasilis Kontonis, Raghu Meka, Konstantinos Stavropoulos; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:876-922

Dual VC Dimension Obstructs Sample Compression by Embeddings

Zachary Chase, Bogdan Chornomaz, Steve Hanneke, Shay Moran, Amir Yehudayoff; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:923-946

On Finding Small Hyper-Gradients in Bilevel Optimization: Hardness Results and Improved Analysis

Lesi Chen, Jing Xu, Jingzhao Zhang; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:947-980

A faster and simpler algorithm for learning shallow networks

Sitan Chen, Shyam Narayanan; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:981-994

Near-Optimal Learning and Planning in Separated Latent MDPs

Fan Chen, Constantinos Daskalakis, Noah Golowich, Alexander Rakhlin; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:995-1067

Scale-free Adversarial Reinforcement Learning

Mingyu Chen, Xuezhou Zhang; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1068-1101

The power of an adversary in Glauber dynamics

Byron Chin, Ankur Moitra, Elchanan Mossel, Colin Sandon; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1102-1124

Undetectable Watermarks for Language Models

Miranda Christ, Sam Gunn, Or Zamir; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1125-1139

Risk-Sensitive Online Algorithms (Extended Abstract)

Nicolas Christianson, Bo Sun, Steven Low, Adam Wierman; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1140-1141

Statistical curriculum learning: An elimination algorithm achieving an oracle risk

Omer Cohen, Ron Meir, Nir Weinberger; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1142-1199

Lower Bounds for Differential Privacy Under Continual Observation and Online Threshold Queries

Edith Cohen, Xin Lyu, Jelani Nelson, Tamás Sarlós, Uri Stemmer; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1200-1222

Learnability Gaps of Strategic Classification

Lee Cohen, Yishay Mansour, Shay Moran, Han Shao; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1223-1259

Refined Sample Complexity for Markov Games with Independent Linear Function Approximation (Extended Abstract)

Yan Dai, Qiwen Cui, Simon S. Du; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1260-1261

Computational-Statistical Gaps in Gaussian Single-Index Models (Extended Abstract)

Alex Damian, Loucas Pillaud-Vivien, Jason Lee, Joan Bruna; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1262-1262

Is Efficient PAC Learning Possible with an Oracle That Responds "Yes" or "No"?

Constantinos Daskalakis, Noah Golowich; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1263-1307

Testable Learning of General Halfspaces with Adversarial Label Noise

Ilias Diakonikolas, Daniel Kane, Sihan Liu, Nikos Zarifis; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1308-1335

Statistical Query Lower Bounds for Learning Truncated Gaussians

Ilias Diakonikolas, Daniel M. Kane, Thanasis Pittas, Nikos Zarifis; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1336-1363

Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials

Ilias Diakonikolas, Daniel M. Kane; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1364-1378

On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective

Daniil Dmitriev, Kristóf Szabó, Amartya Sanyal; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1379-1398

Physics-informed machine learning as a kernel method

Nathan Doumèche, Francis Bach, Gérard Biau, Claire Boyer; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1399-1450

Universal Lower Bounds and Optimal Rates: Achieving Minimax Clustering Error in Sub-Exponential Mixture Models

Maximilien Dreveton, Alperen Gözeten, Matthias Grossglauser, Patrick Thiran; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1451-1485

An information-theoretic lower bound in time-uniform estimation

John Duchi, Saminul Haque; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1486-1500

On sampling diluted Spin-Glasses using Glauber Dynamics

Charilaos Efthymiou, Kostas Zampetakis; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1501-1515

Minimax Linear Regression under the Quantile Risk

Ayoub El Hanchi, Chris Maddison, Murat Erdogdu; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1516-1572

The Real Price of Bandit Information in Multiclass Classification

Liad Erez, Alon Cohen, Tomer Koren, Yishay Mansour, Shay Moran; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1573-1598

Topological Expressivity of ReLU Neural Networks

Ekin Ergen, Moritz Grillo; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1599-1642

Contraction of Markovian Operators in Orlicz Spaces and Error Bounds for Markov Chain Monte Carlo (Extended Abstract)

Amedeo Roberto Esposito, Marco Mondelli; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1643-1645

Computation-information gap in high-dimensional clustering

Bertrand Even, Christophe Giraud, Nicolas Verzelen; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1646-1712

Online Newton Method for Bandit Convex Optimisation (Extended Abstract)

Hidde Fokkema, Dirk Van der Hoeven, Tor Lattimore, Jack J. Mayo; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1713-1714

Agnostic Active Learning of Single Index Models with Linear Sample Complexity

Aarshvi Gajjar, Wai Ming Tai, Xingyu Xu, Chinmay Hegde, Christopher Musco, Yi Li; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1715-1754

Safe Linear Bandits over Unknown Polytopes

Aditya Gangrade, Tianrui Chen, Venkatesh Saligrama; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1755-1795

Sampling Polytopes with Riemannian HMC: Faster Mixing via the Lewis Weights Barrier

Khashayar Gatmiry, Jonathan Kelner, Santosh S. Vempala; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1796-1881

$(\epsilon, u)$-Adaptive Regret Minimization in Heavy-Tailed Bandits

Gianmarco Genalti, Lupo Marsigli, Nicola Gatti, Alberto Maria Metelli; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1882-1915

On Convex Optimization with Semi-Sensitive Features

Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Raghu Meka, Chiyuan Zhang; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1916-1938

Linear Bellman Completeness Suffices for Efficient Online Reinforcement Learning with Few Actions

Noah Golowich, Ankur Moitra; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1939-1981

Mirror Descent Algorithms with Nearly Dimension-Independent Rates for Differentially-Private Stochastic Saddle-Point Problems (Extended Abstract)

Tomas Gonzalez, Cristobal Guzman, Courtney Paquette; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1982-1982

On Computationally Efficient Multi-Class Calibration

Parikshit Gopalan, Lunjia Hu, Guy N. Rothblum; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1983-2026

Omnipredictors for regression and the approximate rank of convex functions

Parikshit Gopalan, Princewill Okoroafor, Prasad Raghavendra, Abhishek Sherry, Mihir Singhal; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2027-2070

Identification of mixtures of discrete product distributions in near-optimal sample and time complexity

Spencer L. Gordon, Erik Jahn, Bijan Mazaheri, Yuval Rabani, Leonard J. Schulman; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2071-2091

On the Computability of Robust PAC Learning

Pascale Gourdeau, Tosca Lechner, Ruth Urner; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2092-2121

Principal eigenstate classical shadows

Daniel Grier, Hakop Pashayan, Luke Schaeffer; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2122-2165

Community detection in the hypergraph stochastic block model and reconstruction on hypertrees

Yuzhou Gu, Aaradhya Pandey; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2166-2203

Stochastic Constrained Contextual Bandits via Lyapunov Optimization Based Estimation to Decision Framework

Hengquan Guo, Xin Liu; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2204-2231

Beyond Catoni: Sharper Rates for Heavy-Tailed and Robust Mean Estimation

Shivam Gupta, Samuel Hopkins, Eric Price; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2232-2269

Prediction from compression for models with infinite memory, with applications to hidden Markov and renewal processes

Yanjun Han, Tianze Jiang, Yihong Wu; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2270-2307

The Star Number and Eluder Dimension: Elementary Observations About the Dimensions of Disagreement

Steve Hanneke; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2308-2359

List Sample Compression and Uniform Convergence

Steve Hanneke, Shay Moran, Tom Waknine; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2360-2388

Adversarially-Robust Inference on Trees via Belief Propagation

Samuel B. Hopkins, Anqi Li; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2389-2417

On the sample complexity of parameter estimation in logistic regression with normal design

Daniel Hsu, Arya Mazumdar; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2418-2437

Faster Sampling without Isoperimetry via Diffusion-based Monte Carlo

Xunpeng Huang, Difan Zou, Hanze Dong, Yi-An Ma, Tong Zhang; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2438-2493

Information-Theoretic Thresholds for the Alignments of Partially Correlated Graphs

Dong Huang, Xianwen Song, Pengkun Yang; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2494-2518

Reconstructing the Geometry of Random Geometric Graphs (Extended Abstract)

Han Huang, Pakawut Jiradilok, Elchanan Mossel; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2519-2521

Adaptive Learning Rate for Follow-the-Regularized-Leader: Competitive Analysis and Best-of-Both-Worlds

Shinji Ito, Taira Tsuchiya, Junya Honda; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2522-2563

Black-Box k-to-1-PCA Reductions: Theory and Applications

Arun Jambulapati, Syamantak Kumar, Jerry Li, Shourya Pandey, Ankit Pensia, Kevin Tian; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2564-2607

Closing the Computational-Query Depth Gap in Parallel Stochastic Convex Optimization

Arun Jambulapati, Aaron Sidford, Kevin Tian; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2608-2643

Offline Reinforcement Learning: Role of State Aggregation and Trajectory Data

Zeyu Jia, Alexander Rakhlin, Ayush Sekhari, Chen-Yu Wei; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2644-2719

Algorithms for mean-field variational inference via polyhedral optimization in the Wasserstein space

Yiheng Jiang, Sinho Chewi, Aram-Alexandre Pooladian; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2720-2721

Faster Spectral Density Estimation and Sparsification in the Nuclear Norm (Extended Abstract)

Yujia Jin, Ishani Karmarkar, Christopher Musco, Aaron Sidford, Apoorv Vikram Singh; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2722-2722

Some Constructions of Private, Efficient, and Optimal $K$-Norm and Elliptic Gaussian Noise

Matthew Joseph, Alexander Yu; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2723-2766

Smaller Confidence Intervals From IPW Estimators via Data-Dependent Coarsening (Extended Abstract)

Alkis Kalavasis, Anay Mehrotra, Manolis Zampetakis; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2767-2767

New Lower Bounds for Testing Monotonicity and Log Concavity of Distributions

Yuqian Cheng, Daniel Kane, Zhicheng Zheng; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2768-2794

Choosing the p in Lp Loss: Adaptive Rates for Symmetric Mean Estimation

Yu-Chun Kao, Min Xu, Cun-Hui Zhang; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2795-2839

Lasso with Latents: Efficient Estimation, Covariate Rescaling, and Computational-Statistical Gaps

Jonathan Kelner, Frederic Koehler, Raghu Meka, Dhruv Rohatgi; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2840-2886

Testable Learning with Distribution Shift

Adam Klivans, Konstantinos Stavropoulos, Arsen Vasilyan; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2887-2943

Learning Intersections of Halfspaces with Distribution Shift: Improved Algorithms and SQ Lower Bounds

Adam Klivans, Konstantinos Stavropoulos, Arsen Vasilyan; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2944-2978

Superconstant Inapproximability of Decision Tree Learning

Caleb Koch, Carmen Strassle, Li-Yang Tan; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:2979-3010

Convergence of Kinetic Langevin Monte Carlo on Lie groups

Lingkai Kong, Molei Tao; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3011-3063

Active Learning with Simple Questions

Vasilis Kontonis, Mingchen Ma, Christos Tzamos; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3064-3098

Sampling from the Mean-Field Stationary Distribution

Yunbum Kook, Matthew S. Zhang, Sinho Chewi, Murat A. Erdogdu, Mufan (Bill) Li; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3099-3136

Gaussian Cooling and Dikin Walks: The Interior-Point Method for Logconcave Sampling

Yunbum Kook, Santosh S. Vempala; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3137-3240

Simple online learning with consistent oracle

Alexander Kozachinskiy, Tomasz Steifer; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3241-3256

Accelerated Parameter-Free Stochastic Optimization

Itai Kreisler, Maor Ivgi, Oliver Hinder, Yair Carmon; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3257-3324

Better-than-KL PAC-Bayes Bounds

Ilja Kuzborskij, Kwang-Sung Jun, Yulian Wu, Kyoungseok Jang, Francesco Orabona; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3325-3352

Inherent limitations of dimensions for characterizing learnability of distribution classes

Tosca Lechner, Shai Ben-David; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3353-3374

Follow-the-Perturbed-Leader with Fréchet-type Tail Distributions: Optimality in Adversarial Bandits and Best-of-Both-Worlds

Jongyeong Lee, Junya Honda, Shinji Ito, Min-hwan Oh; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3375-3430

Minimax-optimal reward-agnostic exploration in reinforcement learning

Gen Li, Yuling Yan, Yuxin Chen, Jianqing Fan; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3431-3436

Optimistic Rates for Learning from Label Proportions

Gene Li, Lin Chen, Adel Javanmard, Vahab Mirrokni; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3437-3474

Online Policy Optimization in Unknown Nonlinear Systems

Yiheng Lin, James A. Preiss, Fengze Xie, Emile Anand, Soon-Jo Chung, Yisong Yue, Adam Wierman; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3475-3522

The role of randomness in quantum state certification with unentangled measurements

Yuhan Liu, Jayadev Acharya; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3523-3555

Spatial properties of Bayesian unsupervised trees

Linxi Liu, Li Ma; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3556-3581

The Predicted-Updates Dynamic Model: Offline, Incremental, and Decremental to Fully Dynamic Transformations

Quanquan C. Liu, Vaidehi Srinivas; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3582-3641

Autobidders with Budget and ROI Constraints: Efficiency, Regret, and Pacing Dynamics

Brendan Lucier, Sarath Pattathil, Aleksandrs Slivkins, Mengxiao Zhang; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3642-3643

Linear bandits with polylogarithmic minimax regret

Josep Lumbreras, Marco Tomamichel; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3644-3682

Convergence of Gradient Descent with Small Initialization for Unregularized Matrix Completion

Jianhao Ma, Salar Fattahi; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3683-3742

Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs

Davide Maran, Alberto Maria Metelli, Matteo Papini, Marcello Restelli; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3743-3774

Harmonics of Learning: Universal Fourier Features Emerge in Invariant Networks

Giovanni Luca Marchetti, Christopher J Hillar, Danica Kragic, Sophia Sanborn; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3775-3797

Low-degree phase transitions for detecting a planted clique in sublinear time

Jay Mardia, Kabir Aladin Verchand, Alexander S. Wein; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3798-3822

Fast, blind, and accurate: Tuning-free sparse regression with global linear convergence

Claudio Mayrink Verdun, Oleh Melnyk, Felix Krahmer, Peter Jung; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3823-3872

Fundamental Limits of Non-Linear Low-Rank Matrix Estimation

Pierre Mergny, Justin Ko, Florent Krzakala, Lenka Zdeborová; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3873-3873

Finding Super-spreaders in Network Cascades

Elchanan Mossel, Anirudh Sridhar; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3874-3914

Exact Mean Square Linear Stability Analysis for SGD

Rotem Mulayoff, Tomer Michaeli; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3915-3969

Optimistic Information Directed Sampling

Gergely Neu, Matteo Papini, Ludovic Schwartz; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3970-4006

Robust Distribution Learning with Local and Global Adversarial Corruptions (Extended Abstract)

Sloan Nietert, Ziv Goldfeld, Soroosh Shafiee; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4007-4008

Learning sum of diverse features: computational hardness and efficient gradient-based training for ridge combinations

Kazusato Oko, Yujin Song, Taiji Suzuki, Denny Wu; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4009-4081

Depth Separation in Norm-Bounded Infinite-Width Neural Networks

Suzanna Parkinson, Greg Ongie, Rebecca Willett, Ohad Shamir, Nathan Srebro; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4082-4114

The Limits and Potentials of Local SGD for Distributed Heterogeneous Learning with Intermittent Communication

Kumar Kshitij Patel, Margalit Glasgow, Ali Zindari, Lingxiao Wang, Sebastian U Stich, Ziheng Cheng, Nirmit Joshi, Nathan Srebro; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4115-4157

The complexity of approximate (coarse) correlated equilibrium for incomplete information games

Binghui Peng, Aviad Rubinstein; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4158-4184

The sample complexity of multi-distribution learning

Binghui Peng; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4185-4204

The Sample Complexity of Simple Binary Hypothesis Testing

Ankit Pensia, Varun Jog, Po-Ling Loh; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4205-4206

Smooth Lower Bounds for Differentially Private Algorithms via Padding-and-Permuting Fingerprinting Codes

Naty Peter, Eliad Tsfadia, Jonathan Ullman; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4207-4239

Sample-Optimal Locally Private Hypothesis Selection and the Provable Benefits of Interactivity

Alireza F. Pour, Hassan Ashtiani, Shahab Asoodeh; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4240-4275

Dimension-free Structured Covariance Estimation

Nikita Puchkin, Maxim Rakhuba; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4276-4306

On the Distance from Calibration in Sequential Prediction

Mingda Qiao, Letian Zheng; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4307-4357

Apple Tasting: Combinatorial Dimensions and Minimax Rates

Vinod Raman, Unique Subedi, Ananth Raman, Ambuj Tewari; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4358-4380

Online Learning with Set-valued Feedback

Vinod Raman, Unique Subedi, Ambuj Tewari; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4381-4412

Fit Like You Sample: Sample-Efficient Generalized Score Matching from Fast Mixing Diffusions

Yilong Qin, Andrej Risteski; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4413-4457

Online Structured Prediction with Fenchel–Young Losses and Improved Surrogate Regret for Online Multiclass Classification with Logistic Loss

Shinsaku Sakaue, Han Bao, Taira Tsuchiya, Taihei Oki; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4458-4486

Provable Advantage in Quantum PAC Learning

Wilfred Salmon, Sergii Strelchuk, Tom Gur; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4487-4510

Improved High-Probability Bounds for the Temporal Difference Learning Algorithm via Exponential Stability

Sergey Samsonov, Daniil Tiapkin, Alexey Naumov, Eric Moulines; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4511-4547

Adversarial Online Learning with Temporal Feedback Graphs

Khashayar Gatmiry, Jon Schneider; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4548-4572

Training Dynamics of Multi-Head Softmax Attention for In-Context Learning: Emergence, Convergence, and Optimality (Extended Abstract)

Siyu Chen, Heejune Sheen, Tianhao Wang, Zhuoran Yang; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4573-4573

A Non-Adaptive Algorithm for the Quantitative Group Testing Problem

Mahdi Soleymani, Tara Javidi; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4574-4592

Fast sampling from constrained spaces using the Metropolis-adjusted Mirror Langevin algorithm

Vishwak Srinivasan, Andre Wibisono, Ashia Wilson; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4593-4635

A non-backtracking method for long matrix and tensor completion

Ludovic Stephan, Yizhe Zhu; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4636-4690

Second Order Methods for Bandit Optimization and Control

Arun Suggala, Y Jennifer Sun, Praneeth Netrapalli, Elad Hazan; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4691-4763

Improved Hardness Results for Learning Intersections of Halfspaces

Stefan Tiegel; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4764-4786

Pruning is Optimal for Learning Sparse Features in High-Dimensions

Nuri Mert Vural, Murat A Erdogdu; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4787-4861

Nearly Optimal Regret for Decentralized Online Convex Optimization

Yuanyu Wan, Tong Wei, Mingli Song, Lijun Zhang; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4862-4888

Efficient Algorithms for Attributed Graph Alignment with Vanishing Edge Correlation (Extended Abstract)

Ziao Wang, Weina Wang, Lele Wang; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4889-4890

Nonlinear spiked covariance matrices and signal propagation in deep neural networks

Zhichao Wang, Denny Wu, Zhou Fan; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4891-4957

Optimal score estimation via empirical Bayes smoothing

Andre Wibisono, Yihong Wu, Kaylee Yingxi Yang; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4958-4991

Oracle-Efficient Hybrid Online Learning with Unknown Distribution

Changlong Wu, Jin Sima, Wojciech Szpankowski; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4992-5018

Large Stepsize Gradient Descent for Logistic Loss: Non-Monotonicity of the Loss Improves Optimization Efficiency

Jingfeng Wu, Peter L. Bartlett, Matus Telgarsky, Bin Yu; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5019-5073

Bridging the Gap: Rademacher Complexity in Robust and Standard Generalization

Jiancong Xiao, Ruoyu Sun, Qi Long, Weijie Su; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5074-5075

Multiple-output composite quantile regression through an optimal transport lens

Xuzhi Yang, Tengyao Wang; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5076-5122

Top-$K$ ranking with a monotone adversary

Yuepeng Yang, Antares Chen, Lorenzo Orecchia, Cong Ma; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5123-5162

Counting Stars is Constant-Degree Optimal For Detecting Any Planted Subgraph: Extended Abstract

Xifan Yu, Ilias Zadik, Peiyuan Zhang; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5163-5165

Fast two-time-scale stochastic gradient method with applications in reinforcement learning

Sihan Zeng, Thinh Doan; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5166-5212

Settling the sample complexity of online reinforcement learning

Zihan Zhang, Yuxin Chen, Jason D Lee, Simon S Du; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5213-5219

Optimal Multi-Distribution Learning

Zihan Zhang, Wenhao Zhan, Yuxin Chen, Simon S Du, Jason D Lee; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5220-5223

Spectral Estimators for Structured Generalized Linear Models via Approximate Message Passing (Extended Abstract)

Yihan Zhang, Hong Chang Ji, Ramji Venkataramanan, Marco Mondelli; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5224-5230

Gap-Free Clustering: Sensitivity and Robustness of SDP

Matthew Zurek, Yudong Chen; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5231-5300

Open Problems

Open Problem: Can Local Regularization Learn All Multiclass Problems?

Julian Asilis, Siddartha Devic, Shaddin Dughmi, Vatsal Sharan, Shang-Hua Teng; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5301-5305

Open Problem: What is the Complexity of Joint Differential Privacy in Linear Contextual Bandits?

Achraf Azize, Debabrota Basu; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5306-5311

Open Problem: Tight Characterization of Instance-Optimal Identity Testing

Clément Canonne; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5312-5316

Open Problem: Black-Box Reductions and Adaptive Gradient Methods for Nonconvex Optimization

Xinyi Chen, Elad Hazan; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5317-5324

Open problem: Direct Sums in Learning Theory

Steve Hanneke, Shay Moran, Tom Waknine; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5325-5329

Open Problem: Optimal Rates for Stochastic Decision-Theoretic Online Learning Under Differential Privacy

Bingshan Hu, Nishant A. Mehta; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5330-5334

Open Problem: Anytime Convergence Rate of Gradient Descent

Guy Kornowski, Ohad Shamir; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5335-5339

Open Problem: Order Optimal Regret Bounds for Kernel-Based Reinforcement Learning

Sattar Vakili; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5340-5344

Open problem: Convergence of single-timescale mean-field Langevin descent-ascent for two-player zero-sum games

Guillaume Wang, Lénaïc Chizat; Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5345-5350