Volume 65: Conference on Learning Theory, 7-10 July 2017, Amsterdam, Netherlands

Editors: Satyen Kale, Ohad Shamir

Preface: Conference on Learning Theory (COLT), 2017

Satyen Kale, Ohad Shamir; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1-3

Open Problem: First-Order Regret Bounds for Contextual Bandits

Alekh Agarwal, Akshay Krishnamurthy, John Langford, Haipeng Luo, Robert E. Schapire; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:4-7

Open Problem: Meeting Times for Learning Random Automata

Benjamin Fish, Lev Reyzin; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:8-11

Corralling a Band of Bandit Algorithms

Alekh Agarwal, Haipeng Luo, Behnam Neyshabur, Robert E. Schapire; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:12-38

Learning with Limited Rounds of Adaptivity: Coin Tossing, Multi-Armed Bandits, and Ranking from Pairwise Comparisons

Arpit Agarwal, Shivani Agarwal, Sepehr Assadi, Sanjeev Khanna; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:39-75

Thompson Sampling for the MNL-Bandit

Shipra Agrawal, Vashist Avadhanula, Vineet Goyal, Assaf Zeevi; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:76-78

Homotopy Analysis for Tensor PCA

Anima Anandkumar, Yuan Deng, Rong Ge, Hossein Mobahi; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:79-104

Correspondence retrieval

Alexandr Andoni, Daniel Hsu, Kevin Shi, Xiaorui Sun; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:105-126

Efficient PAC Learning from the Crowd

Pranjal Awasthi, Avrim Blum, Nika Haghtalab, Yishay Mansour; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:127-150

The Price of Selection in Differential Privacy

Mitali Bafna, Jonathan Ullman; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:151-168

Computationally Efficient Robust Sparse Estimation in High Dimensions

Sivaraman Balakrishnan, Simon S. Du, Jerry Li, Aarti Singh; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:169-212

Learning-Theoretic Foundations of Algorithm Configuration for Combinatorial Partitioning Problems

Maria-Florina Balcan, Vaishnavh Nagarajan, Ellen Vitercik, Colin White; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:213-274

The Sample Complexity of Optimizing a Convex Function

Eric Balkanski, Yaron Singer; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:275-301

Efficient Co-Training of Linear Separators under Weak Dependence

Avrim Blum, Yishay Mansour; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:302-318

Sampling from a log-concave distribution with compact support with proximal Langevin Monte Carlo

Nicolas Brosse, Alain Durmus, Éric Moulines, Marcelo Pereyra; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:319-342

Rates of estimation for determinantal point processes

Victor-Emmanuel Brunel, Ankur Moitra, Philippe Rigollet, John Urschel; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:343-345

Learning Disjunctions of Predicates

Nader H. Bshouty, Dana Drachsler-Cohen, Martin Vechev, Eran Yahav; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:346-369

Testing Bayesian Networks

Clement L. Canonne, Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:370-448

Multi-Observation Elicitation

Sebastian Casalaina-Martin, Rafael Frongillo, Tom Morgan, Bo Waggoner; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:449-464

Algorithmic Chaining and the Role of Partial Feedback in Online Nonparametric Learning

Nicolò Cesa-Bianchi, Pierre Gaillard, Claudio Gentile, Sébastien Gerchinovitz; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:465-481

Nearly Optimal Sampling Algorithms for Combinatorial Pure Exploration

Lijie Chen, Anupam Gupta, Jian Li, Mingda Qiao, Ruosong Wang; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:482-534

Towards Instance Optimal Bounds for Best Arm Identification

Lijie Chen, Jian Li, Mingda Qiao; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:535-592

Thresholding Based Outlier Robust PCA

Yeshwanth Cherapanamjeri, Prateek Jain, Praneeth Netrapalli; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:593-628

Tight Bounds for Bandit Combinatorial Optimization

Alon Cohen, Tamir Hazan, Tomer Koren; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:629-642

Online Learning Without Prior Information

Ashok Cutkosky, Kwabena Boahen; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:643-677

Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent

Arnak Dalalyan; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:678-689

Depth Separation for Neural Networks

Amit Daniely; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:690-696

Square Hellinger Subadditivity for Bayesian Networks and its Applications to Identity Testing

Constantinos Daskalakis, Qinxuan Pan; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:697-703

Ten Steps of EM Suffice for Mixtures of Two Gaussians

Constantinos Daskalakis, Christos Tzamos, Manolis Zampetakis; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:704-710

Learning Multivariate Log-concave Distributions

Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:711-727

Generalization for Adaptively-chosen Estimators via Stable Median

Vitaly Feldman, Thomas Steinke; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:728-757

Greed Is Good: Near-Optimal Submodular Maximization via Greedy Optimization

Moran Feldman, Christopher Harshaw, Amin Karbasi; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:758-784

A General Characterization of the Statistical Query Complexity

Vitaly Feldman; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:785-830

Stochastic Composite Least-Squares Regression with Convergence Rate $O(1/n)$

Nicolas Flammarion, Francis Bach; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:831-875

ZigZag: A New Approach to Adaptive Online Learning

Dylan J. Foster, Alexander Rakhlin, Karthik Sridharan; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:876-924

Memoryless Sequences for Differentiable Losses

Rafael Frongillo, Andrew Nobel; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:925-939

Matrix Completion from $O(n)$ Samples in Linear Time

David Gamarnik, Quan Li, Hongyi Zhang; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:940-947

High Dimensional Regression with Binary Coefficients. Estimating Squared Error and a Phase Transition

David Gamarnik, Ilias Zadik; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:948-953

Two-Sample Tests for Large Random Graphs Using Network Statistics

Debarghya Ghoshdastidar, Maurilio Gutzeit, Alexandra Carpentier, Ulrike von Luxburg; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:954-977

Effective Semisupervised Learning on Manifolds

Amir Globerson, Roi Livni, Shai Shalev-Shwartz; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:978-1003

Reliably Learning the ReLU in Polynomial Time

Surbhi Goel, Varun Kanade, Adam Klivans, Justin Thaler; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1004-1042

Fast Rates for Empirical Risk Minimization of Strict Saddle Problems

Alon Gonen, Shai Shalev-Shwartz; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1043-1063

Nearly-tight VC-dimension bounds for piecewise linear neural networks

Nick Harvey, Christopher Liaw, Abbas Mehrabian; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1064-1068

Submodular Optimization under Noise

Avinatan Hassidim, Yaron Singer; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1069-1122

Surprising properties of dropout in deep networks

David P. Helmbold, Philip M. Long; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1123-1146

Quadratic Upper Bound for Recursive Teaching Dimension of Finite VC Classes

Lunjia Hu, Ruihan Wu, Tianhong Li, Liwei Wang; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1147-1156

A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints

Bin Hu, Peter Seiler, Anders Rantzer; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1157-1189

The Hidden Hubs Problem

Ravindran Kannan, Santosh Vempala; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1190-1213

Predicting with Distributions

Michael Kearns, Zhiwei Steven Wu; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1214-1241

Bandits with Movement Costs and Adaptive Pricing

Tomer Koren, Roi Livni, Yishay Mansour; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1242-1268

Sparse Stochastic Bandits

Joon Kwon, Vianney Perchet, Claire Vernade; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1269-1270

On the Ability of Neural Nets to Express Distributions

Holden Lee, Rong Ge, Tengyu Ma, Andrej Risteski, Sanjeev Arora; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1271-1296

Fundamental limits of symmetric low-rank matrix estimation

Marc Lelarge, Léo Miolane; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1297-1301

Robust and Proper Learning for Mixtures of Gaussians via Systems of Polynomial Inequalities

Jerry Li, Ludwig Schmidt; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1302-1382

Adaptivity to Noise Parameters in Nonparametric Active Learning

Andrea Locatelli, Alexandra Carpentier, Samory Kpotufe; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1383-1416

Noisy Population Recovery from Unknown Noise

Shachar Lovett, Jiapeng Zhang; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1417-1431

Inapproximability of VC Dimension and Littlestone’s Dimension

Pasin Manurangsi, Aviad Rubinstein; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1432-1460

A Second-order Look at Stability and Generalization

Andreas Maurer; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1461-1475

Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality

Song Mei, Theodor Misiakiewicz, Andrea Montanari, Roberto Imbuzeiro Oliveira; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1476-1515

Mixing Implies Lower Bounds for Space Bounded Learning

Dana Moshkovitz, Michal Moshkovitz; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1516-1566

Fast rates for online learning in Linearly Solvable Markov Decision Processes

Gergely Neu, Vicenç Gómez; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1567-1588

Sample complexity of population recovery

Yury Polyanskiy, Ananda Theertha Suresh, Yihong Wu; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1589-1618

Exact tensor completion with sum-of-squares

Aaron Potechin, David Steurer; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1619-1673

Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis

Maxim Raginsky, Alexander Rakhlin, Matus Telgarsky; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1674-1703

On Equivalence of Martingale Tail Bounds and Deterministic Regret Inequalities

Alexander Rakhlin, Karthik Sridharan; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1704-1722

Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization

Jonathan Scarlett, Ilija Bogunovic, Volkan Cevher; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1723-1742

An Improved Parametrization and Analysis of the EXP3++ Algorithm for Stochastic and Adversarial Bandits

Yevgeny Seldin, Gábor Lugosi; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1743-1759

Fast and robust tensor decomposition with applications to dictionary learning

Tselil Schramm, David Steurer; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1760-1793

The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime

Max Simchowitz, Kevin Jamieson, Benjamin Recht; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1794-1834

On Learning vs. Refutation

Salil Vadhan; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1835-1848

Ignoring Is a Bliss: Learning with Large Noise Through Reweighting-Minimization

Daniel Vainsencher, Shie Mannor, Huan Xu; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1849-1881

Memory and Communication Efficient Distributed Stochastic Optimization with Minibatch Prox

Jialei Wang, Weiran Wang, Nathan Srebro; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1882-1919

Learning Non-Discriminatory Predictors

Blake Woodworth, Suriya Gunasekar, Mesrob I. Ohannessian, Nathan Srebro; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1920-1953

Empirical Risk Minimization for Stochastic Convex Optimization: $O(1/n)$- and $O(1/n^2)$-type of Risk Bounds

Lijun Zhang, Tianbao Yang, Rong Jin; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1954-1979

A Hitting Time Analysis of Stochastic Gradient Langevin Dynamics

Yuchen Zhang, Percy Liang, Moses Charikar; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1980-2022

Optimal learning via local entropies and sample compression

Nikita Zhivotovskiy; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:2023-2065