Proceedings of Machine Learning Research, Volume 49
29th Annual Conference on Learning Theory (COLT 2016)
Held at Columbia University, New York, New York, USA, 23-26 June 2016.
Published as Volume 49 of the Proceedings of Machine Learning Research on 06 June 2016.

Volume Edited by:
Vitaly Feldman
Alexander Rakhlin
Ohad Shamir

Series Editors:
Neil D. Lawrence
Mark Reid

Papers:

An efficient algorithm for contextual bandits with knapsacks, and an extension to concave objectives
Shipra Agrawal, Nikhil R. Devanur, Lihong Li
https://proceedings.mlr.press/v49/agrawal16

Learning and Testing Junta Distributions
Maryam Aliakbarpour, Eric Blais, Ronitt Rubinfeld
https://proceedings.mlr.press/v49/aliakbarpour16

Sign rank versus VC dimension
Noga Alon, Shay Moran, Amir Yehudayoff
https://proceedings.mlr.press/v49/alon16

Efficient approaches for escaping higher order saddle points in non-convex optimization
Animashree Anandkumar, Rong Ge
https://proceedings.mlr.press/v49/anandkumar16

Monte Carlo Markov Chain Algorithms for Sampling Strongly Rayleigh Distributions and Determinantal Point Processes
Nima Anari, Shayan Oveis Gharan, Alireza Rezaei
https://proceedings.mlr.press/v49/anari16

An algorithm with nearly optimal pseudo-regret for both stochastic and adversarial bandits
Peter Auer, Chao-Kai Chiang
https://proceedings.mlr.press/v49/auer16

Policy Error Bounds for Model-Based Reinforcement Learning with Factored Linear Models
Bernardo Ávila Pires, Csaba Szepesvári
https://proceedings.mlr.press/v49/avilapires16

Learning and 1-bit Compressed Sensing under Asymmetric Noise
Pranjal Awasthi, Maria-Florina Balcan, Nika Haghtalab, Hongyang Zhang
https://proceedings.mlr.press/v49/awasthi16

Reinforcement Learning of POMDPs using Spectral Methods
Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar
https://proceedings.mlr.press/v49/azizzadenesheli16a

Open Problem: Approximate Planning of POMDPs in the class of Memoryless Policies
Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar
https://proceedings.mlr.press/v49/azizzadenesheli16b