Multi-Observation Elicitation

Sebastian Casalaina-Martin, Rafael Frongillo, Tom Morgan, Bo Waggoner
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:449-464, 2017.

Abstract

We study loss functions that measure the accuracy of a prediction based on multiple data points simultaneously. To our knowledge, such loss functions have not been studied before in the area of property elicitation or in machine learning more broadly. As compared to traditional loss functions that take only a single data point, these multi-observation loss functions can in some cases drastically reduce the dimensionality of the hypothesis required. In elicitation, this corresponds to requiring many fewer reports; in empirical risk minimization, it corresponds to algorithms on a hypothesis space of much smaller dimension. We explore some examples of the tradeoff between dimensionality and number of observations, give some geometric characterizations and intuition for relating loss functions and the properties that they elicit, and discuss some implications for both elicitation and machine-learning contexts.
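
To make the idea concrete, the variance is a standard example of a property that is not elicitable from single observations by any loss (its level sets are not convex), yet becomes elicitable with two. For i.i.d. draws y1, y2 one has E[(y1 - y2)^2 / 2] = Var(Y), so the two-observation squared loss L(r, (y1, y2)) = (r - (y1 - y2)^2 / 2)^2 is minimized in expectation exactly at r = Var(Y). The Python sketch below is illustrative only, not code from the paper; it checks this claim empirically on a gamma distribution with known variance.

import numpy as np

# Two-observation elicitation of the variance (illustrative sketch).
# For i.i.d. y1, y2:  E[(y1 - y2)^2 / 2] = Var(Y), so minimizing the
# empirical loss  sum_i (r - (y1_i - y2_i)^2 / 2)^2  over r recovers
# the variance; the minimizer is the mean of the per-pair statistic.

rng = np.random.default_rng(0)
y = rng.gamma(shape=2.0, scale=3.0, size=100_000)  # Var = 2 * 3^2 = 18

# Pair up consecutive observations and form the per-pair statistic.
y1, y2 = y[0::2], y[1::2]
stat = (y1 - y2) ** 2 / 2

# Empirical minimizer of the two-observation squared loss.
r_hat = stat.mean()
print(f"two-observation estimate of the variance: {r_hat:.3f}")
print(f"true variance:                            {2 * 3.0**2:.3f}")

A one-dimensional report thus suffices here, whereas single-observation elicitation of the variance requires reporting a two-dimensional property (for instance the first two moments) and linking, which is the dimension-versus-observations tradeoff the paper studies.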

Cite this Paper


BibTeX
@InProceedings{pmlr-v65-casalaina-martin17a,
  title     = {Multi-Observation Elicitation},
  author    = {Casalaina-Martin, Sebastian and Frongillo, Rafael and Morgan, Tom and Waggoner, Bo},
  booktitle = {Proceedings of the 2017 Conference on Learning Theory},
  pages     = {449--464},
  year      = {2017},
  editor    = {Kale, Satyen and Shamir, Ohad},
  volume    = {65},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--10 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v65/casalaina-martin17a/casalaina-martin17a.pdf},
  url       = {https://proceedings.mlr.press/v65/casalaina-martin17a.html}
}
Endnote
%0 Conference Paper
%T Multi-Observation Elicitation
%A Sebastian Casalaina-Martin
%A Rafael Frongillo
%A Tom Morgan
%A Bo Waggoner
%B Proceedings of the 2017 Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2017
%E Satyen Kale
%E Ohad Shamir
%F pmlr-v65-casalaina-martin17a
%I PMLR
%P 449--464
%U https://proceedings.mlr.press/v65/casalaina-martin17a.html
%V 65
APA
Casalaina-Martin, S., Frongillo, R., Morgan, T., & Waggoner, B. (2017). Multi-Observation Elicitation. Proceedings of the 2017 Conference on Learning Theory, in Proceedings of Machine Learning Research 65:449-464. Available from https://proceedings.mlr.press/v65/casalaina-martin17a.html.
