Learning to Round for Discrete Labeling Problems
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:1047-1056, 2018.
Abstract
Discrete labeling problems are often solved by formulating them as an integer program and relaxing the integrality constraint to a continuous domain. While the continuous relaxation is closely related to the original integer program, its optimal solution is often fractional. Thus, the success of a relaxation depends crucially on the availability of an accurate rounding procedure. The problem of identifying an accurate rounding procedure has mainly been tackled in the theoretical computer science community through mathematical analysis of the worst case. However, this approach is onerous and ignores the distribution of the data encountered in practice. We present a novel interpretation of rounding procedures as sampling from a latent variable model, which opens the door to the use of powerful machine learning formulations in their design. Inspired by the recent success of deep latent variable models, we parameterize rounding procedures as neural networks, which lend themselves to efficient optimization via backpropagation. By minimizing the expected value of the objective of the discrete labeling problem over training samples, we learn a rounding procedure that is more suited to the task at hand. Using both synthetic and real-world data sets, we demonstrate that our approach can outperform the state-of-the-art hand-designed rounding procedures.
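The core idea, a neural network that maps a fractional relaxed solution to a distribution over discrete labels and is trained to minimize the expected integer objective, can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' actual model: it assumes a toy chain-structured labeling energy, stands in a softmax of the unary costs for a proper LP-relaxation solution, and uses a score-function (REINFORCE) gradient estimator without variance reduction. All names, sizes, and architectural details are illustrative assumptions.

```python
# A minimal sketch of the "learning to round" idea (not the paper's exact model):
# a small network maps the fractional solution of a relaxed labeling problem to
# per-variable label distributions, trained via REINFORCE to minimize the
# expected value of the original integer objective.
import torch
import torch.nn as nn

n_vars, n_labels = 10, 4  # toy problem size (assumed)

def integer_objective(labels, unary, pairwise):
    """Cost of a discrete labeling: unary terms plus chain-structured pairwise terms."""
    cost = unary[torch.arange(len(labels)), labels].sum()
    return cost + pairwise[labels[:-1], labels[1:]].sum()

class LearnedRounder(nn.Module):
    """Maps fractional marginals to a categorical distribution per variable."""
    def __init__(self, n_labels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_labels, 32), nn.ReLU(), nn.Linear(32, n_labels))

    def forward(self, fractional):              # fractional: (n_vars, n_labels)
        return torch.distributions.Categorical(logits=self.net(fractional))

rounder = LearnedRounder(n_labels)
opt = torch.optim.Adam(rounder.parameters(), lr=1e-2)

for step in range(200):
    # A random toy instance; the softmax of the negated unaries stands in
    # for the fractional solution an LP relaxation would produce.
    unary = torch.randn(n_vars, n_labels)
    pairwise = torch.randn(n_labels, n_labels)
    fractional = torch.softmax(-unary, dim=-1)

    dist = rounder(fractional)
    labels = dist.sample()                      # stochastic rounding step
    cost = integer_objective(labels, unary, pairwise)

    # Score-function estimator of the gradient of E[cost] w.r.t. the
    # rounding distribution's parameters (no baseline, for brevity).
    loss = dist.log_prob(labels).sum() * cost.detach()
    opt.zero_grad(); loss.backward(); opt.step()
```

In this sketch the sampling step is what makes the rounding a latent variable model: the discrete labeling is a latent sample whose distribution is learned, and the expected objective is minimized over training instances, matching the training criterion the abstract describes.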