A Novel Method to Solve Neural Knapsack Problems

Duanshun Li, Jing Liu, Dongeun Lee, Ali Seyedmazloom, Giridhar Kaushik, Kookjin Lee, Noseong Park
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:6414-6424, 2021.

Abstract

0-1 knapsack is of fundamental importance across many fields. In this paper, we present a game-theoretic method to solve 0-1 knapsack problems (KPs) where the number of items (products) is large and the values of items are not predetermined but are decided by an external value assignment function (e.g., a neural network in our case) during the optimization process. While existing papers are interested in predicting solutions with neural networks for classical KPs, whose objective functions are mostly linear, we are interested in solving KPs whose objective functions are neural networks. In other words, we choose a subset of items that maximizes the sum of the values predicted by neural networks. The key challenge is how to optimize the neural network-based non-linear KP objective under a budget constraint. Our solution is inspired by game-theoretic approaches in deep learning, e.g., generative adversarial networks. After formally defining our two-player game, we develop an adaptive gradient ascent method to solve it. In our experiments, our method successfully solves two neural network-based non-linear KPs and conventional linear KPs with 1 million items.
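The abstract only sketches the idea, so the following is a loose, hypothetical illustration of the general setup rather than the authors' actual algorithm: one player updates a continuous relaxation x ∈ [0, 1]^n of the 0-1 selection by gradient ascent on a Lagrangian, while a second player adapts the multiplier for the budget constraint. The function name `solve_relaxed_knapsack`, the toy instance, and the linear stand-in for the gradient of the neural value function are all assumptions for illustration; in the paper's setting, `grad_value` would come from differentiating a trained network.

```python
import numpy as np

def solve_relaxed_knapsack(grad_value, costs, budget,
                           n_steps=3000, lr_x=0.05, lr_lam=0.05):
    """Two-player sketch of a relaxed 0-1 knapsack (hypothetical, not the
    paper's method).

    grad_value(x) returns the gradient of a (possibly non-linear) value
    function at the relaxed selection x in [0, 1]^n; here it stands in
    for autodiff through a neural network. Player 1 ascends the
    Lagrangian in x; player 2 adapts the budget multiplier lam.
    """
    n = len(costs)
    x = np.full(n, 0.5)          # relaxed selection variables
    lam = 0.0                    # multiplier for costs @ x <= budget
    for _ in range(n_steps):
        # player 1: projected gradient ascent on value(x) - lam*(c.x - B)
        x = np.clip(x + lr_x * (grad_value(x) - lam * costs), 0.0, 1.0)
        # player 2: raise lam when the budget is violated, lower it otherwise
        lam = max(lam + lr_lam * (costs @ x - budget), 0.0)
    # round the relaxation, then drop least-confident picks until feasible
    sel = x > 0.5
    for i in np.argsort(x):
        if costs[sel].sum() <= budget:
            break
        sel[i] = False
    return sel

# toy instance; a constant gradient stands in for the neural value function
costs = np.array([2.0, 3.0, 4.0])
values = np.array([4.0, 3.0, 1.0])
picked = solve_relaxed_knapsack(lambda x: values, costs, budget=5.0)
```

On this toy instance the relaxation settles with the two high-density items selected and the third driven toward zero, so the rounded solution respects the budget; a real instance would replace the constant gradient with backpropagation through the value network.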

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-li21m,
  title     = {A Novel Method to Solve Neural Knapsack Problems},
  author    = {Li, Duanshun and Liu, Jing and Lee, Dongeun and Seyedmazloom, Ali and Kaushik, Giridhar and Lee, Kookjin and Park, Noseong},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {6414--6424},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/li21m/li21m.pdf},
  url       = {https://proceedings.mlr.press/v139/li21m.html},
  abstract  = {0-1 knapsack is of fundamental importance across many fields. In this paper, we present a game-theoretic method to solve 0-1 knapsack problems (KPs) where the number of items (products) is large and the values of items are not predetermined but decided by an external value assignment function (e.g., a neural network in our case) during the optimization process. While existing papers are interested in predicting solutions with neural networks for classical KPs whose objective functions are mostly linear functions, we are interested in solving KPs whose objective functions are neural networks. In other words, we choose a subset of items that maximize the sum of the values predicted by neural networks. Its key challenge is how to optimize the neural network-based non-linear KP objective with a budget constraint. Our solution is inspired by game-theoretic approaches in deep learning, e.g., generative adversarial networks. After formally defining our two-player game, we develop an adaptive gradient ascent method to solve it. In our experiments, our method successfully solves two neural network-based non-linear KPs and conventional linear KPs with 1 million items.}
}
Endnote
%0 Conference Paper
%T A Novel Method to Solve Neural Knapsack Problems
%A Duanshun Li
%A Jing Liu
%A Dongeun Lee
%A Ali Seyedmazloom
%A Giridhar Kaushik
%A Kookjin Lee
%A Noseong Park
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-li21m
%I PMLR
%P 6414--6424
%U https://proceedings.mlr.press/v139/li21m.html
%V 139
%X 0-1 knapsack is of fundamental importance across many fields. In this paper, we present a game-theoretic method to solve 0-1 knapsack problems (KPs) where the number of items (products) is large and the values of items are not predetermined but decided by an external value assignment function (e.g., a neural network in our case) during the optimization process. While existing papers are interested in predicting solutions with neural networks for classical KPs whose objective functions are mostly linear functions, we are interested in solving KPs whose objective functions are neural networks. In other words, we choose a subset of items that maximize the sum of the values predicted by neural networks. Its key challenge is how to optimize the neural network-based non-linear KP objective with a budget constraint. Our solution is inspired by game-theoretic approaches in deep learning, e.g., generative adversarial networks. After formally defining our two-player game, we develop an adaptive gradient ascent method to solve it. In our experiments, our method successfully solves two neural network-based non-linear KPs and conventional linear KPs with 1 million items.
APA
Li, D., Liu, J., Lee, D., Seyedmazloom, A., Kaushik, G., Lee, K. & Park, N. (2021). A Novel Method to Solve Neural Knapsack Problems. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:6414-6424. Available from https://proceedings.mlr.press/v139/li21m.html.