Multi-layered Network Exploration via Random Walks: From Offline Optimization to Online Learning

Xutong Liu, Jinhang Zuo, Xiaowei Chen, Wei Chen, John C. S. Lui
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:7057-7066, 2021.

Abstract

The multi-layered network exploration (MuLaNE) problem is an important problem abstracted from many applications. In MuLaNE, there are multiple network layers; each node has an importance weight, and each layer is explored by a random walk. The MuLaNE task is to allocate a total random-walk budget $B$ across the network layers so that the total weight of the unique nodes visited by the random walks is maximized. We systematically study this problem from offline optimization to online learning. In the offline optimization setting, where the network structure and node weights are known, we provide greedy-based constant-ratio approximation algorithms for overlapping networks, and greedy- or dynamic-programming-based optimal solutions for non-overlapping networks. In the online learning setting, neither the network structure nor the node weights are known initially. We adapt the combinatorial multi-armed bandit framework and design algorithms that learn the random-walk-related parameters and node weights while optimizing the budget allocation over multiple rounds, and we prove that they achieve logarithmic regret bounds. Finally, we conduct experiments on a real-world social network dataset to validate our theoretical results.
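To illustrate the flavor of the offline budget-allocation step, here is a minimal greedy sketch (not the paper's implementation): it assumes each layer's expected covered weight is available as a precomputed, diminishing-returns table `layer_values[i][b]` (a hypothetical input; the paper derives these quantities from random-walk visiting probabilities), and assigns budget one unit at a time to the layer with the largest marginal gain.

```python
# Hedged sketch of greedy budget allocation across network layers.
# layer_values[i][b] = assumed expected covered weight of layer i
# when layer i receives budget b (toy numbers, not from the paper).
def greedy_allocate(layer_values, B):
    """Greedily split total budget B across layers by marginal gain."""
    alloc = [0] * len(layer_values)
    for _ in range(B):
        best, best_gain = None, 0.0
        for i, vals in enumerate(layer_values):
            if alloc[i] + 1 < len(vals):
                gain = vals[alloc[i] + 1] - vals[alloc[i]]
                if gain > best_gain:
                    best, best_gain = i, gain
        if best is None:  # no layer has positive marginal gain left
            break
        alloc[best] += 1
    return alloc

# Toy example: two layers with concave (diminishing-returns) value curves.
layers = [
    [0.0, 3.0, 5.0, 6.0],  # layer 0
    [0.0, 2.0, 3.5, 4.5],  # layer 1
]
print(greedy_allocate(layers, 3))  # -> [2, 1]
```

For non-overlapping layers with concave per-layer value curves this kind of marginal-gain greedy is exactly optimal, which matches the non-overlapping case discussed in the abstract; for overlapping layers the paper instead establishes constant-ratio approximation guarantees.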

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-liu21ae,
  title     = {Multi-layered Network Exploration via Random Walks: From Offline Optimization to Online Learning},
  author    = {Liu, Xutong and Zuo, Jinhang and Chen, Xiaowei and Chen, Wei and Lui, John C. S.},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {7057--7066},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/liu21ae/liu21ae.pdf},
  url       = {https://proceedings.mlr.press/v139/liu21ae.html},
  abstract  = {Multi-layered network exploration (MuLaNE) problem is an important problem abstracted from many applications. In MuLaNE, there are multiple network layers where each node has an importance weight and each layer is explored by a random walk. The MuLaNE task is to allocate total random walk budget $B$ into each network layer so that the total weights of the unique nodes visited by random walks are maximized. We systematically study this problem from offline optimization to online learning. For the offline optimization setting where the network structure and node weights are known, we provide greedy based constant-ratio approximation algorithms for overlapping networks, and greedy or dynamic-programming based optimal solutions for non-overlapping networks. For the online learning setting, neither the network structure nor the node weights are known initially. We adapt the combinatorial multi-armed bandit framework and design algorithms to learn random walk related parameters and node weights while optimizing the budget allocation in multiple rounds, and prove that they achieve logarithmic regret bounds. Finally, we conduct experiments on a real-world social network dataset to validate our theoretical results.}
}
Endnote
%0 Conference Paper
%T Multi-layered Network Exploration via Random Walks: From Offline Optimization to Online Learning
%A Xutong Liu
%A Jinhang Zuo
%A Xiaowei Chen
%A Wei Chen
%A John C. S. Lui
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-liu21ae
%I PMLR
%P 7057--7066
%U https://proceedings.mlr.press/v139/liu21ae.html
%V 139
%X Multi-layered network exploration (MuLaNE) problem is an important problem abstracted from many applications. In MuLaNE, there are multiple network layers where each node has an importance weight and each layer is explored by a random walk. The MuLaNE task is to allocate total random walk budget $B$ into each network layer so that the total weights of the unique nodes visited by random walks are maximized. We systematically study this problem from offline optimization to online learning. For the offline optimization setting where the network structure and node weights are known, we provide greedy based constant-ratio approximation algorithms for overlapping networks, and greedy or dynamic-programming based optimal solutions for non-overlapping networks. For the online learning setting, neither the network structure nor the node weights are known initially. We adapt the combinatorial multi-armed bandit framework and design algorithms to learn random walk related parameters and node weights while optimizing the budget allocation in multiple rounds, and prove that they achieve logarithmic regret bounds. Finally, we conduct experiments on a real-world social network dataset to validate our theoretical results.
APA
Liu, X., Zuo, J., Chen, X., Chen, W. & Lui, J. C. S. (2021). Multi-layered Network Exploration via Random Walks: From Offline Optimization to Online Learning. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:7057-7066. Available from https://proceedings.mlr.press/v139/liu21ae.html.
