Online Influence Maximization with Local Observations
Proceedings of the 30th International Conference on Algorithmic Learning Theory, PMLR 98:557-580, 2019.
Abstract
We consider an online influence maximization problem in which a
decision maker selects a node from a large set of candidates
and places a piece of information at that node.
The information then spreads through the network over a random set of edges. The goal of the decision maker is to reach
as many nodes as possible, with the added complication that feedback is
available only about the degree of the selected node. Our main result
shows that such local observations can be sufficient for maximizing
global influence in two widely studied families of random graph models:
stochastic block models and Chung–Lu models. With this insight, we propose
a bandit algorithm that aims at maximizing local (and thus global) influence,
and provide its theoretical analysis in both the subcritical
and supercritical regimes of the two models. Notably, our performance
guarantees show no explicit dependence on the total number of nodes in the network,
making our approach well-suited for large-scale applications.
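To make the local-feedback setting concrete, here is a minimal Python sketch, not taken from the paper: it samples the degree of a chosen node under a Chung–Lu-style edge model (edge (i, j) present independently with probability min(1, w_i w_j / sum(w))) and runs a standard UCB1 bandit over nodes, using that observed degree as the only reward signal. The weight vector, exploration constant, and the choice of UCB1 itself are all illustrative assumptions; the paper's actual algorithm and analysis differ, and degrees are unbounded, so the UCB1 confidence width here is purely heuristic.

```python
import math
import random


def chung_lu_degree(weights, i, rng):
    """Sample the degree of node i in one draw of a Chung-Lu graph:
    edge (i, j) appears independently with probability
    min(1, w_i * w_j / sum(w))."""
    total = sum(weights)
    degree = 0
    for j, w_j in enumerate(weights):
        if j == i:
            continue
        if rng.random() < min(1.0, weights[i] * w_j / total):
            degree += 1
    return degree


def ucb_degree_bandit(weights, horizon, c=2.0, seed=0):
    """UCB1 over nodes, where the only feedback after selecting a
    node is its realized degree (the 'local observation')."""
    rng = random.Random(seed)
    n = len(weights)
    counts = [0] * n   # pulls per node
    means = [0.0] * n  # running mean of observed degrees
    for t in range(1, horizon + 1):
        if t <= n:
            arm = t - 1  # play every node once to initialize
        else:
            arm = max(range(n),
                      key=lambda i: means[i]
                      + math.sqrt(c * math.log(t) / counts[i]))
        reward = chung_lu_degree(weights, arm, rng)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return max(range(n), key=lambda i: means[i])


if __name__ == "__main__":
    rng = random.Random(42)
    weights = [rng.uniform(0.5, 5.0) for _ in range(30)]
    best = ucb_degree_bandit(weights, horizon=3000)
    print("estimated best seed node:", best, "weight:", weights[best])
```

The sketch illustrates why local feedback can suffice in such models: under the (assumed) Chung–Lu weights, the expected degree of a node is monotone in its weight, so a bandit that merely tracks observed degrees tends to concentrate on high-weight nodes, which are also the most influential seeds.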