Fast (1+ε)-Approximation Algorithms for Binary Matrix Factorization
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:34952-34977, 2023.
Abstract
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem, where the inputs are a matrix $A \in \{0,1\}^{n \times d}$, a rank parameter $k > 0$, and an accuracy parameter $\varepsilon > 0$, and the goal is to approximate $A$ as a product of low-rank factors $U \in \{0,1\}^{n \times k}$ and $V \in \{0,1\}^{k \times d}$. Equivalently, we want to find $U$ and $V$ that minimize the Frobenius loss $\|UV - A\|_F^2$. Before this work, the state-of-the-art for this problem was the approximation algorithm of Kumar et al. [ICML 2019], which achieves a $C$-approximation for some constant $C \ge 576$. We give the first $(1+\varepsilon)$-approximation algorithm using running time singly exponential in $k$, where $k$ is typically a small integer. Our techniques generalize to other common variants of the BMF problem, admitting bicriteria $(1+\varepsilon)$-approximation algorithms for $L_p$ loss functions and the setting where matrix operations are performed in $\mathbb{F}_2$. Our approach can be implemented in standard big data models, such as the streaming or distributed models.
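For concreteness, the sketch below (an illustrative NumPy example, not the paper's algorithm) evaluates the two objectives defined above for candidate binary factors: the Frobenius loss $\|UV - A\|_F^2$ and the $\mathbb{F}_2$ variant, where the product $UV$ is taken modulo 2. The toy dimensions and random instance are hypothetical.

```python
import numpy as np

def bmf_frobenius_loss(A, U, V):
    """Frobenius BMF objective ||UV - A||_F^2 with integer arithmetic."""
    return int(np.sum((U @ V - A) ** 2))

def bmf_f2_loss(A, U, V):
    """F_2 variant: number of entries where (UV mod 2) disagrees with A.

    For 0/1 matrices this equals ||(UV mod 2) - A||_F^2.
    """
    return int(np.sum((U @ V) % 2 != A))

# Hypothetical toy instance: n = 4, d = 5, rank parameter k = 2.
rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(4, 5))
U = rng.integers(0, 2, size=(4, 2))
V = rng.integers(0, 2, size=(2, 5))
print(bmf_frobenius_loss(A, U, V), bmf_f2_loss(A, U, V))
```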