Finite-Time Performance of Distributed Two-Time-Scale Stochastic Approximation
Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:26-36, 2020.
Abstract
Two-time-scale stochastic approximation is a popular iterative method for finding the solution of a system of two coupled equations. Such methods have found broad applications in many areas, especially in machine learning and reinforcement learning. In this paper, we propose a distributed variant of this method over a network of agents, where the agents use two graphs representing their communication at different speeds, reflecting the two time scales of their updates. Our main contribution is a finite-time analysis of the proposed method. In particular, we establish upper bounds on the rates at which the mean square errors at the agents converge to zero, as a function of the step sizes and the network topology. We believe the proposed method and analysis are applicable to a broad range of other problems.
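To make the setting concrete, below is a minimal, hypothetical sketch (not the paper's algorithm or experiments) of a distributed two-time-scale stochastic approximation scheme for a linear system of two coupled equations. The problem data (A11, A12, A21, A22, b1, b2), the ring-graph mixing matrices, the noise model, and the step-size choices are all illustrative assumptions: each agent mixes its fast and slow iterates over its respective communication graph and then takes a noisy local update, with the slow step size decaying faster than the fast one.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative problem: find (x*, y*) solving the coupled linear system
#       A11 x + A12 y = b1   (associated with the fast iterate)
#       A21 x + A22 y = b2   (associated with the slow iterate)
d = 2
A11 = np.array([[3.0, 0.5], [0.5, 3.0]])
A12 = np.array([[0.2, 0.0], [0.0, 0.2]])
A21 = np.array([[0.1, 0.0], [0.0, 0.1]])
A22 = np.array([[2.0, 0.3], [0.3, 2.0]])
b1 = np.array([1.0, -1.0])
b2 = np.array([0.5, 2.0])

n = 5  # number of agents

def ring_mixing(n):
    """Doubly stochastic (lazy) mixing matrix for a ring graph."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25
    return W

# Two mixing matrices, one per communication graph; in general the fast and
# slow updates may communicate over different graphs.
W_fast = ring_mixing(n)
W_slow = ring_mixing(n)

X = rng.normal(size=(n, d))  # agents' copies of the fast variable x
Y = rng.normal(size=(n, d))  # agents' copies of the slow variable y

for k in range(1, 20001):
    alpha = 0.5 / k**0.6   # fast step size
    beta = 0.5 / k         # slow step size, with beta/alpha -> 0

    noise_x = 0.1 * rng.normal(size=(n, d))  # i.i.d. observation noise
    noise_y = 0.1 * rng.normal(size=(n, d))

    # Consensus step over each graph, followed by a noisy local update.
    X_mix = W_fast @ X
    Y_mix = W_slow @ Y
    X = X_mix - alpha * (X @ A11.T + Y @ A12.T - b1 + noise_x)
    Y = Y_mix - beta * (X @ A21.T + Y @ A22.T - b2 + noise_y)

# Compare the agents' average iterates with the exact solution.
A = np.block([[A11, A12], [A21, A22]])
z_star = np.linalg.solve(A, np.concatenate([b1, b2]))
print("avg fast iterate:", X.mean(axis=0), " x*:", z_star[:d])
print("avg slow iterate:", Y.mean(axis=0), " y*:", z_star[d:])
```

In this sketch the separation of time scales comes entirely from the step-size ratio beta/alpha tending to zero, so the slow iterates see the fast iterates as nearly equilibrated; the mean square error at each agent then depends on both the step sizes and the spectral properties of the mixing matrices, which is the kind of dependence the paper's finite-time bounds quantify.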