Infinite-Dimensional Game Optimization via Variational Transport

Published in OPT 2020 @ NeurIPS, 2020

Recommended citation: Liu, L., Zhang, Y., Yang, Z., Babanezhad, R., & Wang, Z. (2020). Infinite-Dimensional Game Optimization via Variational Transport. OPT 2020 @ NeurIPS.

Game optimization has been extensively studied when the decision variables lie in a finite-dimensional space, where solutions correspond to pure strategies at the Nash equilibrium (NE) and the gradient descent-ascent (GDA) method is widely used in practice. In this paper, we consider infinite-dimensional games, formulated as a zero-sum distributional optimization problem over a space of probability measures defined on a continuous variable set, inspired by the search for a mixed NE in finite-dimensional games. We then aim to answer a natural question: will GDA-type algorithms remain provably efficient when extended to infinite-dimensional games? To answer this question, we propose a particle-based variational transport algorithm based on GDA in the functional space. Specifically, the algorithm performs multi-step functional gradient descent-ascent in the Wasserstein space by alternately pushing two sets of particles in the variable space. By characterizing the gradient estimation error arising from variational-form maximization and the convergence behavior of each player under different objective landscapes, we rigorously prove that the generalized GDA algorithm converges efficiently to the NE or to the value of the game for a class of games satisfying the Polyak-Ɓojasiewicz (PL) condition. In conclusion, we provide complete statistical and convergence guarantees for solving an infinite-dimensional zero-sum game via a provably efficient particle-based method under mild conditions. Additionally, our work provides the first thorough statistical analysis of a particle-based algorithm for learning an objective functional with a variational form using universal approximators (i.e., neural networks (NNs)), which may be of independent interest.

Download paper here
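The core idea of alternately pushing two particle clouds with descent and ascent steps can be illustrated with a minimal toy sketch. This is not the paper's algorithm: the payoff f(x, y) = x² − y² + 2xy, the particle counts, the step size, and the iteration budget are all illustrative assumptions, chosen so that the payoff is strongly convex in x and strongly concave in y and plain GDA on the particle means provably contracts.

```python
import random

# Hypothetical payoff f(x, y) = x^2 - y^2 + 2*x*y (NOT from the paper):
# strongly convex in x, strongly concave in y, so GDA converges.
random.seed(0)
xs = [random.gauss(0, 1) for _ in range(50)]  # min-player's particle cloud
ys = [random.gauss(0, 1) for _ in range(50)]  # max-player's particle cloud
lr = 0.05

for _ in range(500):
    # each particle follows the gradient of the expected payoff estimated
    # against the opponent's current particle cloud (here, via its mean)
    y_bar = sum(ys) / len(ys)
    xs = [x - lr * (2 * x + 2 * y_bar) for x in xs]   # descent step (min player)
    x_bar = sum(xs) / len(xs)
    ys = [y + lr * (-2 * y + 2 * x_bar) for y in ys]  # ascent step (max player)

# both clouds contract toward the equilibrium at the origin
print(max(abs(x) for x in xs) < 1e-6, max(abs(y) for y in ys) < 1e-6)
```

In the actual method the particles represent probability measures and the updates are functional-gradient pushforwards in the Wasserstein space, with the gradient obtained by maximizing a variational form; the sketch above replaces all of that with closed-form gradients of a simple quadratic game.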