California Institute of Technology
July 9, 2020
Competitive optimization arises in many ML problems, such as training GANs, robust reinforcement learning, and adversarial learning. Standard approaches have each agent independently optimize its own objective using SGD or another gradient-based method. However, these approaches suffer from oscillations and instability, because each player's update ignores its interaction with the other players. We introduce competitive gradient descent (CGD), which explicitly accounts for this interaction by solving for the Nash equilibrium of a local approximation of the game.
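As a minimal sketch of the contrast described above, consider the bilinear zero-sum game f(x, y) = x·y, where player x minimizes f and player y maximizes it. This test problem, the step size, and the function names below are illustrative assumptions, not taken from the text. Simultaneous gradient descent/ascent spirals away from the equilibrium at the origin, while the CGD update (which, for this bilinear game, has a simple closed form derived from the local-game Nash equilibrium) contracts toward it:

```python
import math

eta = 0.2  # step size, chosen for illustration

def gda_step(x, y):
    """Simultaneous gradient descent/ascent: each player ignores
    the other's response. For f(x, y) = x*y:
      grad_x f = y,  grad_y f = x."""
    return x - eta * y, y + eta * x

def cgd_step(x, y):
    """CGD update for f(x, y) = x*y, where the mixed second
    derivatives D_xy f and D_yx f are both 1, so solving the
    local game gives the closed form:
      dx = -(eta / (1 + eta^2)) * (y + eta * x)
      dy =  (eta / (1 + eta^2)) * (x - eta * y)."""
    c = eta / (1 + eta ** 2)
    return x - c * (y + eta * x), y + c * (x - eta * y)

x = y = 1.0
for _ in range(100):
    x, y = cgd_step(x, y)
print(math.hypot(x, y))  # CGD: distance to the equilibrium shrinks

u = v = 1.0
for _ in range(100):
    u, v = gda_step(u, v)
print(math.hypot(u, v))  # plain GDA: the iterates spiral outward
```

A short calculation confirms the picture: each GDA step multiplies the squared distance to the origin by 1 + eta^2 > 1, while each CGD step multiplies it by 1 / (1 + eta^2) < 1, so anticipating the opponent's local response turns divergence into convergence.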