A Nonasymptotic Analysis of Gradient Descent Ascent for Nonconvex-Concave Minimax Problems
We present new convergence results for two-timescale gradient descent ascent (GDA) applied to approximately solve nonconvex-concave minimax problems of the form $\min_x \max_{y \in Y} f(x, y)$, where the objective function $f(x, y)$ is nonconvex in $x$ but concave in $y$, and $Y \subseteq \mathbb{R}^n$ is a convex and bounded set. Historically, the GDA algorithm has been widely used in machine learning, control theory, and economics. Despite strong convergence guarantees for GDA in the convex-concave setting, in more general settings GDA with equal learning rates can converge to limit cycles or even diverge. In this paper, we revisit GDA from the point of view of two-timescale dynamics, showing that, despite the nonconvexity and possible lack of smoothness, a two-timescale version of GDA can efficiently find a stationary point of the function $\Phi(\cdot) := \max_{y \in Y} f(\cdot, y)$. More specifically, we provide theoretical bounds on the complexity of solving both smooth and nonsmooth nonconvex-concave minimax problems. We further investigate properties of stochastic variants of two-timescale GDA. To the best of our knowledge, this paper provides the first nonasymptotic complexity analysis for two-timescale GDA in nonconvex-concave minimax problems, shedding light on its performance in training generative adversarial networks (GANs) and in other real-world applications.
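To illustrate the two-timescale scheme the abstract describes, here is a minimal sketch, not the authors' algorithm, stepsizes, or objective: plain gradient descent on $x$ paired with projected gradient ascent on $y$ over a bounded set $Y$, with the ascent stepsize chosen much larger than the descent stepsize. The toy objective $f(x, y) = \sin(x)\,y$ and all constants below are hypothetical.

```python
import numpy as np

# Hedged sketch of two-timescale GDA: slow descent on x, fast projected
# ascent on y. The toy objective and stepsizes are illustrative only,
# not the paper's setting or its recommended parameter choices.

def project(y, lo=-1.0, hi=1.0):
    # Euclidean projection onto the bounded set Y = [-1, 1].
    return np.clip(y, lo, hi)

eta_x = 1e-3   # slow timescale: stepsize for the minimization variable x
eta_y = 1e-1   # fast timescale: stepsize for the maximization variable y

x, y = 1.0, 0.0
for t in range(5000):
    gx = np.cos(x) * y               # ∂f/∂x for f(x, y) = sin(x) * y
    gy = np.sin(x)                   # ∂f/∂y
    x = x - eta_x * gx               # gradient descent step on x
    y = project(y + eta_y * gy)      # projected gradient ascent step on y

print(f"x ≈ {x:.4f}, y ≈ {y:.4f}")
```

For this toy objective, $\Phi(x) = \max_{y \in [-1, 1]} \sin(x)\,y = |\sin(x)|$ is nonsmooth, matching the nonsmooth case the abstract covers, and the separation $\eta_x \ll \eta_y$ is the two-timescale structure whose complexity the paper bounds.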
Year of publication: 2022
Authors: Lin, Tianyi; Jin, Chi; Jordan, Michael I.
Publisher: [S.l.]: SSRN
Availability: freely available
Extent: 1 online resource (46 p.)
Type of publication: Book / Working Paper
Language: English
Notes: According to SSRN, the original version of the document was created on August 1, 2022
Other identifiers: 10.2139/ssrn.4181867 [DOI]
Source: ECONIS - Online Catalogue of the ZBW
Persistent link: https://www.econbiz.de/10014030265
Similar items by person
- Monotone inclusions, acceleration, and closed-loop control / Lin, Tianyi (2023)
- Adaptive, Doubly Optimal No-Regret Learning in Games with Gradient Feedback / Jordan, Michael I. (2022)
- Provably efficient reinforcement learning with linear function approximation / Jin, Chi (2023)
- More ...