A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints
In this paper we propose a variant of the random coordinate descent method for solving linearly constrained convex optimization problems with composite objective functions. If the smooth part of the objective function has a Lipschitz continuous gradient, then we prove that our method obtains an ϵ-optimal solution in $\mathcal{O}(n^{2}/\epsilon)$ iterations, where n is the number of blocks. For the class of problems with cheap coordinate derivatives we show that the new method is faster than methods based on full-gradient information. An analysis of the rate of convergence in probability is also provided. For strongly convex functions our method converges linearly. Extensive numerical tests confirm that, on very large problems, our method is much more efficient than methods based on full-gradient information. Copyright Springer Science+Business Media New York 2014
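The key structural idea described in the abstract, coordinate updates that remain feasible for linearly coupled constraints, can be illustrated with a minimal sketch. This is not the authors' exact scheme; it assumes the simplest setting of a smooth least-squares objective f(x) = ½‖Ax − b‖² and a single coupling constraint 1ᵀx = const, where updating a random pair of coordinates along the direction e_i − e_j keeps the constraint satisfied:

```python
import numpy as np

def rcd_pair_descent(A, b, x0, iters=20000, seed=0):
    """Random 2-coordinate descent for min 0.5*||Ax - b||^2 s.t. sum(x) fixed.

    Each step picks a random pair (i, j) and moves along e_i - e_j,
    which leaves the coupling constraint 1^T x = const unchanged.
    (Illustrative sketch only; the paper treats general composite
    objectives and block coordinates.)
    """
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    n = x.size
    r = A @ x - b                    # residual, maintained incrementally
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        d = A[:, i] - A[:, j]        # equals A @ (e_i - e_j)
        L = d @ d                    # Lipschitz constant along this pair
        if L == 0.0:
            continue
        step = (d @ r) / L           # exact minimizer along e_i - e_j
        x[i] -= step
        x[j] += step
        r -= step * d                # keep residual consistent with x
    return x
```

Because each step only touches two coordinates and one rank-one residual update, its cost is independent of n, which is the "cheap coordinate derivatives" regime where the abstract claims an advantage over full-gradient methods.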
Year of publication: 2014
Authors: Necoara, Ion; Patrascu, Andrei
Published in: Computational Optimization and Applications. - Springer. - Vol. 57.2014, 2, p. 307-337
Publisher: Springer
Similar items by person
- Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization. Patrascu, Andrei (2015)
- Path-following gradient-based decomposition algorithms for separable convex optimization. Dinh, Quoc Tran (2014)
- Linear convergence of random dual coordinate descent on nonpolyhedral convex problems. Necoara, Ion (2022)