Volume 3, Issue 1, 30 April 2021, Pages 115-150
Abstract. In this paper, we develop a variant of the well-known Gauss-Newton (GN) method to solve a class of nonconvex optimization problems involving low-rank matrix variables. As opposed to the standard GN method, our algorithm can handle a general smooth convex objective function. We show, under mild conditions, that the proposed algorithm converges globally and locally to a stationary point of the original problem. We also show empirically that our GN algorithm achieves more accurate solutions than the alternating minimization algorithm (AMA). Then, we specialize our GN scheme to the symmetric case, where AMA is not applicable, and prove its convergence. Next, we incorporate our GN scheme into an alternating direction method of multipliers (ADMM) to develop a new variant, called ADMM-GN. We prove that, under mild conditions and a proper choice of the penalty parameter, ADMM-GN converges globally to a stationary point of the original problem. Finally, we provide several numerical experiments to illustrate the proposed algorithms. Our results show that the new algorithms perform encouragingly compared to existing state-of-the-art methods.
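To make the setting concrete, the sketch below applies a damped Gauss-Newton step to the simplest instance of this problem class: the least-squares objective f(X) = ½‖M − X‖²_F with the low-rank parameterization X = UVᵀ. This is only an illustration of the generic GN idea on a low-rank model, not the paper's extended GN or ADMM-GN algorithm; the function names, the finite-difference Jacobian, and the damping parameter are all choices made here for brevity.

```python
import numpy as np

def residual(M, U, V):
    # Residual of the low-rank model M ≈ U V^T, flattened to a vector.
    return (M - U @ V.T).ravel()

def gauss_newton_step(M, U, V, damping=1e-4):
    # One damped Gauss-Newton step on the stacked variables (U, V).
    # The Jacobian is formed by forward differences for clarity only;
    # a practical solver would exploit the Kronecker structure of
    # d vec(U V^T). The damping term regularizes the normal equations,
    # which are singular due to the gauge freedom (U, V) -> (UA, VA^{-T}).
    m, r = U.shape
    n = V.shape[0]
    x = np.concatenate([U.ravel(), V.ravel()])
    r0 = residual(M, U, V)
    J = np.empty((r0.size, x.size))
    eps = 1e-6
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        Up = xp[:m * r].reshape(m, r)
        Vp = xp[m * r:].reshape(n, r)
        J[:, i] = (residual(M, Up, Vp) - r0) / eps
    dx = np.linalg.solve(J.T @ J + damping * np.eye(x.size), -J.T @ r0)
    x = x + dx
    return x[:m * r].reshape(m, r), x[m * r:].reshape(n, r)
```

Started close enough to a rank-r matrix, a single step of this kind already shrinks the residual substantially, which mirrors the fast local behavior the abstract attributes to the GN scheme; the globalization, the general convex objective f, and the ADMM coupling are what the paper adds on top of this basic step.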
How to Cite this Article:
Quoc Tran-Dinh, Extended Gauss-Newton and ADMM-Gauss-Newton algorithms for low-rank matrix optimization, J. Appl. Numer. Optim. 3 (2021), 115-150.