Efficient Recovery of Low-Rank Matrix via Double Nonconvex Nonsmooth Rank Minimization

2019 
Recently, efficient recovery of low-rank matrices has attracted rapidly increasing interest in computer vision and machine learning. The popular convex approach to rank minimization is nuclear norm-based minimization (NNM), which usually leads to a biased solution since NNM tends to over-shrink the rank components and treats each rank component equally. To address this issue, several nonconvex nonsmooth rank (NNR) relaxations have been widely exploited. Different from these convex and nonconvex rank substitutes, this paper first introduces a general and flexible rank relaxation function, the weighted NNR relaxation function, which is derived from the initial double NNR (DNNR) relaxations, i.e., a DNNR relaxation function acting on a nonconvex singular value function (SVF). To solve the DNNR minimization problem, an iteratively reweighted SVF optimization algorithm with a continuation technique is devised, in which the weighting vector is defined by computing supergradient values; the closed-form solution of each subproblem is obtained efficiently by a general proximal operator, provided that the elements of the weighting vector are in nondecreasing order. We next prove that the objective function values decrease monotonically and that any limit point of the generated subsequence is a critical point. Combining the Kurdyka-Łojasiewicz property with some mild assumptions, we further establish a global convergence guarantee. As an application to the matrix completion problem, experimental results on both synthetic and real-world data show that our methods are competitive with several state-of-the-art convex and nonconvex matrix completion methods.
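The abstract does not give pseudocode, so the following NumPy sketch is only an illustration of the kind of iteratively reweighted singular-value-thresholding scheme it describes. The concrete surrogate (a log-type weight w_i = lam / (sigma_i + eps)), the parameters lam, eps, n_iter, and the function names are assumptions standing in for the paper's DNNR relaxation and supergradient-based weights; note that such weights are nondecreasing because the singular values are sorted in nonincreasing order, matching the condition required for the closed-form proximal step.

```python
import numpy as np

def weighted_svt(Y, weights):
    """Weighted singular value thresholding: soft-threshold each singular
    value by its own nonnegative weight.  This is the closed-form proximal
    step when the weights are in nondecreasing order."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return (U * s_shrunk) @ Vt

def irw_matrix_completion(M_obs, mask, lam=1.0, eps=1e-2, n_iter=200):
    """Illustrative iteratively reweighted scheme for matrix completion
    (assumed setup, not the paper's exact algorithm).  `mask` is a boolean
    array marking observed entries of M_obs.  The weights are supergradients
    of a concave surrogate of the rank; the log surrogate
    w_i = lam / (sigma_i + eps) is used here as a stand-in for the DNNR
    relaxation."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        s = np.linalg.svd(X, compute_uv=False)
        w = lam / (s + eps)              # reweighting from current singular values
        X = weighted_svt(X, w)           # closed-form proximal (thresholding) step
        X = np.where(mask, M_obs, X)     # keep observed entries fixed
    return X
```

A typical use would pass a partially observed matrix and its observation mask, then compare the recovered X against the ground truth on the unobserved entries.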