Constrained Non-convex Optimization via Stochastic Variance Reduced Approximations

2019 
Recently, stochastic variance reduced gradient (SVRG) and related methods have made significant advances in solving finite-sum non-convex problems, but their applicability is limited to convex constraints. In contrast, we study the same problem subject to non-convex constraints and analyze the stochastic variance reduced framework under mild assumptions. The presence of non-convex constraints complicates the problem and poses challenges, such as preserving the feasibility of the iterates. To this end, we propose to fuse the stochastic variance reduced framework with successive convex approximation methods, thereby solving a convex optimization problem at each iteration while maintaining iterate feasibility. Moreover, we prove that the proposed algorithm generates a convergent sequence under mild assumptions. Experimental evaluations on the node localization problem demonstrate that it is indeed faster than state-of-the-art non-convex optimization methods.
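
To make the proposed fusion concrete, below is a minimal hypothetical sketch, not the paper's actual algorithm: it pairs the standard SVRG variance-reduced gradient estimator with one natural successive convex approximation choice, linearizing the concave side of a toy non-convex constraint ||x||^2 >= 1 so that each convex subproblem reduces to a half-space projection. The function names (svrg_sca, grad_fi, project_halfspace), the robust loss, the constraint, and all parameter values are illustrative assumptions.

```python
import numpy as np

def grad_fi(x, a, b):
    """Gradient of the smooth non-convex loss f_i(x) = log(1 + (a^T x - b)^2)."""
    r = a @ x - b
    return (2.0 * r / (1.0 + r * r)) * a

def project_halfspace(y, c, d):
    """Euclidean projection of y onto the half-space {x : c^T x >= d}."""
    gap = d - c @ y
    return y if gap <= 0 else y + (gap / (c @ c)) * c

def svrg_sca(A, b, x0, eta=0.05, epochs=20, seed=0):
    """Hypothetical SVRG + successive convex approximation sketch for
    min (1/n) sum_i f_i(x)  s.t.  ||x||^2 >= 1  (a non-convex constraint)."""
    n, _ = A.shape
    rng = np.random.default_rng(seed)
    # Start from a feasible point: rescale onto the unit sphere if needed.
    x = x0 if x0 @ x0 >= 1.0 else x0 / np.linalg.norm(x0)
    for _ in range(epochs):
        x_snap = x.copy()
        # Full gradient at the snapshot (the SVRG anchor point).
        mu = np.mean([grad_fi(x_snap, A[i], b[i]) for i in range(n)], axis=0)
        for _ in range(n):  # one pass of inner iterations per epoch
            i = rng.integers(n)
            # Variance-reduced stochastic gradient estimator.
            v = grad_fi(x, A[i], b[i]) - grad_fi(x_snap, A[i], b[i]) + mu
            # SCA step: linearize the concave constraint 1 - ||x||^2 <= 0 at
            # the current iterate, giving the convex half-space surrogate
            # 2 x_k^T z >= 1 + ||x_k||^2. The surrogate upper-bounds the
            # original constraint function, so its feasible set lies inside
            # the original one and the projected iterate stays feasible.
            x = project_halfspace(x - eta * v, 2.0 * x, 1.0 + x @ x)
    return x

# Toy run: 100 scalar measurements of a 5-dimensional unknown.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5))
b = A @ rng.standard_normal(5) + 0.1 * rng.standard_normal(100)
x_hat = svrg_sca(A, b, rng.standard_normal(5))
print("||x_hat||^2 =", x_hat @ x_hat, "(>= 1, so the iterate is feasible)")
```

Because the linearized surrogate is an upper bound on the original constraint function, every subproblem solution remains feasible for the original non-convex set, which is the iterate-feasibility property the abstract highlights.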