A regularized Newton method without line search for unconstrained optimization

2014 
In this paper, we propose a regularized Newton method without line search. The proposed method controls a regularization parameter instead of a step size to guarantee global convergence. We show that the proposed algorithm has the following convergence properties. (a) The proposed algorithm is globally convergent under appropriate conditions. (b) It converges superlinearly under the local error bound condition. (c) An upper bound on the number of iterations required to obtain an approximate solution $$x$$ satisfying $$\Vert \nabla f(x) \Vert \le \varepsilon$$ is $$O(\varepsilon^{-2})$$, where $$f$$ is the objective function and $$\varepsilon$$ is a given positive constant.
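The core idea — taking full Newton-type steps and adapting a regularization parameter rather than a step size — can be sketched as follows. This is a minimal illustration only: the ratio-based update of the parameter `mu` below is an assumption in the style of Levenberg–Marquardt/trust-region methods, not the paper's exact scheme, and all function names are hypothetical.

```python
import numpy as np

def regularized_newton(f, grad, hess, x0, mu=1.0, eps=1e-6, max_iter=200):
    """Illustrative regularized Newton iteration without line search.

    Each step solves (H + mu*I) d = -g and, if accepted, is taken with
    full length; global behavior is controlled by adapting mu instead
    of a step size. The mu-update rule here is an assumed ratio test,
    not the paper's specific scheme.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:  # stopping test ||grad f(x)|| <= eps
            break
        H = hess(x)
        M = H + mu * np.eye(len(x))   # regularized Hessian
        d = np.linalg.solve(M, -g)    # full regularized Newton step
        # predicted decrease from the regularized quadratic model
        pred = -(g @ d + 0.5 * d @ (M @ d))
        actual = f(x) - f(x + d)
        rho = actual / pred if pred > 0 else -1.0
        if rho > 0.75:                # model agrees well: regularize less
            mu = max(0.5 * mu, 1e-12)
        elif rho < 0.25:              # model agrees poorly: regularize more
            mu *= 4.0
        if rho > 0:                   # accept only improving steps
            x = x + d
    return x
```

For example, on a strongly convex quadratic the regularization parameter shrinks and the iteration approaches the pure Newton step, recovering fast local convergence.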