
Stochastic gradient descent

Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It is called stochastic because the method uses randomly selected (or shuffled) samples to evaluate the gradients, so SGD can be regarded as a stochastic approximation of gradient descent optimization. The ideas can be traced back at least to the 1951 article 'A Stochastic Approximation Method' by Herbert Robbins and Sutton Monro, who proposed, with detailed analysis, a root-finding method now called the Robbins–Monro algorithm.

Both statistical estimation and machine learning consider the problem of minimizing an objective function that has the form of a sum

$$Q(w) = \frac{1}{n} \sum_{i=1}^{n} Q_i(w),$$

where the parameter $w$ that minimizes $Q(w)$ is to be estimated. Each summand function $Q_i$ is typically associated with the $i$-th observation in the data set (used for training).

In classical statistics, sum-minimization problems arise in least squares and in maximum-likelihood estimation (for independent observations). The general class of estimators that arise as minimizers of sums are called M-estimators. However, it has long been recognized in statistics that requiring even local minimization is too restrictive for some problems of maximum-likelihood estimation. Therefore, contemporary statistical theorists often consider stationary points of the likelihood function (or zeros of its derivative, the score function, and other estimating equations). The sum-minimization problem also arises in empirical risk minimization. In this case, $Q_i(w)$ is the value of the loss function at the $i$-th example, and $Q(w)$ is the empirical risk.

When used to minimize the above function, a standard (or 'batch') gradient descent method would perform the iterations

$$w := w - \eta \nabla Q(w) = w - \frac{\eta}{n} \sum_{i=1}^{n} \nabla Q_i(w),$$

where $\eta$ is a step size (sometimes called the learning rate in machine learning).

In many cases, the summand functions have a simple form that enables inexpensive evaluations of the sum-function and the sum gradient. For example, in statistics, one-parameter exponential families allow economical function evaluations and gradient evaluations. In other cases, however, evaluating the sum-gradient may require expensive evaluations of the gradients of all summand functions. When the training set is enormous and no simple formulas exist, evaluating the sum of gradients becomes very expensive, because it requires evaluating the gradients of all the summand functions. To economize on the computational cost at every iteration, stochastic gradient descent samples a subset of summand functions at every step; in the simplest case a single randomly chosen example $i$, giving the update $w := w - \eta \nabla Q_i(w)$. This is very effective in the case of large-scale machine learning problems.
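To make the contrast concrete, the following is a minimal sketch in Python (an illustration, not code from the article) that applies both update rules to a synthetic least-squares problem with $Q_i(w) = \tfrac{1}{2}(x_i^\top w - y_i)^2$. The function names, the synthetic data, and the slowly decaying step-size schedule are assumptions chosen for the example; the decay follows the standard Robbins–Monro-style prescription of shrinking steps.

import numpy as np

# Illustrative sketch (assumed setup, not from the article): least-squares
# empirical risk Q(w) = (1/n) sum_i Q_i(w) with Q_i(w) = 0.5*(x_i.w - y_i)^2.
rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def batch_gradient_descent(eta=0.1, steps=100):
    # Each step evaluates grad Q(w) exactly: an average over all n
    # per-example gradients, so the cost of one step grows with n.
    w = np.zeros(d)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n      # (1/n) sum_i grad Q_i(w)
        w -= eta * grad                   # w := w - eta * grad Q(w)
    return w

def sgd(eta0=0.1, epochs=20):
    # Each step evaluates grad Q_i(w) for one shuffled example i:
    # a cheap, unbiased estimate of grad Q(w).
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            eta = eta0 / (1 + 0.001 * t)        # slowly decaying step size
            grad_i = (X[i] @ w - y[i]) * X[i]   # grad Q_i(w)
            w -= eta * grad_i                   # w := w - eta * grad Q_i(w)
            t += 1
    return w

print(np.linalg.norm(batch_gradient_descent() - w_true))  # near zero
print(np.linalg.norm(sgd() - w_true))                      # near zero, noisier

Both routines drive $w$ toward the true parameter; batch descent takes smooth steps at a per-step cost proportional to $n$, while SGD takes many cheap, noisy steps, which is exactly the trade-off described above.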

[ "Convergence (routing)", "Gradient descent", "Artificial neural network", "Neighbourhood components analysis", "Descent direction", "Random coordinate descent" ]
Parent Topic
Child Topic
    No Parent Topic