In the context of artificial neural networks, the rectifier is an activation function defined as the positive part of its argument:

f(x) = x⁺ = max(0, x),

where x is the input to a neuron. This is also known as a ramp function and is analogous to half-wave rectification in electrical engineering. This activation function was first introduced to a dynamical network by Hahnloser et al. in 2000, with strong biological motivations and mathematical justifications. It was first demonstrated in 2011 to enable better training of deeper networks than the activation functions widely used before then, e.g., the logistic sigmoid (which is inspired by probability theory; see logistic regression) and its more practical counterpart, the hyperbolic tangent. As of 2017, the rectifier is the most popular activation function for deep neural networks. A unit employing the rectifier is also called a rectified linear unit (ReLU). Rectified linear units find applications in computer vision and speech recognition using deep neural nets.

A smooth approximation to the rectifier is the analytic function

f(x) = ln(1 + eˣ),

which is called the softplus or SmoothReLU function. The derivative of softplus is

f′(x) = eˣ / (1 + eˣ) = 1 / (1 + e⁻ˣ),

the sigmoid function. The sigmoid function is thus a smooth approximation of the derivative of the rectifier, the Heaviside step function. The multivariable generalization of single-variable softplus is the LogSumExp with the first argument set to zero:

LSE₀(x₁, …, xₙ) = ln(1 + e^{x₁} + ⋯ + e^{xₙ}).
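The relationships above can be sketched numerically. The following is a minimal NumPy illustration (the function names `relu`, `softplus`, and `sigmoid` are chosen here for clarity, not taken from any particular library): it defines the three functions and checks that softplus upper-bounds the rectifier and that its derivative matches the sigmoid via finite differences.

```python
import numpy as np

def relu(x):
    """Rectifier: the positive part of x, i.e. max(0, x)."""
    return np.maximum(0.0, x)

def softplus(x):
    """Smooth approximation to the rectifier: ln(1 + e^x)."""
    return np.log1p(np.exp(x))

def sigmoid(x):
    """Derivative of softplus: 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-5.0, 5.0, 11)

# Softplus lies above the rectifier everywhere and approaches it
# as |x| grows (ln(1 + e^x) -> x for large x, -> 0 for very negative x).
assert np.all(softplus(x) >= relu(x))

# Finite-difference check that d/dx softplus(x) equals sigmoid(x).
h = 1e-6
numeric = (softplus(x + h) - softplus(x - h)) / (2.0 * h)
assert np.allclose(numeric, sigmoid(x), atol=1e-5)
```

Note that `np.log1p(np.exp(x))` overflows for large positive x; a numerically safer equivalent is `np.logaddexp(0.0, x)`, which also makes the LogSumExp connection explicit, since softplus is exactly LogSumExp of 0 and x.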
