Activation function

In artificial neural networks, the activation function of a node defines the output of that node given an input or set of inputs. A standard computer chip circuit can be seen as a digital network of activation functions that can be 'ON' (1) or 'OFF' (0), depending on input. This is similar to the behavior of the linear perceptron in neural networks. However, only nonlinear activation functions allow such networks to compute nontrivial problems using only a small number of nodes. In artificial neural networks, this function is also called the transfer function.

In biologically inspired neural networks, the activation function is usually an abstraction representing the rate of action potential firing in the cell. In its simplest form, this function is binary: either the neuron is firing or it is not. The function then looks like $\phi(v_i) = U(v_i)$, where $U$ is the Heaviside step function. In this case many neurons must be used in computation beyond linear separation of categories.

A line of positive slope may be used to reflect the increase in firing rate that occurs as input current increases. Such a function would be of the form $\phi(v_i) = \mu v_i$, where $\mu$ is the slope. This activation function is linear, and therefore has the same problems as the binary function. In addition, networks constructed using this model have unstable convergence, because neuron inputs along favored paths tend to increase without bound, as this function is not normalizable.

All of the problems mentioned above can be handled by using a normalizable sigmoid activation function. One realistic model stays at zero until input current is received, at which point the firing frequency increases quickly at first but gradually approaches an asymptote at 100% firing rate. Mathematically, this looks like $\phi(v_i) = U(v_i)\tanh(v_i)$, where the hyperbolic tangent can be replaced by any sigmoid function. This behavior is realistically reflected in the neuron, as neurons cannot physically fire faster than a certain rate. This model nevertheless runs into problems in computational networks, because it is not differentiable at zero, and differentiability is required to calculate the gradients used in backpropagation.

The final model, then, that is used in multilayer perceptrons is a sigmoidal activation function in the form of a hyperbolic tangent. Two forms of this function are commonly used: $\phi(v_i) = \tanh(v_i)$, whose range is normalized from -1 to 1, and the logistic function $\phi(v_i) = (1 + \exp(-v_i))^{-1}$, which is vertically translated and scaled so that its range is normalized from 0 to 1.
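To make these forms concrete, here is a minimal Python sketch (using NumPy) that evaluates the binary, linear, and two sigmoidal activation functions discussed above at a few sample inputs. The function names, the slope value, and the sample inputs are illustrative choices, not part of any standard API.

    import numpy as np

    def heaviside(v):
        # Binary activation U(v): the neuron either fires (1) or does not (0).
        return np.where(v >= 0, 1.0, 0.0)

    def linear(v, mu=1.0):
        # Linear activation mu * v: unbounded, hence not normalizable.
        return mu * v

    def tanh_activation(v):
        # Hyperbolic tangent: range normalized to (-1, 1).
        return np.tanh(v)

    def logistic(v):
        # Logistic sigmoid: range normalized to (0, 1).
        return 1.0 / (1.0 + np.exp(-v))

    v = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    print(heaviside(v))         # [0. 0. 1. 1. 1.]
    print(linear(v))            # [-2.  -0.5  0.   0.5  2. ]
    print(tanh_activation(v))   # ~[-0.964 -0.462  0.     0.462  0.964]
    print(logistic(v))          # ~[ 0.119  0.378  0.5    0.622  0.881]

Note that the two sigmoidal functions are smooth everywhere, which is what makes them usable with gradient-based training, unlike the Heaviside step.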
The logistic form is often considered more biologically realistic, but it runs into theoretical and experimental difficulties with certain types of computational problems.

A special class of activation functions known as radial basis functions (RBFs) is used in RBF networks, which are extremely efficient as universal function approximators. These activation functions can take many forms, but they are usually found as one of the following three functions:

Gaussian: $\phi(v_i) = \exp\left(-\frac{\|v_i - c_i\|^2}{2\sigma^2}\right)$
Multiquadratic: $\phi(v_i) = \sqrt{\|v_i - c_i\|^2 + a^2}$
Inverse multiquadratic: $\phi(v_i) = \left(\|v_i - c_i\|^2 + a^2\right)^{-1/2}$

where $c_i$ is the vector representing the function center and $a$ and $\sigma$ are parameters affecting the spread of the radius.

Support-vector machines (SVMs) can effectively utilize a class of activation functions that includes both sigmoids and RBFs. In this case, the input is transformed to reflect a decision boundary hyperplane based on a few training inputs called support vectors $x$. The activation function for the hidden layer of these machines is referred to as the inner product kernel, $K(v_i, x) = \phi(v_i)$. The support vectors serve as the centers in RBFs with the kernel equal to the activation function, but they take a unique hyperbolic tangent form in the perceptron, with parameters $\beta_0$ and $\beta_1$ that must satisfy certain conditions for convergence. These machines can also accept arbitrary-order polynomial activation functions (polynomial kernels).
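To make the kernel view concrete, here is a minimal Python sketch of Gaussian and multiquadratic RBF activations together with hyperbolic tangent and polynomial inner-product kernels of the kind used in SVMs. The specific parameter values (sigma, a, beta0, beta1, and the polynomial degree) are arbitrary illustrative choices, and the tanh-kernel form shown is one common convention rather than a definition fixed by the text above.

    import numpy as np

    def gaussian_rbf(v, c, sigma=1.0):
        # Gaussian RBF: response decays with squared distance from the center c.
        return np.exp(-np.sum((v - c) ** 2) / (2.0 * sigma ** 2))

    def multiquadratic(v, c, a=1.0):
        # Multiquadratic RBF: grows with distance from the center c.
        return np.sqrt(np.sum((v - c) ** 2) + a ** 2)

    def tanh_kernel(x, y, beta0=1.0, beta1=-1.0):
        # Hyperbolic tangent inner-product kernel; beta0 and beta1 must
        # satisfy certain conditions for the kernel to be well behaved.
        return np.tanh(beta0 * np.dot(x, y) + beta1)

    def polynomial_kernel(x, y, degree=3, c=1.0):
        # Polynomial inner-product kernel of arbitrary order.
        return (np.dot(x, y) + c) ** degree

    x = np.array([1.0, 2.0])
    center = np.array([0.0, 0.0])
    print(gaussian_rbf(x, center))      # exp(-5/2) ~ 0.082
    print(multiquadratic(x, center))    # sqrt(6)   ~ 2.449
    print(tanh_kernel(x, x))            # tanh(5 - 1) ~ 0.999
    print(polynomial_kernel(x, x))      # (5 + 1)**3 = 216

In an RBF network each hidden unit applies one such radial function around its own center; in an SVM the kernel plays the analogous role, evaluated between the input and each support vector.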

[ "Artificial neural network", "sigmoidal activation function", "sigmoid activation function", "Universal approximation theorem", "parametric rectified linear unit", "exponential linear units" ]