
Logit

In statistics, the logit (/ˈloʊdʒɪt/ LOH-jit) function, or log-odds, is the logarithm of the odds p/(1 − p), where p is a probability. It maps probability values in (0, 1) to the whole real line (−∞, +∞), and it is the inverse of the sigmoidal "logistic" function (logistic transform) used in mathematics, especially in statistics.

In deep learning, the term logits layer is popularly used for the last neuron layer of a neural network for a classification task, which produces raw prediction values as real numbers ranging over (−∞, +∞).

If p is a probability, then p/(1 − p) is the corresponding odds; the logit of the probability is the logarithm of the odds, i.e.

\operatorname{logit}(p) = \log\frac{p}{1-p} = \log p - \log(1-p) = -\log\left(\frac{1}{p}-1\right).

The base of the logarithm is of little importance in the present article, as long as it is greater than 1, but the natural logarithm with base e is the one most often used. The choice of base corresponds to the choice of logarithmic unit for the value: base 2 corresponds to a shannon, base e to a nat, and base 10 to a hartley; these units are particularly used in information-theoretic interpretations. For each choice of base, the logit function takes values between negative and positive infinity.

The "logistic" function of any number α is given by the inverse-logit:

\operatorname{logit}^{-1}(\alpha) = \operatorname{logistic}(\alpha) = \frac{1}{1+e^{-\alpha}} = \frac{e^{\alpha}}{1+e^{\alpha}}.

The difference between the logits of two probabilities is the logarithm of the odds ratio R, providing a shorthand for combining odds ratios merely by adding and subtracting; both identities are checked numerically in the sketch below:

\log R = \log\frac{p_{1}/(1-p_{1})}{p_{2}/(1-p_{2})} = \operatorname{logit}(p_{1}) - \operatorname{logit}(p_{2}).

There have been several efforts to adapt linear regression methods to a domain where the output is a probability value in (0, 1) instead of an arbitrary real number. Many of these efforts focused on mapping the range (0, 1) to (−∞, +∞) and then running linear regression on the transformed values. In 1934 Chester Ittner Bliss used the cumulative normal distribution function to perform this mapping and called his model probit, an abbreviation for "probability unit". However, this is computationally more expensive. In 1944, Joseph Berkson used the log of odds and called this function logit, an abbreviation for "logistic unit", by analogy with probit. Log odds had been used extensively by Charles Sanders Peirce in the late 19th century. G. A. Barnard coined the commonly used term log-odds in 1949; the log-odds of an event is the logit of the probability of the event.

Closely related to the logit function (and logit model) are the probit function and probit model. The logit and probit are both defined on probabilities between 0 and 1, and both are quantile functions, i.e., inverses of the cumulative distribution function (CDF) of a probability distribution. In fact, the logit is the quantile function of the logistic distribution, while the probit is the quantile function of the standard normal distribution.
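As a minimal numerical sketch of the identities above (plain Python, standard library only; the helper names logit and inverse_logit are illustrative, not from any particular package):

```python
import math

def logit(p: float) -> float:
    """Log-odds of a probability p in (0, 1), using the natural logarithm."""
    return math.log(p / (1.0 - p))

def inverse_logit(alpha: float) -> float:
    """Logistic function: maps any real alpha back to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-alpha))

# The difference of two logits equals the log of the odds ratio.
p1, p2 = 0.8, 0.5
odds_ratio = (p1 / (1 - p1)) / (p2 / (1 - p2))
assert math.isclose(logit(p1) - logit(p2), math.log(odds_ratio))

# Round trip: the logistic function inverts the logit, recovering p.
assert math.isclose(inverse_logit(logit(0.3)), 0.3)
```

Both assertions pass: subtracting logits composes odds ratios, and the logistic function undoes the logit exactly.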
The probit function is denoted \Phi^{-1}(p), where \Phi is the CDF of the standard normal distribution, as just mentioned:

\operatorname{probit}(p) = \Phi^{-1}(p).

The logit and probit functions are extremely similar, particularly when the probit function is scaled so that its slope at y = 0 matches the slope of the logit. As a result, probit models are sometimes used in place of logit models because for certain applications (e.g., in Bayesian statistics) the implementation is easier.
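As a rough numerical check of that similarity, here is a short sketch assuming NumPy and SciPy are available; it uses scipy.special.logit and scipy.stats.norm.ppf (the probit) and rescales the probit so the two slopes match at p = 0.5:

```python
import numpy as np
from scipy.special import logit   # quantile function of the logistic distribution
from scipy.stats import norm      # norm.ppf is the probit (standard normal quantile)

p = np.linspace(0.05, 0.95, 7)

# The slope of the logit at p = 0.5 is 1/(0.5 * 0.5) = 4; the slope of the
# probit there is sqrt(2 * pi), the reciprocal of the standard normal density
# at 0. Rescaling the probit by the ratio makes the two curves nearly coincide.
scale = 4.0 / np.sqrt(2.0 * np.pi)

print(np.round(logit(p), 3))
print(np.round(scale * norm.ppf(p), 3))
```

Near the center of the interval the two printed rows agree to within a few hundredths; they diverge noticeably only toward the tails (by roughly 0.3 at p = 0.05).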

[ "Statistics", "Machine learning", "Econometrics", "Logistic regression", "Mixed logit", "Logit-normal distribution", "logit equilibrium", "rank ordered logit" ]