
Perplexity

In information theory, perplexity is a measurement of how well a probability distribution or probability model predicts a sample. It may be used to compare probability models: a low perplexity indicates the probability distribution is good at predicting the sample.

The perplexity of a discrete probability distribution p is defined as

    2^{H(p)} = 2^{-\sum_x p(x) \log_2 p(x)},

where H(p) is the entropy (in bits) of the distribution and x ranges over events. (The base need not be 2: the perplexity is independent of the base, provided that the entropy and the exponentiation use the same base.) This measure is also known in some domains as the (order-1 true) diversity. The perplexity of a random variable X may be defined as the perplexity of the distribution over its possible values x.

In the special case where p models a fair k-sided die (a uniform distribution over k discrete events), its perplexity is k. A random variable with perplexity k has the same uncertainty as a fair k-sided die, and one is said to be "k-ways perplexed" about the value of the random variable. (Unless it is a fair k-sided die, more than k values will be possible, but the overall uncertainty is no greater, because some of these values will have probability greater than 1/k, decreasing the overall value while summing.)

Perplexity is sometimes used as a measure of how hard a prediction problem is, but this is not always accurate. If you have two choices, one with probability 0.9, then your chance of a correct guess is 90 percent using the optimal strategy, yet the perplexity is

    2^{-0.9 \log_2 0.9 - 0.1 \log_2 0.1} \approx 1.38.

The inverse of the perplexity (which, in the case of the fair k-sided die, represents the probability of guessing correctly) is 1/1.38 ≈ 0.72, not 0.9.
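As a sanity check on the numbers above, here is a minimal Python sketch that computes the perplexity of a discrete distribution directly from its entropy. The helper name `perplexity` is ours for illustration, not part of any particular library.

```python
import math

def perplexity(probs, base=2.0):
    """Perplexity of a discrete distribution: base ** H(p),
    where H(p) is the entropy computed in the same base.
    Zero-probability events contribute nothing to the entropy."""
    entropy = -sum(p * math.log(p, base) for p in probs if p > 0)
    return base ** entropy

# A fair 6-sided die (uniform over k = 6 events): perplexity is k.
print(perplexity([1 / 6] * 6))   # 6.0 (up to float rounding)

# The two-outcome example from the text: p = (0.9, 0.1).
ppl = perplexity([0.9, 0.1])
print(round(ppl, 2))             # 1.38
print(round(1 / ppl, 2))         # 0.72, not 0.9
```

Changing `base` leaves the result unchanged as long as the entropy and the exponentiation use the same base, which is the base-independence property noted above.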

[ "Speech recognition", "Machine learning", "Language model", "Natural language processing", "perplexity reduction", "Kneser–Ney smoothing", "recurrent neural network language models" ]