Dirichlet distribution

In probability and statistics, the Dirichlet distribution (after Peter Gustav Lejeune Dirichlet), often denoted $\operatorname{Dir}(\boldsymbol{\alpha})$, is a family of continuous multivariate probability distributions parameterized by a vector $\boldsymbol{\alpha}$ of positive reals. It is a multivariate generalization of the beta distribution, hence its alternative name of multivariate beta distribution (MBD). Dirichlet distributions are commonly used as prior distributions in Bayesian statistics; in fact, the Dirichlet distribution is the conjugate prior of the categorical distribution and the multinomial distribution. The infinite-dimensional generalization of the Dirichlet distribution is the Dirichlet process.

The Dirichlet distribution of order K ≥ 2 with parameters α1, ..., αK > 0 has a probability density function with respect to Lebesgue measure on the Euclidean space R^(K−1) given by

$$ f(x_1, \ldots, x_K; \alpha_1, \ldots, \alpha_K) = \frac{1}{\mathrm{B}(\boldsymbol{\alpha})} \prod_{i=1}^{K} x_i^{\alpha_i - 1}. $$

The normalizing constant is the multivariate beta function, which can be expressed in terms of the gamma function:

$$ \mathrm{B}(\boldsymbol{\alpha}) = \frac{\prod_{i=1}^{K} \Gamma(\alpha_i)}{\Gamma\!\left(\sum_{i=1}^{K} \alpha_i\right)}, \qquad \boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_K). $$

The support of the Dirichlet distribution is the set of K-dimensional vectors $\boldsymbol{x}$ whose entries are real numbers in the interval (0, 1) and which satisfy $\|\boldsymbol{x}\|_1 = 1$, i.e. the sum of the coordinates is 1. These can be viewed as the probabilities of a K-way categorical event. Equivalently, the domain of the Dirichlet distribution is itself a set of probability distributions, specifically the set of K-dimensional discrete distributions. The technical term for the set of points in the support of a K-dimensional Dirichlet distribution is the open standard (K − 1)-simplex, a generalization of a triangle embedded in the next-higher dimension. For example, with K = 3, the support is an equilateral triangle embedded at a downward angle in three-dimensional space, with vertices at (1, 0, 0), (0, 1, 0) and (0, 0, 1), i.e. touching each of the coordinate axes at a point 1 unit away from the origin.

A common special case is the symmetric Dirichlet distribution, where all of the elements of the parameter vector $\boldsymbol{\alpha}$ have the same value. The symmetric case might be useful, for example, when a Dirichlet prior over components is called for but there is no prior knowledge favoring one component over another. Since all elements of the parameter vector have the same value, the symmetric Dirichlet distribution can be parametrized by a single scalar value α, called the concentration parameter. In terms of α, the density function has the form

$$ f(x_1, \ldots, x_K; \alpha) = \frac{\Gamma(\alpha K)}{\Gamma(\alpha)^K} \prod_{i=1}^{K} x_i^{\alpha - 1}. $$

When α = 1, the symmetric Dirichlet distribution is equivalent to a uniform distribution over the open standard (K − 1)-simplex, i.e. it is uniform over all points in its support. This particular distribution is known as the flat Dirichlet distribution.
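As a quick check of the density formula and its normalizing constant, here is a minimal sketch (assuming NumPy and SciPy are available; the parameter values and the helper name dirichlet_pdf are illustrative, not part of any standard API) that evaluates the formula above directly and compares it with scipy.stats.dirichlet:

```python
import numpy as np
from scipy.stats import dirichlet
from scipy.special import gammaln

def dirichlet_pdf(x, alpha):
    """Density of Dir(alpha) at a point x on the open (K-1)-simplex,
    computed directly from the formula above (via logs for stability)."""
    x = np.asarray(x, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    # log of the multivariate beta function B(alpha)
    log_B = np.sum(gammaln(alpha)) - gammaln(np.sum(alpha))
    return np.exp(np.sum((alpha - 1.0) * np.log(x)) - log_B)

alpha = np.array([2.0, 3.0, 4.0])   # illustrative parameters, K = 3
x = np.array([0.2, 0.3, 0.5])       # a point on the 2-simplex (coordinates sum to 1)

print(dirichlet_pdf(x, alpha))      # direct evaluation of the formula
print(dirichlet.pdf(x, alpha))      # SciPy's implementation; the two should agree
```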
Values of the concentration parameter above 1 favor dense, evenly distributed variates, i.e. all the values within a single sample are similar to each other. Values of the concentration parameter below 1 favor sparse variates, i.e. most of the values within a single sample will be close to 0, and the vast majority of the mass will be concentrated in a few of them.

More generally, the parameter vector is sometimes written as the product $\alpha \boldsymbol{n}$ of a (scalar) concentration parameter α and a (vector) base measure $\boldsymbol{n} = (n_1, \ldots, n_K)$, where $\boldsymbol{n}$ lies within the (K − 1)-simplex (i.e. its coordinates $n_i$ sum to one). The concentration parameter in this case is larger by a factor of K than the concentration parameter of the symmetric Dirichlet distribution described above. This construction ties in with the concept of a base measure when discussing Dirichlet processes and is often used in the topic-modelling literature.

Let $X = (X_1, \ldots, X_K) \sim \operatorname{Dir}(\boldsymbol{\alpha})$, meaning that the first K − 1 components have the above density and $X_K = 1 - \sum_{i=1}^{K-1} X_i$.
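The effect of the concentration parameter, and the concentration-times-base-measure parameterization, can be seen by drawing samples. The following is a minimal sketch using NumPy, in which K, the concentration values, and the base measure n are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10

# Symmetric Dirichlet: one sample per concentration value.
for conc in (0.1, 1.0, 10.0):
    sample = rng.dirichlet(np.full(K, conc))
    # conc < 1: most mass sits in a few coordinates (sparse);
    # conc > 1: coordinates are all close to 1/K (dense, even).
    print(conc, np.round(sample, 3))

# Concentration-times-base-measure parameterization: parameters alpha * n,
# where n lies on the simplex (a hypothetical non-uniform base measure here).
alpha = 50.0
n = np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.05])
sample = rng.dirichlet(alpha * n)
print(np.round(sample, 3))   # large alpha: samples concentrate around n
```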

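As noted in the introduction, the Dirichlet distribution is the conjugate prior of the categorical and multinomial distributions: if the prior over the probability vector is Dir(α) and the observed category counts are c, the posterior is Dir(α + c). A minimal sketch of this update, with hypothetical prior parameters and counts:

```python
import numpy as np

rng = np.random.default_rng(1)

alpha_prior = np.array([1.0, 1.0, 1.0])   # Dir(alpha) prior over 3 categories
counts = np.array([12, 3, 5])             # hypothetical observed category counts

# Conjugacy: the posterior is again Dirichlet, with parameters alpha + counts.
alpha_post = alpha_prior + counts

# Posterior mean of each category probability: (alpha_i + c_i) / sum(alpha + c)
print(alpha_post / alpha_post.sum())

# Draws from the posterior over the probability vector
print(rng.dirichlet(alpha_post, size=3))
```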
[ "Boundary value problem", "Dirichlet space", "Pitman–Yor process", "Hierarchical Dirichlet process", "Dirichlet algebra", "Dirichlet's principle" ]