
Belief propagation

Belief propagation, also known as sum-product message passing, is a message-passing algorithm for performing inference on graphical models, such as Bayesian networks and Markov random fields. It calculates the marginal distribution for each unobserved node (or variable), conditional on any observed nodes (or variables). Belief propagation is commonly used in artificial intelligence and information theory, and has demonstrated empirical success in numerous applications, including low-density parity-check codes, turbo codes, free energy approximation, and satisfiability. The algorithm was first proposed by Judea Pearl in 1982, who formulated it as an exact inference algorithm on trees; it was later extended to polytrees. While it is not exact on general graphs, it has been shown to be a useful approximate algorithm.

If X = {X_i} is a set of discrete random variables with a joint mass function p, the marginal distribution of a single X_i is simply the summation of p over all other variables:

p_{X_i}(x_i) = \sum_{\mathbf{x}' : x'_i = x_i} p(\mathbf{x}')

However, this quickly becomes computationally prohibitive: if there are 100 binary variables, then one needs to sum over 2^{99} ≈ 6.338 × 10^{29} possible values. By exploiting the polytree structure, belief propagation allows the marginals to be computed much more efficiently.

Variants of the belief propagation algorithm exist for several types of graphical models (Bayesian networks and Markov random fields in particular). We describe here the variant that operates on a factor graph. A factor graph is a bipartite graph containing nodes corresponding to variables V and factors F, with edges between variables and the factors in which they appear. We can write the joint mass function as

p(\mathbf{x}) = \prod_{a \in F} f_a(\mathbf{x}_a)

where \mathbf{x}_a is the vector of variables neighboring the factor node a. Any Bayesian network or Markov random field can be represented as a factor graph.

The algorithm works by passing real-valued functions called messages along the edges between the hidden nodes. More precisely, if v is a variable node and a is a factor node connected to v in the factor graph, the messages from v to a (denoted μ_{v→a}) and from a to v (denoted μ_{a→v}) are real-valued functions whose domain is Dom(v), the set of values that can be taken by the random variable associated with v. These messages contain the "influence" that one variable exerts on another. The messages are computed differently depending on whether the node receiving the message is a variable node or a factor node. Keeping the same notation:
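A message from a variable node v to a factor node a is the product of the messages from v's other neighbouring factors, while a message from a factor node a to a variable node v sums the factor, weighted by the messages from a's other neighbouring variables, over all assignments consistent with the value of v. In the standard sum-product formulation these updates read:

\mu_{v \to a}(x_v) = \prod_{a^* \in N(v) \setminus \{a\}} \mu_{a^* \to v}(x_v)

\mu_{a \to v}(x_v) = \sum_{\mathbf{x}'_a : x'_v = x_v} f_a(\mathbf{x}'_a) \prod_{v^* \in N(a) \setminus \{v\}} \mu_{v^* \to a}(x'_{v^*})

where N(v) and N(a) denote the neighbours of v and a in the factor graph. Once the messages have converged (which is guaranteed on a tree), the estimated marginal of each variable is proportional to the product of all messages arriving from its neighbouring factors.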
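As a concrete illustration of these updates, here is a minimal sketch of sum-product message passing on a small chain-structured factor graph in Python. The variable names, factor tables, helper functions, and flooding schedule are illustrative choices, not taken from the text or from any particular library:

```python
# A minimal sketch of sum-product message passing on a small, tree-structured
# factor graph, using plain Python dictionaries. All names here are illustrative.

from itertools import product

# Each variable takes values in a small discrete domain.
domains = {"x1": [0, 1], "x2": [0, 1], "x3": [0, 1]}

# Factors: each maps its scope (a tuple of variable names) to a table over
# joint assignments. Here: a chain x1 -- f12 -- x2 -- f23 -- x3 plus a unary
# prior f1 on x1.
factors = {
    "f1":  (("x1",),      {(0,): 0.6, (1,): 0.4}),
    "f12": (("x1", "x2"), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}),
    "f23": (("x2", "x3"), {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.3, (1, 1): 0.7}),
}

def neighbors_of_var(v):
    return [a for a, (scope, _) in factors.items() if v in scope]

def msg_var_to_factor(v, a, msgs):
    # Product of the messages from all of v's neighbouring factors except a.
    out = {xv: 1.0 for xv in domains[v]}
    for b in neighbors_of_var(v):
        if b != a:
            for xv in domains[v]:
                out[xv] *= msgs[(b, v)][xv]
    return out

def msg_factor_to_var(a, v, msgs):
    # Sum the factor over all assignments of its scope, weighted by the
    # incoming messages from the other variables, keeping the value of v fixed.
    scope, table = factors[a]
    out = {xv: 0.0 for xv in domains[v]}
    for assignment in product(*(domains[u] for u in scope)):
        value = table[assignment]
        for u, xu in zip(scope, assignment):
            if u != v:
                value *= msgs[(u, a)][xu]
        out[assignment[scope.index(v)]] += value
    return out

def marginals(n_iters=10):
    # Initialise all messages to 1 and update them synchronously; on a tree
    # this converges exactly once messages have crossed the graph's diameter.
    msgs = {}
    for a, (scope, _) in factors.items():
        for v in scope:
            msgs[(v, a)] = {xv: 1.0 for xv in domains[v]}
            msgs[(a, v)] = {xv: 1.0 for xv in domains[v]}
    for _ in range(n_iters):
        new = {}
        for a, (scope, _) in factors.items():
            for v in scope:
                new[(v, a)] = msg_var_to_factor(v, a, msgs)
                new[(a, v)] = msg_factor_to_var(a, v, msgs)
        msgs = new
    # The belief of each variable is the normalized product of its incoming
    # factor-to-variable messages.
    result = {}
    for v in domains:
        belief = {xv: 1.0 for xv in domains[v]}
        for a in neighbors_of_var(v):
            for xv in domains[v]:
                belief[xv] *= msgs[(a, v)][xv]
        z = sum(belief.values())
        result[v] = {xv: belief[xv] / z for xv in domains[v]}
    return result

if __name__ == "__main__":
    for v, dist in marginals().items():
        print(v, dist)
```

Because this example graph is a tree, a few synchronous sweeps suffice for the messages to converge, and the normalized beliefs then equal the exact marginals.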

[ "Decoding methods", "survey propagation", "residual belief propagation", "nonparametric belief propagation", "Generalized distributive law", "generalized belief propagation" ]