
Hoeffding's inequality

In probability theory, Hoeffding's inequality provides an upper bound on the probability that the sum of bounded independent random variables deviates from its expected value by more than a certain amount. Hoeffding's inequality was proven by Wassily Hoeffding in 1963.

Hoeffding's inequality is a generalization of the Chernoff bound, which applies only to Bernoulli random variables, and a special case of the Azuma–Hoeffding inequality and of McDiarmid's inequality. It is similar to, but incomparable with, the Bernstein inequality, proved by Sergei Bernstein in 1923.

Hoeffding's inequality can be applied to the important special case of identically distributed Bernoulli random variables, and this is how the inequality is often used in combinatorics and computer science. Consider a coin that shows heads with probability p and tails with probability 1 − p, and toss the coin n times. The expected number of heads is pn. Furthermore, the probability that the coin comes up heads at most k times can be exactly quantified by the following expression:

\[ \Pr\bigl(H(n) \le k\bigr) = \sum_{i=0}^{k} \binom{n}{i} p^{i} (1-p)^{n-i}, \]

where H(n) is the number of heads in n coin tosses.

When k = (p − ε)n for some ε > 0, Hoeffding's inequality bounds this probability by a term that is exponentially small in ε²n:

\[ \Pr\bigl(H(n) \le (p - \varepsilon) n\bigr) \le \exp\bigl(-2 \varepsilon^{2} n\bigr). \]

Similarly, when k = (p + ε)n for some ε > 0, Hoeffding's inequality bounds the probability that we see at least εn more heads than we would expect:

\[ \Pr\bigl(H(n) \ge (p + \varepsilon) n\bigr) \le \exp\bigl(-2 \varepsilon^{2} n\bigr). \]
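As a minimal numerical sketch of the coin-flip bounds above (assuming SciPy is available; the values p = 0.5, n = 100, ε = 0.1 are arbitrary choices for illustration), the following compares the exact binomial tail probabilities with the Hoeffding bound exp(−2ε²n):

import math
from scipy.stats import binom

# Illustrative parameters (assumed, not from the article).
p, n, eps = 0.5, 100, 0.1

# Exact lower-tail probability P(H(n) <= k) with k = (p - eps) * n,
# i.e. the binomial sum sum_{i=0}^{k} C(n, i) p^i (1 - p)^(n - i).
k = math.floor((p - eps) * n)
exact_lower_tail = binom.cdf(k, n, p)

# Hoeffding's bound on the same probability: exp(-2 * eps^2 * n).
hoeffding_bound = math.exp(-2 * eps**2 * n)

print(f"P(H(n) <= {k}) exactly : {exact_lower_tail:.6f}")
print(f"Hoeffding upper bound  : {hoeffding_bound:.6f}")

# The upper tail P(H(n) >= (p + eps) n) satisfies the same bound.
k_up = math.ceil((p + eps) * n)
exact_upper_tail = binom.sf(k_up - 1, n, p)  # P(H(n) >= k_up)

print(f"P(H(n) >= {k_up}) exactly : {exact_upper_tail:.6f}")
print(f"Hoeffding upper bound  : {hoeffding_bound:.6f}")

With these parameters the exact tail probabilities (about 0.028 each) sit well below the Hoeffding bound exp(−2) ≈ 0.135, as the inequality guarantees.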

[ "Inequality", "Random variable", "Hoeffding's lemma" ]
Parent Topic
Child Topic
    No Parent Topic