
Expected value

In probability theory, the expected value of a random variable is, intuitively, the long-run average value of repetitions of the same experiment it represents. For example, the expected value in rolling a six-sided die is 3.5, because the average of all the numbers that come up approaches 3.5 as the number of rolls approaches infinity (see § Examples for details).
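A quick way to see this convergence is by simulation. The following Python sketch (an illustration added here, not part of the original article) rolls a fair die many times and prints the sample mean, which settles near 3.5 as the number of rolls grows:

```python
import random

def average_of_die_rolls(num_rolls: int, seed: int = 0) -> float:
    """Roll a fair six-sided die num_rolls times and return the sample mean."""
    rng = random.Random(seed)
    # Each face 1..6 comes up with probability 1/6.
    total = sum(rng.randint(1, 6) for _ in range(num_rolls))
    return total / num_rolls

# The sample mean approaches the expected value 3.5 as the number of rolls grows.
for n in (100, 10_000, 1_000_000):
    print(n, average_of_die_rolls(n))
```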
In other words, the law of large numbers states that the arithmetic mean of the values almost surely converges to the expected value as the number of repetitions approaches infinity. The expected value is also known as the expectation, mathematical expectation, EV, average, mean value, mean, or first moment.

More practically, the expected value of a discrete random variable is the probability-weighted average of all possible values. In other words, each possible value the random variable can assume is multiplied by its probability of occurring, and the resulting products are summed to produce the expected value. The same principle applies to an absolutely continuous random variable, except that an integral of the variable with respect to its probability density replaces the sum. The formal definition subsumes both of these and also works for distributions which are neither discrete nor absolutely continuous; the expected value of a random variable is the integral of the random variable with respect to its probability measure.

The expected value does not exist for random variables having some distributions with large "tails", such as the Cauchy distribution. For random variables such as these, the long tails of the distribution prevent the sum or integral from converging.

The expected value is a key aspect of how one characterizes a probability distribution; it is one type of location parameter. By contrast, the variance is a measure of dispersion of the possible values of the random variable around the expected value. The variance itself is defined in terms of two expectations: it is the expected value of the squared deviation of the variable's value from the variable's expected value, $\operatorname{var}(X) = \operatorname{E}[(X - \operatorname{E}[X])^2] = \operatorname{E}[X^2] - (\operatorname{E}[X])^2$.

The expected value plays important roles in a variety of contexts. In regression analysis, one desires a formula in terms of observed data that will give a "good" estimate of the parameter giving the effect of some explanatory variable upon a dependent variable. The formula will give different estimates using different samples of data, so the estimate it gives is itself a random variable. A formula is typically considered good in this context if it is an unbiased estimator; that is, if the expected value of the estimate (the average value it would give over an arbitrarily large number of separate samples) can be shown to equal the true value of the desired parameter.

In decision theory, and in particular in choice under uncertainty, an agent is described as making an optimal choice in the context of incomplete information. For risk-neutral agents, the choice involves using the expected values of uncertain quantities, while for risk-averse agents it involves maximizing the expected value of some objective function such as a von Neumann–Morgenstern utility function. One example of using expected value in reaching optimal decisions is the Gordon–Loeb model of information security investment. According to the model, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., the expected value of the loss resulting from a cyber or information security breach).
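As a concrete check of the variance identity above, here is a small sketch (the three-point distribution is a made-up example) computing $\operatorname{E}[(X - \operatorname{E}[X])^2]$ and $\operatorname{E}[X^2] - (\operatorname{E}[X])^2$ for the same discrete distribution:

```python
# A hypothetical discrete distribution: outcomes with their probabilities.
values = [1.0, 2.0, 5.0]
probs = [0.2, 0.5, 0.3]  # must sum to 1

ex = sum(p * x for x, p in zip(values, probs))       # E[X]
ex2 = sum(p * x * x for x, p in zip(values, probs))  # E[X^2]

var_by_deviation = sum(p * (x - ex) ** 2 for x, p in zip(values, probs))
var_by_moments = ex2 - ex ** 2

# Both routes give the same variance, 2.41, up to floating-point rounding.
print(var_by_deviation, var_by_moments)
```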
Let $X$ be a random variable with a finite number of finite outcomes $x_1, x_2, \ldots, x_k$ occurring with probabilities $p_1, p_2, \ldots, p_k$, respectively. The expectation of $X$ is defined as

$\operatorname{E}[X] = x_1 p_1 + x_2 p_2 + \cdots + x_k p_k = \sum_{i=1}^{k} x_i p_i .$

Since all probabilities $p_i$ add up to 1 ($p_1 + p_2 + \cdots + p_k = 1$), the expected value is the weighted average, with the $p_i$'s being the weights. If all outcomes $x_i$ are equiprobable (that is, $p_1 = p_2 = \cdots = p_k$), then the weighted average turns into the simple average. This is intuitive: the expected value of a random variable is the average of all values it can take; thus the expected value is what one expects to happen on average. If the outcomes $x_i$ are not equiprobable, then the simple average must be replaced with the weighted average, which takes into account the fact that some outcomes are more likely than others. The intuition, however, remains the same: the expected value of $X$ is what one expects to happen on average.

Basic properties

Let $C$ be a constant random variable, i.e. $C \equiv c$. It follows from the definition of the Lebesgue integral that $\operatorname{E}[C] = c$; likewise, if $X = C$ (a.s.), then $\operatorname{E}[X] = c$ by the same property.

Expectation is additive: $\operatorname{E}[X + Y] = \operatorname{E}[X] + \operatorname{E}[Y]$ whenever the right-hand side is defined.

If $\operatorname{E}[|X|] = 0$, then $X = 0$ (a.s.). Indeed, for every positive constant $r \in \mathbb{R}_{>0}$ we have $\operatorname{P}(|X| \geq r) = 0$, since Markov's inequality gives $\operatorname{P}(|X| \geq r) \leq \operatorname{E}[|X|]/r = 0$.

Suppose $\operatorname{E}[X]$ is defined (i.e. $\min(\operatorname{E}[X_+], \operatorname{E}[X_-]) < \infty$) and finite. Since $\operatorname{E}[X] = \operatorname{E}[X_+] - \operatorname{E}[X_-]$, we know that $\operatorname{E}[X_+]$ is finite, and therefore $X_+ < +\infty$ (a.s.), or equivalently $\operatorname{P}(X_+ = +\infty) = 0$. In the other direction, if $X \geq 0$ and $\operatorname{P}(X = +\infty) > 0$, then $\operatorname{E}[X] = +\infty$.

Expectation respects monotone limits: if $0 \leq Y \leq X_n$ and $X_n$ increases monotonically to $X$, then by monotonicity the sequence $\{\operatorname{E}[X_n]\}$ is non-decreasing and $\operatorname{E}[Y] \leq \operatorname{E}[X_n] \leq \operatorname{E}[X]$. In particular, if $\operatorname{E}[Y] = +\infty$, then $\operatorname{E}[X_n] = +\infty$ for every $n$, so $\liminf_{n} \operatorname{E}[X_n] = +\infty$ and the monotone convergence assertion $\lim_{n} \operatorname{E}[X_n] = \operatorname{E}[X]$ follows. (In the formal development, these statements are proved in several steps, beginning with non-negative $\mathbb{Q}$-valued random variables and the pointwise behaviour at each $\omega \in \Omega$.)

History

Laplace, who called the expected value "mathematical hope" (espérance mathématique), described it as follows:

… this advantage in the theory of chance is the product of the sum hoped for by the probability of obtaining it; it is the partial sum which ought to result when we do not wish to run the risks of the event in supposing that the division is made proportional to the probabilities. This division is the only equitable one when all strange circumstances are eliminated; because an equal degree of probability gives an equal right for the sum hoped for. We will call this advantage mathematical hope.
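To tie the finite-case definition and the additivity property together, here is an illustrative sketch (not from the original article) that computes the probability-weighted average exactly with fractions, using a fair die and the sum of two independent dice:

```python
from fractions import Fraction
from itertools import product

def expectation(dist):
    """Probability-weighted average: E[X] = sum of p_i * x_i."""
    return sum(p * x for x, p in dist.items())

# A fair die: equiprobable outcomes, so the weighted average is the simple average.
die = {x: Fraction(1, 6) for x in range(1, 7)}
print(expectation(die))  # 7/2, i.e. 3.5

# Additivity check: E[X + Y] = E[X] + E[Y] for two independent fair dice.
sum_dist = {}
for x, y in product(die, die):
    sum_dist[x + y] = sum_dist.get(x + y, Fraction(0)) + die[x] * die[y]
print(expectation(sum_dist), expectation(die) + expectation(die))  # both 7
```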

[ "Statistics", "St. Petersburg paradox", "Expected value of perfect information", "expected value model", "Expected value of sample information" ]
Parent Topic
Child Topic
    No Parent Topic