In mathematics, a Taylor series is a representation of a function as an infinite sum of terms that are calculated from the values of the function's derivatives at a single point. In the West, the subject was formulated by the Scottish mathematician James Gregory and formally introduced by the English mathematician Brook Taylor in 1715. If the Taylor series is centered at zero, then that series is also called a Maclaurin series, after the Scottish mathematician Colin Maclaurin, who made extensive use of this special case of Taylor series in the 18th century.

A function can be approximated by using a finite number of terms of its Taylor series. Taylor's theorem gives quantitative estimates on the error introduced by the use of such an approximation. The polynomial formed by taking some initial terms of the Taylor series is called a Taylor polynomial. The Taylor series of a function is the limit of that function's Taylor polynomials as the degree increases, provided that the limit exists. A function may not be equal to its Taylor series, even if its Taylor series converges at every point. A function that is equal to its Taylor series in an open interval (or a disc in the complex plane) is known as an analytic function in that interval.

The Taylor series of a real or complex-valued function f(x) that is infinitely differentiable at a real or complex number a is the power series

    f(a) + f′(a)(x − a) + (f″(a)/2!)(x − a)^2 + (f‴(a)/3!)(x − a)^3 + ⋯

where n! denotes the factorial of n and f^(n)(a) denotes the nth derivative of f evaluated at the point a. In the more compact sigma notation, this can be written as

    ∑_{n=0}^{∞} (f^(n)(a)/n!) (x − a)^n

The derivative of order zero of f is defined to be f itself, and (x − a)^0 and 0! are both defined to be 1. When a = 0, the series is also called a Maclaurin series.

The Taylor series for any polynomial is the polynomial itself. The Maclaurin series for 1/(1 − x) is the geometric series

    1 + x + x^2 + x^3 + ⋯

which converges to 1/(1 − x) for |x| < 1.
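As a concrete illustration of approximating a function by a Taylor polynomial, the following sketch sums the first terms of the Maclaurin series for sin(x), whose derivatives at 0 cycle through 0, 1, 0, −1. The function name `taylor_sin` and the choice of sin(x) as the example are illustrative, not from the original text; the point is only that the error shrinks as the degree of the Taylor polynomial grows.

```python
import math

def taylor_sin(x, degree):
    """Partial sum of the Maclaurin series for sin(x), up to the given degree.

    Uses the general Taylor formula sum of f^(n)(0)/n! * x^n, where the
    nth derivative of sin at 0 cycles through 0, 1, 0, -1.
    """
    total = 0.0
    for n in range(degree + 1):
        deriv_at_zero = (0, 1, 0, -1)[n % 4]  # f^(n)(0) for f = sin
        total += deriv_at_zero / math.factorial(n) * x ** n
    return total

# Higher-degree Taylor polynomials approximate sin(1) more closely.
for degree in (1, 3, 5, 7):
    approx = taylor_sin(1.0, degree)
    print(degree, approx, abs(approx - math.sin(1.0)))
```

Each printed error is smaller than the last, in line with Taylor's theorem: truncating the series after more terms leaves a smaller remainder near the expansion point.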
