
Inverse function theorem

In mathematics, specifically differential calculus, the inverse function theorem gives a sufficient condition for a function to be invertible in a neighborhood of a point in its domain: namely, that its derivative is continuous and non-zero at the point. The theorem also gives a formula for the derivative of the inverse function. In multivariable calculus, this theorem can be generalized to any continuously differentiable, vector-valued function whose Jacobian determinant is nonzero at a point in its domain, giving a formula for the Jacobian matrix of the inverse. There are also versions of the inverse function theorem for complex holomorphic functions, for differentiable maps between manifolds, for differentiable functions between Banach spaces, and so forth.

For functions of a single variable, the theorem states that if $f$ is a continuously differentiable function with nonzero derivative at the point $a$, then $f$ is invertible in a neighborhood of $a$, the inverse is continuously differentiable, and the derivative of the inverse function at $b = f(a)$ is the reciprocal of the derivative of $f$ at $a$:

$$\bigl(f^{-1}\bigr)'(b) = \frac{1}{f'(a)} = \frac{1}{f'\!\bigl(f^{-1}(b)\bigr)}.$$

An alternate version, which assumes that $f$ is continuous and injective near $a$, and differentiable at $a$ with a non-zero derivative, also results in $f$ being invertible near $a$, with an inverse that is similarly continuous and injective, and for which the above formula applies as well.

For functions of more than one variable, the theorem states that if $F$ is a continuously differentiable function from an open set of $\mathbb{R}^n$ into $\mathbb{R}^n$, and the total derivative is invertible at a point $p$ (i.e., the Jacobian determinant of $F$ at $p$ is non-zero), then $F$ is invertible near $p$: an inverse function to $F$ is defined on some neighborhood of $q = F(p)$. Writing $F = (F_1, \ldots, F_n)$, this means that the system of $n$ equations $y_i = F_i(x_1, \dots, x_n)$ has a unique solution for $x_1, \dots, x_n$ in terms of $y_1, \dots, y_n$, provided that we restrict $x$ and $y$ to small enough neighborhoods of $p$ and $q$, respectively. In the infinite-dimensional case, the theorem requires the extra hypothesis that the Fréchet derivative of $F$ at $p$ has a bounded inverse.
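In practice, the unique local solution promised by the theorem can be computed iteratively. The following is a minimal sketch, not a method prescribed by the theorem itself: it applies Newton's method to solve $F(x) = y$ for $x$ near $p$, using the map $F(x, y) = (e^x \cos y, e^x \sin y)$ from the example discussed below; the base point $p = (0.3, 0.5)$ and the target $y$ are illustrative choices.

```python
# A minimal sketch: solve the system F(x) = y for x near p with Newton's
# method. F is the example map discussed below; p and y are illustrative.
import numpy as np

def F(v):
    x, y = v
    return np.array([np.exp(x) * np.cos(y), np.exp(x) * np.sin(y)])

def JF(v):
    # Jacobian matrix of F at v.
    x, y = v
    return np.array([[np.exp(x) * np.cos(y), -np.exp(x) * np.sin(y)],
                     [np.exp(x) * np.sin(y),  np.exp(x) * np.cos(y)]])

def solve_local(y_target, x0, tol=1e-12, max_iter=50):
    """Newton iteration x <- x - JF(x)^{-1} (F(x) - y_target), started at x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(JF(x), F(x) - y_target)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

p = np.array([0.3, 0.5])
y = F(p) + np.array([0.01, -0.02])   # a target near q = F(p)
x = solve_local(y, x0=p)             # converges to the unique nearby solution
print(np.allclose(F(x), y))          # True
```

Started close enough to $p$, the iteration converges to the one solution that the theorem guarantees in a small neighborhood; starting far away may find a different preimage, which is exactly why the theorem's conclusion is only local.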
Finally, the theorem says that the inverse function $F^{-1}$ is continuously differentiable, and its Jacobian derivative at $q = F(p)$ is the matrix inverse of the Jacobian of $F$ at $p$:

$$J_{F^{-1}}(q) = \bigl[J_F(p)\bigr]^{-1}.$$

The hard part of the theorem is the existence and differentiability of $F^{-1}$. Assuming this, the inverse derivative formula follows from the chain rule applied to $F^{-1} \circ F = \mathrm{id}$:

$$I = J_{\mathrm{id}}(p) = J_{F^{-1} \circ F}(p) = J_{F^{-1}}(q)\, J_F(p).$$

Consider the vector-valued function $F : \mathbb{R}^2 \to \mathbb{R}^2$ defined by

$$F(x, y) = \bigl(e^x \cos y,\; e^x \sin y\bigr).$$

Its Jacobian determinant is $e^{2x}$, which is nonzero at every point, so by the theorem $F$ is invertible near each point of $\mathbb{R}^2$, even though (being $2\pi$-periodic in $y$) it is not globally injective.
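The matrix identity $J_{F^{-1}}(q) = [J_F(p)]^{-1}$ is easy to check numerically for this example. The sketch below uses the explicit local inverse $F^{-1}(u, v) = \bigl(\tfrac{1}{2}\log(u^2 + v^2), \operatorname{atan2}(v, u)\bigr)$, valid near $q$ for this particular map, and an arbitrary illustrative base point $p = (0.3, 0.5)$; it compares a finite-difference Jacobian of $F^{-1}$ at $q$ against the matrix inverse of $J_F(p)$.

```python
# Numerical check of J_{F^{-1}}(q) = [J_F(p)]^{-1} for the example
# F(x, y) = (e^x cos y, e^x sin y); the base point p is an arbitrary choice.
import numpy as np

def F(v):
    x, y = v
    return np.array([np.exp(x) * np.cos(y), np.exp(x) * np.sin(y)])

def JF(v):
    # Jacobian matrix of F at v.
    x, y = v
    return np.array([[np.exp(x) * np.cos(y), -np.exp(x) * np.sin(y)],
                     [np.exp(x) * np.sin(y),  np.exp(x) * np.cos(y)]])

def F_inv(w):
    # Explicit local inverse, valid on the right half-plane u > 0:
    # x = (1/2) log(u^2 + v^2), y = atan2(v, u).
    u, v = w
    return np.array([0.5 * np.log(u**2 + v**2), np.arctan2(v, u)])

p = np.array([0.3, 0.5])
q = F(p)

# Central finite-difference Jacobian of F^{-1} at q, one column per coordinate.
h = 1e-6
J_inv = np.column_stack([(F_inv(q + h * e) - F_inv(q - h * e)) / (2 * h)
                         for e in np.eye(2)])

print(np.allclose(J_inv, np.linalg.inv(JF(p)), atol=1e-6))  # True
```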

[ "Banach space", "Picard–Lindelöf theorem", "Differentiable function", "Exotic R4" ]