
Matrix (mathematics)

In mathematics, a matrix (plural: matrices) is a rectangular array (cf. irregular matrix) of numbers, symbols, or expressions, arranged in rows and columns. For example, the dimension of each matrix in the sum below is 2 × 3 (read 'two by three'), because each has two rows and three columns.

Provided that they have the same size (each matrix has the same number of rows and the same number of columns as the other), two matrices can be added or subtracted element by element (see Conformable matrix). For example:

\[
\begin{bmatrix}1&3&1\\1&0&0\end{bmatrix}
+
\begin{bmatrix}0&0&5\\7&5&0\end{bmatrix}
=
\begin{bmatrix}1+0&3+0&1+5\\1+7&0+5&0+0\end{bmatrix}
=
\begin{bmatrix}1&3&6\\8&5&0\end{bmatrix}
\]

Any matrix can be multiplied element-wise by a scalar from its associated field. This operation is called scalar multiplication, but its result is not named 'scalar product' to avoid confusion, since 'scalar product' is sometimes used as a synonym for 'inner product'. The rule for matrix multiplication, however, is that two matrices can be multiplied only when the number of columns in the first equals the number of rows in the second (i.e., the inner dimensions are the same: n for an (m×n)-matrix times an (n×p)-matrix, resulting in an (m×p)-matrix). There is no product the other way round, a first hint that matrix multiplication is not commutative.

The individual items in an m×n matrix A, often denoted a_{i,j}, where i and j usually vary from 1 to m and n, respectively, are called its elements or entries. To express an element of the result of a matrix operation conveniently, the indices of the element are often attached to the parenthesized or bracketed matrix expression; e.g., (AB)_{i,j} refers to an element of a matrix product. In the context of abstract index notation, this can also refer to the whole matrix product.

A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. For example, the rotation of vectors in three-dimensional space is a linear transformation, which can be represented by a rotation matrix R: if v is a column vector (a matrix with only one column) describing the position of a point in space, the product Rv is a column vector describing the position of that point after the rotation. The product of two transformation matrices is a matrix that represents the composition of the two transformations.

Another application of matrices is in the solution of systems of linear equations. If the matrix is square, some of its properties can be deduced by computing its determinant; for example, a square matrix has an inverse if and only if its determinant is nonzero. Insight into the geometry of a linear transformation is obtainable (along with other information) from the matrix's eigenvalues and eigenvectors.

Applications of matrices are found in most scientific fields. In every branch of physics, including classical mechanics, optics, electromagnetism, quantum mechanics, and quantum electrodynamics, they are used to study physical phenomena such as the motion of rigid bodies.
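Entry-wise, the multiplication rule above takes a standard closed form: the (i, j) entry of the product of an (m×n)-matrix A and an (n×p)-matrix B is a sum of products across the inner dimension n:

\[
(AB)_{i,j} = \sum_{k=1}^{n} a_{i,k}\, b_{k,j}, \qquad 1 \le i \le m,\quad 1 \le j \le p.
\]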
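As an illustrative sketch of these operations, the following Python snippet (using the NumPy library; the matrix C and the rotation angle are arbitrary choices, not taken from the text) demonstrates element-wise addition, scalar multiplication, the shape rule and non-commutativity of matrix products, a rotation Rv, and the determinant test for invertibility:

    import numpy as np

    # The two 2 x 3 matrices from the addition example above.
    A = np.array([[1, 3, 1],
                  [1, 0, 0]])
    B = np.array([[0, 0, 5],
                  [7, 5, 0]])

    print(A + B)   # element-wise sum: [[1 3 6], [8 5 0]]
    print(2 * A)   # scalar multiplication: [[2 6 2], [2 0 0]]

    # Matrix product: (2 x 3) @ (3 x 2) -> (2 x 2); inner dimensions must match.
    C = np.array([[1, 2],
                  [0, 1],
                  [4, 0]])
    print(A @ C)   # 2 x 2 result
    print(C @ A)   # 3 x 3 result: even when both products exist, AB != BA in general

    # Rotation about the z-axis by 90 degrees; R @ v rotates the column vector v.
    theta = np.pi / 2
    R = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0,              0,             1]])
    v = np.array([[1.0], [0.0], [0.0]])
    print(R @ v)                 # approximately [[0], [1], [0]]

    # A square matrix is invertible iff its determinant is nonzero.
    print(np.linalg.det(R))      # 1.0 for any rotation matrix
    print(R @ np.linalg.inv(R))  # approximately the identity matrix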
In computer graphics, matrices are used to manipulate 3D models and project them onto a 2-dimensional screen. In probability theory and statistics, stochastic matrices are used to describe sets of probabilities; for instance, they are used within the PageRank algorithm that ranks the pages in a Google search. Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions. Matrices are used in economics to describe systems of economic relationships.

A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations, a subject that is centuries old and is today an expanding area of research. Matrix decomposition methods simplify computations, both theoretically and practically. Algorithms tailored to particular matrix structures, such as sparse matrices and near-diagonal matrices, expedite computations in the finite element method and elsewhere. Infinite matrices occur in planetary theory and in atomic theory. A simple example of an infinite matrix is the matrix representing the derivative operator, which acts on the Taylor series of a function.

A matrix is a rectangular array of numbers or other mathematical objects for which operations such as addition and multiplication are defined. Most commonly, a matrix over a field F is a rectangular array of scalars, each of which is a member of F. Most of this article focuses on real and complex matrices, that is, matrices whose elements are real numbers or complex numbers, respectively. More general types of entries are discussed below. For instance, this is a real matrix:

\[
\mathbf{A} = \begin{bmatrix}-1.3 & 0.6\\20.4 & 5.5\\9.7 & -6.2\end{bmatrix}
\]

The numbers, symbols, or expressions in the matrix are called its entries or its elements. The horizontal and vertical lines of entries in a matrix are called rows and columns, respectively. The size of a matrix is defined by the number of rows and columns that it contains. A matrix with m rows and n columns is called an m × n matrix or m-by-n matrix, while m and n are called its dimensions. For example, the matrix A above is a 3 × 2 matrix.
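To make the stochastic-matrix application concrete, here is a minimal Python sketch (a toy three-page web invented for illustration; real PageRank additionally uses a damping factor): repeatedly multiplying a probability vector by a column-stochastic matrix drives it toward the stationary distribution used for ranking.

    import numpy as np

    # Toy column-stochastic link matrix: entry (i, j) is the probability of
    # jumping from page j to page i; each column sums to 1 (illustrative values).
    P = np.array([[0.0, 0.5, 0.3],
                  [0.5, 0.0, 0.7],
                  [0.5, 0.5, 0.0]])

    # Power iteration: start uniform and apply P repeatedly; the iterates
    # converge to the dominant eigenvector, i.e. the stationary distribution.
    r = np.full(3, 1 / 3)
    for _ in range(100):
        r = P @ r
    print(r)  # page ranks; the entries still sum to 1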
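The infinite matrix for the derivative operator can likewise be sketched by truncating it. Acting on the coefficient vector (a_0, a_1, a_2, ...) of a Taylor series, differentiation places (k + 1) a_{k+1} in slot k; a minimal Python illustration (the truncation size 5 is an arbitrary choice):

    import numpy as np

    n = 5  # truncation order; the true matrix is infinite

    # D maps the Taylor coefficients (a_0, ..., a_{n-1}) of f to those of f':
    # the coefficient of x^k in f' is (k + 1) * a_{k+1}.
    D = np.zeros((n, n))
    for k in range(n - 1):
        D[k, k + 1] = k + 1

    # f(x) = 1 + 2x + 3x^2 + 4x^3 + 5x^4  ->  f'(x) = 2 + 6x + 12x^2 + 20x^3
    a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    print(D @ a)  # [ 2.  6. 12. 20.  0.]  (last slot lost to truncation)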

[ "Quantum mechanics", "Algebra", "Syntactic foam", "polymer composites", "Candoluminescence", "copper matrix", "power matrix" ]
Parent Topic
Child Topic
    No Parent Topic