Arithmetic circuit complexity

In computational complexity theory, arithmetic circuits are the standard model for computing polynomials. Informally, an arithmetic circuit takes as inputs either variables or numbers, and is allowed to either add or multiply two expressions it has already computed. Arithmetic circuits provide a formal way to understand the complexity of computing polynomials. The basic type of question in this line of research is: what is the most efficient way to compute a given polynomial $f$?

An arithmetic circuit $C$ over the field $F$ and the set of variables $x_1, \ldots, x_n$ is a directed acyclic graph defined as follows. Every node with indegree zero is called an input gate and is labeled by either a variable $x_i$ or a field element in $F$. Every other gate is labeled by either $+$ or $\times$; in the first case it is a sum gate and in the second a product gate. An arithmetic formula is a circuit in which every gate has outdegree one (and so the underlying graph is a directed tree). A circuit has two complexity measures associated with it: size and depth. The size of a circuit is the number of gates in it, and the depth of a circuit is the length of the longest directed path in it. For example, a circuit with three input gates labeled $x_1$, $x_2$ and $1$, two sum gates computing $x_1 + x_2$ and $x_2 + 1$, and one product gate multiplying $x_1 + x_2$, $x_2$ and $x_2 + 1$ has size six and depth two.

An arithmetic circuit computes a polynomial in the following natural way. An input gate computes the polynomial it is labeled by. A sum gate $v$ computes the sum of the polynomials computed by its children (a gate $u$ is a child of $v$ if the directed edge $(u, v)$ is in the graph). A product gate computes the product of the polynomials computed by its children. In the example circuit above, the input gates compute $x_1$, $x_2$ and $1$, the sum gates compute $x_1 + x_2$ and $x_2 + 1$, and the product gate computes $(x_1 + x_2)\, x_2\, (x_2 + 1)$.

Given a polynomial $f$, we may ask what the best way to compute it is; for example, what is the smallest size of a circuit computing $f$? The answer to this question consists of two parts. The first part is finding some circuit that computes $f$; this part is usually called upper bounding the complexity of $f$. The second part is showing that no other circuit can do better; this part is called lower bounding the complexity of $f$. Although these two tasks are strongly related, proving lower bounds is usually harder, since a lower bound argument must reason about all possible circuits at the same time.
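To make the evaluation rule concrete, here is a minimal Python sketch (not part of the original article; the dictionary representation and helper names are our own) that stores a polynomial as a map from exponent vectors to coefficients and builds the six-gate example circuit described above.

```python
# A minimal sketch of how an arithmetic circuit computes a formal polynomial.
# Polynomials in x_1, ..., x_N are dicts mapping an exponent vector
# (a tuple of length N) to its coefficient; zero coefficients are dropped.

N = 2  # number of variables in the example

def var(i):
    """Input gate labeled by the variable x_i (1-indexed)."""
    e = [0] * N
    e[i - 1] = 1
    return {tuple(e): 1}

def const(c):
    """Input gate labeled by the field element c."""
    return {(0,) * N: c} if c != 0 else {}

def add(p, q):
    """Sum gate: the sum of the polynomials computed by its children."""
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + c
        if r[e] == 0:
            del r[e]
    return r

def mul(p, q):
    """Product gate: the product of the polynomials computed by its children."""
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            r[e] = r.get(e, 0) + c1 * c2
            if r[e] == 0:
                del r[e]
    return r

# The six-gate example circuit: inputs x1, x2, 1; sum gates x1 + x2 and
# x2 + 1; a product gate multiplying x1 + x2, x2 and x2 + 1.
x1, x2, one = var(1), var(2), const(1)
s1 = add(x1, x2)
s2 = add(x2, one)
out = mul(mul(s1, x2), s2)

print(out)
# {(1, 2): 1, (1, 1): 1, (0, 3): 1, (0, 2): 1}
# i.e. x1*x2^2 + x1*x2 + x2^3 + x2^2
```

One small difference from the text: the sketch expresses the three-way product as two binary product gates, whereas the example circuit uses a single product gate with three children; the computed polynomial is the same either way.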
Note that we are interested in the formal computation of polynomials, rather than in the functions the polynomials define. For example, consider the polynomial $x^2 + x$; over the field of two elements this polynomial represents the zero function, but it is not the zero polynomial. This is one of the differences between the study of arithmetic circuits and the study of Boolean circuits. In Boolean complexity, one is mostly interested in computing a function rather than some representation of it (in our case, a representation by a polynomial). This is one of the reasons that make Boolean complexity harder than arithmetic complexity. The study of arithmetic circuits may also be considered as one of the intermediate steps towards the study of the Boolean case, which we hardly understand.

As part of the study of the complexity of computing polynomials, some clever circuits (equivalently, algorithms) have been found. A well-known example is Strassen's algorithm for matrix multiplication. The straightforward way of computing the product of two $n \times n$ matrices requires a circuit of size on the order of $n^3$. Strassen showed that we can, in fact, multiply two matrices using a circuit of size roughly $n^{2.807}$. Strassen's basic idea is a clever way of multiplying two-by-two matrices; this idea is the starting point of the best theoretical algorithms for matrix multiplication, which take time roughly $n^{2.376}$.

Another interesting story lies behind the computation of the determinant of an $n \times n$ matrix. The naive way of computing the determinant requires circuits of size roughly $n!$. Nevertheless, we know that there are circuits of size polynomial in $n$ for computing the determinant; these circuits, however, have depth that is linear in $n$. Berkowitz came up with an improvement: a circuit of size polynomial in $n$, but of depth $O(\log^2 n)$.

The best circuit known for the permanent of an $n \times n$ matrix is also worth mentioning. As for the determinant, the naive circuit for the permanent has size roughly $n!$. For the permanent, however, the best circuit known has size roughly $2^n$, given by Ryser's formula: for an $n \times n$ matrix $X = (x_{i,j})$,

$$\operatorname{perm}(X) = (-1)^{n} \sum_{S \subseteq \{1, \ldots, n\}} (-1)^{|S|} \prod_{i=1}^{n} \sum_{j \in S} x_{i,j}.$$
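To make Ryser's formula concrete, the following minimal Python sketch (our own illustration; the function name and example matrix are not from the article) transcribes the formula directly, using on the order of $2^n \cdot n^2$ arithmetic operations rather than the roughly $n!$ of the naive expansion.

```python
from itertools import combinations

def permanent_ryser(X):
    """Permanent of an n-by-n matrix X via Ryser's formula:
    perm(X) = (-1)^n * sum over column subsets S of
              (-1)^|S| * prod_i sum_{j in S} x_{i,j}."""
    n = len(X)
    total = 0
    for k in range(n + 1):                 # subsets grouped by size k
        for S in combinations(range(n), k):
            rows_product = 1
            for i in range(n):             # prod_i sum_{j in S} x_{i,j}
                rows_product *= sum(X[i][j] for j in S)
            total += (-1) ** k * rows_product
    return (-1) ** n * total

# Example: perm([[1, 2], [3, 4]]) = 1*4 + 2*3 = 10.
print(permanent_ryser([[1, 2], [3, 4]]))  # 10
```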
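The two-by-two building block behind Strassen's matrix-multiplication bound, mentioned earlier in this section, can also be written out explicitly. The sketch below (our own illustration, with our own variable names) uses seven multiplications instead of the naive eight.

```python
def strassen_2x2(A, B):
    """Multiply two 2-by-2 matrices with 7 multiplications
    (Strassen's building block) instead of the naive 8."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

# Sanity check against the naive product:
print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The saving looks small for a single two-by-two product, but applied recursively to block matrices it replaces the $n^3$ multiplication count by roughly $n^{\log_2 7} \approx n^{2.807}$.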

[ "Arbitrary-precision arithmetic", "Polynomial", "Electronic circuit" ]
Parent Topic
Child Topic
    No Parent Topic