Structural complexity

In computational complexity theory, structural complexity theory, or simply structural complexity, is the study of complexity classes, rather than of the computational complexity of individual problems and algorithms. It involves research into both the internal structure of individual complexity classes and the relations between different complexity classes. The theory emerged from the (so far unsuccessful) attempts to resolve the first and still most important question of this kind, the P = NP problem. Most of the research is carried out under the assumption that P is not equal to NP, and under the more far-reaching conjecture that the polynomial time hierarchy of complexity classes is infinite.

The compression theorem is an important theorem about the complexity of computable functions. It states that there exists no largest complexity class, with computable boundary, containing all computable functions.

The space hierarchy theorems are separation results showing that both deterministic and nondeterministic machines can solve more problems in (asymptotically) more space, subject to certain conditions. For example, a deterministic Turing machine can solve more decision problems in space n log n than in space n. The somewhat weaker analogous results for time are the time hierarchy theorems, which are important statements about time-bounded computation on Turing machines. Informally, they say that given more time, a Turing machine can solve more problems; for example, there are problems that can be solved in n² time but not in n time.

The Valiant–Vazirani theorem was proven by Leslie Valiant and Vijay Vazirani in their paper titled NP is as easy as detecting unique solutions, published in 1986. The theorem states that if there is a polynomial-time algorithm for Unambiguous-SAT, then NP = RP. The proof is based on the Mulmuley–Vazirani isolation lemma, which was subsequently used for a number of important applications in theoretical computer science.

The Sipser–Lautemann theorem (or Sipser–Gács–Lautemann theorem) states that bounded-error probabilistic polynomial time (BPP) is contained in the polynomial time hierarchy, and more specifically in Σ₂ ∩ Π₂.

Savitch's theorem, proved by Walter Savitch in 1970, gives a relationship between deterministic and nondeterministic space complexity. It states that for any function f ∈ Ω(log n), NSPACE(f(n)) ⊆ DSPACE(f(n)²); in other words, nondeterministic space can be simulated deterministically with only a quadratic blow-up.
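
The heart of Savitch's argument is a deterministic middle-first search of the machine's configuration graph. The following Python sketch is an illustrative rendering, not code from Savitch's paper; the dictionary-based graph argument and the name reach are hypothetical choices made only for this example. It shows the recursion: a path of length at most 2^k exists exactly when some midpoint splits it into two halves of length at most 2^(k-1).

def reach(graph, u, v, k):
    """Return True iff there is a path from u to v of length at most 2**k.

    graph: dict mapping each node to an iterable of its successors
    (a hypothetical adjacency-list representation used only for this sketch).
    """
    if u == v:
        return True
    if k == 0:
        # Paths of length at most 1: only a direct edge remains to check.
        return v in graph.get(u, ())
    nodes = set(graph) | {w for succs in graph.values() for w in succs}
    # A path of length <= 2**k exists iff some midpoint w splits it into
    # two halves of length <= 2**(k-1) each.
    return any(reach(graph, u, w, k - 1) and reach(graph, w, v, k - 1)
               for w in nodes)

For instance, reach({'s': ['a'], 'a': ['t']}, 's', 't', 1) returns True via the midpoint 'a'. Applied to the configuration graph of an f(n)-space nondeterministic machine, the recursion depth is O(f(n)) and each stack frame needs only O(f(n)) bits to name its configurations, which is where the O(f(n)²) deterministic space bound comes from.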
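
For concreteness, the space and time hierarchy theorems mentioned above are usually stated in the following standard textbook form, with the constructibility hypotheses that the informal statements leave implicit:

\[
f(n) = o(g(n)) \ \text{and}\ g \ \text{space-constructible} \;\Longrightarrow\; \mathrm{SPACE}(f(n)) \subsetneq \mathrm{SPACE}(g(n)),
\]
\[
f(n)\log f(n) = o(g(n)) \ \text{and}\ g \ \text{time-constructible} \;\Longrightarrow\; \mathrm{DTIME}(f(n)) \subsetneq \mathrm{DTIME}(g(n)).
\]

The examples in the text, space n log n versus n and time n² versus n, are the instances f(n) = n, g(n) = n log n and f(n) = n, g(n) = n² of these statements.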
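
The isolation lemma invoked in the Valiant–Vazirani paragraph is commonly stated as follows (one standard formulation; the weight range {1, ..., 2n} is a typical choice): if F is any nonempty family of subsets of {1, ..., n} and each element i is given an integer weight w(i) drawn independently and uniformly at random from {1, ..., 2n}, then

\[
\Pr\big[\text{the minimum-weight set in } \mathcal{F} \text{ is unique}\big] \;\geq\; \tfrac{1}{2}.
\]

In the Valiant–Vazirani setting, isolation of this kind is applied to the satisfying assignments of a Boolean formula, so that with noticeable probability exactly one assignment survives, reducing SAT to Unambiguous-SAT.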

[ "Ecology", "Quantum mechanics", "Artificial intelligence" ]
Parent Topic
Child Topic
    No Parent Topic