Potential method

In computational complexity theory, the potential method is a technique used to analyze the amortized time and space complexity of a data structure, a measure of its performance over sequences of operations that smooths out the cost of infrequent but expensive operations.

In the potential method, a function Φ is chosen that maps states of the data structure to non-negative numbers. If S is a state of the data structure, Φ(S) may be thought of intuitively as an amount of potential energy stored in that state; alternatively, Φ(S) may be thought of as representing the amount of disorder in state S or its distance from an ideal state. The potential represents work that has been accounted for ("paid for") in the amortized analysis but not yet performed. The potential of the initial state, prior to any operations, is defined to be zero.

Let o be any individual operation within a sequence of operations on some data structure, with S_before denoting the state of the data structure prior to operation o and S_after denoting its state after o has completed. Once Φ has been chosen, the amortized time for operation o is defined to be

    T_amortized(o) = T_actual(o) + C · (Φ(S_after) − Φ(S_before)),

where C is a non-negative constant of proportionality (in units of time) that must remain fixed throughout the analysis. That is, the amortized time is the actual time taken by the operation plus C times the difference in potential caused by the operation. When studying asymptotic computational complexity using big O notation, constant factors are irrelevant, so the constant C is usually omitted.

Despite its artificial appearance, the total amortized time of a sequence of operations provides a valid upper bound on the actual time for the same sequence. For any sequence of operations O = o_1, o_2, …, o_n, define the total amortized time as the sum of the per-operation amortized times:

    T_amortized(O) = Σ_{i=1}^{n} T_amortized(o_i) = T_actual(O) + C · (Φ(S_final) − Φ(S_initial)).

The potential differences telescope: each intermediate state's potential is added by one operation and subtracted by the next. Since Φ(S_initial) = 0 and Φ(S_final) ≥ 0, it follows that T_amortized(O) ≥ T_actual(O).
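To make these definitions concrete, here is a minimal sketch in Python that instruments a dynamic array (doubling its capacity when full) with the standard potential Φ(S) = 2·size − capacity. The class name PotentialArray, the initial capacity of 1, and the cost model (one time unit per element written, with C = 1) are illustrative assumptions, not part of the original text.

# A minimal sketch of potential-method accounting for a dynamic array
# that doubles its capacity when full. The potential function is the
# standard choice Φ(S) = 2 * size - capacity, clamped at zero so that
# the initial empty state has potential zero. Element storage is
# omitted; only sizes and costs are tracked.

class PotentialArray:
    def __init__(self):
        self.size = 0
        self.capacity = 1  # illustrative initial capacity

    def potential(self):
        # Φ(S): prepaid work stored in the current state.
        return max(0, 2 * self.size - self.capacity)

    def append(self):
        phi_before = self.potential()
        actual_cost = 1  # one unit to write the new element
        if self.size == self.capacity:
            actual_cost += self.size  # copy every element on resize
            self.capacity *= 2
        self.size += 1
        # Amortized time = actual time + C * (Φ(S_after) - Φ(S_before)),
        # taking the constant C to be 1 in this cost model.
        return actual_cost + (self.potential() - phi_before)

arr = PotentialArray()
print([arr.append() for _ in range(16)])  # each value is at most 3

Each cheap append raises the potential by two units, prepaying for the eventual copy; when a resize occurs, the potential drops by enough to cancel the Θ(size) copying cost, so every append has amortized cost at most 3 even though its actual cost occasionally spikes, mirroring the telescoping bound above.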

[ "Algorithm", "Programming language" ]
Parent Topic
Child Topic
    No Parent Topic