
Algorithmic efficiency

In computer science, algorithmic efficiency is a property of an algorithm which relates to the amount of computational resources used by the algorithm. An algorithm must be analyzed to determine its resource usage, and the efficiency of an algorithm can be measured based on the usage of different resources. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process. For maximum efficiency we wish to minimize resource usage. However, different resources such as time and space cannot be compared directly, so which of two algorithms is considered more efficient often depends on which measure of efficiency is considered most important. For example, bubble sort and timsort are both algorithms to sort a list of items from smallest to largest. Bubble sort sorts the list in time proportional to the number of elements squared (O(n²), see Big O notation), but only requires a small amount of extra memory which is constant with respect to the length of the list (O(1)). Timsort sorts the list in time linearithmic (proportional to a quantity times its logarithm) in the list's length (O(n log n)), but has a space requirement linear in the length of the list (O(n)). If large lists must be sorted at high speed for a given application, timsort is a better choice; however, if minimizing the memory footprint of the sorting is more important, bubble sort is a better choice (see the first sketch below).

The importance of efficiency with respect to time was emphasised by Ada Lovelace in 1843 as applying to Charles Babbage's mechanical analytical engine. Early electronic computers were severely limited both by the speed of operations and the amount of memory available. In some cases it was realized that there was a space–time trade-off, whereby a task could be handled either by using a fast algorithm which used quite a lot of working memory, or by using a slower algorithm which used very little working memory (see the second sketch below). The engineering trade-off was then to use the fastest algorithm which would fit in the available memory. Modern computers are significantly faster than the early computers, and have a much larger amount of memory available (gigabytes instead of kilobytes). Nevertheless, Donald Knuth emphasised that efficiency is still an important consideration.

An algorithm is considered efficient if its resource consumption, also known as computational cost, is at or below some acceptable level. Roughly speaking, "acceptable" means: it will run in a reasonable amount of time or space on an available computer, typically as a function of the size of the input. Since the 1950s computers have seen dramatic increases in both the available computational power and the available amount of memory, so currently acceptable levels would have been unacceptable even 10 years ago. In fact, thanks to the approximate doubling of computer power every 2 years, tasks that are acceptably efficient on modern smartphones and embedded systems may have been unacceptably inefficient for industrial servers 10 years ago. Computer manufacturers frequently bring out new models, often with higher performance.
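To make the bubble sort/timsort comparison concrete, here is a minimal sketch in Python (a language choice not made by the article; CPython's built-in sorted() is based on timsort). The measure() harness, the use of tracemalloc to approximate extra memory, and the 5,000-element test list are all illustrative assumptions.

```python
import random
import time
import tracemalloc

def bubble_sort(a):
    """Sort a list in place: O(n^2) comparisons, O(1) extra memory."""
    n = len(a)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:          # no swaps means the list is sorted
            break

def measure(sort_fn, data):
    """Time one sorting run and report peak memory allocated during it."""
    work = list(data)            # fresh copy, made before tracing starts
    tracemalloc.start()
    start = time.perf_counter()
    sort_fn(work)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak

data = [random.random() for _ in range(5000)]
for name, fn in (("bubble sort", bubble_sort), ("timsort", sorted)):
    secs, peak = measure(fn, data)
    print(f"{name:>11}: {secs:.3f} s, peak extra memory ~{peak:,} bytes")
```

On a list this size, bubble sort allocates almost nothing but takes far longer, while sorted() finishes quickly at the cost of building an O(n) result list: exactly the trade-off described above.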
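The space–time trade-off noted above can be sketched the same way: the same function computed either with almost no extra memory but much more time, or quickly at the cost of caching every intermediate result. The Fibonacci example below is a hypothetical illustration, not drawn from the article.

```python
from functools import lru_cache

def fib_slow(n):
    """Very little working memory, but exponential time:
    every subproblem is recomputed from scratch."""
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)         # trade memory for speed
def fib_fast(n):
    """Linear time, but the cache holds every intermediate
    result for the life of the program."""
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

# fib_slow(32) makes millions of recursive calls; fib_fast(32)
# computes each of the 33 values once and looks the rest up.
print(fib_slow(32), fib_fast(32))
```

The engineering question, then as now, is whether the faster version's extra memory fits in what is available.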
Software costs can be quite high, so in some cases the simplest and cheapest way of getting higher performance might simply be to buy a faster computer, provided it is compatible with the existing one.

[ "Encoding (memory)", "Coding (social sciences)", "encoder complexity" ]
Parent Topic
Child Topic
    No Parent Topic