K-independent hashing

In computer science, a family of hash functions is said to be k-independent or k-universal if selecting a function at random from the family guarantees that the hash codes of any designated k keys are independent random variables (see the precise mathematical definition below). Such families allow good average-case performance in randomized algorithms and data structures, even if the input data is chosen by an adversary. The trade-offs between the degree of independence and the efficiency of evaluating the hash function are well studied, and many k-independent families have been proposed.

The goal of hashing is usually to map keys from some large domain (universe) $U$ into a smaller range, such as $m$ bins labelled $[m] = \{0, \dots, m-1\}$. In the analysis of randomized algorithms and data structures, it is often desirable for the hash codes of various keys to behave "randomly". For instance, if the hash code of each key were an independent uniform random choice in $[m]$, the number of keys per bin could be analyzed using the Chernoff bound. A deterministic hash function cannot offer any such guarantee in an adversarial setting, as the adversary may choose the keys to be precisely the preimage of a bin. Furthermore, a deterministic hash function does not allow for rehashing: sometimes the input data turns out to be bad for the hash function (e.g. there are too many collisions), so one would like to change the hash function.

The solution to these problems is to pick a function randomly from a large family of hash functions. The randomness in choosing the hash function can then be used to guarantee some desired random behavior of the hash codes of any keys of interest. The first definition along these lines was universal hashing, which guarantees a low collision probability for any two designated keys. The concept of $k$-independent hashing, introduced by Wegman and Carter in 1981, strengthens this guarantee of random behavior to families of $k$ designated keys, and adds a guarantee on the uniform distribution of hash codes.

The strictest definition, introduced by Wegman and Carter under the name "strongly universal$_k$ hash family", is the following. A family of hash functions $H = \{h : U \to [m]\}$ is $k$-independent if for any $k$ distinct keys $(x_1, \dots, x_k) \in U^k$ and any $k$ hash codes (not necessarily distinct) $(y_1, \dots, y_k) \in [m]^k$, we have

$$\Pr_{h \in H}\bigl[h(x_1) = y_1 \land \dots \land h(x_k) = y_k\bigr] = m^{-k}.$$
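For concreteness, the following is a minimal Python sketch of one classic way to realize such a family: a polynomial of degree $k-1$ whose coefficients are drawn uniformly at random from a prime field, evaluated at the key and then reduced into $m$ bins. The class name, the choice of prime, and the parameters below are illustrative assumptions, not taken from the article; note also that the final reduction modulo $m$ is exactly uniform only when $m$ divides the prime $p$, and otherwise introduces a small bias.

    import random

    class PolynomialHash:
        """Sketch of a k-independent hash family via a random degree-(k-1)
        polynomial over GF(p). Assumes keys are integers smaller than p."""

        def __init__(self, k, m, p=2**61 - 1):
            # p is a prime at least as large as the key universe;
            # 2^61 - 1 is a Mersenne prime, a common illustrative choice.
            self.p = p
            self.m = m
            # k uniformly random coefficients select one member of the
            # family; drawing fresh coefficients corresponds to rehashing.
            self.coeffs = [random.randrange(p) for _ in range(k)]

        def __call__(self, x):
            # Horner's rule: a_{k-1} x^{k-1} + ... + a_1 x + a_0  (mod p)
            acc = 0
            for a in reversed(self.coeffs):
                acc = (acc * x + a) % self.p
            return acc % self.m  # map the field element into the m bins

    # Example: one randomly chosen 3-independent function into 1024 bins.
    h = PolynomialHash(k=3, m=1024)
    print(h(42), h(12345))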

[ "Perfect hash function", "Double hashing" ]