Tensor Quantization: High-Dimensional Data Compression

2022 
Quantization is an important technique that transforms input sample values from a large set (or a continuous range) into output sample values in a small set (or a finite set). It has been applied broadly to lossy data compression, pattern recognition, probability density estimation, and clustering. Vector quantization (VQ) is a prevalent image-compression technique that treats image matrices as stretched vectors and then finds representative stretched vectors for a given image data set. One can instead use a tensor data representation to characterize the original two-dimensional image data directly, rather than stretching the image matrix into a long vector and thereby destroying the original two-dimensional data structure. In this work, we propose a new tensor quantization (TQ) framework that neither reduces the dimensionality of the original image data nor destroys the original two-dimensional spatial relationships among the data; these are two well-known serious drawbacks of vector quantization. We first present the relevant tensor calculus and then propose a new parallel tensor-inversion algorithm for TQ based upon it. We also establish the pertinent theoretical proof that our proposed TQ approach is superior to the existing VQ approach, especially as the image dimension becomes large. Finally, numerical experiments evaluating the image-compression performance of VQ and TQ are presented, and their corresponding computational complexities are compared.
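To make the baseline concrete, the sketch below illustrates the conventional VQ pipeline that the abstract describes: each image is cut into small blocks, every block is stretched into a vector, and a codebook of representative vectors is learned (here via standard Lloyd's k-means). This is only an illustration of the VQ approach whose drawbacks motivate TQ; the paper's tensor-calculus and parallel tensor-inversion algorithm are not reproduced here, and the function names (`extract_patches`, `train_vq_codebook`) and parameter choices are hypothetical.

```python
import numpy as np

def extract_patches(image, p):
    """Split a 2-D image into non-overlapping p x p blocks (dropping any remainder)."""
    h, w = image.shape
    h, w = h - h % p, w - w % p
    blocks = image[:h, :w].reshape(h // p, p, w // p, p).swapaxes(1, 2)
    return blocks.reshape(-1, p, p)

def train_vq_codebook(patches, k, iters=20, seed=0):
    """Classic VQ: stretch each p x p block into a vector, then run Lloyd's k-means.

    Note how the reshape to (n_blocks, p*p) discards the 2-D spatial structure;
    this is exactly the step that tensor quantization is designed to avoid.
    """
    rng = np.random.default_rng(seed)
    vecs = patches.reshape(len(patches), -1).astype(np.float64)
    codebook = vecs[rng.choice(len(vecs), k, replace=False)]
    for _ in range(iters):
        # assign each stretched vector to its nearest codeword (squared Euclidean distance)
        d = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # update each codeword as the mean of its assigned vectors
        for j in range(k):
            members = vecs[labels == j]
            if len(members):
                codebook[j] = members.mean(0)
    return codebook, labels

if __name__ == "__main__":
    # synthetic 64 x 64 "image" for demonstration only
    image = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(np.float64)
    patches = extract_patches(image, p=4)                 # 4 x 4 blocks
    codebook, labels = train_vq_codebook(patches, k=16)   # 16-entry codebook
    reconstructed = codebook[labels].reshape(-1, 4, 4)
    mse = np.mean((patches - reconstructed) ** 2)
    print(f"codebook shape: {codebook.shape}, reconstruction MSE: {mse:.2f}")
```

In this baseline, the reconstruction quality and the codebook-search cost both scale with the stretched-vector length p*p, which is one reason the abstract argues that operating on the blocks directly as tensors becomes advantageous as the image dimension grows.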