Polar Transformation on Image Features for Orientation-Invariant Representations

2019 
The choice of image feature representation plays a crucial role in the analysis of visual information. Although many robust feature representation models have been proposed to improve performance on different visual tasks, most existing representations [e.g., handcrafted features or convolutional neural networks (CNNs)] have limited capacity to capture orientation-invariant (rotation/reversal) features. The net consequence is suboptimal performance on visual tasks. To address this problem, this study adopts a transformational approach that investigates the potential of polar feature representations. At the low level, a histogram of oriented gradients is binned into annular spatial cells applied to the polar gradient, which makes the gradient binning invariant to orientation during feature extraction. In this way, the descriptors gain significantly enhanced orientation invariance. The proposed feature representation, called orientation-invariant histograms of oriented gradients, accurately handles visual tasks such as facial expression recognition. In the context of the CNN architecture, we propose two polar convolution operations, referred to as full polar convolution and local polar convolution, and use them to build polar CNN architectures for orientation-invariant representation. Experimental results show that the proposed orientation-invariant image representation, based on polar models for both handcrafted and deep learning features, is competitive with state-of-the-art methods while maintaining a compact representation on a set of challenging benchmark image datasets.
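The annular polar binning idea can be pictured with a short sketch. The snippet below is a minimal illustration, not the paper's implementation: it assumes a square grayscale patch, measures each gradient orientation relative to the radial direction from the patch centre, and accumulates gradient magnitudes into ring-shaped spatial cells, so that rotating the patch leaves the histogram essentially unchanged. The names `polar_hog`, `n_rings`, and `n_bins` are hypothetical.

```python
# Minimal NumPy sketch of orientation-relative gradient binning into
# annular (ring-shaped) cells. Illustrative only; the paper's exact
# descriptor (OI-HOG) may differ in binning, weighting, and normalisation.
import numpy as np

def polar_hog(patch, n_rings=4, n_bins=9):
    h, w = patch.shape
    gy, gx = np.gradient(patch.astype(float))        # image gradients
    mag = np.hypot(gx, gy)
    grad_angle = np.arctan2(gy, gx)

    # Polar coordinates of every pixel about the patch centre.
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radius = np.hypot(xx - cx, yy - cy)
    radial_angle = np.arctan2(yy - cy, xx - cx)

    # Gradient orientation relative to the radial direction: a global
    # rotation of the patch leaves this relative angle unchanged.
    rel_angle = np.mod(grad_angle - radial_angle, 2 * np.pi)

    # Annular spatial bins (rings) and relative-orientation bins.
    ring_idx = np.minimum((radius / (radius.max() + 1e-9) * n_rings).astype(int),
                          n_rings - 1)
    ori_idx = np.minimum((rel_angle / (2 * np.pi) * n_bins).astype(int),
                         n_bins - 1)

    hist = np.zeros((n_rings, n_bins))
    np.add.at(hist, (ring_idx, ori_idx), mag)          # magnitude-weighted votes
    return (hist / (np.linalg.norm(hist) + 1e-9)).ravel()
```

For the CNN side, one common way to realise a polar convolution, and a plausible reading of the full/local polar operations described above, is to resample the feature map onto a (radius, angle) grid so that an input rotation becomes a circular shift along the angle axis, and then convolve with wrap-around padding on that axis. The sketch below follows this assumption; `polar_resample` and `PolarConv` are illustrative names, not the paper's API.

```python
# Hedged PyTorch sketch of a polar convolution: warp to polar coordinates,
# then convolve with circular padding along the angular axis.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def polar_resample(x, n_r=32, n_theta=64):
    """Bilinearly resample an NCHW feature map onto a (radius, angle) grid."""
    n = x.shape[0]
    r = torch.linspace(0.0, 1.0, n_r)
    t = torch.linspace(0.0, 2 * math.pi, n_theta + 1)[:-1]
    rr, tt = torch.meshgrid(r, t, indexing="ij")
    grid = torch.stack([rr * torch.cos(tt), rr * torch.sin(tt)], dim=-1)
    grid = grid.unsqueeze(0).expand(n, -1, -1, -1)      # normalised [-1, 1] coords
    return F.grid_sample(x, grid, align_corners=True)   # N x C x n_r x n_theta

class PolarConv(nn.Module):
    """3x3 convolution on the polar grid, circular along the angle axis."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=0)

    def forward(self, x_polar):
        x = F.pad(x_polar, (1, 1, 0, 0), mode="circular")  # wrap angle axis
        x = F.pad(x, (0, 0, 1, 1), mode="reflect")         # pad radius axis
        return self.conv(x)

# A rotation of the input appears as a shift along the last (angle) axis,
# so pooling over that axis yields a rotation-invariant response.
feat = torch.randn(2, 16, 64, 64)
invariant = PolarConv(16, 32)(polar_resample(feat)).amax(dim=-1)
```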