A differential excitation based rotational invariance for convolutional neural networks

2016 
Deep Learning (DL) methods extract complex sets of features using architectures composed of hierarchical layers. The features so learned have high discriminative power and thus represent the network's input efficiently. Convolutional Neural Networks (CNNs), one such deep learning architecture, extract structural features with some invariance to small translations, scaling, and other forms of distortion. In this paper, the learning capabilities of CNNs are explored with the aim of improving the rotational invariance of the architecture. We propose a new CNN architecture, called RICNN, with an additional layer formed by differential excitation against distance to improve rotational invariance. We show that the proposed method achieves superior invariance to rotation compared with the original CNN architecture (training samples with different orientations are not used), without disturbing the invariance to small translations, scaling, and other forms of distortion. Training time, testing time, and accuracy are evaluated at different percentages of training data to compare the proposed configuration with the original one.
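
The abstract does not spell out the form of the differential-excitation layer. Below is a minimal PyTorch sketch, assuming the layer follows the Weber Local Descriptor style differential excitation, xi = arctan(sum_i (x_i - x_c) / x_c), computed over a 3x3 neighbourhood; because the neighbourhood differences are summed rather than ordered, the response does not depend on the neighbourhood's orientation, which is the rotation-invariance property at stake. The class name DifferentialExcitation, the eps constant, and the PyTorch framing are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentialExcitation(nn.Module):
    """Sketch of a WLD-style differential-excitation layer (assumed form)."""

    def __init__(self, eps=1e-6):
        super().__init__()
        self.eps = eps  # avoids division by zero at dark pixels
        # Fixed 3x3 kernel: sum of the 8 neighbours minus 8x the centre,
        # i.e. sum_i (x_i - x_c). Summing over the whole neighbourhood
        # makes the response independent of its orientation.
        k = torch.ones(1, 1, 3, 3)
        k[0, 0, 1, 1] = -8.0
        self.register_buffer("kernel", k)

    def forward(self, x):
        # Apply the same fixed kernel to every channel independently.
        c = x.shape[1]
        kernel = self.kernel.expand(c, 1, 3, 3)
        diff = F.conv2d(x, kernel, padding=1, groups=c)
        # Differential excitation, bounded in (-pi/2, pi/2).
        return torch.atan(diff / (x + self.eps))

As a usage example, layer = DifferentialExcitation(); y = layer(torch.rand(1, 1, 28, 28)) produces a feature map of the same spatial size, which could feed the subsequent convolutional layers of the RICNN-style network.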