Automatic Visual Features for Writer Identification: A Deep Learning Approach

2019 
Identifying a person from their handwriting is a long-standing and challenging problem, with well-established applications in domains such as forensic analysis, historical documents, and ancient manuscripts. Deep learning approaches have proved to be powerful feature extractors on massive amounts of heterogeneous data and yield stronger pattern predictions than traditional handcrafted approaches. We apply a deep transfer convolutional neural network (CNN) to identify writers from handwritten text-line images in English and Arabic. We evaluate how freezing different CNN layers (Conv3, Conv4, Conv5, Fc6, Fc7, and a fusion of Fc6 and Fc7) affects the writer identification rate. Transfer learning is applied, as a pioneering study, from ImageNet (base dataset) to the QUWI dataset (target dataset). To reduce the risk of over-fitting, data augmentation techniques such as contours, negatives, and sharpening are applied to the text-line images of the target dataset. A sliding-window approach generates patches that serve as the input unit to the CNN. The AlexNet architecture extracts discriminating visual features from multiple representations of the image patches produced by the enhanced pre-processing techniques, and the extracted features are then fed to a support vector machine (SVM) classifier. The highest accuracies are achieved with the frozen Conv5 layer: 92.78% on English, 92.20% on Arabic, and 88.11% on the combination of Arabic and English.