Fire images classification based on a handcraft approach

2023 
In recent years, wildfires and forest fires have ravaged millions of hectares of forest worldwide. Recent technological breakthroughs have increased interest in computer vision-based fire classification, which classifies fire and non-fire pixels in image or video datasets. Fire pixels in an image or video can be classified using either a traditional machine learning approach or a deep learning approach. At present, deep learning is the mainstream approach in forest fire detection studies. Although deep learning algorithms can handle vast amounts of data, they ignore the variation in complexity among training samples, and as a result their training performance is limited. Furthermore, deep learning approaches perform poorly in real-world challenging fire scenarios when data and features are scarce. The current study therefore adopts a machine learning technique to extract higher-order features from images in two publicly available datasets, the Corsican dataset and FLAME, and a private dataset, Firefront_Gestosa, for classifying fire and non-fire pixels. It should be emphasized that, in machine learning applications, handling multidimensional data to train a model is challenging. Feature selection is used to overcome this problem by removing redundant or irrelevant data that degrades the model's performance. In this paper, information-theoretic feature selection approaches are used to choose the most important features for classification while minimizing the computational cost. A traditional machine learning classifier, the Support Vector Machine (SVM), is adopted in the present work; it operates on the discriminative features selected by the feature selection technique. The SVM performs the classification of fire and non-fire pixels with a Radial Basis Function (RBF) kernel, and the model's performance is measured using assessment measures such as overall accuracy, sensitivity, specificity, precision, recall, F-measure, and G-mean. The model achieves an overall accuracy of 96.21%, a sensitivity of 94.42%, a specificity of 97.99%, a precision of 97.91%, a recall of 94.42%, and F-measure and G-mean values of 96.13% and 96.19%, respectively.
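
The abstract describes a pipeline of handcrafted feature extraction, information-theoretic feature selection, and RBF-kernel SVM classification evaluated with confusion-matrix measures. The following is a minimal sketch of that pipeline, assuming scikit-learn; the synthetic data, the choice of k = 10 retained features, and the SVM hyperparameters are illustrative assumptions, not the paper's actual configuration or handcrafted features.

```python
# Hypothetical sketch of the described pipeline: mutual-information feature
# selection followed by an RBF-kernel SVM. Handcrafted per-pixel features
# (e.g., colour/texture descriptors from the Corsican, FLAME and
# Firefront_Gestosa images) are replaced here by synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

# Stand-in for the handcrafted feature matrix and fire/non-fire labels.
X, y = make_classification(n_samples=2000, n_features=40,
                           n_informative=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Information-theoretic feature selection: keep the k features with the
# highest mutual information with the fire/non-fire label.
selector = SelectKBest(mutual_info_classif, k=10)
X_train_sel = selector.fit_transform(X_train, y_train)
X_test_sel = selector.transform(X_test)

# Standardise the selected features, then train an SVM with an RBF kernel.
scaler = StandardScaler().fit(X_train_sel)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(scaler.transform(X_train_sel), y_train)

# Assessment measures reported in the paper, derived from the confusion matrix.
tn, fp, fn, tp = confusion_matrix(
    y_test, clf.predict(scaler.transform(X_test_sel))).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                    # identical to recall
specificity = tn / (tn + fp)
precision   = tp / (tp + fp)
f_measure   = 2 * precision * sensitivity / (precision + sensitivity)
g_mean      = (sensitivity * specificity) ** 0.5
print(f"acc={accuracy:.4f} sens={sensitivity:.4f} spec={specificity:.4f} "
      f"prec={precision:.4f} F={f_measure:.4f} G={g_mean:.4f}")
```

Note that the G-mean is the geometric mean of sensitivity and specificity, so it rewards balanced performance across the fire and non-fire classes rather than accuracy on the majority class alone.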