The distributivity law as a tool of k-NN classifiers’ aggregation: mining a cyber-attack data set

2020 
This contribution proposes a novel ensemble method that increases classification accuracy while reducing the number of base classifiers, by applying the distributivity law to aggregate the classifiers accordingly. Ensemble methods have been introduced as a useful and effective way to improve classification performance. Despite their ability to produce the highest classification accuracy, ensemble methods suffer significantly from the large number of base classifiers they require. Nevertheless, this problem can be overcome by combining some of the classifiers. Here we employ the classical version of the k-nearest-neighbor classifier (k-NN classifier). The method requires suitable aggregation operators for which either the distributivity law or one of its corresponding inequalities holds; good examples of such aggregations are averaging functions and triangular norms and conorms. The paper primarily reports the results of experiments performed on a network cyber-attack data set obtained from the UCI Machine Learning Repository.
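As a rough illustration of the kind of aggregation described above, the hedged sketch below combines the class-membership scores of several k-NN classifiers with a t-norm / t-conorm pair for which the distributivity law holds. The specific operator choice (minimum as t-norm, maximum as t-conorm), the use of scikit-learn's KNeighborsClassifier, and all names are illustrative assumptions, not the paper's exact construction.

```python
# Hypothetical sketch: aggregating k-NN classifiers' class scores with a
# t-norm / t-conorm pair satisfying the distributivity law. Operator choice
# and dataset are assumptions for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

def t_norm(a, b):      # minimum t-norm
    return np.minimum(a, b)

def t_conorm(a, b):    # maximum t-conorm
    return np.maximum(a, b)

# min distributes over max: T(x, S(y, z)) == S(T(x, y), T(x, z)),
# so classifiers combined by S can be folded into the T-aggregation
# without changing the resulting class scores.

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
knn_a = KNeighborsClassifier(n_neighbors=3).fit(X, y)
knn_b = KNeighborsClassifier(n_neighbors=7).fit(X, y)
knn_c = KNeighborsClassifier(n_neighbors=11).fit(X, y)

pa, pb, pc = (m.predict_proba(X) for m in (knn_a, knn_b, knn_c))

# Left-hand side: aggregate b and c first, then combine with a.
lhs = t_norm(pa, t_conorm(pb, pc))
# Right-hand side: distribute a over b and c, then aggregate.
rhs = t_conorm(t_norm(pa, pb), t_norm(pa, pc))

assert np.allclose(lhs, rhs)        # distributivity holds for (min, max)
predictions = lhs.argmax(axis=1)    # final ensemble decision
```

This sketch only shows the lattice case (min, max); the paper considers broader families of averaging functions and triangular norms and conorms, for which distributivity or the corresponding inequalities must be verified case by case.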