Data Poisoning Attacks against MRMR

2019 
Many machine learning models are designed without considering that an adversary can alter the data at training or test time. Over the past decade, the vulnerability of machine learning models has been a growing concern, and more secure algorithms are needed. Unfortunately, the security of feature selection (FS) remains an under-explored area. Only a few works address data poisoning attacks targeted at embedded FS, and data poisoning techniques targeted at information-theoretic FS do not yet exist. In this contribution, a novel data poisoning algorithm is proposed that targets failures in minimum Redundancy Maximum Relevance (mRMR) feature selection. We demonstrate that mRMR can easily be poisoned into selecting features that would not otherwise have been selected.
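To make the attack surface concrete, the sketch below shows the standard greedy mRMR selection rule that such a poisoning attack would target: at each step, pick the feature maximizing relevance to the label minus mean redundancy with already-selected features. This is not the paper's poisoning algorithm; the function name, the use of scikit-learn's mutual-information estimators, and the assumption of a continuous feature matrix with discrete class labels are illustrative choices, not details from the paper.

```python
# Minimal greedy mRMR sketch (assumed setup: continuous features X, discrete labels y).
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr(X, y, k):
    """Greedily select k features maximizing I(f_j; y) - mean_s I(f_j; f_s)."""
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y)  # estimated I(f_j; y) for each feature
    selected, remaining = [], list(range(n_features))
    while len(selected) < k and remaining:
        scores = []
        for j in remaining:
            if selected:
                # Mean estimated I(f_j; f_s) over the already-selected features.
                redundancy = np.mean(
                    [mutual_info_regression(X[:, [j]], X[:, s])[0] for s in selected]
                )
            else:
                redundancy = 0.0
            scores.append(relevance[j] - redundancy)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

# Example usage: selected = mrmr(X_train, y_train, k=10)
```

Because each step is an argmax over estimated relevance-minus-redundancy scores, an attacker who can inject crafted training points can perturb the mutual-information estimates enough to flip which feature wins a step, which is consistent with the failure mode the abstract describes.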