Garbage In, Garbage Out: Poisoning Attacks Disguised With Plausible Mobility in Data Aggregation

2021 
Location data are often aggregated to support a wide range of applications, such as mobility management, point-of-interest recommendation, and map inference. However, these aggregated results may suffer from poisoning attacks when adversaries deliberately send poisoned locations to the aggregator: garbage in, garbage out. Most existing work focuses on poisoning attacks in anomaly detection, crowd sensing, recommendation, and machine learning models. One observation motivating our work is that poisoning attacks in the context of data aggregation introduce new challenges, along with distinctive features, compared to anomaly detection, recommendation systems, and machine learning models. As such, in this paper, we concentrate on the inputs to data aggregation and introduce the first optimal attack framework that launches poisoning attacks on location data aggregation (PALDA) and disguises the attacking behaviors. Specifically, we first formalize the poisoning attack as a min-max optimization problem and then design an iterative algorithm to solve it. Moreover, we theoretically prove the convergence of the proposed optimization algorithm and extend the attack strategy optimization to more practical settings, where PALDA is applicable to any linear decomposable aggregation model executed by an aggregator. Finally, simulations on six real-world mobility datasets demonstrate the risks that poisoning attacks pose to data aggregation.
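To make the threat concrete, the following is a minimal illustrative sketch (not the paper's PALDA algorithm) of poisoning a linear decomposable aggregate. Here the aggregate is a simple coordinate-wise mean of reported locations, and "plausible mobility" is approximated by a disc of assumed center and radius within which poisoned points must lie; all names and the projection step are assumptions for illustration.

```python
import numpy as np

def poison_mean_aggregate(honest, target, n_poison, center, radius):
    """Craft n_poison locations that pull the mean aggregate toward
    `target` while staying inside a plausibility disc (center, radius).

    honest : (m, 2) array of genuine location reports
    target : (2,) adversary's desired aggregate
    Returns an (n_poison, 2) array of poisoned reports.
    """
    n = len(honest) + n_poison
    # The poison value that would move the mean exactly onto `target`
    # if plausibility were not a constraint (solve mean equation for it):
    ideal = (n * np.asarray(target) - honest.sum(axis=0)) / n_poison
    # Disguise step: project the ideal poison point back onto the
    # plausibility disc so it resembles realistic mobility.
    offset = ideal - np.asarray(center)
    dist = np.linalg.norm(offset)
    if dist > radius:
        ideal = center + offset * (radius / dist)
    return np.tile(ideal, (n_poison, 1))

honest = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
target = np.array([5.0, 5.0])
poison = poison_mean_aggregate(honest, target, n_poison=2,
                               center=np.array([2.0, 2.0]), radius=3.0)
clean_mean = honest.mean(axis=0)
poisoned_mean = np.vstack([honest, poison]).mean(axis=0)
# The poisoned aggregate lands closer to the adversary's target,
# yet every injected point stays within the plausible region.
```

Even this toy version shows the tension the min-max formulation captures: the adversary maximizes aggregate distortion while the plausibility constraint (here, the disc projection) keeps the injected inputs hard to distinguish from genuine mobility.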