PRID: Model Inversion Privacy Attacks in Hyperdimensional Learning Systems

2021 
Hyperdimensional Computing (HDC) has been introduced as a promising solution for robust and efficient learning on resource-constrained embedded devices. Since HDC often runs in a distributed fashion, edge devices need to share their models with other parties. However, the learned model by itself may expose information about the training data, raising a serious privacy concern. This paper is the first effort to show the possibility of a model inversion attack on HDC and to provide solutions to overcome the resulting challenges. HDC performs learning tasks after mapping data points into a high-dimensional space. We first show the vulnerability of the HDC encoding module by introducing techniques that decode the high-dimensional data back to the original space. Then, we exploit this invertibility to extract the HDC model's information and reconstruct the training data simply by accessing the model. To address these privacy challenges, we propose two iterative techniques that scrutinize the HDC model from a privacy perspective: (i) intelligent noise injection, which identifies and randomizes insignificant features of the model in the original space, and (ii) model quantization, which removes the model's recoverable information while retraining the model iteratively to compensate for the possible quality loss. Our evaluation over a wide range of classification problems indicates that our solution reduces information leakage by 92% (66%) while having less than 5% (3%) impact on learning accuracy.
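To make the invertibility claim concrete, the sketch below shows one common style of HDC encoding (a random linear projection with bipolar base hypervectors) and a least-squares decoder that recovers the original features from the hypervector. This is a minimal illustration under assumed parameters (dimensions n, D and the encoder form are not taken from the paper), not the authors' exact encoding or attack procedure.

```python
import numpy as np

# Illustrative HDC encoder/decoder (assumed setup, not the paper's exact method).
rng = np.random.default_rng(0)
n, D = 8, 10_000                          # original and hyperdimensional sizes (assumed)

# Random bipolar base hypervectors, one per original feature.
B = rng.choice([-1.0, 1.0], size=(D, n))

def encode(x):
    """Map a feature vector x in R^n to a D-dimensional hypervector H = B @ x."""
    return B @ x

def decode(H):
    """Approximate inverse of the encoder via least squares: recover x from H."""
    x_hat, *_ = np.linalg.lstsq(B, H, rcond=None)
    return x_hat

x = rng.normal(size=n)        # a "private" data point
H = encode(x)                 # what an edge device would share (inside its model)
x_rec = decode(H)             # an attacker's reconstruction
print(np.allclose(x, x_rec, atol=1e-6))   # near-perfect recovery for this linear encoder
```

Because such an encoder is (approximately) invertible, any hypervector embedded in a shared model can leak the data that produced it, which is the leakage the proposed noise-injection and quantization defenses aim to remove.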