StegEdge: Privacy protection of unknown sensitive attributes in edge intelligence via deception

2022 
Due to the limited capabilities of user devices such as smartphones and Internet of Things (IoT) devices, edge intelligence is emerging as a promising paradigm for analyzing the data generated by these devices with complex artificial intelligence (AI) models. It typically entails fully or partially offloading the computation of neural networks from user devices to edge computing servers. To protect users’ data privacy in this process, most existing research assumes that the private (sensitive) attributes of user data are known in advance when privacy-protection measures are designed. This assumption is restrictive in practice and limits the applicability of these methods. Inspired by research in image steganography and cyber deception, in this paper we propose StegEdge, a conceptually novel approach to this challenge. StegEdge takes as input the user-generated image and a randomly selected “cover” image that poses no privacy concern (e.g., one downloaded from the Internet), and extracts features such that utility tasks can still be performed by the edge computing servers, while adversaries seeking to reconstruct the original user data or infer sensitive attributes from the features sent to the server will largely recover information about the cover image. Users’ data privacy is thus protected through a form of deception. Empirical results on the CelebA and ImageNet datasets show that, at the same level of utility-task accuracy, StegEdge reduces adversaries’ accuracy in predicting sensitive attributes by up to 38% compared with other methods, while also defending against adversaries that attempt to reconstruct user data from the extracted features.
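To illustrate the split-computation pipeline the abstract describes, the following is a toy, untrained NumPy sketch. All names, dimensions, and the linear "extractor" are hypothetical illustrations of the data flow only (device fuses the user image with a cover image into features; the server runs the utility task on those features); the actual StegEdge system uses trained deep networks on CelebA/ImageNet-scale images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (the paper uses CNNs on full-size images).
IMG = 32 * 32 * 3   # flattened image size
FEAT = 64           # size of the transmitted feature vector

# Hypothetical linear "feature extractor" run on the user device.
# It fuses the user image with a randomly chosen cover image, so the
# transmitted features support the utility task while reconstruction
# attacks would mostly recover cover-image content.
W_user = rng.normal(scale=0.01, size=(FEAT, IMG))
W_cover = rng.normal(scale=0.1, size=(FEAT, IMG))

def extract_features(user_img, cover_img):
    """Device side: fuse user and cover images into one feature vector."""
    return W_user @ user_img + W_cover @ cover_img

# Hypothetical utility head, run on the edge server, seeing only features.
W_util = rng.normal(size=(10, FEAT))

def utility_predict(features):
    """Server side: predict a utility label (e.g., a 10-class task)."""
    logits = W_util @ features
    return int(np.argmax(logits))

user_img = rng.random(IMG)          # private image from the user device
cover_img = rng.random(IMG)         # arbitrary, non-sensitive cover image

feats = extract_features(user_img, cover_img)
label = utility_predict(feats)
```

Note that only `feats` ever leaves the device; the deception property in the paper comes from training the extractor so that inverting `feats` yields something close to `cover_img` rather than `user_img`, which this untrained sketch does not attempt.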