Federated intelligence of anomaly detection agent in IoTMD-enabled Diabetes Management Control System

2022 
Abstract Implantable internet of things medical devices (IoTMD) have driven a disruptive transformation in the healthcare domain. They have improved the services of healthcare providers in delivering patient care and, even better, have helped individuals suffering from chronic conditions to self-manage them. Patients diagnosed with diabetes are among the leading beneficiaries of IoTMD, which assist them in maintaining their blood glucose level within a normal range. However, the security defenses of these systems against potential cyberthreats are currently underdeveloped. Such threats must not be ignored, as they can put patients' lives in danger. Thus, this paper proposes a deep learning (DL)-based anomaly detection system composed of estimation and classification models, applied to a healthcare subdomain referred to as the Diabetes Management Control System (DMCS). The estimation model forecasts the glucose level of patients at each evaluation time step, whereas the classification model detects anomalous data points. This paper implements convolutional neural network (CNN) and multilayer perceptron (MLP) algorithms for comparison. Moreover, because the dataset contains sensitive physiological information about the patients, this paper implements independent learning (IL) and federated learning (FL) methods to preserve user data privacy. Furthermore, the post-quantization compression technique was applied to convert the models into lightweight versions, mitigating the computationally demanding processes of DL. Based on the experimental results, the FL method showed a higher recall rate (≥ 98.69%) than the IL method (97.87%). Additionally, the FL-aided CNN-based anomaly detection system performs better than the MLP-based approach: the former achieved an average recall rate of 99.24%, whereas the latter attained only 98.69%.
Without loss of recall, the inference latency of the models was reduced dramatically, from more than 300 ms to less than 2 ms, when the original models were converted to their lightweight versions.
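The federated learning setup summarized in the abstract typically aggregates locally trained models without sharing raw patient data. A minimal sketch of such aggregation, assuming a federated-averaging (FedAvg-style) scheme in which each client contributes per-layer weight arrays and a local sample count (the function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Sample-weighted average of per-client model weights.

    client_weights: list (one entry per client) of lists of np.ndarray,
                    one array per model layer
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    global_weights = []
    for layer in range(n_layers):
        # Weight each client's layer by its share of the total data.
        avg = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        global_weights.append(avg)
    return global_weights

# Two hypothetical clients with a single-layer model.
c1 = [np.array([1.0, 2.0])]
c2 = [np.array([3.0, 4.0])]
global_w = fed_avg([c1, c2], client_sizes=[1, 3])
print(global_w[0])  # [2.5 3.5]
```

In a full round, the server would broadcast `global_w` back to all clients for the next local training epoch; only model parameters, never the sensitive glucose readings, cross the network.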
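The lightweight models mentioned above come from quantizing trained weights to low-precision integers. A minimal sketch of affine int8 post-training quantization of a single weight tensor, assuming a simple min–max calibration (this is an illustration of the general technique, not the paper's exact pipeline):

```python
import numpy as np

def quantize_int8(w):
    """Affine quantization of a float32 tensor to int8 via min-max range."""
    scale = (w.max() - w.min()) / 255.0 or 1.0   # guard against constant tensors
    zero_point = int(np.round(-128 - w.min() / scale))
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximation of the original float tensor."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.linspace(-1.0, 1.0, 8, dtype=np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
assert q.nbytes == w.nbytes // 4             # int8 storage is 4x smaller
assert np.max(np.abs(w - w_hat)) <= s        # error within one quantization step
```

The 4x storage reduction and cheaper integer arithmetic are what make the large latency drop (hundreds of milliseconds down to a few) plausible on resource-constrained medical devices, at the cost of a bounded rounding error per weight.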