Proactive Content Caching at Self-Driving Car Using Federated Learning with Edge Cloud

2021 
Proactive content caching in self-driving cars poses several challenges, particularly because of the dynamic nature of content popularity, heterogeneity in user preferences, and privacy concerns around data sharing. To tackle these issues, in this paper, we study the significance of a proactive content caching strategy in self-driving cars for optimizing content retrieval cost and quality-of-experience (QoE) with the edge cloud infrastructure. To that end, we propose a low-complexity content popularity prediction mechanism in a federated setting, where we extract local content popularity patterns in the self-driving cars using a long short-term memory (LSTM)-based prediction mechanism. Then, we leverage the privacy-preserving distributed model training paradigm of Federated Learning (FL), applying the Federated Averaging (FedAvg) algorithm to the local LSTM models to build a regional content popularity prediction model. With extensive simulations on real-world datasets, we show that the resulting global model improves the local cache hit ratio and cache space utilization, and correspondingly reduces latency overhead at the self-driving cars.
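The aggregation step described above can be sketched as follows. This is a minimal, hypothetical illustration of FedAvg, not the paper's implementation: each car contributes its local LSTM weights and local sample count, and the server forms a sample-weighted average of the parameters (the `fedavg` function and the toy weight shapes are assumptions for illustration).

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: average per-client parameter lists, weighted by sample count."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Two hypothetical cars, each holding local LSTM parameters
# (shapes are illustrative placeholders, not a real LSTM layout).
car_a = [np.ones((4, 4)), np.zeros(4)]          # 100 local samples
car_b = [np.full((4, 4), 3.0), np.full(4, 2.0)]  # 300 local samples
global_w = fedavg([car_a, car_b], client_sizes=[100, 300])
# First parameter: 0.25 * 1 + 0.75 * 3 = 2.5 in every entry
```

In a full FL round, the server would broadcast `global_w` back to the cars for the next round of local LSTM training, so raw request traces never leave the vehicles.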