Machine Learning and Caching based Efficient Data Retrieval Framework

2020 
The explosive growth of wireless data traffic, together with rapid advances in the intelligence and processing power of user equipment (UE), poses a formidable challenge for data providers seeking to maintain high data rates with sustainable quality of service (QoS). Caching-based communication techniques can save substantial amounts of data, cutting service providers' costs and making internet connectivity more affordable. They also leave room for saving bandwidth and for using the limited number of servers and towers efficiently while delivering consistently healthy QoS. We propose an efficient data retrieval framework that caches pages based on their popularity, where a page's popularity is determined by the number of hits it receives over one month (the model's learning phase) and by how frequently the page is requested. The framework uses a causal decision tree in the background to estimate page popularity, and the algorithm then decides whether a given page is worth caching. Results show that the proposed model outperforms conventional data retrieval models in terms of cache-miss probability.
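The popularity-driven admission policy described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the causal decision tree is replaced by a simple hit-count threshold, and the class and method names (`PopularityCache`, `learn`, `request`) are hypothetical.

```python
from collections import Counter, OrderedDict

class PopularityCache:
    """Popularity-based cache admission: only pages whose hit count
    during the learning phase reaches a threshold are admitted to the
    cache. (A threshold stands in for the paper's causal decision tree.)"""

    def __init__(self, capacity, hit_threshold):
        self.capacity = capacity
        self.hit_threshold = hit_threshold
        self.hits = Counter()        # per-page hit counts from the learning phase
        self.cache = OrderedDict()   # cached pages in LRU order
        self.misses = 0
        self.requests = 0

    def learn(self, page):
        """Learning phase: count hits per page (e.g. over a month of request logs)."""
        self.hits[page] += 1

    def request(self, page):
        """Serving phase: admit a page to the cache only if it was popular enough."""
        self.requests += 1
        if page in self.cache:
            self.cache.move_to_end(page)        # refresh LRU position
            return True                         # cache hit
        self.misses += 1
        if self.hits[page] >= self.hit_threshold:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict least recently used page
            self.cache[page] = True
        return False                            # cache miss

    def miss_probability(self):
        """Fraction of requests that missed the cache."""
        return self.misses / self.requests if self.requests else 0.0
```

For example, replaying a short request trace in which page "a" dominates shows that only "a" is admitted, so repeat requests for it hit the cache while unpopular pages always miss:

```python
trace = ["a", "a", "a", "b", "a", "c", "a"]
cache = PopularityCache(capacity=2, hit_threshold=3)
for page in trace:
    cache.learn(page)       # learning phase
for page in trace:
    cache.request(page)     # serving phase
print(cache.miss_probability())  # misses only on first "a", "b", and "c"
```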