RLDRM: Closed Loop Dynamic Cache Allocation with Deep Reinforcement Learning for Network Function Virtualization.

2020 
Network function virtualization (NFV) technology has attracted tremendous interest from the telecommunications industry and data center operators, as it allows service providers to assign resources to Virtual Network Functions (VNFs) on demand, achieving better flexibility, programmability, and scalability. To improve server utilization, one popular practice is to deploy best-effort (BE) workloads alongside high-priority (HP) VNFs when the HP VNFs' resource usage is detected to be low. The key challenge of this deployment scheme is to dynamically balance the service level objective (SLO) and the total cost of ownership (TCO) to optimize data center efficiency under inherently fluctuating workloads. With the recent advancement of deep reinforcement learning, we conjecture that it has the potential to solve this challenge by adaptively adjusting resource allocation to achieve improved performance and higher server utilization. In this paper, we present RLDRM (Reinforcement Learning Dynamic Resource Management), a closed-loop automation system that dynamically adjusts Last Level Cache allocation between HP VNFs and BE workloads using deep reinforcement learning. The results demonstrate improved server utilization while maintaining the required SLO for the HP VNFs.
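
The abstract describes a closed loop in which a learned policy observes workload telemetry and reallocates Last Level Cache ways between HP VNFs and BE workloads. The sketch below is a minimal, hypothetical illustration of that loop, not the paper's implementation: the telemetry source, resctrl group path, way count, and the simple threshold policy (standing in for the DRL agent) are all assumptions made for illustration. LLC partitioning is shown via the Linux resctrl interface (Intel Cache Allocation Technology), which accepts a capacity bitmask written to a group's schemata file.

```python
# Hypothetical sketch of the closed-loop cache-allocation idea from the abstract.
# Telemetry, group names, and the policy are illustrative placeholders only.
import random

TOTAL_WAYS = 11  # assumed number of LLC ways on the target CPU


def read_metrics():
    """Placeholder: return (hp_p99_latency_us, be_throughput_mbps) from telemetry."""
    return random.uniform(50, 200), random.uniform(100, 1000)


def apply_llc_allocation(hp_ways, group="/sys/fs/resctrl/hp_vnf"):
    """Give the HP group the lowest `hp_ways` ways via a resctrl capacity bitmask;
    the BE group would be assigned the remaining ways in a second schemata write."""
    mask = (1 << hp_ways) - 1
    with open(f"{group}/schemata", "w") as f:
        f.write(f"L3:0={mask:x}\n")


def policy(latency_us, hp_ways, slo_us=100.0):
    """Toy stand-in for the DRL agent: grow the HP share when the SLO is at risk,
    shrink it (freeing cache for BE workloads) when there is headroom."""
    if latency_us > slo_us:
        return min(TOTAL_WAYS - 1, hp_ways + 1)
    return max(1, hp_ways - 1)


hp_ways = 6
for step in range(10):  # closed-loop control iterations
    latency, throughput = read_metrics()
    hp_ways = policy(latency, hp_ways)
    # apply_llc_allocation(hp_ways)  # requires root and a mounted resctrl filesystem
    print(f"step={step} p99={latency:.0f}us hp_ways={hp_ways}")
```

In the paper's setting, a deep reinforcement learning agent would replace the threshold policy, taking the observed metrics as state and the cache partition as action, with a reward that trades off SLO compliance against BE throughput.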