Service Function Chaining in NFV-Enabled Edge Networks with Natural Actor-Critic Deep Reinforcement Learning

2021 
In this paper, by exploiting the natural policy gradient-based actor-critic framework, we study service function chaining in network function virtualization (NFV)-enabled edge networks. First, a long-run service function chaining problem is formulated to minimize the end-to-end service latency, involving not only server and wired link resources but also radio resources in wireless links; a Markov decision process (MDP) model is then leveraged to capture the dynamics of both server and radio resources, and the transition probability over the state space is explicitly derived. Second, a natural actor-critic framework is presented, which uses the natural policy gradient to train the deep neural network (DNN), thereby avoiding being trapped in local optima. In particular, to overcome the high dimensionality of the action space, we further resort to an integer linear programming (ILP) formulation, reducing the space size from cubic to linear. Finally, simulations are conducted to demonstrate the effectiveness of the proposed approach, revealing that latency minimization benefits from learning not only the service function chain (SFC) routing across edge servers but also the radio resource allocation in wireless links.
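To illustrate the natural policy gradient idea behind the actor-critic framework described above, the following is a minimal sketch on a toy MDP with latency-like costs. It is not the paper's method or environment: the state/action sizes, random transition model, tabular critic, and step sizes are illustrative assumptions, and a softmax table stands in for the DNN actor. The key step it shows is preconditioning the policy gradient with an (empirical) Fisher information matrix, i.e., updating with F^{-1} g instead of g.

```python
# Hypothetical sketch: natural-policy-gradient actor-critic on a toy MDP
# whose rewards are negative latencies. All constants and the environment
# are assumptions for illustration, not the paper's SFC model.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS, GAMMA = 8, 4, 0.95

# Toy MDP: random transition probabilities P[s, a, s'] and latency-like costs.
P = rng.dirichlet(np.ones(N_STATES), size=(N_STATES, N_ACTIONS))
R = -rng.uniform(1.0, 5.0, size=(N_STATES, N_ACTIONS))   # reward = -latency

theta = np.zeros((N_STATES, N_ACTIONS))   # actor: softmax parameters (DNN stand-in)
V = np.zeros(N_STATES)                    # critic: tabular state values

def policy(s):
    z = theta[s] - theta[s].max()
    p = np.exp(z)
    return p / p.sum()

def score(s, a):
    """Gradient of log pi(a|s) w.r.t. theta, flattened."""
    g = np.zeros_like(theta)
    g[s] = -policy(s)
    g[s, a] += 1.0
    return g.ravel()

ALPHA_ACTOR, ALPHA_CRITIC, BATCH, DAMPING = 0.5, 0.1, 64, 1e-3

s = 0
for _ in range(200):                        # training iterations
    grads, advs = [], []
    for _ in range(BATCH):                  # collect one-step transitions
        a = rng.choice(N_ACTIONS, p=policy(s))
        s_next = rng.choice(N_STATES, p=P[s, a])
        td_err = R[s, a] + GAMMA * V[s_next] - V[s]   # TD error as advantage estimate
        V[s] += ALPHA_CRITIC * td_err                 # critic update
        grads.append(score(s, a))
        advs.append(td_err)
        s = s_next
    G = np.array(grads)
    g = (G * np.array(advs)[:, None]).mean(axis=0)       # vanilla policy gradient
    F = G.T @ G / BATCH + DAMPING * np.eye(G.shape[1])   # empirical Fisher estimate
    natural_g = np.linalg.solve(F, g)                    # natural gradient: F^{-1} g
    theta += ALPHA_ACTOR * natural_g.reshape(theta.shape)

print("greedy action per state:", [int(np.argmax(policy(s))) for s in range(N_STATES)])
```

The Fisher preconditioning makes the update invariant to how the policy is parameterized, which is the property the abstract credits with escaping poor local optima; in the paper this is applied to DNN parameters rather than a tabular policy.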