Service migration in multi-access edge computing: A joint state adaptation and reinforcement learning mechanism

2021 
Abstract: With the development of the Internet of Things (IoT), the concept of the edge network has gradually expanded into other fields, including the Internet of Vehicles, mobile communication networks, and smart grids. Because terminal resources are limited, long-distance user movement increases the running cost of services offloaded to edge servers and can even cause services on terminals to stop running. A further problem is that resource shortages or hardware failures in these edge networks can disrupt the service migration policy. In this paper, a novel service migration method based on state adaptation and deep reinforcement learning is proposed to efficiently overcome network failures. Before migration, we define four edge network states to guide the migration policy and adopt a two-dimensional movement model around the edge servers to match the application scenarios of our work. We then use satisfiability modulo theories (SMT) to compute the candidate space of migration policies under cost, delay, and available-resource-capacity constraints, which shortens the interruption time. Finally, the service migration problem is transformed into the problem of finding the optimal destination server and a low-cost migration path, modeled as a Markov decision process and solved with the deep Q-network (DQN) algorithm. Moreover, we theoretically prove a bound on the convergence rate induced by our algorithm's learning-rate function, improving its rate of convergence. Our experimental results demonstrate that the proposed service migration mechanism effectively shortens the delays caused by service interruptions, better avoids the impact of edge network failures on migration results, and thus improves user satisfaction.
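The two-stage pipeline the abstract describes (prune infeasible servers under cost/delay/capacity constraints, then learn a migration target with reinforcement learning) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the server records, thresholds, and reward are invented for the example, the SMT step is replaced by plain predicate checks, and a tabular Q-learning update stands in for the paper's DQN.

```python
import random

# Hypothetical edge-server records; the fields and threshold values are
# illustrative assumptions, not taken from the paper.
SERVERS = [
    {"id": 0, "cost": 3.0, "delay": 8.0, "free_capacity": 4},
    {"id": 1, "cost": 6.0, "delay": 3.0, "free_capacity": 1},
    {"id": 2, "cost": 2.0, "delay": 12.0, "free_capacity": 6},
]
MAX_COST, MAX_DELAY, MIN_CAPACITY = 5.0, 10.0, 2

def candidate_space(servers):
    """Stand-in for the SMT step: keep only servers that satisfy the
    cost, delay, and available-capacity constraints. The paper solves
    these constraints with an SMT solver; simple predicates suffice
    for this sketch."""
    return [s for s in servers
            if s["cost"] <= MAX_COST
            and s["delay"] <= MAX_DELAY
            and s["free_capacity"] >= MIN_CAPACITY]

def q_update(q, state, action, reward, next_state,
             alpha=0.1, gamma=0.9, n_actions=len(SERVERS)):
    """One tabular Q-learning step over (state, destination-server)
    pairs. The paper uses a DQN (a neural approximator of Q); a table
    keeps this sketch self-contained."""
    best_next = max(q.get((next_state, a), 0.0) for a in range(n_actions))
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def choose_action(q, state, candidates, epsilon=0.1):
    """Epsilon-greedy selection restricted to the feasible candidates."""
    if random.random() < epsilon:
        return random.choice(candidates)["id"]
    return max(candidates, key=lambda s: q.get((state, s["id"]), 0.0))["id"]
```

Restricting the action set to `candidate_space(SERVERS)` before the learning step mirrors the paper's motivation: the agent never explores destinations that would violate the cost, delay, or capacity constraints, which reduces wasted exploration and service-interruption time.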