A Deep Reinforcement Learning-based Task Scheduling Algorithm for Energy Efficiency in Data Centers

2021 
Cloud data centers support a wide range of application scenarios for end-users, including scientific computing, smart grids, etc. The number and size of data centers have grown rapidly in recent years, resulting in enormous power demand and serious environmental problems. It is therefore desirable to use a proper scheduling method to optimize resource usage and reduce energy consumption in a data center. However, designing an effective and efficient task scheduling algorithm is difficult because of the dynamic and complex environment of data centers. This paper proposes a task scheduling algorithm, WSS, that optimizes resource usage and reduces energy consumption using a model-free deep reinforcement learning framework inspired by the Wolpertinger architecture. The proposed algorithm can handle scheduling over a sizeable discrete action space, improve decision efficiency, and shorten training convergence time. WSS is further built on Soft Actor-Critic to improve its stability and exploration capability. Experiments based on real-world traces show that WSS reduces energy consumption by nearly 25% compared with a Deep Q-network task scheduling algorithm. Moreover, WSS converges quickly during training and achieves stable performance without increasing the average waiting time of tasks.
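To make the Wolpertinger idea referenced in the abstract concrete, below is a minimal sketch of its action-selection step: an actor maps the state to a continuous proto-action, the k nearest discrete actions (e.g., candidate servers) are retrieved in an embedding space, and a critic refines the choice by picking the highest-valued candidate. All names, dimensions, and the linear actor/critic stand-ins here are illustrative assumptions, not the paper's implementation; the Soft Actor-Critic training of these networks is omitted.

```python
import numpy as np

# Hypothetical sizes; illustrative assumptions, not taken from the paper.
N_SERVERS = 1000   # discrete action space: one action per candidate server
STATE_DIM = 16     # task + cluster features (assumed)
EMBED_DIM = 4      # dimensionality of the continuous action embedding
K = 32             # number of nearest neighbours refined by the critic

rng = np.random.default_rng(0)

# Each discrete action (server) gets a point in a continuous embedding space.
action_embeddings = rng.normal(size=(N_SERVERS, EMBED_DIM))

# Stand-ins for trained networks: a linear "actor" and a linear "critic".
W_actor = rng.normal(size=(STATE_DIM, EMBED_DIM))
W_critic = rng.normal(size=(STATE_DIM + EMBED_DIM,))

def actor(state):
    """Map a state to a continuous proto-action in the embedding space."""
    return np.tanh(state @ W_actor)

def critic(state, action_embs):
    """Score (state, action) pairs; a linear stand-in for Q(s, a)."""
    feats = np.concatenate(
        [np.tile(state, (len(action_embs), 1)), action_embs], axis=1)
    return feats @ W_critic

def wolpertinger_act(state):
    proto = actor(state)
    # k-nearest discrete actions to the proto-action (brute force here for
    # clarity; an approximate k-NN index is typically used at scale).
    dists = np.linalg.norm(action_embeddings - proto, axis=1)
    candidates = np.argpartition(dists, K)[:K]
    # Refine with the critic: pick the candidate with the highest Q-value.
    q = critic(state, action_embeddings[candidates])
    return candidates[int(np.argmax(q))]

state = rng.normal(size=STATE_DIM)
print("chosen server:", wolpertinger_act(state))
```

Restricting the critic to the k retrieved neighbours is what keeps per-decision cost low even when the discrete action space is large, which matches the abstract's claim about decision efficiency on sizeable action spaces.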