Agent Based Simulation of Network Routing: Reinforcement Learning Comparison

2018 
The paper considers and compares two methods for self-adaptive routing in communication networks based on immobile agents. Two reinforcement learning algorithms, Q-learning and SARSA, were employed in a simulated environment, and their results were gathered and compared. Since the task of routing is to find the optimal path between source and destination for every piece of information in a service, the critical issue for routing in communication networks is quality of service, which depends on coordination and support among many distributed devices. These devices change their properties over time; they can appear and they can fail. The task of the agents is therefore to learn to predict new situations while operating continuously, i.e. to self-adapt their behaviour. Our experiments show that the SARSA agent outperforms the Q-learning agent in information routing, but in some situations both agents fail. The circumstances in the agent environment under which the agents do not perform well were detected and described.
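The abstract does not detail the learning rules themselves, but the core distinction between the two algorithms it compares can be sketched briefly. Below is a minimal, illustrative Python sketch, not the authors' implementation: it assumes a routing-style formulation in which states are network nodes, actions are the neighbour a packet is forwarded to, and the hyperparameters and helper names (ALPHA, GAMMA, EPSILON, epsilon_greedy) are placeholders chosen for the example.

```python
import random

# Illustrative hyperparameters (assumptions, not taken from the paper).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def epsilon_greedy(Q, state, actions):
    """Pick a random neighbour with probability EPSILON, else the best-valued one."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def q_learning_update(Q, s, a, reward, s_next, next_actions):
    """Off-policy update: bootstrap on the greedy (max) value at the next node."""
    best_next = max((Q.get((s_next, a2), 0.0) for a2 in next_actions), default=0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (
        reward + GAMMA * best_next - Q.get((s, a), 0.0))

def sarsa_update(Q, s, a, reward, s_next, a_next):
    """On-policy update: bootstrap on the action actually chosen at the next node."""
    Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (
        reward + GAMMA * Q.get((s_next, a_next), 0.0) - Q.get((s, a), 0.0))
```

The practical difference is that Q-learning evaluates the next state with the greedy maximum regardless of which action the policy actually takes, while SARSA evaluates it with the action the (exploring) policy really selects; in a changing network this makes SARSA's estimates reflect the exploration actually performed, which is one plausible reason for the performance difference reported in the abstract.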