Abstract
In distributed software-defined networks (SDN), multiple physical SDN controllers, each managing a network domain, are deployed to balance centralized control, scalability, and reliability requirements. In such a networking paradigm, controllers synchronize with each other to maintain a logically centralized network view. Despite various proposals for distributed SDN controller architectures, most existing works simply assume that such a logically centralized network view can be achieved with some synchronization design; the question of how exactly controllers should synchronize with each other to maximize the benefits of synchronization under eventual consistency assumptions is largely overlooked. To this end, we formulate the controller synchronization problem as a Markov Decision Process (MDP) and apply reinforcement learning combined with a deep neural network to train a smart controller synchronization policy, which we call the Deep-Q (DQ) Scheduler. Evaluation results show that the DQ Scheduler outperforms the anti-entropy algorithm implemented in the ONOS controller by up to 95.2% for inter-domain routing tasks.