BML: A High-performance, Low-cost Gradient Synchronization Algorithm For DML Training

Authors:
Songtao Wang Tsinghua University
Dan Li Tsinghua University
Yang Cheng Tsinghua University
Jinkun Geng Tsinghua University
Yanshu Wang Tsinghua University
Shuai Wang Tsinghua University
Shu-Tao Xia Tsinghua University
Jianping Wu Tsinghua University

Introduction:

In distributed machine learning (DML), the network performance between machines significantly impacts the speed of iterative training. In this paper, the authors propose BML, a new gradient synchronization algorithm with higher network performance and lower network cost than the current practice.

Abstract:

In distributed machine learning (DML), the network performance between machines significantly impacts the speed of iterative training. In this paper we propose BML, a new gradient synchronization algorithm with higher network performance and lower network cost than the current practice. BML runs on a BCube network instead of the traditional Fat-Tree topology. The BML algorithm is designed such that, compared to the parameter server (PS) algorithm on a Fat-Tree network connecting the same number of server machines, it theoretically achieves 1/k of the gradient synchronization time while using only k/5 of the switches (the typical value of k is 2∼4). Experiments with LeNet-5 and VGG-19 benchmarks on a testbed of 9 dual-GPU servers show that BML reduces the job completion time of DML training by up to 56.4%.
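To make the headline comparison concrete, below is a minimal sketch (not from the paper) that simply encodes the ratios stated in the abstract: BML's gradient synchronization time is 1/k of the PS-on-Fat-Tree baseline, and its switch count is k/5 of the baseline. The function name bml_vs_ps and the absolute baseline values (ps_sync_time=1.0, ps_switches=100) are illustrative placeholders, not quantities taken from the paper.

# Sketch: compare BML on BCube against PS sync on Fat-Tree using the
# abstract's stated ratios. The values of k (typically 2-4) and the
# 1/k and k/5 ratios come from the abstract; baseline numbers are placeholders.

def bml_vs_ps(ps_sync_time: float, ps_switches: int, k: int):
    """Return (BML sync time, BML switch count) implied by the stated ratios."""
    bml_sync_time = ps_sync_time / k       # theoretically 1/k of PS sync time
    bml_switches = ps_switches * k / 5.0   # k/5 of the Fat-Tree switch count
    return bml_sync_time, bml_switches

if __name__ == "__main__":
    for k in (2, 3, 4):                    # typical values of k per the abstract
        t, s = bml_vs_ps(ps_sync_time=1.0, ps_switches=100, k=k)
        print(f"k={k}: relative sync time {t:.2f}, relative switch count {s:.1f}")

For example, with k=4 this prints a relative sync time of 0.25 and a relative switch count of 80.0, i.e., a 4x faster synchronization with 20% fewer switches than the PS baseline under the abstract's stated ratios.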
