|Mingyue Ji||University of Utah, USA|
|Rongrong Chen||University of Utah, USA|
We consider a wireless distributed computing network, where all computing nodes (workers) are connected via a wireless medium obeying the seminal protocol channel model. In particular, we focus on a MapReduce-type platform, where each worker is assigned to compute some arbitrary output functions from F input files, which are distributively cached across all workers. The overall computation is decomposed into a set of "Map" and "Reduce" functions computed across all workers. The goal is to characterize the minimum computing latency as a function of the computation load. Unlike related works, which consider either wireline settings or restrict the communication among workers to single-hop, here we focus on the wireless scenario and do not constrain the communication schemes. We propose a data set caching strategy based on a deterministic assignment of Maximum Distance Separable (MDS)-coded data sets over all input files, and a coded multicast transmission strategy where the workers send linearly coded computing results to each other in order to collectively satisfy their assigned tasks. We show that our approach achieves a scalable communication latency, outperforms state-of-the-art schemes in the order sense, and achieves the information-theoretic outer bound within a multiplicative constant factor in practical parameter regimes.
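As a toy illustration of the coded multicast idea mentioned in the abstract (a minimal sketch, not the paper's actual scheme; the segment values and worker layout are invented), consider three workers that each cache two of three file segments and need the third. A single XOR-coded multicast of all three segments lets every worker recover its missing segment, replacing three separate unicasts:

```python
from functools import reduce

# Hypothetical example: three segments of intermediate Map results.
segments = {"A": b"\x01\x05", "B": b"\x02\x06", "C": b"\x03\x07"}

def xor(x, y):
    # Bitwise XOR of two equal-length byte strings.
    return bytes(a ^ b for a, b in zip(x, y))

# One coded multicast packet: XOR of all three segments.
coded = reduce(xor, segments.values())

# The worker that already holds B and C recovers its missing segment A
# by XOR-ing the coded packet with its cached segments.
recovered_A = xor(xor(coded, segments["B"]), segments["C"])
assert recovered_A == segments["A"]
```

Each of the three workers decodes its own missing segment from the same packet, which is why coded multicasting can reduce communication latency relative to uncoded exchange.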