FP-DCNN: a parallel optimization algorithm for deep convolutional neural network

2021 
Deep convolutional neural networks (DCNNs) have been successfully applied to many computer vision tasks. However, with increasing network complexity and the continuous growth of data scale, training a DCNN model suffers from three problems: excessive network parameters, insufficient parameter-optimization capability, and inefficient parallelism. To overcome these obstacles, this paper develops a parallel optimization algorithm for deep convolutional neural networks (FP-DCNN) in the MapReduce framework. First, a pruning method based on Taylor's loss (FMPTL) is designed to trim redundant parameters, which not only compresses the structure of the DCNN but also reduces the computational cost of training. Next, a glowworm swarm optimization algorithm based on an information sharing strategy (IFAS) is presented, which improves parameter optimization by adjusting the initialization of weights. Finally, a dynamic load balancing strategy based on parallel computing entropy (DLBPCE) is proposed to distribute data evenly across nodes and thus improve the parallel performance of the cluster. Our experiments show that, compared with other parallelized algorithms, FP-DCNN not only reduces the computational cost of network training but also achieves higher processing speed.
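The abstract gives no implementation details, so the two sketches below are only rough illustrations of the ideas it names, not the authors' code. The first shows a first-order Taylor pruning criterion of the kind FMPTL builds on: a filter's effect on the loss is approximated by the magnitude of its activation multiplied by the gradient of the loss with respect to that activation, and the lowest-scoring filters are candidates for removal. The function names, tensor shapes, and the fixed pruning ratio are assumptions made for illustration.

```python
import numpy as np

def taylor_filter_importance(activations, gradients):
    """First-order Taylor estimate of each filter's effect on the loss.

    activations: (batch, filters, h, w) feature maps from one conv layer
    gradients:   (batch, filters, h, w) dLoss/dActivation for that layer
    Returns one importance score per filter (assumed here to be the
    mean of |activation * gradient| over batch and spatial positions).
    """
    contribution = np.abs(activations * gradients)
    return contribution.mean(axis=(0, 2, 3))

def select_filters_to_prune(importance, prune_ratio=0.3):
    """Indices of the least important filters under a fixed ratio (assumed)."""
    n_prune = int(len(importance) * prune_ratio)
    return np.argsort(importance)[:n_prune]

# Toy usage with random data standing in for one training batch.
rng = np.random.default_rng(0)
acts = rng.standard_normal((8, 64, 14, 14))
grads = rng.standard_normal((8, 64, 14, 14))
scores = taylor_filter_importance(acts, grads)
print(select_filters_to_prune(scores))
```

The second sketch illustrates how a "parallel computing entropy" of the kind DLBPCE uses can signal load skew: entropy over the nodes' load shares is maximal when every node carries an equal share, so a drop below some fraction of the maximum can trigger rebalancing. The threshold ratio below is a hypothetical parameter, not one taken from the paper.

```python
import math

def parallel_computing_entropy(loads):
    """Entropy of the load distribution across worker nodes.

    loads: amount of data (e.g., records) currently assigned to each node.
    A low value relative to log(n) indicates an uneven distribution.
    """
    total = sum(loads)
    probs = [x / total for x in loads if x > 0]
    return -sum(p * math.log(p) for p in probs)

def needs_rebalance(loads, threshold_ratio=0.9):
    """Trigger data migration when entropy falls below a fraction of its maximum."""
    max_entropy = math.log(len(loads))
    return parallel_computing_entropy(loads) < threshold_ratio * max_entropy

print(needs_rebalance([100, 100, 100, 100]))  # False: perfectly balanced
print(needs_rebalance([400, 10, 10, 10]))     # True: heavily skewed
```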