Learnability and robustness of shallow neural networks learned by a performance-driven BP and a variant of PSO for edge decision-making

2021 
Complex AI models can be difficult to deploy on edge devices that lack strong computing capacity (e.g. a GPU). The Universal Approximation Theorem states that a shallow neural network (SNN) can approximate any continuous nonlinear function. In this paper, we focus on the learnability and robustness of SNNs obtained by a greedy, tight-force heuristic algorithm, Performance-Driven Back-Propagation (PDBP), and by a loose-force meta-heuristic algorithm, a variant of particle swarm optimization (VPSO). From an engineering perspective, every sensor deployed for a specific task is assumed to be well justified. Hence, all sensor readings should be strongly correlated with the target, and the structure of an SNN should depend on the dimensionality of the problem space. The key findings of the research are summarized as follows: (1) the number of hidden neurons an SNN needs depends on the nonlinearity of the training data, and a hidden layer no wider than the dimensionality of the problem space can be sufficient; (2) the learnability of SNNs trained by error-driven PDBP is consistently better than that of SNNs optimized by error-driven VPSO; (3) the performance of SNNs obtained by PDBP and VPSO varies little across different training rates; and (4) compared with classic machine learning algorithms reported in the literature, such as C4.5, NB and NN, the SNNs obtained by accuracy-driven PDBP win on all tested data sets, with improvements of up to 32.86%. Hence, this research can provide valuable guidance for the implementation of edge intelligence.
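The sketch below illustrates the kind of shallow network the abstract describes: a single hidden layer whose width defaults to the input dimension, trained by ordinary back-propagation but selected by classification accuracy rather than error. The paper does not publish code, so this is a minimal NumPy interpretation under stated assumptions (sigmoid activations, MSE gradients, binary labels); the names `SNN` and `train_pdbp` are illustrative, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SNN:
    """One-hidden-layer network; hidden width defaults to the input
    dimension, following the finding that up to d hidden neurons can be enough."""
    def __init__(self, d, hidden=None, rng=None):
        rng = np.random.default_rng(rng)
        h = hidden or d
        self.W1 = rng.normal(0.0, 0.5, (d, h))
        self.b1 = np.zeros(h)
        self.W2 = rng.normal(0.0, 0.5, (h, 1))
        self.b2 = np.zeros(1)

    def forward(self, X):
        self.H = sigmoid(X @ self.W1 + self.b1)   # hidden activations
        return sigmoid(self.H @ self.W2 + self.b2)

    def accuracy(self, X, y):
        return np.mean((self.forward(X).ravel() >= 0.5) == (y >= 0.5))

def train_pdbp(net, X, y, epochs=2000, lr=1.0):
    """Hypothetical performance-driven BP: standard MSE back-propagation,
    but the returned weights are the snapshot with the highest accuracy."""
    best_acc, best = -1.0, None
    for _ in range(epochs):
        out = net.forward(X)
        err = out - y.reshape(-1, 1)                      # dMSE/dout (up to a constant)
        d_out = err * out * (1 - out)                     # sigmoid derivative, output layer
        d_hid = (d_out @ net.W2.T) * net.H * (1 - net.H)  # back-propagated hidden error
        net.W2 -= lr * net.H.T @ d_out / len(X)
        net.b2 -= lr * d_out.mean(axis=0)
        net.W1 -= lr * X.T @ d_hid / len(X)
        net.b1 -= lr * d_hid.mean(axis=0)
        acc = net.accuracy(X, y)
        if acc > best_acc:                                # performance-driven selection
            best_acc = acc
            best = [w.copy() for w in (net.W1, net.b1, net.W2, net.b2)]
    net.W1, net.b1, net.W2, net.b2 = best
    return best_acc

# Toy usage on a small nonlinear (XOR-like) data set.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
net = SNN(d=2, rng=0)
print("training accuracy:", train_pdbp(net, X, y))
```

A VPSO counterpart would replace the gradient step with a particle-swarm search over the flattened weight vector, scoring each particle with the same accuracy (performance-driven) or error (error-driven) criterion.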