Understanding the Impact of Neural Variations and Random Connections on Inference.

2021 
Recent research suggests that in vitro neural networks created from dissociated neurons may be used for computing and for performing machine learning tasks. Toward better artificial intelligence systems, a hybrid bio-silicon computer is worth exploring. However, the performance of current hybrid bio-silicon designs still falls far short of silicon-based computers. One reason may be that living neural networks have intrinsic properties, such as random network connectivity, high network sparsity, and large neural and synaptic variability. These properties may lead to new design considerations, and existing algorithms may need to be adjusted for implementation on living neural networks. This work investigates the impact of neural variation and random connections on the inference of learning algorithms. A two-layer hybrid bio-silicon platform is constructed, and a three-stage design method is proposed for fast development of living neural network algorithms. Neural variation and dynamics are verified by fitting model parameters to biological experimental results. Random connections are generated under different connection probabilities to vary network sparsity. A multi-layer perceptron algorithm is tested with biological constraints on the MNIST dataset. The results show that a reasonable inference accuracy can be achieved when neural variations and random network connections are taken into account. A new adaptive pre-processing technique is proposed to ensure good learning accuracy across different levels of living neural network sparsity.
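As a rough illustration only (not the paper's implementation), the two biological constraints described above, random connectivity generated under a connection probability and multiplicative synaptic variability, could be sketched for a single MLP layer as follows; the names `p_connect` and `var_std` are hypothetical parameters chosen for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_noisy_layer(x, w, b, p_connect=0.3, var_std=0.2):
    """One MLP layer under two assumed biological constraints:
    - random connectivity: each synapse exists with probability p_connect
    - synaptic variability: surviving weights are perturbed by Gaussian noise
    """
    mask = rng.random(w.shape) < p_connect           # Bernoulli connection mask
    noise = rng.normal(1.0, var_std, size=w.shape)   # multiplicative weight variation
    w_bio = w * mask * noise                         # sparse, noisy effective weights
    return np.maximum(x @ w_bio + b, 0.0)            # ReLU activation

# Tiny example: one 784-dim MNIST-like input through a 100-unit hidden layer
x = rng.random((1, 784))
w = rng.normal(0.0, 0.05, size=(784, 100))
b = np.zeros(100)
h = sparse_noisy_layer(x, w, b, p_connect=0.3)
print(h.shape)  # (1, 100)
```

Sweeping `p_connect` would vary the network sparsity in the same spirit as the connection-probability experiments the abstract mentions; how the paper actually models variability and connectivity is not specified here.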