Computational Neural Networks

1992 
Research on neural network modeling has a long history. Neurobiologists have identified individual nerve cells in the brain and learned how neurons carry information, transmit it, and respond to various stimuli. Based on this understanding of the nervous system, researchers have proposed many neural networks, and over the past fifty years thousands of papers have been published in this area. As early as 1943, McCulloch and Pitts [MP43] developed a neural network by treating neurons as Boolean devices and showed that such a network could compute. This network used a step function as the activation function, an approach later adopted by many neural networks such as the Amari recurrent network [Ama71, Ama77], the discrete Hopfield network [Hop82], and the discrete bidirectional associative memory [Kos88]. More recently, learning has become the main focus in this area. In 1949, Hebb [Heb49] proposed a learning rule that, first simulated in Edmonds and Minsky's learning machine, is still used today in many learning paradigms. In the 1950s, Rosenblatt [Ros59, Ros62] invented a class of simple neuron learning networks called perceptrons in order to realize a dynamic, interactive, and self-organizing system. Minsky and Papert [MP69] studied Rosenblatt's learning networks and showed that a two-layer network works only for linearly separable problems.
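The McCulloch-Pitts model mentioned above treats a neuron as a Boolean device: it sums weighted binary inputs and fires only when the sum reaches a threshold (the step activation function). A minimal sketch, with illustrative names and parameter values chosen here for the example:

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: weighted sum passed through a step function."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# With weights (1, 1), a threshold of 2 realizes logical AND,
# and a threshold of 1 realizes logical OR:
for a in (0, 1):
    for b in (0, 1):
        assert mcp_neuron([a, b], [1, 1], 2) == (a and b)
        assert mcp_neuron([a, b], [1, 1], 1) == (a or b)
```

No single such unit can compute XOR, since XOR is not linearly separable; this is the limitation of two-layer perceptron networks that Minsky and Papert analyzed.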