A stochastic training model for perceptron algorithms

1991 
A stochastic training model that can be used to study the transient and steady-state convergence properties of perceptron learning algorithms is presented. It is based on a system identification formulation whereby the training signals are modeled as the output of a nonlinear system. The perceptron input signals are modeled as a Gaussian random vector so that closed-form expressions can be derived for expectations of Gaussian variates. These, in turn, can be solved to predict the trajectories and convergence points of the network connection weights. Although this model is quite general and can be applied to a variety of multilayer perceptron configurations, the authors focus on the single-layer perceptron and two of its learning algorithms.
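The setup the abstract describes can be illustrated with a small simulation: training signals generated by an unknown nonlinear "teacher" system, Gaussian input vectors, and a single-layer perceptron whose connection weights adapt toward the teacher. The sketch below is a minimal illustration, not the paper's model; the teacher weights, dimension, learning rate, and the choice of Rosenblatt's error-correction rule (one classical single-layer perceptron algorithm) are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "unknown nonlinear system" generating the training signal:
# a hard-limited linear combination of the inputs (assumed for illustration).
w_true = np.array([1.0, -0.5, 0.25])

def teacher(x):
    # Desired training signal d(x) = sign(w_true . x)
    return np.sign(x @ w_true)

d = 3                 # input dimension (assumed)
w = np.zeros(d)       # perceptron connection weights
mu = 0.05             # learning rate (assumed)
steps = 20000

for _ in range(steps):
    x = rng.standard_normal(d)   # Gaussian random input vector, as in the model
    d_t = teacher(x)             # training signal from the unknown system
    y = np.sign(w @ x)           # perceptron output
    # Rosenblatt error-correction update: weights change only on error
    w += mu * (d_t - y) * x

# The weight trajectory should align with the teacher's direction
cos = w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true))
```

Averaging such trajectories over the Gaussian input distribution is what the paper's closed-form expectations make analytically tractable, yielding predicted weight trajectories and convergence points without Monte Carlo simulation.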