Evaluation of constructive neural networks with cascaded architectures

2002 
Abstract In this study, we investigated five constructive neural network algorithms: four methods from the literature (Cascade-Correlation, Modified Cascade-Correlation, Cascade, and Cascade Network) and our own recently developed Fixed Cascade Error. The investigated algorithms share many similarities: all have a cascaded architecture, and all automatically grow the neural network by adding new hidden units as training proceeds. Furthermore, the networks are trained in a layer-by-layer style, i.e., once a hidden unit is installed in the network, its input weights are frozen and do not change during later stages of training. The basic versions of the algorithms, which use only one randomly initialized candidate unit when training a hidden unit, were improved during this research by adding a deterministic initialization method and by utilizing multiple candidate units in the hidden-unit training phase. The key idea of the deterministic initialization method is to create a large pool of randomly initialized hidden units, of which only the best unit is further trained and installed in the network. With multiple candidate units, in contrast, several candidate units are trained to the final solution, after which the best of them is selected and installed as a hidden unit in the active network. The numerical simulations show that the multiple-candidate-unit versions of the algorithms in particular usually produce better results than the basic versions. In addition, the deterministic initialization method does not increase the computational cost of the algorithms; in most cases it even reduces the cost of network training. Moreover, our own algorithm quite often achieves the best performance level among the investigated algorithms.
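
The abstract describes three mechanisms that lend themselves to a compact illustration: correlation-style candidate scoring, deterministic initialization (score a large random pool, train only the winner), and the multiple-candidate strategy (train several candidates to completion, install the best). Below is a minimal NumPy sketch of these ideas in a toy regression setting. It is an illustrative reconstruction, not the paper's implementation: all function names (unit_output, train_unit, next_unit_deterministic, next_unit_multi_candidate) are hypothetical, and the candidate is fit to the residual by squared-error gradient descent, a simplification of Cascade-Correlation's correlation-maximization step.

    import numpy as np

    rng = np.random.default_rng(42)

    def unit_output(w, X):
        # tanh activation of a single candidate hidden unit
        return np.tanh(X @ w)

    def score(w, X, residual):
        # candidate quality: |covariance| between the unit's output and
        # the current residual error (the quantity Cascade-Correlation
        # maximizes)
        h = unit_output(w, X)
        return abs(np.mean((h - h.mean()) * (residual - residual.mean())))

    def train_unit(w, X, residual, lr=0.1, epochs=300):
        # fit the candidate to the residual by gradient descent on
        # squared error (a simplification of the correlation-ascent step)
        for _ in range(epochs):
            h = unit_output(w, X)
            w = w - lr * X.T @ ((h - residual) * (1.0 - h ** 2)) / len(X)
        return w

    def next_unit_deterministic(X, residual, pool_size=64):
        # deterministic initialization: score a large random pool, then
        # train only the single best-scoring candidate
        pool = rng.normal(scale=0.5, size=(pool_size, X.shape[1]))
        best = max(pool, key=lambda w: score(w, X, residual))
        return train_unit(best, X, residual)

    def next_unit_multi_candidate(X, residual, n_candidates=8):
        # multiple candidate units: train every candidate to completion
        # and install the one with the best final score
        trained = [train_unit(rng.normal(scale=0.5, size=X.shape[1]),
                              X, residual) for _ in range(n_candidates)]
        return max(trained, key=lambda w: score(w, X, residual))

    # Toy cascade loop: each installed unit's output is appended as a new
    # input column so later units can see it -- the cascaded architecture
    # with frozen input weights.
    X = rng.normal(size=(200, 3))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
    residual = y - y.mean()
    for _ in range(3):
        w = next_unit_deterministic(X, residual)  # input weights now fixed
        h = unit_output(w, X)
        a = np.dot(h, residual) / np.dot(h, h)    # least-squares output weight
        residual = residual - a * h
        X = np.column_stack([X, h])               # cascade the unit forward
        print("residual MSE:", np.mean(residual ** 2))

In this sketch, only the new unit's scalar output weight is fit after installation, mirroring the frozen-input-weights, layer-by-layer training style described in the abstract.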