Bi-level multi-objective evolution of a Multi-Layered Echo-State Network Autoencoder for data representations

2019 
Abstract The Multi-Layered Echo-State Network (ML-ESN) is a recently developed and powerful type of recurrent neural network that has succeeded on several non-linear benchmark problems. On account of its rich dynamics, the ML-ESN is exploited in this paper, for the first time, as a recurrent Autoencoder (ML-ESNAE) to extract new features from original data representations. The challenging and crucial task of optimally determining the ML-ESNAE architecture and training parameters is then addressed, so that more informative features can be extracted from the data. Traditionally, in an ML-ESN, the parameters (numbers of hidden neurons, sparsity rates, and weights) are randomly chosen and manually tuned to minimize the learning error. On the one hand, this random setting does not guarantee good generalization; on the other, it can inflate the network's complexity. A novel bi-level evolutionary optimization approach is therefore proposed for the ML-ESNAE to address these challenges. The first level performs Pareto multi-objective architecture optimization, maximizing learning accuracy while keeping network complexity low. Every Pareto-optimal solution obtained at the first level then undergoes mono-objective weight optimization at the second level. Particle Swarm Optimization (PSO) is used as the evolutionary tool at both levels. An empirical study shows that the evolved ML-ESNAE yields a noticeable improvement in extracting new, more expressive features from the original ones. Application case studies on a range of benchmark datasets show that the extracted features achieve excellent classification accuracy, and the effectiveness of the evolved ML-ESNAE is demonstrated on both noisy and noise-free data. In conclusion, the evolutionary ML-ESNAE is proposed as a new benchmark for the evolutionary AI and machine learning research community.
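To make the pipeline concrete, the following is a minimal sketch (in Python/NumPy, which the paper does not prescribe) of an ML-ESN used as an autoencoder: stacked random, sparse reservoirs drive one another, a ridge-regression readout is trained to reconstruct the input, and the deepest reservoir states serve as the extracted features. All function names, bounds, and hyper-parameter defaults here are illustrative assumptions, not the authors' code.

```python
import numpy as np

def make_reservoir(n_in, n_res, spectral_radius=0.9, sparsity=0.1, rng=None):
    """Randomly initialize one fixed (untrained) reservoir layer."""
    rng = rng if rng is not None else np.random.default_rng(0)
    w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))           # input weights
    w = rng.uniform(-0.5, 0.5, (n_res, n_res))             # recurrent weights
    w[rng.random((n_res, n_res)) > sparsity] = 0.0         # enforce sparsity
    w *= spectral_radius / max(abs(np.linalg.eigvals(w)))  # echo-state scaling
    return w_in, w

def run_reservoir(u_seq, w_in, w):
    """Drive a reservoir with an input sequence; return its state sequence."""
    x = np.zeros(w.shape[0])
    states = []
    for u in u_seq:
        x = np.tanh(w_in @ u + w @ x)
        states.append(x)
    return np.array(states)

def ml_esnae_features(u_seq, layers, ridge=1e-6):
    """Stack reservoir layers, train a ridge-regression readout to
    reconstruct the input (autoencoding), and return the deepest
    states as extracted features plus the reconstruction error."""
    h = u_seq
    for w_in, w in layers:
        h = run_reservoir(h, w_in, w)
    w_out = np.linalg.solve(h.T @ h + ridge * np.eye(h.shape[1]), h.T @ u_seq)
    recon_err = float(np.mean((h @ w_out - u_seq) ** 2))
    return h, recon_err

# Example: two stacked reservoirs turning a 3-dimensional sequence
# into 50-dimensional features.
u = np.random.default_rng(42).standard_normal((200, 3))
layers = [make_reservoir(3, 100), make_reservoir(100, 50)]
features, err = ml_esnae_features(u, layers)
```

Likewise, a heavily simplified sketch of the bi-level scheme described in the abstract: level 1 runs a Pareto-style multi-objective PSO over architecture hyper-parameters (reconstruction error versus network size), and level 2 refines the weights of each non-dominated architecture with a standard mono-objective PSO. The objective callables `evaluate_arch` and `evaluate_weights`, the search bounds, and the PSO coefficients are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
LOW, HIGH = np.array([50.0, 0.05]), np.array([500.0, 0.5])  # neurons, sparsity

def dominates(a, b):
    """Pareto dominance for minimization over (error, complexity)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def level1_architecture_search(evaluate_arch, n_particles=20, n_iter=30):
    """Multi-objective PSO over architecture hyper-parameters; returns
    an archive of non-dominated (architecture, objectives) pairs."""
    pos = rng.uniform(LOW, HIGH, (n_particles, 2))
    vel = np.zeros_like(pos)
    archive = []
    for _ in range(n_iter):
        for p in pos:
            obj = evaluate_arch(p)  # -> (recon. error, e.g. neuron count)
            if not any(dominates(o, obj) for _, o in archive):
                archive = [(q, o) for q, o in archive if not dominates(obj, o)]
                archive.append((p.copy(), obj))
        leader = archive[rng.integers(len(archive))][0]  # random Pareto leader
        vel = 0.7 * vel + 1.5 * rng.random(pos.shape) * (leader - pos)
        pos = np.clip(pos + vel, LOW, HIGH)
    return archive

def level2_weight_refinement(evaluate_weights, dim, n_particles=15, n_iter=50):
    """Standard mono-objective PSO over a flattened weight vector
    for one Pareto-optimal architecture from level 1."""
    pos = rng.uniform(-0.5, 0.5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([evaluate_weights(p) for p in pos])
    for _ in range(n_iter):
        g = pbest[np.argmin(pbest_f)]                    # global best
        vel = (0.7 * vel
               + 1.5 * rng.random(pos.shape) * (pbest - pos)
               + 1.5 * rng.random(pos.shape) * (g - pos))
        pos = pos + vel
        f = np.array([evaluate_weights(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
    return pbest[np.argmin(pbest_f)]
```

In the paper's setting, `evaluate_arch` would presumably instantiate an ML-ESNAE with the encoded hyper-parameters (for example via `make_reservoir` above) and return its reconstruction error together with a complexity measure, while `evaluate_weights` would score a candidate weight vector for a fixed architecture; both callables are placeholders in this sketch.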