Learning Through Deterministic Assignment of Hidden Parameters

Supervised learning frequently boils down to determining the hidden and bright parameters of a parameterized hypothesis space from finite input-output samples. The hidden parameters determine the nonlinear mechanism of an estimator, while the bright parameters characterize the linear mechanism. In the traditional learning paradigm, hidden and bright parameters are not distinguished but are trained simultaneously in a single learning process. Such one-stage learning (OSL) facilitates theoretical analysis but suffers from a high computational burden. In this paper, we propose a two-stage learning scheme, learning through deterministic assignment of hidden parameters (LtDaHP), which deterministically generates the hidden parameters using minimal Riesz energy points on a sphere and equally spaced points in an interval. We show theoretically that, with such a deterministic assignment of hidden parameters, LtDaHP with a neural network realization achieves almost the same generalization performance as OSL. LtDaHP thus provides an effective way to overcome the high computational burden of OSL. We present a series of simulations and application examples that support the advantages of LtDaHP.
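The two-stage idea can be sketched in code. The snippet below is a minimal illustration, not the paper's implementation: it approximates minimal Riesz energy points on the unit sphere by projected gradient descent on the Riesz s-energy (the paper's construction may differ), takes equally spaced biases in an interval, and then solves only the linear (bright) parameters by least squares. All function names, the interval `[-1, 1]`, and the `tanh` activation are illustrative assumptions.

```python
import numpy as np

def riesz_points(n, d, s=1.0, steps=500, lr=0.01, seed=0):
    """Approximate minimal Riesz s-energy points on the unit sphere S^{d-1}
    by projected gradient descent on E(x) = sum_{i<j} |x_i - x_j|^{-s}.
    This is an illustrative stand-in for the paper's deterministic points."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    for _ in range(steps):
        diff = x[:, None, :] - x[None, :, :]           # pairwise x_i - x_j
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)                 # ignore self-pairs
        # gradient of |x_i - x_j|^{-s} with respect to x_i, summed over j
        grad = -s * (diff / dist[..., None] ** (s + 2)).sum(axis=1)
        x -= lr * grad                                 # energy descent step
        x /= np.linalg.norm(x, axis=1, keepdims=True)  # project onto sphere
    return x

def ltdahp_fit(X, y, n_hidden=50, sigma=np.tanh):
    """Stage 1: assign hidden parameters deterministically (weight directions
    from Riesz points, biases equally spaced in [-1, 1]).
    Stage 2: solve the bright (linear) parameters by least squares."""
    W = riesz_points(n_hidden, X.shape[1])             # hidden weights
    b = np.linspace(-1.0, 1.0, n_hidden)               # equally spaced biases
    H = sigma(X @ W.T + b)                             # hidden-layer features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)       # bright parameters
    return W, b, beta
```

Because the hidden parameters are fixed in stage 1, stage 2 reduces to a single linear least-squares solve, which is the source of the computational savings over jointly training all parameters.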