A Gradient Entropy Regularized Likelihood Learning Algorithm on Gaussian Mixture with Automatic Model Selection

2006 
In Gaussian mixture (GM) modeling, it is crucial to select the number of Gaussians for a sample data set. In this paper, we propose a gradient entropy regularized likelihood (ERL) learning algorithm for Gaussian mixtures to solve this problem under regularization theory. Simulation experiments demonstrate that the gradient ERL learning algorithm can automatically select an appropriate number of Gaussians during parameter learning on a sample data set and leads to a good estimate of the parameters of the actual Gaussian mixture, even when two or more actual Gaussians overlap strongly.
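The abstract does not spell out the exact objective, so the following is only a minimal numerical sketch of how entropy-regularized likelihood learning with automatic model selection can behave for a 1-D Gaussian mixture. It assumes a regularizer of the form gamma * sum_k pi_k log(pi_k) added to the average log-likelihood, so that redundant components are driven toward zero mixing weight and pruned; the hyper-parameters `gamma`, `lr`, and the pruning threshold, as well as the function names, are illustrative assumptions rather than the paper's actual algorithm.

```python
# Sketch: entropy-regularized likelihood (ERL) learning for a 1-D Gaussian
# mixture with automatic model selection via pruning of small mixing weights.
# Assumed objective: (1/N) * log-likelihood + gamma * sum_k pi_k log(pi_k).
import numpy as np

rng = np.random.default_rng(0)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def erl_fit(x, k_init=8, gamma=0.2, lr=0.2, iters=300, prune_tol=1e-3):
    n = len(x)
    pi = np.full(k_init, 1.0 / k_init)                   # mixing proportions
    mu = rng.choice(x, size=k_init, replace=False)       # component means
    sigma = np.full(k_init, x.std())                     # component std devs

    for _ in range(iters):
        # Posterior responsibilities r[n, k]
        dens = pi * gaussian(x[:, None], mu, sigma)      # shape (n, k)
        r = dens / dens.sum(axis=1, keepdims=True)
        n_k = r.sum(axis=0)

        # Likelihood-driven updates for means and variances (EM-style)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k) + 1e-6

        # Gradient of the assumed ERL objective w.r.t. pi, projected onto
        # the probability simplex; the entropy penalty pushes small pi_k down.
        grad = n_k / (n * pi) + gamma * (np.log(pi) + 1.0)
        pi = pi + lr * (grad - grad.mean())
        pi = np.clip(pi, 1e-12, None)
        pi /= pi.sum()

        # Automatic model selection: drop components with negligible weight
        keep = pi > prune_tol
        if not keep.all():
            pi, mu, sigma = pi[keep] / pi[keep].sum(), mu[keep], sigma[keep]

    return pi, mu, sigma

# Two overlapping true Gaussians; the learner starts with 8 candidate components.
x = np.concatenate([rng.normal(0.0, 1.0, 600), rng.normal(2.5, 1.0, 400)])
pi, mu, sigma = erl_fit(x)
print("selected components:", len(pi))
print("weights:", np.round(pi, 3), "means:", np.round(mu, 2))
```

In this sketch the entropy penalty on the mixing proportions is what performs the model selection: components that fail to claim data see their weights shrink below the pruning threshold and are removed, so only an appropriate number of Gaussians survives the learning run.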