Effective approximation of high-dimensional space using neural networks

2021 
Because of the curse of dimensionality, data in a high-dimensional space rarely provide sufficient information for training neural networks, which makes approximating a high-dimensional space with neural networks a difficult task. To address this, we propose having a neural network approximate a high-dimensional function that can itself effectively approach the high-dimensional space, rather than having the network approximate the space directly. Accordingly, two error bounds are derived from the Lipschitz condition: one for the neural network approximating the high-dimensional function, and the other for the high-dimensional function approaching the high-dimensional space. Experimental results on synthetic and real-world datasets show that our method is effective and outperforms competing methods at approximating the high-dimensional space. We find that approximating the space indirectly, through a high-dimensional function, is more resistant to the curse of dimensionality. In addition, the ability of the proposed method to approximate the high-dimensional space depends on both the number of hidden layers and the choice of high-dimensional function, but more strongly on the latter; our findings also show no obvious dependency between the number of hidden layers and the choice of high-dimensional function.
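The abstract does not state the two bounds explicitly. A plausible reading, using illustrative symbols $f_\theta$ for the network, $g$ for the chosen high-dimensional function, and $S$ for the target space (none of which are the paper's own notation), is a triangle-inequality decomposition of the overall error into the two bounds named above:

```latex
% Illustrative decomposition; f_\theta (network), g (chosen high-dimensional
% function), and S (target space) are assumed symbols, not the paper's notation.
\[
  \underbrace{d\!\left(f_\theta,\, S\right)}_{\text{overall error}}
  \;\le\;
  \underbrace{\lVert f_\theta - g \rVert}_{\text{network $\to$ function}}
  \;+\;
  \underbrace{d\!\left(g,\, S\right)}_{\text{function $\to$ space}} .
\]
% If f_\theta and g are Lipschitz with constants L_f and L_g, the first term
% is controlled on a sample \{x_i\} with covering radius \delta:
\[
  \lVert f_\theta - g \rVert_\infty
  \;\le\;
  \max_i \bigl| f_\theta(x_i) - g(x_i) \bigr| + (L_f + L_g)\,\delta .
\]
```

A minimal sketch of the two-stage idea follows, assuming a PyTorch setup; the surrogate function g, network width, and training hyperparameters are illustrative assumptions, since the paper's actual function choices and configurations are not given in the abstract:

```python
# Hypothetical sketch: fit a network to a chosen high-dimensional function g,
# which in turn approximates the target space, instead of fitting the space
# directly. All names and values here are illustrative assumptions.
import torch
import torch.nn as nn

d = 50  # input dimension

def g(x):
    # Illustrative choice of high-dimensional surrogate function.
    return torch.sin(x.sum(dim=1, keepdim=True) / d ** 0.5)

net = nn.Sequential(
    nn.Linear(d, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Stage 1: train the network to approximate g (the first bound above).
for step in range(2000):
    x = torch.randn(256, d)  # sample the high-dimensional domain
    loss = ((net(x) - g(x)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: how well g approaches the target space is the second bound;
# here we only report the network-to-function residual on fresh samples.
with torch.no_grad():
    x = torch.randn(4096, d)
    rmse = ((net(x) - g(x)) ** 2).mean().sqrt().item()
    print("RMSE of net against g on fresh samples:", rmse)
```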