Approximation capabilities of neural networks on unbounded domains.
2019
We prove universal approximation theorems for neural networks in $L^{p}(\mathbb{R} \times [0, 1]^n)$, under the conditions that $p \in [2, \infty)$ and that the activation function is, among others, a monotone sigmoid, ReLU, ELU, softplus, or leaky ReLU. Our results partially generalize classical universal approximation theorems on $[0,1]^n$.
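As an illustrative numerical sketch only (not the paper's proof technique), a one-hidden-layer ReLU network with randomly drawn inner weights and a least-squares output layer can drive down the $L^2(\mathbb{R})$ error against a fixed integrable target, here a Gaussian bump; the truncated grid, random-features construction, and target function are all assumptions of this example, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a function in L^2(R), concentrated near the origin (assumed example).
f = lambda x: np.exp(-x**2)

# Grid approximating the L^2 norm on a truncated window [-10, 10];
# the target's mass outside this window is negligible.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

def l2_error(width):
    """Approximate L^2(R) error of a width-`width` random-features ReLU net."""
    # Random inner weights and biases (a random-features sketch, not the
    # constructive argument of the paper).
    w = rng.normal(size=width)
    b = rng.uniform(-10.0, 10.0, size=width)
    # Hidden-layer activations: relu(w_j * x + b_j) on the grid.
    H = np.maximum(x[:, None] * w[None, :] + b[None, :], 0.0)
    # Fit the linear output layer by least squares.
    coef, *_ = np.linalg.lstsq(H, f(x), rcond=None)
    resid = H @ coef - f(x)
    return float(np.sqrt(np.sum(resid**2) * dx))

for width in (4, 16, 64, 256):
    print(width, l2_error(width))
```

Widening the hidden layer enlarges the family of piecewise-linear functions the output layer can combine, so the measured error shrinks as the width grows.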