Convergence bounds for empirical nonlinear least-squares.

2020 
We consider best approximation problems in a (nonlinear) subspace $\mathcal{M}$ of a Banach space $(\mathcal{V},\|\bullet\|)$ where only an empirical estimate $\|\bullet\|_n$ of the norm can be computed. The norm is assumed to be of the form $\|v\| := \mathbb{E}_Y[|v|_Y^2]^{1/2}$ for some (parametric) seminorm $|\bullet|_Y$ depending on a random variable $Y$. The objective is to approximate an unknown function $u \in \mathcal{V}$ by $v\in\mathcal{M}$ by minimizing the empirical norm $\|u-v\|_n^2 := \tfrac{1}{n}\sum_{i=1}^n |u-v|_{y_i}^2$ with respect to $n$ random samples $\{y_i\}_{i=1,\ldots,n}$. It is well known that such least squares approximations can become inaccurate and unstable when the number of samples $n$ is too close to the number of parameters $m \propto \operatorname{dim}(\mathcal{M})$. We review this statement in light of adapted distributions for the samples $y_i$ and establish error bounds for the empirical best approximation error based on a restricted isometry property (RIP) $(1-\delta)\|v\|^2 \le \|v\|_n^2 \le (1+\delta)\|v\|^2 \ \forall v\in\mathcal{M}$ which holds in probability. These results are closely related to those in "Optimal weighted least-squares methods" (A. Cohen and G. Migliorati, 2016) and show that $n \ge sm$ is sufficient for the RIP to be satisfied with high probability. The factor $s$ represents the variation of the empirical norm $\|\bullet\|_n$ on $\mathcal{M}$ and can be influenced by the choice of the distribution of the samples. Several model classes are examined and numerical experiments illustrate some of the obtained stability bounds.
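The setting above can be illustrated with a minimal numerical sketch, under assumptions not fixed by the abstract: take $\mathcal{V} = L^2([0,1])$ with $Y$ uniform, the pointwise seminorm $|v|_y = |v(y)|$, and let $\mathcal{M}$ be a *linear* polynomial space of dimension $m$ (the simplest model class). Then $\|v\|_n^2 = \tfrac{1}{n}\sum_i v(y_i)^2$, the empirical best approximation is an ordinary least squares fit, and for an $L^2$-orthonormal basis the RIP constant $\delta$ equals the spectral deviation of the empirical Gram matrix from the identity.

```python
import numpy as np

# Illustrative sketch only: V = L^2([0,1]) with uniform Y, seminorm
# |v|_y = |v(y)|, and M a linear polynomial space of dimension m.
rng = np.random.default_rng(0)
m, n = 5, 500                          # model dimension, sample count (n >> m)

u = lambda y: np.sin(2 * np.pi * y)    # "unknown" target function (assumed)
ys = rng.uniform(0.0, 1.0, size=n)     # i.i.d. samples of Y

# Empirical best approximation: minimize ||u - v||_n^2 over v in M via
# least squares in a shifted-Legendre basis, orthonormal in L^2([0,1]).
A = np.polynomial.legendre.legvander(2 * ys - 1, m - 1)
A *= np.sqrt(2 * np.arange(m) + 1)     # normalize: int_0^1 phi_k^2 dy = 1
coef, *_ = np.linalg.lstsq(A, u(ys), rcond=None)

# RIP constant: for v with coefficients c in an orthonormal basis,
# ||v||^2 = c^T c and ||v||_n^2 = c^T G c with G the empirical Gram matrix,
# so delta = ||G - I||_2 is the smallest constant satisfying the RIP on M.
G = A.T @ A / n
delta = np.linalg.norm(G - np.eye(m), ord=2)
print(f"estimated RIP constant delta = {delta:.3f}")
```

For nonlinear model classes (the paper's main concern), the sup over $\mathcal{M}$ no longer reduces to a single Gram matrix, which is where the variation factor $s$ and adapted sampling distributions enter.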