Quantitative Sobolev Approximation Bounds for Neural Operators with Empirical Validation on Burgers Equation
arXiv:2605.08170v1

Abstract: Neural operators have emerged as a powerful tool for learning mappings between infinite-dimensional function spaces. However, their approximation properties in Sobolev norms remain poorly quantified, even though these norms control both function values and derivatives and are the natural metrics for PDE well-posedness, stability, and generalization. We develop a functional-analytic framework for operator learning in Sobolev spaces and connect it to the numerical behavior of Fourier Neural Operators (FNOs) on a prototypical PDE. First, for a continuous nonlinear operator $\mathcal{G}: H^{s}(D)\to H^{t}(D')$ with $s > d/2$ and inputs restricted to a compact subset of $H^{s}(D)$, we prove that $\mathcal{G}$ can be uniformly approximated in $H^{t}$-norm by a neural operator with $\mathcal{O}(\varepsilon^{-d/s})$ trainable parameters. This yields an explicit complexity--error relation $\|\mathcal{G}-\mathcal{G}_\theta\|_{H^{t}} \le C N^{-s/d}$, where $N$ is the number of trainable parameters. We then study the one-dimensional viscous Burgers solution operator $\mathcal{G}: u_{0}\mapsto u(\cdot,1)$ on a bounded $H^{1}$-ball and train FNOs with an $H^{1}$-loss. Across a sweep of model sizes, we obtain test $H^{1}$-errors down to $\mathcal{O}(10^{-7})$ and relative errors of order $10^{-3}$, with predictions accurately matching both solutions and spatial derivatives on held-out data. A log-log plot of Sobolev error versus parameter count exhibits an approximate power law $\|\mathcal{G}-\mathcal{G}_\theta\|_{H^{1}} \approx C N^{-\alpha}$ with empirical exponent $\alpha \approx 1.4$, and long-horizon training reveals optimization instabilities in large FNOs. Together, these results provide quantitative evidence that Sobolev-space approximation theory meaningfully predicts neural-operator scaling behavior.
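The parameter-count bound and the error rate are two readings of the same estimate: inverting the accuracy--size relation gives the size--accuracy relation. A one-line check of the equivalence stated in the abstract:

```latex
% Accuracy \varepsilon with N parameters, and the inverted rate:
N \asymp \varepsilon^{-d/s}
\;\Longleftrightarrow\;
\varepsilon \asymp N^{-s/d},
\qquad\text{so}\qquad
\|\mathcal{G}-\mathcal{G}_\theta\|_{H^{t}} \le C\, N^{-s/d}.
```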
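The abstract reports training FNOs with an $H^{1}$-loss but does not spell out its discretization. Below is a minimal sketch, using PyTorch for concreteness and assuming a uniform periodic 1D grid with spectral differentiation; the name `h1_loss` and the domain length `L` are illustrative, not taken from the paper.

```python
import torch

def h1_loss(pred: torch.Tensor, target: torch.Tensor,
            L: float = 2.0 * torch.pi) -> torch.Tensor:
    """Squared H^1 error on a uniform periodic 1D grid of length L.

    pred, target: (batch, n) samples of predicted and true solutions.
    The H^1 norm combines the L^2 norms of the error and of its spatial
    derivative; the derivative is computed spectrally, which assumes
    periodic boundary conditions.
    """
    n = pred.shape[-1]
    err = pred - target
    # Angular wavenumbers for a real FFT on a grid with spacing L / n.
    k = 2.0 * torch.pi * torch.fft.rfftfreq(n, d=L / n)
    # Spectral d/dx: multiply Fourier modes by i*k, transform back.
    d_err = torch.fft.irfft(1j * k * torch.fft.rfft(err, dim=-1), n=n, dim=-1)
    # Grid means approximate (1/L) times the integrals over the domain.
    return (err ** 2).mean() + (d_err ** 2).mean()
```

An equivalent Fourier-side formulation weights each mode of the error by $(1+k^2)$; the grid-side version above makes the "function values plus derivatives" structure of the norm explicit.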
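The empirical exponent $\alpha$ in $\|\mathcal{G}-\mathcal{G}_\theta\|_{H^{1}} \approx C N^{-\alpha}$ is read off as the slope of a least-squares line in log-log coordinates. A sketch of that fit with placeholder numbers (the abstract does not report the sweep's actual values):

```python
import numpy as np

# Hypothetical sweep: parameter counts and measured H^1 test errors.
# These numbers are placeholders, not results from the paper.
params = np.array([1e4, 3e4, 1e5, 3e5, 1e6])
h1_err = np.array([2e-3, 4e-4, 9e-5, 2e-5, 5e-6])

# Fit log(err) = log(C) - alpha * log(N); the slope is -alpha.
slope, intercept = np.polyfit(np.log(params), np.log(h1_err), deg=1)
alpha, C = -slope, np.exp(intercept)
print(f"empirical exponent alpha ~ {alpha:.2f}, prefactor C ~ {C:.2e}")
```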
