A class of neural networks that has gained particular interest in recent years is neural ordinary differential equations (neural ODEs). We study input-output relations of neural ODEs using dynamical systems theory and prove several results about the exact embedding of maps in different neural ODE architectures in low and high dimensions. The embedding capability of a neural ODE architecture can be increased by adding, for example, a linear layer, or by augmenting the phase space. Yet, there is currently no systematic theory available, and our work contributes towards such a theory by developing various embedding results as well as identifying situations where no embedding is possible. The main mathematical techniques we use are iterative functional equations, Morse functions and suspension flows, together with several further ideas from analysis. Although, in practice, mainly universal approximation theorems are used, our geometric dynamical systems viewpoint on universal embedding provides a fundamental understanding of why certain neural ODE architectures perform better than others.
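As a minimal, hypothetical sketch (not from the paper) of the input-output map under study: a neural ODE sends an input to the time-T flow of a parameterized vector field, here a one-hidden-layer network with assumed dimensions and random weights.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

# Assumed toy sizes: phase-space dimension d and hidden width h.
d, h = 2, 16
W1 = rng.normal(scale=0.5, size=(h, d))
b1 = rng.normal(scale=0.5, size=h)
W2 = rng.normal(scale=0.5, size=(d, h))

def f_theta(t, x):
    """One-hidden-layer vector field: x' = W2 @ tanh(W1 @ x + b1)."""
    return W2 @ np.tanh(W1 @ x + b1)

def neural_ode_map(x0, T=1.0):
    """Input-output map of the neural ODE: flow x0 forward over [0, T]."""
    sol = solve_ivp(f_theta, (0.0, T), x0, rtol=1e-8, atol=1e-8)
    return sol.y[:, -1]

x = np.array([1.0, -0.5])
y = neural_ode_map(x)
```

Because the time-T flow of an ODE is a homeomorphism, such a map cannot embed, for instance, the orientation-reversing map x ↦ -x in one dimension; this is the kind of obstruction that motivates adding a linear layer or augmenting the phase space.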
We study the maximum mean discrepancy (MMD) in the context of critical transitions modelled by fast-slow stochastic dynamical systems. We establish a new link between the dynamical theory of critical transitions and the statistical aspects of the MMD. In particular, we show that a formal approximation of the MMD near fast subsystem bifurcation points can be computed to leading order. This leading-order approximation shows that the MMD depends intricately on the parameters of the fast-slow system, so that one can only expect to extract warning signs under rather stringent conditions. However, the MMD turns out to be an excellent binary classifier for detecting the change point induced by the critical transition. We cross-validate our results by numerical simulations for a van der Pol-type model.
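The statistic itself can be sketched as follows; this is a hypothetical illustration (not the paper's code) of the standard biased estimator of the squared MMD with a Gaussian kernel, which compares two samples via their kernel mean embeddings.

```python
import numpy as np

def mmd2_biased(X, Y, sigma=1.0):
    """Biased estimator of MMD^2 between samples X, Y (rows = points),
    using a Gaussian kernel with assumed bandwidth sigma."""
    def k(A, B):
        # Pairwise squared distances, then the Gaussian kernel matrix.
        d2 = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
              - 2.0 * A @ B.T)
        return np.exp(-d2 / (2.0 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(1)
# Two samples from the same distribution vs. a shifted one.
same = mmd2_biased(rng.normal(size=(500, 1)), rng.normal(size=(500, 1)))
shifted = mmd2_biased(rng.normal(size=(500, 1)),
                      rng.normal(loc=2.0, size=(500, 1)))
```

A distribution shift, of the kind induced by a critical transition, produces a markedly larger MMD value than sampling noise alone, which is why the MMD can serve as a binary classifier for the change point even when quantitative warning signs are hard to extract.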