
Research Group Christian Kühn



Christian Kühn

Prof. Dr.

Associate

Multiscale and Stochastic Dynamics

Christian Kühn leads the Multiscale and Stochastic Dynamics group at TU Munich.

The research interests of his group are broad and lie at the interface of differential equations, dynamical systems, and mathematical modelling. In terms of applications, the group works on a wide range of problems in areas such as biophysics, climate science, ecology, epidemiology, fluid dynamics, and neuroscience. In the context of machine learning, the group is particularly interested in ‘mathematics for ML’, i.e., in understanding when AI is efficient and robust, and when it is prone to adversarial attacks. In fact, all machine learning algorithms can be viewed as dynamical systems: architectures such as DNNs and transformers are essentially particle systems on networks, and training algorithms are iterative mappings that again lead to dynamics, e.g., SGD is a stochastic/random dynamical system.
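As a minimal illustration of this viewpoint (a hedged sketch on made-up toy data, not code used by the group), mini-batch SGD on a least-squares loss can be written as an iterated random map on parameter space, i.e., a stochastic/random dynamical system:

import numpy as np

# Sketch: SGD viewed as an iterated random map theta_{k+1} = theta_k - eta * grad f_{i_k}(theta_k),
# where the randomly drawn data index i_k plays the role of the noise driving the system.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)   # toy regression data (assumption)
theta, eta = np.zeros(5), 0.01                            # initial state and step size (assumption)

for k in range(1000):
    i = rng.integers(len(y))                              # random index i_k
    grad = (X[i] @ theta - y[i]) * X[i]                   # gradient of the single-sample squared loss
    theta = theta - eta * grad                            # one iteration of the random map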

Team members @MCML

PhD Students


Sara-Viola Kuntz

Multiscale and Stochastic Dynamics

Publications @MCML

2025


[2]
C. Kühn and S.-V. Kuntz.
Analysis of the Geometric Structure of Neural Networks and Neural ODEs via Morse Functions.
DS 2025 - SIAM Conference on Applications of Dynamical Systems. Denver, CO, USA, May 11-15, 2025. To be published. Preprint available. arXiv
Abstract

Besides classical feed-forward neural networks, neural ordinary differential equations (neural ODEs) have also gained particular interest in recent years. Neural ODEs can be interpreted as an infinite depth limit of feed-forward or residual neural networks. We study the input-output dynamics of finite and infinite depth neural networks with scalar output. In the finite depth case, the input is a state associated with a finite number of nodes, which maps under multiple non-linear transformations to the state of one output node. In analogy, a neural ODE maps an affine linear transformation of the input to an affine linear transformation of its time-T map. We show that, depending on the specific structure of the network, the input-output map has different properties regarding the existence and regularity of critical points, which can be characterized via Morse functions. We prove that critical points cannot exist if the dimension of the hidden layer is monotonically decreasing or the dimension of the phase space is smaller than or equal to the input dimension. In the case that critical points exist, we classify their regularity depending on the specific architecture of the network. We show that, except for a Lebesgue measure zero set in the weight space, each critical point is non-degenerate if, for finite depth neural networks, the underlying graph has no bottleneck, and if, for neural ODEs, the affine linear transformations used have full rank. For each type of architecture, the proven properties are comparable in the finite and the infinite depth case. The established theorems allow us to formulate results on universal embedding, i.e., on the exact representation of maps by neural networks and neural ODEs. Our dynamical systems viewpoint on the geometric structure of the input-output map provides a fundamental understanding of why certain architectures perform better than others.
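For orientation, the following is a hedged sketch (with assumed dimensions, random weights, and a tanh vector field; not the paper's code) of the scalar-output input-output map described in the abstract: an affine linear map into the phase space, the time-T flow of the ODE, and an affine linear map to the scalar output.

import numpy as np

# Sketch of Phi(x) = w_out . phi_T(W_in x + b_in) + b_out, where phi_T is the time-T map
# of dh/dt = tanh(A h + b). Critical points of Phi are the points where its gradient vanishes.
rng = np.random.default_rng(1)
d_in, d_hid = 3, 5                                        # input and phase-space dimensions (assumption)
W_in, b_in = rng.normal(size=(d_hid, d_in)), rng.normal(size=d_hid)
w_out, b_out = rng.normal(size=d_hid), 0.0
A, b = rng.normal(size=(d_hid, d_hid)), rng.normal(size=d_hid)

def time_T_map(h0, T=1.0, steps=100):
    # explicit Euler discretisation of the ODE dh/dt = tanh(A h + b)
    h, dt = h0, T / steps
    for _ in range(steps):
        h = h + dt * np.tanh(A @ h + b)
    return h

def input_output_map(x):
    return w_out @ time_T_map(W_in @ x + b_in) + b_out    # scalar output

print(input_output_map(np.ones(d_in)))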

MCML Authors
Christian Kühn (Prof. Dr., Multiscale and Stochastic Dynamics)
Sara-Viola Kuntz (Multiscale and Stochastic Dynamics)


[1]
C. Kühn and S.-V. Kuntz.
The Influence of the Memory Capacity of Neural DDEs on the Universal Approximation Property.
Preprint (May 2025). arXiv
Abstract

Neural Ordinary Differential Equations (Neural ODEs), which are the continuous-time analog of Residual Neural Networks (ResNets), have gained significant attention in recent years. Similarly, Neural Delay Differential Equations (Neural DDEs) can be interpreted as an infinite depth limit of Densely Connected Residual Neural Networks (DenseResNets). In contrast to traditional ResNet architectures, DenseResNets are feed-forward networks that allow for shortcut connections across all layers. These additional connections introduce memory in the network architecture, as is typical in many modern architectures. In this work, we explore how the memory capacity in neural DDEs influences the universal approximation property. The key parameter for studying the memory capacity is the product Kτ of the Lipschitz constant and the delay of the DDE. In the case of non-augmented architectures, where the network width is not larger than the input and output dimensions, neural ODEs and classical feed-forward neural networks cannot have the universal approximation property. We show that if the memory capacity Kτ is sufficiently small, the dynamics of the neural DDE can be approximated by a neural ODE. Consequently, non-augmented neural DDEs with a small memory capacity also lack the universal approximation property. In contrast, if the memory capacity Kτ is sufficiently large, we can establish the universal approximation property of neural DDEs for continuous functions. If the neural DDE architecture is augmented, we can expand the parameter regions in which universal approximation is possible. Overall, our results show that the infinite-dimensional phase space of DDEs with positive delay τ>0 is not by itself sufficient to guarantee universal approximation; rather, universal approximation only holds once the memory capacity Kτ exceeds a certain threshold.
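As a hedged sketch of the setting (the vector field, delay, and step size below are assumptions for illustration, not taken from the paper), a neural DDE evolves a state depending on both its current value x(t) and its delayed value x(t−τ); the memory capacity Kτ discussed above is the Lipschitz constant K of this vector field multiplied by the delay τ.

import numpy as np

# Sketch: explicit Euler integration of a delay equation x'(t) = f(x(t), x(t - tau))
# with a history buffer storing the past states on an interval of length tau.
rng = np.random.default_rng(2)
d, tau, dt, T = 2, 0.5, 0.01, 5.0                         # dimension, delay, step size, horizon (assumptions)
A, B, b = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)

def f(x_now, x_delayed):
    # assumed tanh vector field depending on current and delayed state
    return np.tanh(A @ x_now + B @ x_delayed + b)

lag = int(round(tau / dt))                                # number of stored past states covering the delay
history = [np.ones(d)] * (lag + 1)                        # constant initial history on [-tau, 0]
for _ in range(int(T / dt)):
    x_now, x_delayed = history[-1], history[-1 - lag]
    history.append(x_now + dt * f(x_now, x_delayed))      # one Euler step of the delay equation
print(history[-1])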

MCML Authors
Christian Kühn (Prof. Dr., Multiscale and Stochastic Dynamics)
Sara-Viola Kuntz (Multiscale and Stochastic Dynamics)