holds a professorship for Physics-Enhanced Machine Learning at TU Munich.
His research focuses on the analysis and development of numerical algorithms for machine learning. This covers algorithms that enable, accelerate, and optimize the simulation and analysis of complex dynamical systems, as well as nonlinear manifold learning techniques, including data-driven approximations of Koopman and Laplace operators. Recently, his group has also worked on energy-efficient training of neural networks inspired by random feature modeling.
Learning dynamical systems that respect physical symmetries and constraints remains a fundamental challenge in data-driven modeling. Integrating physical laws with graph neural networks facilitates principled modeling of complex N-body dynamics and yields accurate, permutation-invariant models. However, training graph neural networks with iterative, gradient-based optimization algorithms (e.g., Adam, RMSProp, LBFGS) is often slow, especially for large, complex systems. In a comparison against 15 different optimizers, we demonstrate that Hamiltonian Graph Networks (HGN) can be trained up to 600x faster, with comparable accuracy, by replacing iterative optimization with random feature-based parameter construction. We show robust performance across diverse simulations, including N-body mass-spring systems in up to 3 dimensions with different geometries, while retaining the essential physical invariances with respect to permutation, rotation, and translation. We reveal that even when trained on minimal 8-node systems, the model generalizes in a zero-shot manner to systems as large as 4096 nodes without retraining. Our work challenges the dominance of iterative, gradient-descent-based optimization algorithms for training neural network models of physical systems.
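The core idea can be illustrated in isolation. The following is a minimal sketch of random feature-based parameter construction on a toy regression problem; it is not the HGN implementation described in the abstract, and the feature map, scales, and regularization below are illustrative assumptions. Hidden-layer parameters are sampled once at random and frozen, and only the linear readout is determined, via a single least-squares solve instead of iterative gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression target standing in for a learned observable: y = sin(3x).
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0])

# Random features: phi(x) = tanh(W x + b), with W and b sampled once
# and never updated (illustrative scales, not the paper's choices).
n_features = 300
W = rng.normal(scale=2.0, size=(1, n_features))
b = rng.uniform(-np.pi, np.pi, size=n_features)
Phi = np.tanh(X @ W + b)

# Readout weights from one regularized least-squares solve --
# this replaces the entire iterative training loop.
lam = 1e-6
beta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_features), Phi.T @ y)

# Fit quality on the training data.
mse = np.mean((Phi @ beta - y) ** 2)
print(f"training MSE: {mse:.2e}")
```

Because the only trained parameters come from a linear solve, the cost is one factorization rather than many gradient steps, which is the source of the speedups the abstract reports for the graph-network setting.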
Physics-enhanced Machine Learning
2024-12-27 - Last modified: 2024-12-27