
A Canonicalization Perspective on Invariant and Equivariant Learning

MCML Authors


Stefanie Jegelka
Prof. Dr. • Principal Investigator

Abstract

In many applications, we desire neural networks to exhibit invariance or equivariance to certain groups due to symmetries inherent in the data. Recently, frame-averaging methods have emerged as a unified framework for attaining symmetries efficiently by averaging over input-dependent subsets of the group, i.e., frames. What we currently lack is a principled understanding of the design of frames. In this work, we introduce a canonicalization perspective that provides an essential and complete view of the design of frames. Canonicalization is a classic approach for attaining invariance by mapping inputs to their canonical forms. We show that there exists an inherent connection between frames and canonical forms. Leveraging this connection, we can efficiently compare the complexity of frames as well as determine the optimality of certain frames. Guided by this principle, we design novel frames for eigenvectors that are strictly superior to existing methods -- some are even optimal -- both theoretically and empirically. The reduction to the canonicalization perspective further uncovers equivalences between previous methods. These observations suggest that canonicalization provides a fundamental understanding of existing frame-averaging methods and unifies existing equivariant and invariant learning methods.
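To make the two constructions the abstract compares concrete, the following is a minimal Python sketch (not the authors' code) for the simplest eigenvector symmetry, the sign ambiguity v -> -v. It assumes only numpy, and all function names (f, canonicalize_sign, frame, f_frame_averaged, f_canonicalized) are hypothetical.

import numpy as np

def f(v):
    # An arbitrary, non-invariant backbone; stands in for a neural network.
    return float(np.sum(v ** 3) + v[0])

def canonicalize_sign(v):
    # Canonical form: flip the sign so that the entry of largest absolute
    # value is non-negative, picking one representative of the orbit {v, -v}.
    i = int(np.argmax(np.abs(v)))
    return v if v[i] >= 0 else -v

def frame(v):
    # Input-dependent frame: a subset of the group {+1, -1}. Where the
    # canonical sign is well defined the frame is a singleton; at the
    # symmetric point v = 0 it falls back to the whole group, so the
    # averaged function below stays exactly invariant everywhere.
    i = int(np.argmax(np.abs(v)))
    if v[i] > 0:
        return [1.0]
    if v[i] < 0:
        return [-1.0]
    return [1.0, -1.0]

def f_frame_averaged(v):
    # Invariance via frame averaging: average f over the frame.
    return float(np.mean([f(g * v) for g in frame(v)]))

def f_canonicalized(v):
    # Invariance via canonicalization: evaluate f on the canonical form.
    return f(canonicalize_sign(v))

v = np.array([0.3, -1.2, 0.5])
assert np.isclose(f_frame_averaged(v), f_frame_averaged(-v))   # invariant
assert np.isclose(f_canonicalized(v), f_canonicalized(-v))     # invariant
# On this input the frame is a singleton, so the two constructions coincide.
assert np.isclose(f_frame_averaged(v), f_canonicalized(v))

Whenever the frame is a singleton, frame averaging evaluates the backbone on exactly one group transform of the input, which is precisely a canonical form; this is the connection between frames and canonical forms that the abstract leverages.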



NeurIPS 2024

38th Conference on Neural Information Processing Systems. Vancouver, Canada, Dec 10-15, 2024.
A* Conference

Authors

G. Ma • Y. Wang • D. Lim • S. Jegelka • Y. Wang

Links

URL • GitHub

Research Area

A3 | Computational Models

BibTeXKey: MWL+24
