Title: Considerations of Structure in Machine Learning
Abstract: Representation learning is a fundamental challenge in machine learning, particularly when working with high-dimensional data without labels. Traditional approaches, such as Variational Autoencoders or Independent Component Analysis, primarily exploit statistical structure in the latent space and often assume i.i.d. observations, treating each observation as an isolated data point. However, real-world data is often interrelated by underlying algebraic structures that shape its variability and composition (e.g., positions in 3D space are structured by the action of 3D translations). In this talk, I will discuss how interaction can be leveraged to both discover and enforce such structures in representation learning. My work shifts the focus from statistical assumptions to structural priors, leading to more robust and data-efficient learning. First, I introduce the Homomorphism AutoEncoder, which discovers the group acting on the latent space and learns the corresponding manifold through interaction. Building on this, I explore how representations learned in simple settings can be composed into a modular understanding of more complex settings from limited observations. By explicitly accounting for structure, we achieve extreme data efficiency, improved generalization, and enhanced robustness in downstream tasks.
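To make the notion of algebraic structure concrete, here is a minimal sketch of the equivariance condition behind a homomorphism-based autoencoder (the symbols phi, rho, G, and o are my notation, not taken from the talk): an encoder and a group representation are learned jointly so that acting in the world corresponds to acting linearly in the latent space.

\[
\phi : \mathcal{O} \to \mathbb{R}^d, \qquad \rho : G \to \mathrm{GL}(d, \mathbb{R}),
\]
\[
\phi(g \cdot o) \approx \rho(g)\,\phi(o) \quad \text{for each observed transition } (o,\, g,\, g \cdot o),
\]
\[
\text{subject to } \rho(g_1 g_2) = \rho(g_1)\,\rho(g_2) \quad \text{(the homomorphism constraint).}
\]

Under this reading, it is the observed transitions, i.e., how inputs change under interaction rather than the inputs alone, that identify the group and the latent manifold.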
Dates
March 1st, 2025 → March 15th, 2025
Abstract submission deadline
March 8th, 2025 → March 15th, 2025
Paper submission deadline
April 14th, 2025
Accept/Reject notification
May 21-23, 2025
Netys Conference