FoG | Foundations of Generalization

Summary
Arguably, the most crucial objective of Learning Theory is to understand the basic notion of generalization: how can a learning agent infer from a finite amount of data to the whole population? Today's learning algorithms are poorly understood from that perspective. In particular, best practices, such as using highly overparameterized models to fit relatively little data, seem almost to contradict common wisdom, and classical models of learning seem incapable of explaining the impressive success of such algorithms. The objective of this proposal is to understand generalization in overparameterized models and the role of algorithms in learning. Toward this task, I will consider two mathematical models of learning that shed light on this fundamental problem.
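
To make the question concrete, the following is a minimal formalization of the generalization problem in the standard statistical learning setup (textbook notation, not taken from the proposal itself). A learner observes a finite sample and must relate its empirical performance to its performance on the whole population:

\[
F(h) = \mathbb{E}_{z \sim \mathcal{D}}\,[\ell(h, z)], \qquad
\hat{F}_S(h) = \frac{1}{n} \sum_{i=1}^{n} \ell(h, z_i), \qquad
S = (z_1, \dots, z_n) \sim \mathcal{D}^n,
\]

where \(\mathcal{D}\) is the unknown population distribution and \(\ell\) a loss function. Generalization asks when the gap \(F(\hat{h}_S) - \hat{F}_S(\hat{h}_S)\) is small for the hypothesis \(\hat{h}_S\) returned by the algorithm. Overparameterized models can drive the empirical risk to zero (interpolation) while classical capacity-based bounds become vacuous, which is exactly the tension described above.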

The first model is the well-studied, yet only seemingly well-understood, model of Stochastic Convex Optimization. My investigations so far have revealed a picture that is far more complex than previously known or assumed, regarding fundamental notions such as regularization, inductive bias, and stability. These works show that even in this simplistic learning setup, understanding such fundamental principles may be a highly ambitious task. On the other hand, given the simplicity of the model, such an understanding appears to be a prerequisite for any future model that aims to explain modern Machine Learning algorithms.
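
For reference, a minimal sketch of the Stochastic Convex Optimization model in its standard form (convex, Lipschitz losses over a bounded convex domain; the precise assumptions studied in the project may differ):

\[
\min_{w \in \mathcal{W}} \; F(w) = \mathbb{E}_{z \sim \mathcal{D}}\big[f(w, z)\big],
\]

where \(\mathcal{W} \subseteq \mathbb{R}^d\) is convex and bounded and \(f(\cdot, z)\) is convex and Lipschitz for every \(z\). The learner sees \(n\) i.i.d. samples from the unknown \(\mathcal{D}\) and must output \(\hat{w}\) with small excess risk \(F(\hat{w}) - \min_{w \in \mathcal{W}} F(w)\). A known subtlety of this model is that empirical risk minimization alone can fail to generalize in high dimensions, while specific algorithms (e.g., stochastic gradient descent or regularized ERM) achieve dimension-independent sample complexity; this is what makes SCO a natural testbed for the role of the algorithm in learning.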

The second model considers the modern task of synthetic data generation. Synthetic data generation serves as an ideal model for further studying the tension between concepts such as generalization and memorization. Here we face the challenge of modeling the question of generalization and of answering fundamental questions such as: when is synthetic data original, and when is it merely a copy of the empirical data?
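
One illustrative way to formalize the originality question (a hedged definition given here for concreteness, not one stated in the source): declare a generated sample a copy if it lies within a small distance of some training example,

\[
\mathrm{copy}_{\rho}(x) \;=\; \mathbb{1}\!\left[\min_{1 \le i \le n} d(x, z_i) \le \rho\right],
\]

for a domain-appropriate metric \(d\) and threshold \(\rho\). Under such a definition, a generator generalizes rather than memorizes if it produces samples close to the true distribution while keeping the copy rate low; making this kind of criterion robust and meaningful is itself part of the modeling challenge.
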
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101116258
Start date: 01-01-2024
End date: 31-12-2028
Total budget: EUR 1 419 375,00
Public funding: EUR 1 419 375,00
Cordis data

Status

SIGNED

Call topic

ERC-2023-STG

Update Date

12-03-2024