Understanding DL: Understanding Deep Learning

Summary
While extremely successful, deep learning (DL) still lacks a solid theoretical foundation.

In the last five years the PI has focused almost entirely on DL theory, yielding a strong publication record with 7 papers at NeurIPS (the leading ML conference), including 2 spotlights (top 3% of submitted papers) and 1 oral (top 1%), 2 papers at ICLR (the leading DL conference), and 1 paper at COLT (the leading ML theory conference). These results are among the first to break a 20-year hiatus in neural network (NN) theory, giving hope for a solid deep learning theory. They include 1) the first polynomial-time learnability result for a non-trivial function class trained by SGD on NNs, 2) the first such result with a near-optimal rate, 3) a new methodology for bounding the sample complexity of NNs, which established the first sample complexity bound that is sublinear in the number of parameters under norm constraints that hold in practice, and 4) an explanation of the phenomenon of adversarial examples.

We plan to go far beyond these and other results and to build a coherent theory of DL, addressing all three pillars of learning theory:
Optimization: We plan to investigate the success of SGD in finding a good model, arguably the greatest mystery of modern deep learning. Specifically, our goal is to understand which models are learnable by SGD on neural networks. To this end, we plan to develop a new class of models that can potentially lead to new deep learning algorithms with a solid theory behind them (a minimal sketch of the object under study, SGD on a neural network, appears after this list).
Statistical Complexity: We plan to crack the second great mystery of modern deep learning: the ability of neural networks to generalize with fewer examples than parameters. Our plan is to investigate the sample complexity of classes of neural networks that are defined by bounds on the magnitudes of the weights (a representative norm-based bound is sketched after this list).
Representation: We plan to investigate which functions can be realized by NNs. This includes classical questions, such as the benefits of depth, as well as more modern aspects, such as adversarial examples (see the linear-model sketch below).
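
To fix ideas, here is a minimal sketch of the object studied in the optimization pillar: plain SGD training a two-layer ReLU network on a toy regression task. The architecture, step size, and target function are illustrative assumptions, not the project's actual setup.

```python
import numpy as np

# Minimal sketch: vanilla SGD on a two-layer ReLU network for a toy
# regression task (illustrative choices throughout).
rng = np.random.default_rng(0)
d, k, m = 10, 64, 1024          # input dim, hidden width, sample size
X = rng.standard_normal((m, d))
y = np.sin(X[:, 0])             # a simple non-trivial target function

W = rng.standard_normal((k, d)) / np.sqrt(d)   # hidden-layer weights
v = rng.standard_normal(k) / np.sqrt(k)        # output-layer weights

lr = 0.05
for step in range(2000):
    i = rng.integers(m)                        # one example per step (SGD)
    h = np.maximum(W @ X[i], 0.0)              # ReLU features
    err = v @ h - y[i]                         # squared-loss residual
    # Gradients of 0.5 * err**2 with respect to v and W
    grad_v = err * h
    grad_W = err * np.outer(v * (h > 0), X[i])
    v -= lr * grad_v
    W -= lr * grad_W

mse = np.mean((np.maximum(X @ W.T, 0.0) @ v - y) ** 2)
print(f"final training MSE: {mse:.4f}")
```

The one-example-per-step update is the regime the optimization question is usually posed in: which target functions can such a procedure provably fit in polynomial time?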
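To make "sample complexity under norm bounds" concrete, the following is a representative bound of this flavor from the literature (in the spirit of, e.g., the size-independent bounds of Golowich, Rakhlin and Shamir, COLT 2018); constants and logarithmic factors are omitted, and it should not be read as the project's specific result.

```latex
% Two-layer networks with norm-bounded weights:
%   F = { x -> v' sigma(W x) : ||v||_2 <= B_v, ||W||_F <= B_W },
% with sigma 1-Lipschitz, sigma(0) = 0, and inputs ||x||_2 <= R.
% The Rademacher complexity over m samples satisfies (up to
% constants and logarithmic factors)
\[
  \mathfrak{R}_m(\mathcal{F}) \;\lesssim\; \frac{B_v \, B_W \, R}{\sqrt{m}},
\]
% independently of the number of hidden units, so the number of
% samples needed for excess error \varepsilon scales as
\[
  m(\varepsilon) \;=\; O\!\left(\frac{(B_v B_W R)^2}{\varepsilon^2}\right).
\]
```

The point is that the width never appears: the class is controlled by the weight magnitudes alone, which is exactly the kind of parameter-count-free control the statistical pillar aims to extend.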
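Finally, the representation pillar's mention of adversarial examples can be made concrete with the standard linear-model illustration (a textbook argument in the spirit of Goodfellow et al., not the PI's analysis): an l-infinity perturbation of size eps shifts a linear score w.x by eps*||w||_1, which grows with the dimension, so a tiny per-coordinate change can flip a prediction.

```python
import numpy as np

# Standard illustration of adversarial examples for a linear score
# f(x) = w.x (textbook argument; not the project's specific analysis).
# An L-infinity perturbation of size eps in the direction sign(w)
# shifts the score by eps * ||w||_1, which grows with the dimension d.
rng = np.random.default_rng(1)
d, eps = 10_000, 0.02
w = rng.standard_normal(d) / np.sqrt(d)   # ||w||_1 grows like sqrt(d)
x = rng.standard_normal(d)

# Push the score toward zero and past it, eps per coordinate.
x_adv = x - eps * np.sign(w) * np.sign(w @ x)
print("clean score:            ", w @ x)
print("perturbed score:        ", w @ x_adv)
print("score shift eps*||w||_1:", eps * np.abs(w).sum())
```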
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101041711
Start date: 01-09-2022
End date: 31-08-2027
Total budget: EUR 1,499,750.00; Public funding: EUR 1,499,750.00
Cordis data

Status: SIGNED
Call topic: ERC-2021-STG
Update date: 09-02-2023
Structured mapping
Horizon Europe
  HORIZON.1 Excellent Science
    HORIZON.1.1 European Research Council (ERC)
      HORIZON.1.1.0 Cross-cutting call topics
        ERC-2021-STG ERC STARTING GRANTS
      HORIZON.1.1.1 Frontier science
        ERC-2021-STG ERC STARTING GRANTS