CHORAL | Computational Hardness Of RepresentAtion Learning

Summary
Rich internal representations of complex data are crucial to the predictive power of neural networks. Unfortunately, current statistical analyses are restricted to over-simplified networks whose representations (i.e., weight matrices) are either random or project the data into comparatively very high- or very low-dimensional spaces; in many applications the situation is very different. The modelling of realistic data is another open issue. There is an urgent need to reconcile theory and practice.

Based on a synergy of the mathematical physics of spin glasses, matrix models from physics, and information and random matrix theory, CHORAL’s statistical framework will delimit computational gaps in learning much more realistic models of neural networks from structured data. These gaps will quantify the discrepancy between:

(i) the statistical cost of learning good representations, i.e., the minimal amount of training data required to reach satisfactory predictive performance;
(ii) the cost of efficiency, i.e., the amount of data needed when learning with tractable algorithms, such as approximate message-passing and noisy gradient descent (a toy sketch of the latter follows below).
Comparing these two costs will quantify when learning is computationally hard and when it is not.
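
To make cost (ii) concrete, the snippet below runs noisy gradient descent (Langevin dynamics), one of the tractable algorithms named above, on a toy linear estimation problem and sweeps the sample budget n. This is a minimal sketch under illustrative assumptions (Gaussian data, squared loss, hand-picked step size and temperature chosen by us), not CHORAL’s actual setting.

```python
# Toy illustration of "noisy gradient descent" (Langevin dynamics): estimate a
# planted vector x* from noisy linear observations y = A x* + noise, and track
# how recovery quality depends on the number of samples n.
import numpy as np

rng = np.random.default_rng(0)
d = 50                                   # signal dimension
x_star = rng.standard_normal(d)          # planted signal

def run(n, steps=2000, lr=0.01, temp=0.01):
    A = rng.standard_normal((n, d)) / np.sqrt(d)
    y = A @ x_star + 0.1 * rng.standard_normal(n)
    x = rng.standard_normal(d)           # random initialization
    for _ in range(steps):
        grad = A.T @ (A @ x - y) / n     # gradient of the squared loss
        noise = np.sqrt(2 * temp * lr) * rng.standard_normal(d)
        x = x - lr * grad + noise        # Langevin (noisy) gradient step
    return np.linalg.norm(x - x_star) / np.linalg.norm(x_star)

for n in (10, 50, 200, 1000):            # sweep the sample budget
    print(f"n = {n:4d}   relative error = {run(n):.3f}")
```

With too few samples the estimate stays far from the planted signal; as n grows, the algorithm recovers it, which is exactly the kind of sample-budget threshold the statistical-versus-algorithmic comparison is after.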

To achieve this, CHORAL will first focus on dictionary learning, itself an essential representation-learning task, and then move on to multi-layer neural networks, which can be viewed as concatenated dictionary-learning problems; both generative models are sketched below.
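
The generative models in question can be made explicit. Below is a minimal sketch, with dimensions, sparsity levels, and the ReLU nonlinearity chosen by us for illustration, of a single dictionary-learning layer Y = D S + noise, and of a two-layer "concatenated" version in which the codes S are themselves produced by a second dictionary.

```python
# Dictionary learning: factor data Y into a dictionary D and sparse codes S,
# Y ~ D S. Stacking such factorizations gives a multi-layer network, read here
# as "concatenated" dictionary learning.
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 200, 30, 15                    # samples, data dim, hidden dim

# One layer: data generated as Y = D S + noise, with sparse codes S.
D = rng.standard_normal((d, k)) / np.sqrt(k)
S = rng.standard_normal((k, n)) * (rng.random((k, n)) < 0.2)   # ~20% nonzeros
Y = D @ S + 0.05 * rng.standard_normal((d, n))

# Two layers: the codes S are themselves a dictionary model, S = relu(D2 S2),
# i.e., a multi-layer network viewed as concatenated dictionary learning.
k2 = 10
D2 = rng.standard_normal((k, k2)) / np.sqrt(k2)
S2 = rng.standard_normal((k2, n)) * (rng.random((k2, n)) < 0.2)
S_deep = np.maximum(D2 @ S2, 0.0)        # ReLU nonlinearity between layers
Y_deep = D @ S_deep + 0.05 * rng.standard_normal((d, n))
print(Y.shape, Y_deep.shape)             # (30, 200) (30, 200)
```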

CHORAL’s ambitious program, by defining benchmarks for algorithms used in virtually all fields of science and technology, will have a direct practical impact. Equally important will be its conceptual impact: the study of information-processing systems has become a major source of inspiration for mathematics.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101039794
Start date: 01-10-2022
End date: 30-09-2027
Total budget: EUR 1 280 750,00 (public funding: EUR 1 280 750,00)
Cordis data

Status: SIGNED
Call topic: ERC-2021-STG
Update date: 09-02-2023