Summary
Despite the undeniable success of machine learning in addressing a wide variety of technological and scientific challenges, the current trend of training predictive models with an ever-growing number of parameters from an ever-growing amount of data is not sustainable. These huge models, often engineered by large corporations with vast computational resources, typically require learning a billion or more parameters. They have proven to be very effective at solving prediction tasks in computer vision, natural language processing, and computational biology, for example, but they mostly remain black boxes that are hard to interpret, computationally demanding, and not robust to small data perturbations.
With a strong emphasis on visual modeling, the grand challenge of APHELEIA is to develop a new generation of machine learning models that are more robust, interpretable, and efficient, and that do not require massive amounts of data to produce accurate predictions. To achieve this objective, we will foster new interactions between classical signal processing, statistics, optimization, and modern deep learning. Our goal is to reduce the need for massive data by enabling scientists and engineers to design trainable machine learning models that directly encode a priori knowledge of the task semantics and the data formation process, while automatically preferring simple and stable solutions over complex ones. These models will be built on solid theoretical foundations with convergence and robustness guarantees, which are important for making trustworthy real-life predictions in the wild. We will implement these ideas in an open-source software toolbox readily applicable to visual recognition and inverse imaging problems, which will also handle other modalities. This will stimulate interdisciplinary collaborations, with the potential to be a game changer in the way scientists and engineers design machine learning problems.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101087696
Start date: 01-09-2023
End date: 31-08-2028
Total budget - Public funding: 1 999 375,00 Euro - 1 999 375,00 Euro
Cordis data
Status: SIGNED
Call topic: ERC-2022-COG
Update Date: 31-07-2023
Geographical location(s)