Summary
Neural networks have firmly established themselves as powerful tools in many scientific domains, e.g. for protein folding, recovering images of black holes, or solving Schrödinger equations. Although empirically highly successful, neural-network-based methods very often lack the mathematical foundations needed to guarantee the accuracy of the solutions they produce. While this lack of reliability constitutes a major issue for many applications, I believe that, for scientific machine learning in particular, there is a promising path towards overcoming it: there usually exists knowledge about the ground truth one would like to learn, e.g. that it must satisfy some partial differential equation. The goal of this project is to understand, from an approximation-theory perspective, when and why such knowledge can be exploited.
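One common way such knowledge is exploited in practice is to penalize violations of the known equation at sample points, in the spirit of physics-informed learning. The sketch below illustrates only this general idea, not this project's method; the names residual_loss, model, and rhs_f are hypothetical, and the PDE u''(x) = f(x) is an assumed example.

# Minimal sketch (assumption: the ground truth u satisfies the PDE
# u''(x) = f(x) on [0, 1]): quantify how strongly a candidate model
# violates the equation at collocation points xs.
import numpy as np

def residual_loss(model, rhs_f, xs, h=1e-4):
    # Estimate u''(x) with a central finite difference and return the
    # mean squared PDE residual |u''(x) - f(x)|^2 over xs.
    second_deriv = (model(xs + h) - 2.0 * model(xs) + model(xs - h)) / h**2
    return np.mean((second_deriv - rhs_f(xs)) ** 2)

# u(x) = sin(pi x) solves u'' = -pi^2 sin(pi x) exactly, so its
# residual is numerically close to zero; u(x) = x^2 violates the PDE.
xs = np.linspace(0.1, 0.9, 50)
exact = lambda x: np.sin(np.pi * x)
f = lambda x: -np.pi**2 * np.sin(np.pi * x)
print(residual_loss(exact, f, xs))             # ~0
print(residual_loss(lambda x: x**2, f, xs))    # large

A residual of this kind can be evaluated, and minimized, without any labelled data; this is precisely the kind of ground-truth knowledge referred to above.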
A defining property of neural networks is that they consist of compositions of simple building blocks. From the viewpoint of approximation theory this is a major paradigm shift, since the field classically focuses on superpositional approximation, i.e. approximation by linear combinations of simple building blocks. This project aims to understand, on a fundamental structural level, how compositional approximation differs from classical superpositional approximation. Specifically, it will first prove the existence of cases in which compositional approximation provides a fundamental advantage over superpositional approximation, and subsequently develop ways to characterize these cases.
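To make the contrast concrete (the notation below is illustrative and not taken from the project description): superpositional approximation takes linear combinations of n fixed building blocks \varphi_i,

\[
  f(x) \;\approx\; \sum_{i=1}^{n} c_i \, \varphi_i(x),
\]

whereas compositional approximation, as realized by a depth-L neural network with layer maps g_j, nests simple maps inside one another,

\[
  f(x) \;\approx\; \bigl(g_L \circ g_{L-1} \circ \cdots \circ g_1\bigr)(x).
\]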
On the one hand, this will significantly deepen our understanding of this paradigm shift in approximation theory; on the other hand, it will lay a foundation for the development of provably accurate neural-network-based machine learning algorithms.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101152591
Start date: 01-04-2025
End date: 31-03-2027
Total budget - Public funding: - 183 600,00 Euro
Cordis data
Status: SIGNED
Call topic: HORIZON-MSCA-2023-PF-01-01
Update Date: 22-11-2024