Summary
Artificial intelligence is changing our lives. Artificial neural networks are entering many fields of modern technology, such as medicine, engineering, and education. Even a small-scale theoretical understanding of why and how neural networks succeed in practice can have a considerable impact on the future development of such technologies.
In contrast, combinatorial optimization is a well-established discipline at the intersection of mathematics and computer science, dealing with classical algorithmic questions like the Shortest Path or Traveling Salesperson Problems. A powerful tool to study structural and algorithmic properties of combinatorial optimization problems is polyhedral geometry. For example, the geometric notion of extension complexity classifies how well a specific problem can be expressed and solved via an extremely successful general-purpose technique called linear programming.
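To make the connection between combinatorial optimization and linear programming concrete, here is a minimal sketch (my own illustration, not part of the project description): the Shortest Path Problem can be written as a linear program over a flow polytope and handed to an off-the-shelf LP solver. The graph, edge costs, and the use of `scipy.optimize.linprog` are example choices for illustration only.

```python
# Sketch: shortest path via linear programming (illustrative example).
import numpy as np
from scipy.optimize import linprog

# Directed graph on nodes 0..3; edges (u, v) with costs.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
costs = [1.0, 4.0, 1.0, 5.0, 1.0]

# Flow-conservation constraints: route one unit of flow from node 0 to node 3.
n_nodes, n_edges = 4, len(edges)
A_eq = np.zeros((n_nodes, n_edges))
for j, (u, v) in enumerate(edges):
    A_eq[u, j] += 1.0   # flow leaving u
    A_eq[v, j] -= 1.0   # flow entering v
b_eq = np.array([1.0, 0.0, 0.0, -1.0])  # source 0, sink 3

res = linprog(c=costs, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * n_edges)
print(res.fun)  # optimal cost of a shortest 0 -> 3 path
```

Because the flow polytope here has only integral vertices, the LP optimum coincides with the combinatorial shortest path (cost 1 + 1 + 1 = 3 along 0 -> 1 -> 2 -> 3). Extension complexity asks, roughly, how small such an LP formulation can be made for a given problem.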
Recent developments show that polyhedral theory can also be a powerful tool to achieve a better mathematical understanding of neural networks. The overall goal of this project is to significantly intensify the connection between neural networks and polyhedral theory, using the concept of extension complexity. This new symbiosis will advance both the theoretical understanding of neural networks and the fundamental understanding of classical combinatorial optimization problems. On the side of neural networks, we expect to obtain new bounds on the size and depth required to solve a given problem, serving as an explanation of why large and deep neural networks are more successful in practice. Furthermore, we expect contributions to a more refined understanding of the computational complexity of training a neural network. On the side of combinatorial optimization, we expect that generalized notions of extension complexity inspired by neural networks will lead to new structural and algorithmic insights into classical problems such as the matching problem.
More information & hyperlinks
Web resources: | https://cordis.europa.eu/project/id/101153187 |
Start date: | 01-04-2024 |
End date: | 31-03-2026 |
Total budget - Public funding: | - 175 920,00 Euro |
Cordis data
Status: | SIGNED |
Call topic: | HORIZON-MSCA-2023-PF-01-01 |
Update date: | 12-03-2024 |