VeriDeL | Verifiably Safe and Correct Deep Neural Networks

Summary
Deep machine learning is revolutionizing computer science. Instead of manually creating complex software, engineers now use automatically generated deep neural networks (DNNs) in critical financial, medical and transportation systems, obtaining previously unimaginable results.

Despite their remarkable achievements, DNNs remain opaque: we do not understand their decision making and cannot prove their correctness, which risks potentially devastating outcomes. For example, DNNs that navigate autonomous aircraft to avoid collisions have been shown to produce incorrect turning advisories. This lack of formal guarantees regarding DNN behavior prevents the safe deployment of DNNs in critical systems and could jeopardize human lives. Consequently, there is a crucial need to ensure that DNNs operate correctly.

Recent and exciting developments in formal verification allow us to automatically reason about DNNs. However, this technology is nascent: it currently scales only to medium-sized DNNs, whereas real-world systems are much larger, and it is unclear how to apply it in practice. I propose to bridge this crucial gap by developing novel, scalable techniques for verifying the correctness of large DNNs, and by applying them to real systems of interest. I will do this by (1) developing search-space pruning techniques, which will enable us to verify larger DNNs; (2) creating novel abstraction-refinement techniques, which will allow us to scale to even larger DNNs; and (3) identifying new kinds of relevant specifications and key domains where DNNs are used, demonstrating the verification of real-world DNNs.
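
To make such verification queries concrete, the following is a minimal sketch in Python (illustrative only, not the project's actual toolchain): it propagates interval bounds through a tiny ReLU network, producing a sound over-approximation of the network's outputs, and checks a simple output-bound specification against them. The network weights and the specification threshold are made-up assumptions.

# Minimal sketch of a DNN verification query (illustrative assumptions):
# propagate an input box through a small ReLU network via interval
# arithmetic and check an output-bound specification.
import numpy as np

def interval_affine(W, b, lo, hi):
    """Propagate the box [lo, hi] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def output_bounds(layers, lo, hi):
    """Sound (over-approximate) output bounds for a feed-forward ReLU net."""
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(W, b, lo, hi)
        if i < len(layers) - 1:  # no ReLU after the final layer
            lo, hi = interval_relu(lo, hi)
    return lo, hi

# Toy 2-2-1 network; the weights below are made up for illustration.
layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])),
          (np.array([[1.0, 1.0]]), np.array([0.5]))]

lo, hi = output_bounds(layers, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
# Specification: "the output never exceeds 5 on the input box".
# If the sound upper bound stays below 5, the property is verified;
# otherwise the result is inconclusive, since the bound may be loose.
print("output in", (lo, hi), "- property holds:", bool(hi[0] <= 5.0))

Bound propagation of this kind over-approximates the network's behavior, much like the abstraction step in abstraction-refinement; the pruning and refinement techniques proposed above aim to keep such analyses both sound and precise for networks far larger than this toy example.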

This project will result in a sound and expressive framework for automatically reasoning about DNNs orders of magnitude larger than those that can be handled today. This framework will ensure the safety and correctness of DNNs deployed in critical systems, greatly benefiting users and society.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101112713
Start date: 1 November 2023
End date: 31 October 2028
Total budget: EUR 1,500,000.00 (public funding: EUR 1,500,000.00)
Cordis data


Status: SIGNED
Call topic: ERC-2023-STG
Update date: 12 March 2024
Structured mapping
Horizon Europe
  HORIZON.1 Excellent Science
    HORIZON.1.1 European Research Council (ERC)
      HORIZON.1.1.0 Cross-cutting call topics
        ERC-2023-STG ERC STARTING GRANTS
      HORIZON.1.1.1 Frontier science
        ERC-2023-STG ERC STARTING GRANTS