Summary
Reinforcement learning (RL) agents learn to behave optimally via trial and error, without the need to encode complicated behavior explicitly. However, RL generally lacks mechanisms to consistently ensure correct behavior with respect to sophisticated task and safety specifications.
Formal verification (FV), and in particular model checking, provides formal guarantees on a system's correctness based on rigorous methods and precise specifications. Yet despite active development by researchers worldwide, fundamental challenges have so far obstructed the application of FV to RL.
We identify three key challenges that frame the objectives of this proposal.
(1) Complex environments with many degrees of freedom induce large state and feature spaces. This curse of dimensionality poses a longstanding problem for verification.
(2) Common approaches to ensuring the correctness of RL systems assume idealized discrete state spaces.
However, realistic problems are often continuous.
(3) Knowledge about real-world environments is inherently uncertain.
To ensure safety, correctness guarantees need to be robust against such imprecise knowledge of the environment (see the sketch below).
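As a purely illustrative sketch of challenge (3), and under the assumption that imprecise knowledge is modelled as an interval Markov decision process (transition probabilities known only up to intervals), the following Python snippet computes a worst-case one-step value over those intervals. All state names, intervals, and values are hypothetical and not part of DEUCE itself.

    # A toy interval model: for one action per state, each successor's
    # transition probability is only known to lie in an interval [lo, hi].
    intervals = {
        "s0": {"s0": (0.1, 0.3), "s1": (0.7, 0.9)},
        "s1": {"s0": (0.4, 0.6), "s1": (0.4, 0.6)},
    }
    values = {"s0": 0.0, "s1": 1.0}  # e.g. current estimates of reaching a goal

    def worst_case_backup(succ_intervals, values):
        """One robust Bellman backup: choose probabilities inside the intervals
        (summing to 1) so that the expected value is minimal."""
        # Start from the lower bounds, then greedily give the remaining
        # probability mass to the successors with the lowest values.
        probs = {s: lo for s, (lo, _hi) in succ_intervals.items()}
        slack = 1.0 - sum(probs.values())
        for s in sorted(succ_intervals, key=lambda t: values[t]):
            lo, hi = succ_intervals[s]
            add = min(hi - lo, slack)
            probs[s] += add
            slack -= add
        return sum(p * values[s] for s, p in probs.items())

    for state, succ in intervals.items():
        print(state, "worst-case one-step value:", round(worst_case_backup(succ, values), 3))

The greedy assignment of the remaining mass to the lowest-valued successors is the inner minimization used in robust value iteration over such interval models; guarantees derived from it hold for every environment consistent with the intervals.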
The main objective of the DEUCE project is to develop novel, data-driven verification methods that tightly integrate with RL. To cope with the curse of dimensionality, we devise learning-based abstraction schemes that distill the parts of a system that are relevant for correctness. We employ and define models whose expressiveness captures various types of uncertainty. These models form the basis for formal, data-driven abstractions of continuous spaces. Finally, we provide model-based FV mechanisms that ensure safe and correct exploration for RL agents.
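As a rough, hedged illustration of what a model-based mechanism for safe exploration can look like, the sketch below uses shielding, a known technique in which a model checker precomputes, for every state, the set of actions under which the safety specification remains satisfiable; the agent may then only explore among those actions. The toy environment, the shield contents, and the random stand-in policy are assumptions made for this example, not the project's actual method.

    import random

    safe_actions = {          # hypothetical output of a model checker (the "shield")
        "cold": {"heat", "idle"},
        "hot": {"cool", "idle"},
    }
    transitions = {           # toy deterministic dynamics, for illustration only
        ("cold", "heat"): "hot", ("cold", "idle"): "cold",
        ("hot", "cool"): "cold", ("hot", "idle"): "hot",
    }

    def shielded_step(state, proposed_action):
        """Replace the agent's action if it is not allowed by the shield."""
        allowed = safe_actions[state]
        action = proposed_action if proposed_action in allowed else random.choice(sorted(allowed))
        return action, transitions[(state, action)]

    state = "cold"
    for _ in range(5):
        proposed = random.choice(["heat", "cool", "idle"])  # stand-in for an RL policy
        action, state = shielded_step(state, proposed)
        print("took", action, "-> now in", state)

The point of the example is only the interface: exploration proceeds as usual, but every action passes through a verified filter, so the safety specification is never violated during learning.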
DEUCE will elevate the scalability and expressiveness of verification towards real-world deployment of reinforcement learning.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101077178
Start date: 01-01-2023
End date: 31-12-2027
Total budget - Public funding: 1 500 000,00 Euro - 1 500 000,00 Euro
Status: SIGNED
Call topic: ERC-2022-STG
Update Date: 09-02-2023
Geographical location(s)