ALGOCERT | Devising certifiable and explainable algorithms for verification and planning in cyber-physical systems

Summary
One of the main challenges in the development of complex computerized systems lies in verification – the process of ensuring the systems' correctness.

Model checking is an approach for system verification in which one uses mathematical reasoning to conduct an algorithmic analysis of the possible computations of the system, in order to formally prove that a system satisfies a given specification.

Traditionally, model checking is done as follows. The user inputs a system and a specification to a model checker, and gets a yes/no output as to whether the system satisfies the specification. Typically, when the answer is "no", a counterexample is also produced, usually in the form of a computation of the system that violates the specification. This gives the user an informative output that can be used to fix the system, or, possibly, the specification.
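As an illustration (not part of the project itself), the workflow above can be sketched as a toy explicit-state safety checker: it explores the system's reachable states breadth-first and, on finding a violation, returns the offending computation as a counterexample. All names and the toy system here are hypothetical.

```python
from collections import deque

def check_safety(initial, successors, is_bad):
    """Explicit-state reachability check of a safety property.

    Returns (True, None) if no bad state is reachable, or
    (False, trace) with a counterexample path from an initial state.
    """
    parent = {s: None for s in initial}
    queue = deque(initial)
    while queue:
        state = queue.popleft()
        if is_bad(state):
            # Reconstruct the violating computation for the user.
            trace = []
            while state is not None:
                trace.append(state)
                state = parent[state]
            return False, list(reversed(trace))
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return True, None

# Toy system: a counter incrementing modulo 5.
# Specification: "the counter never reaches 3" (violated).
ok, trace = check_safety(
    initial=[0],
    successors=lambda s: [(s + 1) % 5],
    is_bad=lambda s: s == 3,
)
# ok is False; trace is the counterexample computation [0, 1, 2, 3]
```

A "yes" answer from this checker, in contrast, comes with no artifact at all, which is exactly the asymmetry discussed next.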

A drawback of model checking is that, in contrast with providing counterexamples for "no" answers, a "yes" answer does not include any proof, explanation, or certificate of correctness. The advantage of having such certificates is twofold: first, they would help convince the designer of the system's correctness, and second, they can be used to gain insight into the workings of complex systems.
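One well-known form such a certificate can take for safety properties is an inductive invariant: a set of states that covers the initial states, is closed under transitions, and excludes all bad states. A minimal sketch of an independent certificate checker, assuming the same hypothetical state-based setting as above:

```python
def check_invariant_certificate(initial, successors, is_bad, invariant):
    """Validate an inductive invariant as a certificate of safety.

    The certificate is accepted iff the invariant (a finite set of
    states) (1) contains all initial states, (2) contains no bad
    state, and (3) is closed under transitions. Each check can be
    re-run independently of the model checker that produced the
    certificate, which is what makes the "yes" answer auditable.
    """
    if not all(s in invariant for s in initial):
        return False
    if any(is_bad(s) for s in invariant):
        return False
    return all(nxt in invariant
               for s in invariant for nxt in successors(s))

# Toy system as before (counter modulo 5), now with the satisfied
# specification "the counter never reaches 7": the reachable set
# {0, ..., 4} is an inductive invariant certifying correctness.
cert_ok = check_invariant_certificate(
    initial=[0],
    successors=lambda s: [(s + 1) % 5],
    is_bad=lambda s: s == 7,
    invariant={0, 1, 2, 3, 4},
)
# cert_ok is True
```

The point of the design is that the checker is far simpler than the model checker itself, so trusting the "yes" answer no longer requires trusting the larger tool.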

A similar drawback occurs in the application of model checking to robotic planning. There, a suggested plan issued by the model checker may seem complicated or counterintuitive to a human user. Thus, one would want an explanation of the plan that would convince the user of its correctness and, possibly, its optimality.
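To make the idea of an optimality explanation concrete (again a hypothetical sketch, not the project's method): a BFS planner can return, alongside the plan, its distance labeling. Since no transition raises the label by more than one, any plan must take at least dist[goal] steps, so a plan of exactly that length is provably shortest.

```python
from collections import deque

def plan_with_certificate(start, goal, successors):
    """BFS planner returning a shortest plan and an optimality certificate.

    The certificate is the BFS distance labeling: dist[start] == 0,
    every explored transition raises dist by at most one, and the plan
    reaches the goal in exactly dist[goal] steps, so no shorter plan
    can exist.
    """
    dist, parent = {start: 0}, {start: None}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s == goal:
            plan = []
            while s is not None:
                plan.append(s)
                s = parent[s]
            return list(reversed(plan)), dist
        for t in successors(s):
            if t not in dist:
                dist[t], parent[t] = dist[s] + 1, s
                queue.append(t)
    return None, dist  # goal unreachable

# Toy planning domain: move right or up on a 3x3 grid
# from (0, 0) to (2, 2).
succ = lambda p: [q for q in [(p[0] + 1, p[1]), (p[0], p[1] + 1)]
                  if q[0] <= 2 and q[1] <= 2]
plan, dist = plan_with_certificate((0, 0), (2, 2), succ)
# len(plan) - 1 == dist[(2, 2)] == 4: the plan is provably shortest.
```

The labeling plays the same role for plans that the inductive invariant plays for safety: a small, independently checkable witness that justifies the tool's answer to the user.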

The aim of this proposal is to address the challenge of providing certificates for the correctness of systems and, analogously, providing explanations for plans. This involves several challenges: finding contexts in which explanations and certificates have reasonable definitions, and then devising a suitable theoretical algorithmic framework and a practical, scalable implementation.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/837327
Start date: 15-07-2019
End date: 14-07-2021
Total budget: 185 464,32 EUR; Public funding: 185 464,00 EUR
Cordis data


Status

CLOSED

Call topic

MSCA-IF-2018

Update Date

28-04-2024
Structured mapping
Horizon 2020
H2020-EU.1. EXCELLENT SCIENCE
H2020-EU.1.3. EXCELLENT SCIENCE - Marie Skłodowska-Curie Actions (MSCA)
H2020-EU.1.3.2. Nurturing excellence by means of cross-border and cross-sector mobility
H2020-MSCA-IF-2018
MSCA-IF-2018