Summary
Program analyzers act as guards ensuring the reliability of modern-day
software. But who guards the guards themselves? Due to their high
complexity and sophisticated algorithms, analyzers are likely to
contain critical bugs, which we define as those leading to wrong
results, e.g., returning "correct" for incorrect software. Such bugs
may have detrimental consequences, especially in safety-critical
settings. It is, therefore, imperative to be able to detect them.
Verifying the absence of critical bugs in a program analyzer is
prohibitively expensive. In contrast to verification, automated test
generation can be used to effectively find such bugs. Existing testing
approaches, however, are still in their infancy for this application
domain.
To address this issue, MirandaTesting will develop the first
principled methodology for testing a wide range of program
analyzers. At its core, our methodology exposes more information about
why an analyzer computes a particular result; it then uses this
information to interrogate the analyzer, aiming to force it into a
contradiction and thus reveal a critical bug (see the sketch at the
end of this summary). The project has the following goals:
1. Design a general framework for testing program analyzers using the
MirandaTesting methodology;
2. Develop interrogation strategies pertaining to eleven prevalent
classes of program analyzers;
3. Demonstrate the effectiveness of concrete instantiations of the
general framework and interrogation strategies for several popular
program analyzers from each class;
4. Disseminate our methodology and infrastructure.
If successful, MirandaTesting will enable systematic testing of entire
program-analyzer classes. As a result, analyzers will exhibit fewer
critical bugs, potentially preventing catastrophic outcomes in
safety-critical domains.
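To make the interrogation idea concrete, consider a minimal sketch in
Python. Everything here is hypothetical and illustrative (the
interrogate helper, the toy analyzer, and its string verdicts are
assumptions, not the project's actual interface): the analyzer is
queried on a program and on a semantically equivalent variant, and
contradictory verdicts expose a critical bug in the analyzer itself.

from typing import Callable

def interrogate(analyze: Callable[[str], str], program: str, variant: str) -> bool:
    """Return True if the analyzer contradicts itself on two
    semantically equivalent programs, i.e., exhibits a critical bug."""
    verdict_a = analyze(program)   # e.g., "correct" or "incorrect"
    verdict_b = analyze(variant)
    return verdict_a != verdict_b  # equal semantics must yield equal verdicts

# Metamorphic-style interrogation: adding dead code must not change the verdict.
original = "x = 1\nassert x == 1"
variant = "x = 1\nif False:\n    x = 2\nassert x == 1"

def toy_analyzer(src: str) -> str:
    # Deliberately buggy toy: flags any program containing an 'if' as incorrect.
    return "incorrect" if "if" in src else "correct"

if interrogate(toy_analyzer, original, variant):
    print("Critical bug: contradictory verdicts on equivalent programs")

In practice, the project's interrogation strategies would be tailored
to each analyzer class and would exploit the extra information the
analyzer exposes about its result; the toy example above only shows
the contradiction-based oracle.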
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101076510
Start date: 01-07-2023
End date: 30-06-2028
Total budget - Public funding: 1 499 991,00 Euro - 1 499 991,00 Euro
Cordis data
Status: SIGNED
Call topic: ERC-2022-STG
Update Date: 31-07-2023