Summary
Digital heritage comprises computer-based materials of enduring value that should be kept for future generations, such as photographs and videos. As assets of our times, historical photographs and videos can greatly benefit from digital restoration techniques, from colorization or color enhancement to the removal of scratches and other artefacts. In this project, we focus on two cases of digital heritage restoration: colorization and color enhancement of old photographs and videos. Historically, image enhancement methods were rooted in tailor-made priors using well-understood physics and/or statistical models. Now, deep learning approaches leverage large amounts of data to train generative models that can hallucinate plausible content in the generated images. However, the useful versatility of deep learning approaches faces two main problems:
(a) Deep models are black boxes whose inner behaviors are difficult to interpret, which is an important drawback when assessing their reliability, studying failure cases, and improving their robustness. This hinders their direct adoption in the digital heritage restoration process. Thus, explainability is a highly desirable characteristic for image enhancement models.
(b) Image enhancement problems are ill-conditioned, especially for digital heritage photos (e.g., there are many plausible colorizations of a grayscale image). Yet, users rarely have a say in the enhancement process with deep models, which is typically driven by the model's statistical decisions. Thus, physically plausible or realistic solutions should be favored, and the end user should be allowed to explore and guide the algorithm towards the intended solution.
In this project, we propose to confront the ill-posed nature of image enhancement problems through comprehensive involvement of the user in the loop, shifting the important decision-making from the model to the user. This will lead to user-oriented results of higher quality.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101152858
Start date: 22-07-2024
End date: 21-07-2026
Total budget - Public funding: 165 312,00 EUR
Cordis data
Status: SIGNED
Call topic: HORIZON-MSCA-2023-PF-01-01
Update date: 22-11-2024