MIA-NORMAL | Medical Image Analysis with Normative Machine Learning

Summary
As one of the most important aspects of diagnosis, treatment planning, treatment delivery, and follow-up, medical imaging provides an unmatched ability to identify disease with high accuracy. As a result of this success, referrals for imaging examinations have increased significantly. However, medical imaging depends on interpretation by highly specialised clinical experts and is therefore rarely available at the front line of care, for patient triage, or for frequent follow-ups. At many stages of the patient journey it would be essential to exclude certain conditions or confirm physiological normality, in order to streamline referrals and relieve pressure on human experts who have limited capacity. Hence, there is a strong need for increased imaging with automated diagnostic support for clinicians, healthcare professionals, and caregivers.

Machine learning is expected to be an algorithmic panacea for diagnostic automation. Yet despite significant advances such as Deep Learning, which has had notable impact on real-world applications, robust confirmation of normality remains an unsolved problem that cannot be addressed with established approaches.

Like clinical experts, machines should be able to verify the absence of pathology by contrasting new images with their knowledge of healthy anatomy and expected physiological variability. The aim of this proposal is therefore to develop normative representation learning as a new machine learning paradigm for medical imaging, providing patient-specific computational tools for robust confirmation of normality, image quality control, health screening, and prevention of disease before onset. We will do this by developing novel Deep Learning approaches that learn without manual labels, from healthy patient data only, and that are applicable to cross-sectional, sequential, and multi-modal data. The resulting models will be able to extract clinically useful and actionable information as early and as frequently as possible during patient journeys.
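
To illustrate the normative learning paradigm described above, the sketch below shows one possible realisation: a small convolutional autoencoder trained exclusively on scans of healthy anatomy, with large reconstruction errors at test time treated as deviations from learned normality. This is a minimal, assumed example only; the proposal does not prescribe this architecture, and all names and hyper-parameters here are hypothetical.

# Minimal sketch (assumed, not the project's actual method): unsupervised
# learning of "normality" from healthy images only, using PyTorch.
import torch
import torch.nn as nn

class NormativeAutoencoder(nn.Module):
    """Learns a compact representation of healthy anatomy only."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # e.g. 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_on_healthy(model, healthy_loader, epochs=10, lr=1e-3):
    """Unsupervised training: no manual labels, healthy scans only."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in healthy_loader:           # each batch contains healthy images only
            opt.zero_grad()
            loss = loss_fn(model(x), x)    # reconstruct the healthy input
            loss.backward()
            opt.step()

def normality_score(model, x):
    """Per-image deviation from learned normality (higher = more abnormal)."""
    with torch.no_grad():
        err = (model(x) - x) ** 2
    return err.flatten(1).mean(dim=1)

In such a setup, a new scan whose score exceeds a threshold calibrated on held-out healthy data would be flagged for expert review rather than confirmed as normal. The same idea extends conceptually to the sequential and multi-modal settings named in the proposal, although those require more elaborate models than this sketch.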
Results, demos, etc.: none available.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101083647
Start date: 01-09-2023
End date: 31-08-2028
Total budget: EUR 1 997 841.00
Public funding: EUR 1 997 841.00
Cordis data

Status: SIGNED
Call topic: ERC-2022-COG
Update Date: 31-07-2023
Images
No images available.
Geographical location(s)