EAR | Audio-based Mobile Health Diagnostics

Summary
Mobile health is becoming the holy grail of affordable medical diagnostics. It has the potential to associate human behaviour with medical symptoms automatically and at an early disease stage; it also offers cheap deployment, reaching populations generally unable to afford diagnosis and delivering monitoring so fine-grained that it will likely improve diagnostic theory itself. Technological advances offer new ranges of sensing and computation capability with the potential to further extend the reach of mobile health. Audio sensing through the microphones of mobile devices has recently been recognized as a powerful yet underutilized source of medical information: sounds from the human body (e.g., sighs, breathing sounds and voice) are indicators of disease or disease onset. Current pilots, while generally medically grounded, are potentially ad hoc from the perspective of key areas of computer science; specifically, in their approaches to computational models, in how system resource demands are optimized to fit within the limits of mobile devices, and in the robustness needed for tracking people in their daily lives. Audio sensing also comes with challenges that threaten its use in clinical contexts: its power-hungry nature, and the fact that audio data is highly sensitive, so that collecting such data for analytics raises obvious ethical concerns. This work proposes models that link sounds to disease diagnosis and deal with the inherent issues raised by in-the-wild sensing: noise and privacy concerns. We exploit these audio models in wearable systems that maximize the use of local hardware resources, optimizing power and accuracy for both near-real-time and sparse audio sampling. Privacy arises as a by-product, removing the need for cloud analytics. Moreover, the framework will embed the ability to quantify diagnostic uncertainty and to treat patient context as a confounding factor via additional sensors.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/833296
Start date: 01-10-2019
End date: 31-03-2025
Total budget: 2 493 724,00 Euro; public funding: 2 493 724,00 Euro
Cordis data


Status

SIGNED

Call topic

ERC-2018-ADG

Update Date

27-04-2024
Structured mapping
Horizon 2020
H2020-EU.1. EXCELLENT SCIENCE
H2020-EU.1.1. EXCELLENT SCIENCE - European Research Council (ERC)
ERC-2018
ERC-2018-ADG