FPH | Fair predictions in health

Summary
In clinical care, machine learning is increasingly used to improve diagnosis, treatment selection, and the efficiency of health systems. Because machine-learning models learn from historically collected data, populations that have suffered past human and structural biases (for example, unequal access to education or resources), known as protected groups, are vulnerable to harm from inaccurate predictions or resource allocations, which reinforces health inequalities. For instance, racial and gender differences exist in how clinical data are generated, and these differences can carry over into models as biases. Several algorithmic-fairness techniques have been proposed in the machine-learning literature to improve the fairness of predictive models. However, the debate in statistics and machine learning has not yet produced a principled approach for choosing among concepts of bias, prejudice, discrimination, and fairness in predictive models with a clear link to the ethical theory discussed within philosophy.
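As an illustration of the kind of techniques referred to above (not part of the project description itself), many group-fairness criteria quantify disparities in a model's predictions across protected groups. The Python sketch below computes two common metrics, the demographic-parity difference and the equalized-odds gap, on hypothetical data; all names and numbers are invented for illustration.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group gap in true-positive or false-positive rates."""
    gaps = []
    for y in (1, 0):  # y=1 compares TPRs, y=0 compares FPRs
        mask = y_true == y
        r = [y_pred[mask & (group == g)].mean() for g in (0, 1)]
        gaps.append(abs(r[0] - r[1]))
    return max(gaps)

# Hypothetical data: a classifier whose positive rate depends on group.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)                        # protected attribute
y_true = rng.integers(0, 2, size=1000)                       # actual outcomes
y_pred = (rng.random(1000) < 0.4 + 0.2 * group).astype(int)  # biased predictions

print("demographic parity difference:", demographic_parity_diff(y_pred, group))
print("equalized odds gap:", equalized_odds_gap(y_true, y_pred, group))
```

A demographic-parity difference of zero means both groups receive positive predictions at the same rate; the equalized-odds gap additionally conditions on the true outcome. Which of these (mutually incompatible) criteria is ethically appropriate is exactly the kind of question the project addresses.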
The specific scientific objectives of this research project are:
O1: ethical theory: map the ethical theories relevant to the allocation of resources in health care and draw connections with the literature on fair machine learning
O2: probabilistic ethics: understand how standard moral concepts such as responsibility, merit, need, talent, equality, and benefit can be expressed in probabilistic terms
O3: epistemology of causality: understand whether current claims made by counterfactual and causal models of fairness in AI are robust across different philosophical understandings of probability, causality, and counterfactuals (a toy illustration follows this list)
O4: application: show the relevance of these philosophical ideas by applying them to a limited number of paradigmatic cases of the application of predictive algorithms in health care.
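As a toy illustration of O3 (a hypothetical sketch, not the project's own method): in the counterfactual-fairness framework of Kusner et al. (2017), a predictor is counterfactually fair if intervening on the protected attribute, while holding the exogenous noise terms fixed, leaves its prediction unchanged. The sketch below encodes a minimal linear structural causal model and shows why a predictor that uses a causal descendant of the protected attribute fails the test.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Minimal hypothetical SCM: A -> X, and a predictor that reads X.
# A is the protected attribute; U is exogenous noise, held fixed
# across the factual and counterfactual worlds.
a = rng.integers(0, 2, size=n).astype(float)
u = rng.normal(size=n)
x = 2.0 * a + u                      # structural equation for X

def predictor(x):
    """A hypothetical fitted score that (improperly) uses X."""
    return 0.5 * x

# Counterfactual world: flip A, keep U fixed, regenerate X.
x_cf = 2.0 * (1 - a) + u

# Counterfactual fairness requires factual and counterfactual
# predictions to coincide; here they differ, because X is a causal
# descendant of A. A predictor reading only U (a non-descendant
# of A) would pass the test.
print(np.abs(predictor(x) - predictor(x_cf)))   # nonzero gaps -> unfair
```

Whether such a verdict is robust across different philosophical accounts of probability, causality, and counterfactuals is precisely what O3 interrogates.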
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/898322
Start date: 01-09-2021
End date: 31-08-2023
Total budget: 183 473,28 EUR / Public funding: 183 473,00 EUR
CORDIS data

Status

CLOSED

Call topic

MSCA-IF-2019

Update Date

28-04-2024
Structured mapping
Horizon 2020
H2020-EU.1. EXCELLENT SCIENCE
H2020-EU.1.3. EXCELLENT SCIENCE - Marie Skłodowska-Curie Actions (MSCA)
H2020-EU.1.3.2. Nurturing excellence by means of cross-border and cross-sector mobility
H2020-MSCA-IF-2019
MSCA-IF-2019