FairML | Theory of Fair Machine Learning

Summary
Designing fair machine learning algorithms is challenging because training data are often imbalanced and reflect the (sometimes subconscious) biases of human annotators, so those biases can propagate into future decision-making. In addition, enforcing fairness usually degrades accuracy, because fairness constraints restrict the space of admissible classifiers. In this project, I will address this challenge by developing oracle bounds for fairness constraints and a Pareto-efficient trade-off between fairness and accuracy, using majority-vote ensemble classifiers to cancel out not only errors but also biases. I will also develop methods for tracing illegal bias and for capturing long-term fairness, so that algorithms comply lawfully with the anti-subordination principle, using learning-theory tools including causality and online learning to reason about moral responsibility. The central objective of this proposal is to gain a theoretical understanding of fairness and to design machine learning algorithms that improve fairness and accuracy simultaneously. The study is essential both for a better scientific understanding of fairness in machine learning models and for the development of fairer algorithms in numerous application domains, such as recruitment, criminal justice, and lending. Moreover, the project draws on interdisciplinary knowledge from economics and law to keep fairness concepts in machine learning aligned with their legal counterparts, enlarging the impact of machine learning applications and giving back to the wider community.
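As a rough illustration of the two ideas the summary combines, the sketch below uses synthetic data to show (a) how independent errors cancel under a majority vote and (b) how a group disparity can be quantified. The error model and the demographic-parity metric here are illustrative assumptions for the sketch, not the project's actual method.

```python
import random

random.seed(0)

def noisy_prediction(true_label, error_rate):
    # Each base classifier independently flips the true label with probability error_rate.
    return true_label if random.random() > error_rate else 1 - true_label

def majority_vote(true_label, n_classifiers, error_rate):
    # Aggregate n independent noisy classifiers by majority vote.
    votes = sum(noisy_prediction(true_label, error_rate) for _ in range(n_classifiers))
    return int(votes > n_classifiers / 2)

n_samples = 10_000
labels = [random.randint(0, 1) for _ in range(n_samples)]

single_err = sum(noisy_prediction(y, 0.3) != y for y in labels) / n_samples
ensemble_err = sum(majority_vote(y, 11, 0.3) != y for y in labels) / n_samples

print(f"single classifier error: {single_err:.3f}")
print(f"11-vote ensemble error:  {ensemble_err:.3f}")  # lower, since independent errors cancel

# Demographic parity gap: the difference in positive-prediction rates between groups,
# one common statistical fairness criterion (used here purely for illustration).
def demographic_parity_gap(preds, groups):
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

print(demographic_parity_gap([1, 1, 0, 0], ["a", "a", "b", "b"]))  # 1.0
```

Note that the error cancellation relies on the base classifiers' mistakes being independent; if all classifiers share the same bias from the training data, voting cannot remove it, which is why the project studies bias cancellation separately from error cancellation.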
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101106768
Start date: 01-08-2024
End date: 31-07-2026
Total budget - Public funding: €214,934.00
Cordis data

Status

SIGNED

Call topic

HORIZON-MSCA-2022-PF-01-01

Update Date

12-03-2024
Structured mapping
Horizon Europe
HORIZON.1 Excellent Science
HORIZON.1.2 Marie Skłodowska-Curie Actions (MSCA)
HORIZON.1.2.0 Cross-cutting call topics
HORIZON-MSCA-2022-PF-01
HORIZON-MSCA-2022-PF-01-01 MSCA Postdoctoral Fellowships 2022