MAMMOth | Multi-Attribute, Multimodal Bias Mitigation in AI Systems

Summary
Artificial Intelligence (AI) is increasingly employed by businesses, governments, and other organizations to make decisions with far-reaching impacts on individuals and society. This offers significant opportunities for automation across sectors and in daily life, but it also brings risks of discrimination against minority and marginalised population groups on the basis of so-called protected attributes, such as gender, race, and age. Despite the large body of research to date, existing methods work only in limited settings, under very constrained assumptions, and do not reflect the complexity and requirements of real-world applications.
To this end, the MAMMOth project focuses on multi-discrimination mitigation for tabular, network, and multimodal data. Through its computer science and AI experts, MAMMOth aims to address the associated scientific challenges by developing an innovative, fairness-aware, AI-data-driven foundation that provides the tools and techniques needed to discover and mitigate (multi-)discrimination, and that ensures the accountability of AI systems with respect to multiple protected attributes, both for traditional tabular data and for more complex network and visual data.
From the start, the project will actively engage with numerous communities of vulnerable and/or underrepresented groups in AI research, adopting a co-creation approach to ensure that actual user needs and pain points are at the centre of the research agenda and guide the project's activities. A social-science-driven approach supported by social science and ethics experts will guide project research, and a science communication approach will increase the outreach of the outcomes.
The project aims to demonstrate the developed solutions through pilots in three relevant sectors of interest: a) finance/loan applications, b) identity verification systems, and c) academic evaluation.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101070285
Start date: 01-11-2022
End date: 31-10-2025
Total budget: 3 304 975,00 EUR - Public funding: 3 304 975,00 EUR
Cordis data

Status

SIGNED

Call topic

HORIZON-CL4-2021-HUMAN-01-24

Update Date

09-02-2023
Geographical location(s)
Structured mapping
Horizon Europe
HORIZON.2 Global Challenges and European Industrial Competitiveness
HORIZON.2.4 Digital, Industry and Space
HORIZON.2.4.5 Artificial Intelligence and Robotics
HORIZON-CL4-2021-HUMAN-01
HORIZON-CL4-2021-HUMAN-01-24 Tackling gender, race and other biases in AI (RIA)