AIrbrush: a transdisciplinary value-sensitive study of biases and stereotypes in AI-generated Global Health images, and their significance for science and society

Summary
This project engages with the proliferation of abusive and biased stereotypes from colonial and humanitarian photography through generative AI technology, and investigates their consequences for science and society. This is a pressing issue: AI absorbs and learns from real images, which, in the case of global health, have been marked by racism, coloniality and sexism, meaning that such images become a corpus from which generative AI learns biased depictions and perpetuates negative stereotypes. Such cycles have to be studied and eliminated in order to move toward more equal postcolonial societies and to promote a culture of value-sensitive depictions of vulnerable people. The project builds on and greatly expands the emerging methodology of purposeful generation and value-sensitive evaluation of AI-generated Global Health visuals, recently pioneered by Prof. Koen Peeters (the supervisor) and Dr. Alenichev, and encapsulated in a Lancet Global Health article in August 2023. Offering the first systematic study of AI-generated Global Health visuals, AIrbrush sets five core objectives and asks: How should the international community account for generative AI as part of the internationally set goal of decolonizing Global Health and its visual culture, and tackle biased depictions of race, class, gender, and other socially enacted markers of similarity and difference? AIrbrush answers this question by analysing the substrate of real global health images that AI learns from, evaluating the learning process and the reproduction and modification of such tropes by AI, theorizing this relationship, and outlining societal outcomes with regard to the future of respectful depictions in the AI era. The findings from this study will be disseminated through academic articles, a thematic webinar, a collaboration with the WHO AI and Ethics research group, and an art exhibition at ITM (the host institution) and elsewhere, among other outputs.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101150547
Start date: 01-06-2024
End date: 31-05-2026
Total budget / Public funding: 191 760,00 EUR
Cordis data

Status

SIGNED

Call topic

HORIZON-MSCA-2023-PF-01-01

Update Date

06-10-2024
Structured mapping
Horizon Europe
HORIZON.1 Excellent Science
HORIZON.1.2 Marie Skłodowska-Curie Actions (MSCA)
HORIZON.1.2.0 Cross-cutting call topics
HORIZON-MSCA-2023-PF-01
HORIZON-MSCA-2023-PF-01-01 MSCA Postdoctoral Fellowships 2023