Summary
Most of us use technology based on natural language processing (NLP), such as Google Search or the virtual assistants in phones and other devices, on a daily basis. Large-scale pre-trained language models play a crucial role here, as they often form the basis of these technologies. These models are trained on vast amounts of data (e.g. the entire English Wikipedia and the Brown corpus), which makes it impossible to curate the training corpus; as a result, stereotypes and biases can become embedded in the model, often without researchers noticing. This can lead to problematic and unfair behaviour towards certain demographics, often those who already suffer from implicit biases in society.
With FairER, I aim to gain a deeper understanding of the inner workings of these language models. In particular, I want to investigate how well their solution strategies align with those of humans, and whether this depends on demographic attributes such as gender, race, and age, but also on reading ability and level of education. I will also probe these language models for fairness and inclusiveness, i.e., find out whether the performance of an NLP application depends on the demographic attributes of its users. Furthermore, I will conduct this project in a multilingual setting and apply interpretability methods to better understand the rationale behind a model's decision.
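As an illustration of what probing a pre-trained language model for bias can look like in practice, the minimal sketch below compares a masked language model's predictions when only a gendered pronoun changes in an otherwise identical template sentence. The choice of model (bert-base-uncased), the template, and the use of the Hugging Face transformers library are illustrative assumptions, not the project's actual method.

```python
# A minimal bias-probe sketch (illustrative, not the project's method):
# compare a masked language model's top predictions when only a gendered
# pronoun changes in an otherwise identical template sentence.
from transformers import pipeline

# "bert-base-uncased" is an illustrative choice of pre-trained model.
fill = pipeline("fill-mask", model="bert-base-uncased")

for pronoun in ("he", "she"):
    # Ask the model to fill in an occupation for each pronoun.
    predictions = fill(f"{pronoun} worked as a [MASK].", top_k=5)
    top_jobs = [(p["token_str"], round(p["score"], 3)) for p in predictions]
    print(pronoun, top_jobs)

# Systematic differences between the two ranked lists hint at gender
# stereotypes absorbed from the training corpus.
```

Template-based probes of this kind only surface associations for the attributes the templates mention; measuring whether application performance differs across user demographics, as the project proposes, requires evaluation data annotated with those attributes.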
The main impact of FairER will be a better understanding of how language models treat different demographics. These insights will help improve the fairness and inclusiveness of NLP applications. Furthermore, the datasets I will collect and publish along with the code will encourage other researchers to replicate my findings and continue this line of research. Ultimately, this project will have both a scientific and a societal impact on the NLP community and on users of NLP applications.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101065558
Start date: 01-09-2022
End date: 31-08-2024
Total budget - Public funding: 214 934,00 Euro
Cordis data
Status: SIGNED
Call topic: HORIZON-MSCA-2021-PF-01-01
Update Date: 09-02-2023