alignAI | value-ALIGNed socio-technical systems using large language models (LLMs)

Summary
Large Language Models (LLMs) are trained on broad data, using self-supervision at scale, to complete a wide range of tasks. Their use has grown rapidly in recent months, driven by applications such as ChatGPT. Although LLMs bring many opportunities to improve our everyday lives, their impacts on humans and society have not yet been prioritized or fully understood. Given the rapid development of these tools, the risk of negative consequences is significant if LLMs are not developed and deployed in a way that aligns with human values and responds to individual needs and preferences. To mitigate such consequences, academia, in close collaboration with industry, needs to train the next generation of researchers to understand the complex socio-technical implications surrounding the use of LLMs.

The alignAI Doctoral Network will train 17 doctoral candidates (DCs) to work in the international and highly interdisciplinary field of LLM research and development. The core of the project is the alignment of LLMs with human values: identifying the relevant values and the methods for implementing alignment. Two principles underpin this approach. First, explainability is a key enabler of all aspects of trustworthiness: it accelerates development, promotes usability, and facilitates human oversight and auditing of LLMs. Second, fairness is a key aspect of trustworthiness: it broadens access to AI applications and ensures that AI-driven decision-making affects people equitably. The project's practical relevance is ensured by three use cases: education, positive mental health, and news consumption. These use cases allow us to develop specific guidelines and to test prototypes and tools that promote value alignment. We follow a distinctive methodological approach: for each use case, DCs from the social sciences and humanities are “twinned” with DCs from technical disciplines (9 DCs in total), while the remaining 8 DCs carry out horizontal research across the use cases.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101169473
Start date: 01-09-2024
End date: 31-08-2028
Total budget - Public funding: 3 576 974,00 Euro
Cordis data

Status: SIGNED
Call topic: HORIZON-MSCA-2023-DN-01-01
Update date: 29-09-2024
Structured mapping
Horizon Europe
  HORIZON.1 Excellent Science
    HORIZON.1.2 Marie Skłodowska-Curie Actions (MSCA)
      HORIZON.1.2.0 Cross-cutting call topics
        HORIZON-MSCA-2023-DN-01
          HORIZON-MSCA-2023-DN-01-01 MSCA Doctoral Networks 2023