condenSE | Sustainable Training of Code Language Models through Data Refinement

Summary
"Large language models (LLMs) have gained widespread attention and user adoption. These models, when trained on source code from platforms like GitHub, acquire a deep understanding of both the semantic and syntactic structures of code (i.e., code language models or CLMs). This understanding has paved the way for significant advancements in software engineering, offering developers valuable assistance in labor-intensive tasks like bug fixing and code writing. While CLMs offer tremendous assistance in software engineering tasks, their massive data requirements result in substantial energy consumption and CO2 emissions.

This proposal challenges the conventional wisdom that "more data is better" and instead advocates for a refined approach to data in the training of CLMs. We propose that by intentionally decreasing training data volume while simultaneously enhancing data quality through data refinement techniques, we can reduce energy consumption while maintaining or even improving performance on software engineering tasks. The condenSE project represents a pioneering effort to advance sustainable training practices for CLMs. Unlike existing methods, which are often non-systematic or limited to natural languages, condenSE promises a comprehensive approach to achieve sustainability via data refinement for CLMs.
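
To make "data refinement" concrete, the sketch below combines two standard corpus-reduction steps, exact deduplication and heuristic quality filtering, over a list of code samples. It is a minimal illustration under assumed thresholds, not the project's actual pipeline; the filters stand in for whatever refinement techniques condenSE studies.

    import hashlib

    def refine_corpus(samples: list[str]) -> list[str]:
        """Shrink a code corpus via deduplication and quality filtering.

        Both steps are illustrative stand-ins; thresholds are arbitrary.
        """
        seen = set()
        refined = []
        for code in samples:
            # Exact deduplication: skip byte-identical samples.
            digest = hashlib.sha256(code.encode("utf-8")).hexdigest()
            if digest in seen:
                continue
            seen.add(digest)

            lines = [ln for ln in code.splitlines() if ln.strip()]
            if len(lines) < 2:   # too short to carry structure (assumed threshold)
                continue
            avg_len = sum(len(ln) for ln in lines) / len(lines)
            if avg_len > 200:    # likely minified/generated code (assumed threshold)
                continue
            refined.append(code)
        return refined

    corpus = ["def add(a, b):\n    return a + b\n"] * 2 + ["x = 1\n"]
    print(len(refine_corpus(corpus)))  # 1: a duplicate and a too-short sample dropped

With this kind of filtering, the energy saving comes from the smaller corpus (fewer tokens per epoch), while the quality filters aim to preserve, or improve, downstream task performance.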

This initiative is well aligned with the EU Green Deal and the UN Sustainable Development Goals, and the growing attention to LLMs and CLMs makes now the right time to address their sustainability. The proposal's potential for success is further strengthened by the host institution's international standing, which offers a wide range of collaborative opportunities, and by the complementary expertise of the applicant and supervisor, spanning software engineering, machine learning, dataset creation, and language model application.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101151798
Start date: 01-04-2024
End date: 31-03-2026
Total budget / Public funding: 210 911,00 Euro
Cordis data

Status: SIGNED
Call topic: HORIZON-MSCA-2023-PF-01-01
Update Date: 05-10-2024
Structured mapping
Horizon Europe
  HORIZON.1 Excellent Science
    HORIZON.1.2 Marie Skłodowska-Curie Actions (MSCA)
      HORIZON.1.2.0 Cross-cutting call topics
        HORIZON-MSCA-2023-PF-01
          HORIZON-MSCA-2023-PF-01-01 MSCA Postdoctoral Fellowships 2023