Summary
An unprecedented energy crisis is looming over us. To transition to a greener and more energy-efficient society, existing technologies need to be improved and novel techniques such as nuclear fusion developed. This requires the stabilization of aerodynamic, heat-transfer, combustion and fusion processes and thus the development of efficient control strategies for large-scale dynamical systems. In recent years, reinforcement learning (RL) has emerged as a highly promising data-driven technique. Unfortunately, we cannot trust RL to handle our most important and complex systems, since the resulting controllers do not come with performance guarantees. Certifiable RL approaches such as linear or kernel methods tend to scale poorly, so their applicability is limited to toy examples. In contrast to other application areas, this is a complete show-stopper for safety-critical engineering. Moreover, training is extremely data-hungry and costly, so RL itself contributes to the energy crisis.
The vision of this project is to develop new foundational methods to equip RL controllers for large-scale engineering systems with performance guarantees by exploiting system knowledge and systematically reducing the complexity. To achieve this, I will target three major breakthroughs, consisting of (A) global linearization of the dynamics via the Koopman operator framework, (B) the extension of certified Q-learning to continuous action spaces via control quantization, and (C) the detection and exploitation of symmetries in the system dynamics.
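To illustrate the idea behind breakthrough (A), the following minimal sketch shows extended dynamic mode decomposition (EDMD), a standard data-driven approximation of the Koopman operator. The toy system, the dictionary of observables and all parameter values are illustrative choices, not taken from the project; the example uses a classic discrete-time system whose Koopman operator is exactly finite-dimensional on the chosen dictionary, so the fitted linear model reproduces the nonlinear dynamics.

```python
import numpy as np

# Toy nonlinear system with an exactly finite-dimensional Koopman-invariant
# subspace (illustrative parameters, not from the project):
#   x1+ = a*x1
#   x2+ = b*x2 + c*x1**2
a, b, c = 0.9, 0.5, 1.0

def step(x):
    return np.array([a * x[0], b * x[1] + c * x[0] ** 2])

def lift(x):
    # Dictionary of observables: psi(x) = [x1, x2, x1^2]
    return np.array([x[0], x[1], x[0] ** 2])

# Collect snapshot pairs (x, f(x)) from random states
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
Psi = np.array([lift(x) for x in X])            # lifted states
PsiNext = np.array([lift(step(x)) for x in X])  # lifted successors

# EDMD: least-squares fit of a linear Koopman matrix K with Psi @ K ≈ PsiNext
K, *_ = np.linalg.lstsq(Psi, PsiNext, rcond=None)

# In the lifted coordinates the dynamics are globally linear, so one-step
# prediction through K matches the true nonlinear step.
x0 = np.array([0.7, -0.3])
pred = lift(x0) @ K
true = lift(step(x0))
print(np.allclose(pred, true, atol=1e-8))  # True
```

For general systems the lifted dynamics are only approximately linear and the choice of dictionary becomes the central difficulty; the appeal of the Koopman framework is precisely that, once a good lifting is found, linear control and certification machinery becomes applicable to a nonlinear system.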
The project requires significant joint advancements in several challenging areas such as control, approximation theory and machine learning. If successful, the resulting controllers will provide a massive advancement of RL towards safety-critical engineering applications and significantly contribute to the challenge of meeting the future energy demands of our society.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101161457
Start date: 01-01-2025
End date: 31-12-2029
Total budget - Public funding: 1 499 000,00 Euro - 1 499 000,00 Euro
Status: SIGNED
Call topic: ERC-2024-STG
Update Date: 22-11-2024