Summary
Reinforcement learning (RL) is an intensely studied subfield of machine learning concerned with sequential decision-making problems in which a learning agent interacts with an unknown, reactive environment while attempting to maximize its rewards. In recent years, RL methods have gained significant popularity as the key technique behind some spectacular breakthroughs in artificial intelligence (AI) research, which has renewed interest in applying such techniques to challenging real-world problems like the control of autonomous vehicles or smart energy grids. While the RL framework is clearly suitable for addressing such problems, the applicability of the current generation of RL algorithms is limited by a lack of formal performance guarantees and very low sample efficiency. This project proposes to address these problems and advance the state of the art in RL by developing a new generation of provably efficient and scalable algorithms. Our approach is based on identifying structural assumptions for Markov decision processes (MDPs, the main modeling tool used in RL) that enable computationally and statistically efficient learning. Specifically, we will focus on MDP structures induced by various approximation schemes, including value-function approximation and relaxations of the linear-program formulation of optimal control in MDPs. Based on this view, we aim to develop a variety of new tools for designing and analyzing RL algorithms, and to achieve a deep understanding of fundamental performance limits in structured MDPs. While our main focus will be on the rigorous theoretical analysis of algorithms, most of our objectives are inspired by practical concerns, particularly the question of scalability. As a result, we expect our proposed research to have a significant impact on both the theory and practice of reinforcement learning, bringing RL methods significantly closer to practical applicability.
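For reference, the linear-program formulation of optimal control mentioned above is the classical LP whose solution is the optimal value function: minimize a weighted sum of state values subject to the Bellman inequalities v(s) >= r(s,a) + gamma * sum_s' P(s'|s,a) v(s') for all state-action pairs. The following is a minimal sketch of that LP on a hypothetical two-state, two-action MDP (all transition probabilities and rewards are invented for illustration; this is not one of the project's proposed algorithms), cross-checked against value iteration:

```python
import numpy as np
from scipy.optimize import linprog

# A tiny hypothetical 2-state, 2-action discounted MDP (numbers are illustrative).
# P[a, s, s'] = transition probability, r[s, a] = immediate reward.
gamma = 0.9
P = np.array([[[0.8, 0.2], [0.1, 0.9]],   # transitions under action 0
              [[0.5, 0.5], [0.3, 0.7]]])  # transitions under action 1
r = np.array([[1.0, 0.0],   # rewards in state 0 for actions 0, 1
              [0.0, 2.0]])  # rewards in state 1 for actions 0, 1
S, A = 2, 2

# LP: minimize sum_s v(s)  subject to the Bellman inequalities
#     v(s) >= r(s,a) + gamma * sum_{s'} P(s'|s,a) v(s')   for all (s, a).
# Rewritten in linprog's A_ub @ v <= b_ub form:
#     (gamma * P[a] - I) @ v <= -r[:, a]
A_ub = np.vstack([gamma * P[a] - np.eye(S) for a in range(A)])
b_ub = np.concatenate([-r[:, a] for a in range(A)])
c = np.ones(S)  # uniform positive weights over states

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * S)
v_lp = res.x  # optimal value function from the LP

# Cross-check: value iteration on the same MDP converges to the same v*.
v = np.zeros(S)
for _ in range(2000):
    q = r + gamma * np.einsum("ast,t->sa", P, v)  # q[s, a]
    v = q.max(axis=1)

print(np.allclose(v_lp, v, atol=1e-4))
```

For any strictly positive weight vector, the LP's unique optimum is the optimal value function, which is why the cross-check against value iteration agrees; the "relaxations" referred to in the abstract weaken or subsample these Bellman constraints to make the problem tractable at scale.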
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/950180
Start date: 01-10-2021
End date: 30-09-2026
Total budget - Public funding: 1 493 990,00 Euro - 1 493 990,00 Euro
Cordis data
Status: SIGNED
Call topic: ERC-2020-STG
Update date: 27-04-2024