REDIAL | Re-thinking Efficiency in Deep Learning under Accelerators and Commodity Processors

Summary
In just a few short years, breakthroughs from the field of deep learning have transformed how computers perform a wide variety of tasks such as recognizing a face, driving a car, or translating a language. Not only has deep learning become an everyday tool, it is also the most promising direction for tackling a number of still-open problems in machine learning and artificial intelligence. However, routine deep learning activities (such as training a model) exert severe resource demands (e.g., memory, compute, energy) that are currently slowing the advancement of the field and restricting full participation in this research to only the largest companies.

The goal of REDIAL is to solve core technical challenges spanning machine learning and systems research that together can enable a radical jump in the efficiency of deep learning. It aims to address both the high cost and time of training and the barrier to deploying models on constrained devices (such as wearables and sensors), which currently requires new efficiency techniques to be invented each time a deep learning innovation occurs. To accomplish this, REDIAL takes two complementary approaches. First, it seeks to build a theoretical understanding of current approaches to deep learning efficiency, a desperately needed step given the current over-reliance on empirical observations. Second, it aims to develop new architectures and methods for training and inference that tackle core efficiency bottlenecks, such as dependencies that prevent parallelization and excessive on-chip data movement, while also opening new opportunities, including the greater adoption of analog processing within accelerators. REDIAL aims to change the way the world trains its models and deploys them to constrained devices by producing a series of new deep architectures and algorithms whose properties promote high efficiency and that can serve as a foundation for new machine learning innovation.
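To make the kind of efficiency bottleneck described above more concrete, the sketch below illustrates one common technique for shrinking models so they fit on constrained devices: uniform 8-bit weight quantization. This is an illustration added here under general assumptions, not a method taken from the project description, and the function names and memory figures are purely hypothetical.

import numpy as np

def quantize_int8(weights: np.ndarray):
    """Uniform affine quantization of float32 weights to int8.
    Returns the int8 tensor plus the (scale, zero_point) needed to
    approximately reconstruct the original values at inference time."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 if w_max > w_min else 1.0
    zero_point = int(round(-w_min / scale)) - 128
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map int8 values back to float32 (lossy)."""
    return (q.astype(np.float32) - zero_point) * scale

# Example: a hypothetical 1M-parameter weight matrix shrinks from ~4 MB to ~1 MB.
w = np.random.randn(1000, 1000).astype(np.float32)
q, scale, zp = quantize_int8(w)
error = np.abs(w - dequantize(q, scale, zp)).max()
print(f"memory: {w.nbytes/1e6:.1f} MB -> {q.nbytes/1e6:.1f} MB, max abs error: {error:.4f}")

The design choice here (storing a single scale and zero point per tensor) trades a small reconstruction error for a roughly fourfold reduction in weight memory, which is one example of the accuracy-versus-resource trade-offs that efficiency research of this kind must reason about.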
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/805194
Start date: 01-09-2020
End date: 31-08-2025
Total budget: 1 495 036,00 Euro
Public funding: 1 495 036,00 Euro
Cordis data

Status: SIGNED
Call topic: ERC-2018-STG
Update Date: 27-04-2024
Structured mapping
Horizon 2020
H2020-EU.1. EXCELLENT SCIENCE
H2020-EU.1.1. EXCELLENT SCIENCE - European Research Council (ERC)
ERC-2018
ERC-2018-STG