Summary
Deep Learning is an area of massive progress, with myriad applications and significant industry adoption. A key enabler of this progress is the ability to train large, highly accurate Deep Neural Networks (DNNs) in a distributed fashion, across tens to thousands of computational nodes. Yet, DNN training at scale poses severe challenges to standard paradigms in distributed computing: existing distributed training approaches and their practical implementations in training libraries such as PyTorch or TensorFlow often suffer from major distribution bottlenecks, which can significantly reduce computational efficiency, wasting time, money, and energy.
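To make the bottleneck concrete: in standard data-parallel training, every node computes gradients on its own data shard, and those gradients are averaged across all nodes before each optimizer step. The sketch below (our illustration, not project code) shows this with PyTorch's DistributedDataParallel; the all-reduce that DDP performs during backward() is the synchronization step whose cost grows with model size and node count.

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Launched e.g. via: torchrun --nproc_per_node=4 this_script.py
dist.init_process_group("gloo")  # torchrun supplies rank/world size via env vars

model = DDP(torch.nn.Linear(1024, 1024))  # toy stand-in for a large DNN
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(10):                 # toy loop on synthetic data
    x = torch.randn(32, 1024)
    loss = model(x).square().mean()
    loss.backward()   # DDP all-reduces the gradients here, bucket by bucket
    opt.step()
    opt.zero_grad()

dist.destroy_process_group()
```

On slow interconnects this per-step all-reduce can dominate iteration time, which is exactly the pressure toward hardware overprovisioning described below.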
The FastML proof-of-concept (PoC) project will tackle this efficiency challenge head-on by introducing a distributed training framework that significantly reduces, or even eliminates, the overheads of parallelization for practical distributed training workloads in common usage scenarios. FastML's distinctive feature is that it leverages the algorithmic and software techniques developed in our ERC Starting Grant to reduce distribution overheads at scale without impacting training convergence or model accuracy. FastML stands in contrast to current distribution techniques, which rely on hardware overprovisioning: essentially, very fast but also very expensive interconnects between the computing nodes. As such, FastML can bring significant infrastructure and running-cost savings to its users, and lower the cost and hardware entry barrier to training accurate machine learning models. The PoC will design and develop the FastML software library to target industry-relevant workloads via pilot projects run jointly with our industrial partners. In addition, we will perform an in-depth market study, devise intellectual property and go-to-market strategies, and produce a minimum viable product (MVP), which will be demonstrated to potential customers and investors.
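The abstract does not detail FastML's specific algorithms, which originate in the ERC Starting Grant. As one illustration of the general class of communication-reduction techniques it alludes to, the sketch below attaches PyTorch's built-in PowerSGD gradient-compression hook to a data-parallel model; the hook replaces the exact gradient all-reduce with a low-rank approximation, trading a small amount of gradient fidelity for much less network traffic.

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD

dist.init_process_group("gloo")  # launched via torchrun, as in the previous sketch

model = DDP(torch.nn.Linear(1024, 1024))  # toy stand-in for a large DNN

# PowerSGD compresses each gradient bucket to a low-rank factorization before
# communicating it; rank and warm-up length are tunable accuracy/traffic trade-offs.
state = powerSGD.PowerSGDState(
    process_group=None,           # use the default process group
    matrix_approximation_rank=2,  # higher rank: more traffic, higher fidelity
    start_powerSGD_iter=10,       # exact all-reduce for the first few steps
)
model.register_comm_hook(state, powerSGD.powerSGD_hook)

# Training then proceeds exactly as before; only the communication
# performed inside backward() changes.
```

FastML's own methods may well differ; the sketch only shows the general shape of an API-level intervention that reduces communication volume without changing the training loop itself.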
More information & hyperlinks
Web resources: | https://cordis.europa.eu/project/id/101158077
Start date: | 01-05-2024 |
End date: | 31-10-2025 |
Total budget - Public funding: | 150 000,00 Euro
Cordis data
Status: | SIGNED
Call topic: | ERC-2023-POC
Update Date: | 23-11-2024