MEMFLUX | An in-memory dataflow accelerator for deep learning

Summary
Deep neural networks (DNNs), loosely inspired by biological neural networks, consist of parallel processing units called neurons interconnected by plastic synapses. By tuning the weights of these interconnections, these networks are able to perform certain cognitive tasks remarkably well. DNNs are being deployed all the way from cloud data centers to edge servers and even end devices, and are projected to represent a market worth tens of billions of euros for semiconductor companies alone in the next few years. There is a significant effort towards the design of custom ASICs based on reduced-precision arithmetic and highly optimized dataflow. However, one of the primary sources of inefficiency, namely the need to shuttle millions of synaptic weight values between the memory and processing units, remains unaddressed. In-memory computing is an emerging computing paradigm that addresses this processor-memory dichotomy. For example, a computational memory unit with resistive memory (memristive) devices organized in a crossbar configuration is capable of performing matrix-vector multiply operations in place by exploiting Kirchhoff's circuit laws. Moreover, the computational time complexity reduces to O(1). The goal of this project is to prototype such an in-memory computing accelerator for ultra-low-latency, ultra-low-power DNN inference.
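The in-place matrix-vector multiply described above can be sketched numerically. In a memristive crossbar, each device at row i, column j stores a weight as a conductance G[i][j]; applying voltages V[i] to the rows, Ohm's law yields a current V[i]*G[i][j] through each device, and Kirchhoff's current law sums these along every column, so all column currents (the product G^T V) are available in one read step, independent of the matrix size. The function name and numbers below are illustrative assumptions, and the model ignores device non-idealities such as conductance noise and wire resistance.

```python
def crossbar_mvm(conductances, voltages):
    """Ideal crossbar read: column currents I_j = sum_i V_i * G_ij.

    All columns are evaluated in a single analog step on real hardware;
    the Python loop here only emulates that parallel summation.
    """
    rows = len(voltages)
    cols = len(conductances[0])
    return [sum(voltages[i] * conductances[i][j] for i in range(rows))
            for j in range(cols)]

# A 2x3 weight matrix encoded as device conductances (arbitrary units).
G = [[0.5, 1.0, 0.2],
     [0.3, 0.0, 0.8]]
V = [1.0, 2.0]  # input activations applied as row voltages

print(crossbar_mvm(G, V))  # column currents, approximately [1.1, 1.0, 1.8]
```

On hardware, the O(1) claim refers to this read: the time to obtain the output currents does not grow with the number of rows or columns, unlike a digital multiply-accumulate loop.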
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/966764
Start date: 01-07-2021
End date: 31-12-2022
Total budget / public funding: 150 000,00 Euro
Cordis data

Status

CLOSED

Call topic

ERC-2020-POC

Update Date

27-04-2024
Structured mapping
Horizon 2020
H2020-EU.1. EXCELLENT SCIENCE
H2020-EU.1.1. EXCELLENT SCIENCE - European Research Council (ERC)
ERC-2020
ERC-2020-PoC