Summary
Since the breakthrough application of deep neural network (DNN) algorithms to speech and image recognition, the number of applications that use DNNs has exploded, achieving the highest accuracy in a myriad of contexts (health, robotics, finance, gaming, etc.). However, their superior accuracy comes at the cost of high computational complexity.
Current approaches to this challenge are cloud-based, incurring high power consumption and high latency because of their communication needs. Although cloud approaches are suitable for some contexts, they are suboptimal for real-time applications running on embedded or mobile devices, which have limited battery capacity and require fast responses.
REEXEN appears to bring a solution to this challenge: an extremely efficient AI processor (a semiconductor chip) specifically designed to support DNN-based edge applications. By exploiting state-of-the-art semiconductor technologies in mixed-signal circuits and in-memory processing, REEXEN achieves best-in-class power efficiency when executing DNN algorithms, measured as maximum throughput per unit of energy consumed (30 TOPS/W). By reducing the "distance" between data generation (sensors), data storage (memory) and data processing (the core processor), and by eliminating A/D conversions, REEXEN also achieves minimum latency (
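To put the 30 TOPS/W figure in perspective, a rough back-of-the-envelope sketch follows. The per-inference operation count used here is an assumed example value for illustration, not a figure from the source.

```python
# Illustrative arithmetic only: relates a TOPS/W efficiency figure to a
# hypothetical per-inference energy budget. The workload size
# (ops_per_inference) is an assumed example, not a REEXEN specification.

def inferences_per_joule(tops_per_watt: float, ops_per_inference: float) -> float:
    """Inferences achievable per joule of energy at a given TOPS/W efficiency."""
    ops_per_joule = tops_per_watt * 1e12  # 1 TOPS/W = 1e12 operations per joule
    return ops_per_joule / ops_per_inference

# Example: a small vision DNN assumed to need ~10 GOPs per inference
rate = inferences_per_joule(30.0, 10e9)
print(rate)  # 3000 inferences per joule of energy
```

Under this assumption, a 1 W power budget would sustain on the order of 3,000 such inferences per second, which is the kind of headroom battery-powered edge devices need.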
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/889805
Start date: 01-01-2020
End date: 30-04-2020
Total budget - Public funding: 71 429,00 Euro - 50 000,00 Euro
Cordis data
Status: CLOSED
Call topic: EIC-SMEInst-2018-2020
Update date: 27-10-2022