Summary
The Grenadyn project will demonstrate that assemblies of imperfect, dynamical nanodevices can self-learn through physical principles, as biological neurons and synapses do, with performance comparable to the best artificial intelligence (AI) algorithms. To this end, Grenadyn’s networks will learn by minimizing their effective energy together with the recognition error.
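In the standard Equilibrium Propagation formulation that the project builds on, this joint minimization takes a concrete form; the notation below is ours, added for illustration, not quoted from the project:

```latex
% Nudged objective (standard Equilibrium Propagation; notation illustrative):
% the network state s relaxes to a minimum of
F_\beta(s) \;=\; E(s) \;+\; \beta\, C(s),
% where E is the network's effective energy, C the recognition error
% (e.g. C = \tfrac{1}{2}\lVert y - d \rVert^2 for output y and target d),
% and \beta \ge 0 the nudging strength. Comparing the free (\beta = 0) and
% nudged equilibria, s^0 and s^\beta, yields a local learning rule:
\Delta W_{ij} \;\propto\; -\frac{1}{\beta}
  \left( \frac{\partial E}{\partial W_{ij}} \bigg|_{s^\beta}
       - \frac{\partial E}{\partial W_{ij}} \bigg|_{s^0} \right).
```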
The starting point of Grenadyn is Equilibrium Propagation, an algorithm developed by AI pioneer Yoshua Bengio that is rooted in physics. We will assemble memristive and spintronic nanocomponents into neural networks that perform pattern recognition through Equilibrium Propagation. We will show that these dynamical networks learn when their outputs are nudged towards the desired solution by a spring-like force, while nano-synapses and nano-neurons reorganize themselves towards equilibrium. We will also show that they can learn directly from data, without supervision.
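To make the learning procedure concrete, here is a minimal NumPy sketch of Equilibrium Propagation on a toy two-layer Hopfield-style network. All sizes, rates, and the nudging strength beta are illustrative assumptions, not Grenadyn’s actual implementation; note that the nudging term beta*(d - y) is literally a Hookean spring pulling the outputs toward the target:

```python
# Minimal Equilibrium Propagation sketch (illustrative, not the project's code).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2                 # toy layer sizes (hypothetical)
W1 = rng.normal(0.0, 0.5, (n_in, n_hid))     # symmetric input<->hidden couplings
W2 = rng.normal(0.0, 0.5, (n_hid, n_out))    # symmetric hidden<->output couplings

def rho(s):            # hard-sigmoid activation
    return np.clip(s, 0.0, 1.0)

def drho(s):           # its derivative (indicator of the linear region)
    return ((s >= 0.0) & (s <= 1.0)).astype(float)

def relax(x, h, y, d=None, beta=0.0, steps=100, eps=0.2):
    """Settle the state (h, y) by gradient descent on the energy.
    When beta > 0, the extra force beta*(d - y) is the 'spring' that
    nudges the outputs toward the target d."""
    for _ in range(steps):
        gh = drho(h) * (rho(x) @ W1 + rho(y) @ W2.T) - h
        gy = drho(y) * (rho(h) @ W2) - y
        if beta != 0.0:
            gy = gy + beta * (d - y)
        h, y = h + eps * gh, y + eps * gy
    return h, y

def eqprop_step(x, d, beta=0.5, lr=0.1):
    """One learning step: free phase, nudged phase, then a local
    contrastive update computed from the two equilibria."""
    global W1, W2
    h0, y0 = relax(x, np.zeros(n_hid), np.zeros(n_out))   # free phase
    hb, yb = relax(x, h0, y0, d=d, beta=beta)             # nudged phase
    W1 += (lr / beta) * (np.outer(rho(x), rho(hb)) - np.outer(rho(x), rho(h0)))
    W2 += (lr / beta) * (np.outer(rho(hb), rho(yb)) - np.outer(rho(h0), rho(y0)))

# Hypothetical usage: a single supervised example.
x = np.array([1.0, 0.0, 1.0, 0.0])   # input, clamped throughout
d = np.array([0.0, 1.0])             # one-hot target
for _ in range(200):
    eqprop_step(x, d)
_, y = relax(x, np.zeros(n_hid), np.zeros(n_out))
print("free-phase output after training:", y)   # should approach d
```

The update is local: each coupling changes only as a function of the activities of the two units it connects, which is what allows physical nano-synapses to implement it without a global backpropagation pass.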
We will make these networks highly resilient to imperfections through self-adaptation and digitization. We will demonstrate, by experiments and simulations, that our physical neural networks built from variable elements compute with an accuracy similar to that of software neural networks trained with backpropagation. We will produce a chip integrating nano-synaptic devices on CMOS and achieve state-of-the-art recognition rates on AI image benchmarks.
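To illustrate the kind of imperfections such a network must tolerate, a simulation can inject them directly into the weights; this is purely our illustration, and the device model, number of conductance levels, and spread sigma are hypothetical placeholders:

```python
# Illustrative only: two imperfections a physical network must tolerate.
import numpy as np

rng = np.random.default_rng(1)

def digitize(W, n_levels=16, w_max=1.0):
    """Snap each weight to the nearest of n_levels discrete conductance
    states in [-w_max, w_max] (the 'digitization' of the weights)."""
    levels = np.linspace(-w_max, w_max, n_levels)
    return levels[np.abs(W[..., None] - levels).argmin(axis=-1)]

def device_spread(W, sigma=0.1):
    """Multiplicative device-to-device variability around the target value."""
    return W * rng.normal(1.0, sigma, size=W.shape)

W_ideal = rng.normal(0.0, 0.5, (8, 2))          # weights the learning rule requests
W_physical = device_spread(digitize(W_ideal))   # what imperfect devices realize
```

In such a setting, resilience means that the learning loop keeps converging even though every programmed weight is realized only approximately.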
We will enhance the networks’ functionality by leveraging their dynamical properties through synchronization and time-delayed feedback. Finally, we will extend Grenadyn’s in-materio self-learning to any assembly of coupled dynamical nanodevices, opening novel horizons for multifunctional materials and devices.
Grenadyn’s scientific advances in condensed-matter physics, non-linear dynamics, electronics, and AI will lay the foundations for deep network chips that contain billions of nano-synapses and nano-neurons and self-learn with state-of-the-art accuracy.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101020684
Start date: 01-10-2022
End date: 30-09-2027
Total budget - Public funding: 2 462 587,00 Euro - 2 462 587,00 Euro
Cordis data
Status: SIGNED
Call topic: ERC-2020-ADG
Update Date: 27-04-2024