Article | REF: H5040 V1

Oscillatory neural networks for energy-efficient computing

Authors: Madeleine ABERNOT, Gabriele BOSCHETTO, Stefania CARAPEZZI, Corentin DELACOUR, Thierry GIL, Aida TODRI-SANIAL

Publication date: November 10, 2022

Overview

ABSTRACT

Artificial intelligence makes it possible to solve increasingly complex problems, but its algorithms demand substantial computational resources and power, especially during the learning phase, when neural networks are trained on large amounts of data. Today, the need to process data where it is generated calls for low-power architectures.

This article presents a novel computing approach based on oscillatory neural networks (ONNs), which are inspired by the oscillations observed in the brain. ONNs offer an alternative low-power computing paradigm that enables online learning and inference on edge IoT devices.

AUTHORS

  • Madeleine ABERNOT: Doctoral student at the University of Montpellier, Montpellier Laboratory of Computer Science, Robotics and Microelectronics (LIRMM)

  • Gabriele BOSCHETTO: Post-doctoral fellow at CNRS, Montpellier Laboratory of Computer Science, Robotics and Microelectronics (LIRMM)

  • Stefania CARAPEZZI: Post-doctoral fellow at CNRS, Montpellier Laboratory of Computer Science, Robotics and Microelectronics (LIRMM)

  • Corentin DELACOUR: Doctoral student at the University of Montpellier, Montpellier Laboratory of Computer Science, Robotics and Microelectronics (LIRMM)

  • Thierry GIL: CNRS Research Engineer, Montpellier Laboratory of Computer Science, Robotics and Microelectronics (LIRMM)

  • Aida TODRI-SANIAL: CNRS Research Director, Montpellier Laboratory of Computer Science, Robotics and Microelectronics (LIRMM)

INTRODUCTION

Embedded artificial intelligence requires very substantial computing power to process, in real time, large quantities of data from many types of devices, such as the cameras and proximity sensors of an autonomous vehicle. Today, data processing takes place mainly in the cloud or in data centers, but this requires high bandwidth for data transfer and raises concerns about data security, processing delays, and energy consumption. To solve these problems, the scientific and industrial community has focused on developing low-power architectures that enable data to be processed locally. This becomes all the more crucial as billions of edge IoT devices will be connected in the coming years.

So how can we drastically reduce the power consumption of these electronic systems? Research is now turning to new computing units that can process more data in less time while consuming less energy. Conventional processors (CPUs) are based on the von Neumann architecture, in which the computing unit and the memory are two separate units, so transferring data between memory and the computing unit costs both time and energy. To speed up processing, graphics processors (GPUs) replicate this architecture (computing unit plus memory) many times to perform calculations in parallel. However, this type of processor still consumes far too much energy transferring data between processor and memory. To reduce these transfers, in-memory computing architectures have also been proposed, in which memory cells are given simple computing capabilities. More recently, research has turned to architectures inspired by the nature and functioning of the human brain, known as neuromorphic circuits, to reduce the energy consumption of electronic systems. Researchers estimate that the human brain requires only around 20 watts to perform extremely complex tasks.

This is how Artificial Neural Networks (ANNs) came into being. ANNs are inspired by biological neural networks and their ability to reproduce and model non-linear processes. An ANN contains a multitude of neurons connected by synapses: the neurons perform the calculations, while the synapses, through their weights, constitute the system's memory.
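
To make this concrete, here is a minimal Python sketch (with purely illustrative values and names not taken from the article) of a single artificial neuron: the synaptic weights hold the learned information, and the neuron combines its inputs through a weighted sum followed by a non-linear activation.

import numpy as np

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs through the synaptic weights (the memory),
    # followed by a sigmoid non-linearity computed by the neuron.
    pre_activation = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-pre_activation))

# Illustrative example: three inputs and arbitrary (hypothetical) weights.
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.8, 0.2, -0.5])
print(neuron(x, w, bias=0.1))   # single non-linear output in (0, 1)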

Several types of ANN have been designed, the most widely used being single- and multi-layer perceptrons, convolutional neural networks, spiking neural networks, and recurrent neural networks. Among recurrent neural networks, oscillatory neural networks (ONNs) have recently emerged as an interesting alternative for solving complex problems with low power consumption.
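
As a rough illustration of the ONN principle, the phase-domain sketch below uses a common Kuramoto-style abstraction of coupled oscillators (the pattern, coupling rule, and integration parameters are illustrative assumptions, not taken from the article): each neuron is an oscillator, information is encoded in the oscillators' relative phases, the coupling weights play the role of synapses, and with Hebbian couplings the network relaxes toward a stored phase pattern.

import numpy as np

def hebbian_weights(pattern):
    # Couplings that store one binary (+1/-1) pattern as a phase pattern.
    w = np.outer(pattern, pattern).astype(float)
    np.fill_diagonal(w, 0.0)        # no self-coupling
    return w / len(pattern)

def onn_relax(theta, weights, steps=200, dt=0.1):
    # Integrate d(theta_i)/dt = sum_j W_ij * sin(theta_j - theta_i)
    # with a simple explicit Euler scheme.
    for _ in range(steps):
        phase_diff = theta[None, :] - theta[:, None]   # theta_j - theta_i
        theta = theta + dt * np.sum(weights * np.sin(phase_diff), axis=1)
    return theta

# Store the pattern [+1, +1, -1, -1] in the coupling weights (the synapses).
stored = np.array([1, 1, -1, -1])
W = hebbian_weights(stored)

# Present a noisy version of the stored pattern as the initial phases
# (in-phase ~ 0 encodes +1, anti-phase ~ pi encodes -1).
rng = np.random.default_rng(0)
theta0 = np.where(stored > 0, 0.0, np.pi) + 0.4 * rng.standard_normal(4)

# The oscillators settle back into the stored phase relationship.
print(np.sign(np.cos(onn_relax(theta0, W))))   # expected: [ 1.  1. -1. -1.]

Retrieving a stored pattern from a noisy initial state in this way is the associative-memory behaviour that, among other uses, makes ONNs attractive for low-power pattern recognition.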

This article gives an in-depth description of the ONN-based computing paradigm, implementation, and applications. First,...

KEYWORDS

oscillatory neural network   |   edge computing   |   energy efficient   |   associative memory   |   solving optimisation problems


This article is included in

Software technologies and System architectures
