[I'MTech] When AI keeps an ear on nursing home residents

The OSO-AI start-up has recently completed a €4 million funding round. Its artificial intelligence solution, which can detect incidents such as falls or cries for help, has convinced investors as well as a number of nursing homes where it has been installed. The technology was developed in part through the work of Claude Berrou, a researcher at IMT Atlantique and the company’s co-founder and scientific advisor.

OSO-AI, a company incubated at IMT Atlantique, is the result of an encounter between Claude Berrou, a researcher at the engineering school, and Olivier Menut, an engineer at STMicroelectronics. Together, they started to develop artificial intelligence that can recognize specific sounds. After completing a €4 million funding round, the start-up now plans to fast-track the development of its product: ARI (French acronym for Smart Resident Assistant), a solution designed to alert staff in the event of an incident inside a resident’s room.

Claude Berrou, co-founder and scientific advisor of the start-up OSO-AI

The device takes the form of an electronic unit equipped with high-precision microphones. ARI’s goal is to “listen” to the sound environment in which it is placed and send an alert whenever it picks up a worrying sound. The information is then transmitted via Wi-Fi and processed in the cloud.

“Normally, in nursing homes, there is only a single person on call at night,” says Claude Berrou. “They hear a cry for help at 2 am but don’t know which room it came from. So they have to go find the resident in distress, losing precious time before they can intervene – and waking up many residents in the process. With our system, the caregiver on duty receives a message such as, ‘Room 12, 1st floor, cry for help,’ directly on their mobile phone.” The technology therefore saves time that may be life-saving for an elderly person. It is also less intrusive than a surveillance camera, so it is better accepted, especially since it pauses whenever someone else enters the room. Moreover, it helps relieve the workload and mental burden placed on the staff.

OSO-AI is inspired by how the brain works

But how can an information system hear and analyze sounds? The device developed by OSO-AI relies on machine learning, a branch of artificial intelligence, and artificial neural networks. In a certain way, this means that it tries to imitate how the brain works. “Any machine designed to reproduce basic properties of human intelligence must be based on two separate networks,” explains the IMT Atlantique researcher. “The first is sensory-based and innate: it allows living beings to react to external factors based on the five senses. The second is cognitive and varies depending on the individual: it supports long-term memory and leads to decision-making based on signals from diverse sources.”

How is this model applied to the ARI unit and the computers that receive the preprocessed signals? A first “sensory-based” layer captures sounds through the microphones and turns them into representative vectors. These vectors are then compressed and sent to a second “cognitive” layer, which analyzes the information, relying in particular on neural networks, in order to decide whether or not to issue an alert. It is by comparing new data to data already stored in its memory that the system is able to make a decision. For example, if a cognitively-impaired resident tends to call for help all the time, the system must be able to decide not to warn the staff every time.
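The two-stage idea described above can be sketched in a few lines of code. This is only an illustrative toy, not OSO-AI’s actual pipeline: the feature extraction (log-energy in frequency bands), the similarity-based decision rule, and all names and thresholds are assumptions standing in for the trained neural networks the article mentions.

```python
# Toy sketch of a two-stage "sensory / cognitive" audio alert pipeline.
# Hypothetical simplification: a real system would use trained neural
# networks, not a fixed similarity threshold.
import numpy as np

def sensory_layer(audio_frame: np.ndarray, n_bands: int = 32) -> np.ndarray:
    """'Sensory' stage: turn a raw audio frame into a compact
    representative vector (log-energy in frequency bands)."""
    spectrum = np.abs(np.fft.rfft(audio_frame))
    bands = np.array_split(spectrum, n_bands)
    return np.log1p(np.array([b.sum() for b in bands]))  # compressed vector

def cognitive_layer(vector: np.ndarray, memory: list,
                    threshold: float = 0.9) -> bool:
    """'Cognitive' stage: compare the new vector with stored examples of
    alert-worthy sounds and decide whether to raise an alert."""
    for ref in memory:
        sim = np.dot(vector, ref) / (
            np.linalg.norm(vector) * np.linalg.norm(ref) + 1e-9)
        if sim > threshold:  # close enough to a known worrying sound
            return True
    return False

# Usage: one stored "cry for help" template versus a silent frame.
rng = np.random.default_rng(0)
cry = sensory_layer(rng.normal(size=16000))  # stand-in for a recorded cry
memory = [cry]
print(cognitive_layer(cry, memory))                           # alert raised
print(cognitive_layer(sensory_layer(np.zeros(16000)), memory))  # no alert
```

The division of labor mirrors the article: the sensory stage runs close to the microphones and only compressed vectors travel onward, which keeps bandwidth low and avoids streaming raw audio to the cloud.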


Published on 19.01.2021

by Pierre-Hervé VAILLANT