Deep Learning-Powered Neuroprosthesis Restores Words to Paralyzed Man
The world’s first ‘speech neuroprosthesis’ technology allows for the decoding of full words from the brain activity of a paralyzed individual.
Researchers at UC San Francisco have developed technology to restore autonomous communication in paralyzed people by directly decoding their brain activity.
Spearheaded by UCSF Neurosurgeon Edward Chang, MD, the clinical research trial successfully demonstrated the “direct decoding of full words from the brain activity” of a paralyzed and speech-impaired participant.
Communication was achieved by implanting a high-density multielectrode array (a grid of microscopic electrodes on the underside of a chip) over the area of the brain that controls speech, where it records electrical activity such as the firing of neurons.
The researchers recorded 22 hours of the participant's cortical activity while instructing him to attempt to say individual words from a 50-word vocabulary set.
As the participant attempted to pronounce words from the 50-word set, the neural data was sampled and classified by a specialized deep-learning algorithm, which was trained to identify which word the participant was attempting to say.
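The classification step can be sketched roughly as follows. This is a hypothetical minimal model, not the study's actual architecture: a single linear layer plus softmax standing in for the deep-learning classifier, mapping a window of multichannel neural features to probabilities over the vocabulary. All shapes, names, and values here are illustrative assumptions.

```python
import numpy as np

# Illustrative subset of the 50-word vocabulary (hypothetical).
VOCAB = ["hello", "thirsty", "family", "good", "water"]

def classify_window(neural_window, weights, bias):
    """Map a (time x channels) window of neural features to word
    probabilities. A stand-in for the study's classifier: a single
    linear layer over flattened features, followed by a softmax."""
    logits = neural_window.ravel() @ weights + bias
    logits -= logits.max()          # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

rng = np.random.default_rng(0)
# Hypothetical shapes: 100 time steps x 128 electrode channels.
weights = rng.normal(size=(100 * 128, len(VOCAB)))
bias = np.zeros(len(VOCAB))
window = rng.normal(size=(100, 128))

probs = classify_window(window, weights, bias)
print(VOCAB[int(np.argmax(probs))])  # the classifier's best guess
```

In the real system such a classifier is trained on the recorded cortical activity; here the weights are random, so only the shape of the computation is meaningful.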
The researchers also employed natural-language processing (NLP) models to predict the next word the participant would be likely to say in a sequence, enabling the decoding of the complete sentences he attempted to verbalize.
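The idea of combining the classifier's word probabilities with a language model's next-word predictions can be sketched as below. This is a toy greedy decoder over a hypothetical bigram model, not the study's method; the vocabulary, probabilities, and function names are all assumptions for illustration.

```python
import numpy as np

# Toy bigram language model over a tiny vocabulary (hypothetical values).
VOCAB = ["i", "am", "thirsty", "good"]
BIGRAM = {
    ("<s>", "i"): 0.8, ("<s>", "good"): 0.2,
    ("i", "am"): 0.9, ("i", "thirsty"): 0.1,
    ("am", "thirsty"): 0.6, ("am", "good"): 0.4,
}

def decode_sentence(neural_probs):
    """Greedy decode: at each step, weight the classifier's word
    probabilities by the bigram model's prediction given the previous
    word, then pick the highest-scoring word."""
    sentence, prev = [], "<s>"
    for step_probs in neural_probs:
        scores = [step_probs[i] * BIGRAM.get((prev, w), 1e-6)
                  for i, w in enumerate(VOCAB)]
        prev = VOCAB[int(np.argmax(scores))]
        sentence.append(prev)
    return " ".join(sentence)

# Classifier outputs for three attempted words (rows sum to 1; hypothetical).
neural_probs = np.array([
    [0.5, 0.3, 0.1, 0.1],   # ambiguous between "i" and "am"
    [0.2, 0.4, 0.3, 0.1],
    [0.1, 0.2, 0.4, 0.3],
])
print(decode_sentence(neural_probs))  # → "i am thirsty"
```

Even when the neural classifier is uncertain at a given step, the language-model prior pulls the decoder toward likely word sequences, which is the intuition behind using NLP models to assemble full sentences.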
“To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak,” stated Dr Chang, the Joan and Sanford Weill Chair of Neurological Surgery at UCSF and senior author on the study published in the New England Journal of Medicine.
“It shows strong promise to restore communication by tapping into the brain’s natural speech machinery.”
The system decoded words at a rate of 18 words per minute, with accuracy of up to 93%.
David Moses, a postdoctoral engineer on the project and lead author of the study, was optimistic about the results of the early trial, which provided proof of principle.
“We were thrilled to see the accurate decoding of a variety of meaningful sentences,” said Moses in a press statement.
“We’ve shown that it is actually possible to facilitate communication in this way and that it has potential for use in conversational settings.”
The story plausibly makes for the most riveting TL;DR you'll see on the internet: a group of doctors read a man's mind through a chip.