Researchers at UC San Francisco have developed a new speech neuroprosthesis designed to enable a man with severe paralysis to communicate in complete sentences. The neuroprosthesis translates signals sent from the brain to the vocal tract directly into words that appear as text on a screen. The breakthrough was developed in collaboration with a clinical trial participant and builds on more than a decade of effort.
UCSF neurosurgeon Edward Chang, MD, said that to his knowledge this is the first successful demonstration of directly decoding full words from the brain activity of a paralyzed person who cannot speak. The breakthrough shows the promise of restoring communication by tapping into the brain's natural speech machinery. Losing the ability to speak is, unfortunately, not uncommon, owing to strokes, accidents, and disease.
Being unable to communicate does significant harm to a person's health and well-being. While most studies in the field of communication neuroprosthetics have focused on restoring communication through spelling-based approaches, typing out text letter by letter, the new study is different: Chang and his team are translating signals intended to control the muscles of the vocal system into whole words, rather than signals intended to move a cursor.
Chang said his team's approach taps into the natural, fluid aspects of speech and promises faster, more organic communication. In ordinary speech, people communicate at rates as high as 150 to 200 words per minute. Spelling-based approaches cannot come close to that speed, making communication much slower.
Capturing brain signals and translating them directly into words is much closer to how we normally speak. Chang has worked toward this speech neuroprosthesis over the past decade. He made progress toward the goal with the help of patients at the UCSF Epilepsy Center who were undergoing neurosurgery to pinpoint the origins of their seizures, using electrode arrays placed on the surface of their brains. These patients, all of whom had normal speech, volunteered to have their brain recordings analyzed for speech-related activity.
Chang and colleagues mapped the patterns of cortical activity associated with the vocal tract movements that produce each consonant and vowel. To translate those findings into the recognition of full words, they developed methods for decoding these patterns in real time, along with statistical language models to improve accuracy. The first participant in the trial was a man in his late 30s who suffered a brainstem stroke more than 15 years ago that damaged the connections between his brain and his vocal tract and limbs.
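To illustrate the general idea of combining a decoder's word probabilities with a statistical language model, here is a toy sketch. It is not the UCSF team's actual system: the vocabulary, probabilities, and bigram scores below are invented, and a hypothetical neural decoder is assumed to have already produced per-step word probabilities from brain activity. A simple Viterbi search then picks the sentence that best balances the decoder's evidence against the language model.

```python
# Toy sketch only: re-ranking a hypothetical decoder's word probabilities
# with an invented bigram language model via Viterbi search.
import math

VOCAB = ["i", "am", "thirsty", "good"]

# Pretend outputs of a neural decoder: P(word | brain activity) per step.
emissions = [
    {"i": 0.6, "am": 0.2, "thirsty": 0.1, "good": 0.1},
    {"i": 0.1, "am": 0.5, "thirsty": 0.2, "good": 0.2},
    {"i": 0.1, "am": 0.1, "thirsty": 0.5, "good": 0.3},
]

# Invented bigram language model: P(next word | previous word),
# with a small floor probability for unseen pairs.
bigram = {("i", "am"): 0.8, ("am", "thirsty"): 0.6, ("am", "good"): 0.3}

def lm(prev, nxt):
    return bigram.get((prev, nxt), 0.05)

def decode(emissions):
    # Track the best log-score and word path ending in each vocabulary word.
    scores = {w: math.log(emissions[0][w]) for w in VOCAB}
    paths = {w: [w] for w in VOCAB}
    for em in emissions[1:]:
        new_scores, new_paths = {}, {}
        for w in VOCAB:
            # Best previous word under decoder score + language-model score.
            prev = max(VOCAB, key=lambda p: scores[p] + math.log(lm(p, w)))
            new_scores[w] = (scores[prev] + math.log(lm(prev, w))
                             + math.log(em[w]))
            new_paths[w] = paths[prev] + [w]
        scores, paths = new_scores, new_paths
    best = max(VOCAB, key=lambda w: scores[w])
    return paths[best]

print(decode(emissions))  # -> ['i', 'am', 'thirsty']
```

The language model nudges the search toward word sequences that form plausible sentences, which is why such models can raise accuracy even when the per-word decoder is noisy.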
As a result, he has extremely limited head, neck, and limb movement, and he communicates by using a pointer attached to a baseball cap to poke letters on a screen. The participant worked with Chang and his team to build a 50-word vocabulary that could be recognized from his brain activity, enough to compose hundreds of sentences expressing concepts relevant to his daily life.