UC San Francisco scientists have restored communication with a paralyzed man, using a computer to translate messages sent directly from his brain.
The remarkable achievement is a step toward a time when implantable prostheses can help people who have lost the ability to speak due to stroke, spinal cord injury or neurodegenerative disease.
“This trial tells us that, yes, we can recover words from a person who has lost speech,” said Dr. Edward Chang, the UCSF neurosurgeon who led the study. “It’s just the beginning, but it certainly tells us it’s possible.”
The project is considered the first successful demonstration of direct decoding of full words from brain activity. Known as “BRAVO” (Brain-Computer Interface Restoration of Arm and Voice), the study was published in Wednesday’s issue of the New England Journal of Medicine.
Just as the brain sends signals to move an arm or leg, so it sends signals to the vocal cords to make sound. But people with vocal paralysis cannot control these muscles. Their brains prepare messages for delivery, but those messages get stuck.
The scientists tapped into this system by placing a flexible pad of electrodes over the parts of the brain that control these vocal muscles. They then decoded the signals into words that are displayed on a screen.
When asked “How are you today?” and “Do you want some water?” the patient’s answers appeared on the computer screen.
“I’m very good,” he said. “No, I’m not thirsty.”
The volunteer was a man in his late twenties who suffered a devastating stroke 15 years ago after undergoing surgery for injuries sustained in a car accident. Movement in his head, neck and limbs has been extremely limited since the injury. He uses an electric wheelchair and usually communicates using a pointer attached to a baseball cap to poke letters on a screen.
The 128-electrode array sits gently on the surface of the brain without penetrating the tissue. This method is safe and has been used for years to monitor seizure activity in patients with epilepsy.
The journal reports that the system can translate his words from brain activity at a rate of up to 18 words per minute. Typical speech runs 150 to 200 words per minute.
Though slower than normal speech, that is faster than other attempts at communication neuroprostheses, which use eye movements or muscle twitches to restore communication by typing one letter at a time.
Even faster decoding should be possible, according to Chang.
The decoded sentences were up to 93% accurate, aided by an “auto-correct” feature similar to the one used in text messaging.
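The study does not describe this software in detail, but a texting-style auto-correct can be illustrated with a minimal sketch: snapping a noisily decoded word to the closest entry in a fixed vocabulary by edit distance. The vocabulary and the misdecoded words below are invented for illustration.

```python
# Hypothetical sketch of vocabulary-constrained auto-correct.
# The vocabulary and the noisy decoder outputs are invented examples.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def autocorrect(word: str, vocabulary: list[str]) -> str:
    """Return the vocabulary word closest to the decoded word."""
    return min(vocabulary, key=lambda v: edit_distance(word, v))

VOCAB = ["good", "water", "music", "family", "computer", "thirsty"]

print(autocorrect("watr", VOCAB))      # -> "water"
print(autocorrect("compuper", VOCAB))  # -> "computer"
```

A real decoder would weigh candidates by probability rather than raw distance, but the principle is the same: errors are corrected by restricting output to the known vocabulary.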
“In many cases, they still have the information needed to produce fluent speech. We just need the technology to allow them to express it,” Chang said.
“This pioneering demonstration of how a person can generate text by trying to speak holds promise for restoring function for people with amyotrophic lateral sclerosis, cerebral palsy, stroke or other disorders,” neurologists Dr. Sydney Cash of Harvard Medical School and Dr. Leigh Hochberg of Massachusetts General Hospital wrote in an accompanying editorial.
“Ultimately, success will be marked by how easily our patients can share their thoughts with all of us.”
The work builds on years of earlier innovation.
For years, Chang’s lab focused on fundamental questions about how brain circuits interpret and produce speech – in particular, how the brain controls the vocal tract, coordinating the lips, jaw, tongue and larynx.
“We knew enough to ask a very basic question: If we now know how speech works when people speak normally, how can we use this information for someone who has lost the ability to speak after being paralyzed?” Chang said.
Then, with colleagues at the UCSF Weill Institute for Neurosciences, the team listened to how brain cells fire as they tell the vocal organs to move.
The team recorded this brain information as volunteers with normal speech – who temporarily had small recording electrodes placed on the surface of their brains – answered common questions.
From these recordings, they created a map of the brain’s signal patterns.
Postdoctoral engineer David Moses created a set of machine-learning algorithms to decode speech from the brain signals. A statistical language model further improved accuracy.
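The study’s code is not reproduced here, but the general idea of combining a word classifier with a statistical language model can be sketched as follows. All probabilities below are invented for illustration: a classifier assigns each attempted word a probability from the brain signals, and a bigram language model nudges the final sentence toward fluent word sequences.

```python
# Hypothetical sketch: Viterbi decoding that combines per-word classifier
# probabilities with a bigram language model. All numbers are made up.
import math

VOCAB = ["i", "am", "not", "thirsty", "good"]

# P(word | previous word): a tiny invented bigram model. "<s>" = sentence start.
BIGRAM = {
    ("<s>", "i"): 0.8, ("<s>", "good"): 0.2,
    ("i", "am"): 0.9, ("i", "not"): 0.1,
    ("am", "not"): 0.5, ("am", "good"): 0.3, ("am", "thirsty"): 0.2,
    ("not", "thirsty"): 0.7, ("not", "good"): 0.3,
}
FLOOR = 1e-4  # small probability for unseen bigrams / unlikely words

def viterbi(frame_probs):
    """frame_probs: one dict per attempted word, mapping word -> P(word | signals).
    Returns the word sequence with the highest combined score."""
    paths = {"<s>": (0.0, [])}  # previous word -> (log-prob, sequence so far)
    for probs in frame_probs:
        new_paths = {}
        for word in VOCAB:
            emit = math.log(probs.get(word, FLOOR))
            new_paths[word] = max(
                (score + math.log(BIGRAM.get((prev, word), FLOOR)) + emit,
                 seq + [word])
                for prev, (score, seq) in paths.items()
            )
        paths = new_paths
    return max(paths.values())[1]

# The classifier is unsure between similar words; the language model
# resolves the ambiguity toward a fluent sentence.
frames = [
    {"i": 0.6, "am": 0.4},
    {"am": 0.5, "i": 0.5},
    {"not": 0.6, "good": 0.4},
    {"thirsty": 0.7, "good": 0.3},
]
print(" ".join(viterbi(frames)))  # -> "i am not thirsty"
```

This is only a toy instance of the technique; the actual system used a neural classifier over the patient’s 50-word vocabulary with a far richer language model.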
In the new BRAVO study, the electrodes were connected to a computer by a cable attached to a port in the patient’s head.
Over 1.5 years, they worked with him to build a 50-word vocabulary, including “good,” “water,” “music,” “family” and “computer.” These words were enough to create hundreds of sentences about his daily life.
The method does not read minds.
“Internal thoughts are really complex … they’re distributed throughout the brain,” Chang said. “These are not in a particular part of the brain and we are far from understanding how it works.”
Instead, the system decodes what the patient is trying to say out loud. “These are signals that have been disconnected from the vocal tract by a stroke or another type of brain injury,” Chang said.
It is not yet known whether this method will be clinically practical for many people. It has been used in only a single person, so it may not work as well for others.
According to the team, more work remains to improve this approach. They plan to build systems with higher data resolution to record more information from the brain, faster. They want to expand the vocabulary. And they dream of creating a system that can translate these very complex brain signals into spoken words – not just text.
Chang said, “What this means is that people who are suffering because they can’t communicate with their loved ones, or tell their caregivers about their basic needs, will be able to express those essential feelings and emotions.”