Scientists create decoder to turn brain activity into speech
Scientists have developed a decoder that can translate brain activity directly into speech.
In future the brain-machine interface could restore speech to people who have lost their voice through paralysis and conditions such as throat cancer, amyotrophic lateral sclerosis (ALS) and Parkinson’s disease.
“For the first time … we can generate entire spoken sentences based on an individual’s brain activity,” said Edward Chang, a professor of neurological surgery at the University of California San Francisco (UCSF) and the senior author of the work. “This is an exhilarating proof of principle that, with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”
The technology promises to transform the lives of people who rely on painfully slow communication methods that make a casual conversation impossible. Speech synthesisers, like the one used by the late Stephen Hawking, typically involve spelling out words letter-by-letter using eye or facial muscle movements. They allow people to say about eight words a minute, compared with natural speech, which averages 100-150.
Kate Watkins, a professor of cognitive neuroscience at the University of Oxford, described the latest work as a “huge advance”. “This could be really important for providing people who have no means of producing language with a device that could deliver that for them,” she said.
Previous attempts to artificially translate brain activity into speech have mostly focused on unravelling how speech sounds are represented in the brain, and have had limited success.
Chang and his colleagues tried something different. They targeted the brain areas that send the instructions needed to coordinate the sequence of movements of the tongue, lips, jaw and throat during speech.
“We reasoned that if these speech centres in the brain are encoding movements rather than sounds, we should try to do the same in decoding those signals,” said Gopala Anumanchipalli, a speech scientist at UCSF and the paper’s first author.
Given the speed, subtlety and complexity of movements people make during speech, this task presented a fiendish computational challenge, outlined in a paper in the journal Nature.
The team recruited five volunteers who were about to undergo neurosurgery for epilepsy. In preparation for the operation, doctors temporarily implanted electrodes in the brain to map the sources of the patients’ seizures. While the electrodes were in place, the volunteers were asked to read several hundred sentences aloud, while the scientists recorded activity from a brain area known to be involved in speech production.
The aim was to decode speech using a two-step process: translating electrical signals in the brain to vocal movements and then translating those movements into speech sounds.
They did not need to collect data on the second step because other researchers had previously compiled a large library of data showing how vocal movements are linked to speech sounds. They could use this to reverse engineer what the vocal movements of their patients would look like.
They then trained a machine learning algorithm to be able to match patterns of electrical activity in the brain with the vocal movements this would produce, such as pressing the lips together, tightening vocal cords and shifting the tip of the tongue to the roof of the mouth. They describe the technology as a “virtual vocal tract” that can be controlled directly by the brain to produce a synthetic approximation of a person’s voice.
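For readers curious how such a two-stage decoder fits together, the sketch below is a minimal, hypothetical Python illustration of the idea: one model maps neural recordings to vocal-tract movements, a second maps those movements to acoustic features. Simple linear regressors stand in for the recurrent neural networks the researchers actually used, and all data, dimensions and variable names here are invented for the example.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical illustration of the two-stage decoding idea described above.
# Stage 1 maps recorded neural features to articulatory (vocal-tract) movements;
# stage 2 maps those movements to acoustic features that a vocoder could render
# as audio. The real study used recurrent neural networks trained on cortical
# recordings; simple ridge regressors stand in for them here.

rng = np.random.default_rng(0)

# Toy data: 1,000 time frames of 64 neural channels, 12 articulatory
# dimensions (lip, jaw, tongue, larynx positions) and 32 acoustic features.
neural = rng.normal(size=(1000, 64))
articulatory = neural @ rng.normal(size=(64, 12)) + 0.1 * rng.normal(size=(1000, 12))
acoustic = articulatory @ rng.normal(size=(12, 32)) + 0.1 * rng.normal(size=(1000, 32))

# Stage 1: brain activity -> vocal-tract kinematics.
stage1 = Ridge(alpha=1.0).fit(neural, articulatory)

# Stage 2: kinematics -> acoustic features (trained here on the same toy data;
# in the study this mapping could draw on existing articulatory-to-sound datasets).
stage2 = Ridge(alpha=1.0).fit(articulatory, acoustic)

# Decoding a new recording chains the two stages together.
new_neural = rng.normal(size=(10, 64))
predicted_movements = stage1.predict(new_neural)
predicted_acoustics = stage2.predict(predicted_movements)
print(predicted_acoustics.shape)  # (10, 32): one acoustic frame per neural frame
```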
Audio samples of the speech sound like a normal human voice, but with something akin to a strong foreign accent.
To test intelligibility, the scientists asked hundreds of people to listen through Amazon’s Mechanical Turk platform and transcribe samples. In one test they were given 100 sentences and a pool of 25 words to select from each time, including target words and random ones. The listeners transcribed the sentences perfectly 43% of the time.
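The 43% figure is a sentence-level exact-match rate. A minimal sketch of how such a score could be computed from listeners' transcriptions follows; the scoring function and example sentences are hypothetical, not taken from the study.

```python
# Hypothetical scoring of the listening test described above: each listener
# picks words from a small closed pool to transcribe a synthesised sentence,
# and a sentence counts as "perfect" only if every word matches the target.
def sentence_accuracy(targets, transcriptions):
    """Fraction of sentences transcribed exactly (case-insensitive)."""
    exact = sum(
        t.lower().split() == s.lower().split()
        for t, s in zip(targets, transcriptions)
    )
    return exact / len(targets)

targets = ["the cat sat on the mat", "she sells sea shells"]
heard = ["the cat sat on the mat", "she sells see shells"]
print(sentence_accuracy(targets, heard))  # 0.5
```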
Some sounds, such as “sh” and “z”, were synthesised accurately, and the general intonation and gender of the speaker were conveyed well, but the decoder struggled with “b” and “p” sounds.
Watkins said these imperfections would not necessarily prove a significant barrier to communication. In practice, people become familiar with the quirks of a person’s speech over time and can make logical inferences about what someone is saying.
The scientists were also able to decode new sentences the algorithm was not trained on and it appeared to translate between people, which is seen as crucial for such technology to be useful to patients.
The next big test will be to determine whether someone who cannot speak could learn to use the system without being able to train it on their own voice.
Potential to resharpen the rapier
In the book The Diving Bell and the Butterfly the French journalist Jean-Dominique Bauby reflected on his life after being paralysed by a stroke. He considered one of the greatest drawbacks to be losing his ability to tell a joke: “The keenest rapier grows dull and falls flat when it takes several minutes to thrust it home. By the time you strike, even you no longer understand what had seemed so witty before you started to dictate it, letter by letter.”
For those who lose their voice, whether through paralysis, stroke, neck cancer or neurodegenerative conditions, such as ALS, things have improved since the 1990s, when Bauby dictated his book letter-by-letter through blinks of his left eye. But only slightly.
Speech synthesisers, like Hawking’s, recognise the users’ movements automatically and can use predictive text-type technology to speed up the process. But they still require words to be typed out at frustratingly slow speeds.
The latest advance could, for the first time, allow people who have been deprived of speech through illness or injury to converse naturally, without extra effort. This has the potential to restore not only the ability to state one’s thoughts and needs, but also the joy and sparkle of conversation.