Neurologists are translating brain activity into synthetic, understandable speech
The tech could be used to help patients like Stephen Hawking who have impaired speech
Dr. Stephen Hawking may have lost his voice in 1985, but he was far from finished speaking with the rest of the world. With a custom-designed computer interface, he used a single cheek muscle to navigate an adaptive word predictor and slowly type out literature that remains at the forefront of human knowledge.
Scientists have tried to translate brain signals directly into audible speech, but have often come up short. One reason is that the brain does not generate speech directly; instead, it instructs the movements of our vocal tract that produce speech as an output. With this in mind, researchers from the University of California, San Francisco recorded activity from the ventral sensorimotor cortex, the brain region that coordinates the body parts shaping the vocal tract, to predict the corresponding motion of vocal organs called articulators: our tongue, lips, teeth, and palate. These predicted movements were then synthesized into a waveform of audible speech. To see how this works in action, check out this really cool video produced by the UCSF Neurosurgery team.
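To make that two-stage idea concrete, here is a minimal sketch in PyTorch. It is not the authors' model: the layer types, sizes, and class names (BrainToArticulators, ArticulatorsToSpeech) are illustrative assumptions. The point is the structure, which mirrors the pipeline described above: neural activity is first decoded into articulator motion, and only then into acoustic features that can be rendered as sound.

```python
# A minimal sketch of the two-stage decoding idea described above.
# All layer choices, sizes, and names are illustrative assumptions,
# not the architecture from the UCSF paper.
import torch
import torch.nn as nn

class BrainToArticulators(nn.Module):
    """Stage 1: map cortical recordings (per-electrode features over time)
    to articulator kinematics (tongue, lips, jaw, ...)."""
    def __init__(self, n_electrodes=256, n_articulator_dims=32, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulator_dims)

    def forward(self, neural):            # neural: (batch, time, n_electrodes)
        h, _ = self.rnn(neural)
        return self.out(h)                # (batch, time, n_articulator_dims)

class ArticulatorsToSpeech(nn.Module):
    """Stage 2: map articulator trajectories to acoustic features
    (e.g. a mel spectrogram) that a vocoder can turn into a waveform."""
    def __init__(self, n_articulator_dims=32, n_mel=80, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_articulator_dims, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_mel)

    def forward(self, kinematics):        # kinematics: (batch, time, n_articulator_dims)
        h, _ = self.rnn(kinematics)
        return self.out(h)                # (batch, time, n_mel)

# Chain the two stages: brain activity -> articulator motion -> acoustics.
stage1, stage2 = BrainToArticulators(), ArticulatorsToSpeech()
neural_recording = torch.randn(1, 200, 256)       # 1 trial, 200 time steps, 256 electrodes
predicted_kinematics = stage1(neural_recording)
predicted_acoustics = stage2(predicted_kinematics)
print(predicted_acoustics.shape)                  # torch.Size([1, 200, 80])
```

Splitting the problem this way means the second stage never touches neural data at all, which is exactly what makes the next finding possible.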
Using listeners recruited through Amazon Mechanical Turk to verify intelligibility, the team found that even a speaker silently miming the target sentences produced enough data to generate intelligible synthetic speech. Interestingly, the step between articulation and sound output generalized across participants, suggesting that a version of this technology could be used clinically to help restore impaired speech. However, the primary measurements were taken from participants who already had intracranial electrodes implanted over their cortical regions, so unfortunately this technology won't be available over the counter anytime soon.
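As a purely hypothetical illustration of what that generalization buys you, continuing the sketch above: the shared articulation-to-speech stage could be trained once and frozen, with only the brain-to-articulation stage fit to each new participant.

```python
# Hypothetical cross-participant reuse, continuing the sketch above.
# Training loops are omitted; the names and setup are illustrative only.
import torch

stage2_shared = ArticulatorsToSpeech()            # imagine this was trained on other speakers
for param in stage2_shared.parameters():
    param.requires_grad = False                   # freeze the shared articulation-to-speech stage

stage1_new_participant = BrainToArticulators()    # only this stage is fit to the new brain
optimizer = torch.optim.Adam(stage1_new_participant.parameters(), lr=1e-3)
```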