Diseases like Parkinson’s and ALS can cause a person to lose the ability to speak. Some children are born with speech disabilities such as stuttering, and others lose their speech through accidents and trauma. In all these cases, the person who loses the ability to speak also loses a great deal of confidence, because society places heavy demands on verbal expression.
Technology has helped these patients considerably through devices that convert head or eye movements into speech. However, these methods are tedious and operate very slowly: contemporary devices produce around 10 words per minute, while natural human speech occurs at around 150 words per minute.
Motivated by this gap between brain and speech, a team at the University of California, San Francisco developed a machine interface that can generate natural-sounding synthetic speech by using brain activity to control a virtual vocal tract — an anatomically detailed computer simulation including the lips, jaw, tongue, and larynx.
The researchers first recorded signals from participants’ brains while they were speaking. They mapped the brain activity that instructs the lips, tongue, jaw, and vocal cords, and then applied those maps to a program that produces synthetic speech.
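The pipeline described above can be sketched in code. The following is a minimal illustrative example, not the team's actual architecture: it assumes two simple linear models fitted by least squares, with made-up channel counts and synthetic stand-in data, just to show the two-stage idea of decoding brain activity into vocal-tract movements and then into acoustic features for a synthesizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative dimensions (not from the study):
N_NEURAL = 64      # recorded brain-signal channels
N_ARTIC = 12       # vocal-tract kinematic features (lips, jaw, tongue, larynx)
N_ACOUSTIC = 32    # acoustic features fed to a speech synthesizer

# Stand-in "training data": paired recordings made while the person spoke.
# Here the relationships are synthetic and exactly linear for illustration.
neural = rng.standard_normal((1000, N_NEURAL))
articulatory = neural @ rng.standard_normal((N_NEURAL, N_ARTIC))
acoustic = articulatory @ rng.standard_normal((N_ARTIC, N_ACOUSTIC))

# Stage 1: map brain activity to virtual vocal-tract movements.
W1, *_ = np.linalg.lstsq(neural, articulatory, rcond=None)
# Stage 2: map vocal-tract movements to acoustic features.
W2, *_ = np.linalg.lstsq(articulatory, acoustic, rcond=None)

def decode(brain_activity):
    """Two-stage decode: neural signals -> kinematics -> acoustics."""
    kinematics = brain_activity @ W1
    return kinematics @ W2

predicted = decode(neural)
```

The key design point reflected here is the intermediate articulatory representation: rather than mapping brain signals directly to sound, the decoder first reconstructs the movements of a simulated vocal tract and only then converts those movements into audio.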
The researchers’ next step is to design clinical trials for people with speech impairments and use the data collected there with the already-trained computer algorithm.