Here is the setup: A woman speaks Dutch into a microphone, while 11 tiny needles made of platinum and iridium record her brain waves.
The 20-year-old volunteer has epilepsy, and her doctors implanted the 2-millimeter probes, each studded with as many as 18 electrodes, into the front and left side of her brain in hopes of locating the point of origin of her seizures. But this bit of neural acupuncture is also a lucky break for a separate team of researchers, because the electrodes are in contact with parts of her brain responsible for producing and articulating spoken words.
This is the cool part. After the woman spoke (this is called "overt speech"), and after the computer matched the sounds with the activity in her brain, the researchers asked her to do it again. This time she barely whispered, miming the words with her mouth, tongue, and jaw. That is "intended speech." And then she did it once more, without moving at all: the researchers simply asked her to imagine saying the words.
It was a version of the way people talk, but run in reverse. In real life, we formulate silent thoughts in one part of our brains, another part turns them into words, and then others control the movement of the mouth, tongue, lips, and throat, producing audible sounds at the right frequencies to make speech. Here, the computers let the woman's brain jump the queue. They registered when she was thinking of speaking (the technical term is "imagined speech") and were able to play an audible signal, in real time, constructed from the signals coming from her brain. The sounds were not intelligible as words; this work, published at the end of September, is still somewhat preliminary. But the simple fact that it happened at the millisecond-scale speed of thought and action shows remarkable progress toward an emerging use of brain-computer interfaces: giving a voice to people who cannot speak.
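The real-time decoding described above can be sketched as a sliding-window loop: buffer a short chunk of multichannel neural data, map it to a frame of audio features, and move on. Everything below is an illustrative stand-in, not the study's actual pipeline: the channel count, window size, sampling rate, and the linear readout model are all assumptions made up for the sketch.

```python
import numpy as np

ELECTRODES = 11 * 18   # hypothetical channel count (11 probes x 18 contacts)
SAMPLE_RATE = 1000     # assumed 1 kHz neural sampling rate
WINDOW_MS = 50         # decode in short windows to stay near real time
WINDOW = SAMPLE_RATE * WINDOW_MS // 1000
SPECTRAL_BINS = 40     # audio features (e.g., mel bins) per decoded frame

rng = np.random.default_rng(0)
# Stand-in for a model fit during the "overt speech" phase: a linear map
# from per-channel neural features to audio features.
weights = rng.normal(size=(ELECTRODES, SPECTRAL_BINS))

def decode_window(neural_window):
    """Map one window (WINDOW x ELECTRODES) to one audio feature frame."""
    features = neural_window.mean(axis=0)   # crude per-channel summary
    return features @ weights               # linear readout

def stream_decode(neural_stream):
    """Slide over a recording window by window, collecting audio feature
    frames as they become available: the 'real time' decoding loop."""
    frames = []
    for start in range(0, len(neural_stream) - WINDOW + 1, WINDOW):
        frames.append(decode_window(neural_stream[start:start + WINDOW]))
    return np.array(frames)

# Simulated one-second recording: 1000 samples x 198 channels of noise.
recording = rng.normal(size=(SAMPLE_RATE, ELECTRODES))
audio_frames = stream_decode(recording)
print(audio_frames.shape)   # one feature frame per 50 ms window: (20, 40)
```

In a real system each feature frame would then be passed to a vocoder to produce audible sound; the point of the sketch is only the window-by-window latency structure.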
This deficit, caused by a neurological disorder or an injury to the brain, is called anarthria. It is exhausting and terrifying, and people have few ways to cope with it. Instead of speaking directly, people with anarthria may use devices that translate the movement of other body parts into letters or words; even a blink can work. Recently, a brain-computer interface implanted in the cortex of a person with locked-in syndrome translated imagined handwriting into output at 90 characters per minute. Good, but not great: at an average of roughly five characters per word, that is under 20 words per minute, while a typical spoken conversation in English runs at roughly 150 words per minute.
The problem is that, like moving an arm (or a cursor), the formulation and production of speech is genuinely complex. It depends on feedback, a roughly 50-millisecond loop between when we say something and when we hear ourselves saying it. That loop is what allows people to perform real-time quality control on their speech. For that matter, it is what allows humans to learn to speak in the first place: hearing language, producing sounds, hearing ourselves produce those sounds (via the ear and the auditory cortex, yet another part of the brain), and comparing what we are doing with what we are trying to do.
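The feedback loop above is, at heart, simple closed-loop control: produce, hear yourself about 50 ms later, compare with the intention, correct. Here is a toy sketch of that idea; the target pitch, correction gain, and cycle count are invented for illustration and are not a model from the study.

```python
TARGET_HZ = 220.0   # the pitch the speaker intends (hypothetical)
STEP_MS = 50        # each loop iteration stands for one ~50 ms feedback cycle
GAIN = 0.5          # fraction of the heard error corrected per cycle

def speak_with_feedback(start_hz, cycles=20):
    """Each cycle: hear the pitch produced one cycle (~50 ms) ago,
    compare it with the intention, and nudge the next production."""
    produced = [start_hz]
    for _ in range(cycles - 1):
        heard = produced[-1]          # auditory feedback from ~50 ms ago
        error = TARGET_HZ - heard     # real-time quality control
        produced.append(heard + GAIN * error)
    return produced

trace = speak_with_feedback(180.0)
print(round(trace[-1], 1))   # the output converges toward the 220 Hz intention
```

Cutting the loop (for instance, delaying or removing the "heard" signal) is exactly what makes speech hard to regulate, which is why a speech prosthesis that bypasses the mouth still has to close this loop somehow.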