
Woman’s brain implant translates her thoughts into speech in real time: ScienceAlert

Almost two decades after a brainstem stroke at age 30 left her unable to speak, a woman in the United States has regained the ability to turn her thoughts into speech in real time, thanks to a new brain-computer interface (BCI).


By analyzing her brain activity in 80-millisecond increments and translating it into a synthesized version of her voice, the innovative method developed by US researchers eliminates a frustrating delay that plagued earlier versions of the technology.
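To see why decoding in small increments matters for latency, here is a minimal, purely illustrative sketch (not the authors' code; the function and signal are hypothetical) of a streaming decoder that emits output after every 80 ms chunk of neural data, rather than waiting for the whole utterance:

```python
# Illustrative sketch only: streaming decoding in 80 ms increments
# versus waiting for a full utterance. All names are hypothetical.

CHUNK_MS = 80  # decoding window reported in the study


def stream_decode(neural_samples, sample_rate_hz=1000):
    """Yield one decoded unit per 80 ms chunk of neural data."""
    chunk_len = sample_rate_hz * CHUNK_MS // 1000  # samples per chunk
    for start in range(0, len(neural_samples) - chunk_len + 1, chunk_len):
        chunk = neural_samples[start:start + chunk_len]
        # A real system would run a trained neural network here; we
        # just average the chunk to stand in for a decoded speech unit.
        yield sum(chunk) / len(chunk)


# With streaming, the first output arrives after a single chunk (80 ms);
# a batch decoder would emit nothing until the whole utterance ends.
signal = [0.1] * 800          # 800 samples = 0.8 s at 1 kHz
units = list(stream_decode(signal))
```

The design point is simply that output begins as soon as the first window closes, so latency is bounded by the chunk length rather than by the length of the sentence.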


Our body's ability to produce sounds as fast as we can think them is a function we often take for granted. Only in rare moments, when we are forced to pause for a translator, or hear our own speech delayed through a loudspeaker, do we appreciate the speed of our own anatomy.


For individuals whose ability to shape sound has been cut off from their brain's speech centers, whether through conditions such as amyotrophic lateral sclerosis or damage to critical parts of the nervous system, brain implants coupled with specialized software have promised a new lease on life.


A number of BCI speech-translation projects have recently achieved monumental breakthroughs, each aiming to chip away at the time it takes to generate speech from thought.


Most existing methods require a full passage of text to be assembled before the software can decipher its meaning, which can put considerable seconds between the initiation of speech and its vocalization.


Not only is this unnatural, it can also be frustrating and uncomfortable for those who use the system.


"Improving speech-synthesis latency and decoding speed is essential for dynamic conversation and fluent communication," researchers from the University of California, Berkeley, and the University of California, San Francisco, write in their published report.


This is "aggravated by the fact that speech synthesis requires additional time for the audio to play back so that the user and listener can comprehend the synthesized audio," explains the team, led by University of California, Berkeley computer engineer Kaylo Littlejohn.


In addition, most existing methods rely on training the interface while the "speaker" overtly attempts the movements of vocalization. For people who are out of practice, or who have always had trouble speaking, supplying their decoding software with enough data can be a challenge.


To overcome these two obstacles, the researchers trained a flexible, deep-learning neural network on the sensorimotor cortex activity of the 47-year-old participant while she attempted to speak 100 unique sentences drawn from a vocabulary of just over 1,000 words.


Littlejohn and his colleagues also used a form of assistive communication based on 50 sentences built from a smaller set of words.

Operating in 80-millisecond increments, this latest method of translating neural commands into speech can communicate in a near-natural way. (Littlejohn et al., Nature Neuroscience, 2025)

Unlike previous methods, this process did not require the participant to attempt to vocalize; she only had to think the sentences in her mind.


The system's decoding of both communication methods was significant, with the average number of words translated per minute nearly double that of previous methods.


Above all, a predictive method that could continually adjust its interpretation on the fly allowed the participant's speech to flow far more naturally, up to eight times faster than other methods. It even sounded like her own voice, thanks to a voice-synthesis program trained on earlier recordings of her speech.


Running the process offline, without time limits, the team showed that their strategy could even interpret neural signals representing words it had not been deliberately trained on.


The authors note that there is still plenty of room for improvement before the method can be considered clinically viable. Although the speech was intelligible, it still fell well below the performance of methods that decode into text.


Given how far the technology has come, however, there is reason to be optimistic that those without a voice could soon be singing the praises of researchers and their mind-reading devices.

This research was published in Nature Neuroscience.

remon Buul
