Delays and dictionaries
A year after Stanford’s 2024 work, Stavisky’s team published its own research on a brain-to-text system that pushed accuracy to 97.5%. “Almost every word was correct, but communicating by text can be limiting, right?” says Stavisky. “Sometimes you want to use your voice. It lets you interject, it makes it less likely that other people interrupt you – you can sing, you can use words that aren’t in the dictionary.” But the most common approach to speech generation relied on synthesizing audio from text, which led directly to another problem with BCI systems: very high latency.
In nearly all BCI speech aids, sentences appeared on a screen only after a significant delay, long after the patient had finished forming the words in their mind. Speech synthesis generally took place after the text had been produced, adding even more delay. Brain-to-text solutions also suffered from a limited vocabulary: the latest system of this kind supported a dictionary of roughly 1,300 words. If you tried to speak a different language, use a more elaborate vocabulary, or even say the unusual name of the coffee shop around the corner, the systems failed.
So Wairagkar designed the prosthesis to translate brain signals into sounds rather than words – and to do it in real time.
Sound extraction
The patient who agreed to take part in Wairagkar’s study, designated T15, was a 46-year-old man with ALS. “He is severely paralyzed, and when he tries to speak, he is very difficult to understand. I have known him for several years, and when he speaks, I may understand 5% of what he says,” explains David M. Brandman, a neurosurgeon and co-author of the study. Before working with the UC Davis team, T15 communicated using a gyroscopic head mouse to control a cursor on a computer screen.