Stanford University researchers have recently achieved notable breakthroughs in brain-computer interfaces (BCIs), advances that could radically change how people with paralysis communicate. The research, conducted at the Neural Prosthetics Translational Laboratory under the direction of postdoctoral researcher Erin Kunz, shows that surgically implanted BCIs can help users articulate their thoughts, but it also suggests the devices could inadvertently pick up users’ inner monologues.
BCIs work by decoding neural signals into speech, offering a lifeline for individuals with no other means of verbal communication. Since its first public unveiling, however, the technology’s ability to decode inner speech has raised complex questions about privacy and consent. Kunz and her team achieved an impressive 74% accuracy in decoding sentences from a vocabulary of 125,000 words, and they borrowed a method familiar from virtual assistants like Alexa and Siri: the device activates only when it recognizes a particular wake word, so speech is translated only when the user specifically intends it.
Kunz highlighted the technology’s most immediate danger to privacy. “If inner speech is similar enough to attempted speech, could it unintentionally leak out when someone is using a BCI?” she asked. The question underscores the need for careful consideration of how BCIs might access and interpret thoughts that users may not wish to share.
To protect user privacy while allowing for open communication, the Stanford research team introduced strategies to mitigate this risk, chief among them a mental password. For their experiments, they chose the phrase “Chitty Chitty Bang Bang,” whose rarity in daily conversation makes it an effective trigger for the device. “We picked Chitty Chitty Bang Bang because it doesn’t occur too frequently in conversations and it’s highly identifiable,” Kunz explained.
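The gating logic described above can be sketched in a few lines. This is purely an illustrative toy, not the lab’s actual pipeline; the wake phrase, function names, and the idea of receiving decoded phrases as a stream of strings are all assumptions made for the example.

```python
# Illustrative sketch: suppress decoded output until a mental
# "password" phrase is detected, as described in the research.

WAKE_PHRASE = "chitty chitty bang bang"  # hypothetical normalized form

def gate_decoder(decoded_phrases):
    """Yield decoded speech only after the wake phrase appears."""
    unlocked = False
    for phrase in decoded_phrases:
        normalized = phrase.strip().lower()
        if not unlocked:
            # Stay silent until the user thinks the wake phrase.
            if normalized == WAKE_PHRASE:
                unlocked = True
        else:
            yield phrase

stream = ["random inner thought", "Chitty Chitty Bang Bang",
          "hello, how are you?"]
print(list(gate_decoder(stream)))  # → ['hello, how are you?']
```

The key design point is that anything decoded before the wake phrase, including stray inner speech, is simply discarded rather than translated.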
The research, published in the journal Cell, highlights the tension between technological progress and ethical obligation. Nita Farahany, a well-known bioethicist who has been closely engaged with this research, spoke to its broader significance. “The more we push this research forward, the more transparent our brains become,” she stated. The study’s authors caution that with the rapid development of BCIs, the line between public and private thought could blur.
Farahany echoed the panel discussion’s overall note of caution, urging vigilance in proceeding down this uncharted path toward brain transparency. “We have to recognize that this new era of brain transparency really is an entirely new frontier for us,” she remarked. The researchers’ aim is not to discredit BCI technology, which provides an essential means of communication for disabled individuals; rather, they raise significant ethical questions about personal privacy alongside it.
At the moment, BCI technology relies on microelectrode arrays surgically implanted directly onto the surface of the brain. These devices are trained to read the brain’s neural signals when a person tries to speak and to decode those signals into actual words. “We’re recording the signals as they’re attempting to speak and translating those neural signals into the words that they’re trying to say,” Kunz explained.
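As a rough intuition for how such decoding can work, one simple approach is to compare incoming neural feature vectors against per-word templates learned during training. This toy sketch is an assumption for illustration only; the actual system uses far more sophisticated models, and the template values and word list here are invented.

```python
# Toy illustration: decode a word by finding the nearest learned
# "template" to an observed neural feature vector.

import math

# Hypothetical per-word feature templates learned during training.
TEMPLATES = {
    "hello": [0.9, 0.1, 0.2],
    "water": [0.1, 0.8, 0.3],
    "help":  [0.2, 0.2, 0.9],
}

def decode_word(features):
    """Return the word whose template is closest (Euclidean) to features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda w: dist(TEMPLATES[w], features))

print(decode_word([0.85, 0.15, 0.25]))  # → hello
```

Real decoders work over large vocabularies (125,000 words in this study) with learned statistical models rather than fixed templates, but the core idea of mapping neural activity to the most likely word is the same.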
BCIs are evolving at breathtaking speed. The safeguards are good first steps, experts say, but users also need to understand how their thoughts can be read, or misread. The researchers aim to foster a transparent dialogue about these technologies to ensure that users’ rights and autonomy are respected.