Remember Doc Brown’s wacky contraption to read Marty McFly’s thoughts in Back to the Future? As absurd as it seemed, we may not be far from a future where artificial intelligence (AI) can actually do just that — read your mind. But instead of wearing a giant, beeping helmet, imagine simply donning a cap while AI translates your thoughts into text. Welcome to the brave new world of mind-reading technology, as explained by Michael Blumenstein and Jerry Tang in a fascinating interview with Brian Greene.
The technology behind this breakthrough uses non-invasive AI systems that identify patterns in neural firings to decode human thoughts. Blumenstein, Deputy Dean for Research at the University of Technology Sydney, and Tang, a leading researcher at the University of Texas, are working on cutting-edge projects that translate brain activity into language. Together, they’re uncovering new possibilities — and challenges — of this remarkable field.
Brainwaves to Words
At the heart of these projects is the idea that brain activity leaves identifiable “fingerprints.” Tang’s research team uses functional magnetic resonance imaging (fMRI), which measures blood flow in the brain as a proxy for neural activity. The fMRI readings are then processed through AI models to predict the words a person is hearing or imagining.
“We take brain recordings from a user and predict the words that the user was hearing or imagining,” Tang explained. “It’s a two-step process. First, we train a language decoder on brain responses, then apply it to new brain responses.”
To train these decoders, subjects listen to hours of podcasts, allowing the system to correlate specific words with brain activity patterns.
“We record 16 hours of brain activity while participants listen to narrative stories,” Tang continued. “This helps us build machine learning models that relate words to activity patterns in the brain.” The results are promising but not perfect; the decoder can capture the gist of a person’s thoughts, but not always the exact words. “It will paraphrase what the person is thinking,” said Tang.
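The two-step process Tang describes can be sketched in code, purely as an illustration: the toy data, the linear "encoding model," and the candidate-scoring decoder below are simplifying assumptions, not the team's actual method (real systems work with fMRI time series and large language models).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each "word" is a feature vector; the brain response is a noisy
# linear image of those features (a stand-in for fMRI voxel activity).
n_words, n_feat, n_voxels = 200, 16, 50
word_features = rng.normal(size=(n_words, n_feat))
true_map = rng.normal(size=(n_feat, n_voxels))
brain = word_features @ true_map + 0.1 * rng.normal(size=(n_words, n_voxels))

# Step 1: train an encoding model (ridge regression) that predicts
# brain activity from word features.
lam = 1.0
W = np.linalg.solve(word_features.T @ word_features + lam * np.eye(n_feat),
                    word_features.T @ brain)

# Step 2: decode a new brain response by picking the candidate word whose
# predicted activity best matches the recording.
def decode(response, candidates):
    preds = candidates @ W                       # predicted activity per candidate
    errors = ((preds - response) ** 2).sum(axis=1)
    return int(np.argmin(errors))                # index of the best-matching word

new_word = rng.normal(size=n_feat)
new_response = new_word @ true_map + 0.1 * rng.normal(size=n_voxels)
candidates = np.vstack([new_word, rng.normal(size=(9, n_feat))])
print(decode(new_response, candidates))          # 0: the true word wins
```

This also hints at why the decoder "paraphrases": it selects whichever candidate best fits the recording, so a semantically similar word with a similar predicted response can beat the exact one.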
For instance, when a subject imagined the sentence “Marco leaned over to me and whispered, ‘You are the bravest girl I know,’” the decoder produced, “He runs up to me and hugs me tight and whispers, ‘You saved me.’” While not identical, the decoded output is strikingly close in meaning.
EEG-Based Decoding: What’s That?
Blumenstein’s team, on the other hand, is working with electroencephalography (EEG), a more portable and accessible option. Unlike fMRI, which requires a large machine, EEG uses a cap with electrodes to detect the brain’s electrical signals. These signals are then processed through a complex AI pipeline that maps brainwave patterns to words. Blumenstein points out that their approach relies on “self-supervised learning,” meaning the AI can train itself without needing extensive labeled datasets.
“We use a cap that measures EEG signals, and then through feature extraction, we make these signals palatable for the computer to process,” said Blumenstein. “The real challenge is correlating the waves to specific words.”
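The "feature extraction" step Blumenstein mentions can be illustrated with a minimal sketch. Band-power features are one common way to make raw EEG palatable to a model; the synthetic signal, sampling rate, and band choices below are assumptions for illustration, not details from the interview.

```python
import numpy as np

def bandpower_features(signal, fs, bands=((4, 8), (8, 13), (13, 30))):
    """Summarize a raw EEG channel as power in standard frequency bands
    (theta, alpha, beta here) -- one common form of feature extraction."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return np.array([power[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in bands])

# Synthetic one-second EEG trace: a strong 10 Hz (alpha) oscillation plus noise.
fs = 256
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.normal(size=fs)

feats = bandpower_features(signal, fs)
print(feats.argmax())   # 1: the alpha band dominates, as constructed
```

Vectors like `feats` (one per channel and time window) are what a downstream model would then have to correlate with specific words, which is the hard part Blumenstein identifies.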
The EEG-based system also decodes thoughts with impressive results. In one example, a subject thought the sentence “I’d like a bowl of chicken soup, please.” The decoder, though imperfect, responded with “Yes, a bowl of beef soup.” As Blumenstein noted: “The fact that we can decode these words at all, using just EEG signals, is a significant leap forward.”
The Ethical Quandary of Mind-Reading AI
While these advancements hold immense potential — particularly for people who cannot speak due to medical conditions — there are undeniable ethical concerns. Greene, who moderated the discussion, asked both researchers about the possible invasion of mental privacy. Tang, keenly aware of these issues, asserted that “nobody’s brain should be decoded without their full cooperation.” Blumenstein echoed similar sentiments, stressing that ethical considerations must be a top priority, particularly in medical applications.
Blumenstein also mentioned that involving clinicians in the research process could help guide the technology’s development in ethically sound ways.
“This is where AI in health is really taking off,” he said. “It’s important to have the clinical angle, ensuring that we help those who need it most while maintaining strict ethical standards.”
A Glimpse of the Future
The future of mind-reading technology is both thrilling and daunting. On one hand, these systems could offer a voice to people who are otherwise unable to communicate — whether due to stroke, paralysis, or coma. On the other hand, the very idea that AI could one day decode thoughts raises unsettling questions about mental privacy and personal freedom.
However, both researchers remain optimistic about where the technology is headed. Tang believes that it is still in its early stages, with many improvements to come, particularly in accuracy.
“Right now, we can say that our decoder is doing significantly better than expected by chance,” he explained. Blumenstein agreed, saying that while current results are promising, there’s much more work to do. “If we keep improving how we capture and process the data, I have no doubt that we’ll reach much higher levels of accuracy.”