A machine that can read our thoughts sounds like something out of the pages of a science fiction novel, but a new artificial intelligence (AI) system called DeWave can do just that.
Australian researchers have developed a technique that uses electroencephalogram (EEG) caps to record neural activity and convert silent thoughts from brain waves into text. Scientists at the University of Technology Sydney (UTS) say the system achieved over 40% accuracy in early experiments, and they expect DeWave's AI could enable people who cannot speak or type to communicate.
The non-invasive system does not require implants or surgery, unlike Elon Musk’s planned Neuralink chip. It was tested on a dataset from subjects reading text while both their brain activity and eye movements were monitored. DeWave learned to decipher thoughts by matching EEG patterns with eye fixations indicating recognized words.
UTS lead researcher Chin-Teng Lin said DeWave introduces an "innovative approach to neural decoding". In a statement, he said: "This research represents a pioneering effort in translating raw EEG waves directly into language, marking a significant breakthrough in the field."
Professor Lin continued: "It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding. The integration with large language models is also opening new frontiers in neuroscience and AI."
DeWave’s AI may one day help paralyzed patients
Verbs were found to be the easiest for the AI to identify from neural signals, but specific nouns were sometimes translated as pairs of synonyms. Researchers suggest that semantically related concepts can generate similar brainwave patterns, which poses a challenge.
Because input is captured simply by wearing a snug EEG cap, this technology could one day enable fluid communication for paralyzed patients and direct control of assistive devices. However, work remains to raise the system's accuracy to about 90%, on par with speech recognition.
Combined with rapidly advancing language models, similar brain-computer interfaces may one day allow people to communicate and interact with technology simply by thinking.
Featured image: UTS