US scientists have developed a new artificial intelligence (AI) system that can translate a person's brain activity, whether they are listening to a story or silently imagining telling one, into a continuous stream of text. The system, developed by a team at the University of Texas at Austin, relies in part on a transformer model, similar to the ones that power OpenAI's ChatGPT and Google's Bard.

According to the team, which published the study in the journal Nature Neuroscience, the system could help people who are mentally aware yet physically unable to speak, such as those debilitated by a stroke, to communicate intelligibly again.

Unlike other language-decoding systems in development, this system, called the semantic decoder, does not require subjects to have surgical implants, making it non-invasive. Participants are also not restricted to using only words from a prescribed list.

Brain activity is measured using a functional MRI (fMRI) scanner, followed by extensive training of the decoder, during which the participant spends hours in the scanner listening to podcasts.

Later, provided the participant is willing to have their thoughts decoded, listening to a new story or imagining telling a story allows the machine to generate the corresponding text from brain activity alone.
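The published pipeline is more involved, but as a rough, hypothetical sketch of the kind of approach described here, a decoder can combine two pieces: an encoding model, learned from the training sessions, that predicts fMRI activity from the semantic features of words, and a language model that proposes candidate words or continuations, which are then ranked by how well their predicted activity matches the measurement. Everything below (the toy featurizer, data, and names such as `embed` and `score`) is illustrative, not the authors' code.

```python
# Hypothetical, toy illustration only; not the authors' code or data.
import numpy as np

rng = np.random.default_rng(0)

N_VOXELS = 50     # toy number of fMRI voxels
N_FEATURES = 16   # toy dimensionality of the semantic word features


def embed(word: str) -> np.ndarray:
    """Toy stand-in for a semantic embedding of a word (deterministic per word)."""
    seed = int.from_bytes(word.encode(), "little") % (2**32)
    return np.random.default_rng(seed).standard_normal(N_FEATURES)


# 1. "Training": hours of podcast listening give paired (word features, fMRI) data.
train_words = ["story", "driver", "license", "road", "house", "dog", "walk", "talk"]
X_train = np.stack([embed(w) for w in train_words])            # word features
true_W = rng.standard_normal((N_FEATURES, N_VOXELS))           # unknown brain mapping
Y_train = X_train @ true_W + 0.1 * rng.standard_normal((len(train_words), N_VOXELS))

# Fit a ridge-regression encoding model: semantic features -> predicted voxel activity.
lam = 1.0
W_hat = np.linalg.solve(
    X_train.T @ X_train + lam * np.eye(N_FEATURES),
    X_train.T @ Y_train,
)


def score(candidate: str, measured: np.ndarray) -> float:
    """Higher is better: how well the candidate explains the measured activity."""
    predicted = embed(candidate) @ W_hat
    return -float(np.sum((predicted - measured) ** 2))


# 2. "Decoding": among candidates a language model might propose, keep the one
#    whose predicted brain response best matches the new fMRI measurement.
measured_activity = embed("driver") @ true_W                   # pretend new fMRI sample
candidates = ["driver", "dog", "house", "talk"]                # would come from an LM
best = max(candidates, key=lambda w: score(w, measured_activity))
print("decoded word:", best)                                   # expected: "driver"
```

In this framing, the decoder never reads words directly out of the brain; it searches for language whose predicted brain response best fits the measurement, which is one reason the output reads as a paraphrase of the gist rather than a transcript.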

“For a non-invasive method, this is a real leap forward compared to what’s come before, which are typically single words or short sentences,” said Alex Huth, assistant professor of neuroscience and computer science at UT Austin.

“We’re getting the model to decode language continuously for extended periods of time with complex ideas,” he said.

The result is not a word-for-word transcript. Instead, the researchers designed it to capture, imperfectly, the gist of what is being said or thought. About half the time, when the decoder monitors the brain activity of a participant it has been trained on, the machine produces text that closely (and sometimes exactly) matches the intended meaning of the original words.

For example, in experiments, a participant listening to a speaker say, “I don’t have my driver’s license,” had their thoughts translated as, “He hasn’t even started learning to drive yet.”

In the study, the team also addressed questions about the potential misuse of the technology. The paper describes how decoding works only with cooperative participants who have volunteered to train the decoder.

For individuals on whom the decoder had not been trained, the results were ambiguous, and if participants on whom it had been trained resisted afterwards, for example by thinking other thoughts, the results were equally unhelpful.

“We take very seriously the concern that this could be used for bad purposes and have worked to avoid this,” said Jerry Tang, a doctoral student in computer science. “We want to make sure that people use these kinds of technologies only when they want to and that it helps them.”

In addition to having participants listen to or think about the stories, the researchers asked the subjects to watch four short, silent videos while in the scanner. The semantic decoder was able to use their brain activity to accurately describe certain events from the video.

The system is not currently practical for use outside the laboratory because it depends on lengthy sessions in an fMRI machine. But the researchers think the work could transfer to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).
