
AI Is Helping Us Read Minds, but Should We?



Since mind reading has only existed in the realms of fantasy and fiction, it seems fair to apply the phrase to a system that uses brain scan data to decipher stories that a person has read, heard, or even just imagined. It’s the latest in a series of spooky linguistic feats fueled by artificial intelligence, and it’s left people wondering what kinds of nefarious uses humanity will find for such advances.

Even the lead researcher on the project, computational neuroscientist Alexander Huth, called his team’s sudden success with using noninvasive functional magnetic resonance imaging to decode thoughts “kind of terrifying” in the pages of Science.

But what’s also terrifying is the fact that any of us could come to suffer the horrific condition the technology was developed to address — paralysis so profound that it robs people of the ability even to speak. That can happen gradually through neurological diseases such as ALS or suddenly, as with a stroke that rips away all ability to communicate in an instant. Take, for example, the woman who described an ordeal of being fully aware for years while being treated as a vegetable. Or the man who recounted being frozen, terrified and helpless as a doctor asked his wife if they should withdraw life support and let him die.

Magazine editor Jean-Dominique Bauby, who suffered a permanent version of the condition, used a system of eye blinks to write the book The Diving Bell and the Butterfly. What more could he have done given a mind decoder?

Each mind is unique, so the system developed by Huth and his team only works after being trained for hours on a single person. You can’t aim it at someone new and learn anything, at least for now, Huth and collaborator Jerry Tang explained last week at a press event ahead of the publication of their work in Monday’s Nature Neuroscience.

And yet their advance opens prospects that are both scary and enticing: A better understanding of the workings of our brains, a new window into mental illness, and maybe a way for us to know our own minds. Balanced against that is the concern that one day such technology may not require an individual’s consent, allowing it to invade the last refuge of human privacy.

Huth, who is an assistant professor at the University of Texas, was one of the first test subjects. He and two volunteers had to remain motionless for a total of 16 hours each in a functional MRI scanner, which tracks brain activity through the flow of oxygenated blood, while listening to stories from The Moth Radio Hour and the Modern Love podcast, chosen because they tend to be enjoyable and engaging.

This trained the system, which produced a model for predicting patterns of brain activity associated with different sequences of words. Then there was a trial-and-error period, during which the model was used to reconstruct new stories from the subjects’ brain scans, harnessing the power of a version of ChatGPT to predict which word would likely follow from another.
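
As a rough way to picture that trial-and-error step, here is a minimal, hypothetical sketch in Python of the general idea: a language model proposes plausible next words, a per-subject encoding model (trained on those hours of scans) predicts the brain activity each candidate phrase should produce, and candidates are kept or discarded according to how well that prediction matches the scan actually observed. The objects language_model and encoding_model, and their methods, are illustrative assumptions, not the researchers’ actual code.

import numpy as np

def decode_story(observed_scan, language_model, encoding_model,
                 beam_width=10, max_words=50):
    """Illustrative beam search: grow word sequences with a language model
    and keep the ones whose predicted brain activity best matches the
    observed scan."""
    beams = [[]]  # each beam entry is a candidate word sequence
    for _ in range(max_words):
        candidates = []
        for seq in beams:
            # Hypothetical call: the language model suggests likely next
            # words, playing the role of the GPT-style model described above.
            for word in language_model.propose_next_words(seq, top_k=5):
                candidates.append(seq + [word])
        scored = []
        for seq in candidates:
            # Hypothetical call: the per-subject encoding model predicts the
            # fMRI activity pattern this word sequence should evoke.
            predicted = encoding_model.predict_brain_activity(seq)
            # Score by cosine similarity between predicted and observed activity.
            similarity = float(np.dot(predicted, observed_scan)
                               / (np.linalg.norm(predicted)
                                  * np.linalg.norm(observed_scan)))
            scored.append((similarity, seq))
        # Keep only the best-matching candidates and continue extending them.
        scored.sort(key=lambda pair: pair[0], reverse=True)
        beams = [seq for _, seq in scored[:beam_width]]
    return " ".join(beams[0])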

Eventually the system was able to “read” brain scan data to decipher the gist of what the volunteers had been hearing. When the subjects heard, “I don’t have my driver’s license yet,” the system came up with, “she has not even started to learn to drive.” For some reason, Huth explained, it’s bad with pronouns, unable to figure out who did what to whom.

Weirder still, the subjects were shown videos with no sound, and the system could make inferences about what they were seeing. In one, a character kicked another down, and the system used the brain scan to come up with “he knocked me to the ground.” The pronouns seemed scrambled, but the action was spookily on target.

The people in the scanner might never have been thinking in words at all. “We’re definitely getting at something deeper than language,” Tang said. “There’s a lot more information in brain data than we initially thought.”

This isn’t a rogue lab doing mad science but part of a long-term effort pursued by scientists around the world. In a 2021 New Yorker article, researchers described the projects leading up to this breakthrough. One shared a vision of a Silicon Valley-funded endeavor that could streamline the cumbersome functional MRI scanner into a wearable “thinking hat.” People would wear the hat, along with sensors that record their surroundings, to decode their inner worlds, mind meld with others and perhaps even communicate with other species. The recent breakthroughs make that future seem closer.

For something that’s never existed, mind reading seems to crop up regularly in popular culture, often reflecting a desire for lost or never-realized connection, as Gordon Lightfoot sang in If You Could Read My Mind. We envy the Vulcans their capacity for mind melding.

Historical precedent, however, warns that people can do harm by simply taking advantage of the belief that they have a mind-reading technology — just as authorities have manipulated juries, crime suspects, job candidates and others with the belief that a polygraph is an accurate lie detector. Scientific reviews have shown that the polygraph does not work as people think it does. But then, scientific studies have shown our brains don’t work the way we think they do either.

So the important work of giving voice back to people whose voices have been lost to illness or injury must be undertaken with deep thought for ethical considerations and an awareness of the many ways in which that work can be subverted. Already there’s a whole field of neuroethics, and experts have evaluated the use of earlier, less effective versions of this technology. But this breakthrough alone warrants a new focus. Should doctors or family members be allowed to use systems such as Huth’s to ask about a paralyzed patient’s desire to live or die? What if it reports back that the person chose death? What if it misunderstood? These are questions we all should start grappling with.

© 2023 Bloomberg LP






