
Is today’s AI actually intelligent or just acting?

[Image: a robot hand solving a puzzle] Large language models like ChatGPT are getting closer to passing the conversation-based Turing test. But their success reveals how easily we can be tricked into attributing intelligence, agency, and even consciousness to artificial systems. Image: © iaremenko - stock.adobe.com.

[Image: the Winter 2023 HT logo]


Inside the Tyrell Corporation’s shadowy Los Angeles headquarters, bounty hunter Rick Deckard sets up a machine to run the “Voight-Kampff” test. He’ll use it to figure out if an employee named Rachael is human or not. These days it’s getting harder to tell.

This scene from the iconic 1982 film Blade Runner is a work of fiction. But the Voight-Kampff concept was inspired by the real-life Turing test. Instead of outing rogue robots, the Turing test judges a machine’s ability to hold human-level conversations. The idea is that in the future, androids will pass it with ease, so another, harder test will be needed to determine whether or not they’re human.

Today’s AI is nowhere near passing the Voight-Kampff test, if such a test even existed. It is, however, giving Turing a run for his money.

“Large language models like ChatGPT can now engage in surprisingly convincing conversations,” Cold Spring Harbor Laboratory (CSHL) Professor Anthony Zador says. “But they still struggle with physical common sense that humans often take for granted.”

Overcoming this challenge is key to unlocking AI’s full potential. To solve the problem, researchers have turned to a familiar source of inspiration—the brain.

The embodied Turing test

Neuroscience has long been an essential driver of AI research, Zador says. Following meetings at CSHL in 2020 and 2022, he and colleagues from around the world banded together to propose a new “embodied” Turing test built on the principles of a burgeoning field called NeuroAI.

“The original Turing test was designed to create a standard to judge AI against. But it doesn’t account for our abilities to interact with and reason about the physical world,” Zador says. “NeuroAI is based on the idea that a better understanding of the brain will reveal the ingredients of intelligence and eventually lead to human-level AI.”

Real worms perfected their swimming abilities over millions of years of evolution. This video shows how an AI worm, equipped with only about two dozen artificial neurons, quickly figured out how to swim.
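As a minimal, hypothetical sketch (not the actual model shown in the video), about two dozen artificial neurons arranged as a chain of phase-coupled oscillators is a classic recipe for worm-like swimming: a wave of activity travels down the chain, bending the body segment by segment. All names and parameters here are illustrative assumptions.

```python
import math

# Hypothetical sketch: ~24 "neurons," each driving one body segment.
# A fixed phase lag between neighbors makes activity travel head-to-tail,
# producing the undulatory wave that generates thrust in water.
N_NEURONS = 24
PHASE_LAG = 2 * math.pi / N_NEURONS  # phase offset between neighbors

def body_curvature(t, freq=1.0):
    """Bend angle commanded by each of the 24 segments at time t."""
    return [math.sin(2 * math.pi * freq * t - i * PHASE_LAG)
            for i in range(N_NEURONS)]

# At any instant, the bends along the body trace one full sine wave;
# as t advances, the wave travels tailward.
wave = body_curvature(t=0.0)
print(len(wave))  # → 24
```

The point of the sketch is how little machinery is needed: no learning loop appears here, just the traveling-wave structure that an evolved or trained network could settle into.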

The embodied Turing test aims to benchmark how an AI interacts with the world compared to its living counterpart. Each embodied test would be unique to the animal it’s measuring up against. For example, a beaver AI might be tested on building dams; a squirrel AI on its ability to hop between trees. Traits like these are built on core capabilities shared by almost every animal, Zador says. This core has been honed through millions of years of evolution.

“Animals are defined, in part, by three shared characteristics,” Zador explains. “They purposefully move around and interact with their environment. They adapt to new situations. And their brains are extremely energy efficient. Developing a system that can pass an embodied Turing test based on NeuroAI principles for one animal could make adapting it to other animals—including humans—much more straightforward.”
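The three shared characteristics Zador names suggest one way an embodied Turing test might be scored: compare an agent's behavior to its animal counterpart on purposeful interaction, adaptation, and energy efficiency. The sketch below is purely illustrative; the metric names, equal weights, and scoring formula are assumptions, not any published benchmark.

```python
from dataclasses import dataclass

@dataclass
class BehaviorReport:
    task_success: float     # purposeful interaction, 0..1 (e.g., dam built?)
    adaptation: float       # performance on a never-seen task variant, 0..1
    joules_per_task: float  # energy spent per completed task

def embodied_score(agent: BehaviorReport, animal: BehaviorReport) -> float:
    """Return a 0..1 score: how closely the agent matches the animal."""
    success_gap = abs(agent.task_success - animal.task_success)
    adapt_gap = abs(agent.adaptation - animal.adaptation)
    # Energy is scored as a ratio; matching the animal's efficiency scores 1.0.
    energy = min(animal.joules_per_task / agent.joules_per_task, 1.0)
    return (1 - success_gap) / 3 + (1 - adapt_gap) / 3 + energy / 3

# Illustrative numbers only: a beaver vs. a hypothetical dam-building robot.
beaver = BehaviorReport(task_success=0.9, adaptation=0.8, joules_per_task=50.0)
robot = BehaviorReport(task_success=0.6, adaptation=0.3, joules_per_task=500.0)
print(round(embodied_score(robot, beaver), 2))  # → 0.43
```

Framing the test this way makes Zador's closing point concrete: an agent scored against one animal's baseline could, in principle, be re-benchmarked against another animal simply by swapping in a new reference report.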

How NeuroAI can get us there

What’s keeping AI from making the leap? According to Zador, one big pitfall is a rift between AI and neuroscience.

Despite their long relationship, many scientists working in AI today are unaware of the fields’ shared history and collective opportunities. Even researchers who know how neuroscience shaped AI often argue that neuroscience is no longer relevant to the field. “Engineers don’t study birds to build better planes” is the usual refrain. But Zador disagrees.

“The analogy fails not only because pioneers of aviation did indeed study birds—and some still do—but also because modern aeronautical engineering is not trying to achieve ‘bird-level’ flight,” he says. “If that were the goal, engineers would be well-advised to pay close attention to birds.”

Although AI has easily defeated human opponents in games like chess and Go, these systems have yet to master basic skills like walking. Robots you may have seen running around were programmed to do that. They can’t learn it on their own.

Zador has emerged as one of the most vocal and knowledgeable proponents of NeuroAI. He says AI must clear several hurdles to realize its “tremendous opportunities to unleash human creativity and catalyze economic growth, relieving workers from performing the most dangerous and menial jobs.”

In a recent Nature Communications article, Zador and collaborators from across the globe outlined three major requirements for getting us there.

  1. A new generation of researchers must be trained in both neuroscience and computer science. The researchers must be equipped with the ethical tools to ensure AI benefits society.
  2. An open, shared platform must be created for building and testing AI systems.
  3. Greater support must be given to fundamental research efforts to reverse engineer the brain and define the underlying ingredients of intelligence.

This paradigm shift could lead to AI passing an embodied Turing test. But how would it fare against Voight-Kampff? “I would love to have a robot do my dishes or clean my house,” Zador says. “But we’re far from developing anything like that beyond the level of a Roomba®.”

So, while AI is poised to revolutionize society and the global economy, we can rest assured that Blade Runner-like androids remain firmly in the realm of science fiction—for now.

Written by: Nick Wurm, Communications Specialist | wurm@cshl.edu | 516-367-5940


About

Anthony Zador

Professor
The Alle Davis and Maxine Harrison Professor of Neurosciences
M.D., Ph.D., Yale University, 1994
