Artificial intelligence (AI) still has a lot to learn from animal brains, says Cold Spring Harbor Laboratory (CSHL) neuroscientist Tony Zador. Now he is hoping that lessons from neuroscience can help the next generation of AI overcome some particularly difficult barriers.
Anthony Zador, M.D., Ph.D., has spent his career working to describe, down to the individual neuron, the complex neural networks that make up a living brain. But he began by studying artificial neural networks (ANNs). ANNs, the computing systems behind the recent AI revolution, are inspired by the branching networks of neurons in animal and human brains. However, that broad concept is usually where the inspiration ends.
In a perspective piece recently published in Nature Communications, Zador describes how improved learning algorithms are allowing AI systems to achieve superhuman performance on a growing number of complex problems, such as chess and poker. Yet machines are still stumped by what we consider to be the simplest problems.
Solving this paradox may finally enable robots to learn how to do something as organic as stalking prey or building a nest, or even something as human and mundane as doing the dishes, a task that former Google CEO Eric Schmidt once called “literally the number one request… but an extraordinarily difficult problem” for a robot.
“The things that we find hard, like abstract thought or chess-playing, are actually not the hard thing for machines. The things that we find easy, like interacting with the physical world, that’s what’s hard,” Zador explained. “The reason that we think it’s easy is that we had half a billion years of evolution that has wired up our circuits so that we do it effortlessly.”
That is why Zador writes that the secret to quick learning might not be a perfected general learning algorithm. Instead, he suggests that biological neural networks sculpted by evolution provide a kind of scaffolding that facilitates quick and easy learning of specific kinds of tasks, usually those crucial for survival.
For an example, Zador points to your backyard.
“You have squirrels that can jump from tree to tree within a few weeks after birth, but we don’t have mice learning the same thing. Why not?” Zador said. “It’s because one is genetically predetermined to become a tree-dwelling creature.”
Zador suggests that one result of this genetic predisposition is innate circuitry that helps guide an animal’s early learning. These scaffolding networks, however, are far less general than the all-purpose learning algorithms most AI experts are pursuing. If ANNs identified and adapted similar sets of circuitry, Zador argues, tomorrow’s household robots might one day surprise us with clean dishes.
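To make the contrast concrete, here is a minimal toy sketch; it is not from Zador’s paper, and every name and number in it is an assumption chosen purely for illustration. A single logistic “neuron” learns a left-versus-right detection task twice: once from a blank-slate random initialization, and once from hand-wired weights standing in for an innate, evolution-supplied prior.

```python
# Toy illustration (not Zador's model): innate wiring vs. blank-slate learning.
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 20  # toy 1-D "retina"

def make_batch(n=64):
    """Place one bright pixel on the left (label 0) or right (label 1) half."""
    X = np.zeros((n, n_pixels))
    y = rng.integers(0, 2, n)
    pos = rng.integers(0, n_pixels // 2, n) + y * (n_pixels // 2)
    X[np.arange(n), pos] = 1.0
    return X, y

def train(w, steps=50, lr=0.5):
    """Plain logistic-regression gradient descent; returns per-step accuracy."""
    accs = []
    for _ in range(steps):
        X, y = make_batch()
        p = 1.0 / (1.0 + np.exp(-(X @ w)))      # sigmoid "neuron"
        accs.append(float(((p > 0.5) == y).mean()))
        w = w - lr * X.T @ (p - y) / len(y)     # gradient step
    return accs

# "Blank slate": tiny random weights; everything must be learned from data.
w_random = rng.normal(0.0, 0.01, n_pixels)

# "Innate scaffold": weights pre-wired to compare the two halves of the retina,
# a stand-in for circuitry shaped by evolution rather than by learning.
w_innate = 0.5 * np.concatenate([-np.ones(n_pixels // 2), np.ones(n_pixels // 2)])

print("mean accuracy, first 10 steps, random init:", np.mean(train(w_random.copy())[:10]))
print("mean accuracy, first 10 steps, innate init:", np.mean(train(w_innate.copy())[:10]))
```

Run as written, the pre-wired network classifies correctly from its first steps, while the randomly initialized one needs many updates to catch up. The point, in Zador’s framing, is that what looks like fast learning can largely be inherited structure.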
Written by: Brian Stallard, Content Developer/Communicator | publicaffairs@cshl.edu | 516-367-8455
Citation
Zador, A., A critique of pure learning and what artificial neural networks can learn from animal brains, Nature Communications, 21 August 2019.
Principal Investigator
Anthony Zador
Professor
The Alle Davis and Maxine Harrison Professor of Neurosciences
M.D., Ph.D., Yale University, 1994