Today’s AI can talk the talk, but it literally can’t walk the walk. This week At the Lab, CSHL Professors Anthony Zador and Alexei Koulakov discuss the biological brain’s multibillion-year evolutionary advantage over artificial intelligence—and the new AI they built based on that very concept.
Read the related story: The next evolution of AI begins with ours
Transcript
Sam Diamond: You’re now At the Lab with Cold Spring Harbor Laboratory. My name is Sam Diamond, and this week At the Lab, “AI evolves.”
SD: Decades of science fiction have instilled in popular culture the idea that if computers ever start acting like us, they’ll take over the planet and wipe out humanity.
{Futuristic action movie sound effects}
SD: Not so fast. Today’s AI can communicate very much like you and me. Some of the podcasts you listen to might even be scripted or voiced by AI. Not this one—we’re real.
SD: So how come AI still struggles with some things you and I take for granted, like walking? It’s because we biological organisms have a multibillion-year evolutionary head start. And over the eons, the genome has done something incredible. It’s evolved in such a way that it’s able to pass on the instructions for creating something far more complex than the genome itself—that is, the brain.
SD: This paradox has puzzled geneticists and neuroscientists for decades. However, machine-learning engineers working on AI hadn’t really given it much thought until recently. CSHL Professor Tony Zador put a new spin on this old problem.
Tony Zador: Yes, there is this reinforcement-like process that occurs each generation. But what you pass on to your offspring is a set of genes, which instruct the embryo how to build a brain. So, it passes through what I call a genomic bottleneck. There’s a limited amount of information that gets passed down from generation to generation, and that is the target of evolution. And I’m arguing that the fact that there is a genomic bottleneck is actually important. It confers potentially a computational advantage.
SD: In other words, the problem may not be so much of a problem at all. Or, in terms more familiar to engineers, it’s not a bug—it’s a feature.
SD: For CSHL Professor Alexei Koulakov, that raised a good question.
Alexei Koulakov: There was an opportunity because the AI field was progressing, and it was becoming capable of generating some very complex networks that can do a lot of stuff. And so we thought, ‘Can we actually implement this bottleneck idea in the form of an AI algorithm?’
SD: The short answer: Yes, and that’s what they did. Now, of course, they didn’t recreate the complexity of the human brain overnight. Koulakov explains:
AK: The actual mechanisms of neural development are way more sophisticated than what we included in our model. But the model is inspired by biology, and it can be extended to include those complex mechanisms. And one thing which is pretty cool is that with this biology-inspired mechanism, you can produce a lot of compression.
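The bottleneck idea Koulakov describes can be illustrated with a toy sketch. This is not the researchers’ actual model—all names and sizes below are illustrative assumptions. The point it shows: a small “genomic” network with a few hundred parameters can be unrolled, during “development,” into a much larger connection matrix, so the genome-like description is far more compact than the brain-like network it specifies.

```python
import numpy as np

def identity_code(n, bits):
    # Binary identity vector for neuron n (a stand-in for a
    # developmental "address" that tells the genome which neuron this is).
    return np.array([(n >> b) & 1 for b in range(bits)], dtype=float)

rng = np.random.default_rng(0)
N, BITS, H = 256, 8, 16  # brain size, identity-code length, hidden units

# The "genome": a tiny network mapping a (pre, post) pair of neuron
# identities to a single connection weight. Its parameter count is fixed
# no matter how large the resulting brain is.
W1 = rng.normal(size=(2 * BITS, H))
W2 = rng.normal(size=H)

def develop_weight(pre, post):
    x = np.concatenate([identity_code(pre, BITS), identity_code(post, BITS)])
    return float(np.tanh(x @ W1) @ W2)

# "Development": unroll the genome into the full N x N connectome.
brain = np.array([[develop_weight(i, j) for j in range(N)] for i in range(N)])

genome_params = W1.size + W2.size  # 2*8*16 + 16 = 272
brain_params = brain.size          # 256*256 = 65,536
print(f"compression: {brain_params / genome_params:.0f}x")  # prints "compression: 241x"
```

In the real work, the small network’s parameters are what gets trained (the analogue of evolution acting on the genome), while the large network it generates is what actually performs the task; this sketch only shows the expansion step and the resulting compression ratio.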
SD: How can this compression be put to use? For starters, the model is already capable of holding its own against state-of-the-art AI in tasks like image recognition and games like Space Invaders. But that’s just the beginning.
SD: Ever try running a new AI chatbot on your mobile device and get bumped down to the older version? One reason for that is the limited memory and processing power on the device. Better compression models could help with these kinds of problems.
SD: But fear not, listeners. Your human hosts still have at least six million years on their artificial counterparts. And with that…
SD: Thanks again for joining us At the Lab. Please remember to hit subscribe and visit us at CSHL.edu for more fascinating science stories like this one. For Cold Spring Harbor Laboratory, I’m Sam Diamond, and I’ll see you next time At the Lab.