
Are smart robots a threat?

Understanding animal brains could help usher in a golden age for human-like artificial intelligence (AI). That’s according to a recent perspective piece written by CSHL Professor Tony Zador, M.D., Ph.D. A pioneer in computational neuroscience, Zador focuses on how the brain influences complex behaviors. Foreign Policy named him one of the Top 100 Leading Global Thinkers of 2015. CSHL’s Brian Stallard sat down with Zador to hear more about this vision for the future of AI and how the machine learning field still has a lot to learn from neuroscience.

[Questions and answers have been edited for clarity.]

Why did you decide to write about machine learning?

This question of how the machine learning field can learn from the biological brain has been an interest of mine since I was a graduate student. In the last 10 years, there’s been a tremendous resurgence of interest in neural network approaches to artificial intelligence. It sort of rekindled my interest, and as I returned to the field, I realized that one of the things that continued to bother me was this mismatch between how people in the field formulate the problem of learning and how I think most neuroscientists think about it.

What exactly are artificial neural networks (ANNs)?

These ANNs are basically the computing systems that support the complex learning algorithms that drive modern artificial intelligence. They’re inspired by the networks of neurons in a real brain. The current neural network models emerged out of considerations of what we knew about neuroscience circa 1950 to 1980, but there’s very little in modern neural networks that incorporates anything that we didn’t already know back then.
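To make that concrete, here is a minimal sketch (my own illustration, not code from the interview) of such a network: layers of simple units, each taking a weighted sum of its inputs and passing it through a nonlinearity, loosely analogous to a neuron firing.

```python
# A toy two-layer artificial neural network, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    # Each artificial "neuron" computes a weighted sum of its inputs,
    # then applies a nonlinearity (here, tanh).
    return np.tanh(weights @ x + bias)

# A tiny network: 4 inputs -> 8 hidden units -> 2 outputs.
w1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
w2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

x = rng.normal(size=4)           # an input (say, a few image features)
hidden = layer(x, w1, b1)        # hidden-layer activity
output = layer(hidden, w2, b2)   # the network's output
print(output)
```

Training consists of adjusting the weights to reduce errors; the networks behind modern translation and image search are vastly larger, but they are built from the same ingredients.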

What can ANNs already achieve?

What they can do now is solve a lot of problems that they couldn’t solve five years ago. So when you are on Facebook and one of your friends is communicating in Arabic or Chinese or Farsi, and you wonder what they’re saying, you can now click on that and get a translation. And that machine translation, which is now pretty darn good, sucked five years ago, right? You could maybe tell what the topic was, but that was about it. But now you can understand everything in a foreign language! And that’s because those algorithms now use neural networks as their engine. Also, image search is infinitely better than it was a couple of years ago. And we’re getting pretty close to self-driving cars, but there are still some issues. And of course machines have now achieved superhuman performance in almost every game: chess, Go. Last month they emerged as the winners in multiplayer poker, which was a difficult one.

So, what’s left to achieve?

A robot that loads your dishwasher!

As simple as that?

That’s what I’d want. I’ve been leading with the dishwasher example since I started giving talks in the field 20-plus years ago. And we’re not much closer. Maybe we don’t need machine learning or neural networks to get it done. Maybe it can be achieved some other way. But at least that’s the direction I find interesting. AIs right now can’t learn to load and unload a dishwasher and know where to put those dishes; they can’t interact with the real world in any meaningful way.

It seems to me that this is really a lot about defining learning.

Exactly. So the question at hand is, how do you build a machine that is able to interact with the world? And the usual logic in the artificial intelligence community has started with the observation that kids are born and within a couple years they’re able to do things that no machine can do. And that has been taken as evidence that children must have algorithms for learning that are vastly superior to anything that we have.

And what I’m arguing is that, no, it’s not about some general algorithm for learning. Rather, we are creatures who have evolved to be born with essentially all the machinery in place to do all these things. Mice don’t take years to get good at solving these problems. They take weeks. Or insects, which are born able to do these things out of the box. It’s built into our genomes; it’s innate.

How could evolution and these innate behaviors determine how fast we learn?

One of the examples I cite is a mouse-like creature, Peromyscus, in which different species build different burrows. They build complex burrows, and this is somehow wired into their genomes. We know that because you can take a Peromyscus of one species and have it reared by parents from another species, and when it’s an adult, it will build the burrow that its biological parents build, not the one that its adoptive parents build. So somehow it’s built into the genome that this creature builds this kind of burrow; that’s innate.

And how does the genome encode the instructions for building a certain kind of burrow?

Well, we certainly don’t know in detail, but we know approximately the form of the answer. It’s not like the animal references the particular nucleotides in its genome and reads them out directly to figure out the length of the burrows that it’s going to build. Instead what the genome encodes is an instruction set for wiring up neural circuits, establishing learning rules within those circuits, and also the details of the individual neurons in that circuit. And it is the wiring diagram that contains the information the animal uses to execute its innate behavior.
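As a rough illustration of that idea (a toy model of my own construction, not Zador’s), a compact genome can encode connection rules between cell types rather than listing every synapse; the circuit is then “developed” by applying those rules:

```python
# Toy "indirect encoding": a few wiring rules generate a large circuit.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical genome: one connection probability per pair of cell types.
# With 3 cell types, that is only 9 numbers, however many neurons exist.
n_types, n_neurons = 3, 300
genome = rng.uniform(size=(n_types, n_types))

# "Development": assign each neuron a cell type, then draw each potential
# connection with the probability its (pre, post) cell types encode.
cell_type = rng.integers(0, n_types, size=n_neurons)
prob = genome[np.ix_(cell_type, cell_type)]               # (300, 300) matrix
wiring = rng.uniform(size=(n_neurons, n_neurons)) < prob  # the wiring diagram

print(f"{genome.size} genome parameters -> {wiring.sum()} synapses")
```

The point of the toy: nine parameters generate tens of thousands of connections, so the information guiding behavior lives in the wiring rules, not in a synapse-by-synapse blueprint.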

But if AI could have these quick learning curves to make organic decisions, couldn’t that lead to robots thinking like people? Aren’t you worried about a robot apocalypse?

No! Why would you want to take over the world? Well, we are hardwired, perhaps as humans and in particular as human males, to want power. Right? But it’s less obvious in other species. So I think it’s because of our primate lineage that these things seem intrinsically intertwined. But an intelligent robot will have no reason to want to dominate. For that matter, it doesn’t even have a survival instinct. In AI, it’s we who design the circuitry, not evolution, so we’re essentially picking what will be innate. So if you don’t want robots to learn to dominate, you just leave that out.

So what should we be worried about?

There are still huge problems. If we do achieve AI that can think like humans, the disruption to human economies might be massive, and it could be just awful. Machines have already replaced humans for many forms of manual labor, but they haven’t been able to replace humans for intellectual labor. If machines are as smart as people, then even intellectual labor will no longer be valued. You’ll be out of a job; I’ll potentially be out of a job. I mean, there are some horribly dystopian futures that, at least in our current system, might be accelerated with the advent of AI. And a lot of people in Silicon Valley take this seriously. But at the end of the day, we really can’t say how much good will come out of it and how much bad. If I worried too much about this, I’d just be paralyzed. It’s an interesting enough problem that I’m just going to have fun and keep working on it.

Written by: Brian Stallard, Content Developer/Communicator | publicaffairs@cshl.edu | 516-367-8455

About

Anthony Zador

Professor
The Alle Davis and Maxine Harrison Professor of Neurosciences
M.D., Ph.D., Yale University, 1994
