
Is an AI singularity possible?

Image: Kyle Daruwalla, in a lab coat, operates equipment in a laboratory at Cold Spring Harbor.

How close is artificial intelligence to matching biological intelligence? When will computers be able to think and act as humans do? If you ask one of the big tech companies currently developing AI, the answer will likely be “soon.” But for CSHL NeuroAI Scholars Kyle Daruwalla and Christian Pehle, the possibility of a so-called AI singularity is still a long way off.

After seeing Daruwalla and Pehle speak with sci-fi novelist James Rollins during a recent CSHL webinar, I became fascinated by the NeuroAI scholars’ shared opinion that an artificial superintelligence isn’t really viable. What made them so sure when big tech keeps proclaiming it’s right around the corner? Thankfully, the two of them were eager to sit down with me here At the Lab and delve deeper into their line of thinking.

From the limitations of computing power to the very structure of our society, Daruwalla and Pehle lay out a compelling argument for why the singularity may not happen and what the real future of artificial intelligence might look like.


Transcript

Nick Fiore: You’re now At the Lab with Cold Spring Harbor Laboratory, where we talk about inspiring curiosity, discoveries, innovations, and the many ways that science makes life better. My name is Nick Fiore, and I’m joined today by two of the lab’s NeuroAI scholars, Kyle Daruwalla and Christian Pehle. We’ll be discussing the possibilities of a technological singularity in the age of artificial intelligence.  

NF: Gentlemen, thank you for joining me today. Before we get into our topic, could you explain what NeuroAI is and the nature of your research here at the lab?

Kyle Daruwalla: Yeah, sure. I think each of us has similar but slightly different views of NeuroAI. The kind of research that I’m working on here at the Lab goes a little bit in both directions. There’s one kind of NeuroAI, which I think happens here at Cold Spring Harbor and at a lot of institutions, where people take state-of-the-art AI models and apply them to biological data to try to accelerate the pace of scientific research. So that’s one half. The other half of my research is focused on building better AI models from biological inspiration.

Christian Pehle: The way I think about NeuroAI is that the aim is to establish a kind of virtuous cycle, one that existed from the very beginning of computation. Before modern computers were invented, the brain was the only example we had of something that computes, so initially we used metaphors from the brain to even think about how computers should work. And for, let’s say, the first 50 years of AI development, the brain was the prime example that people used when they were thinking about building AI systems.

CP: But arguably in the last, let’s say, 10 years, this virtuous cycle has been broken. Modern AI architectures take very little inspiration from the brain beyond the absolute basics, yet they are also far less efficient, far less adaptive, and more brittle than we know biological brains to be. So, in a way, NeuroAI aims to re-establish this virtuous cycle: looking at neuroscience and what we know about biological brains, and bringing the principles we can identify there back to artificial intelligence.

NF: Obviously, there have been a lot of developments in the field of AI research recently, but is there some unanswered question or goal that keeps you up at night that you’re pushing towards in your work?

CP: I think there’s an exponential increase in investment in AI right now. The amount of money and energy that people are willing to commit to the current approach is increasing rapidly, and with it, the capabilities of these models are also increasing rapidly. That’s all due to the scaling law, which basically says that as you exponentially increase the amount of data, you get an improvement in the performance of these models. So, the question that keeps me up at night is, first of all, that this ingredient, data, is running out, or has already run out. The next thing now is reasoning, essentially, and there’s still some room there. What keeps me up at night is: will this continue?
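
The “scaling law” Pehle refers to is usually stated as an empirical power law. One commonly cited form, shown here purely as an illustration rather than any particular lab’s exact fit, relates a model’s loss L to its parameter count N and the number of training tokens D:

```latex
% Illustrative neural scaling law: E is the irreducible loss;
% A, B, alpha, beta are empirically fitted constants.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

The power-law form implies diminishing returns: each further drop in loss requires a multiplicative increase in parameters and training data, which is why the supply of high-quality data comes up again later in the conversation.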

CP: If it continues, that means, and this is what people believe, that there’s a three-to-five-year window in which humans will be able to contribute to the generation of knowledge, and after that, AI systems will be strictly superior at it. That could be read as a very pessimistic view of the world, but it also means that right now, since we are on an exponential curve, any decision that I make, or that any AI researcher makes, about what to research could have an outsized influence on the future trajectory of AI. Choosing what to work on right now could be the most important thing ever, because if it really turns out that in five years there will be nothing left to research, then this is the most critical time to do research.

KD: Yeah, and another aspect of that, you were saying how any decision we make now in terms of what to research could have an outsized impact on the future trajectory of AI. People talk about the financial and energy costs of AI and its impact on society right now. So I think for anyone working in AI, the non-technical thing that keeps you up at night is that you’re a bit like the people who were part of the Manhattan Project, but at the scale of all AI researchers in the world. You may not be working on the thing that makes the big difference, but you could be. So what’s the ethical and right way to do that?

KD: And how can you ensure that the technology that you’re creating will actually result in the betterment of society instead of some kind of outsized harm? Because I think in a lot of this conversation about singularity, most people worry about the negative outcome where you have some kind of conscious AI, and then it works against the interests of humans.

NF: Yeah, I think both of your answers really touch upon the topic I wanted to get into today, which is this idea of an AI singularity: the possibility of a future in which artificial intelligence surpasses human control, becomes sentient, becomes a superintelligence, however you want to define it. I recently listened to a discussion between you and the author James Rollins, and the two of you seemed to cast doubt on that narrative. To start, why do you tend not to agree with that theory?

KD: I guess the first thing for me, at least, and I mentioned this when we were talking with James, is that people who believe in an AI singularity think of there being some kind of threshold beyond which an AI would suddenly be conscious in the same way that a human is conscious. A big difference between those believers and my kind of thinking is that I don’t really think of consciousness as a particular threshold. I think of it as a spectrum. In fact, I prefer not to use the word consciousness at all. Every living organism has some level of agency: some amount of information it takes in from the world and some amount of control it exerts on it. The balance between those two, and the range of problems that particular living thing needs to solve, determines where it lands on this spectrum.

KD: And I think we as humans just happen to be on one end of that spectrum, where our brains are designed in such a way that we’re not super-efficient at any one particular thing. Instead, we are extremely adaptable and flexible, and that adaptability and flexibility allows us to survive in a world where other living organisms are extremely specialized toward one particular task and do that task far better than a human ever could. So, on a very broad level, there’s a huge difference between my thinking on consciousness and what a lot of believers in an AI singularity would promote.

KD: But then, if we define the singularity as a point where an AI can self-improve and has at least the same level of intelligence and reasoning as the average human, I think we’re still a ways off from that being the case. Part of that is because of what I was talking about earlier. There are a lot of things we as humans do that involve interacting with the physical world, and they’re not in the form of language or some domain akin to it. For that reason, I think there’s a lot of learning embedded both in our genome and in our early experience that isn’t really captured in current AI models but is the bedrock of our reasoning abilities as we move through the world. The only way current systems could achieve that from pure language is through the scaling-law phenomenon Christian mentioned. And for me, the energy, compute, and data resources required to hit the scale for human-level intelligence are too far out in the current paradigm.

KD: And a lot of people who try to predict the future believe in a moment where the AI systems themselves will come up with a big breakthrough that breaks that scaling and gives us a new scaling law that’s even better. But I just think of that as fantastical thinking. There’s no evidence, in my mind, to believe it’s true, other than faith.

CP: Yeah, I mean, from the get-go, that was part of why I answered the way I did about the singularity. As soon as consciousness comes up, it’s very hard for me to talk about it, because I don’t have a strong belief either way. But if I define the singularity as a technological singularity, meaning a positive feedback loop that leads to a runaway, exponential acceleration of technological progress, and in this particular case that runaway feedback loop would be spurred by the fact that we’ve invented AGI, artificial general intelligence, that is able to self-improve, then I think I can give some arguments for why there are severe challenges to that.

CP: I would break this down into, let’s say, three categories, though there are others. The first is physical limitations: at some point, physical laws themselves will prevent an explosion, absent things we don’t know about. Then there are algorithmic limitations. The types of models we’re building right now cannot, even in principle, break complexity limitations, for example. There are some things that computers simply cannot compute, so even if you build an AI model, it won’t be able to compute the answer. And what that means, and this goes back to what Kyle was saying, is that at that point, in order to make progress, you need to interact with the real world, because you need to be able to do experiments. Once you have to do that, the timescale of improvement is no longer limited just by compute; it’s limited by your ability to interact with the real world and make progress there.

CP: And then finally, and in many ways I think it’s the biggest one, there’s economics. Even though right now people are willing to invest hundreds of billions, maybe trillions, into progress in AI, if the return on investment turns out to be less than expected, that explosive investment can collapse. We don’t know if that will happen. Obviously, any improvement in energy efficiency, or in the ability to train large-scale models at a fraction of the cost, would help, and that is my focus right now. In terms of self-improvement, that’s hard to know. The place where I can most easily see it happening is in abstract domains. In mathematics, for example, it could easily be the case that over the next three to five years we see models that can prove theorems that haven’t been proven by humans, and that could lead to self-improvement in the ability of AI systems to do advanced mathematics.

CP: But just as Aristotelian physics failed because it was a bunch of Greek philosophers sitting around thinking about how the world ought to be, AI systems without experimental access to the world will not be able to arbitrarily improve upon our knowledge and capabilities in the real world. And so the timescale at which that happens is not set just by investing in and building new data centers. It’s set by how much they can speed up this experimental process. All the data that’s possibly available to AI systems has been hoovered up and used, and so I don’t think, ultimately, this will lead to a runaway feedback process.

NF: The idea that we’re running out of or have run out of data is not really something I’m familiar with. Should we be concerned about this? 

KD: I guess if you’re a company like OpenAI, maybe you’re concerned about it, because the easy wins are, to some extent, gone. You have to innovate in order to keep up with the pace of progress you’re promising, but without the same approach you’ve been pursuing. These models have basically been trained on all the data that’s out there in the form of text on the internet. And of course, there’s more and more text being generated on the internet all the time. One aspect of the problem is that much of that new text is generated by AI, and it’s a little uncertain at this stage whether that will lead to long-term degradation in the quality of the data used to train these models. So, in terms of running out of data, even though people are still posting on Reddit and the like all the time, and Wikipedia is always growing, the amount of useful, high-quality data that would actually lead to a meaningful improvement in these AI models is running out. And I think that’s perhaps what is most concerning to these companies.

KD: For someone like me, the fact that data is running out isn’t really that big of a concern. It’s more of an opportunity, because a lot of the research I’m focused on is trying to figure out how to build models that learn in a sample-efficient manner. For example, for tasks that are really relevant to what an animal needs to do, the animal can perform them with near-optimal efficiency very soon after being born, and it certainly doesn’t take nearly as many examples as a large language model needs to learn the same task. Being able to adapt in the presence of limited data is a feature of biology that isn’t captured by current models.

CP: Part of it is that even if we had more data, it would become too costly to train a model on that amount of data. So we are at a point where we are both running out of high-quality data and hitting diminishing returns if we were to present more. That’s why, as far as we can tell, the more profitable and reasonable approach now is to train the models on reasoning. For example, for code and for things that are purely evaluable inside a computer, you can make rapid advances simply by letting these models try to reason and solve problems, and then using that data, which is basically synthetic but verifiable, to make progress.
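
The “synthetic but verifiable” loop Pehle describes can be sketched in a few lines of code. The sketch below is a simplified illustration of the general idea, not any company’s actual pipeline; the `propose_solution` call and the problem format are hypothetical stand-ins.

```python
# Minimal sketch of a verifiable synthetic-data loop: have a model propose code,
# keep only attempts that pass automated tests, and reuse the verified attempts
# as new training data. `propose_solution` is a hypothetical stand-in for
# whatever model API would actually be used.

def passes_tests(candidate_code: str, tests: list[tuple[object, str]]) -> bool:
    """Return True only if the candidate defines solve() and passes every test."""
    namespace: dict = {}
    try:
        exec(candidate_code, namespace)
        solve = namespace["solve"]
        return all(str(solve(inp)) == expected for inp, expected in tests)
    except Exception:
        return False

def collect_verified_data(problems, propose_solution, attempts_per_problem=8):
    """Sample several attempts per problem and keep only the ones that verify."""
    verified = []
    for problem in problems:
        for _ in range(attempts_per_problem):
            candidate = propose_solution(problem["prompt"])
            if passes_tests(candidate, problem["tests"]):
                verified.append({"prompt": problem["prompt"], "solution": candidate})
                break
    return verified

# Fine-tuning the model on `verified` closes the loop: improvement happens only
# where outputs can be checked automatically, which is why code and mathematics
# are the domains where this approach moves fastest.
```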

KD: The point you just made about training on reasoning and on synthetic data is at the heart of a lot of these singularity-type arguments, and, I would guess based on their current actions, it’s exactly the mission of these companies: to train models to write code really well. Their goal right now is not to do everything a human can do, but to focus specifically on tasks where they can generate a lot of synthetic data and verify really well whether the model is doing the correct thing or not. The idea is that slowly they’ll replace a lot of their engineering teams with these AI models, and then, because of that, they’ll be producing more and more code that does AI research, which is what we do anyway.

KD: At the heart of the singularity argument is the idea that this will hit some kind of critical threshold where enough progress is being made just by testing out new ideas with limited human supervision; you test ideas fast enough that eventually you hit on the one that becomes the next breakthrough. And that’s what starts this self-fulfilling prophecy of an AI singularity. But I think that discounts the importance of ingenuity and critical thinking in research, which, even in the best reasoning models, isn’t really there. The breakthrough required to achieve something like a singularity is not incremental research, and at best, the approach these companies are taking would lead to really good incremental researchers, but not necessarily ones with the insight to make that critical breakthrough.

CP: When we started this conversation, I was focusing on physical limitations. They for sure exist. When they will come into play is hard to say, because there are things like the Landauer limit and just the total energy available on Earth, and those are very far away. But the algorithmic limitations of these models are far less clear. It’s not clear whether the approach being taken right now will ever lead to a truly general intelligence. I can much more easily believe that it will lead to new mathematical breakthroughs than that it will lead to a positive feedback loop across all technology. Because the things required to get such a positive feedback loop in all of technology, like I said, you cannot just think them up. You actually need to interact with the real world.

CP: And once you do that, then the rate of improvement is no longer just restricted by the amount of money that you’re investing into GPUs. It’s related to a complete restructuring of society. And at that point, there’s lots of other friction that will arise because it’s much harder to change society than to just build data centers. 
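
For reference, the Landauer limit Pehle mentioned a moment ago sets a thermodynamic floor on the energy needed to erase a single bit of information:

```latex
% Landauer limit: minimum energy to erase one bit at temperature T
E_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\ \mathrm{J} \quad \text{at } T \approx 300\ \mathrm{K}
```

Current hardware dissipates many orders of magnitude more energy than this per logical operation, which is why he counts such physical limits as “very far away” compared with the economic and algorithmic ones.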

NF: OK, so, we’re seeing the limitations that stand between where we’re at now and the possibility of singularity. What do you see as being the best possible outcome from the current trajectory of AI development? 

KD: For me, the best possible outcome for AI has more to do with computing than it has to do with AI. As Christian mentioned very early in this conversation, the brain was once the only example of a computing system that we had. With all the technology we have today, I think we lose sight of the fact that the very idea of computers is relatively new. It’s barely a century old. So there are a lot of fundamental problems in computing that we just haven’t solved. I think answering those questions involves building AI in a very different way from what these companies are currently doing, and having the kinds of innovations that are necessary in AI systems requires solving the same difficult problems we try to solve with computers today. How do you compute in a fixed amount of area without generating a lot of heat? Maybe spikes are the answer. Maybe something else is. It’s true that biology has solved some of those problems.

KD: And so, yeah, I think for me, just as a computer scientist, the best possible outcome is that we’ll actually have a breakthrough and big insight into some of these more fundamental problems about the nature of computing by studying these systems that are synthetic and artificial, but also more like the brain than actual computers are. 

CP: I think the most positive thing I can imagine is that we will be able to advance science in a way we couldn’t before, which will be sad for scientists. It’s something that I’m actually kind of sad about. I would rather do science as a human, but on the other hand, some scientific problems just need answers. It’s not about the egos of scientists, whether AI systems make advances there or not. Making advances in science can mean that we can cure diseases, that we can figure out things we were not previously able to figure out.

CP: Advancing science, to my mind, is one of the noblest things we could hope for in terms of AI development, and that’s actually part of why I chose to come to Cold Spring Harbor, because that’s the mission we have here: to advance science and to benefit humanity. So, in the best-case scenario, AI will benefit humanity. It will lead us to cure diseases. It will lead us to figure out technology that we haven’t figured out. And I think in that sense it can be really positive. The negative potential outcomes keep me up at night, too. But I have enough wisdom to understand that I have limited influence on those, so I’m focusing my energy on making sure that the type of research I do will, in my estimation, lead to a positive outcome. That’s how I think about it.

NF: It’s clear that the future of AI is still being written. But thanks to the curiosity, discoveries, and innovation happening right now at Cold Spring Harbor Laboratory, there is tremendous hope for that future. Thank you for supporting science by listening and learning right alongside us. If interested in more news about Cold Spring Harbor Laboratory research, subscribe to this podcast, sign up for our newsletter, and follow us on social media so you can share with us how science impacts you. To philanthropically support research at Cold Spring Harbor Lab, visit give.cshl.edu. Because science makes life better.