GPT-4 Is Just the Beginning
On Tuesday, OpenAI announced the release of GPT-4, its latest and biggest language model, only a few months after the splashy release of ChatGPT. GPT-4 is already in action: Microsoft has been using it to power Bing's new assistant function. The people behind OpenAI have written that they think the best way to handle powerful AI systems is to develop and release them as quickly as possible, and that's certainly what they're doing.
Also on Tuesday, I sat down with Holden Karnofsky, the co-founder and co-CEO of Open Philanthropy, to talk about AI and where it’s taking us.
Karnofsky, in my view, should get a lot of credit for his prescient views on AI. Since 2008, he’s been engaging with what was then a small minority of researchers who were saying that powerful AI systems were one of the most important social problems of our age — a view that I think has aged remarkably well.
Some of his early published work on the question, from 2011 and 2012, raises questions about what shape those systems would take and how hard it would be to make their development go well, all of which only looks more important with a decade of hindsight.
In the last few years, he’s started to write about the case that AI may be an unfathomably big deal — and about what we can and can’t learn from the behavior of today’s models. Over that same time period, Open Philanthropy has been investing more in making AI go well. And recently, Karnofsky announced a leave of absence from his work at Open Philanthropy to explore working directly on AI risk reduction.
The following interview has been edited for length and clarity.
Kelsey Piper
You’ve written about how AI could mean that things get really crazy in the near future.
Holden Karnofsky
The basic idea would be: Imagine what the world would look like in the far future after a lot of scientific and technological development. Generally, I think most people would agree the world could look really, really strange and unfamiliar. There’s a lot of science fiction about this.
What is most high stakes about AI, in my opinion, is the idea that AI could potentially serve as a way of automating all the things that humans do to advance science and technology, and so we could get to that wild future a lot faster than people tend to imagine.
Today, we have a certain number of human scientists who try to push forward science and technology. The day that we're able to automate everything they do, that could mean a massive increase in the amount of scientific and technological advancement getting done. And furthermore, it could create a kind of feedback loop that we don't have today: as you improve your science and technology, you get a greater supply of hardware and more efficient software, which lets you run a greater number of AIs.
And because AIs are the ones doing the science and technology research and advancement, that could go in a loop. If you get that loop, you get very explosive progress.
The upshot of all this is that the world most people imagine in some wild sci-fi future thousands of years from now could instead be 10 years out, or one year out, or months out, from the point when AI systems are doing all the things that humans typically do to advance science and technology.
This all follows straightforwardly from standard economic growth models, and there are signs of this kind of feedback loop in parts of economic history.
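To make that dynamic concrete, here is a toy simulation of the two growth regimes Karnofsky is contrasting, written for this piece rather than drawn from any model he cites. The productivity constant, the time step, and the assumption that the number of automated researchers scales one-for-one with the technology level are all illustrative choices, not figures from the interview:

```python
# A toy version of the feedback loop described above. With a fixed pool of
# human researchers, the technology index grows merely exponentially; if the
# number of (AI) researchers instead scales with the technology level itself,
# growth turns hyperbolic and the index explodes in finite time.
# Every parameter here is an illustrative assumption, not a figure from the
# interview or from any particular economic growth model.

K = 0.05    # assumed research productivity per researcher, per year
DT = 0.01   # simulation time step, in years

def simulate(ai_researchers: bool, max_years: float = 100.0):
    tech = 1.0   # technology index, normalized to 1 at the start
    t = 0.0
    while t < max_years and tech < 1e9:
        # Key assumption: automated researchers scale with the tech level.
        researchers = tech if ai_researchers else 1.0
        tech += K * researchers * tech * DT   # research output compounds
        t += DT
    return t, tech

for label, flag in [("fixed human researchers", False), ("AI researchers", True)]:
    t, tech = simulate(flag)
    print(f"{label:>23}: tech index {tech:10.3g} at year {t:5.1f}")
```

The contrast is the whole point: holding the researcher pool fixed yields ordinary exponential growth, while letting researchers scale with the technology they produce sends the same equation hyperbolic, with the index blowing up around year 20 under these made-up parameters.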