Angus Colwell

‘I was astounded’: Gary Marcus on the Sam Altman saga


This morning, OpenAI – the firm behind ChatGPT – rehired its chief executive, Sam Altman, after firing him on Friday. Altman is the most prominent ambassador for the world of artificial intelligence, and had been set to join Microsoft after leaving the company. After his sacking, more than 95 per cent of OpenAI’s employees demanded that the board resign and reinstate him. Many staff were threatening to quit the lab, and Microsoft had agreed to match their pay. Today, OpenAI caved and welcomed him back.

What’s going on here? Did a firm that was set up to make AI ‘for the benefit of humanity’, whose whole idea was to not be dictated by the whims of the market, forget itself?

I spoke to Gary Marcus, whose Substack is one of the most interesting around. He is professor emeritus of psychology and neural science at New York University, and founded a machine learning company – Geometric Intelligence – that was later bought by Uber.

I conducted the interview with Gary last night, before Altman was rehired. I asked Gary over email this morning what his thoughts were on the past 24 hours:

There is a lot to like about how it turned out, with much credit to the interim CEO Emmett Shear for threading the needle between the employees’ love for Sam and the board’s staunch voice for maintaining the original mission. The company began as a nonprofit charity, by charter in the service of humanity, and still is a nonprofit, and a lot of people seem to have lost sight of that. The board was unfairly vilified, and all of humanity should thank them for standing their ground.

The very fact that Sam (at least temporarily) gave up his board seat and signed off on his ally Greg Brockman departing from the board as well, while allowing Adam D’Angelo, who seems to be the firmest advocate on the other side, to remain on the board, speaks volumes. Sam also agreed to an internal investigation. We still don’t know exactly what transpired, but I think that the final outcome is more checks and balances where we will really need them. A ton of drama, a very good outcome. (Side note: there are no women or minorities on the new interim board; I certainly hope there is more balance when they expand the board, given OpenAI’s outsized influence on the world. They should include some scientists, too.)

The full interview is below. It has been edited for clarity.

How surprised were you to hear that Sam Altman had left OpenAI? Had there been rumours, and was it a possibility you had considered? 

I was astounded. He’s one of the most popular CEOs of all time. But the statement pointed to issues of truthfulness. I’ve never been a fan of the way OpenAI has marketed itself. So I was astounded, but I could see how it happened at the same time. 

What are your problems with how OpenAI marketed itself?

Well, let’s start with the name. They’re called OpenAI, and they were originally open, but they became closed. They started as a nonprofit, but they turned into this hybrid nonprofit/for-profit. You look at their original mission and it’s about being for the benefit of humanity. But lately it’s mostly been about shipping products. So there’s always been some tension between what they have promised and what they’re actually delivering.

So how do you feel now about the workability of the whole notion of ‘nonprofit’? The OpenAI charter said that its ‘primary fiduciary duty is to humanity’. Do you think that is true? 


OpenAI is likely to fail now. I would say it started out as a nonprofit. If it had stayed a nonprofit, we wouldn’t be having these conversations. But they made a switch to being a nonprofit that had a for-profit company in its midst, taking money from powerful venture capitalists who are now suing or threatening to sue. Once they made that switch, I think the die was cast. It was inevitable that there would be tension between their mission of protecting humanity and the other mission of racing forward so as to make money. And so I view it as a failed experiment. I won’t say there’s no way to do this, but if you look at their org chart, if you look at their prospectus, they always knew this might happen, or that it was always a possibility that there would be some tension. They made it absolutely clear that the nonprofit called the shots and, you know, it blew up.

You say that you think OpenAI is likely to fail now? How do you feel about that? Do you view that with regret, or are you fairly sanguine about that possibility? 

I think that it’s extremely important that the world has research around how to make AI beneficial for society, how to constrain the risks. OpenAI was supposed to be doing that. I didn’t feel like they were doing a great job of that, to be honest. If they fail, maybe someone else will step in, and there are various other vehicles through which we might do this. If the world fails at that mission, we’re in trouble. There are real risks here. But I would say that their original mission remains incredibly important.

You’ve written that you can’t just upscale large language models (LLMs) like ChatGPT to create miraculous progress and breakthroughs and get to AGI (artificial general intelligence). However, do you think the events this weekend might slow down progress, or might it, as some suggest, speed it up by giving the people on the safety side a bad rep?

The consequences here are very hard to anticipate. I absolutely continue to think that large language models are not the correct path to trustworthy AI. I think that they are basically autocomplete on steroids, and that’s not a good basis for reasoning ethically, for interpreting consequences of actions, for avoiding bias, for being trustworthy.

So OpenAI had an enormous share, not just of the market, but also of the intellectual space. They were worth so much money, they got so many people to join them, et cetera. I don’t think they were following the right path. So there could actually be a positive consequence here if they get out of this business. Other people might actually step in and explore a broader range of hypotheses than they were doing. So in that way, it could be a good thing.

It could be a bad thing in other ways, like if all of their talent goes to Microsoft. Microsoft is a company that recently fired a whole team of people working on ethical AI. So I don’t think it’s realistic to expect that if those employees go to Microsoft, they’re necessarily going to work on what some people would call beneficial or trustworthy AI. It’s very unclear. The only thing that we can be sure of is that an enormous amount of instability has been introduced into the system, and therefore things are less predictable. I’ve always been telling people that I can’t predict where we’ll be ten years from now. I probably can’t even predict five. I don’t even know what’s going to happen next week.

Do you believe Sam Altman when he says that his priority is safety? 

Well, I think the fact that he’s going to Microsoft suggests otherwise. Microsoft has talked the talk of responsible AI and transparency, but they’re not transparent about what training data they use. They fired a bunch of people who were working on responsible AI. I don’t see them as a paragon of responsible AI, and to some extent, if Sam really was committed to safe and responsible AI, I would think that the move here would be to start a new institute devoted to that, not to go work for another big commercial company. That was the reasoning behind OpenAI existing in the first place, the idea that maybe you couldn’t get this stuff done in a big company. So I think the choices he’s appearing to make right now say something about what his actual values might be, but also, let’s see how it plays out. 

You first wrote on Substack that you had a theory that OpenAI may not have been happy with Sam Altman having other ventures.

Well, not just that. Sam has always been involved in many companies, so it was not news to the board that he was involved in other companies. But my read continues to be that they saw something about one of those other companies, and the nature of that company they felt was in conflict with the mission, and hadn’t been disclosed properly. That’s how I read their statement. I don’t know if it’s correct, but that remains my theory. 

How do you feel about the rumour at the moment, which is that this has been a coup by the decelerationists? There’s talk that Ilya Sutskever, OpenAI’s chief scientist, got concerned about an advance in AI capabilities, and that he thought Sam was going too quickly.

I can’t say it’s impossible. But two things: one is that there has to have been a proximal trigger here, an immediate thing that led the board to act very quickly in a way that appeared to the outside world to be unprofessional. The question is what that immediate thing might be and whether it was more like a business dealing or more of a technology advance. If it was a technology advance, then why does Sam take the fall for that? Why don’t they just have a meeting and say: ‘Hey, Sam, you know, we’re worried about this. Can we change the strategy here?’ And that’s not how they proceeded. So I don’t really read it that way.

The other thing that’s interesting is that I’ve been arguing for years that we’re not really on the path to artificial general intelligence. Large language models are not going to get there. And Sam has been arguing against me and has in fact sometimes criticised me in public. He wrote a sort of funny tweet saying something like: ‘Give me the confidence of a mediocre deep learning skeptic.’ I had said that deep learning had problems, maybe not insuperable problems, but deep problems that we have not resolved. Then he very quickly, over the last few months, changed. Basically he now sounds like me. The argument that he made at the Cambridge Union last week, just before all of this went down, was that you could only get so far with large language models. I wrote this famous piece: ‘Deep Learning Is Hitting a Wall.’ He basically echoed exactly the things I said. Either he was playing some kind of game there, which would suggest that he’s duplicitous, or they don’t actually have a breakthrough that looks to him like AGI.

Why did OpenAI even consider reinstating Sam Altman over the weekend? 

My read is that the board never wanted to reinstate him, but the company and the employees and the investors did. From what I can tell, the board’s actually been pretty consistent, even under enormous pressure from lawyers and employees and so forth. My impression remains that the board never wanted to have Sam back.

Again, I’m not on the inside. So a lot of the questions you’re asking me about, I’m speculating based on the tweets that I see. I’ve worked in the industry a little bit: I had a startup, I sold it to Uber. I have lots of friends in industry. So there are tea leaves that I think I can read because I know how these games go. But I could be wrong.  


There has been some comment about the fact that members of the OpenAI board are aligned with the world of effective altruism. How much do you think that consequential reasoning comes into business decisions like this? Or do you think it’s just murky human morality? 

I really can’t say exactly, except for one thing. I don’t know how to say it louder. This was not a business decision. This was a decision of a nonprofit about their mission. In that sense, it was inherently consequentialist, I suppose, depending on how you mean that term. This was not a decision about how OpenAI could make money or even how it could survive, but about how it could not violate its mission. And its mission was AI in the service of humanity. 

Is there any plan you’ve seen from a particular government that looks like a workable route to making safer AI?

I think my favourite right now is the Hawley-Blumenthal bill and the EU AI Act, as it was written a few weeks ago. It’s looking at things like, how do we check a model before widespread deployment to make sure that the benefits outweigh the risks? How do we look at it after the fact? I think that’s a good bill. There’s also another good bill introduced by John Thune and Amy Klobuchar. So there are some good models out there. Unfortunately, the EU AI Act may get gutted this week, and I’m less convinced that the Hawley-Blumenthal or the Klobuchar bills will actually get through. Ultimately, it’s going to come down to Chuck Schumer’s position, and he has stressed innovation, in my view, more than he has stressed regulation and safety.

Sam Altman said last week that we are heading for the ‘best world ever’. You also have Eliezer Yudkowsky saying we may have a couple of years left. Are they speaking from anything more than an article of faith? Does it require a sort of religious belief to make a judgment on AI and existential risk?

Anyone who’s intellectually honest has to admit we don’t know where this is going. I mean, we’ll still be here in two years. What things will be like, I don’t know. But anyone who’s intellectually honest realises there’s uncertainty here, that we don’t know what the technology will do for good. We don’t know who will use it. We don’t know how they will use it. We don’t know what regulations will be in place. We don’t know what enforcement there might be. We don’t know exactly how the technology will develop. It’s just absurd to say that we know what the outcome is here.

I think it’s perfectly reasonable to say I’m excited about AI, because I think it could help humanity, and it’s perfectly reasonable to say I’m concerned about AI, because it could hurt humanity. Those are both 100 per cent true, and it’s up to us as a society to figure out how to bring it to the best possible outcome. I would suggest that leaving everything in the hands of Silicon Valley, or in the hands of a government that has basically been letting Silicon Valley construct the rules, is a terrible idea. What we learned in the last few days is that these companies can’t even govern themselves. We should not assume that they can self-regulate AI. That’s an absurd idea. We obviously need to have some wise government intervention, informed by scientists and civil society, that will help us get to a good outcome. We can’t assume a good outcome. We can’t assume a bad outcome. We have to make the right choices here.
