

OpenAI set out to show a different kind of tech giant was possible. It might prove the opposite.
Welcome to a special one-topic issue of the Techtris newsletter, on what is undoubtedly the most significant s**tshow in big tech in quite some time. OpenAI is the zeitgeist company of the AI movement, and it has gone from apparent triumph to disaster in the course of a week.
A huge product demo “developer day” earlier in the week suggested the company was accelerating the commercialisation and the practical rollout of its tech. But on Friday afternoon, the company’s board announced CEO Sam Altman – perhaps AI’s best-known advocate and chief executive – was departing with immediate effect.
No effort was made to hide that this was a sacking, with the official statement saying the board had lost confidence in Altman. Extraordinarily, almost no effort seemed to have been made to consult significant stakeholders: shareholders and senior executives were completely out of the loop.
The company now seems to be imploding: senior executives are departing, investors want answers, and the board is reportedly seeking Altman’s return. What has happened looks incomprehensible unless you know the strange history of OpenAI, its philosophy, and how that is now clashing with reality. Let’s try to do the short version (which won’t be that short, but the long version is especially long).
It all starts with effective altruism
Effective altruism – which I’ve written about here before ($) – started out as a relatively simple idea for the philanthropic sector: donate money where it would be most effective, rather than just donating ineffectually for a ‘warm glow’.
The early logic of this was to test empirically the effectiveness of different interventions and fund those with the best results – this tended to suggest that targeting money at the poorest countries and populations would achieve more than spending in richer ones, with interventions such as deworming or mosquito nets particularly popular.
Given its grounding in the mathematics of utilitarianism, though, the movement started to attract proponents from big tech – who pushed the logic further. Why focus just on the money you’re donating when you could increase that amount by earning more to give more? This logic suggested it would be more ethical to work in a hedge fund, or big tech, and donate the proceeds than to work for a charity – if you were capable of it. Disgraced former FTX boss Sam Bankman-Fried was, until his fall, the poster boy for this creed.
The logic could be pushed further still: why should we only think about people alive today? If the future could hold many billions (or even trillions) more people than are alive now, interventions aimed at the long-term future might be vastly more effective than those aimed at the present.
That logic points effective altruists towards two things: interplanetary settlement (so humanity could outlive an Earth-extinction event) and Artificial General Intelligence (AGI) – a super-intelligent version of AI that could, if ‘aligned’ correctly to humanity’s benefit, vastly improve both the length and quality of our existence.
That “if” on alignment is exceptionally important: the people who look to highly-advanced AGIs have watched all the same human-versus-robot movies the rest of us have, and are just as aware that an artificial superintelligence could end humanity just as easily as elevate it.
That’s where OpenAI was supposed to come in.
Enter OpenAI
This is very much a SparkNotes-style summary of OpenAI’s distinctly involved backstory, which I suggest you don’t take as gospel. The short version is that OpenAI was initially set up as a true not-for-profit with the goal of advancing the introduction of a safe AGI.
That was the overriding priority, with no intention of focusing on the profit motive or on hefty returns for venture capitalists. Funders including Elon Musk had appeared to pledge billions to the organisation over the course of its development, so it would not need to worry about paying the bills as it advanced the project.
For various reasons – as usual including personality clashes and differences in directions – this ‘pure’ not-for-profit model did not last for long, and the money OpenAI had thought it could expect from Musk dried up. This left OpenAI needing to try to attract investment from commercial players, who would reasonably want a company able to give them a return.
OpenAI, in a series of moves led by Altman – who, it should be noted, is himself a sincere believer in a lot of OpenAI’s non-commercial motives – shifted to a hybrid structure. It would be able to generate a profit and a return for investors, potentially even eventually IPOing.
However, investors were warned that OpenAI might never generate a profit, that the company would not be primarily focused on delivering one, and that they might be better off thinking of any investment as something nearer to a donation than something likely to generate a return.
OpenAI’s biggest investors do not get a seat on the board, and even Altman himself has no equity whatsoever in the company – an almost unheard-of situation for the sector. The theory was that the model would attract like-minded investors willing to accept an extremely unusual governance arrangement, which would help the company stay focused on its mission.
In exchange, there was a relatively generous cap on the size of returns an investor could expect: namely 100x what they put in. So Microsoft, having invested a billion dollars, could expect a maximum of $100 billion as returns.
These are obviously astronomical returns, but they are not entirely unheard of in the world of big tech – so essentially the promise was that your investment could be hugely successful, but not so successful that, were the company to achieve AGI, you would hold a sizeable share of its revenues forever.
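As a minimal sketch of how a return cap of this kind works – purely illustrative figures and function names, nothing here reflects OpenAI’s actual terms – the payout is simply the investor’s share of profits, truncated at the cap:

```python
# Purely illustrative sketch of a capped-return structure – not OpenAI's actual
# terms, and the figures below are made up for the example.
def capped_return(investment: float, uncapped_profit_share: float, cap_multiple: float = 100.0) -> float:
    """The investor's payout: their share of profits, truncated at cap_multiple times the investment."""
    return min(uncapped_profit_share, cap_multiple * investment)

# A $1bn investment whose uncapped share of profits came to $250bn would still
# pay out at most $100bn under a 100x cap; anything above the cap never reaches the investor.
print(capped_return(1e9, 250e9))  # 100000000000.0
```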
Until the last week or so, this model seemed to be working. The ‘open’ in OpenAI was increasingly a name rather than a principle, in that the models it generated had long since stopped being open-sourced by default, and the company’s operations are now less open than Meta’s AI projects.
So…what’s gone wrong?
What counts as effective?
The situation is still murky, but so far as we can tell, at the core of Sam Altman’s ouster is an old-fashioned dispute between founders. Ilya Sutskever, OpenAI’s chief scientist and a co-founder, is said to have been behind the ouster – based on concerns around the pace of commercialisation versus research, and whether there was sufficient focus on safety.
OpenAI now has a hot product that a lot of people think is on the verge of being viable for uses from content generation to call centre management, among numerous others. There are also multiple rival products with similar capabilities.
Here is where the logic of effective altruism stops offering clear answers, especially to those who think strong AGIs are possible in the medium term, and who worry about their dangers to different extents.
One set of logic tells you this: doing research is expensive, and it’s a lot easier to do expensive things if you have money. Revenue is the best guarantor of independence, and so OpenAI should monetise the tech it has in order to be a major player in both AI and the AI safety debate now, and to fund work on a true AGI.
The other set of logic tells you that commercial distractions and compromises today will stop the work that should go into building the future – whether that means focusing harder and faster on developing AGI, or stopping to work out how to do it safely first.
Either of those positions argues against commercialisation today – and all three arguments are justifiable under effective altruism’s mathematical approach. It just depends where you set the dial on different unknowable risks and returns. It turns out that effective altruism does not really work as an escape from strategic decision-making – there is rarely one ‘right’ answer in reality.
EA versus reality, part two
On paper, what OpenAI’s board did was well within its power. It had a charter that granted it the power to remove the CEO, and investors knew they did not have a seat on the board and would not get one. They also knew, and had signed documents confirming they knew, that OpenAI would not act to maximise their return. An investor lawsuit against the decision would be a difficult one to win.
That is not the way the world actually works, however. You can have a great deal of authority on paper and then discover it amounts to almost nothing in practice. The simple reality is that Altman is the person with the loyalty of much of the senior staff, and there would have been nothing to stop him starting a new AI startup on Monday.
With the prospect of returns unburdened by any of the restrictions OpenAI placed on investment, it was obvious that several of OpenAI’s main backers would be onside to switch teams – several of them have openly said as much by this point. It was also clear that many of the senior staff would move too.
OpenAI would face having all of its senior team hollowed out, its investor support crumbling at the same time, and it would need to see off a new rival against that backdrop. The board simply failed to recognise the informal power dynamics, having overly focused on the formal ones.
This is about as spectacularly as a board can fail: its entire purpose is to see the whole picture, not just the bit of it that suits it. It shows up the wider problem with effective altruism, too: it is easy to reduce things to rules, rankings, pieces of paper and the like in theory. It falls apart when it meets reality.
What happens next is anyone’s guess. The board of OpenAI is reportedly trying to woo Altman back – and he is understandably not willing to do that without significant governance and personnel changes on the board. Investors are unlikely to be willing to go without board seats for much longer.
OpenAI’s current governance model is dead – it will be interesting to see, should Altman return, what the effective altruism-sympathetic CEO would be willing to tolerate in terms of curbs on his own authority. OpenAI was, on top of everything else, an attempt to build a big tech startup in which the founder/CEO didn’t wield unassailable power.
OpenAI is trying to change the world in two ways: in building safe and revolutionary artificial intelligence models, and in showing big tech companies can work differently to how they have so far. The jury is still out on the former, but the latter experiment is now looking very much like a failure.
This is a one-shot special of my occasional newsletter Techtris. If you’ve enjoyed it, please do subscribe – all posts for the moment are free for the first four weeks after they’re released. If you’ve really enjoyed it, please do feel free to become a paid subscriber, but it gets you nothing beyond the archive for the moment.
Cheers,
James
PS. This newsletter was edited by Jasper Jackson. Please address all feedback on effective altruism to him.
PPS: The image accompanying this newsletter comes from OpenAI’s Dall-E 3. I asked for “a boardroom coup going very badly” and…it decided to make it furry. Here’s how it expanded the prompt:
“Imagine a corporate boardroom scene where anthropomorphic animals in business suits are having a chaotic meeting. A lion, symbolizing the leader, is visibly frustrated, surrounded by a sneaky-looking fox, an indecisive rabbit, and a bear trying to calm everyone down. Papers are flying, a coffee mug is spilling, and a whiteboard has a chaotic strategy plan scribbled on it. The scene is filled with exaggerated expressions and a humorous undertone to depict the concept of a boardroom coup gone wrong. Let's create this scene.”