The messy, secretive reality behind OpenAI’s bid to save the world

Above all, OpenAI is lionized for its mission. Its goal is to be the first to create AGI—a machine with the learning and reasoning powers of a human mind. The purpose is not world domination; rather, the lab wants to ensure that the technology is developed safely and its benefits distributed evenly to the world.

The implication is that AGI could easily run amok if the technology’s development is left to follow the path of least resistance. Narrow intelligence, the kind of clumsy AI that surrounds us today, has already served as an example. We now know that algorithms are biased and fragile; they can perpetrate great abuse and great deception; and the expense of developing and running them tends to concentrate their power in the hands of a few. By extrapolation, AGI could be catastrophic without the careful guidance of a benevolent shepherd.

OpenAI wants to be that shepherd, and it has carefully crafted its image to fit the bill. In a field dominated by wealthy corporations, it was founded as a nonprofit. Its first announcement said that this distinction would allow it to “build value for everyone rather than shareholders.” Its charter—a document so sacred that employees’ pay is tied to how well they adhere to it—further declares that OpenAI’s “primary fiduciary duty is to humanity.” Attaining AGI safely is so important, it continues, that if another organization were close to getting there first, OpenAI would stop competing with it and collaborate instead. This alluring narrative plays well with investors and the media, and in July Microsoft injected the lab with a fresh $1 billion.

OpenAI’s logo hanging in its office. Photograph: Christie Hemm Klok

But three days at OpenAI’s office—and nearly three dozen interviews with past and current employees, collaborators, friends, and other experts in the field—suggest a different picture. There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration. Many who work or worked for the company insisted on anonymity because they were not authorized to speak or feared retaliation. Their accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.

Since its earliest conception, AI as a field has strived to understand human-like intelligence and then re-create it. In 1950, Alan Turing, the renowned English mathematician and computer scientist, began a paper with the now-famous provocation “Can machines think?” Six years later, captivated by the nagging idea, a group of scientists gathered at Dartmouth College to formalize the discipline.

“It is one of the most fundamental questions of all intellectual history, right?” says Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence (AI2), a Seattle-based nonprofit AI research lab. “It’s like, do we understand the origin of the universe? Do we understand matter?”

The trouble is, AGI has always remained vague. No one can really describe what it might look like or the minimum of what it should do. It’s not obvious, for instance, that there is only one kind of general intelligence; human intelligence could just be a subset. There are also differing opinions about what purpose AGI could serve. In the more romanticized view, a machine intelligence unhindered by the need for sleep or the inefficiency of human communication could help solve complex challenges like climate change, poverty, and hunger.

But the resounding consensus within the field is that such advanced capabilities would take decades, even centuries—if indeed it’s possible to develop them at all. Many also fear that pursuing this goal overzealously could backfire. In the 1970s and again in the late ’80s and early ’90s, the field overpromised and underdelivered. Overnight, funding dried up, leaving deep scars in an entire generation of researchers. “The field felt like a backwater,” says Peter Eckersley, until recently director of research at the industry group Partnership on AI, of which OpenAI is a member.