The ChatGPT King Isn’t Worried, but He Knows You Might Be

I first met Sam Altman in the summer of 2019, days after Microsoft agreed to invest $1 billion in his three-year-old start-up, OpenAI. At his suggestion, we had dinner at a small, decidedly modern restaurant not far from his home in San Francisco.

Halfway through the meal, he held up his iPhone so I could see the contract he had spent the last several months negotiating with one of the world’s largest tech companies. It said Microsoft’s billion-dollar investment would help OpenAI build what was called artificial general intelligence, or A.G.I., a machine that could do anything the human brain could do.

Later, as Mr. Altman sipped a sweet wine in lieu of dessert, he compared his company to the Manhattan Project. As if he were chatting about tomorrow’s weather forecast, he said the U.S. effort to build an atomic bomb during the Second World War had been a “project on the scale of OpenAI — the level of ambition we aspire to.”

He believed A.G.I. would bring the world prosperity and wealth like no one had ever seen. He also worried that the technologies his company was building could cause serious harm — spreading disinformation, undercutting the job market. Or even destroying the world as we know it.

“I try to be upfront,” he said. “Am I doing something good? Or really bad?”

In 2019, this sounded like science fiction.

In 2023, people are beginning to wonder if Sam Altman was more prescient than they realized.

Now that OpenAI has released an online chatbot called ChatGPT, anyone with an internet connection is a click away from technology that will answer burning questions about organic chemistry, write a 2,000-word term paper on Marcel Proust and his madeleine or even generate a computer program that drops digital snowflakes across a laptop screen — all with a skill that seems human.

As people realize that this technology is also a way of spreading falsehoods or even persuading people to do things they should not do, some critics are accusing Mr. Altman of reckless behavior.

This past week, more than a thousand A.I. experts and tech leaders called on OpenAI and other companies to pause their work on systems like ChatGPT, saying they present “profound risks to society and humanity.”

And yet, when people act as if Mr. Altman has nearly realized his long-held vision, he pushes back.

“The hype over these systems — even if everything we hope for is right long term — is totally out of control for the short term,” he told me on a recent afternoon. There is time, he said, to better understand how these systems will ultimately change the world.

Sam Altman, the chief executive of OpenAI, whose company created the online chatbot ChatGPT and has received more than $13 billion in investment from Microsoft. Credit: Jim Wilson/The New York Times

Many industry leaders, A.I. researchers and pundits see ChatGPT as a fundamental technological shift, as significant as the creation of the web browser or the iPhone. But few can agree on the future of this technology.

Some believe it will deliver a utopia where everyone has all the time and money ever needed. Others believe it could destroy humanity. Still others spend much of their time arguing that the technology is never as powerful as everyone says it is, insisting that neither nirvana nor doomsday is as close as it might seem.

Mr. Altman, a slim, boyish-looking, 37-year-old entrepreneur and investor from the suburbs of St. Louis, sits calmly in the middle of it all. As chief executive of OpenAI, he somehow embodies each of these seemingly contradictory views, hoping to balance the myriad possibilities as he moves this strange, powerful, flawed technology into the future.

That means he is often criticized from all directions. But those closest to him believe this is as it should be. “If you’re equally upsetting both extreme sides, then you’re doing something right,” said OpenAI’s president, Greg Brockman.

To spend time with Mr. Altman is to understand that Silicon Valley will push this technology forward even though it is not quite sure what the implications will be. At one point during our dinner in 2019, he paraphrased Robert Oppenheimer, the leader of the Manhattan Project, who believed the atomic bomb was an inevitability of scientific progress. “Technology happens because it is possible,” he said. (Mr. Altman pointed out that, as fate would have it, he and Oppenheimer share a birthday.)

He believes that artificial intelligence will happen one way or another, that it will do wonderful things that even he can’t yet imagine and that we can find ways of tempering the harm it may cause.

It’s an attitude that mirrors Mr. Altman’s own trajectory. His life has been a fairly steady climb toward greater prosperity and wealth, driven by an effective set of personal skills — not to mention some luck. It makes sense that he believes that the good thing will happen rather than the bad.

A New Generation of Chatbots

A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today’s powerhouses into has-beens and creating the industry’s next giants. Here are the bots to know:

ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

Bing. Two months after ChatGPT’s debut, Microsoft, OpenAI’s primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot’s occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.

Bard. Google’s chatbot, called Bard, was released in March to a limited number of users in the United States and Britain. Originally conceived as a creative tool designed to draft emails and poems, it can generate ideas, write blog posts and answer questions with facts or opinions.

Ernie. The search giant Baidu unveiled China’s first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised “live” demonstration of the bot was revealed to have been recorded.

But if he’s wrong, there’s an escape hatch: In its contracts with investors like Microsoft, OpenAI’s board reserves the right to shut the technology down at any time.

The Vegetarian Cattle Farmer

The warning came with the driving directions: “Watch out for cows.”

Mr. Altman’s weekend home is a ranch in Napa, Calif., where farmhands grow wine grapes and raise cattle.

During the week, Mr. Altman and his partner, Oliver Mulherin, an Australian software engineer, share a house on Russian Hill in the heart of San Francisco. But as Friday arrives, they move to the ranch, a quiet spot among the rocky, grass-covered hills. Their 25-year-old house has been remodeled to look both folksy and contemporary. The Cor-Ten steel that covers the outside walls is rusted to perfection.

As you approach the property, the cows roam across both the green fields and gravel roads.

Mr. Altman is a man who lives with contradictions, even at his getaway home: a vegetarian who raises beef cattle. He says his partner likes them.

On a recent afternoon walk at the ranch, we stopped to rest at the edge of a small lake. Looking out over the water, we discussed, once again, the future of A.I.

His message had not changed much since 2019. But his words were even bolder.

He said his company was building technology that would “solve some of our most pressing problems, really increase the standard of life and also figure out much better uses for human will and creativity.”

OpenAI employees at work in the cafeteria. Credit: Jim Wilson/The New York Times

He was not exactly sure what problems it would solve, but he argued that ChatGPT showed the first signs of what is possible. Then, with his next breath, he worried that the same technology could cause serious harm if it wound up in the hands of some authoritarian government.

Mr. Altman tends to describe the future as if it were already here. And he does so with an optimism that seems misplaced in today’s world. At the same time, he has a way of quickly nodding to the other side of the argument.

Kelly Sims, a partner with the venture capital firm Thrive Capital who worked with Mr. Altman as a board adviser to OpenAI, said it was like he was constantly arguing with himself.

“In a single conversation,” she said, “he is both sides of the debate club.”

He is very much a product of the Silicon Valley that grew so swiftly and so gleefully in the mid-2010s. As president of Y Combinator, the Silicon Valley start-up accelerator and seed investor, from 2014 to 2019, he advised an endless stream of new companies — and was shrewd enough to personally invest in several that became household names, including Airbnb, Reddit and Stripe. He takes pride in recognizing when a technology is about to reach exponential growth — and then riding that curve into the future.

But he is also the product of a strange, sprawling online community that began to worry, around the same time Mr. Altman came to the Valley, that artificial intelligence would one day destroy the world. Called rationalists or effective altruists, members of this movement were instrumental in the creation of OpenAI.

The question is whether the two sides of Sam Altman are ultimately compatible: Does it make sense to ride that curve if it could end in disaster? Mr. Altman is certainly determined to see how it all plays out.

He is not necessarily motivated by money. Like many personal fortunes in Silicon Valley, his is tied up in a wide variety of public and private companies and is not well documented. But as we strolled across his ranch, he told me, for the first time, that he holds no stake in OpenAI. The only money he stands to make from the company is a yearly salary of around $65,000 — “whatever the minimum for health insurance is,” he said — and a tiny slice of an old investment in the company by Y Combinator.

His longtime mentor, Paul Graham, founder of Y Combinator, explained Mr. Altman’s motivation like this:

“Why is he working on something that won’t make him richer? One answer is that lots of people do that once they have enough money, which Sam probably does. The other is that he likes power.”

‘What Bill Gates Must Have Been Like’

In the late 1990s, the John Burroughs School, a private prep school named for the 19th-century American naturalist and philosopher, invited an independent consultant to observe and critique daily life on its campus in the suburbs of St. Louis.

The consultant’s review included one significant criticism: The student body was rife with homophobia.

In the early 2000s, Mr. Altman, a 17-year-old student at John Burroughs, set out to change the school’s culture, individually persuading teachers to post “Safe Space” signs on their classroom doors as a statement in support of gay students like him. He came out during his senior year and said the St. Louis of his teenage years was not an easy place to be gay.

Georgeann Kepchar, who taught the school’s Advanced Placement computer science course, saw Mr. Altman as one of her most talented computer science students — and one with a rare knack for pushing people in new directions.

“He had creativity and vision, combined with the ambition and force of personality to convince others to work with him on putting his ideas into action,” she said. Mr. Altman also told me that he had asked one particularly homophobic teacher to post a “Safe Space” sign just to troll the guy.

Mr. Graham, who worked alongside Mr. Altman for a decade, saw the same persuasiveness in the man from St. Louis.

“He has a natural ability to talk people into things,” Mr. Graham said. “If it isn’t inborn, it was at least fully developed before he was 20. I first met Sam when he was 19, and I remember thinking at the time: ‘So this is what Bill Gates must have been like.’”

The two got to know each other in 2005 when Mr. Altman applied for a spot in Y Combinator’s first class of start-ups. He won a spot — which included $10,000 in seed funding — and after his sophomore year at Stanford University, he dropped out to build his new company, Loopt, a social media start-up that let people share their location with friends and family.

Mr. Altman at the Loopt office, in 2007. Credit: Sherry Tesler for The New York Times

He now says that during his short stay at Stanford, he learned more from the many nights he spent playing poker than he did from most of his other college activities. After his freshman year, he worked in the artificial intelligence and robotics lab overseen by Prof. Andrew Ng, who would go on to found the flagship A.I. lab at Google. But poker taught Mr. Altman how to read people and evaluate risk.

It showed him “how to notice patterns in people over time, how to make decisions with very imperfect information, how to decide when it was worth pain, in a sense, to get more information,” he told me while strolling across his ranch in Napa. “It’s a great game.”

After selling Loopt for a modest return, he joined Y Combinator as a part-time partner. Three years later, Mr. Graham stepped down as president of the firm and, to the surprise of many across Silicon Valley, tapped a 28-year-old Mr. Altman as his successor.

Mr. Altman is not a coder or an engineer or an A.I. researcher. He is the person who sets the agenda, puts the teams together and strikes the deals. As the president of “YC,” he expanded the firm with near abandon, starting a new investment fund and a new research lab and stretching the number of companies advised by the firm into the hundreds each year.

He also began working on several projects outside the investment firm, including OpenAI, which he founded as a nonprofit in 2015 alongside a group that included Elon Musk. By Mr. Altman’s own admission, YC grew increasingly concerned he was spreading himself too thin.

He resolved to refocus his attention on a project that would, as he put it, have a real impact on the world. He considered politics, but settled on artificial intelligence.

He believed, according to his younger brother Max, that he was one of the few people who could meaningfully change the world through A.I. research, as opposed to the many people who could do so through politics.

In 2019, just as OpenAI’s research was taking off, Mr. Altman grabbed the reins, stepping down as president of Y Combinator to concentrate on a company with fewer than 100 employees that was unsure how it would pay its bills.

Within a year, he had transformed OpenAI into a nonprofit with a for-profit arm. That way he could pursue the money it would need to build a machine that could do anything the human brain could do.

Raising ‘10 Bills’

In the mid-2010s, Mr. Altman shared a three-bedroom, three-bath San Francisco apartment with his boyfriend at the time, his two brothers and their girlfriends. The brothers went their separate ways in 2016 but remained on a group chat, where they spent a lot of time giving one another grief, as only siblings can, his brother Max remembers. Then, one day, Mr. Altman sent a text saying he planned to raise $1 billion for his company’s research.

Within a year, he had done so. After running into Satya Nadella, Microsoft’s chief executive, at an annual gathering of tech leaders in Sun Valley, Idaho — often called “summer camp for billionaires” — he personally negotiated a deal with Mr. Nadella and Microsoft’s chief technology officer, Kevin Scott.

A few years later, Mr. Altman texted his brothers again, saying he planned to raise an additional $10 billion — or, as he put it, “10 bills.” By this January, he had done this, too, signing another contract with Microsoft.

Greg Brockman, the president of OpenAI. Credit: Jim Wilson/The New York Times

Mr. Brockman, OpenAI’s president, said Mr. Altman’s talent lies in understanding what people want. “He really tries to find the thing that matters most to a person — and then figure out how to give it to them,” Mr. Brockman told me. “That is the algorithm he uses over and over.”

The agreement has put OpenAI and Microsoft at the center of a movement that is poised to remake everything from search engines to email applications to online tutors. And all this is happening at a pace that surprises even those who have been tracking this technology for decades.

Amid the frenzy, Mr. Altman is his usual calm self — though he does say he uses ChatGPT to help him quickly summarize the avalanche of emails and documents coming his way.

Mr. Scott of Microsoft believes that Mr. Altman will ultimately be discussed in the same breath as Steve Jobs, Bill Gates and Mark Zuckerberg.

Mr. Altman with Kevin Scott, the chief technology officer at Microsoft, in February. Credit: Ruth Fremson/The New York Times

“These are people who have left an indelible mark on the fabric of the tech industry and maybe the fabric of the world,” he said. “I think Sam is going to be one of those people.”

The trouble is, unlike the days when Apple, Microsoft and Meta were getting started, people are well aware of how technology can transform the world — and how dangerous it can be.

The Man in the Middle

In March, Mr. Altman tweeted out a selfie, bathed by a pale orange flash, that showed him smiling between a blond woman giving a peace sign and a bearded guy wearing a fedora.

The woman was the Canadian singer Grimes, Mr. Musk’s former partner, and the hat guy was Eliezer Yudkowsky, a self-described A.I. researcher who believes, perhaps more than anyone, that artificial intelligence could one day destroy humanity.

The selfie — snapped by Mr. Altman at a party his company was hosting — shows how close he is to this way of thinking. But he has his own views on the dangers of artificial intelligence.

Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, another lab intent on building artificial general intelligence.

He also helped spawn the vast online community of rationalists and effective altruists who are convinced that A.I. is an existential risk. This surprisingly influential group is represented by researchers inside many of the top A.I. labs, including OpenAI. They don’t see this as hypocrisy: Many of them believe that because they understand the dangers more clearly than anyone else, they are in the best position to build this technology.

Mr. Altman believes that effective altruists have played an important role in the rise of artificial intelligence, alerting the industry to the dangers. He also believes they exaggerate these dangers.

As OpenAI developed ChatGPT, many others, including Google and Meta, were building similar technology. But it was Mr. Altman and OpenAI that chose to share the technology with the world.

The OpenAI offices in San Francisco. In its contracts with investors like Microsoft, OpenAI’s board reserves the right to shut the technology down at any time. Credit: Jim Wilson/The New York Times

Many in the field have criticized the decision, arguing that this set off a race to release technology that gets things wrong, makes things up and could soon be used to rapidly spread disinformation. On Friday, the Italian government temporarily banned ChatGPT in the country, citing privacy concerns and worries over minors being exposed to explicit material.

Mr. Altman argues that rather than developing and testing the technology entirely behind closed doors before releasing it in full, it is safer to gradually share it so everyone can better understand risks and how to handle them.

He told me that it would be a “very slow takeoff.”

When I asked Mr. Altman if a machine that could do anything the human brain could do would eventually drive the price of human labor to zero, he demurred. He said he could not imagine a world where human intelligence was useless.

If he’s wrong, he thinks he can make it up to humanity.

He rebuilt OpenAI as what he called a capped-profit company. This allowed him to pursue billions of dollars in financing by promising a profit to investors like Microsoft. But these profits are capped, and any additional revenue will be pumped back into the OpenAI nonprofit that was founded back in 2015.

His grand idea is that OpenAI will capture much of the world’s wealth through the creation of A.G.I. and then redistribute this wealth to the people. In Napa, as we sat chatting beside the lake at the heart of his ranch, he tossed out several figures — $100 billion, $1 trillion, $100 trillion.

If A.G.I. does create all that wealth, he is not sure how the company will redistribute it. Money could mean something very different in this new world.

But as he once told me: “I feel like the A.G.I. can help with that.”
