Mustafa Suleyman remembers the epochal moment he grasped artificial intelligence’s potential. It was 2016 — Paleolithic times by A.I. standards — and DeepMind, the company he had co-founded that was acquired by Google in 2014, had pitted its A.I. system, AlphaGo, against a world champion of Go, the confoundingly difficult strategy game. AlphaGo zipped through thousands of permutations, making fast work of the hapless human. Stunned, Suleyman realized the machine had “seemingly superhuman insights,” he says in his book on A.I., “The Coming Wave.”
The result is no longer stunning — but the implications are. Little more than a year after OpenAI’s ChatGPT software helped bring generative A.I. into the public consciousness, companies, investors and regulators are grappling with how to shape the very technology designed to outsmart them. The exact risks of the technology are still being debated, and the companies that will lead it are yet to be determined. But one point of agreement: A.I. is transformative. “The level of innovation is very hard for people to imagine,” said Vinod Khosla, founder of the Silicon Valley venture capital firm Khosla Ventures, which was one of the first investors in OpenAI. “Pick an area: books, movies, music, products, oncology. It just doesn’t stop.”
If 2023 was the year the world woke up to A.I., 2024 might be the year in which its legal and technical limits will be tested, and perhaps breached. DealBook spoke with A.I. experts about the real-world effects of this shift and what to expect next year.
Judges and lawmakers will increasingly weigh in. The flood of A.I. regulations in recent months is likely to come under scrutiny. That includes President Biden’s executive order in October, which, if codified by Congress, could compel companies to ensure that their A.I. systems cannot be used to make biological or nuclear weapons, to embed watermarks on A.I.-generated content, and to disclose foreign clients to the government.
At the A.I. Safety Summit in Britain in November, 28 countries, including China — though not Russia — agreed to collaborate to prevent “catastrophic risks.” And in marathon negotiations in December, the E.U. drafted one of the world’s first comprehensive attempts to limit the use of artificial intelligence, which, among other provisions, restricts facial recognition and deep fakes and defines how businesses can use A.I. The final text is due out in early 2024, and the bloc’s 27 member countries hope to approve it before European Parliament elections in June.
With that, Europe might effectively create global A.I. rules, requiring any company that does business in its market, of 450 million people, to cooperate. “It makes life tough for innovators,” said Matt Clifford, who helped organize the A.I. summit in Britain. “They have to think about complying with a very long list of things people in Brussels are worried about.”
There are plenty of concerns, including about A.I.’s potential to replace large numbers of jobs and to reinforce existing racial biases.
Some fear overloading A.I. businesses with regulations. Clifford believes existing fraud and consumer-protection laws make some portions of Europe’s legislation, the A.I. Act, redundant. But the E.U.’s lead architect, Dragos Tudorache, said that Europe “wasn’t aiming to be global regulators,” and that he maintained close dialogue with members of the U.S. Congress during the negotiations. “I am convinced we have to stay in sync as much as possible,” he said.
Governments have good reason to address A.I.: Even simple tools can serve dark purposes. “The microphone enabled both the Nuremberg rallies and the Beatles,” wrote Suleyman, who is now the chief executive of Inflection AI, a start-up he co-founded last year with Reid Hoffman, a co-founder of LinkedIn. He fears that A.I. could become “uncontained and uncontainable” once it outsmarts humans. “Homo technologicus could end up being threatened by its own creation.”
A.I. capabilities will soar. It’s hard to know when that tipping point might arrive. Jensen Huang, the co-founder and chief executive of Nvidia, whose dominance of the A.I. chip market has helped its share price more than triple since Jan. 1, told the DealBook Summit in late November that “there’s a whole bunch of things that we can’t do yet.”
Khosla believes the key A.I. breakthrough in 2024 will be “reasoning,” allowing machines to produce far more accurate results, and that in 2025, “A.I. will win in reasoning against intelligent members of the community.” A.I. machines will be steadily more capable of working through several logical steps, and performing probabilistic thinking, such as identifying a disease based on specific data, Khosla said.
Exponential growth in computational power, which hugely increases the capability of A.I. machines, factors into those predictions. “In 2024, it will be between 10 and 100 times more than current-day models,” Clifford said. “We don’t actually know what kind of innovations that’s going to result in.”
One new tool could be generative audio that allows users to deliver speeches in, say, Biden’s voice or to generate rap songs, opera or Beethoven’s nonexistent 10th symphony. DeepMind and YouTube have partnered with musicians to create A.I. tools allowing artists to insert instruments, transform musical styles or compose a melody from scratch.
Billions in investments will be needed. None of this will come cheap, and the question now is which companies will be able to build truly sustainable A.I. businesses. Of 175,072 A.I. patents filed between 2012 and 2022, more than half were filed in the last three years, according to Deutsche Bank. “The time is ripe for an explosion of A.I. innovation,” the bank predicted last May. In 2024 and 2025, it expects sharp increases in companies using A.I. for human resources, marketing, sales and product development. That is already happening: Law firms, for example, have begun using A.I.-generated contracts, saving lawyers hours of work.
As those innovations roll out, fund-raising has ramped up. The French A.I. start-up Mistral AI — considered a European contender to OpenAI — raised more than half a billion dollars in 2023. More than $200 million came from the Silicon Valley venture capital giant Andreessen Horowitz in a funding round that valued Mistral, just seven months old, at $2 billion.
But that might not be enough to create a general-purpose A.I. system of the kind that powers ChatGPT and that Mistral has in mind. “It’s becoming clear the vast sums of money you need to be competitive,” Clifford said. “If you want to build a general-purpose model, it may be that the amount of capital needed is so great, it makes it very tricky for traditional venture capital.”
The story could be different for A.I. tools that serve a specific purpose, a category that spawned hundreds of start-ups in 2023. After a sharp downturn last year, A.I. venture funding is rising fast, with most of it invested in U.S. companies. Khosla said that this year he had backed 30 A.I. start-ups, including in India, Japan, Britain and Spain, companies that he said “are not afraid of the Big Tech guy.” He expects A.I. funding to continue rising through at least 2024. “Every country wants to be in the game,” he said. “That will accelerate the money flow, and the number of start-ups will keep accelerating.”