AGI refers to a form of artificial intelligence that can perform any intellectual task a human can. For years, OpenAI has been working to research and develop AGI that is safe and benefits all of humanity.
“I think it’s not a super useful term,” Altman told CNBC’s “Squawk Box” last week, when asked whether the company’s latest GPT-5 model moves the world any closer to achieving AGI. The AI entrepreneur has previously said he thinks AGI could be developed in the “reasonably close-ish future.”
The problem with AGI, Altman said, is that there are multiple definitions being used by different companies and individuals. One definition is an AI that can do “a significant amount of the work in the world,” according to Altman — however, that has its issues because the nature of work is constantly changing.
“I think the point of all of this is it doesn’t really matter and it’s just this continuing exponential of model capability that we’ll rely on for more and more things,” Altman said.
Altman isn’t alone in expressing skepticism about “AGI” and how people use the term.
Difficult to define
Nick Patience, vice president and AI practice lead at The Futurum Group, told CNBC that though AGI is a “fantastic North Star for inspiration,” on the whole it’s not a helpful term.
“It drives funding and captures the public imagination, but its vague, sci-fi definition often creates a fog of hype that obscures the real, tangible progress we’re making in more specialised AI,” he said via email.
OpenAI and other startups have raised billions of dollars and attained dizzyingly high valuations with the promise that they will eventually reach a form of AI powerful enough to be considered “AGI.” OpenAI was last valued by investors at $300 billion and it is said to be preparing a secondary share sale at a valuation of $500 billion.
Last week, the company released GPT-5, its latest large language model for all ChatGPT users. OpenAI said the new system is smarter, faster and “a lot more useful” — especially when it comes to writing, coding and providing assistance on health care queries.
But the launch led to criticisms from some online that the long-awaited model was an underwhelming upgrade, making only minor improvements on its predecessor.
“By all accounts it’s incremental, not revolutionary,” Wendy Hall, professor of computer science at the University of Southampton, told CNBC.
AI firms “should be forced to declare how they measure up to globally agreed metrics” when they launch new products, Hall added. “It’s the Wild West for snake oil salesmen at the moment.”
A distraction?
For his part, Altman has admitted OpenAI’s new model misses the mark of his own personal definition of AGI, as the system is not yet capable of continuously learning on its own.
While OpenAI still maintains artificial general intelligence as its ultimate goal, Altman has said it’s better to talk about levels of progress toward this state of general intelligence rather than asking if something is AGI or not.
“We try now to use these different levels … rather than the binary of, ‘is it AGI or is it not?’ I think that became too coarse as we get closer,” the OpenAI CEO said during a talk at the FinRegLab AI Symposium in November 2024.
Altman still expects AI to achieve some key breakthroughs in specific fields — such as new math theorems and scientific discoveries — in the next two years or so.
“There’s so much exciting real-world stuff happening, I feel AGI is a bit of a distraction, promoted by those that need to keep raising astonishing amounts of funding,” Futurum’s Patience told CNBC.
“It’s more useful to talk about specific capabilities than this nebulous concept of ‘general’ intelligence.”