12 Days: Is ChatGPT Artificial General Intelligence or Not?
On the First Day of Broadband, my true love sent to me: One Artificial General Intelligence
Drew Clark, Jericho Casper
December 21, 2023 – Just over one year ago, most people in the technology and internet world would talk about passing the Turing test as if it were something far in the future.
This “test,” originally called the imitation game by computer scientist Alan Turing in 1950, is a hypothetical measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
The year 2023 – and the explosive economic, technological, and societal force unleashed by OpenAI since the release of its ChatGPT on November 30, 2022 – makes those days, only 13 months ago, seem quaint.
For example, users of large language models like ChatGPT, Anthropic’s Claude, Meta’s Llama and many others interact daily with machines as if they were simply very smart humans.
Yes, yes, informed users understand that chatbots like these are simply using neural networks with very powerful predictive algorithms to come up with the probabilistic “next word” in the sequence begun by the questioner’s inquiry. And, yes, users understand the propensity of such machines to “hallucinate” information that isn’t quite accurate, or isn’t accurate at all.
Which makes the chatbots seem, well, a little bit more human.
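For readers who want a concrete picture of that “probabilistic next word” idea, here is a minimal toy sketch in Python. Everything in it – the tiny vocabulary, the made-up probabilities, the function name – is invented for illustration; real large language models compute these distributions with neural networks over vocabularies of tens of thousands of tokens, not lookup tables.

```python
import random

# Toy illustration only: next-word prediction as sampling from a
# probability distribution conditioned on the words seen so far.
# The vocabulary and probabilities below are invented for this example.
NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
}

def predict_next_word(context):
    """Sample the next word given the two preceding words."""
    probs = NEXT_WORD_PROBS[context]
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# "the cat" is most likely followed by "sat" – but not always, which is
# one reason the same prompt can yield different answers each time.
print(predict_next_word(("the", "cat")))
```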
Drama at OpenAI
At a Broadband Breakfast Live Online event on November 22, 2023, marking the one-year anniversary of ChatGPT’s public launch, our expert panelists focused on the regulatory uncertainty posed by a much-accelerated form of artificial intelligence.
The event took place days after Sam Altman, CEO of OpenAI, was fired – and before he rejoined the company that Wednesday with a new board of directors. The board members who forced Altman out (all replaced, except one) had clashed with him over the company’s safety efforts.
More than 700 OpenAI employees then signed a letter threatening to quit if the board did not agree to resign.
In the backdrop, in other words, there was a policy angle behind the corporate boardroom battle that was in itself one of the big tech stories of the year.
“This [was] accelerationism versus de-celerationism,” said Adam Thierer, a senior fellow at the R Street Institute, during the event.
Washington and the FCC wake up to AI
And it’s not that Washington is closing its eyes to the potentially life-altering – literally – consequences of artificial intelligence.
In October, the Biden administration issued an executive order on AI safety that includes measures aimed at both ensuring safety and spurring innovation, with directives for federal agencies to develop safety and AI identification standards, as well as grants for researchers and small businesses looking to use the technology.
But it’s not clear which side legislators on Capitol Hill might take in the future.
One notable application of AI in telecom highlighted by FCC chief Jessica Rosenworcel is AI-driven spectrum sharing optimization. Rosenworcel said in a July hearing that AI-enabled radios could collaborate autonomously, enhancing spectrum use without a central authority, an advancement poised for implementation.
The FCC initially regarded AI as having strong potential to aid broadband mapping, and the technology’s possible contribution to those efforts was explored at a November House hearing. But AI faced skepticism from experts, who argued that machine learning would struggle to identify potential inaccuracies in rural areas where data is scarce and of inferior quality.
Also in November, the FCC voted to launch a formal inquiry into the potential impact of AI on robocalls and robotexts. The agency believes that illegal robocalls can be addressed with AI, which can flag calling patterns deemed suspicious and analyze voice biometrics to detect synthesized voices.
But isn’t ChatGPT a form of artificial general intelligence?
As we’ve learned through an intensive focus on AI over the course of the year, somewhere still beyond passing the Turing test lies the vaunted concept of “artificial general intelligence,” presumably something a little bit smarter than ChatGPT-4.
Previously, OpenAI had defined AGI as “AI systems that are generally smarter than humans.” But sometime recently, the company apparently redefined this to mean “a highly autonomous system that outperforms humans at most economically valuable work.”
Some, including Rumman Chowdhury, CEO of the tech accountability nonprofit Humane Intelligence, argue that by framing AGI in economic terms, OpenAI recast its mission as building things to sell – a far cry from its original vision of using intelligent AI systems to benefit all.
AGI, as ChatGPT-4 told this reporter, “refers to a machine’s ability to understand, learn, and apply its intelligence to solve any problem, much like a human being. ChatGPT, while advanced, is limited to tasks within the scope of its training and programming. It excels in language-based tasks but does not possess the broad, adaptable intelligence that AGI implies.”
That sounds like something an AGI-capable machine would very much want the world to believe.
Additional reporting by Jericho Casper.