Upon its release, ChatGPT immediately drew praise from tech experts and the media as “mind blowing” and the “next big disruptor,” while a recent Microsoft report praised GPT-4, the latest iteration of OpenAI’s tool, for its ability to solve novel and difficult tasks with “human-level performance” in advanced fields such as coding, medicine, and law. Google responded to the competition by launching its own AI-based chatbot and service, Bard.
On the flip side, ChatGPT has been roundly criticized for its inability to answer simple logic questions or work backwards from a desired solution to the steps needed to achieve it. Teachers and school administrators voiced fears that students would use the tool to cheat, while political conservatives complained that ChatGPT generates answers with a liberal bias. Elon Musk, Apple co-founder Steve Wozniak, and others signed an open letter recommending a six-month pause in AI development, noting that “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
The one factor missing from virtually all these comments – regardless of whether they regard ChatGPT as a huge step forward or a threat to humanity – is a recognition that, no matter how impressive, ChatGPT merely gives the illusion of understanding. It manipulates symbols and code samples pulled from the Internet without any grasp of what they mean. And because it has no true understanding, it is neither good nor bad. It is simply a tool that humans can direct toward particular outcomes, depending on the intentions of its users.
It is that difference that distinguishes ChatGPT, and all other AI for that matter, from AGI – artificial general intelligence, defined as the ability of an intelligent agent to understand or learn any intellectual task that a human can. While ChatGPT undoubtedly represents a major advance in self-learning AI, it is important to recognize that it only seems to understand. Like all other AI to date, it is completely reliant on datasets and machine learning. ChatGPT simply appears more intelligent because it depends on bigger and more sophisticated datasets.
While some experts continue to argue that at some point in the future AI will morph into AGI, that outcome seems highly unlikely. Because today’s AI is entirely dependent on massive datasets, there is no way to create a dataset big enough for the resulting system to cope with completely unanticipated situations. In short, AI has no common sense, and we simply can’t store enough examples to handle every possible situation. Further, AI, unlike humans, is unable to merge information from multiple senses: while it might be possible to stitch language and image processing applications together, researchers have not found a way to integrate them in the same seamless way that a child integrates vision, language, and hearing.
For today’s AI to advance to something approaching real human-like intelligence, it must have three essential components of consciousness: an internal mental model of its surroundings with the entity itself at the center; a perception of time that allows it to predict future outcomes based on current actions; and an imagination, so that multiple potential actions can be considered, their outcomes evaluated, and the best one chosen. Just like the average three-year-old child, it must be able to explore, experiment, and learn about real objects, interpreting everything it knows in the context of everything else it knows.
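To make those three components concrete, the toy sketch below gives an agent a world model, a one-step prediction of the future, and an “imagination” loop that evaluates several candidate actions before choosing one. Every name and mechanism here is invented for illustration; it is not a description of any actual AGI design.

```python
class SimpleAgent:
    """Toy illustration of the three components named above.
    Everything here is invented for this sketch, not a real AGI design."""

    def __init__(self):
        # 1. Internal mental model of the surroundings, with the agent at the center.
        self.world_model = {"agent_position": 0, "goal_position": 5}

    def predict(self, action):
        # 2. Perception of time: project the effect of an action one step ahead.
        predicted = dict(self.world_model)
        predicted["agent_position"] += action
        return predicted

    def imagine_and_choose(self, candidate_actions):
        # 3. Imagination: consider several possible actions, evaluate each
        #    predicted outcome, and choose the one that looks best.
        def score(outcome):
            return -abs(outcome["goal_position"] - outcome["agent_position"])
        return max(candidate_actions, key=lambda action: score(self.predict(action)))


agent = SimpleAgent()
print(agent.imagine_and_choose([-1, 0, 1]))  # prints 1: the step that moves toward the goal
```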
To get there, researchers must shift from their reliance on ever-expanding datasets to a more biologically plausible system modeled on the human brain, with algorithms that enable it to build abstract “things” with limitless connections and context.
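One way to picture such a structure (again, purely as an illustration with invented names, not a description of any existing system) is a graph of “things” whose meaning lives entirely in their connections to other things rather than in stored examples:

```python
from collections import defaultdict


class Thing:
    """An abstract 'thing': a node whose meaning is carried entirely by its
    connections to other things, not by stored examples (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.links = defaultdict(list)  # relationship label -> connected Things

    def link(self, relation, other):
        self.links[relation].append(other)


ball, red, toy = Thing("ball"), Thing("red"), Thing("toy")
ball.link("has-property", red)   # a ball can be red...
ball.link("is-a", toy)           # ...and is a kind of toy

# Interpreting 'ball' means following its links into everything else the system knows.
print([t.name for t in ball.links["has-property"]])  # ['red']
```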
While we know a fair amount about the brain’s structure, we still don’t know what fraction of our DNA defines the brain, or even how much defines the structure of its neocortex, the part of the brain we use to think. But if we presume that generalized intelligence is a direct outgrowth of the structure defined by our DNA, and that this structure could be defined by as little as one percent of that DNA, then it is clear that the emergence of AGI depends not on more computing power or larger datasets but on discovering what to write as the fundamental AGI algorithms.
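A back-of-the-envelope calculation shows why that one-percent supposition matters. The genome figures below are well established; the one-percent share is the supposition from the paragraph above. The information involved is measured in megabytes, not in the petabytes of today’s training datasets.

```python
# Rough arithmetic: the genome size is well established; the one-percent
# brain-defining share is the supposition made in the text.
base_pairs = 3.1e9             # approximate size of the human genome
bits_per_base_pair = 2         # four possible bases -> 2 bits of information each
genome_megabytes = base_pairs * bits_per_base_pair / 8 / 1e6

print(round(genome_megabytes))            # ~775 MB for the entire genome
print(round(genome_megabytes * 0.01, 1))  # ~7.8 MB if only 1% defines the brain
```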
With that in mind, it seems highly likely that a broader context, one actually capable of understanding and of learning gradually, could emerge if today’s AI systems were built on a common underlying data structure that allowed their algorithms to interact with one another. As these systems became more advanced, they would slowly begin to work together, creating a more general intelligence that approaches the threshold of human-level intelligence and enabling AGI to emerge. To make that happen, though, our approach must change. Bigger and better datasets don’t always win the day.
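As a closing illustration of what a “common underlying data structure” might look like in practice (a deliberately minimal sketch; the store and its triple format are invented for this example), here two otherwise separate modules, one handling images and one handling language, write into the same store, so each one’s knowledge becomes context for the other:

```python
class SharedStore:
    """Hypothetical common substrate that separate AI modules read from and write to
    (the store and its triple format are invented for this illustration)."""

    def __init__(self):
        self.facts = set()  # (subject, relation, object) triples

    def add(self, subject, relation, obj):
        self.facts.add((subject, relation, obj))

    def about(self, subject):
        return {fact for fact in self.facts if fact[0] == subject}


store = SharedStore()
store.add("object-17", "looks-like", "cup")   # contributed by an image-processing module
store.add("cup", "used-for", "drinking")      # contributed by a language module

# Knowledge from one module now provides context for the other.
print(store.about("cup"))  # {('cup', 'used-for', 'drinking')}
```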