OpenAI has long asserted that immense computational horsepower in conjunction with reinforcement learning is a necessary step on the road to AGI, or AI that can learn any task a human can [14].
The fathers of AI 2.0, such as Yoshua Bengio and Yann LeCun, argue that AGI cannot be built from current AI technology. They believe we need self-supervised learning (though GPT-2 and GPT-3 are, in fact, self-supervised) and advances grounded in neurobiology [15].
However, the fathers of AI 1.0, the grandfathers of AI, such as Marvin Minsky and John McCarthy, argued that an abundance of knowledge (data) and a “society” of common-sense reasoning specialists was the road to AGI [16].
GPT-3 is an existence proof that scaling up the amount of text (data), the number of parameters (model size), and the training compute results in better accuracy (scary performance) as a specialist on few-shot NLP tasks.
Do the model’s architecture, the model’s size, and the amount of training compute realize a common-sense reasoning specialist? Do data and common-sense reasoning get us to AGI?
Speculation About a Possible Future of Artificial Intelligence
So like, the biggest mistake that I see artificial intelligence researchers making is assuming that they’re intelligent. Yeah they’re not, compared to AI. — Elon Musk [12].
Sixty to sixty-five years ago, one of the first computers filled a room. Sixty years later, a computer core about the size of my head has scaled up roughly 1 billion times (maybe more) over that first computer.
Suppose the first viable quantum computer fills an entire room. Will it be 60 years before a quantum computer core the size of my head scales up about 1 billion times over that first quantum computer?
Maybe.
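The billion-fold scale-up over 60 years can be sanity-checked with a quick calculation. A minimal sketch, assuming the 1-billion-times figure from the text and treating growth as steady exponential doubling:

```python
import math

# Assumed figures from the text: ~1 billion-fold scaling over ~60 years.
growth_factor = 1e9
years = 60

# Number of doublings needed to reach a billion-fold increase.
doublings = math.log2(growth_factor)       # ~29.9 doublings
years_per_doubling = years / doublings     # ~2.0 years per doubling

print(f"{doublings:.1f} doublings, one every {years_per_doubling:.1f} years")
```

A doubling roughly every two years matches the cadence popularly associated with Moore’s law, which is why the author’s “maybe” for quantum hardware is not an unreasonable extrapolation.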
Imagine a quantum computer running an AGI (Artificial General Intelligence) model at a scale of 1 billion times the parameters of GPT-3, or about 3 million times the parameters of the human brain.
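The arithmetic behind that comparison can be sketched as follows. The parameter count for GPT-3 (~175 billion) and the synapse estimate for the human brain (~10^14, a commonly cited rough figure) are assumptions, not stated in the text; published brain estimates vary by several-fold, which is why the result lands in the same few-million range the text cites rather than at exactly 3 million:

```python
# Back-of-the-envelope check of the scale comparison above.
gpt3_params = 175e9                      # assumed: GPT-3 has ~175 billion parameters
scaled_params = gpt3_params * 1e9        # a billion-fold scale-up -> 1.75e20

brain_synapses = 1e14                    # assumed: rough estimate of human brain synapses
ratio_to_brain = scaled_params / brain_synapses

print(f"scaled model: {scaled_params:.2e} parameters")
print(f"~{ratio_to_brain:.1e} times the brain's synapse count")
```

With these assumptions the scaled model comes out on the order of a couple of million times the brain’s synapse count, consistent with the text’s “about 3 million times” given the uncertainty in the brain estimate.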
“I’ve predicted that in 2029, we will pass the Turing test,” stated Ray Kurzweil [11].
Note: GPT-3 is very close to passing the Turing test [13].
GPT-3 is quite impressive in some areas, and still clearly subhuman in others. — Kevin Lacker, July 2020.
Do you think we will have a Hawking-Musk nightmare or a Havens-Kurzweil dream [11,12]?
We may have both, or we may have neither.
I would put money on our tool-making. I doubt we will, or should, change this behavior.
I feel that Elon Musk, with the Neuralink project, is betting on our tool-making when it comes to the future potential of AI [17].