The concept of Artificial General Intelligence (AGI) is generating significant discussion these days, even though there’s no consensus on its precise definition. While some experts argue that AGI is still centuries away, relying on advancements that remain beyond our current comprehension, Google DeepMind believes it could be realized by 2030 and is taking steps to ensure safety in its development.
Disagreements within the scientific community on such complex topics are expected, and it is indeed prudent to prepare for both the short-term and long-term implications. However, the prospect of having AGI within five years is quite startling.
Presently, the most notable “frontier AI” projects available are all large language models (LLMs): sophisticated tools for generating text and images. For instance, ChatGPT struggles with mathematical problems, and many models falter at following instructions or accurately revising their own outputs. Anthropic’s Claude still hasn’t managed to beat Pokémon, which underscores that, for all their remarkable linguistic ability, these models remain heavily shaped by subpar training data and often exhibit problematic tendencies.
It is hard to envision the leap from our existing AI systems to one that, in DeepMind’s words, can demonstrate talents comparable to those of “the 99th percentile of skilled adults.” In other words, DeepMind expects AGI to possess intelligence that matches or surpasses that of the top 1% of humans.
This leads to the question: What dangers could a superintelligent AGI pose, in DeepMind’s assessment?