The most talked-about AI chatbot, ChatGPT, has taken the world by storm of late, and search engine giants have begun releasing AI chatbot competitors of their own. People can now create images, audio, and video from scratch using AI. These advances have pushed the field of AI forward at an unprecedented pace.
However, AI experts and companies are now setting their sights on artificial general intelligence (AGI), which could lead to the development of AI systems that are just as intelligent as humans – if not more.
What is Artificial General Intelligence (AGI)?
In simple terms, AGI refers to the idea of creating an AI system capable of achieving a level of intelligence comparable to or greater than that of humans. OpenAI, the company behind ChatGPT and DALL·E 2, defined AGI as: “highly autonomous systems that outperform humans at most economically valuable work.” These systems would be able to solve complex problems, adapt to new environments, and improve their knowledge and skills on their own.
The possibilities that AGI presents are truly exciting but frightening at the same time, as the risks are massive. While AGI systems could make our lives easier by performing tasks beyond human capabilities, they could also pose a serious threat to our existence if they are not regulated properly. There is also the concept of artificial superintelligence (ASI), which imagines systems with intellectual capabilities far greater than any human’s. While some experts believe such technologies will never be achieved, others believe they could arrive by the end of the decade.
The implications of AGI and ASI are mind-boggling. Imagine an AI system that can perform complex surgeries with precision, make accurate predictions in the stock market, or even solve the most complicated mathematical equations with ease. However, as we venture further into the realm of AGI and ASI, we must not ignore the risks involved. There’s a real possibility that we may end up creating AI systems that could be unethical and even uncontrollable.
Will GPT-5 achieve AGI?
The recent introduction of OpenAI’s ChatGPT has undeniably been a significant milestone on the journey to AGI. GPT-4, the latest version of the model that powers the chatbot, was launched just a month ago, but OpenAI is not stopping there.
Siqi Chen, a developer and renowned AI investor, recently tweeted that GPT-5 is set to complete its training by the end of this year and achieve AGI. If Chen’s claim is accurate and GPT-5 does achieve AGI, it will have beaten even the most optimistic timelines.
But some critics suggest that GPT will never reach AGI status. Developer Harrison Kinsley recently expressed scepticism via Twitter, stating that neither GPT-5 nor any other GPT model would achieve AGI, because these models are trained with gradient descent: an optimisation method that incrementally improves a model against its fixed training objective, which he argues makes AGI highly unlikely to emerge.
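For context, gradient descent is the optimisation procedure behind GPT training: a model’s parameters are repeatedly nudged in the direction that reduces a loss function. A minimal sketch in Python follows; it is illustrative only, using a toy quadratic loss of our own choosing rather than anything resembling OpenAI’s actual training code.

```python
# Minimal gradient descent sketch (illustrative, not GPT's real training loop).
# We minimise a toy quadratic loss f(w) = (w - 3)^2, whose gradient is 2*(w - 3).

def loss(w: float) -> float:
    return (w - 3.0) ** 2

def grad(w: float) -> float:
    return 2.0 * (w - 3.0)

w = 0.0              # initial parameter value
learning_rate = 0.1  # step size

for step in range(100):
    w -= learning_rate * grad(w)  # step downhill along the gradient

print(f"w = {w:.4f}, loss = {loss(w):.6f}")  # w converges toward 3.0
```

The point critics like Kinsley make is that this kind of incremental loss minimisation only ever makes the model better at its training objective (for GPT, predicting the next token), which they argue is not the same thing as general intelligence.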
Why experts are worried about AGI
The CEO of OpenAI, Sam Altman, has issued a stern warning about the potential dangers of artificial general intelligence, stating that it could bring massive risks such as misuse, drastic accidents, and societal disruption. Altman has even gone as far as to suggest that AGI could arrive within the next decade, a timeframe that seems more and more plausible by the day.
In a recent blog post, Altman wrote that the first AGI will be just a point along the continuum of AI progress, with progress continuing from there. He went on to state that the rate of progress in AI over the past decade could potentially be sustained for a long period of time, leading to a world extremely different from the one we currently know. The risks that come with this, he argues, could be akin to an existential threat.
But it’s not just Altman who is sounding the alarm. Elon Musk recently spoke about experiencing “existential angst” over the prospect of AGI. Even so, Musk believes the anxiety is worth it in the long run, stating: “But, all things considered with regard to AGI existential angst, I would prefer to be alive now to witness AGI than be alive in the past and not.”
In a paper published by Monash University and the University of Twente, researchers wrote that the need for victory in war could require handing over control of armies to machines, going so far as to suggest that victory may be determined by which force has the better AI. This raises questions about the ethics of giving machines the power to make decisions that could result in thousands, if not millions, of human deaths.
While the thought of a dystopian future controlled by supercomputers may seem daunting, many experts believe it can’t be stopped. As Max Tegmark, a physicist at MIT, put it: “It’s pretty inevitable that it’s [AGI] going to happen unless we humans wipe ourselves out first by other means… Just as it was easier to build airplanes than figure out how birds fly, it’s probably easier to build AGI than figure out how brains work.”