ChatGPT’s creators predict that artificial intelligence (AI) may outstrip human capabilities within the next decade, with an emerging “superintelligence” more powerful than any technology that has come before it.

According to the creators of ChatGPT, AI could surpass human abilities in many fields within the next decade, and the resulting “superintelligence” would be more powerful and capable than any existing technology.

In a recent blog post, the cofounders of OpenAI, the developer behind ChatGPT, including CEO Sam Altman, wrote that AI could exceed the “expert skill level” of humans in most domains and carry out as much productive activity as one of today’s largest corporations.

“Superintelligence will be more powerful than other technologies humanity has had to contend with in the past,” the OpenAI executives said. “We can have a dramatically more prosperous future; but we have to manage risk to get there.”

Since the launch of ChatGPT, industry leaders have grown increasingly concerned about the impact powerful AI could have on society, warning that it could displace jobs, spread misinformation, and facilitate criminal activity. They have stressed the need to weigh AI’s ethical implications carefully in order to mitigate those risks.

The release of generative AI tools like ChatGPT has also fueled concerns about an AI arms race, with companies such as Microsoft and Google now in direct competition with one another.

These concerns have prompted calls for AI regulation. In the blog post, OpenAI’s leaders stressed the importance of proactively managing AI’s potential harms, given the possibility of existential risk.

In the blog post, Altman and his cofounders proposed establishing an organization similar to the International Atomic Energy Agency to oversee AI development beyond a certain capability threshold, with measures such as audits and safety-compliance tests to ensure AI is developed and used responsibly.

OpenAI did not immediately respond to Insider’s request for comment made outside of normal working hours.