Elon Musk: experts call for a halt in AI development.

30.03.2023

Elon Musk

More than 1,100 people, including Elon Musk and Steve Wozniak, have signed an open letter asking all AI labs to immediately suspend training of AI systems more powerful than GPT-4 for six months. The letter draws attention to the need to develop robust protocols before we enter the next phase of AI development.

This brings us to the question, why do we need such protocols?

Over the past few months, the development of AI has taken a giant leap due to the competition caused by the soaring popularity of ChatGPT.
The capabilities of AI systems like GPT-4 have stunned everyone. But it has also created a number of new problems.
With companies racing to build ever more advanced systems, there is seemingly no limit to how far this will go.

Security protocols must be verified by independent experts.

In the letter, the signatories ask stakeholders to “develop and implement” common security protocols for advanced AI developments. These security protocols should be “rigorously audited and monitored by independent outside experts,” they said.
The letter calls for governments to step in and impose a moratorium if all major stakeholders do not halt AI development.

AI has two sides: The good and the bad

We’ve all seen what an advanced AI chatbot like ChatGPT can do. It can do everything from writing essays to coding to ordering food on your behalf (with plugins).
However, we’ve seen the other side of things. The side that is rife with misinformation, deep fakes and cyberattacks.
It is this darker side that prompted the letter.

AI models can become fake news factories

One of the biggest problems with artificial intelligence systems is their potential to become factories for fake news. Their ability to mimic human writing can produce a barrage of misinformation, because AI does not verify the truth of what it generates.
Because AI systems are trained on large amounts of human-generated data, there is also a high probability that they will inherit human biases. Both misinformation and prejudice can cause real harm.

AI becoming sentient is a serious problem

We all remember when Google fired Blake Lemoine, the engineer who worked on LaMDA, for claiming that the AI had become sentient.
Google and AI experts were quick to dismiss Lemoine's claims. Many believe sentient AI is impossible because we don't yet have the proper infrastructure.
However, given the pace of AI development, it may happen sooner than we imagine.

Fraudsters can use AI systems to defraud people

The development of AI has been a boon to many. One group that has benefited greatly is scammers.
Scammers use ChatGPT to craft sophisticated phishing emails that lure people into traps. ChatGPT and other advanced AI systems can also generate hyper-realistic phone scripts that scammers use to impersonate customer service representatives and gain access to sensitive information.

AI blurs the line between real and fake

The problems of generative AI are not limited to text-generating AI. AI systems capable of generating images and videos have brought a new dimension to fake images and videos.
The constant evolution of technology has led to more realistic fakes than ever before. The potential for abuse here is limitless.
From fake news to fake products, it can all lead to a number of problems for humanity.

Should the development of advanced artificial intelligence systems be halted?

So far, we’ve seen the problems that advanced AI systems can cause. Does this mean we need to suspend the development of AI systems more powerful than GPT-4?
What is needed is for all stakeholders to discuss and reach a consensus on what must be done to make AI systems safe.
But a long pause could also hinder innovation and progress.
