Elon Musk and a number of prominent AI researchers have signed an open letter urging AI labs around the world to pause the development of large-scale AI systems, citing what the signatories describe as profound risks to society and humanity.
The open letter, issued by the non-profit Future of Life Institute, warns that AI labs are locked in an “out-of-control race” to develop and deploy machine learning systems that “no one, not even their creators, can understand, predict, or reliably control.”
“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the letter reads. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
A number of well-known AI researchers and CEOs, including Stuart Russell, Yoshua Bengio, Gary Marcus, and Emad Mostaque, have signed the letter, along with author Yuval Noah Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, politician Andrew Yang, and others. The list of signatories is long, though there have been reports that some names were added to it as a joke.
For now, it is difficult to tell how much impact the letter will have on the current state of AI development, especially as tech giants like Google and Microsoft pour effort and investment into releasing new AI products, often disregarding previously acknowledged concerns about ethics and safety. But it reflects growing hostility toward the “ship it now and fix it later” approach, one that may eventually be politicized and debated by legislators.
Though these AI tools have expanded possibilities in ways few imagined, the concerns raised about them point to a need for regulation and standards. Even OpenAI has acknowledged the potential need for an “independent review” of future AI systems to make sure they adhere to safety requirements, as noted in the letter. According to the signatories, that time has come.
The open letter states: “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”