Prominent Figures Unite to Call for Ban on AI Superintelligence Development

By Lucas Nguyen

A diverse group of public figures, led by Prince Harry and Meghan, the Duchess of Sussex, has signed an open letter calling for a moratorium on the development of artificial general intelligence until certain safety protocols have been established. Released by the nonprofit Future of Life Institute, the letter emphasizes the need for human-centered AI advancement and expresses concern about the potential risks posed by unchecked technological growth.

The letter’s core statement, just 30 words long, outlines the group’s position clearly: it calls for stringent regulatory measures before proceeding with AI superintelligence. Other prominent signers include technology industry leaders and influential public figures, among them Apple co-founder Steve Wozniak, British billionaire Richard Branson, former U.S. Joint Chiefs of Staff Chairman Mike Mullen, Democratic foreign policy expert Susan Rice, former Irish President Mary Robinson, actor Stephen Fry, and musician will.i.am.

Prince Harry contributed a personal note to the letter, stating, “The future of AI should serve humanity, not replace it. I believe the true test of progress will be not how fast we move, but how wisely we steer. There is no second chance.” His remarks highlight the need to put people first as AI technologies develop at an exponential pace.

Joseph Gordon-Levitt, actor and founder of the collaborative media platform HitRecord, brought a practical, real-world perspective to the heated discussion around AI. He raised critical questions about the technology’s implications for society: “Does AI need to imitate humans, groom our kids, turn us all into slop junkies and make zillions of dollars serving ads? Most people don’t want that.” His comments reflect a deeper worry that many of us share about what this all means for the ethical landscape of AI development.

Stuart Russell, a prominent AI researcher, clarified that the letter’s intent is not to impose a traditional ban or moratorium. Instead, he described it as an initiative to lay the groundwork for safety standards for a technology that could pose grave existential dangers to humanity. “It’s not an extreme request. It’s just a plan to make sure basic safety measures are the standard for a technology that its developers claim has a non-zero probability of bringing about human extinction. Is that really too much to ask?” he declared.

The nonprofit’s initiative has drawn a flurry of interest as concerns mount over the pace at which powerful new AI tools are being developed. The signatories hope to spur a dialogue with major tech companies such as Google, OpenAI, and Meta Platforms about the ethical obligations that accompany their new technologies. The collective effort reflects a shift in public discourse, with influential voices from various sectors calling for accountability in technology development.

Max Tegmark, an AI researcher at MIT and one of the founders of the Future of Life Institute, noted how fast the landscape of AI criticism is shifting. “In the past, it’s mostly been the nerds versus the nerds,” he remarked, pointing to how concerns about the potential dangers of AI are now emerging from well beyond the research community. “I feel what we’re really seeing here is how the criticism has gone very mainstream,” Tegmark added.

Conversations about the ethics of AI are growing louder, and for good reason. This collective appeal from influential leaders represents an important opportunity for policymakers and technology industry leaders to ensure that breakthroughs in AI are breakthroughs for the public good, with human safety always taking priority over commercial imperatives.
