The Duke and Duchess of Sussex Join Tech Visionaries in Calling for Prohibition on Superintelligent Systems
Prince Harry and Meghan Markle have teamed up with AI experts and Nobel laureates to push for a complete ban on developing superintelligent AI systems.
The royal couple are among the signatories of an influential declaration calling for “a prohibition on the development of artificial superintelligence”. Superintelligent AI refers to systems that would exceed human cognitive abilities in every intellectual domain; no such technology has yet been built.
Primary Requirements in the Declaration
The declaration insists that the ban should remain in place until there is “widespread expert agreement” on developing ASI “safely and controllably” and once “strong public buy-in” has been secured.
Prominent figures who endorsed the statement include Nobel Prize recipient Geoffrey Hinton and his fellow “godfather” of modern AI, Yoshua Bengio; a veteran Silicon Valley tech entrepreneur; Virgin founder Richard Branson; former US national security adviser Susan Rice; former Irish president Mary Robinson; and the UK writer Stephen Fry. Other signatories include Nobel laureates in physics and economics, an astrophysicist, and Beatrice Fihn, who led the Nobel Peace Prize-winning International Campaign to Abolish Nuclear Weapons.
Behind the Movement
The statement, addressed to national leaders, technology companies and policymakers, was organized by the Future of Life Institute (FLI), a US-based AI safety group that in 2023 demanded a pause in the development of powerful AI systems, shortly after the launch of ChatGPT made AI a worldwide public talking point.
Industry Perspectives
In recent months, Mark Zuckerberg, chief executive of Facebook's parent company Meta, claimed that the development of superintelligent AI was “approaching reality”. However, some experts counter that talk of superintelligence reflects competitive positioning among tech companies that have poured hundreds of billions of dollars into AI in recent years, rather than any imminent technical breakthrough.
Potential Risks
FLI argues that the prospect of ASI being achieved “in the coming decade” presents numerous risks, from the elimination of human jobs and losses of civil liberties to national security threats and even the existential risk of human extinction. The deepest concerns center on the possibility of an AI system escaping human oversight and safeguards and setting in motion events that run contrary to human interests.
Citizen Sentiment
FLI also released a US national poll showing that roughly three-quarters of Americans want strong regulation of advanced AI, with 60% saying that superhuman AI should not be created until it is proven safe or controllable. Of the 2,000 US adults surveyed, only a small fraction supported the status quo of fast, unregulated development.
Industry Objectives
The leading AI companies in the United States, including ChatGPT maker OpenAI and Google, have made the development of artificial general intelligence – the theoretical point at which an AI system matches humans at most cognitive tasks – an explicit goal of their research. Although AGI sits one rung below superintelligence, some experts warn that it too could pose an extinction threat, for instance by improving itself toward superintelligence, while also carrying an underlying danger for the contemporary workforce.