The Duke and Duchess of Sussex Join AI Pioneers in Calling for Ban on Advanced AI

Prince Harry and Meghan Markle have joined forces with AI experts and Nobel laureates to advocate for a complete ban on developing superintelligent AI systems.

The royal couple are among the signatories of a powerful statement that demands “a ban on the development of superintelligence”. Artificial superintelligence (ASI) refers to artificial intelligence that could exceed human abilities in all cognitive tasks, though this technology has not yet been developed.

Key Demands in the Statement

The statement insists that the ban should remain in place until there is “broad scientific consensus” that superintelligence can be created “with proper safeguards” and until “substantial public support” has been achieved.

Prominent signatories include a Nobel Prize-winning AI researcher widely regarded as a technology visionary, along with his colleague, another AI expert and pioneer of contemporary artificial intelligence; a Silicon Valley tech entrepreneur; the UK entrepreneur who founded Virgin; Susan Rice; a former head of state; and a UK writer and public intellectual. Additional Nobel laureates who signed include Beatrice Fihn, physics Nobelist John C Mather, and an economics laureate.

Behind the Movement

The declaration, aimed at national leaders, tech firms and policymakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety group that in 2023 called for a pause in the development of powerful artificial intelligence, shortly after the launch of conversational AI tools made artificial intelligence a topic of global political debate.

Tech Sector Views

In recent months, Mark Zuckerberg, the chief executive of Facebook's parent company Meta, one of the leading tech companies in the United States, stated that the development of superintelligence was “approaching reality”. However, some experts have suggested that such talk of superintelligence reflects competitive positioning among technology firms that have recently invested enormous sums in artificial intelligence, rather than the sector being close to any genuine technical breakthrough.

Possible Dangers

Nonetheless, the organization warns that the possibility of artificial superintelligence arriving “within the next ten years” carries numerous risks, from the displacement of human workers and the loss of civil liberties to national security threats and even human extinction. The deepest concerns about AI center on the possibility of an AI system escaping human oversight and protective measures and initiating actions against human welfare.

Public Opinion

The institute released an American survey showing that about 75% of US citizens want strong oversight of sophisticated artificial intelligence, with 60% believing that superhuman AI should not be developed until it is proven safe and controllable. The poll of 2,000 US adults found that only 5% backed the status quo of fast, unregulated development.

Industry Objectives

The top artificial intelligence firms in the US, including the ChatGPT developer OpenAI and Google, have made the creation of human-level AI – the theoretical point at which AI matches human intelligence across many intellectual tasks – an explicit goal of their research. Although this is a step short of ASI, some specialists warn that it too could pose an extinction threat, for instance by enhancing its own capabilities until it reaches superintelligent levels, while also posing an underlying danger to the contemporary workforce.

Rebecca Russell
