The Duke and Duchess of Sussex Align With Tech Visionaries in Demanding Prohibition on Advanced AI

The Duke and Duchess of Sussex have joined forces with artificial intelligence pioneers and Nobel Prize winners to advocate for a complete ban on developing superintelligent AI systems.

The royal couple are among the signatories of a statement that calls for “a ban on the development of artificial superintelligence”. Superintelligent AI refers to artificial intelligence that would surpass human abilities at all cognitive tasks; no such system has yet been developed.

Key Demands in the Declaration

The statement says the ban should remain in place until there is “widespread expert agreement” that ASI can be developed “with proper safeguards” and until “substantial public support” has been secured.

Notable signatories include the Nobel laureate and AI pioneer Geoffrey Hinton; his fellow pioneer of modern AI, Yoshua Bengio; the Apple co-founder Steve Wozniak; the UK entrepreneur and Virgin founder; Susan Rice; the former Irish president Mary Robinson; and a UK writer and public intellectual. Other Nobel laureates who signed include a peace advocate, the physics Nobelist John C Mather, and an economics expert.

Organizational Background

The declaration, aimed at national leaders, tech firms and lawmakers, was organized by the Future of Life Institute (FLI), a US-based AI safety group that previously called, in 2023, for a pause on the development of powerful AI systems, shortly after the emergence of ChatGPT made artificial intelligence a global political talking point.

Tech Sector Views

In July, Meta's chief executive claimed that the development of superintelligence was “approaching reality”. However, some experts argue that talk of ASI reflects competitive positioning among technology firms that have recently spent hundreds of billions of dollars on artificial intelligence, rather than any imminent scientific breakthrough.

Possible Dangers

Nonetheless, the organization states that the prospect of artificial superintelligence being developed “in the coming decade” carries numerous threats, from the elimination of human jobs and the loss of civil liberties to national security risks and even human extinction. A core concern is that an AI system could evade human control and safety guardrails and act against human interests.

Citizen Sentiment

In a US national poll released by the institute, about 75% of Americans said they want robust regulation of advanced AI, and six in 10 said superhuman AI should not be developed until it is proven safe or controllable. Only 5% of respondents supported the status quo of fast, unregulated development.

Corporate Goals

The leading AI companies in the United States, including the ChatGPT developer OpenAI and the search giant, have made the creation of human-level AI – the theoretical point at which AI matches human intelligence at most cognitive tasks – an explicit goal of their research. While this falls short of superintelligence, some experts warn it could still pose an existential risk, for example by improving itself until it reaches superintelligence, as well as threatening serious disruption to the labour market.

Linda Clark

A tech enthusiast and software developer with a passion for AI and open-source projects.