The Duke and Duchess of Sussex Align With Tech Visionaries in Demanding Ban on Superintelligent Systems

Prince Harry and Meghan Markle have teamed up with artificial intelligence pioneers and Nobel laureates to push for a total prohibition on developing superintelligent AI systems.

The royal couple are among the signatories of an influential declaration calling for “a ban on the creation of superintelligence”. Superintelligent AI refers to artificial intelligence that would exceed human intelligence in every intellectual domain, though the technology remains theoretical.

Primary Requirements in the Statement

The statement insists that the prohibition should remain in place until there is “broad scientific consensus” that ASI can be developed “safely and controllably” and until “substantial public support” has been secured.

Prominent figures who endorsed the statement include the Nobel laureate Geoffrey Hinton and his fellow “godfather” of modern AI, Yoshua Bengio; the Apple co-founder Steve Wozniak; the UK entrepreneur Richard Branson; a former US national security adviser; the former Irish president Mary Robinson; and the British author Stephen Fry. Other Nobel laureates who signed include a peace advocate; the physicist Frank Wilczek; an astrophysicist; and an economist.

Organizational Background

The declaration, aimed at governments, tech firms and lawmakers, was organized by the Future of Life Institute (FLI), an American AI ethics organization that previously called for a pause in the development of powerful AI systems, shortly after the launch of conversational AI tools made artificial intelligence a topic of worldwide public debate.

Industry Perspectives

In recent months the chief executive of Meta, one of the major AI developers in the US, said that advancement toward superintelligent AI was “now in sight”. However, some analysts have suggested that talk of ASI reflects competitive positioning among technology firms that have committed hundreds of billions of dollars to AI, rather than proximity to any genuine scientific breakthrough.

Possible Dangers

Nonetheless, FLI warns that the arrival of artificial superintelligence “in the coming decade” would carry numerous risks, ranging from the elimination of human jobs and the loss of civil liberties to national security threats and even the possible extinction of humanity. Existential fears about artificial intelligence center on the possibility of a system escaping human oversight and safeguards and setting in motion events that harm human welfare.

Public Opinion

The institute published a US survey showing that about 75% of Americans want strong oversight of advanced AI, with six in 10 believing that artificial superintelligence should not be developed until it is proven safe and controllable. The survey noted that only 5% of respondents supported the status quo of fast, unregulated development.

Industry Objectives

The leading AI companies in the United States, including the ChatGPT developer OpenAI and the search giant, have made the development of artificial general intelligence – the hypothetical point at which AI matches human performance across a wide range of intellectual tasks – a stated objective of their research. While AGI is a step short of superintelligence, some specialists warn that it too could pose an existential risk, for example by improving itself until it reaches superintelligent levels, while also presenting an underlying threat to the contemporary workforce.

Taylor Foster

A Canadian food enthusiast and blogger passionate about sharing local delicacies and recipes.