The AI Summit was a promising start – but momentum must be maintained

December 25, 2023

The UK’s much anticipated AI Safety Summit saw twenty-eight governments reach a consensus that risks posed by systems on the ‘frontier’ of general purpose AI need to be addressed.

Given the frenetic pace of AI development, and the huge resources behind it, this consensus is much-needed progress. But it is just the first step.  

Billions of dollars have been invested in creating AI systems such as OpenAI’s GPT-4, which can serve as a coding assistant, or draft emails and essays. Without appropriate safeguards, however, it can also tell you how to build a bomb from household materials.

Experts fear that future iterations might be capable of aiding bad actors in mounting large-scale cyberattacks or designing chemical weapons.

GPT-4 is perhaps not overly concerning at the moment. But compare the impressive leap in capability from its predecessor to the current model, project that trajectory forward, and things start to feel scary.

The techniques underlying AI have been shown to scale: more data and computing resources applied to bigger models yield ever more capable AI. With more money and better techniques, we will continue to see rapid advances.
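To make that scaling claim concrete, one widely cited empirical formulation – the ‘Chinchilla’ scaling law of Hoffmann et al. (2022), offered here as an illustration rather than as part of the review discussed below – models a language model’s training loss L as a function of its parameter count N and the number of training tokens D, roughly:

L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

where E is an irreducible error floor and A, B, α and β are empirically fitted constants. Loss falls predictably as N and D grow, which is why more money and compute have so far reliably bought more capable models.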

However, these AI systems are often opaque and unpredictable. New iterations have unexpected abilities that are sometimes uncovered only months after release.

Companies like Google DeepMind and OpenAI are testing and designing safeguards for their models, but not every company is putting in the same degree of work, and it’s unclear if even the most safety-conscious actors are doing enough.

Just before the Summit, the UK government released an ambitious 42-point outline for best practice policies that frontier AI companies should be following. I was part of a team of researchers that conducted a rapid review of whether the six biggest AI companies met these standards.

While all the companies had committed to research on AI safety, none met every standard, and Meta and Amazon received lower ‘safety grades’. Several best practices were met by no company at all, including preparing responses to worst-case scenarios and permitting external scrutiny of the datasets used to train AI.

With technology this powerful, we cannot rely on voluntary self-regulation. National bodies and frameworks will be vital, especially in the countries housing frontier AI developers.

Regulators need expertise and the power to monitor and intervene in AI – not just approving systems for release, but overseeing each stage of development, as happens with new medicines.

International governance is equally important. AI is global: from world-spanning semiconductor supply chains and data needs, to transnational use of frontier models. Meaningful governance of these systems requires domestic and international regulators working in tandem.

Source: University of Cambridge
