The AI Summit was a promising start – but momentum must be maintained

December 25, 2023


The UK’s much anticipated AI Safety Summit saw twenty-eight governments reach a consensus that risks posed by systems on the ‘frontier’ of general purpose AI need to be addressed.

Given the frenetic pace of AI development, and the huge resources behind it, this consensus is much-needed progress. But it is just the first step.  

Billions of dollars have been invested into creating AI systems such as OpenAI’s GPT-4, which can serve as a coding assistant, or draft emails and essays. Without appropriate safeguards, however, it can also tell you how to build a bomb from household materials. 

Experts fear that future iterations might be capable of aiding bad actors in large-scale cyberattacks, or designing chemical weapons.

GPT-4 is perhaps not overly concerning at the moment. But consider the impressive leap in capability from its predecessor, and project that trajectory forward, and things start to feel scary.

The techniques underlying AI have been shown to scale: more data and computing resources applied to bigger models yield ever more capable AI. With more money and better techniques, we will continue to see rapid advances.

However, these AI systems are often opaque and unpredictable. New iterations have unexpected abilities that are sometimes uncovered only months after release.

Companies like Google DeepMind and OpenAI are testing and designing safeguards for their models, but not every company is putting in the same degree of work, and it’s unclear if even the most safety-conscious actors are doing enough.

Just before the Summit, the UK government released an ambitious 42-point outline for best practice policies that frontier AI companies should be following. I was part of a team of researchers that conducted a rapid review of whether the six biggest AI companies met these standards.

While all companies were committed to research on AI safety, none met all the standards, and Meta and Amazon received lower ‘safety grades’. There were several best practices that no company met, including prepared responses to worst-case scenarios, and external scrutiny of the datasets used to train AI.

With technology this powerful, we cannot rely on voluntary self-regulation. National bodies and frameworks will be vital, especially in the countries housing frontier AI developers.

Regulators need expertise and the power to monitor and intervene in AI – not just at the point of approving systems for release, but at each stage of development, as with new medicines.

International governance is equally important. AI is global: from world-spanning semiconductor supply chains and data needs, to transnational use of frontier models. Meaningful governance of these systems requires domestic and international regulators working in tandem.

Source: University of Cambridge
