The UK’s much-anticipated AI Safety Summit saw twenty-eight governments reach a consensus that risks posed by systems on the ‘frontier’ of general-purpose AI need to be addressed.
Given the frenetic pace of AI development, and the huge resources behind it, this consensus is much-needed progress. But it is just the first step.
Billions of dollars have been invested in creating AI systems such as OpenAI’s GPT-4, which can serve as a coding assistant or draft emails and essays. Without appropriate safeguards, however, such systems can also tell you how to build a bomb from household materials.
Experts fear that future iterations might be capable of aiding bad actors in large-scale cyberattacks, or designing chemical weapons.
GPT-4 is perhaps not overly concerning at the moment. But if we consider the impressive leap in capability from its predecessor to the current model, and project that trajectory forward, things start to feel scary.
The techniques underlying AI have been shown to scale: more data and computing resources applied to bigger models yield ever more capable AI. With more money and better techniques, we will continue to see rapid advances.
However, these AI systems are often opaque and unpredictable. New iterations have unexpected abilities that are sometimes uncovered only months after release.
Companies like Google DeepMind and OpenAI are testing and designing safeguards for their models, but not every company is putting in the same degree of work, and it’s unclear if even the most safety-conscious actors are doing enough.
Just before the Summit, the UK government released an ambitious 42-point outline for best practice policies that frontier AI companies should be following. I was part of a team of researchers that conducted a rapid review of whether the six biggest AI companies met these standards.
While all the companies were committed to research on AI safety, none met every standard, with Meta and Amazon receiving lower ‘safety grades’. Several best practices were met by no company at all, including prepared responses to worst-case scenarios and external scrutiny of the datasets used to train AI.
With technology this powerful, we cannot rely on voluntary self-regulation. National bodies and frameworks will be vital, especially in the countries housing frontier AI developers.
Regulators need the expertise and the power to monitor AI and intervene: not just to approve systems for release, but to oversee each stage of development, as is done with new medicines.
International governance is equally important. AI is global: from world-spanning semiconductor supply chains and data needs to the transnational use of frontier models. Meaningful governance of these systems requires domestic and international regulators working in tandem.