The frenetic pace of technology in the 21st century is a double-edged sword.
Emerging technologies could increase our prosperity and longevity, and help eradicate disease. However, they also hold the potential for conflict, democratic breakdown and climate crises.
This is the “lesson of history”, according to the director of a new Cambridge institute set up to help ensure new technologies are harnessed for the good of humankind.
“Previous waves of technology helped us thrive as a species, with higher incomes and more people alive than ever before,” says Dr Stephen Cave. “But those waves also had huge costs.
“The last industrial revolution, for example, fuelled the rise of communism and fascism, colonial expansion and the greenhouse gases that now threaten the biosphere.”
Today, both the scale and speed of technological change are greater than ever before.
Large language models have seen among the fastest uptake of any technology in history: ChatGPT reached 100 million active users within sixty days of launch. Just months after the Covid outbreak, scientists were testing mRNA vaccines that now protect over five billion people.
The potential rewards of these technologies are immense, while the worst-case scenarios, should we fail to manage them, could be existential.
Maximising the benefits while minimising the risks requires insights into history, society, politics, psychology and ethics as well as a deep understanding of the technologies themselves.
To meet this interdisciplinary challenge, the University has brought together three established Cambridge research centres under one banner: the new Institute for Technology and Humanity, launched today.
By integrating the Leverhulme Centre for the Future of Intelligence (CFI), the Centre for Human-Inspired AI (CHIA), and the Centre for the Study of Existential Risk (CSER), the new initiative will contain historians and philosophers as well as computer scientists and robotics experts.
“The new institute demonstrates that the University of Cambridge is rising to this challenge of ensuring that human technologies do not exceed and overwhelm human capacities and human needs,” said Prof Deborah Prentice, Cambridge's Vice-Chancellor.
Between the three Cambridge centres, the UK's first master's degree in the ethics of AI has already been launched, and researchers have advised international organisations on the governance of nuclear weapons and autonomous weapons.
Current work includes design toolkits for ethical AI, computer vision systems that could help self-driving cars spot hidden pedestrians, and research on the effect of volcanoes on global communications systems.
The new Institute will see major research strands on lessons from Covid-19, the misuses of generative AI, and the development of emotion-enhanced AI.
There are also plans for scholarship initiatives and further master's and PhD programmes on the ethics of technology as well as human-centred robotics.
Situated in the School of Arts and Humanities, the Institute is closely tied to other flagship programmes, such as the University-wide ai@cam initiative, which aims to support the development of AI for science and society.
The University has long been at the forefront of technological development, from IVF to the webcam – and also responses to it, from Bertrand Russell’s work on nuclear disarmament to Onora O’Neill’s contributions to bioethics.
This push and pull between the engineering of new technology and the ethics behind it is “how the future gets forged”, says Cave. “We now have all this under one umbrella.”