In artificial intelligence, learning is achieved by backpropagation: adjusting a model’s parameters to reduce the error in the output. The brain, by contrast, can learn new information while maintaining the knowledge it already has, whereas learning new information in artificial neural networks often interferes with existing knowledge and degrades it rapidly. The researchers analysed and simulated models of information processing in the brain and found that these models employ a fundamentally different learning principle from the one used by artificial neural networks. They developed a mathematical theory showing that letting neurons settle into a ‘prospective configuration’ before the connections between them are adjusted reduces interference between pieces of information during learning. They demonstrated that prospective configuration explains neural activity and behaviour in multiple learning experiments better than artificial neural networks do.
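The idea of settling into a prospective configuration before changing connection strengths can be illustrated with a minimal sketch. The following is a hypothetical toy example (not the researchers’ published code): a two-weight linear network in which the hidden activity first relaxes, with input and target clamped, toward the configuration that best reconciles both, and only then do purely local, error-driven updates pull each weight toward that settled state. All names, values, and learning rates here are illustrative assumptions.

```python
def relax_hidden(x0, t, w1, w2, steps=50, step_size=0.1):
    """Settle the hidden activity x1 by gradient descent on the energy
    E = (x1 - w1*x0)**2 + (t - w2*x1)**2, with input x0 and target t clamped.
    This is the 'inference before plasticity' step: activity changes first."""
    x1 = w1 * x0  # start from the ordinary feedforward value
    for _ in range(steps):
        grad = 2 * (x1 - w1 * x0) - 2 * w2 * (t - w2 * x1)
        x1 -= step_size * grad
    return x1

def train(x0, t, w1, w2, epochs=500, lr=0.1):
    """Alternate relaxation of activity with local weight updates."""
    for _ in range(epochs):
        x1 = relax_hidden(x0, t, w1, w2)
        # Local delta rules: each weight reduces only its own layer's error,
        # measured against the settled (prospective) activity, not the
        # feedforward one -- unlike backpropagation's global error signal.
        w1 += lr * (x1 - w1 * x0) * x0
        w2 += lr * (t - w2 * x1) * x1
    return w1, w2

w1, w2 = train(x0=1.0, t=0.5, w1=0.2, w2=0.3)
print(w1, w2, w1 * w2 * 1.0)  # trained weights and the network's output
```

After training, the product `w1 * w2` approaches the target mapping, showing that the relax-then-update loop alone suffices to learn this toy task; the point of the sketch is only the ordering of the two phases, not biological fidelity.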