Can AI Grasp Related Concepts After Learning Only One?

December 11, 2023

Humans have the ability to learn a new concept and then immediately use it to understand related uses of that concept—once children know how to “skip,” they understand what it means to “skip twice around the room” or “skip with your hands up.” 
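The "skip twice" idea can be made concrete with a toy interpreter. This is a minimal illustrative sketch, not code from the paper: the primitive table and modifier names are invented, and the point is only that knowing a new primitive plus a known modifier is enough to handle a never-seen combination.

```python
# Toy illustration of compositional generalization (names invented here).
PRIMITIVES = {"skip": ["SKIP"], "jump": ["JUMP"]}

def interpret(command):
    """Map a command like 'skip twice' to a sequence of actions."""
    words = command.split()
    actions = list(PRIMITIVES[words[0]])
    for modifier in words[1:]:
        if modifier == "twice":
            actions = actions * 2
        elif modifier == "thrice":
            actions = actions * 3
    return actions

# A system that knows "jump" and how "twice" composes can handle
# "jump twice" without ever having seen that exact combination.
print(interpret("jump twice"))  # ['JUMP', 'JUMP']
```

The question the researchers ask is whether a neural network, which has no such explicit rule table, can acquire this compositional behavior from experience alone.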

But are machines capable of this type of thinking? In the late 1980s, the philosophers and cognitive scientists Jerry Fodor and Zenon Pylyshyn posited that artificial neural networks—the engines that drive artificial intelligence and machine learning—are not capable of making these connections, known as “compositional generalizations.” In the decades since, scientists have developed ways to instill this capacity in neural networks and related technologies, but with mixed success, keeping the decades-old debate alive.

Researchers at New York University and Spain’s Pompeu Fabra University have now developed a technique—reported in the journal Nature—that advances the ability of these tools, such as ChatGPT, to make compositional generalizations. This technique, Meta-learning for Compositionality (MLC), outperforms existing approaches and is on par with, and in some cases better than, human performance. MLC centers on training neural networks—the engines driving ChatGPT and related technologies for speech recognition and natural language processing—to become better at compositional generalization through practice.

Developers of existing systems, including large language models, have hoped that compositional generalization will emerge from standard training methods, or have developed special-purpose architectures in order to achieve these abilities. MLC, in contrast, shows how explicitly practicing these skills allows these systems to unlock new powers, the authors note.
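The "practice" the article describes takes the form of training episodes. The sketch below shows the meta-learning idea in miniature, with invented pseudowords and episode details assumed for illustration: in each episode the mapping from words to outputs is reshuffled, so a network trained across many such episodes cannot memorize meanings and is instead pushed to infer them from the study examples and compose them on the query.

```python
# Hedged sketch of episode-based meta-learning for compositionality.
# Vocabulary and episode format are invented for illustration.
import random

ACTIONS = ["RED", "GREEN", "BLUE"]
WORDS = ["dax", "wif", "lug"]

def make_episode(seed):
    """Build one training episode: study examples plus a held-out query."""
    rng = random.Random(seed)
    # Fresh word -> action mapping every episode, so meanings can't be memorized.
    meaning = dict(zip(WORDS, rng.sample(ACTIONS, len(ACTIONS))))
    # Study examples show each primitive in isolation.
    study = [(word, [meaning[word]]) for word in WORDS]
    # The query asks for a composition never shown in the study set.
    query_word = rng.choice(WORDS)
    query = (f"{query_word} twice", [meaning[query_word]] * 2)
    return study, query

study, query = make_episode(0)
```

Optimizing a network to answer the query correctly across thousands of such episodes rewards the compositional strategy itself, rather than any particular word meaning.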

“For 35 years, researchers in cognitive science, artificial intelligence, linguistics, and philosophy have been debating whether neural networks can achieve human-like systematic generalization,” says Brenden Lake, an assistant professor in NYU’s Center for Data Science and Department of Psychology and one of the authors of the paper. “We have shown, for the first time, that a generic neural network can mimic or exceed human systematic generalization in a head-to-head comparison.”

Source: New York University.
