Over a decade ago, the neuroscientist Ev Fedorenko asked 48 English speakers to complete tasks like reading sentences, recalling information, solving math problems, and listening to music. As they did this, she scanned their brains using functional magnetic resonance imaging to see which circuits were activated. If, as linguists have proposed for decades, language is connected to thought in the human brain, then the language processing regions would be activated even during nonlinguistic tasks.
Fedorenko's experiment, published in 2011 in the Proceedings of the National Academy of Sciences, showed that when it comes to arithmetic, musical processing, general working memory, and other nonlinguistic tasks, the language regions of the human brain showed no response. Contrary to what many linguists have claimed, complex thought and language are separate things. One does not require the other. "We have this highly specialized place in the brain that doesn't respond to other activities," says Fedorenko, who is an associate professor in the Department of Brain and Cognitive Sciences (BCS) and the McGovern Institute for Brain Research. "It's not true that thought critically needs language."
The design of the experiment, using neuroscience to understand how language works, how it evolved, and its relation to other cognitive functions, is at the heart of Fedorenko's research. She is part of a unique intellectual triad in MIT's Department of BCS, along with her colleagues Roger Levy and Ted Gibson. (Gibson and Fedorenko have been married since 2007.) Together they have engaged in a years-long collaboration and built a significant body of research focused on some of the biggest questions in linguistics and human cognition. While working in three independent labs — EvLab, TedLab, and the Computational Psycholinguistics Lab — the researchers are motivated by a shared fascination with the human mind and how language works in the brain. "We have a great deal of interaction and collaboration," says Levy. "It's a very broadly collaborative, intellectually rich and diverse landscape."
Using combinations of computational modeling, psycholinguistic experimentation, behavioral data, brain imaging, and large naturalistic language datasets, the researchers also share an answer to a fundamental question: What is the purpose of language? Of all the possible answers to why we have language, perhaps the simplest and most obvious is communication. "Believe it or not," says Ted Gibson, "that is not the standard answer."
Gibson first came to MIT in 1993 and joined the faculty of the Linguistics Department in 1997. Recalling the experience today, he describes it as frustrating. The field of linguistics at that time was dominated by the ideas of Noam Chomsky, one of the founders of MIT's Graduate Program in Linguistics, who has been called the father of modern linguistics. Chomsky's "nativist" theories of language posited that the purpose of language is the articulation of thought and that language capacity is built in before any learning takes place. But Gibson, with his training in math and computer science, felt that these ideas had never been satisfactorily tested. He believed that answering many outstanding questions about language required quantitative research, a departure from standard linguistic methodology. "There's no reason to rely only on you and your friends, which is how linguistics has worked," Gibson says. "The data you can get can be much broader if you crowdsource lots of people using experimental methods." Chomsky's ascendancy in linguistics presented Gibson with what he saw as a challenge and an opportunity. "I felt like I had to figure it out in detail and see if there was truth in these claims," he says.
Three decades after he first joined MIT, Gibson believes that the collaborative research at BCS is persuasive and provocative, pointing to new ways of thinking about human culture and cognition. "Now we're at a stage where it is not just arguments against. We have a lot of positive stuff saying what language is," he explains. Levy adds: "I would say all three of us are of the view that communication plays a very important role in language learning and processing, but also in the structure of language itself."
Levy points out that the three researchers completed PhDs in different subjects: Fedorenko in neuroscience, Gibson in computer science, Levy in linguistics. Yet for years before their paths finally converged at MIT, their shared interests in quantitative linguistic research led them to follow each other's work closely and be influenced by it. The first collaboration between the three was in 2005 and focused on language processing in Russian relative clauses. Around that time, Gibson recalls, Levy was presenting what he describes as "lovely work" that was instrumental in helping him to understand the links between language structure and communication. "Communicative pressures drive the structures," says Gibson. "Roger was crucial for that. He was the one helping me think about those things a long time ago."
Levy's lab is focused on the intersection of artificial intelligence, linguistics, and psychology, using natural language processing tools. "I try to use the tools that are afforded by mathematical and computer science approaches to language to formalize scientific hypotheses about language and the human mind and test those hypotheses," he says.
Levy points to ongoing research with Gibson on language comprehension as an example of the benefits of collaboration. "One of the big questions is: When language understanding fails, why does it fail?" Together, the researchers have applied the concept of a "noisy channel," first developed by the information theorist Claude Shannon in the late 1940s, which models how messages are corrupted in transmission. "Language understanding unfolds over time, involving an ongoing integration of the past with the present," says Levy. "Memory itself is an imperfect channel conveying the past from our brain a moment ago to our brain now in order to support successful language understanding." Indeed, the richness of our linguistic environment, the experience of hundreds of millions of words by adulthood, may create a kind of statistical knowledge guiding our expectations, beliefs, predictions, and interpretations of linguistic meaning. "Statistical knowledge of language actually interacts with the constraints of our memory," says Levy. "Our experience shapes our memory for language itself."
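The logic of the noisy-channel account can be sketched in a few lines of Python. Everything below is an invented toy, not material from the researchers' studies: the sentences, the prior probabilities, and the word-substitution noise model are all illustrative stand-ins. The point is only to show the core move — a comprehender combines a prior over what speakers plausibly intend with a model of how messages get corrupted, and can end up "hearing past" an implausible literal input.

```python
# Toy noisy-channel comprehension sketch. A listener perceives a sentence
# and infers the intended one: posterior ∝ prior(intended) × P(perceived | intended).
# Sentences, priors, and the noise model are hypothetical illustrations.

# Prior: how likely a speaker is to intend each sentence (invented values)
prior = {
    "the mother gave the candle to the daughter": 0.95,
    "the mother gave the daughter to the candle": 0.05,
}

def noise_likelihood(perceived, intended):
    """Crude noise model: each word-level mismatch costs a factor of 0.5."""
    diffs = sum(a != b for a, b in zip(perceived.split(), intended.split()))
    return 0.5 ** diffs

def posterior(perceived):
    """Normalized posterior over intended sentences given the perceived one."""
    scores = {s: prior[s] * noise_likelihood(perceived, s) for s in prior}
    total = sum(scores.values())
    return {s: v / total for s, v in scores.items()}

# Even when the literal input is the implausible sentence, the plausible
# intended sentence wins once prior and noise model are combined:
perceived = "the mother gave the daughter to the candle"
post = posterior(perceived)
```

With these invented numbers, the plausible reading ("...gave the candle to the daughter") receives most of the posterior mass despite not matching the input literally — the reinterpretation effect the noisy-channel framework predicts.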
All three researchers say they share the belief that by following the evidence, they will eventually discover an even bigger and more complete story about language. "That's how science goes," says Fedorenko. "Ted trained me, along with Nancy Kanwisher, and both Ted and Roger are very data-driven. If the data is not giving you the answer you thought, you don't just keep pushing your story. You think of new hypotheses. Almost everything I have done has been like that." At times, Fedorenko's research into parts of the brain's language system has surprised her and forced her to abandon her hypotheses. "In a certain project I came in with a prior idea that there would be some separation between parts that cared about combinatorics versus word meanings," she says, "but every little bit of the language system is sensitive to both. At some point, I was like, this is what the data is telling us, and we have to roll with it."
The researchers' work pointing to communication as the constitutive purpose of language opens new possibilities for probing and studying non-human language. The standard claim is that human language has a drastically more extensive lexicon than animal communication systems, which are also said to lack grammar. "But many times, we don't even know what other species are communicating," says Gibson. "We say they can't communicate, but we don't know. We don't speak their language." Fedorenko hopes that more opportunities to make cross-species linguistic comparisons will open up. "Understanding where things are similar and where things diverge would be super useful," she says.
Meanwhile, the potential applications of language research are far-reaching. One of Levy's current research projects focuses on how people read, using machine learning algorithms informed by the psychology of eye movements to develop proficiency tests. By tracking the eye movements of people who speak English as a second language while they read texts in English, Levy can predict how good they are at English, an approach that could one day replace the Test of English as a Foreign Language. "It's an implicit measure of language rather than a much more game-able test," he says.
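The general shape of such an approach can be sketched as follows. This is not Levy's actual model — the real systems are machine-learned from large eye-tracking datasets — but a hypothetical illustration: summarize a reader's fixations into a few features, then combine them with weights. The feature names, weights, and example fixation data below are all invented.

```python
# Hypothetical sketch of eye-movement-based proficiency estimation.
# Features and weights are invented for illustration only.

def reading_features(fixations):
    """fixations: list of (word, duration_ms, skipped) tuples."""
    durations = [d for _, d, skipped in fixations if not skipped]
    n = len(fixations)
    return {
        "mean_fixation_ms": sum(durations) / max(len(durations), 1),
        "skip_rate": sum(1 for *_, s in fixations if s) / n,
    }

# Invented weights reflecting a common pattern in the reading literature:
# proficient readers tend to fixate more briefly and skip predictable words.
WEIGHTS = {"mean_fixation_ms": -0.01, "skip_rate": 3.0}
BIAS = 4.0

def proficiency_score(fixations):
    f = reading_features(fixations)
    return BIAS + sum(WEIGHTS[k] * f[k] for k in WEIGHTS)

# Two made-up reading records: a fast reader who skips a short function word,
# and a slower reader who fixates every word at length.
fast_reader = [("the", 0, True), ("cat", 180, False), ("sat", 170, False)]
slow_reader = [("the", 250, False), ("cat", 320, False), ("sat", 300, False)]
```

In a real system the weights would be learned from labeled data rather than hand-set, and the feature set would be far richer (regressions, re-fixations, word-by-word predictability effects), but the pipeline — fixations in, proficiency estimate out — is the same.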
The researchers agree that some of the most exciting opportunities in the neuroscience of language lie with large language models, which provide new opportunities for asking new questions and making new discoveries. "In the neuroscience of language, the kind of stories that we've been able to tell about how the brain does language were limited to verbal, descriptive hypotheses," says Fedorenko. Computationally implemented models are now amazingly good at language and show some degree of alignment to the brain, she adds. Now, researchers can ask questions such as: What are the actual computations that cells are doing to get meaning from strings of words? "You can now use these models as tools to get insights into how humans might be processing language," she says. "And you can take the models apart in ways you can't take apart the brain."
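The style of analysis behind "alignment to the brain" can be sketched on synthetic data. The sketch below is an assumption-laden stand-in: the "model representations" and "brain responses" are random arrays rather than real LLM activations and fMRI recordings, but the procedure — fit a linear map from model features to neural responses on some sentences, then evaluate it on held-out ones — mirrors the encoding-model approach used in this line of research.

```python
# Minimal sketch of a model-to-brain encoding analysis on synthetic data.
# Real studies use actual LLM hidden states and fMRI/intracranial recordings.
import numpy as np

rng = np.random.default_rng(0)

n_sentences, n_features, n_voxels = 100, 16, 8
model_reps = rng.normal(size=(n_sentences, n_features))   # stand-in LLM embeddings
true_map = rng.normal(size=(n_features, n_voxels))        # unknown "ground truth"
brain_resp = model_reps @ true_map + 0.1 * rng.normal(size=(n_sentences, n_voxels))

# Fit a linear encoding model on 80 sentences, evaluate on the held-out 20
train, test = slice(0, 80), slice(80, 100)
weights, *_ = np.linalg.lstsq(model_reps[train], brain_resp[train], rcond=None)
pred = model_reps[test] @ weights

# Alignment score: correlation between predicted and observed responses
score = np.corrcoef(pred.ravel(), brain_resp[test].ravel())[0, 1]
```

Because the synthetic responses are (noisy) linear functions of the model features, the held-out correlation here is near 1; with real neural data the score is far lower, and the interesting question is which models, and which of their layers, predict brain responses best.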