People

Director

Roger Levy

My work is dedicated to advancing our foundational understanding of human language. How do we understand what we hear and read? How are we able to convert thoughts into meaningful utterances that others understand? And how do we acquire the knowledge that makes all this possible? My research program sits at the intersection of artificial intelligence, psychology, and linguistics, and tackles these questions through theory, computationally implemented models of language, psychological experimentation, analysis of large linguistic datasets, and more.


Postdocs

Stephan Meylan

My research focuses on the nature, origins, and utility of linguistic knowledge in children's early development, and carefully considers how such knowledge grows into the adult ability to process language. I am particularly interested in approaching language learning as a multi-task, multi-agent problem. In my research I use a combination of computational models from NLP, corpus studies, web-based experiments, and in-lab experiments. My postdoc is co-advised by Dr. Elika Bergelson at Harvard, with whom I conduct eyetracking experiments to better understand children's early knowledge of language.


PhD students

Thomas Hikaru Clark

My research interests relate to how humans solve the problem of communication under various constraints, including information-theoretic and availability-based pressures in both typical and atypical language user populations. I investigate these questions using a combination of computational modeling, behavioral experiments, and corpus studies.

Tiwalayo Eisape

I study the algorithms that underlie human language use. I am particularly interested in resource rationality and computational modeling. My current work uses recent advances in natural language processing, deep learning, and neurosymbolic machine learning to model human-like production, comprehension, and linguistic reasoning.

Ben Lipkin

My research focuses on the computational and cognitive mechanisms supporting contextualized language processing, with an emphasis on pragmatics and discourse. I am particularly interested in how people integrate inferences over linguistic forms with mental models of world states to resolve the intended meanings of utterances. I investigate these questions using computational modeling along with human neural and behavioral experiments.

Ced Zhang

I study language, thinking, and learning in humans and machines from interdisciplinary perspectives. On the language side, one focus is the nature and theories of linguistic meaning. Another is how to model our abilities to learn rich knowledge representations from language and express complex thoughts in language. A long-term goal is to build, in a cognitively inspired way, more general and beneficial AI systems that can coherently and effectively communicate with us.


Masters students

Kinan Martin

My interests are in natural language processing and the advent of large language models, which offer a window into how humans process language. My research probes language models of differing modalities to understand how they represent linguistic structures, and how these representations may mirror or differ from those of the human brain. I am also interested in testing theories of cross-linguistic universals, such as uniform information density.

Subha Nawer Pushpita

I am very interested in understanding what aspects of a piece of text, or of a concept expressed in a certain language, make it easier for learners to process and understand. I envision a future where we use computational frameworks and tools to learn more about our brains, so that we can design learning materials and technologies that help us become better and more productive learners. In this era of LLMs, when the question of whether AI will outsmart us constantly arises, we need to be more efficient learners and much better communicators, and I want to work on computational frameworks that can help achieve both.

Sophia Zhi

I'm interested in child language acquisition and what computational models can teach us about how children learn language. My research studies the role of multimodal information in child phonological acquisition and processing.


Undergraduates

Marisa Montione

I am interested in language comprehension and how it affects human perception and biases. One of my research projects uses magnetoencephalography (MEG) neurosignals to investigate what language processing in the human brain looks like in real time. Additionally, I have interests in aphasia and language acquisition.

Diego Ureña

I’m a class of 2024 undergraduate majoring in computation and cognition (6-9). I have broad interests across language production, comprehension, and processing. My current project is with Yevgeni Berzak to better understand the relationship between language processing and comprehension through the use of MEG neuroimaging, and using that information to build improved NLP models.


Postdoc Alumni

Helena Aparicio (now Assistant Professor, Cornell Linguistics)

My research focuses on linguistic meaning and its interactions with context in language understanding. Most of my work has focused on understanding how different types of linguistic context-dependence affect the way in which listeners exploit contextual information to efficiently approximate the speaker's meaning. To answer these questions, I combine insights from theoretical linguistics and cognitive science more broadly with experimental and computational methods.

Yevgeni Berzak (now Assistant Professor, Technion Faculty of Data and Decision Sciences)

My research combines Natural Language Processing (NLP), Computational Linguistics, and Cognitive Science. I currently study what eye movements during reading can reveal about the linguistic knowledge and cognitive state of the reader, and how such signals can be used to improve NLP. Other related interests include multilingualism, linguistic typology, treebanking, and grounded language acquisition.

Canaan Breiss (now Assistant Professor, USC Linguistics)

My research is in theoretical, computational, and experimental phonology, with particular interest in learning/acquisition, the representation of overlapping and interacting phonological processes, and phonology's interfaces with (morpho)syntax and the lexicon. In the study of these phenomena, I use computational methods from Bayesian cognitive modelling, NLP, and computational phonology; corpus methods; online surveys of understudied languages; and laboratory experiments of all types with infants and adults. I'm currently affiliated with both the Computational Psycholinguistics Lab and the MIT-IBM Watson AI Research Lab.

Victoria Fossum

Victoria's interests include real-time human sentence processing, probabilistic syntactic methods, and machine translation. She has recently done work comparing hierarchical and sequential probabilistic grammars as models of real-time human sentence comprehension, evaluated against eye-tracking corpora.

MH Tessler (now Research Scientist, DeepMind)

I am interested in how people use language to share their thoughts and feelings. I am particularly fascinated by the context-sensitivity of language understanding, issues of vagueness, and how people learn from linguistic messages. In my research, I use computational models and behavioral experiments and enjoy thinking up novel data analytic methods.

Titus von der Malsburg (now Junior Professor, Stuttgart Linguistics)

I investigate how the human brain makes sense of language. How is each word that we hear or read combined with our understanding of the sentence so far? What sources of knowledge are recruited in this process? And how are they reconciled when they are in conflict? To answer questions like these, I use experimental and computational methods ranging from eye-tracking and event-related brain potentials to large-scale crowd-sourcing and Bayesian data analysis and cognitive modeling.

Eva Wittenberg (now Associate Professor, CEU Cognitive Science)

I am interested in how the mind assembles meaning, how this capacity came to be, and how it interacts with other cognitive abilities. I investigate the decisions that speakers face when they wrap their messages in grammar. Speakers make structural choices dozens of times per day, and listeners rapidly process them, make inferences about why something was said in a particular way, and create a representation of the speaker’s intended meaning in their minds.

Noga Zaslavsky (now Assistant Professor, UC Irvine Language Science)

My research aims to understand language, learning, and reasoning from first principles, building on ideas and methods from machine learning and information theory. I’m particularly interested in finding computational principles that explain how we use language to represent the environment; how this representation can be learned in humans and in artificial neural networks; how it interacts with other cognitive functions, such as perception, action, social reasoning, and decision making; and how it evolves over time and adapts to changing environments and social needs. I believe that such principles could advance our understanding of human and artificial cognition, as well as guide the development of artificial agents that can evolve on their own human-like communication systems without requiring huge amounts of human-generated training data.


PhD Alumni

Klinton Bicknell (now Director of AI, Duolingo)

My research seeks to understand the remarkable efficiency of language comprehension. I investigate how we comprehend using a diverse set of methodologies: I build formal, computational models of comprehension using tools from computational linguistics and machine learning, and I also perform a wide range of empirical work, including both controlled experiments (especially eye tracking) and statistical analyses of large, naturalistic corpora. (Klinton was CPL's first PhD graduate!)

Rebecca Colavin

My main area of interest is computational phonology. I am interested in phonotactics, the set of language specific rules that determine the acceptability of sound sequences. In particular, I am interested in the nature of the phonotactic grammar and the relationship between lexical frequency and gradient speaker judgments.

Gabriel Doyle (now Associate Professor, SDSU Linguistics)

I’m a linguist interested in understanding how language is shaped by social and cognitive pressures. I model their influence using math and computers. My core stance on humans is that we are boundedly rational beings, meaning that we aim to behave rationally in a world that is incredibly complex. Thus, we can use rational (usually Bayesian) mathematical models of human behavior as our basic infrastructure for human thought and then examine how we deviate from that and why. My research uses these mathematical models to formalize human linguistic behaviors, aiming to quantify the effects of different cognitive, social, and communicative pressures on our language.

Richard Futrell (now Associate Professor, UC Irvine Language Sciences)

I study language processing in humans and machines using information theory and Bayesian cognitive modeling. I also work on NLP and AI interpretability.

Jon Gauthier (now Postdoctoral Scholar, UCSF)

I'm interested in linguistic meaning: how it is acquired by the child, how it is structured in the mind of the speaker, and how it is worked out in the mind of the listener. I study these questions through different computational case studies, combining data and methods from linguistics, psychology, artificial intelligence, and neuroscience. You can find much more about my work on my website, where I also blog about language, cognitive science, and philosophy, among other things.

Matthias Hofer (now Postdoc, MIT BCS)

My research focuses on cognitive models of how language is perceived and acquired, with the goal of connecting these models to social and cultural processes to explain language structure. In particular, I am interested in how properties such as discreteness and compositionality arise in grounded communication systems that evolve over time. I pursue these questions by conducting behavioral experiments that mimic cultural evolutionary processes, and by building probabilistic models of the observed linguistic behavior.

Jennifer Hu (now Research Fellow, Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University)

My research develops computational models of how humans resolve ambiguity in language understanding, with the goal of building better systems of artificial intelligence. I am also interested in how brains and machines represent linguistic meaning and structure.

Anubha Kothari

Anubha's PhD research focused on word order variation and language processing constraints in Hindi, using corpus analysis and controlled behavioral experiments.

Emily Morgan (now Assistant Professor, UC Davis Linguistics)

To know a language is to use one’s past linguistic experience to form expectations about future linguistic experience. This process is mediated by both speakers’ stored representations of their previous experience, and the online procedures used to process new stimuli in light of those representations. My research thus asks what the form of these representations is, and how the language processing system integrates these stored representations with incoming stimuli to form online expectations during language comprehension. I also ask comparable questions in other domains, specifically programming languages and music.

Bozena Pajak (now VP of Learning and Curriculum, Duolingo)

I am interested in how previously acquired linguistic knowledge affects future language learning. In particular, I adopt a Bayesian perspective on learning, which leads naturally to questions about how learners interpret new language input given their current state of knowledge. My work primarily investigates learning at the phonetic and phonological levels: I use psycholinguistic experiments and computational modeling to study how adults discriminate novel sounds and interpret statistical phonetic regularities in novel language speech given their prior language exposure.

Albert Yonghahk Park

Albert's PhD research focused on nonprojective dependency syntax and on noisy-channel grammar correction.

Till Poppels

I'm interested in understanding how people process language, focusing in particular on the emergence of meaning from interaction. What speakers mean is often underspecified in what they actually say, and I want to understand how listeners infer the missing pieces of the puzzle. Recently, my main focus in addressing this rather broad question has been on ellipsis, in particular Verb Phrase Ellipsis. In some sense, elliptical utterances represent an extreme form of underspecification, but how the missing information is inferred remains highly controversial. I also work on the topic of inferential language comprehension from two other angles: the rational resolution of multiple implicature-driving forces; and a noisy-channel approach to non-literal interpretation.

Peng Qian (now Postdoc, MIT BCS & Harvard Psychology)

I'm interested in the cognitive basis of human language. My current work combines behavioral experiments and computational models to investigate the relevance of linguistic knowledge in learning, reasoning, and judgment.

Nathaniel J. Smith

Language is one of humanity's most complicated artifacts, yet language use is fast, effective, and tightly coordinated with concurrent non-linguistic activities. The goal of my research is to understand the architecture of the cognitive systems that allow language to be used in real time, and to interact in a fine-grained, flexible, and non-modular way with non-linguistic cognition and action. I'm interested in this both for its own sake, and because it seems to me a paradigm case of a challenging cognitive task: a domain where some of the complexities of high-level cognition are laid bare, and whose study is likely to give insight into the architecture of high- and low-level cognition in general. Theoretically, my work draws on insights from traditional, psycho-, cognitive, and computational linguistics, and also theoretical tools from other psychological domains, in particular rational models of perception and control. Empirically, I use a wide variety of methods, including both designed experiments and corpus studies of eye-tracking, self-paced reading, cloze tasks, and EEG/ERP/rERP.

Ethan Wilcox (now Postdoc, Machine Learning Institute, ETH Zürich)

I am a computational psycholinguist. I use tools from computer science to build models of language processing and language acquisition. I am particularly interested in how people process language as they read, and how they make inferences about language structure during language learning.

Meilin Zhan (now Data Scientist, AirBnB)

My research seeks to understand the cognitive underpinning of the production and comprehension of natural language. Speakers often face choices as to how to structure their intended message into an utterance. When multiple options are available, what general principles govern speaker choice? What inferences do comprehenders make about why something was said in a particular way? To answer these questions, I combine analysis of naturalistic language datasets, psycholinguistic experiments, and computational modeling. (Meilin was the first CPL student to graduate at MIT!)


Masters Alumni

Anna Sinelnikova

I am passionate about building software that can help research in traditionally less computationally intensive fields. My project is about understanding the kinds of contextual cues available to children that help them resolve ambiguous language. Currently, I am pursuing an MEng degree in EECS.


Research Associate Alumni

Tristan Thrush (now PhD Student, Stanford Computer Science)

In order to understand human intelligence, we need to understand how we can learn a mapping from language to meaning. Particularly, how do we come to associate language descriptions with relations and objects in a grounded environment, such as the real world? How can we use existing knowledge to infer the meanings of descriptions that are not easily exemplified? My approach is to construct computational models that learn this mapping as humans do. You can look at some of my work on my website.


Undergraduate Alumni

Chelsea Ajunwa (now PhD student, Northeastern Psychology)

Chelsea worked with Veronica Boyce on an experimental psycholinguistics project studying human sentence processing using A-Maze.

Suhas Arehalli (now Assistant Professor of Computer Science, Macalester College)

Suhas worked with Eva Wittenberg on event representation and lexical semantics.

Veronica Boyce (now PhD student, Stanford Psychology)

I'm interested in how language use shapes human interaction and influences our thoughts and beliefs. One of my research projects looks at how the gender information conveyed by pronouns seems to introduce biases between production and comprehension.

Wednesday Bushong (now Assistant Professor of Psychology, University of Hartford)

Wednesday worked with Emily Morgan and Roger Levy on the processing of multiword expressions. She is interested in how people strategically make use of their probabilistic knowledge during language processing.

Curtis Chen (now PhD student, University of Edinburgh)

Curtis worked with Helena Aparicio on modeling and experimental approaches to how humans represent and reason about gradable adjectives.

Robert Chen

Robert worked with Tiwa and CJ on gamifying the collection of cloze completions to provide training data for cognitively plausible AI.

Jamie Fu

Jamie worked with Yevgeni Berzak to better understand language processing through the use of magnetoencephalography (MEG) neurosignals and eyetracking.

Vineet Gangireddy (now Citadel)

Vineet was an undergraduate (and concurrent Master's student) in Applied Mathematics at Harvard. He worked with Tiwa and Yoon Kim on interpreting language models as implicit parsers and developing psycholinguistically plausible attention strategies for transformers.

Siyi Lin

Siyi worked with Yevgeni Berzak on a project using eye-tracking experiments to correlate language comprehension and eye movements.

CJ Quines

CJ worked with Tiwa Eisape to tackle some of the challenges involved with collecting cloze completions at scale.

Jason Madeano

Jason worked with MH Tessler on iterated transmission experiments to study how people use language to share their thoughts. He also worked on probing off-the-shelf word embeddings to see how they can be used to differentiate between semantic relations.

Pranali Vani

Pranali worked with Ethan Wilcox on comparing human processing of language against the performance of NLP models.

Melodie Yen (now Neuroscience, UCLA)

Melodie worked with Emily Morgan on the processing of multiword expressions.

Irene Zhou (now PhD student, Yale Psychology)

Irene worked with Noga Zaslavsky and Jennifer Hu to explore how humans resolve ambiguity in communication using models of pragmatic reasoning.


All alumni at a glance

PhD students: Klinton Bicknell, Rebecca Colavin, Gabriel Doyle, Anubha Kothari, Emily Morgan, Bozena Pajak, Y. Albert Park, Till Poppels, Nathaniel Smith, Meilin Zhan, Peng Qian, Ethan Wilcox

Masters students: Anna Sinelnikova

Undergraduates: Suhas Arehalli, K. Michael Brooks, Wednesday Bushong, Hannah Campbell, Bonnie Chinh, Abhishek Goyal, Jake Prasad, Agatha Ventura, Melodie Yen, Silvia Cho, Karen Gu, Brin Harper, Jiaxing Liu, Katherine Liu, Erin Shin, Arun Wongprommoon, Beining Jenny Zhang

Postdocs: Helena Aparicio, Richard Futrell, Victoria Fossum, Titus von der Malsburg, Eva Wittenberg, Michael Henry Tessler

Research Associates: Veronica Boyce, Tristan Thrush

Visitors: Fuyun Wu, Kasia Hitczenko, Kentaro Nakatani, Yanan Sheng, Reuben Cohn-Gordon, Aixiu An, Polina Tsvilodub