Our work aims to develop more predictive, interpretable, and empirically grounded theories of human cognition, especially decision making. A central challenge in cognitive science is that traditional studies often capture only limited snapshots of complex human behavior. We address this by combining large-scale behavioral experiments, machine learning, and cognitive modeling to uncover regularities that can be translated into better scientific theories.
Much of our current work focuses on how people make decisions involving risk, uncertainty, strategic interaction, and other complex tradeoffs.
Rather than treating these as isolated domains, we study them as instances of a broader challenge: understanding how human choice depends on context, complexity, and the structure of the environment.
This includes work on risky choice, strategic games, moral decisions, social judgment, and other settings in which classical theories often capture only part of human behavior.
We conduct large-scale studies that capture richer and more varied behavioral patterns, producing widely used benchmark datasets such as Choices13k and CIFAR-10H.
To translate these datasets into scientific explanation, we use machine learning (ML) and artificial intelligence (AI) as tools for analysis, model discovery, and theory building.
This approach has led to improved theories in risky choice, strategic interaction, and moral decision making, among other domains.
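As a concrete, purely illustrative sketch of this workflow: one common pattern is to simulate choices between gambles from a candidate cognitive model and then recover its parameters from behavioral data by maximum likelihood. The gambles, the logit expected-value model, and the grid-search fitting routine below are assumptions made for illustration; they are not our actual models, pipeline, or datasets.

```python
# Hypothetical sketch: fitting a simple choice model to (synthetic) gamble data.
# All gambles, parameter values, and the fitting method are illustrative assumptions.
import math
import random

random.seed(0)

def expected_value(gamble):
    """A gamble is (probability of winning, payoff if win)."""
    p, x = gamble
    return p * x

def choice_prob(gamble_a, gamble_b, beta):
    """Logit probability of choosing gamble A, with sensitivity parameter beta."""
    diff = expected_value(gamble_a) - expected_value(gamble_b)
    return 1.0 / (1.0 + math.exp(-beta * diff))

# Simulate synthetic choices from a known "true" sensitivity parameter.
true_beta = 0.8
problems = [((0.5, 10.0), (0.9, 4.0)),
            ((0.2, 30.0), (1.0, 5.0)),
            ((0.7, 8.0), (0.4, 12.0))]
data = []
for _ in range(2000):
    a, b = random.choice(problems)
    chose_a = random.random() < choice_prob(a, b, true_beta)
    data.append((a, b, chose_a))

def log_likelihood(beta):
    """Log-likelihood of the observed choices under the logit EV model."""
    ll = 0.0
    for a, b, chose_a in data:
        p = choice_prob(a, b, beta)
        ll += math.log(p if chose_a else 1.0 - p)
    return ll

# Maximum-likelihood fit by grid search (sufficient for one free parameter).
betas = [i / 100 for i in range(1, 301)]
best_beta = max(betas, key=log_likelihood)
print(round(best_beta, 2))  # should land near true_beta
```

In practice the same comparison can be run across competing theories (expected value, prospect theory, learned neural-network models), with held-out predictive accuracy used to adjudicate between them.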
We also study the broader computational foundations of cognition such as generalization, categorization, and mental representation.
This work asks how minds represent knowledge about the world, organize experience into useful concepts, and use that knowledge to draw inferences from limited data under uncertainty.
These questions are relevant to any intelligent system and thus connect cognitive science, machine learning, and AI.