Presented by: Dr. Ben Zhou, Assistant Professor of Computer Science, Arizona State University
Sponsored By: The Department of Computer Science
Abstract
Human reasoning comes from trial and error at an abstract level: we learn from past experiences, conceptualize atomic operations and notions, and recompose them in new situations and scenarios. Such abstraction and conceptualization are the key sources of generalization. By contrast, recent successes in machine reasoning come primarily from end-to-end training (e.g., next-word prediction and task-specific supervision), where models tend to rely more on memorization than generalization. These observations motivate us to encourage models to perform high-level abstraction during learning. In this talk, I will discuss my recent work on methods that force models to abstract by introducing artificial bottlenecks: we deliberately remove essential information from the reasoning process so that models must reason abstractly, much as computer programs conceptualize input values through variables and operations. This line of work points to promising future research directions in model generalization and trustworthiness.
Bio
Ben Zhou is an assistant professor in the School of Computing and Augmented Intelligence at Arizona State University. Ben's research uses data and symbolic cognitive processes to improve model reasoning, controllability, and trustworthiness, from the perspectives of both learning/inference schemes and model architectures. Ben obtained his Ph.D. from the University of Pennsylvania. He is a recipient of the ENIAC fellowship from the University of Pennsylvania and a finalist for the CRA Outstanding Undergraduate Researcher Award. Additional information is available at http://xuanyu.me/.