October 19, 2022

November 15th, 10:30–11:30am | UW ECE 123

Abstract: Much of my work has dealt with human-robot interaction by pretending that people are like robots: assuming they optimize for utility and run Bayes filters to maintain estimates over what they can’t directly observe. It’s somewhat surprising that this approach works at all, given that behavioral economics has long warned us that people are a bag of heuristics and cognitive biases, a far cry from “rational” robot behavior. On the other hand, treating people as black boxes and throwing a lot of data at the problem leads to models that are themselves a bag of spurious correlations: they produce amazingly accurate predictions in distribution but fail spectacularly outside of it. This has left me with the question: how do we get accurate yet robust human models? One idea I want to share in this talk is that many aspects of human behavior that seem arbitrary, inconsistent, and time-varying might actually be explained by acknowledging that people make decisions using inaccurate estimates that evolve over time. This is far from a perfect model, but it greatly expands the space of useful models for robots and AI agents more broadly.
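
To make the idea concrete, here is a minimal, hypothetical sketch (not taken from the talk) of what such a model could look like: a simulated human who chooses actions noisily-rationally (softmax over expected utility), but computes that expected utility under an internal belief that starts out inaccurate and is only slowly corrected by noisy observations. The names (`utility`, `boltzmann_action`), the rationality coefficient `beta`, and the 0.7 observation-noise level are all illustrative assumptions, not anything specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent world state: which of two options (0 or 1) is actually good.
true_state = 1

def utility(action, state):
    # Illustrative utility: reward for picking the option matching the state.
    return 1.0 if action == state else 0.0

# The human's internal belief over the latent state, initialized
# inaccurately: they lean toward the wrong hypothesis at first.
belief = np.array([0.8, 0.2])

def boltzmann_action(belief, beta=3.0):
    # Noisily-rational choice: softmax over expected utility computed
    # under the human's (possibly wrong) belief, not the true state.
    eu = np.array([sum(belief[s] * utility(a, s) for s in (0, 1))
                   for a in (0, 1)])
    p = np.exp(beta * eu)
    p /= p.sum()
    return rng.choice(2, p=p)

for t in range(10):
    a = boltzmann_action(belief)
    print(f"t={t}: belief={belief.round(2)}, action={a}")
    # Noisy observation of the true state; a Bayes update slowly corrects
    # the internal estimate, so behavior shifts over time even though the
    # world and the utility function never change.
    obs = true_state if rng.random() < 0.7 else 1 - true_state
    likelihood = np.array([0.7 if s == obs else 0.3 for s in (0, 1)])
    belief = likelihood * belief
    belief /= belief.sum()
```

Under this sketch, early actions look “irrational” and later ones look rational, even though the decision rule never changes; what changes is the accuracy of the internal estimate. That is the flavor of explanation the abstract gestures at for behavior that otherwise seems arbitrary, inconsistent, and time-varying.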