Inductive biases in theory-based reinforcement learning

Abstract

Inductive biases lie at the core of human learning efficiency. Making assumptions about task structure allows us to better predict task dynamics and direct exploration toward potentially rewarding outcomes. However, most existing work on human inductive biases has studied relatively simple tasks; it is unclear to what extent these biases apply to more realistically complex domains. The current work provides an empirical investigation of human inductive biases in a video game domain of more realistic complexity. In particular, we present a computational model of how high-level inductive biases about video game structure can guide search through a space of object-oriented, relational theories. Finally, we demonstrate that these inductive biases allow our model to better account for human behavior on a series of prediction and planning tasks.