Jensen KT, Hennequin G* and Mattar MG*
Abstract
When faced with a novel situation, people often spend substantial periods of time contemplating possible futures. For such planning to be rational, the benefits to behavior must compensate for the time spent thinking. Here, we capture these features of behavior by developing a neural network model where planning itself is controlled by the prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences from its own policy, which we call ‘rollouts’. In a spatial navigation task, the agent learns to plan when it is beneficial, which provides a normative explanation for empirical variability in human thinking times. Additionally, the patterns of policy rollouts used by the artificial agent closely resemble patterns of rodent hippocampal replays. Our work provides a theory of how the brain could implement planning through prefrontal–hippocampal interactions, where hippocampal replays are triggered by—and adaptively affect—prefrontal dynamics.
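The core idea of a policy "rollout" — an imagined action sequence sampled from the agent's own policy inside a world model, without acting in the real environment — can be illustrated with a minimal sketch. This is a toy simplification for intuition only, not the paper's recurrent meta-RL architecture; the grid size, the `sample_rollout` helper, and all other names here are hypothetical.

```python
import numpy as np

GRID = 4  # small maze with cyclic (toroidal) boundaries, as in the game below
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """World-model transition: move on the torus (cyclic boundaries)."""
    return ((state[0] + action[0]) % GRID, (state[1] + action[1]) % GRID)

def sample_rollout(policy_logits, state, goal, horizon, rng):
    """Sample an imagined trajectory from the agent's own policy.

    Returns the sampled action sequence and whether the imagined path
    reached the goal within the horizon. No real environment steps occur.
    """
    actions = []
    for _ in range(horizon):
        logits = policy_logits[state]
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        a = rng.choice(len(ACTIONS), p=probs)
        actions.append(a)
        state = step(state, ACTIONS[a])
        if state == goal:
            return actions, True
    return actions, False

rng = np.random.default_rng(0)
# A uniform policy: zero logits over actions at every state.
logits = {(i, j): np.zeros(len(ACTIONS))
          for i in range(GRID) for j in range(GRID)}
seq, hit = sample_rollout(logits, state=(0, 0), goal=(2, 2),
                          horizon=10, rng=rng)
print(len(seq), hit)
```

In the full model, the outcome of such an imagined rollout is fed back into the recurrent network's input, so the agent can adaptively decide whether the time spent "thinking" is worth the expected improvement in behavior.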
Try playing the game ↓ (you will need a keyboard...). You have 20 seconds to explore the maze (which has cyclic boundaries), find the hidden reward, get teleported, and return to the reward as many times as you can. After the 20 seconds, a new environment is generated with a new hidden reward location, and you can do it all again (and again!).
(If pressing the up/down arrows annoyingly also scrolls the page, try holding the Shift key at the same time.)