Colin W. Hoy

Asymmetric coding of reward prediction errors in human insula and dorsomedial prefrontal cortex

Abstract:

The signed value and unsigned salience of reward prediction errors (RPEs) are critical to understanding reinforcement learning (RL) and cognitive control. Dorsomedial prefrontal cortex (dMPFC) and insula (INS) are key regions for integrating reward and surprise information, but conflicting evidence for both signed and unsigned activity has led to multiple proposals for the nature of RPE representations in these brain areas. Recently developed RL models allow neurons to respond differently to positive and negative RPEs. Here, we use intracranially recorded high-frequency activity (HFA) to test whether this flexible asymmetric coding strategy captures RPE coding diversity in human INS and dMPFC. At the region level, we found a bias towards positive RPEs in both areas which paralleled behavioral adaptation. At the local level, we found spatially interleaved neural populations responding to unsigned RPE salience and valence-specific positive and negative RPEs. Furthermore, directional connectivity estimates revealed a leading role of INS in communicating positive and unsigned RPEs to dMPFC. These findings support asymmetric coding across distinct but intermingled neural populations as a core principle of RPE processing and inform theories of the role of dMPFC and INS in RL and cognitive control.
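
The asymmetric coding idea can be summarized as a standard RL update with separate gains for positive and negative RPEs; the following is a minimal sketch of such a model, not the paper's exact parameterization:

\delta_t = r_t - V_t, \qquad
V_{t+1} = V_t +
\begin{cases}
\alpha^{+}\,\delta_t, & \delta_t > 0 \\
\alpha^{-}\,\delta_t, & \delta_t \le 0
\end{cases}

Here \alpha^{+} and \alpha^{-} allow a neural population (or a learner's behavior) to weight positive and negative RPEs differently, while unsigned salience corresponds to |\delta_t|.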

Authors:

  • Colin W. Hoy

  • David R. Quiroga-Martinez

  • Eduardo Sandoval

  • David King-Stephens

  • Kenneth D. Laxer

  • Peter Weber

  • Jack J. Lin

  • Robert T. Knight

Date: 2023

DOI: https://doi.org/10.1038/s41467-023-44248-1


Multiple sequential prediction errors during reward processing in the human brain

Summary:

Recent developments in reinforcement learning, cognitive control, and systems neuroscience highlight the complementary roles in learning of valenced reward prediction errors (RPEs) and non-valenced salience prediction errors (PEs) driven by the magnitude of surprise. A core debate in reward learning focuses on whether valenced and non-valenced PEs can be isolated in the human electroencephalogram (EEG). Here, we combine behavioral modeling and single-trial EEG regression to reveal a sequence of valenced and non-valenced PEs in an interval timing task dissociating outcome valence, magnitude, and probability. Multiple regression across temporal, spatial, and frequency dimensions revealed a spatio-temporo-spectral cascade from valenced RPE value, represented by the feedback-related negativity event-related potential (ERP), to non-valenced RPE magnitude and outcome probability effects indexed by subsequent P300 and late frontal positivity ERPs. The results show that learning is supported by a sequence of multiple PEs evident in the human EEG.
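
The single-trial regression approach amounts to modeling each EEG measure (per time point, electrode, and frequency) as a linear function of model-derived PE regressors; a minimal sketch under these assumptions (regressor names are illustrative, not the paper's exact design matrix):

\mathrm{EEG}_i(t, \mathrm{ch}, f) = \beta_0 + \beta_1\,\mathrm{RPE\;value}_i + \beta_2\,|\mathrm{RPE}|_i + \beta_3\,P(\mathrm{outcome})_i + \varepsilon_i

where i indexes trials; the fitted \beta weights across time, channels, and frequencies trace the reported cascade from valenced to non-valenced PE effects.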

Authors:

  • Colin W. Hoy

  • Sheila C. Steiner

  • Robert T. Knight

Date: 2020

DOI: https://doi.org/10.1101/2020.10.20.347740
