
Neural Systems Underlying Reinforcement Learning


Air date: Monday, September 30, 2019, 12:00:00 PM
Time displayed is Eastern Time, Washington DC Local
Views: 114 total (45 live, 69 on-demand)
Category: Neuroscience
Runtime: 01:01:17
Description: NIH Neuroscience Series Seminar

Dr. Averbeck’s lab studies the neural circuitry that underlies reinforcement learning. Reinforcement learning (RL) is the behavioral process of learning to make advantageous choices. While some preferences are innate, many are learned over time. How do we learn what we like and what we want to avoid? The lab uses a combination of experiments in in-vivo model systems and in human participants (including patients), together with computational modeling. They examine several facets of the learning problem, including learning from gains vs. losses, learning to select rewarding actions vs. learning to select rewarding objects, and the explore-exploit trade-off. The explore-exploit trade-off describes a fundamental problem in learning: when visiting a new city, should you try every restaurant, or sample a small set of them and then return to your favorite several times?
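To make the explore-exploit trade-off concrete, the following minimal Python sketch uses an epsilon-greedy rule: with probability epsilon it explores a random option, and otherwise it exploits the option with the highest current value estimate. The function name, the epsilon value, and the restaurant example are illustrative assumptions for this page, not material from the talk.

    import random

    def choose(values, epsilon=0.1):
        # Explore: with probability epsilon, pick an option at random.
        if random.random() < epsilon:
            return random.randrange(len(values))
        # Exploit: otherwise, pick the option with the highest estimated value.
        return max(range(len(values)), key=lambda i: values[i])

    # Example: estimated values for four restaurants after a few visits.
    restaurant_values = [0.2, 0.8, 0.5, 0.1]
    print(choose(restaurant_values))

A small epsilon favors exploitation; a larger epsilon favors exploration, which matters most early on, when value estimates are still uncertain.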

Standard models of RL assume that dopamine neurons code reward prediction errors (RPEs; the difference between the size of the reward received and the reward that was expected following a choice). These RPEs are then communicated to the basal ganglia, in particular to the striatum, which receives substantial dopamine innervation. This dopamine signal drives learning in frontal-striatal and amygdala-striatal circuits, such that choices that have previously been rewarded lead to larger neural responses in the striatum, and choices that have previously not been rewarded (or have been punished) lead to smaller responses. Thus, the striatal neurons come to represent the values of choices: they signal a high-value choice with higher activity, and this higher activity drives decision processes. These models often mention a potential role for the amygdala without formally incorporating it. They further suggest a general role for the ventral striatum (VS) in representing the values of decisions, whether those decisions are about actions or about objects, and independently of whether values derive from reward magnitude or reward probability.
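The value update at the heart of these standard models can be written as a simple delta rule. The Python sketch below is a generic textbook formulation rather than code from the Averbeck lab; the learning rate alpha and the reward schedule are illustrative assumptions.

    def update_value(value, reward, alpha=0.1):
        # Reward prediction error (RPE): reward received minus reward expected.
        rpe = reward - value
        # Positive RPEs increase the stored value of a choice; negative RPEs decrease it.
        return value + alpha * rpe

    # Example: a choice starts at a neutral value and is rewarded on every trial,
    # so its value estimate climbs toward the reward size of 1.0.
    value = 0.0
    for trial in range(5):
        value = update_value(value, reward=1.0)
        print(f"trial {trial + 1}: value = {value:.3f}")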

In contrast to the standard model, they have recently shown that the amygdala has a larger role in RL than the VS (Costa VD et al., Neuron, 2016). In addition, the role of the VS depends strongly on the reward environment: when rewards are predictable, the VS has almost no role in learning, whereas when rewards are less predictable, the VS plays a larger role. These data outline a more specific role for the VS in RL than current models attribute to it. Given that the VS has been implicated in depression, particularly adolescent depression, this delineation of the contribution of the VS to normal behavior may help inform hypotheses about the mechanisms and circuitry underlying depression.

For more information, go to https://neuroscience.nih.gov/neuroseries
Author: Bruno Averbeck, Ph.D., Lab of Neurophysiology, NIMH, NIH
Download: To download this event, select one of the available bitrates:
[64k]  [150k]  [240k]  [440k]  [740k]  [1040k]  [1240k]  [1440k]  [1840k]    How to download a Videocast
Caption Text: Download Caption File
CIT Live ID: 34803
Permanent link: https://videocast.nih.gov/launch.asp?28759