GSTDTAP

Browse/search results: 2 items in total, showing 1-2

Clonal analysis of immunodominance and cross-reactivity of the CD4 T cell response to SARS-CoV-2 [Journal article]
Science, 2021
Authors: Jun Siong Low; Daniela Vaqueirinho; Federico Mele; Mathilde Foglierini; Josipa Jerak; Michela Perotti; David Jarrossay; Sandra Jovic; Laurent Perez; Rosalia Cacciatore; Tatiana Terrot; Alessandra Franzetti Pellanda; Maira Biggiogero; Christian Garzoni; Paolo Ferrari; Alessandro Ceschi; Antonio Lanzavecchia; Federica Sallusto; Antonino Cassotta
Views/Downloads: 9/0 | Submitted: 2021/06/24
A distributional code for value in dopamine-based reinforcement learning [Journal article]
NATURE, 2020, 577 (7792): 671-675
Authors: Dabney, Will; Kurth-Nelson, Zeb; Uchida, Naoshige; Starkweather, Clara Kwon; Hassabis, Demis; Munos, Rémi; Botvinick, Matthew
Views/Downloads: 61/0 | Submitted: 2020/07/03

Since its introduction, the reward prediction error theory of dopamine has explained a wealth of empirical phenomena, providing a unifying framework for understanding the representation of reward and value in the brain(1-3). According to the now canonical theory, reward predictions are represented as a single scalar quantity, which supports learning about the expectation, or mean, of stochastic outcomes. Here we propose an account of dopamine-based reinforcement learning inspired by recent artificial intelligence research on distributional reinforcement learning(4-6). We hypothesized that the brain represents possible future rewards not as a single mean, but instead as a probability distribution, effectively representing multiple future outcomes simultaneously and in parallel. This idea implies a set of empirical predictions, which we tested using single-unit recordings from mouse ventral tegmental area. Our findings provide strong evidence for a neural realization of distributional reinforcement learning.
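
The core mechanism behind this hypothesis can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the two-point reward distribution, the channel count, and the learning rate are assumptions chosen for clarity. Each simulated channel scales positive and negative prediction errors asymmetrically, so the population of learned values converges to a range of expectiles that together encode the reward distribution rather than its mean alone.

import numpy as np

rng = np.random.default_rng(0)

def sample_reward():
    # Illustrative stochastic outcome: 0.1 or 1.0 with equal probability
    return 0.1 if rng.random() < 0.5 else 1.0

n_channels = 9                            # parallel value predictors ("channels")
taus = np.linspace(0.1, 0.9, n_channels)  # per-channel asymmetry (degree of optimism)
values = np.zeros(n_channels)             # each channel's reward prediction
lr = 0.05

for _ in range(20000):
    r = sample_reward()
    delta = r - values                    # prediction errors, one per channel
    # Positive errors are scaled by tau, negative errors by (1 - tau), so each
    # channel converges to a different expectile of the reward distribution.
    values += lr * np.where(delta > 0, taus, 1 - taus) * delta

print(np.round(values, 2))  # pessimistic channels settle below the mean (0.55), optimistic ones above it

Reading the converged values jointly recovers information about the whole outcome distribution; a single symmetric learner (tau = 0.5) would recover only the mean, which is the contrast the paper's predictions rest on.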


Analyses of single-cell recordings from mouse ventral tegmental area are consistent with a model of reinforcement learning in which the brain represents possible future rewards not as a single mean of stochastic outcomes, as in the canonical model, but instead as a probability distribution.