Lectures by Karl Friston in Moscow
On September 23-26, HSE University and the Moscow State University of Psychology and Education will host lectures by KARL FRISTON, an outstanding British neuroscientist, Scientific Director of the Functional Imaging Laboratory at University College London, Fellow of the Royal Society (2006), and professor at University College London.
In 2016, Karl Friston was ranked No. 1 on a list of the most influential neuroscientists (h-index of 174 according to Web of Science).
Karl Friston is widely known and recognised in scientific circles for formulating a number of fundamental theoretical concepts, such as the free energy minimisation principle and the Bayesian brain hypothesis. Over the past ten years, Friston has devoted most of his time and energy to developing the idea that he calls “the free energy principle”. Friston believes that this idea describes the organising principle of behaviour, including intelligent behaviour. “If you are alive, what behaviour should you exhibit?” is the question he tries to answer.
09/23/2019. Venue: Moscow State University of Psychology and Education, 29 Sretenka Street
Lecture 1: Deep temporal models and active inference
Abstract: How do we navigate a deeply structured world? Why are you reading this sentence first - and did you actually look at the fifth word? This lecture offers some answers by appealing to active inference based on deep temporal models. It builds on previous formulations of active inference to simulate behavioural and electrophysiological responses under hierarchical generative models of state transitions. Inverting these models corresponds to sequential inference, such that the state at any hierarchical level entails a sequence of transitions in the level below. The deep temporal aspect of these models means that evidence is accumulated over nested time scales, enabling inferences about narratives (i.e., temporal scenes). The lecture illustrates this behaviour with Bayesian belief updating - and neuronal process theories - to simulate the epistemic foraging seen in reading. Finally, we exploit the deep structure of these models to simulate responses to local (e.g., font type) and global (e.g., semantic) violations, reproducing mismatch negativity and P300 responses respectively.
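For readers who would like a concrete feel for the belief updating mentioned above, the following minimal Python sketch (numpy only) runs ordinary Bayesian filtering over discrete hidden states, with a likelihood matrix A and a transition matrix B. The matrices, state labels and observation sequence are illustrative assumptions made for this announcement, not material from the lecture.

import numpy as np

# Illustrative two-state example: the hidden state could be which of two
# interpretations of a word is correct; observations are noisy visual cues.
A = np.array([[0.8, 0.2],   # p(observation | hidden state): likelihood (rows index observations)
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],   # p(next state | current state): transitions (columns index current state)
              [0.1, 0.9]])

belief = np.array([0.5, 0.5])        # flat prior over hidden states
observations = [0, 0, 1, 1]          # a made-up sequence of observation indices

for o in observations:
    belief = B @ belief              # predict: propagate the belief through the transition model
    belief = A[o] * belief           # update: weight each state by the likelihood of the observation
    belief /= belief.sum()           # normalise to obtain the posterior
    print(belief)

This is plain Bayesian filtering rather than the variational message passing used in active inference proper, but it conveys the predict-and-update cycle that the deep temporal models of the lecture nest over several time scales.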
Register for the lecture via the link.
09/24/2019. Venue: Moscow State University of Psychology and Education, 29 Sretenka Street
Lecture 2: I am therefore I think
Abstract: This overview of the free energy principle offers an account of embodied exchange with the world that associates conscious operations with actively inferring the causes of our sensations. Its agenda is to link formal (mathematical) descriptions of dynamical systems to a description of perception in terms of beliefs and goals. The argument has two parts: the first calls on the lawful dynamics of any (weakly mixing) ergodic system – from a single-cell organism to a human brain. These lawful dynamics suggest that (internal) states can be interpreted as modelling or predicting the (external) causes of sensory fluctuations. In other words, if a system exists, its internal states must encode probabilistic beliefs about external states. Heuristically, this means that if I exist (am) then I must have beliefs (think). The second part of the argument is that the only tenable belief I can entertain about myself is that I exist. This may seem rather obvious; however, if we associate existing with ergodicity, then (ergodic) systems that exist by predicting external states can only possess prior beliefs that their environment is predictable. It transpires that this is equivalent to believing that the world – and the way it is sampled – will resolve uncertainty about the causes of sensations. We will conclude by looking at the epistemic behaviour that emerges under these beliefs, using simulations of active inference.
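As a point of reference for the formal part of this argument, the variational free energy that internal states are said to minimise can be written in the standard form below (conventional notation, not taken from the lecture materials):

F = \mathbb{E}_{q(s)}[\ln q(s) - \ln p(o, s)] = D_{KL}[q(s) \,\|\, p(s \mid o)] - \ln p(o)

Here o denotes sensory observations, s their external (hidden) causes, and q(s) the probabilistic beliefs encoded by internal states. Because the divergence term is non-negative, F is an upper bound on surprise (negative log evidence), so a system that minimises F both keeps to predictable states and makes q(s) approximate the posterior over the causes of its sensations; this is the sense in which "if I exist, then I must have beliefs".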
Register for the lecture via the link.
09/25/2019. Venue: Moscow State University of Psychology and Education, 29 Sretenka Street
Lecture 3: Active inference and belief propagation in the brain
Abstract: This presentation considers deep temporal models in the brain. It builds on previous formulations of active inference to simulate behaviour and electrophysiological responses under deep (hierarchical) generative models of discrete state transitions. The deeply structured temporal aspect of these models means that evidence is accumulated over distinct temporal scales, enabling inferences about narratives (i.e., temporal scenes). We illustrate this behaviour in terms of Bayesian belief updating – and associated neuronal processes – to reproduce the epistemic foraging seen in reading. These simulations reproduce the sort of perisaccadic delay-period activity and local field potentials seen empirically, including evidence accumulation and place-cell activity. Finally, we exploit the deep structure of these models to simulate responses to local (e.g., font type) and global (e.g., semantic) violations, reproducing mismatch negativity and P300 responses respectively. These simulations are presented as an example of how to use basic principles to constrain our understanding of system architectures in the brain – and the functional imperatives that may apply to neuronal networks.
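To make the idea of evidence accumulation over distinct temporal scales concrete, here is a second minimal Python sketch in which a slow, higher-level state (for example, the gist of a sentence) sets the transition statistics for a sequence of fast, lower-level states (for example, successive words), and is itself updated only once evidence has been accumulated over the whole sequence. All matrices and the observation sequence are illustrative assumptions, not material from the lecture.

import numpy as np

# Higher (slow) level: two contexts, each governing one lower-level sequence.
context_prior = np.array([0.5, 0.5])
# Each context implies a different transition matrix at the lower (fast) level.
B_low = [np.array([[0.9, 0.1], [0.1, 0.9]]),   # context 0: lower-level states persist
         np.array([[0.1, 0.9], [0.9, 0.1]])]   # context 1: lower-level states alternate
A_low = np.array([[0.8, 0.2],                  # likelihood, shared across contexts
                  [0.2, 0.8]])

def sequence_log_evidence(observations, B):
    """Accumulate log evidence for one lower-level sequence under transition matrix B."""
    belief = np.array([0.5, 0.5])
    log_evidence = 0.0
    for o in observations:
        belief = B @ belief                    # predict the next lower-level state
        joint = A_low[o] * belief              # combine with the likelihood of the observation
        log_evidence += np.log(joint.sum())    # p(o_t | past observations, context)
        belief = joint / joint.sum()           # posterior over the lower-level state
    return log_evidence

obs = [0, 0, 0, 0]                             # a made-up lower-level observation sequence
log_post = np.log(context_prior) + np.array([sequence_log_evidence(obs, B) for B in B_low])
context_posterior = np.exp(log_post - log_post.max())
context_posterior /= context_posterior.sum()
print(context_posterior)                       # updated belief about the slow context

The higher level is updated on the time scale of whole sequences, while the lower level is updated observation by observation; this separation of time scales is what allows inferences about narratives in the sense used above.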
Register for the lecture via the link.
09/26/2019. Venue: HSE University, 4/2 Armenian Lane
Lecture 4: Active inference and artificial curiosity
Abstract: This talk offers a formal account of insight and learning in terms of active (Bayesian) inference. It deals with the dual problem of inferring states of the world and learning its statistical structure. In contrast to current trends in machine learning (e.g., deep learning), we focus on how agents learn from a small number of ambiguous outcomes to form insight. I will use simulations of abstract rule-learning and approximate Bayesian inference to show that minimising (expected) free energy leads to active sampling of novel contingencies. This epistemic, curiosity-directed behaviour closes ‘explanatory gaps’ in knowledge about the causal structure of the world, thereby reducing ignorance, in addition to resolving uncertainty about states of the known world. We then move from inference to model selection or structure learning to show how abductive processes emerge when agents test plausible hypotheses about symmetries in their generative models of the world. The ensuing Bayesian model reduction evokes mechanisms associated with sleep and has all the hallmarks of ‘aha moments’.
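For orientation, the Bayesian model reduction mentioned at the end of the abstract rests on a standard identity: the evidence for a reduced model \tilde{m}, which differs from the full model m only in its priors \tilde{p}(\theta), can be scored from the posterior of the full model without refitting. In the usual variational treatment the exact posterior is replaced by its approximation q(\theta) (conventional notation, not taken from the lecture materials):

\ln p(o \mid \tilde{m}) - \ln p(o \mid m) \approx \ln \mathbb{E}_{q(\theta)}\!\left[ \tilde{p}(\theta) / p(\theta) \right]

Switching to whichever reduced priors increase model evidence prunes redundant parameters after the data have been seen, which is the offline, sleep-like mechanism behind the ‘aha moments’ described above.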
Register for the lecture via the link.
An article about Karl Friston's work: https://habr.com/ru/post/432304/