Lectures of Karl Friston in Moscow
Karl Friston will give lectures at HSE and MGPPU
Although Karl Friston trained in psychiatry, his revolutionary impact on studies of the brain derives from his inventive use of probability theory to analyse neural imaging data. He invented a technique, statistical parametric mapping or SPM, that is now used universally to look for correspondences in brain activity as measured by magnetic resonance imaging.
He then invented voxel-based morphometry, a sensitive method of measuring the volume of brain structures — one application demonstrated the increased volume of a region underlying spatial memory in London taxi drivers. Karl’s dynamic causal modelling is used to estimate how different cortical regions of the brain influence one another.
Karl’s suggestion that the minimisation of surprise can explain many aspects of action and perception informs his continuing efforts to integrate imaging data with other measures such as electroencephalography. He received a Golden Brain Award from the Minerva Foundation in 2003, and the Weldon Memorial Prize in 2013.
23.09.2019. Venue: MGPPU, Sretenka 29
Lecture 1: Deep temporal models and active inference
Abstract: How do we navigate a deeply structured world? Why are you reading this sentence first - and did you actually look at the fifth word? This lecture offers some answers by appealing to active inference based on deep temporal models. It builds on previous formulations of active inference to simulate behavioural and electrophysiological responses under hierarchical generative models of state transitions. Inverting these models corresponds to sequential inference, such that the state at any hierarchical level entails a sequence of transitions in the level below. The deep temporal aspect of these models means that evidence is accumulated over nested time scales, enabling inferences about narratives (i.e., temporal scenes). The lecture illustrates this behaviour with Bayesian belief updating - and neuronal process theories - to simulate the epistemic foraging seen in reading. Finally, we exploit the deep structure of these models to simulate responses to local (e.g., font type) and global (e.g., semantic) violations, reproducing mismatch negativity and P300 responses respectively.
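The Bayesian belief updating at the heart of this lecture can be sketched in a few lines for the simplest discrete case: a hidden state evolves under a transition model, and each observed outcome is used to update the belief over states. All matrices and numbers below are illustrative assumptions, not taken from the lecture itself.

```python
import numpy as np

# Likelihood P(outcome | state): rows are outcomes, columns are hidden states.
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
# Transition P(state_t | state_{t-1}).
B = np.array([[0.8, 0.3],
              [0.2, 0.7]])

belief = np.array([0.5, 0.5])  # initial (flat) belief over two hidden states

for outcome in [0, 0, 1]:                 # a toy sequence of observed outcomes
    prior = B @ belief                    # predictive prior after a transition
    posterior = A[outcome] * prior        # weight by likelihood of the outcome
    belief = posterior / posterior.sum()  # normalise to a proper distribution

# After two outcomes favouring state 0, the final, contradictory outcome
# pulls the belief back toward state 1.
```

In the hierarchical models discussed in the lecture, a whole sequence of such updates at one level constitutes a single state transition at the level above, which is how evidence accumulates over nested time scales.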
Registration via the link
24.09.2019. Venue: MGPPU, Sretenka 29
Lecture 2: I am therefore I think.
Abstract: This overview of the free energy principle offers an account of embodied exchange with the world that associates conscious operations with actively inferring the causes of our sensations. Its agenda is to link formal (mathematical) descriptions of dynamical systems to a description of perception in terms of beliefs and goals. The argument has two parts: the first calls on the lawful dynamics of any (weakly mixing) ergodic system – from a single-cell organism to a human brain. These lawful dynamics suggest that (internal) states can be interpreted as modelling or predicting the (external) causes of sensory fluctuations. In other words, if a system exists, its internal states must encode probabilistic beliefs about external states. Heuristically, this means that if I exist (am) then I must have beliefs (think). The second part of the argument is that the only tenable belief I can entertain about myself is that I exist. This may seem rather obvious; however, if we associate existing with ergodicity, then (ergodic) systems that exist by predicting external states can only possess prior beliefs that their environment is predictable. It transpires that this is equivalent to believing that the world – and the way it is sampled – will resolve uncertainty about the causes of sensations. We will conclude by looking at the epistemic behaviour that emerges under these beliefs, using simulations of active inference.
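The formal claim behind this abstract - that variational free energy upper-bounds surprise (negative log evidence), with equality when the approximate posterior matches the true posterior - can be checked numerically on a toy two-state model. The distributions below are illustrative assumptions only.

```python
import numpy as np

p_s = np.array([0.7, 0.3])          # prior over two hidden states
p_o_given_s = np.array([0.8, 0.4])  # likelihood of the observed outcome per state
joint = p_o_given_s * p_s           # P(o, s)
evidence = joint.sum()              # P(o), the model evidence
surprise = -np.log(evidence)        # negative log evidence

def free_energy(q):
    # F = E_q[log q(s) - log P(o, s)] = surprise + KL[q || true posterior]
    return np.sum(q * (np.log(q) - np.log(joint)))

q_exact = joint / evidence          # the true posterior P(s | o)
q_other = np.array([0.5, 0.5])      # any other belief

# F equals surprise only at the true posterior; otherwise it exceeds it.
```

Minimising free energy with respect to the belief q therefore implicitly minimises surprise, which is the sense in which existing (persisting in predictable states) entails inference.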
25.09.2019. Venue: MGPPU, Sretenka 29
Lecture 3: Active inference and belief propagation in the brain.
Abstract: This presentation considers deep temporal models in the brain. It builds on previous formulations of active inference to simulate behaviour and electrophysiological responses under deep (hierarchical) generative models of discrete state transitions. The deeply structured temporal aspect of these models means that evidence is accumulated over distinct temporal scales, enabling inferences about narratives (i.e., temporal scenes). We illustrate this behaviour in terms of Bayesian belief updating – and associated neuronal processes – to reproduce the epistemic foraging seen in reading. These simulations reproduce the sort of perisaccadic delay-period activity and local field potentials seen empirically, including evidence accumulation and place cell activity. Finally, we exploit the deep structure of these models to simulate responses to local (e.g., font type) and global (e.g., semantic) violations, reproducing mismatch negativity and P300 responses respectively. These simulations are presented as an example of how to use basic principles to constrain our understanding of system architectures in the brain – and the functional imperatives that may apply to neuronal networks.
26.09.2019. Venue: HSE, Armyansky Pereulok 4
Lecture 4: Active inference and artificial curiosity.
Abstract: This talk offers a formal account of insight and learning in terms of active (Bayesian) inference. It deals with the dual problem of inferring states of the world and learning its statistical structure. In contrast to current trends in machine learning (e.g., deep learning), we focus on how agents learn from a small number of ambiguous outcomes to form insight. I will use simulations of abstract rule-learning and approximate Bayesian inference to show that minimising (expected) free energy leads to active sampling of novel contingencies. This epistemic, curiosity-directed behaviour closes ‘explanatory gaps’ in knowledge about the causal structure of the world, thereby reducing ignorance, in addition to resolving uncertainty about states of the known world. We then move from inference to model selection or structure learning to show how abductive processes emerge when agents test plausible hypotheses about symmetries in their generative models of the world. The ensuing Bayesian model reduction evokes mechanisms associated with sleep and has all the hallmarks of ‘aha moments’.
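The curiosity-directed sampling described above can be illustrated by computing the epistemic value of a policy: the expected information gain about hidden states, i.e. the expected KL divergence between posterior and prior, averaged over the outcomes that policy might yield. The likelihood matrices below are toy assumptions, not from the talk.

```python
import numpy as np

prior = np.array([0.5, 0.5])  # current belief over two hidden states

def epistemic_value(A):
    """Expected KL[posterior || prior] over outcomes under likelihood A."""
    value = 0.0
    for o in range(A.shape[0]):
        p_o = A[o] @ prior                         # probability of outcome o
        post = A[o] * prior / p_o                  # posterior after observing o
        value += p_o * np.sum(post * np.log(post / prior))
    return value

# An informative contingency: the outcome almost reveals the hidden state.
A_informative = np.array([[0.9, 0.1],
                          [0.1, 0.9]])
# An ambiguous contingency: the outcome says nothing about the state.
A_ambiguous = np.array([[0.5, 0.5],
                        [0.5, 0.5]])

# A curious agent minimising expected free energy prefers the policy
# with higher epistemic value - the informative contingency.
```

Ambiguous contingencies carry zero epistemic value (the posterior never moves), which is why agents minimising expected free energy actively seek out novel, uncertainty-resolving observations.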