{"id":1623,"date":"2018-07-30T09:07:30","date_gmt":"2018-07-29T23:07:30","guid":{"rendered":"https:\/\/www.cognav.net\/?p=1623"},"modified":"2018-07-30T09:10:51","modified_gmt":"2018-07-29T23:10:51","slug":"how-landmark-and-self-motion-cues-combine-during-navigation-to-generate-spatial-representations","status":"publish","type":"post","link":"https:\/\/braininspirednavigation.com\/?p=1623","title":{"rendered":"How landmark and self-motion cues combine during navigation to generate spatial representations?"},"content":{"rendered":"<p style=\"text-align: justify;\">The excerpt note is about how combine landmark and self-motion cues for navigation from Campbell et al., 2018.<\/p>\n<p style=\"text-align: justify;\">Campbell, Malcolm G., Samuel A. Ocko, Caitlin S. Mallory, Isabel I. C. Low, Surya Ganguli &amp; Lisa M. Giocomo. <a href=\"https:\/\/www.nature.com\/articles\/s41593-018-0189-y\"><strong>Principles governing the integration of landmark and self-motion cues in entorhinal cortical codes for navigation<\/strong><\/a>. Nature Neuroscience, volume 21, pages1096\u20131106 (2018).<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: red;\">To navigate, the brain combines self-motion information with sensory landmarks to form a position estimate. <\/span>The neural substrates thought to support such position coding include functionally defined medial entorhinal cortex (MEC) cell types, namely <span style=\"color: red;\">grid cells, head direction cells, border cells, and speed cells<\/span>. Together, these neurons generate an internal map of space, with <span style=\"color: red;\">their codes emerging from interactions between self-motion cues, such as locomotion and optic flow, and sensory cues from environmental landmarks. <\/span><\/p>\n<p style=\"text-align: justify;\">However, the principles by which MEC cells integrate self-motion versus landmark cues remain incompletely understood. 
How multisensory self-motion cues combine to drive MEC speed cells remains equally unknown. In addition, while previous work often ascribes the neural basis of path integration to functionally defined MEC cell types, the degree to which behaviourally measured path integration position estimates and MEC neural codes follow the same cue-combination principles remains unclear.<\/p>\n<p style=\"text-align: justify;\">Here, the authors <span style=\"color: red;\">examine the principles by which both mouse behaviour and MEC cell classes integrate self-motion with visual landmark cues.<\/span> To do this, they analysed the neural activity and behaviour of mice while they explored virtual reality (VR) environments. <span style=\"color: red;\">By combining these experimental approaches with an attractor-based network model, they propose a framework for understanding how optic flow, locomotion and landmark cues interact to generate MEC firing patterns and behavioural position estimates during navigation.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: red;\"><strong>A coupled-oscillator attractor network model elucidates principles for the integration of landmarks and self-motion. <\/strong><\/span><\/p>\n<p style=\"text-align: justify;\">Combined, their data point to an asymmetry in the integration of locomotion and visual cues by grid and speed cells during gain changes. What underlying principles govern this cue-integration process? Previous work has shown that grid cells rely on self-motion input, which can reflect locomotion and optic flow cues, as well as on an error-correcting signal provided by landmarks. However, gain changes alter the relationship between distance travelled and the locations of landmarks, as well as the relationship between locomotion and optic flow. Therefore, the responses observed in their data likely reflect a complex interaction between the effects gain changes have on these different relationships. 
To better understand these dynamics, <span style=\"color: red;\">they modelled the integration of self-motion with landmark input in a 1D attractor network<\/span> (Fig. 5).<\/p>\n<p style=\"text-align: justify;\">\u00a0<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-1627 aligncenter\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/07\/coupled-oscillator-attractor-network-model-1024x672.png\" alt=\"\" width=\"799\" height=\"524\" srcset=\"https:\/\/braininspirednavigation.com\/wp-content\/uploads\/2018\/07\/coupled-oscillator-attractor-network-model-1024x672.png 1024w, https:\/\/braininspirednavigation.com\/wp-content\/uploads\/2018\/07\/coupled-oscillator-attractor-network-model-150x98.png 150w, https:\/\/braininspirednavigation.com\/wp-content\/uploads\/2018\/07\/coupled-oscillator-attractor-network-model-300x197.png 300w, https:\/\/braininspirednavigation.com\/wp-content\/uploads\/2018\/07\/coupled-oscillator-attractor-network-model-768x504.png 768w, https:\/\/braininspirednavigation.com\/wp-content\/uploads\/2018\/07\/coupled-oscillator-attractor-network-model.png 1272w\" sizes=\"(max-width: 799px) 100vw, 799px\" \/><\/p>\n<p style=\"text-align: center;\">Fig. 5 A coupled-oscillator attractor network model of the integration of landmarks and self-motion input by grid cells. (Campbell et al., 2018)<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: red;\">They added external landmark inputs to standard attractor-based path integration machinery, in which grid cells are modelled as a 1D periodic network of neurons with short-range excitatory and long-range inhibitory synaptic weight profiles.<\/span> In the absence of external input, this neural architecture yields a family of steady-state bump activity patterns, and grid cell responses are generated when the animal&#8217;s velocity is used to drive phase advance in the network. 
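The ring-attractor mechanism described above can be sketched as a minimal illustration (this is not the authors' implementation; the neuron count, Mexican-hat weight profile and all parameters below are assumptions): short-range excitation and broad inhibition settle the network into a stable activity bump, and skewing the recurrent weights in proportion to a velocity signal advances the bump's phase, i.e., performs path integration.

```python
import numpy as np

N = 120
theta = 2 * np.pi * np.arange(N) / N    # preferred phases on the 1D periodic ring

def ring_weights(shift=0.0):
    # short-range excitation minus broad inhibition (Mexican-hat profile);
    # a nonzero shift skews the weights so the activity bump travels
    d = theta[:, None] - theta[None, :] - shift
    return np.exp(3.0 * (np.cos(d) - 1.0)) - 0.5 * np.exp(0.5 * (np.cos(d) - 1.0))

def step(r, velocity, dt=0.5):
    # recurrent drive through weights skewed in proportion to velocity,
    # plus a tonic input, rectification and divisive normalization
    drive = ring_weights(shift=velocity * dt) @ r + 0.1
    r = np.maximum(drive, 0.0)
    return r / (r.sum() + 1e-12)

rng = np.random.default_rng(1)
r = rng.random(N)
r /= r.sum()
for _ in range(100):                     # no self-motion: settle into a stable bump
    r = step(r, velocity=0.0)
bump_before = theta[np.argmax(r)]
for _ in range(50):                      # constant self-motion input advances the bump
    r = step(r, velocity=0.05)
bump_after = theta[np.argmax(r)]
```

Because the skewed-weight update is the unskewed update composed with a rotation of the ring, the bump translates by roughly `velocity * dt` per step, so a constant velocity input moves the position estimate around the ring at a constant rate.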
External landmark inputs drive neuronal activity that changes as a function of the animal&#8217;s position relative to landmark cues and serve to reinforce the phase of the attractor network (Fig. 5 b,c).<\/p>\n<p style=\"text-align: justify;\">In this framework<span style=\"color: red;\">, gain changes correspond to a mismatch between the phase, or position estimate, of the attractor network <\/span>(red arrow, Fig. 5 a,c) <span style=\"color: red;\">and the phase of the landmark input<\/span> (blue arrow, Fig. 5 b,c). In this situation, landmark inputs exert a corrective force on the attractor phase, pulling it toward the landmark phase (Fig. 5d). <span style=\"color: red;\">The dynamics governing this process are analogous to a coupled-oscillator system, in which the two oscillators are grid cells, described by the attractor phase, and landmark inputs, described by the landmark phase. <\/span>Coupled-oscillator systems are well studied in physics and provide a clarifying analogy for the cue-integration process here.<\/p>\n<p style=\"text-align: justify;\">\u00a0<\/p>\n<p style=\"text-align: justify;\">Here they found principled regimes under which behaviourally measured position estimates and MEC codes differentially weight the influence of visual landmark and self-motion cues.<\/p>\n<p style=\"text-align: justify;\">First, <span style=\"color: red;\">they found that conflicts between locomotion and visual cues caused grid cells to remap in an asymmetric manner<\/span>, with gain increases causing phase shifts and gain decreases causing grid scale changes. This asymmetry was mirrored by multiple MEC speed signals.<\/p>\n<p style=\"text-align: justify;\">Second, <span style=\"color: red;\">they developed a coupled-oscillator attractor model that explained how grid responses to gain manipulations could arise from competition between conflicting self-motion and landmark cues<\/span>. 
This model successfully predicted grid responses to an intermediate gain change.<\/p>\n<p style=\"text-align: justify;\">Finally, <span style=\"color: red;\">they used a path integration task to demonstrate a behavioural asymmetry in the weighting of visual versus locomotor cues that matched grid and speed responses<\/span>.<\/p>\n<p style=\"text-align: justify;\">Taken together, these findings provide a framework for understanding the dynamics of cue combination in MEC neural codes and navigational behaviour. <span style=\"color: red;\"><strong>This framework could be useful in interpreting grid cell responses to different environmental geometries, in which distortion, shearing, spatial frequency changes or remapping could reflect competition between landmark and self-motion inputs or context- or experience-dependent changes in these inputs.<\/strong> <\/span><\/p>\n<p style=\"text-align: justify;\">The ability of the path integration system to operate in both subcritical and supercritical regimes likely serves an adaptive purpose during navigation. For example, the subcritical regime is appropriate when landmark input is close enough to path integration to be used for error correction. <span style=\"color: red;\">However, if landmarks change location or become unreliable, creating a large disagreement between landmark input and path integration, the network can enter the supercritical regime and pull free from the influence of landmarks<\/span>. The decoherence threshold could therefore reflect the animal&#8217;s expectations about the reliability of landmark input. <span style=\"color: red;\">This idea that nonlinear cue integration serves an adaptive purpose during navigation <\/span>may be a more general principle of parahippocampal computation. 
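The subcritical and supercritical regimes described above can be illustrated with the standard Adler equation for a driven phase oscillator (a sketch under assumed parameter names, not the authors' model code): the landmark input exerts a corrective force proportional to the sine of the phase mismatch, while the gain manipulation contributes a constant drift. Phase locking occurs when the drift is below the coupling strength, and the decoherence threshold sits at `drift = coupling`.

```python
import numpy as np

def phase_mismatch(drift, coupling, steps=20000, dt=0.01):
    # Adler-type toy model: psi is the mismatch between the attractor phase
    # (path integration) and the landmark phase; landmarks exert a corrective
    # force -coupling*sin(psi), while drift is the constant mismatch velocity
    # introduced by the gain manipulation
    psi = 0.0
    traj = np.empty(steps)
    for t in range(steps):
        psi += dt * (drift - coupling * np.sin(psi))
        traj[t] = psi
    return traj

# subcritical: drift below coupling, landmark input locks the attractor phase
locked = phase_mismatch(drift=0.5, coupling=1.0)
# supercritical: drift above coupling, the attractor pulls free and phase-slips
slipping = phase_mismatch(drift=1.5, coupling=1.0)
```

In the locked (subcritical) case the mismatch settles at `arcsin(drift / coupling)`, a fixed phase offset; in the slipping (supercritical) case the mismatch grows without bound, i.e., the network escapes the influence of the landmarks.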
Recent work used VR gain changes to show that <span style=\"color: red;\">hippocampal place cells integrate visual and locomotor information nonlinearly.<\/span> These data strongly resemble the subcritical regime of their model, raising the possibility that some of the principles they reveal governing the integration of different information sources by both MEC neural codes and behaviour may generalize to other brain regions that support navigation.<\/p>\n<p style=\"text-align: justify;\">\u00a0<\/p>\n<p style=\"text-align: justify;\">For further information, please read the paper, Campbell et al., 2018.<\/p>\n<p style=\"text-align: justify;\">Campbell, Malcolm G., Samuel A. Ocko, Caitlin S. Mallory, Isabel I. C. Low, Surya Ganguli &amp; Lisa M. Giocomo. <a href=\"https:\/\/www.nature.com\/articles\/s41593-018-0189-y\"><strong>Principles governing the integration of landmark and self-motion cues in entorhinal cortical codes for navigation<\/strong><\/a>. Nature Neuroscience, volume 21, pages 1096\u20131106 (2018).<\/p>\n<p style=\"text-align: justify;\">\u00a0<\/p>\n<p style=\"text-align: justify;\">There is some relevant work in robotic navigation that combines visual and self-motion cues, such as RatSLAM. This framework could offer much inspiration for enabling robots to navigate autonomously.<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: red;\"><strong>How can the RatSLAM model be extended to integrate visual and locomotor information nonlinearly in changing environments, inspired by the framework in Campbell et al., 2018? 
<\/strong><\/span><\/p>\n<p style=\"text-align: justify;\">\u00a0<\/p>\n<p style=\"text-align: justify;\">Some related works on RatSLAM are linked below.<\/p>\n<p style=\"text-align: justify;\"><a href=\"https:\/\/www.braininspirednavigation.com\/?p=1166\">How to perform Path Integration in RatSLAM?<\/a><\/p>\n<p style=\"text-align: justify;\"><a href=\"https:\/\/www.braininspirednavigation.com\/?p=1185\">How to represent the robot&#8217;s pose with a rate-coded neural network (CAN) in RatSLAM?<\/a><\/p>\n<p style=\"text-align: justify;\"><a href=\"https:\/\/www.braininspirednavigation.com\/?p=1171\">How does velocity affect the movement of Pose Cell activity in RatSLAM?<\/a><\/p>\n<p style=\"text-align: justify;\"><a href=\"https:\/\/www.braininspirednavigation.com\/?p=1105\">How to update the activity of pose cells in RatSLAM?<\/a><\/p>\n<p style=\"text-align: justify;\"><a href=\"https:\/\/www.braininspirednavigation.com\/?p=1046\">How Self-Motion Updates the Head Direction Cell Attractor?<\/a><\/p>\n<p style=\"text-align: justify;\"><a href=\"How%20to%20perform%20robot%20place%20recognition%20with%20multi-scale,%20multi-sensor%20system%20inspired%20by%20place%20cells?\">How to perform robot place recognition with a multi-scale, multi-sensor system inspired by place cells?<\/a><\/p>\n<p style=\"text-align: justify;\"><a href=\"https:\/\/www.braininspirednavigation.com\/?p=1059\">How to enable robot cognitive mapping inspired by Grid Cells, Head Direction Cells and Speed Cells?<\/a><\/p>\n<p style=\"text-align: justify;\"><a href=\"https:\/\/www.braininspirednavigation.com\/?p=1010\">How to implement internal dynamics of the head direction network in brain-inspired 1D SLAM?<\/a><\/p>\n<p style=\"text-align: justify;\"><a href=\"https:\/\/www.braininspirednavigation.com\/?p=950\">Continuous Attractor Neural Network (CANN) and 1D CANN for Head Direction<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>This note is about the combination of landmark 
and self-motion cues for navigation from Campbell et al., 2018. Campbell, Malcolm G., Samuel A. Ocko, Caitlin S. Mallory, Isabel I. C. Low, Surya Ganguli &amp; Lisa M. Giocomo. Principles governing the integration of landmark and self-motion cues in entorhinal cortical codes for navigation. Nature Neuroscience, volume [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[114,96,257,346],"tags":[105,251,249,391,85,115,392,375],"_links":{"self":[{"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/posts\/1623"}],"collection":[{"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1623"}],"version-history":[{"count":4,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/posts\/1623\/revisions"}],"predecessor-version":[{"id":1629,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/posts\/1623\/revisions\/1629"}],"wp:attachment":[{"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1623"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1623"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1623"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}