{"id":1729,"date":"2019-02-10T14:16:06","date_gmt":"2019-02-10T04:16:06","guid":{"rendered":"https:\/\/www.cognav.net\/?p=1729"},"modified":"2019-02-10T14:19:06","modified_gmt":"2019-02-10T04:19:06","slug":"how-a-simple-robotics-model-of-mammal-navigation-is-useful-to-interpret-neurobiological-recordings","status":"publish","type":"post","link":"https:\/\/braininspirednavigation.com\/?p=1729","title":{"rendered":"How a simple robotics model of mammal navigation is useful to interpret neurobiological recordings"},"content":{"rendered":"<p style=\"text-align: justify;\">Place recognition is a complex process involving idiothetic and allothetic information. In mammals, evidence suggests that visual information stemming from the temporal and parietal cortical areas (\u2018what\u2019 and \u2018where\u2019 information) is merged at the level of the entorhinal cortex (EC) to build a compact code of a place.Local views extracted from specific feature points can provide information important for view cells (in primates) and place cells (in rodents) even when the environment changes dramatically. 
Robotics experiments using conjunctive cells that merge \u2018what\u2019 and \u2018where\u2019 information related to different local views demonstrate their important role in obtaining place cells with strong generalization capabilities.<\/p>\n<p><a href=\"http:\/\/jeb.biologists.org\/content\/jexbio\/222\/Suppl_1\/jeb186932.full.pdf\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-1730\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2019\/02\/Visual-place-cell.png\" alt=\"\" width=\"949\" height=\"673\" srcset=\"https:\/\/braininspirednavigation.com\/wp-content\/uploads\/2019\/02\/Visual-place-cell.png 949w, https:\/\/braininspirednavigation.com\/wp-content\/uploads\/2019\/02\/Visual-place-cell-150x106.png 150w, https:\/\/braininspirednavigation.com\/wp-content\/uploads\/2019\/02\/Visual-place-cell-300x213.png 300w, https:\/\/braininspirednavigation.com\/wp-content\/uploads\/2019\/02\/Visual-place-cell-768x545.png 768w\" sizes=\"(max-width: 949px) 100vw, 949px\" \/><\/a><\/p>\n<p style=\"text-align: center;\">Fig. Visual place cell from the merging of \u2018what\u2019 and \u2018where\u2019 information. (Figure from Gaussier et al. 2019.)<\/p>\n<p style=\"text-align: justify;\">In Gaussier et al. (2019), the authors show how a simple robotics model of mammal navigation is useful for interpreting neurobiological recordings. They question the current models of the dorsomedial entorhinal cortex (dMEC) as a path integrator. Instead, they propose that the EC is a generic merging tool that builds a compact representation of cortical activity. They summarize experiments and simulations showing that grid cells related to path integration (PI) could be explained as a modulo projection of cortical activity computed in the retrosplenial cortex (RSC), where PI could take place. 
Furthermore, they suggest that the visual grid cells recorded in the human EC could also be explained by the same mechanism.<\/p>\n<p style=\"text-align: justify;\">For further information, please read the paper: Gaussier et al. 2019.<\/p>\n<p style=\"text-align: justify;\">Philippe Gaussier, Jean Paul Banquet, Nicolas Cuperlier, Mathias Quoy, Lise Aubin, Pierre-Yves Jacob, Francesca Sargolini, Etienne Save, Jeffrey L. Krichmar, Bruno Poucet. <a href=\"http:\/\/jeb.biologists.org\/content\/jexbio\/222\/Suppl_1\/jeb186932.full.pdf\"><strong>Merging information in the entorhinal cortex: what can we learn from robotics experiments and modeling?<\/strong><\/a> Journal of Experimental Biology 2019, 222: jeb186932. doi: 10.1242\/jeb.186932. Published 6 February 2019.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Place recognition is a complex process involving idiothetic and allothetic information. In mammals, evidence suggests that visual information stemming from the temporal and parietal cortical areas (\u2018what\u2019 and \u2018where\u2019 information) is merged at the level of the entorhinal cortex (EC) to build a compact code of a place. Local views extracted from specific feature points can 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[126,114,96,346],"tags":[105,100],"_links":{"self":[{"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/posts\/1729"}],"collection":[{"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1729"}],"version-history":[{"count":2,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/posts\/1729\/revisions"}],"predecessor-version":[{"id":1732,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/posts\/1729\/revisions\/1732"}],"wp:attachment":[{"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1729"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1729"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1729"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}