{"id":1892,"date":"2019-05-12T22:18:49","date_gmt":"2019-05-12T12:18:49","guid":{"rendered":"https:\/\/www.cognav.net\/?p=1892"},"modified":"2019-05-12T22:18:49","modified_gmt":"2019-05-12T12:18:49","slug":"neuromorphic-perception-event-based-vision","status":"publish","type":"post","link":"https:\/\/braininspirednavigation.com\/?p=1892","title":{"rendered":"Neuromorphic Perception: Event-based Vision"},"content":{"rendered":"<p style=\"text-align: justify;\">Guillermo Gallego, Tobi Delbruck, Garrick Orchard, Chiara Bartolozzi, Brian Taba, Andrea Censi,\u00a0Stefan Leutenegger, Andrew Davison, Jorg Conradt, Kostas Daniilidis, Davide Scaramuzza.\u00a0<a href=\"https:\/\/arxiv.org\/abs\/1904.08405\"><strong>Event-based Vision: A Survey<\/strong><\/a>.\u00a0arXiv preprint arXiv:1904.08405 (2019).<\/p>\n<p style=\"text-align: justify;\">Abstract:\u00a0<strong><span style=\"color: #ff0000;\">Event cameras are bio-inspired sensors that work radically differently from traditional cameras. Instead of capturing images at a fixed rate, they measure per-pixel brightness changes asynchronously.<\/span><\/strong> This results in <span style=\"color: #ff0000;\"><strong>a stream of events<\/strong><\/span>, which <span style=\"color: #ff0000;\"><strong>encode the time, location and sign of the brightness changes<\/strong><\/span>.<\/p>\n<p style=\"text-align: justify;\">Event cameras possess outstanding properties compared to traditional cameras: <span style=\"color: #ff0000;\"><strong>very high dynamic range (140 dB vs. 60 dB), high temporal resolution (on the order of microseconds), low power consumption, and immunity to motion blur.<\/strong><\/span> Hence, event cameras hold great potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as <span style=\"color: #ff0000;\"><strong>high-speed and high-dynamic-range settings<\/strong><\/span>. 
However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential.<\/p>\n<p style=\"text-align: justify;\"><strong><span style=\"color: #ff0000;\">This paper provides a comprehensive overview of the emerging field of event-based vision<\/span><\/strong>, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. The authors <span style=\"color: #ff0000;\"><strong>present event cameras from their working principle, through the sensors currently available, to the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition).<\/strong><\/span> They also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as those running spiking neural networks. Additionally, <strong><span style=\"color: #ff0000;\">they highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.<\/span><\/strong><\/p>\n<p>For further information, please visit the <a href=\"http:\/\/rpg.ifi.uzh.ch\/research_dvs.html\">RPG group<\/a> and read the paper Gallego et al. 
2019.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Guillermo Gallego, Tobi Delbruck, Garrick Orchard, Chiara Bartolozzi, Brian Taba, Andrea Censi,\u00a0Stefan Leutenegger, Andrew Davison, Jorg Conradt, Kostas Daniilidis, Davide Scaramuzza.\u00a0Event-based Vision: A Survey.\u00a0arXiv preprint arXiv:1904.08405 (2019). Abstract:\u00a0Event cameras are bio-inspired sensors that work radically different from traditional cameras. Instead of capturing images at a fixed rate, they measure per-pixel brightness changes asynchronously. This results [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[479,126,179],"tags":[488,142,111,487,486],"_links":{"self":[{"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/posts\/1892"}],"collection":[{"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1892"}],"version-history":[{"count":1,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/posts\/1892\/revisions"}],"predecessor-version":[{"id":1893,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2
\/posts\/1892\/revisions\/1893"}],"wp:attachment":[{"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1892"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1892"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1892"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}