{"id":1537,"date":"2018-05-31T16:03:52","date_gmt":"2018-05-31T06:03:52","guid":{"rendered":"https:\/\/www.cognav.net\/?p=1537"},"modified":"2018-05-31T16:03:52","modified_gmt":"2018-05-31T06:03:52","slug":"patch-normalization-for-visual-template-matching-based-on-panoramic-images-in-bio-inspired-navigation-system","status":"publish","type":"post","link":"https:\/\/braininspirednavigation.com\/?p=1537","title":{"rendered":"Patch Normalization for Visual Template Matching based on Panoramic Images in Bio-inspired Navigation System"},"content":{"rendered":"<p style=\"text-align: justify;\"><span style=\"font-family: arial, helvetica, sans-serif; font-size: 12pt;\">The excerpt note is about how to enhance the signal-to-noise ratio by using patch normalization in each image to enhance edge information and eliminate image intensity variation from <span style=\"color: #222222; background-color: white;\">Edward P. et al. IJRR 2016<\/span>.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-family: arial, helvetica, sans-serif; font-size: 12pt;\">Patch normalization is performed by dividing an image into a grid of square patches and for each pixel, subtracting the patch mean and then dividing by the patch standard deviation. 
Finally, a mean difference score, <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal1.png\" alt=\"\" \/>, is computed for each query-database image pair as the <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal2.png\" alt=\"\" \/>-norm, i.e., the Sum of Absolute Differences (SAD) metric:<\/span><\/p>\n<p style=\"text-align: center;\"><span style=\"font-family: arial, helvetica, sans-serif; font-size: 12pt;\"><img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal3.png\" alt=\"\" \/><\/span><\/p>\n<p style=\"text-align: center;\"><span style=\"font-family: arial, helvetica, sans-serif; font-size: 12pt;\"><img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal4.png\" alt=\"\" \/><\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-family: arial, helvetica, sans-serif; font-size: 12pt;\">Where <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal5.png\" alt=\"\" \/> returns a central <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal6.png\" alt=\"\" \/> region of the image, and <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal7.png\" alt=\"\" \/> returns a <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal8.png\" alt=\"\" \/> region of the image, offset by <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal9.png\" alt=\"\" \/> from the center, where <img decoding=\"async\" 
src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal10.png\" alt=\"\" \/> and <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal11.png\" alt=\"\" \/>, and <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal12.png\" alt=\"\" \/> and <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal13.png\" alt=\"\" \/>are the dimensions of the low-resolution, patch-normalized grayscale images. We compare central subregions of <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal14.png\" alt=\"\" \/> and <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal15.png\" alt=\"\" \/> over a range of offsets up to horizontal and vertical maxima (<img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal16.png\" alt=\"\" \/> and <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal17.png\" alt=\"\" \/>, respectively), such that the SAD score of the overlapping region is minimized. 
As each query frame is compared with all database frames using SAD, we form a difference vector, <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal18.png\" alt=\"\" \/>, for each frame.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-family: arial, helvetica, sans-serif; font-size: 12pt;\">Lastly, we modify the raw difference scores by considering that we are searching for coherent sequences of locally best image matches.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-family: arial, helvetica, sans-serif; font-size: 12pt;\">Hence, the local matching contrast is enhanced by normalizing each element <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal19.png\" alt=\"\" \/> in the difference vector <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal20.png\" alt=\"\" \/> within a neighbourhood centered around it:<\/span><\/p>\n<p style=\"text-align: center;\"><span style=\"font-family: arial, helvetica, sans-serif; font-size: 12pt;\"><img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal21.png\" alt=\"\" \/><\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-family: arial, helvetica, sans-serif; font-size: 12pt;\">Where <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal22.png\" alt=\"\" \/> and <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal23.png\" alt=\"\" \/> are the mean and standard deviation of the neighbourhood vector, respectively.<\/span><\/p>\n<p style=\"text-align: center;\"><span style=\"font-family: arial, helvetica, sans-serif; font-size: 12pt;\"><img 
decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal24.png\" alt=\"\" \/><\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-family: arial, helvetica, sans-serif; font-size: 12pt;\">Where <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal25.png\" alt=\"\" \/> is defined by a neighbourhood radius parameter, <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal26.png\" alt=\"\" \/>, and bounded by the length of <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal27.png\" alt=\"\" \/>. This process gives the normalized difference vector, <img decoding=\"async\" src=\"https:\/\/www.braininspirednavigation.com\/wp-content\/uploads\/2018\/05\/053118_0609_PatchNormal28.png\" alt=\"\" \/>.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-family: arial, helvetica, sans-serif; font-size: 12pt;\">References:<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #222222; font-family: Arial; font-size: 12pt; background-color: white;\"><span style=\"font-family: arial, helvetica, sans-serif;\">Pepperell, Edward, Peter Corke, and Michael Milford. &#8220;<a href=\"http:\/\/journals.sagepub.com\/doi\/abs\/10.1177\/0278364915618766\"><strong>Routed roads: Probabilistic vision-based place recognition for changing conditions, split streets and varied viewpoints<\/strong><\/a>.&#8221;\u00a0<em>The International Journal of Robotics Research<\/em>\u00a035, no. 9 (2016): 1057-1179.<\/span> <\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The excerpt note is about how to enhance the signal-to-noise ratio by using patch normalization in each image to enhance edge information and eliminate image intensity variation from Edward P. et al. 
IJRR 2016. Patch normalization is performed by dividing an image into a grid of square patches and for each pixel, subtracting the patch [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[114,96,257,179],"tags":[350,279,352,351],"_links":{"self":[{"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/posts\/1537"}],"collection":[{"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1537"}],"version-history":[{"count":2,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/posts\/1537\/revisions"}],"predecessor-version":[{"id":1539,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=\/wp\/v2\/posts\/1537\/revisions\/1539"}],"wp:attachment":[{"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1537"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1537"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/braininspirednavigation.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1537"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
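For illustration, the neighbourhood-based contrast normalization of the difference vector described in the post can be sketched as follows; the default radius and the zero-variance guard are assumptions, not values from the paper:

```python
import numpy as np

def contrast_normalize(d, radius=10):
    """Normalize each element of difference vector `d` against the mean and
    standard deviation of a neighbourhood centred on it, with the window
    clipped to the bounds of the vector."""
    d = np.asarray(d, dtype=np.float64)
    out = np.empty_like(d)
    for i in range(len(d)):
        lo = max(0, i - radius)
        hi = min(len(d), i + radius + 1)
        nb = d[lo:hi]  # neighbourhood, bounded by the length of d
        std = nb.std()
        # Guard against flat neighbourhoods (zero variance) - an assumption.
        out[i] = (d[i] - nb.mean()) / std if std > 0 else 0.0
    return out
```

After this step, a locally best (lowest) score stands out as a strongly negative value relative to its neighbours, which is what the sequence search exploits.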