Robust Re-identification of Manta Rays from Natural Markings by Learning Pose Invariant Embeddings

Olga Moskvyak, F. Maire, A. Armstrong, Feras Dayoub, Mahsa Baktash

2021 Digital Image Computing: Techniques and Applications (DICTA). DOI: 10.1109/DICTA52665.2021.9647359
Visual re-identification of individual animals that bear unique natural body markings is an essential task in wildlife conservation. Photo databases of animal markings grow with each new observation, and identifying an individual means matching against thousands of images. We focus on the re-identification of manta rays because the existing process is time-consuming and only semi-automatic. The current solution, Manta Matcher, requires high-quality images with the pattern of interest in a near-frontal view, which limits the use of photos sourced from citizen scientists. This paper presents a novel application of a deep convolutional neural network (CNN) to visual re-identification based on natural markings. Our contribution is an experimental demonstration of the superiority of CNNs at learning embeddings for patterns under viewpoint changes, evaluated on a novel and challenging dataset. We show that our system handles more variation in viewing angle, occlusion, and illumination than the current solution. Our system achieves a top-10 accuracy of 98% with only two matching examples in the database, which makes it of practical value and ready for adoption by marine biologists. We also evaluate our system on a dataset of humpback whale flukes to demonstrate that the approach is generic and not species-specific.
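To make the retrieval setting concrete: re-identification here means embedding a query photo with a CNN and ranking all database photos by embedding similarity, with top-10 accuracy measuring how often the true individual appears among the ten nearest matches. The sketch below illustrates this pipeline under stated assumptions; the backbone (a ResNet trimmed to a 128-d head), the cosine similarity measure, and all names are illustrative stand-ins, not the authors' exact architecture or training procedure, which the abstract does not specify.

```python
# Minimal sketch of embedding-based re-identification with top-k retrieval.
# Assumptions (not from the paper): ResNet-18 backbone, 128-d embeddings,
# cosine similarity on L2-normalised vectors.
import torch
import torch.nn.functional as F
import torchvision.models as models

# Hypothetical backbone: replace the classifier head with an embedding head.
backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 128)
backbone.eval()

def embed(images: torch.Tensor) -> torch.Tensor:
    """Map a batch of images (N, 3, H, W) to L2-normalised embeddings."""
    with torch.no_grad():
        return F.normalize(backbone(images), dim=1)

def top_k_accuracy(query_emb, query_ids, db_emb, db_ids, k=10):
    """Fraction of queries whose true identity appears among the k
    nearest database embeddings (cosine similarity on unit vectors)."""
    sims = query_emb @ db_emb.T               # (num_queries, db_size)
    topk = sims.topk(k, dim=1).indices        # indices of the k best matches
    hits = (db_ids[topk] == query_ids[:, None]).any(dim=1)
    return hits.float().mean().item()

# Usage with random stand-in data; real inputs would be cropped photos
# of manta ray ventral markings (or humpback whale flukes).
db_emb = embed(torch.randn(200, 3, 224, 224))
db_ids = torch.randint(0, 50, (200,))
q_emb = embed(torch.randn(20, 3, 224, 224))
q_ids = torch.randint(0, 50, (20,))
print(f"top-10 accuracy: {top_k_accuracy(q_emb, q_ids, db_emb, db_ids):.2%}")
```

In a trained system the backbone would be optimised with a metric-learning objective so that photos of the same individual map close together despite pose and illumination changes; with random weights, as here, the numbers are meaningless and the code only demonstrates the retrieval mechanics.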