{"title":"使用深度学习的水田对象vs基于像素的水旱检测","authors":"Aakash Thapa, B. Neupane, T. Horanont","doi":"10.1109/IIAIAAI55812.2022.00095","DOIUrl":null,"url":null,"abstract":"Disasters like flood and drought in paddy fields create unprecedented issues for farmers and a country’s economy. Some countries compensate these farmers based on the validation to the victim’s claims. In this paper, we study two deep learning-based methods that can verify these claims from the geo-tagged photographs sent by the farmers of their farms at the time of the disaster. Moreover, we demonstrate and compare the efficiency of the two methods: pixel-based semantic segmentation using DeepLabv3+ and an object-based scene recognition method using PlacesCNN. Both of the methods are powered by ResNet architecture backbones. Due to the unavailability of existing datasets for agricultural scenes, especially for the paddy farms, we prepare our own training dataset to train the Deeplabv3+ model and use an existing dataset for the PlacesCNN model. We further create a decision-based method framework that allows us to predict flood and drought from several other classes. The DeepLabv3+ and PlacesCNN-based methods achieve an accuracy of 89.09% and 93.64% respectively. Our experiments show that the object-based method is superior to the pixel-based approach in terms of accuracy, data preparation, computational speed and expense.","PeriodicalId":156230,"journal":{"name":"2022 12th International Congress on Advanced Applied Informatics (IIAI-AAI)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Object vs Pixel-based Flood/Drought Detection in Paddy Fields using Deep Learning\",\"authors\":\"Aakash Thapa, B. Neupane, T. 
Horanont\",\"doi\":\"10.1109/IIAIAAI55812.2022.00095\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Disasters like flood and drought in paddy fields create unprecedented issues for farmers and a country’s economy. Some countries compensate these farmers based on the validation to the victim’s claims. In this paper, we study two deep learning-based methods that can verify these claims from the geo-tagged photographs sent by the farmers of their farms at the time of the disaster. Moreover, we demonstrate and compare the efficiency of the two methods: pixel-based semantic segmentation using DeepLabv3+ and an object-based scene recognition method using PlacesCNN. Both of the methods are powered by ResNet architecture backbones. Due to the unavailability of existing datasets for agricultural scenes, especially for the paddy farms, we prepare our own training dataset to train the Deeplabv3+ model and use an existing dataset for the PlacesCNN model. We further create a decision-based method framework that allows us to predict flood and drought from several other classes. The DeepLabv3+ and PlacesCNN-based methods achieve an accuracy of 89.09% and 93.64% respectively. 
Our experiments show that the object-based method is superior to the pixel-based approach in terms of accuracy, data preparation, computational speed and expense.\",\"PeriodicalId\":156230,\"journal\":{\"name\":\"2022 12th International Congress on Advanced Applied Informatics (IIAI-AAI)\",\"volume\":\"51 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 12th International Congress on Advanced Applied Informatics (IIAI-AAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IIAIAAI55812.2022.00095\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 12th International Congress on Advanced Applied Informatics (IIAI-AAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IIAIAAI55812.2022.00095","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Object vs Pixel-based Flood/Drought Detection in Paddy Fields using Deep Learning
Disasters like floods and droughts in paddy fields create unprecedented problems for farmers and a country’s economy. Some countries compensate affected farmers after validating the victims’ claims. In this paper, we study two deep learning-based methods that can verify such claims from geo-tagged photographs of their farms that farmers send at the time of the disaster. We demonstrate and compare the efficiency of the two methods: pixel-based semantic segmentation using DeepLabv3+ and object-based scene recognition using PlacesCNN, both built on ResNet backbones. Because no existing dataset covers agricultural scenes, especially paddy farms, we prepare our own training dataset for the DeepLabv3+ model and use an existing dataset for the PlacesCNN model. We further create a decision-based framework that allows us to predict flood and drought among several other classes. The DeepLabv3+- and PlacesCNN-based methods achieve accuracies of 89.09% and 93.64%, respectively. Our experiments show that the object-based method is superior to the pixel-based approach in terms of accuracy, data preparation, computational speed, and cost.
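The abstract mentions a decision-based framework that maps model outputs over several scene classes to a flood/drought verdict. A minimal sketch of how such a rule could look is below; this is not the authors' code, and the class names, threshold, and score format are all illustrative assumptions.

```python
# Hedged sketch of a decision rule on top of scene-recognition scores.
# FLOOD_CLASSES / DROUGHT_CLASSES are assumed labels, not from the paper.
FLOOD_CLASSES = {"flooded_field", "river", "swamp"}
DROUGHT_CLASSES = {"dry_field", "desert", "cracked_soil"}

def decide(scores: dict, threshold: float = 0.5) -> str:
    """Map per-class confidence scores to 'flood', 'drought', or 'other'."""
    top_class = max(scores, key=scores.get)
    if scores[top_class] < threshold:
        return "other"  # too uncertain to support a compensation claim
    if top_class in FLOOD_CLASSES:
        return "flood"
    if top_class in DROUGHT_CLASSES:
        return "drought"
    return "other"
```

In practice the scores would come from the PlacesCNN classifier's softmax output for a farmer's geo-tagged photograph; the thresholding step guards against confident-looking but ambiguous scenes.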