Sixing Liu, Ming Liu, Yan Chai, Shuang Li, H. Miao
{"title":"基于改进YOLOv5s和深度相机的辣椒采摘识别与定位","authors":"Sixing Liu, Ming Liu, Yan Chai, Shuang Li, H. Miao","doi":"10.13031/aea.15347","DOIUrl":null,"url":null,"abstract":"HighlightsAn improved YOLOv5s deep learning model was used to identify peppers in complex background.The deep-level features on 3D (O-XYZ) coordinate of peppers were extracted using RealSense depth camera.An image database set of pepper in different scenes was established.A pepper recognition and location system were constructed based on improved YOLOv5s network.The proposed method achieved a mean average precision of 95.6% and minimum depth error of 0.001 m.Abstract. In order to investigate the impact of different scenes on the recognition performance and obtain the location information of picking targets, the recognition and location system based on improved YOLOv5s network and RealSense depth camera was constructed in this study. An image database in different scenes was established including light intensity, occlusion and overlap degree of pepper. An improved YOLOv5s deep learning model with bidirectional feature pyramid network (BiFPN) was used for the deep feature extraction and high-precision detection of pepper, and the effects of different scenes on recognition accuracy of the model were studied. The results showed that mean average precision (mAP) of YOLOv5s model reached 0.956, which was respectively 6.1%, 9.3%, 44.4%, and 8.2% higher than that of YOLOv4, YOLOv3, YOLOv2, and Faster R-CNN model. The model had good robustness under daytime and evening scenes with the mAP value higher than 0.9. The detection accuracy of the model in the leaf occlusion scenes was better than that of fruit overlap. The detection error was 0.001m which could not affect the picking positioning precision when the Z value of three-dimensional coordinates (O-XYZ) of pepper was 0.2 m. The improved algorithm can accurately recognize and extract three-dimensional coordinates of pepper, which reduces the calculations by eliminating lots of duplicate and redundant prediction boxes and provides a reference for trajectory planning of pepper picking operation. Keywords: Different scenes, Pepper recognition and location, Picking operation, YOLOv5s.","PeriodicalId":55501,"journal":{"name":"Applied Engineering in Agriculture","volume":null,"pages":null},"PeriodicalIF":0.8000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Recognition and Location of Pepper Picking Based on Improved YOLOv5s and Depth Camera\",\"authors\":\"Sixing Liu, Ming Liu, Yan Chai, Shuang Li, H. Miao\",\"doi\":\"10.13031/aea.15347\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"HighlightsAn improved YOLOv5s deep learning model was used to identify peppers in complex background.The deep-level features on 3D (O-XYZ) coordinate of peppers were extracted using RealSense depth camera.An image database set of pepper in different scenes was established.A pepper recognition and location system were constructed based on improved YOLOv5s network.The proposed method achieved a mean average precision of 95.6% and minimum depth error of 0.001 m.Abstract. In order to investigate the impact of different scenes on the recognition performance and obtain the location information of picking targets, the recognition and location system based on improved YOLOv5s network and RealSense depth camera was constructed in this study. 
An image database in different scenes was established including light intensity, occlusion and overlap degree of pepper. An improved YOLOv5s deep learning model with bidirectional feature pyramid network (BiFPN) was used for the deep feature extraction and high-precision detection of pepper, and the effects of different scenes on recognition accuracy of the model were studied. The results showed that mean average precision (mAP) of YOLOv5s model reached 0.956, which was respectively 6.1%, 9.3%, 44.4%, and 8.2% higher than that of YOLOv4, YOLOv3, YOLOv2, and Faster R-CNN model. The model had good robustness under daytime and evening scenes with the mAP value higher than 0.9. The detection accuracy of the model in the leaf occlusion scenes was better than that of fruit overlap. The detection error was 0.001m which could not affect the picking positioning precision when the Z value of three-dimensional coordinates (O-XYZ) of pepper was 0.2 m. The improved algorithm can accurately recognize and extract three-dimensional coordinates of pepper, which reduces the calculations by eliminating lots of duplicate and redundant prediction boxes and provides a reference for trajectory planning of pepper picking operation. Keywords: Different scenes, Pepper recognition and location, Picking operation, YOLOv5s.\",\"PeriodicalId\":55501,\"journal\":{\"name\":\"Applied Engineering in Agriculture\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.8000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Engineering in Agriculture\",\"FirstCategoryId\":\"97\",\"ListUrlMain\":\"https://doi.org/10.13031/aea.15347\",\"RegionNum\":4,\"RegionCategory\":\"农林科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"AGRICULTURAL ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Engineering in Agriculture","FirstCategoryId":"97","ListUrlMain":"https://doi.org/10.13031/aea.15347","RegionNum":4,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"AGRICULTURAL ENGINEERING","Score":null,"Total":0}
Recognition and Location of Pepper Picking Based on Improved YOLOv5s and Depth Camera
Highlights
- An improved YOLOv5s deep learning model was used to identify peppers against complex backgrounds.
- Depth information and the 3D (O-XYZ) coordinates of peppers were extracted using a RealSense depth camera.
- An image database of peppers in different scenes was established.
- A pepper recognition and location system was constructed based on the improved YOLOv5s network.
- The proposed method achieved a mean average precision of 95.6% and a minimum depth error of 0.001 m.

Abstract. To investigate the impact of different scenes on recognition performance and to obtain the location of picking targets, a recognition and location system based on an improved YOLOv5s network and a RealSense depth camera was constructed in this study. An image database covering different scenes was established, varying light intensity, occlusion, and the degree of pepper overlap. An improved YOLOv5s deep learning model with a bidirectional feature pyramid network (BiFPN) was used for deep feature extraction and high-precision detection of peppers, and the effect of each scene on the recognition accuracy of the model was studied. The results showed that the mean average precision (mAP) of the improved YOLOv5s model reached 0.956, which was 6.1%, 9.3%, 44.4%, and 8.2% higher than that of the YOLOv4, YOLOv3, YOLOv2, and Faster R-CNN models, respectively. The model was robust in both daytime and evening scenes, with mAP values above 0.9. Detection accuracy was better in leaf-occlusion scenes than in fruit-overlap scenes. When the Z value of the pepper's three-dimensional (O-XYZ) coordinates was 0.2 m, the depth error was 0.001 m, which did not affect picking positioning precision. The improved algorithm can accurately recognize peppers and extract their three-dimensional coordinates; it reduces computation by eliminating many duplicate and redundant prediction boxes and provides a reference for trajectory planning of pepper picking operations.

Keywords: Different scenes, Pepper recognition and location, Picking operation, YOLOv5s.
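The abstract attributes part of the accuracy gain to adding a bidirectional feature pyramid network (BiFPN) to the YOLOv5s neck. The sketch below is a minimal PyTorch illustration of BiFPN-style weighted ("fast normalized") feature fusion, not the paper's actual implementation; the module name `WeightedFusion`, the channel count, and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFusion(nn.Module):
    """Fuse N same-shaped feature maps with learnable non-negative weights (BiFPN-style)."""
    def __init__(self, num_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))  # one scalar weight per input
        self.eps = eps

    def forward(self, features):
        w = F.relu(self.weights)          # keep weights non-negative
        w = w / (w.sum() + self.eps)      # fast normalization so weights sum to ~1
        return sum(w[i] * f for i, f in enumerate(features))

# Example: fuse a top-down pathway feature with the backbone feature at the same scale.
fuse = WeightedFusion(num_inputs=2)
p4_backbone = torch.randn(1, 256, 40, 40)   # assumed shape for illustration
p4_topdown = torch.randn(1, 256, 40, 40)
p4_fused = fuse([p4_backbone, p4_topdown])  # shape: (1, 256, 40, 40)
```

In a BiFPN the fused map would then pass through a convolution block before feeding the next top-down or bottom-up node; only the weighted fusion step is shown here.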
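For the location step, the paper reports extracting each pepper's 3D (O-XYZ) coordinates with a RealSense depth camera. Below is a hedged sketch of that stage, assuming the YOLOv5s detector supplies a bounding box in color-image pixels; the box values and stream resolutions are placeholders, and only standard pyrealsense2 calls are used.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)  # align depth pixels to the color image

try:
    frames = align.process(pipeline.wait_for_frames())
    depth_frame = frames.get_depth_frame()

    # Hypothetical bounding box (x1, y1, x2, y2) returned by the detector.
    x1, y1, x2, y2 = 300, 200, 360, 270
    u, v = (x1 + x2) // 2, (y1 + y2) // 2        # box center pixel

    depth_m = depth_frame.get_distance(u, v)     # Z distance in meters at that pixel
    intr = depth_frame.profile.as_video_stream_profile().get_intrinsics()
    X, Y, Z = rs.rs2_deproject_pixel_to_point(intr, [u, v], depth_m)
    print(f"Pepper camera-frame coordinates: ({X:.3f}, {Y:.3f}, {Z:.3f}) m")
finally:
    pipeline.stop()
```

The resulting camera-frame coordinates would still need a hand-eye transform to the picking manipulator's base frame before trajectory planning.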
Journal Introduction:
This peer-reviewed journal publishes applications of engineering and technology research that address problems in agricultural, food, and biological systems. Submissions must include results of practical experiences, tests, or trials presented in a manner and style that will allow easy adaptation by others; results of reviews or studies of installations or applications with substantially new or significant information not readily available in other refereed publications; or a description of successful methods or techniques of education, outreach, or technology transfer.