Latest Publications: International Conference on Image Processing and Intelligent Control

A neural network model for adversarial defense based on deep learning
Zhiying Wang, Yong Wang
International Conference on Image Processing and Intelligent Control · Pub Date: 2023-08-09 · DOI: 10.1117/12.3000789
Abstract: Deep learning has achieved great success in many fields, such as image classification and object detection. Yet adding a small perturbation, hard for the human eye to detect, to an original image can make a neural network output a wrong result with high confidence; such a perturbed image is called an adversarial example. The existence of adversarial examples poses a serious security problem for deep learning. To defend effectively against adversarial attacks, this paper analyzes existing attack and defense methods and proposes a defense based on image reconstruction. The dataset is built from the ImageNet-1k dataset with some filtering and expansion, and four attack modes (FGSM, BIM, DeepFool, and C&W) are selected to test the defense. Building on the EDSR network, a multi-scale feature fusion module and a subspace attention module are added; by capturing the global correlation information of the image, the perturbation can be removed while image texture details are better preserved, improving defense performance. Experimental results show that the proposed method has a good defense effect.
Citations: 0
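Of the four attacks tested above, FGSM is the simplest to illustrate. The sketch below is a minimal, hypothetical demonstration on a toy logistic-regression "network" in pure Python, not the paper's model: it perturbs the input by eps in the sign of the loss gradient, which for this model has the closed form (p - y) * w.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method on a toy logistic-regression model.

    Loss is binary cross-entropy; for this model the gradient of the
    loss w.r.t. the input x is (p - y) * w, where p = sigmoid(w.x + b).
    The adversarial example moves each pixel by eps in the sign of
    that gradient, clipped back into the valid [0, 1] range.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    return [min(1.0, max(0.0, xi + eps * (1 if g > 0 else -1 if g < 0 else 0)))
            for xi, g in zip(x, grad)]
```

For y = 1 the perturbation pushes each pixel against the weight direction, lowering the model's confidence in the true class; real attacks do the same with backpropagated gradients through a deep network.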
Research on optical detection technology for underwater archaeology
Wei Mu, Ruohan Zheng, Wenrui Zhang
International Conference on Image Processing and Intelligent Control · Pub Date: 2023-08-09 · DOI: 10.1117/12.3002208
Abstract: In response to the immaturity of current image processing technology and underwater target recognition algorithms in the field of underwater archaeology, this article applies object detection and underwater image clarification technology to underwater archaeology and proposes a method for detecting and recognizing underwater cultural heritage based on optical devices. The method comprises ocean image preprocessing and underwater cultural heritage object recognition based on YOLO v4. Experiments demonstrate that the proposed method can effectively and accurately detect and recognize targets in underwater cultural heritage scenes, and that the clarified images of underwater relics produced by the preprocessing stage can better assist archaeologists in observing the species and distribution of samples in the real scene.
Citations: 0
Video description method with fusion of instance-aware temporal features
Ju Huang, He Yan, Lingkun Liu, Yuhan Liu
International Conference on Image Processing and Intelligent Control · Pub Date: 2023-08-09 · DOI: 10.1117/12.3000765
Abstract: Video understanding still faces challenges, especially in describing the visual content of videos in natural language. Existing video encoder-decoder models struggle to extract deep semantic information and to understand the complex contextual semantics of a video sequence; furthermore, different visual elements in a video contribute differently to the generated description. This paper proposes a video description method that fuses instance-aware temporal features: local features of instances are extracted along the temporal sequence to enhance the perception of temporal instances, spatial attention performs weighted fusion of the temporal features, and a bidirectional long short-term memory network encodes the contextual semantic information of the video sequence, helping to generate higher-quality descriptive text. Experimental results on two public datasets demonstrate that the method achieves good performance on various evaluation metrics.
Citations: 0
Oilfield water injection surface monitoring system
Shuchen Xing, Nan Song, Xuhui Wen
International Conference on Image Processing and Intelligent Control · Pub Date: 2023-08-09 · DOI: 10.1117/12.3000768
Abstract: Because the oilfield production environment is complex and harsh, oilfield enterprises need a digital, intelligent monitoring system for real-time monitoring and control of water injection. This paper designs a field monitoring system for oilfield water injection based on configuration software. The system uses the general monitoring configuration software of Beijing Force Control Yuantong Technology Co., Ltd., supports modular division of functions, and provides fault alarming, data recording, and query functions. PID control is applied to the system, and its performance is verified by simulation. Verification shows that the system can control the flow stably and intelligently and can ensure long-term effective operation.
Citations: 0
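The abstract names PID control but gives no detail; the following is a generic discrete PID loop in Python, with an assumed first-order plant standing in for the injection-flow dynamics (the gains and the plant model are illustrative, not from the paper).

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / self.dt)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Closed-loop simulation against a first-order stand-in plant:
# flow' = u - flow (purely illustrative dynamics).
pid = PID(kp=1.0, ki=0.2, kd=0.0, dt=0.1)
flow = 0.0
for _ in range(2000):
    u = pid.update(10.0, flow)   # setpoint: 10 units of flow
    flow += (u - flow) * 0.1
```

The integral term removes the steady-state error a proportional-only controller would leave, so after the loop the simulated flow settles at the 10-unit setpoint.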
Research on the improved apple classification method of AlexNet
Huifang Yang, Weihua Wang, Zhicheng Mao
International Conference on Image Processing and Intelligent Control · Pub Date: 2023-08-09 · DOI: 10.1117/12.3000778
Abstract: To address the high cost and low efficiency of manual apple sorting, we propose an improved classification method based on the AlexNet architecture. The algorithm adds a batch normalization layer after each convolutional layer to speed up training and replaces the fully connected layers with a global average pooling layer to reduce the number of trainable parameters and save training time. To improve robustness, the training samples are augmented before validation to obtain an expanded dataset. Experimental results show that the improved network shortens training time by 0.54%, increases testing speed by 2.5%, and improves accuracy by 1.12% compared with the original AlexNet, and that its training time is lower than that of the other networks compared (AlexNet, ResNet50, VGG16). The improved AlexNet can classify apples efficiently and quickly, promoting the automation of apple classification.
Citations: 0
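The swap of fully connected layers for global average pooling described above is easy to picture: each channel's HxW feature map collapses to its mean, so the classifier head carries no FC weight matrix at all. A minimal pure-Python sketch (list-of-lists feature maps, not the actual network tensors):

```python
def global_average_pool(feature_maps):
    """Collapse each HxW feature map to its mean, one value per channel.

    Used in place of AlexNet's fully connected layers: C channels map
    directly to C outputs with zero trainable parameters, versus the
    millions of weights an FC head would need.
    """
    return [sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
            for fmap in feature_maps]
```

With the final convolution arranged to emit one channel per class, the pooled vector feeds a softmax directly, which is the parameter saving the abstract credits for the shorter training time.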
Improved PSO-GA-based LSSVM flight conflict detection model
Qiting Liu, Qi Wang, Yulin Cao, Jinyue Wang
International Conference on Image Processing and Intelligent Control · Pub Date: 2023-08-09 · DOI: 10.1117/12.3000794
Abstract: With the rapid development of the civil aviation industry, air traffic flow is increasing, placing a heavy load on air traffic control, airports, and other units and making the safety of flight activities a research hotspot. Flight conflict detection is a necessary link in ensuring that safety, and growing traffic demands that it run more accurately, efficiently, and stably. Based on the least squares support vector machine (LSSVM), this study uses information provided by ADS-B, such as heading, position, and altitude, combined with operational regulations and conflict protection zones, to classify both the occurrence and the severity of flight conflicts at the same moment, i.e., to perform multi-class classification. A hybrid genetic plus particle swarm optimization algorithm is used to tune the LSSVM, yielding an efficient and accurate real-time flight conflict detection model. Simulation analysis shows that the model is faster and more accurate than a traditional SVM and has excellent conflict detection capability; by distinguishing conflict levels through supervised learning, it can provide accurate warnings of upcoming flight conflicts, drawing the early attention of air traffic controllers and providing a basis for subsequent conflict resolution. Eventually, the model is expected to be compatible with airborne and ground surveillance equipment, which could significantly improve the safety of flight activities; it has broad application prospects and important research value.
Citations: 0
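As a toy illustration of the "conflict protection zone" labels such a classifier is trained on, the function below grades a pair of ADS-B tracks by horizontal and vertical separation. The minima (5 NM, 1000 ft) and the two severity buckets are assumptions for the sketch; the paper derives its labels from actual operational regulations.

```python
import math

def conflict_level(pos1, pos2, horiz_min_nm=5.0, vert_min_ft=1000.0):
    """Classify a pairwise conflict from ADS-B position and altitude.

    pos = (x_nm, y_nm, alt_ft). Two aircraft conflict only when they
    are simultaneously inside each other's horizontal AND vertical
    protection zones; severity grows as the separation shrinks.
    Returns 0 (no conflict), 1 (mild), or 2 (severe).
    """
    dh = math.hypot(pos1[0] - pos2[0], pos1[1] - pos2[1])
    dv = abs(pos1[2] - pos2[2])
    if dh >= horiz_min_nm or dv >= vert_min_ft:
        return 0
    # severity taken from the tighter of the two normalized separations
    severity = 1.0 - min(dh / horiz_min_nm, dv / vert_min_ft)
    return 1 if severity < 0.5 else 2
```

Labels produced this way over recorded traffic would form the multi-class training set the LSSVM learns from.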
3D target detection based on dynamic occlusion processing
Jishen Peng, Jun Ma, Li Li
International Conference on Image Processing and Intelligent Control · Pub Date: 2023-08-09 · DOI: 10.1117/12.3000786
Abstract: To address the mutual occlusion of vehicles encountered by self-driving cars in 3D target detection, this paper proposes a monocular 3D detection method with dynamic occlusion determination. The method adds a dynamic occlusion processing module to the CenterNet3D framework to improve 3D detection accuracy for occluded vehicles on the road. Specifically, the occlusion determination module uses the 2D detection results as the condition for judging occlusion relationships, with an occlusion determination threshold that varies with the depth value. An occlusion compensation module then adjusts the 3D detection results of the occluded vehicles before the final results are output. Experiments show that the method improves the accuracy of both vehicle center-point detection and 3D dimension estimation under long-distance continuous vehicle occlusion, and compared with existing methods it improves the accuracy of 3D and bird's-eye-view detection by 1%-2.64% at an intersection-over-union threshold of 0.5. The method can compensate for occluded vehicles in 3D target detection and improve accuracy.
Citations: 0
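The occlusion determination step can be sketched from its description: compare 2D boxes by intersection-over-union and flag the farther vehicle when the overlap exceeds a depth-dependent threshold. How the paper actually varies the threshold with depth is not given, so the linear schedule below is purely hypothetical:

```python
def iou(a, b):
    """Intersection-over-union of two 2D boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def is_occluded(box_near, box_far, depth_far, base_thresh=0.1, k=0.005):
    """Flag the farther box as occluded when its overlap with a nearer
    box exceeds a threshold that shrinks with depth (hypothetical
    schedule: distant vehicles need less overlap to count as occluded)."""
    thresh = max(0.02, base_thresh - k * depth_far)
    return iou(box_near, box_far) > thresh
```

Boxes flagged this way would then be handed to the compensation module, which adjusts their 3D estimates.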
Research and application of 3D simulation of truck formation based on Unreal Engine
Zhenzhou Wang, Fang Wu, Jiangnan Zhang, Jianguang Wu
International Conference on Image Processing and Intelligent Control · Pub Date: 2023-08-09 · DOI: 10.1117/12.3001392
Abstract: To show the transport conditions of goods on different roads and provide more realistic, three-dimensional transport information for situation-inference users, this paper proposes a PID-controlled 3D simulation method for truck formations based on Unreal Engine. First, drawing on the basic theory of automatic control [1], a longitudinal "lollipop" controller and a transverse PID controller are designed and combined with a perception-decision framework so that a truck drives automatically along a spline line on the road. On this basis, a formation controller implementing a leader-follower strategy achieves truck platooning with a high degree of fidelity. The results show that the PID-controlled truck can accurately follow the road line and that, with the cooperation of the formation controller, the whole process of forming, maintaining, and driving a truck formation can be faithfully reproduced.
Citations: 0
Evaluation of design factors of an interactive interface of intangible cultural heritage APP based on user experience
Chengjun Zhou, Ruowei Li
International Conference on Image Processing and Intelligent Control · Pub Date: 2023-08-09 · DOI: 10.1117/12.3000771
Abstract: Taking the mobile APP interface for intangible cultural heritage as the carrier, this paper evaluates interactive interface design from the perspective of user experience. Using user interviews, observation, and both qualitative and quantitative research, and building on a theoretical model of user experience, the authors collected and analyzed data via interviews and questionnaires to obtain four evaluation indexes and eight sub-criteria for the interactive interfaces of intangible cultural heritage apps. The analytic hierarchy process was introduced for weight calculation: the weight of each evaluation factor was obtained through investigation and computation, and the evaluation level of each element was determined with reference to a Likert scale. Evaluation data for the design scheme were collected by questionnaire, fuzzy analysis was performed on the results, and the final evaluation was obtained according to the maximum-membership principle, yielding implementable suggestions for improving the interactive interface design and the user experience. The research results have theoretical guiding significance for the interactive interface design of intangible cultural heritage apps.
Citations: 0
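The AHP weight calculation mentioned above has a standard computable core: given a pairwise comparison matrix, priority weights can be approximated by the geometric mean of each row, normalized to sum to one. The paper does not publish its comparison matrices, so the 2x2 example here is illustrative only.

```python
def ahp_weights(matrix):
    """Approximate AHP priority weights from a pairwise comparison matrix.

    Uses the geometric-mean (row products) method: the nth root of each
    row's product, normalized so the weights sum to 1. For a perfectly
    consistent matrix this matches the principal-eigenvector weights.
    """
    n = len(matrix)
    geo = []
    for row in matrix:
        p = 1.0
        for v in row:
            p *= v
        geo.append(p ** (1.0 / n))
    total = sum(geo)
    return [g / total for g in geo]
```

For example, a matrix stating "factor A is twice as important as factor B" yields weights of 2/3 and 1/3, which would then feed the Likert-scored fuzzy evaluation step.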
Application of Videolog visualization technology in workover operation
Ying Zhang, Jiatian Zhang, Wenhao Jin
International Conference on Image Processing and Intelligent Control · Pub Date: 2023-08-09 · DOI: 10.1117/12.3001436
Abstract: The actual downhole situation is of great significance for workover operations. Videolog visualization technology can clearly and accurately capture downhole color video and provide effective guidance for workover. This paper introduces the system composition, working principle, and functional parameters of the Videolog equipment and gives an example of its practical application in a workover operation, showing that Videolog visualization technology is more efficient, safe, and intuitive than traditional downhole video technology and has good application prospects in the workover field.
Citations: 0