{"title":"Simulating Quantum Turing Machine in Augmented Reality","authors":"Wanwan Li","doi":"10.1145/3599589.3599606","DOIUrl":"https://doi.org/10.1145/3599589.3599606","url":null,"abstract":"As quantum computing theory attracts growing attention from researchers, visualizing the quantum computing process is necessary for fundamental quantum computing education and research. In particular, connecting traditional computational theory with advanced quantum computing concepts is an extremely important step in learning and understanding quantum computing. In this paper, we propose a practical interactive interface for simulating a Quantum Turing Machine (QTM) in Augmented Reality (AR) that combines the traditional Turing machine computational model with quantum computing simulation. Through this interface, users can describe a QTM with a C-like script and simulate it on an immersive augmented reality platform built on the Vuforia AR engine. After validating the proposed QTM AR simulator through a series of experiments, we show its great potential for quantum computing education through an interactive visualization interface in augmented reality.","PeriodicalId":123753,"journal":{"name":"Proceedings of the 2023 8th International Conference on Multimedia and Image Processing","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126936732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reconstruction of hyperspectral images with compressed sensing based on linear mixing model and affinity propagation clustering algorithm","authors":"Youli zou, Zhi-yun Xiao, Kuntao Ye","doi":"10.1145/3599589.3599602","DOIUrl":"https://doi.org/10.1145/3599589.3599602","url":null,"abstract":"The increasing spatial and spectral resolution of hyperspectral images leads to a significant rise in data volume, which poses a challenge for data storage and transmission. Improving storage and transmission efficiency by enhancing the reconstruction performance of hyperspectral images at low or equal sampling rates is therefore a crucial topic in compressed sensing. Previous research has shown that a linear mixing model combined with distributed compressed sensing outperforms traditional compressed sensing reconstruction algorithms in recovering the original data. However, the low estimation accuracy of both the endmember matrix and the abundance matrix, caused by the random selection of reference bands, limits reconstruction performance. To address this problem, we propose a compressed sensing reconstruction algorithm based on a linear mixing model and the affinity propagation clustering algorithm, which improves reconstruction performance by enhancing the estimation accuracy of the endmember and abundance matrices. During the sampling stage, the affinity propagation clustering algorithm groups the spectral bands according to their spectral correlation, with each clustering center serving as a reference band and the remaining bands as non-reference bands. During the reconstruction stage, the number of endmembers is first estimated from the reference band, the endmember and abundance matrices are then estimated, and finally both are used for reconstruction. Experimental results show that the proposed algorithm achieves higher performance in reconstructing hyperspectral images than the linear mixing model-based distributed compressed sensing method.","PeriodicalId":123753,"journal":{"name":"Proceedings of the 2023 8th International Conference on Multimedia and Image Processing","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125431466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
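A minimal sketch of the band-grouping step described above (our reading of the abstract, not the authors' code): affinity propagation clusters spectral bands by their pairwise correlation, and each exemplar then serves as a reference band. The synthetic cube, the number of bands, and the noise level are all hypothetical.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
# Hypothetical cube: 30 bands, each flattened to 256 pixels, generated from
# 3 underlying spectral signatures so that correlated band groups exist.
signatures = rng.random((3, 256))
membership = rng.integers(0, 3, size=30)
bands = signatures[membership] + 0.05 * rng.normal(size=(30, 256))

# Similarity = inter-band spectral correlation across pixels.
similarity = np.corrcoef(bands)

ap = AffinityPropagation(affinity="precomputed", random_state=0)
labels = ap.fit_predict(similarity)

reference_bands = ap.cluster_centers_indices_  # exemplars act as reference bands
non_reference = [b for b in range(30) if b not in set(reference_bands)]
```

Using a precomputed correlation matrix (rather than Euclidean distance on raw pixels) matches the abstract's emphasis on grouping bands by spectral correlation.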
{"title":"Fast Recognition of Distributed Fiber Optic Vibration Sensing Signal based on Machine Vision in High-speed Railway Security","authors":"Nachuan Yang, Yongjun Zhao, Fuqiang Wang","doi":"10.1145/3599589.3599603","DOIUrl":"https://doi.org/10.1145/3599589.3599603","url":null,"abstract":"Accurate and effective identification of multiple vibration events detected by the phase-sensitive optical time-domain reflectometer (Φ-OTDR) is an effective way to achieve precise alarms. This study proposes a real-time classification method for Φ-OTDR multi-vibration events based on the combination of a convolutional neural network (CNN), a bi-directional long short-term memory network (Bi-LSTM), and connectionist temporal classification (CTC), which can quickly and effectively identify the type and number of vibrations contained in a data image when multiple vibration signals are present, without requiring manual alignment for model training. Noncoherent integration and pulse cancellers are used to process the raw signal and generate spatio-temporal images. The CNN extracts spatial features from the spatio-temporal images, the Bi-LSTM extracts temporal correlation features, and the hybrid features are automatically aligned with the labels by CTC. A dataset of 8,000 vibration images containing 17,589 abnormal vibration events is collected for model training, validation, and testing. Experiments show that the recognition model C3B3 trained with this method achieves 210 FPS and a 99.62% F1 score on the test set. The system can classify multiple vibration targets at the perimeter of high-speed railways in real time and effectively reduce the system's false alarm rate.","PeriodicalId":123753,"journal":{"name":"Proceedings of the 2023 8th International Conference on Multimedia and Image Processing","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126732946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on the Sudden Scientific Public Opinion Theme Map and Science Communication Path Method in the Sina Weibo: Case on \"2022 Nobel Prize in Physics\"","authors":"Xiaojue Huo, Ruxue Fan","doi":"10.1145/3599589.3599608","DOIUrl":"https://doi.org/10.1145/3599589.3599608","url":null,"abstract":"This research takes sudden scientific public opinion events as its object and constructs a public opinion map using LDA, network analysis, and Sankey diagrams. Through topic division, latent semantic association, and semantic flow analysis, the influence of keywords on topics in scientific public opinion is analyzed. Flowing associated words are linked to their propagation sources, and the evolution of science communication paths is displayed using knowledge graph visualization. Optimization strategies are proposed for the semantic ambiguity, displacement of intermediate nodes, and key user nodes discovered in science communication.","PeriodicalId":123753,"journal":{"name":"Proceedings of the 2023 8th International Conference on Multimedia and Image Processing","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133299381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparative Study of Different Ground Objects Classification Based on UAV Orthophoto","authors":"Huijun Han, Zhang Shuo, S. Xie","doi":"10.1145/3599589.3599598","DOIUrl":"https://doi.org/10.1145/3599589.3599598","url":null,"abstract":"UAV technology is characterized by strong environmental adaptability, flexibility, low cost, and high resolution, and has gradually been applied to land use classification in recent years. To explore a fast ground object feature extraction method suitable for high-resolution UAV orthophotos, three commonly used supervised classification methods (maximum likelihood classification, Mahalanobis distance classification, and minimum distance classification) are selected to compare ground object classification in the study area. The results show that the maximum likelihood classification method outperforms the other two and its results are basically consistent with the actual situation: its overall classification accuracy reaches 94.21%, which is 1.93% and 11.61% higher than that of the other two methods, respectively, and its Kappa coefficient reaches 88.29%, which is 4.32% and 14.59% higher than that of the other two methods, respectively. Therefore, when a supervised classification method is selected for UAV orthophoto classification, the maximum likelihood method performs best among the three ground object classification methods and can be given priority for high-resolution UAV orthophoto classification.","PeriodicalId":123753,"journal":{"name":"Proceedings of the 2023 8th International Conference on Multimedia and Image Processing","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127399110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
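Maximum likelihood classification in the remote-sensing sense fits one multivariate Gaussian per class and assigns each pixel to the class with the highest log-likelihood. A minimal sketch on toy 4-band "spectra" (the class names, data, and regularization constant are hypothetical, not from the paper):

```python
import numpy as np

class GaussianMLC:
    """Maximum likelihood classifier: one multivariate Gaussian per class;
    each sample goes to the class with the highest log-likelihood."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            # Small ridge keeps the covariance invertible.
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            self.params_[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            mu, inv_cov, logdet = self.params_[c]
            d = X - mu
            # Log-likelihood up to a constant: -0.5*(logdet + Mahalanobis^2).
            scores.append(-0.5 * (logdet + np.einsum("ij,jk,ik->i", d, inv_cov, d)))
        return self.classes_[np.argmax(scores, axis=0)]

# Toy demo: two well-separated classes of 4-band pixels.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(5, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
acc = (GaussianMLC().fit(X, y).predict(X) == y).mean()
```

Because the per-class covariance enters the score, this reduces to Mahalanobis distance classification when all classes share one covariance, and to minimum distance classification when covariances are identity, which is why the three methods in the study form a natural comparison.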
{"title":"Overlapping Community Discovery Algorithm Based on Seed Node Importance Selection","authors":"Chen Liu, Xiaoyan Zheng, Pengcheng Zhao","doi":"10.1145/3599589.3599607","DOIUrl":"https://doi.org/10.1145/3599589.3599607","url":null,"abstract":"Mining community structure in complex networks is of great theoretical and practical significance to real life. The LFM algorithm, a local expansion optimization algorithm, takes a random approach to selecting seed nodes, which leads to unstable quality of the generated communities. In this paper, node importance is defined as the basis for seed node selection with the help of the density peak clustering idea. Nodes with high importance, together with their neighbors, are selected as seed nodes and expanded; the community attribution of isolated nodes in the network is then calculated; finally, similar communities are merged to obtain the final community structure. Experiments on real datasets and LFR benchmark network datasets show that the proposed algorithm obtains higher-quality community structures.","PeriodicalId":123753,"journal":{"name":"Proceedings of the 2023 8th International Conference on Multimedia and Image Processing","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128631747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
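An illustrative sketch of density-peak-style seed selection (our own toy formulation, not the paper's definition): take density as node degree and separation as the shortest-path distance to the nearest higher-degree node, then rank nodes by density times separation. The graph and the product form of the score are hypothetical.

```python
from collections import deque

# Hypothetical small graph as an adjacency list: two triangles joined by node 3.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}

def bfs_dist(src):
    """Shortest-path (hop) distances from src to every reachable node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

degree = {u: len(vs) for u, vs in adj.items()}
importance = {}
for u in adj:
    d = bfs_dist(u)
    higher = [d[v] for v in adj if degree[v] > degree[u]]
    # Density-peak "delta": distance to nearest denser node, or the graph
    # eccentricity for the densest nodes themselves.
    delta = min(higher) if higher else max(d.values())
    importance[u] = degree[u] * delta

# The highest-importance nodes (with their neighborhoods) would seed expansion.
seeds = sorted(adj, key=importance.get, reverse=True)[:2]
```

On this toy graph the two hub nodes of the triangles come out on top, which is the behavior a density-peak-based seed selection is after: dense nodes that are also far from any denser node.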
{"title":"Application of Deep Learning in Lunar Volcanic Dome Identification","authors":"Chen Sun","doi":"10.1145/3599589.3599597","DOIUrl":"https://doi.org/10.1145/3599589.3599597","url":null,"abstract":"Lunar domes have always been one of the important windows for understanding lunar volcanic activity; however, traditional geological identification methods for domes are expensive, so this study attempts to establish an automatic identification method for lunar volcanic domes. Given that no previous research in this area has attempted to automate the identification of lunar volcanic domes, our team attempted to automate the process for the first time. The researchers first obtained dome coordinates from a list of known lunar domes and extracted the required data at the corresponding coordinates from lunar CCD and DEM images. They then screened the data for samples with clear features and used them to train nine mainstream image recognition models, comparing their accuracy to verify the feasibility of this study. Finally, the researchers computed the mAP and AP (IoU=0.5) of the nine models and found that the best reached 0.64 (mAP) and 0.74 (AP). Therefore, this study concludes that an automated method for identifying lunar volcanic domes should be feasible.","PeriodicalId":123753,"journal":{"name":"Proceedings of the 2023 8th International Conference on Multimedia and Image Processing","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133040199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MFC-Net: A Multiple Feature Complementation Network for Person Re-identification in Aerial Imagery","authors":"Zichen Yin, Dongmei Liu","doi":"10.1145/3599589.3599593","DOIUrl":"https://doi.org/10.1145/3599589.3599593","url":null,"abstract":"Person re-identification on Unmanned Aerial Vehicle (UAV) platforms has received widespread attention, but visual monitoring on UAVs is affected by low resolution, varying angles, and misalignment, which impairs the discriminative ability of the learned representation and brings new challenges to person re-identification tasks. To solve this problem, we propose a Multiple Feature Complementation Network (MFC-Net). MFC-Net consists of two modules: the Parallel Dual Attention Module (PDAM) and the Multilayer Feature Fusion Module (MFFM). The PDAM consists of two attention branches, Multiscale Channel Attention (MCA) and Weighted Positional Attention (WPA), and can effectively perceive regional features and focus on discriminative regions. The MFFM further fuses the two complementary attention features, which effectively alleviates the angle and misalignment problems and improves the accuracy of person re-identification. Compared with existing techniques, MFC-Net performs well in person re-identification on aerial imagery.","PeriodicalId":123753,"journal":{"name":"Proceedings of the 2023 8th International Conference on Multimedia and Image Processing","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125205132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GCA-Net: Global Cross Aware Network for salient object detection","authors":"Zixuan Zhang, Fan Shi, Xinbo Geng, Y. Tao, Jing He","doi":"10.1145/3599589.3599594","DOIUrl":"https://doi.org/10.1145/3599589.3599594","url":null,"abstract":"In recent years, significant progress has been made in salient object detection. Nevertheless, the effective combination of local and global perspectives still needs improvement: combining global perception with local focus can better capture the integrity of salient objects while achieving accurate segmentation. In this paper, we introduce a salient object detection network named GCA-Net, which is built upon three essential components. First, we utilize atrous convolution to effectively aggregate contextual information and refine high-level semantic information via spatial pooling. Second, we employ 1-dim crossover operations to achieve global perception while minimizing computational effort. Last, we integrate a contour-aware loss into the salient object detection task to constrain our model’s predictions, leading to more accurate segmentation. Extensive experiments on established salient object detection benchmark datasets demonstrate the strong performance of our method, which ranks first on both the Sm and MAE metrics.","PeriodicalId":123753,"journal":{"name":"Proceedings of the 2023 8th International Conference on Multimedia and Image Processing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114396541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the 2023 8th International Conference on Multimedia and Image Processing","authors":"","doi":"10.1145/3599589","DOIUrl":"https://doi.org/10.1145/3599589","url":null,"abstract":"","PeriodicalId":123753,"journal":{"name":"Proceedings of the 2023 8th International Conference on Multimedia and Image Processing","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134061465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}