{"title":"Specific emitter identification unaffected by time through adversarial domain adaptation and continual learning","authors":"","doi":"10.1016/j.engappai.2024.109324","DOIUrl":"10.1016/j.engappai.2024.109324","url":null,"abstract":"<div><p>Timely identifying the emitter of target signals is crucial for communication security in complex electromagnetic environments. Specific emitter identification (SEI) is a technique to identify emitters using their hardware fingerprints. This paper addresses the impact of changes in hardware fingerprints over time, which significantly deteriorates identification performance. This issue is addressed from two aspects: one is mitigating the impact of these changes, and the other is tracking and adapting to them. To mitigate the impact, an alternating adversarial domain adaptation (AADA) method is proposed to eliminate the time-varying component in hardware fingerprints. Subsequently, a feature map calculation method using weighted Euclidean distance is designed, preserving the main parameters of the feature maps for each emitter. To track and adapt to changes, a continual learning method is designed based on the feature maps of each emitter. This approach incorporates the selective annotation of unlabeled new data with an iterative optimization training process. To validate the effectiveness of the proposed method, we independently collected comprehensive time-variant datasets as well as simpler datasets with varying receivers and environments. The proposed method was tested on these datasets and compared with existing conventional and advanced methods. The experimental results indicate that the proposed SEI method exhibits superior recognition performance.
Compared to existing methods, it achieved an average recognition accuracy improvement of over 8% on the time-variant dataset, and demonstrated enhanced robustness against these three types of variations.</p></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142239578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
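The weighted-Euclidean feature map construction mentioned in this abstract is described only at a high level. As an illustrative sketch (not the paper's actual algorithm), one can keep the support features closest, in weighted Euclidean distance, to an emitter's feature centroid; the weight vector `w` and the `keep` count here are hypothetical parameters.

```python
import numpy as np

def weighted_euclidean(x, y, w):
    """Weighted Euclidean distance between feature vectors x and y.
    w holds per-dimension weights (e.g. from feature importance);
    the paper's exact weighting scheme is not specified here."""
    return np.sqrt(np.sum(w * (x - y) ** 2))

def emitter_feature_map(features, w, keep=8):
    """Keep the `keep` feature vectors closest (in weighted distance)
    to the centroid of one emitter's features -- a rough stand-in for
    preserving the main parameters of that emitter's feature map."""
    centroid = features.mean(axis=0)
    d = np.array([weighted_euclidean(f, centroid, w) for f in features])
    idx = np.argsort(d)[:keep]
    return features[idx]
```

With unit weights, `weighted_euclidean` reduces to the ordinary Euclidean norm, so the weights only change which features count as "close" to the centroid.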
{"title":"Research on predictive modeling method of loader working resistance in a sensor-less environment","authors":"","doi":"10.1016/j.engappai.2024.109263","DOIUrl":"10.1016/j.engappai.2024.109263","url":null,"abstract":"<div><p>Because current multi-sensor data prediction methods for predicting loader working resistance are inconvenient to install and costly, this study proposes a method for predicting loader working resistance in environments with fewer sensors. First, building on previous research (Wu et al., 2023), non-essential sensor features are removed by a maximum information coefficient (MIC) method that incorporates expert experience. Second, the Optuna automation framework is embedded to realize the training and testing of the proposed method and compare its prediction performance with other popular methods. Finally, in order to verify its generalization performance, it is validated using loader operation data under different working conditions. The results of this study demonstrate that the proposed method effectively and accurately characterizes the working resistance of loaders under operating conditions. With short testing times and excellent generalization performance, the method proves highly applicable and valuable.</p></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142239576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
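The dependence-based feature pruning with an expert whitelist described in this abstract can be sketched roughly as follows. Absolute Pearson correlation is used here as a simple stand-in for the MIC (which requires a dedicated estimator), and the feature names, threshold, and `keep_always` whitelist are illustrative assumptions.

```python
import numpy as np

def dependence_score(x, y):
    # Stand-in for the maximum information coefficient (MIC):
    # absolute Pearson correlation between a feature and the target.
    return abs(np.corrcoef(x, y)[0, 1])

def select_features(X, y, names, keep_always=(), threshold=0.5):
    """Drop features whose dependence on the target falls below the
    threshold, except those an expert marks as essential."""
    kept = []
    for j, name in enumerate(names):
        if name in keep_always or dependence_score(X[:, j], y) >= threshold:
            kept.append(name)
    return kept
```

The whitelist mirrors the abstract's "expert experience" component: a domain expert can force a physically meaningful sensor to survive even when its marginal dependence score is low.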
{"title":"Deep transfer learning-based time-varying model for deformation monitoring of high earth-rock dams","authors":"","doi":"10.1016/j.engappai.2024.109310","DOIUrl":"10.1016/j.engappai.2024.109310","url":null,"abstract":"<div><p>The analysis of deformation, a critical aspect of high earth-rock dams, holds immense importance for ensuring the safety and stability of dam operations. The structural behavior of high earth-rock dams exhibits time-varying nonlinear characteristics influenced by materials and loads. Over time, the fitting and prediction abilities of static dam Structural Health Monitoring (SHM) models tend to diminish. To address this, a novel SHM model is proposed in this study. It leverages deep transfer learning to enhance prediction accuracy and generalization by incorporating a starting-point timestamp. The methodology begins with the construction of an encoder structure based on the graph convolutional network and the long short-term memory model. Additionally, the attention mechanism-based encoder structure is designed to include starting-point time markers. Knowledge migration is then executed through transfer learning, thereby improving the model's generalization to the time-varying deformation challenge. The proposed model is applied to a horizontal displacement monitoring project for a 185.5 m-high panel rockfill dam. Ablation experiments demonstrate that the transfer learning method effectively enhances the model's handling of time-varying deformation by improving prediction accuracy, with a more pronounced effect observed for measurement points close to the top of the dam and the upstream dam face. Comparison with eight baseline models validates that the proposed model achieves optimal prediction, fitting performance, and generalization.
Consequently, the model emerges as a more suitable choice for the deformation health monitoring of high earth-rock dam projects.</p></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142239580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cycle association prototype network for few-shot semantic segmentation","authors":"","doi":"10.1016/j.engappai.2024.109309","DOIUrl":"10.1016/j.engappai.2024.109309","url":null,"abstract":"<div><p>Few-shot segmentation aims to train a segmentation model that can quickly adapt to novel classes referring to only a few annotated samples. Existing few-shot segmentation methods are based on the meta-learning strategy: they extract support samples’ information from a support set and then apply the information to make predictions on query images. However, most methods abstract support features into prototype vectors and ignore the crucial relationship between query and support samples. To address this problem, we propose a cycle association prototype network that focuses on pixel-wise relationships between support and query images for more accurate segmentation. Specifically, a cycle-consistent prototype module is proposed to select reliable support features and to generate prototypes. To capture cross-scale relations and overcome object variations, we introduce a scale-aware prior mask generation module that offers rich guidance for objects of varying sizes and shapes by calculating the pixel-level similarity between the support and query image features. Finally, a mask generation module, which contains two parallel components, a feature fusion module and a transformer decoder, is utilized to predict the query image.
Extensive experiments on two datasets show that our method yields superior performance compared with state-of-the-art methods.</p></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142239659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
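The prototype and prior-mask ideas in this abstract follow a common few-shot segmentation pattern. A minimal sketch using masked average pooling and pixel-wise cosine similarity (not the paper's cycle-consistent selection, which is more involved) looks like this:

```python
import numpy as np

def masked_average_prototype(feat, mask):
    """Masked average pooling: average the support feature map over
    foreground pixels to obtain a class prototype.
    feat: (C, H, W) feature map; mask: (H, W) binary foreground mask."""
    fg = feat[:, mask.astype(bool)]  # (C, n_foreground)
    return fg.mean(axis=1)

def similarity_map(query_feat, prototype):
    """Cosine similarity between each query pixel's feature and the
    prototype -- the rough shape of a prior/guidance mask."""
    c, h, w = query_feat.shape
    q = query_feat.reshape(c, -1)
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    return (p @ q).reshape(h, w)
```

The resulting similarity map is what a prior-mask module would hand downstream as coarse guidance; the paper's scale-aware module computes such similarities at multiple feature scales.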
{"title":"Observer-based adaptive neural consensus control of nonlinear multi-agent systems under input and output quantization","authors":"","doi":"10.1016/j.engappai.2024.109279","DOIUrl":"10.1016/j.engappai.2024.109279","url":null,"abstract":"<div><p>In this article, for a series of nonlinear multi-agent systems under input and output quantization, a novel observer-based adaptive neural leader-following consensus control strategy is proposed. Different from existing output feedback consensus control strategies, in the proposed strategy, the output and input of each agent are communicated through a directed network and quantized before communication. First of all, according to the quantized input and output information, a distributed state observer is built using neural networks (NNs) to approximate the unknown functions. Secondly, in the backstepping process, the partial derivatives of the virtual control signals are non-existent because of the quantized output’s discontinuity. To avoid this issue, a command filtering technique is applied. Moreover, by constructing an intermediate auxiliary control signal, an actual adaptive consensus controller is designed. Thirdly, to compensate for the impact of quantization errors, Lemma 3 is presented. On this basis, the proposed strategy guarantees that all signals of the closed-loop system are semi-globally bounded and the followers’ outputs converge to a neighborhood of the output of the leader.
Lastly, two examples are applied to demonstrate the feasibility of this strategy.</p></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142233909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
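A toy discrete-time version of quantized leader-following consensus can illustrate the basic mechanism the abstract describes, a drastic simplification of the article's observer-based backstepping design: each follower moves toward the quantized states of its neighbours and, if it sees the leader, toward the quantized leader state. The uniform quantizer, gains, and graph here are all illustrative assumptions.

```python
import numpy as np

def quantize(x, step=0.01):
    """Uniform quantizer standing in for the input/output
    quantization channel."""
    return step * np.round(np.asarray(x, dtype=float) / step)

def leader_following_step(x, leader, A, b, gain=0.2, step=0.01):
    """One discrete-time leader-following consensus update over a
    directed graph with adjacency matrix A; b[i] > 0 marks agents
    with direct access to the leader's (quantized) state."""
    xq = quantize(x, step)
    lq = quantize(leader, step)
    u = gain * (A @ xq - A.sum(axis=1) * xq + b * (lq - xq))
    return x + u
```

Iterating this update on a connected graph drives the followers into a neighborhood of the leader whose size is set by the quantization step, echoing the "converge to a neighborhood of the output of the leader" guarantee.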
{"title":"TinyDepth: Lightweight self-supervised monocular depth estimation based on transformer","authors":"","doi":"10.1016/j.engappai.2024.109313","DOIUrl":"10.1016/j.engappai.2024.109313","url":null,"abstract":"<div><p>Monocular depth estimation plays an important role in autonomous driving, virtual reality, augmented reality, and other fields. Self-supervised monocular depth estimation has received much attention because it does not require hard-to-obtain depth labels during training. The previously used convolutional neural network (CNN) has shown limitations in modeling large-scale spatial dependencies. A new idea for monocular depth estimation is replacing the CNN architecture or merging it with a Vision Transformer (ViT) architecture that can model large-scale spatial dependencies in images. However, there are still problems with too many parameters and calculations, making deployment difficult on mobile platforms. In response to these problems, we propose TinyDepth, a lightweight self-supervised monocular depth estimation method based on Transformer that employs hierarchical representation learning suitable for dense prediction, uses mobile convolution to reduce parameters and computational overhead, and includes a novel decoder based on multi-scale fusion attention that improves the local and global inference capability of the network through scale-wise attention processing and layer-wise fusion sampling for more accurate depth prediction. In experiments, TinyDepth achieved state-of-the-art results with few parameters on the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) dataset, and exhibited good generalization ability on the challenging indoor New York University (NYU) dataset.
Source code is available at <span><span>https://github.com/ZYCheng777/TinyDepth</span><svg><path></path></svg></span>.</p></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142233910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evolution mechanism and influencing factors of multidimensional public opinion dissemination from the perspective of game theory","authors":"","doi":"10.1016/j.engappai.2024.109319","DOIUrl":"10.1016/j.engappai.2024.109319","url":null,"abstract":"<div><p>With the popularization and development of the Internet, in-depth investigation into the evolutionary mechanism of online multidimensional public opinion dissemination is crucial to modern public opinion management. This study introduces game theory into the analysis of the driving mechanism and propagation law of online public opinion, constructs a multi-dimensional public opinion network model covering social, psychological, viewpoint and environmental dimensions, and combines systematic simulation with empirical analysis. The study found: (1) In the process of online public opinion communication, the activity level of micro-level individuals is influenced by the comprehensive benefits of the game. The game benefits are internally affected by the benefits of publishing and receiving public opinion, and the cost of publishing public opinion. Externally, they are affected by the trust between opinion leaders; (2) Increasing the costs of publishing public opinion can effectively reduce the proportion of active participants in public opinion. Increasing the benefits of receiving or publishing public opinion will increase the proportion of active participants in public opinion; (3) A higher average social bandwagon effect accelerates the polarization process of viewpoints in the spread of online public opinion, while higher average social trust slows down the speed of viewpoint polarization in the spread of online public opinion. An unfavorable external environment helps promote the polarization of viewpoints in the spread of online public opinion.
This study provides a new perspective for understanding and predicting the dissemination of online public opinion, and a scientific basis for the government to formulate effective online public opinion management strategies.</p></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142233908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A weighted Gaussian process regression model based on improved local outlier factor and its application in state of health estimation of lithium-ion battery","authors":"","doi":"10.1016/j.engappai.2024.109314","DOIUrl":"10.1016/j.engappai.2024.109314","url":null,"abstract":"<div><p>Battery state of health estimation is an important part of a battery management system, as it can improve the reliability and economy of battery use, and data-driven estimation has become a hot topic in the field. Data-driven modeling methods strongly rely on the accuracy of the acquired data, but outliers inevitably affect the original measurements, which impacts data-driven modeling. This paper proposes a weighted Gaussian process regression model based on an improved local outlier factor. Firstly, the entropy weight method is introduced to calculate the contribution of each attribute of a sample and to construct a modified Euclidean distance, which mitigates the reduced discriminability of data in high-dimensional space under the standard local outlier factor. Then, a density-based local outlier detection approach based on the improved local outlier factor is developed to assign low weights to samples that are likely outliers, and the weight matrix is incorporated into standard Gaussian process regression to construct the weighted Gaussian process regression model, which resolves the heteroscedasticity caused by outliers.
Finally, the effectiveness of the proposed method is verified by comparative experiments, and the results show that the proposed model has higher estimation accuracy than existing methods and achieves smaller errors across multiple error indicators.</p></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142239579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
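The core idea of this abstract, down-weighting likely outliers inside Gaussian process regression by inflating their noise variance, can be sketched as follows. The crude inverse-density weight used here is only a stand-in for the paper's improved local outlier factor, and the RBF kernel, length scale, and noise level are illustrative assumptions.

```python
import numpy as np

def rbf(X1, X2, ls=1.0):
    # Squared-exponential (RBF) kernel matrix.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def lof_weights(X, k=3):
    """Crude density-based weights: inverse mean k-nearest-neighbour
    distance, normalised to (0, 1]. Dense (inlier) points get weight
    near 1; isolated points get small weight."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    knn = np.sort(D, axis=1)[:, 1:k + 1].mean(axis=1)  # skip self-distance
    w = 1.0 / (knn + 1e-8)
    return w / w.max()

def weighted_gpr_predict(X, y, Xs, w, noise=0.1, ls=1.0):
    """GP posterior mean where each sample's noise variance is scaled
    by 1/w -- low-weight (outlier-like) samples are trusted less."""
    K = rbf(X, X, ls) + np.diag(noise ** 2 / w)
    Ks = rbf(Xs, X, ls)
    return Ks @ np.linalg.solve(K, y)
```

Setting all weights to 1 recovers standard homoscedastic GP regression; shrinking one sample's weight inflates its noise term on the diagonal, so the posterior mean largely ignores that sample.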
{"title":"Customizable 6 degrees of freedom grasping dataset and an interactive training method for graph convolutional network","authors":"","doi":"10.1016/j.engappai.2024.109320","DOIUrl":"10.1016/j.engappai.2024.109320","url":null,"abstract":"<div><p>The field of robotic grasping has seen significant progress with the development of deep learning and the creation of large-scale datasets like the Cornell Grasping Dataset (Jiang et al., 2011) and DexNet (Mahler et al., 2016). However, challenges persist due to the reliance on manually annotated datasets, limited by data scarcity, high costs, biases, and a lack of diversity in gripper types and three-dimensional information, hampering their effectiveness in real-world applications. To confront these issues, an innovative method is introduced for generating robotic grasping datasets in a simulated environment, eliminating the need for manual annotations. The method utilizes a highly realistic movement of the gripper, offering extensive customization options for a variety of gripper types. It also introduces detailed evaluation metrics specifically designed to assess different gripper designs, ensuring accurate and meaningful analysis of grasping efficacy. Further, it excels in simulating a wide range of industrial scenarios, significantly enhancing the dataset's diversity and applicability in real-world applications. In addition, an end-to-end grasping prediction network is introduced, which leverages advanced graph convolution techniques to predict optimal grasping points and orientations from point clouds. It also serves as an effective baseline for the proposed grasping dataset. Lastly, the authors propose a novel interactive training method for deep learning models driven by data generation, featuring real-time interaction between the model and the data generator with a rule-based strategy that optimizes the training workflow based on feedback.
Experimental results demonstrate that the interactive training method enables models to achieve superior outcomes in a shorter timeframe compared to those trained using traditional methods.</p></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142239577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic real-time crack detection using lightweight deep learning models","authors":"","doi":"10.1016/j.engappai.2024.109340","DOIUrl":"10.1016/j.engappai.2024.109340","url":null,"abstract":"<div><p>Crack detection methods using deep learning models such as convolutional neural network (CNN) and the newly developed vision transformer (ViT) are expanding. However, there is still a lack of comparative evaluation of these models in real-time crack detection. In this paper, a total of 14 lightweight deep learning models, comprising seven CNN models, five ViT models and two hybrid models, are trained to build deep learning-based crack detection methods. Comprehensive experiments are conducted on the publicly available DeepCrack dataset, including accuracy, inference time, robustness and transfer learning experiments to compare the effectiveness and real-time performance of models. In terms of accuracy metrics and robustness performance, the ViT model using SegFormer segmentation method with MiT-B1 as backbone has the best performance, and in terms of the model inference time, the ViT models using TopFormer segmentation method demonstrate the fastest performance. If both the accuracy and inference time are considered, TopFormer with its small version of the backbone network has relatively better real-time performance, while the ViT model using SegFormer segmentation method with MiT-B0 as backbone and the CNN model using the fully convolutional network (FCN) segmentation method with HRNetV2-W18-Small as backbone have higher mean intersection over union (mIoU) values on computers and mobile devices, respectively. We also find that pre-training on a dataset that is more relevant to the target application scenario rather than on the widely used ImageNet gives better results for deep learning models. 
This study provides a reference for engineers choosing lightweight deep learning models.</p></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142232167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}