{"title":"Advancing Temperature Monitoring of the Bottom Anode in a Direct Current Electric Arc Furnace Operations With Distributed Optical Fiber Sensors","authors":"Ogbole Collins Inalegwu;Rony Kumer Saha;Yeshwanth Reddy Mekala;Farhan Mumtaz;Nicholas Dionise;Zane Voss;Jeffrey D. Smith;Ronald J. O'Malley;Rex E. Gerald;Jie Huang","doi":"10.1109/TIM.2025.3583377","DOIUrl":"https://doi.org/10.1109/TIM.2025.3583377","url":null,"abstract":"The bottom anode in the direct current electric arc furnace (dc EAF) is critical for completing the electrical circuit necessary for sustaining the arc within the furnace. For pin-type bottom anodes, the temperature of select pins instrumented with thermocouples (TCs) is monitored to track bottom wear in the EAF and inform the operator when the furnace should be removed from service. This work presents the results from a plant trial using distributed temperature monitoring of bottom anode pins in a 165-ton dc EAF over a two-month service period utilizing two optical fiber sensing techniques: fiber Bragg grating (FBG) and Rayleigh backscattering (RBS). The early detection of temperature anomalies along the length of the anode pin through distributed sensing enhances operational safety, providing a robust alternative to traditional TCs.","PeriodicalId":13341,"journal":{"name":"IEEE Transactions on Instrumentation and Measurement","volume":"74 ","pages":"1-12"},"PeriodicalIF":5.6,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144663722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual Radar Model and Processing for All-Directional Measurements of Vital Signs","authors":"Danke Jiang;Silong Tu;Yingjie Ye;Zhenyu Liu","doi":"10.1109/TIM.2025.3583384","DOIUrl":"https://doi.org/10.1109/TIM.2025.3583384","url":null,"abstract":"To address the limitations of the radar horizontal field of view (HFOV), this article proposes a novel virtual frequency-modulated continuous-wave (FMCW) radar model to achieve simultaneous all-directional measurements through the integration of a rotation motor and radar. However, weak vital signs are susceptible to interference from vibrations caused by the torque ripple of the motor, which leads to the generation of ghost objects and phase noise interference. First, a Doppler combination filtering (DCF) method is proposed to localize the subjects and suppress interference from torque ripple vibrations, utilizing multiple Doppler frequencies associated with breathing for improved precision. Second, an adaptive convergence variational mode extraction-weight pair accumulation (ACVME-WPA) method is proposed to separate breathing and heartbeat signals from the phase signal affected by motor vibration. This method incorporates two key ideas: one involves determining the adjustment rate of the penalty value based on the iterative differences between center frequencies, and the other focuses on aggregating multiple phase signals using weighted pairs to enhance performance by considering the HFOV overlap of adjacent virtual radars. Experimental results show that the proposed methods can achieve all-directional measurements of localization, respiration rate (RR), and heartbeat rate (HR) for single subjects with various breathing patterns, with an HR mean absolute error (MAE) of 0.026 Hz in the breath-holding experiment, and also work well for multiperson scenarios with different angles and window lengths. In the four-subject experiment, the proposed method still works even when subjects are not facing the radar. For right-side orientation, the RR and HR MAEs were 0.007 and 0.025 Hz, respectively.","PeriodicalId":13341,"journal":{"name":"IEEE Transactions on Instrumentation and Measurement","volume":"74 ","pages":"1-14"},"PeriodicalIF":5.6,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144598001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiscale Spatial–Frequency-Domain Cross-Transformer for Hyperspectral Image Classification","authors":"Cheng Shi;Pupu Chen;Li Fang;Minghua Zhao;Xinhong Hei;Qiguang Miao","doi":"10.1109/TIM.2025.3578703","DOIUrl":"https://doi.org/10.1109/TIM.2025.3578703","url":null,"abstract":"Recently, the Transformer has achieved significant success in the hyperspectral image (HSI) classification task. However, most Transformers and their variants focus more on spatial-domain global feature learning, ignoring the complementary characteristics provided by frequency-domain features. The fast Fourier transform (FFT), due to its sensitivity to frequency-domain information, has become a primary tool for frequency-domain analysis. However, different frequency bands are often assigned the same attention values, and the differences between frequency bands are not considered. To fully explore and fuse spatial- and frequency-domain features, we propose a multiscale spatial–frequency-domain cross-Transformer (SFDCT-Former) network. We design a two-branch structure for spatial-domain and frequency-domain feature learning: one branch utilizes the multihead self-attention (MHSA) module for spatial-domain feature learning, while the other incorporates a multifrequency-domain Transformer (MFre-Former) encoder for frequency-domain feature learning. The MFre-Former encoder divides the frequency domain into nonoverlapping frequency bands and assigns distinct attention to each band; therefore, different frequency-domain information can be captured more precisely. Furthermore, to fuse the spatial- and frequency-domain features, we design a multilevel cross-attention (MLCA) fusion module. The MLCA module effectively combines spatial- and frequency-domain features at different levels to better capture their complementary characteristics. Extensive experiments conducted on four publicly available HSI datasets demonstrate that the proposed method outperforms nine state-of-the-art methods in classification performance. The code is available at <uri>https://github.com/AAAA-CS/SFDCT-Former</uri>","PeriodicalId":13341,"journal":{"name":"IEEE Transactions on Instrumentation and Measurement","volume":"74 ","pages":"1-15"},"PeriodicalIF":5.6,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144557704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"T2MFDF: An LLM-Enhanced Multimodal Fault Diagnosis Framework Integrating Time-Series and Textual Data","authors":"Jiajing Zhou;Yuanjun Guo;Zhile Yang;Jinning Yang;Zhao An;Kang Li;Seán McLoone","doi":"10.1109/TIM.2025.3583374","DOIUrl":"https://doi.org/10.1109/TIM.2025.3583374","url":null,"abstract":"In modern industrial applications, accurate fault diagnosis is critical for ensuring machinery reliability, yet traditional methods struggle with the complexity and interdependencies of faults, particularly in bearing systems. This article proposes a novel multimodal fault diagnosis framework that integrates time-series vibration signals with textual descriptions, leveraging a BERT-based large language model (LLM) to enhance feature representation and capture semantic relationships between fault categories. By utilizing the LLM, the model improves generalization across diverse fault scenarios, addressing the limitations of previous models. The proposed framework incorporates a multimodal data augmentation module, which enhances feature diversity and enriches the representation of complex fault patterns. Furthermore, leveraging large multimodal models facilitates better handling of fault classification by integrating both sequential patterns from time-series data and contextual information from textual descriptions. The textual modality is constructed using templates informed by diagnostic features, allowing the LLM to extract semantically meaningful representations aligned with specific fault characteristics. Experimental results demonstrate the superiority of the proposed multimodal approach, which achieves maximum improvements of 32.647% in ACC and 35.5% in <inline-formula> <tex-math>$F1$ </tex-math></inline-formula>-score compared to unimodal methods. In the transferability evaluation, the model achieves a Tr-ACC of 92.295%, demonstrating its robustness and adaptability to unseen datasets. Extensive experiments on industrial-bearing datasets validate the effectiveness of the proposed framework, which outperforms traditional models and highlights its potential for real-world applications.","PeriodicalId":13341,"journal":{"name":"IEEE Transactions on Instrumentation and Measurement","volume":"74 ","pages":"1-11"},"PeriodicalIF":5.6,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144597685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PDSAM: Prompt-Driven SAM for Track Defect Detection","authors":"Yu Fang;Pan Tao;Tianrui Li;Fan Min","doi":"10.1109/TIM.2025.3583378","DOIUrl":"https://doi.org/10.1109/TIM.2025.3583378","url":null,"abstract":"Track defect detection is critical for ensuring the safety and reliability of railway systems. Existing machine vision-based approaches are hindered by three key issues: high time complexity stemming from end-to-end network training, limited availability of training data (with only a few hundred labeled images), and suboptimal prediction precision. To address these challenges, this article introduces the prompt-driven segment anything model (PDSAM), a novel image semantic segmentation framework that represents a paradigm shift in problem formulation. The core contribution lies in reformulating the segmentation task as a prompt generation problem, which offers two related advantages. First, a simplified prompt generation network reduces both training time and data requirements compared with standalone segmentation networks. Second, an upscaling and visual prompting technique restores spatial resolution and mitigates the risk of local optima in feature optimization, enabling more precise and fine-grained segmentation outputs. Experimental evaluations on benchmark datasets demonstrate that PDSAM outperforms state-of-the-art methods in both prediction accuracy and computational efficiency for railway track defect detection. The proposed framework’s source code and pretrained models (PTMs) are publicly available to facilitate reproducibility and further research, accessible at: <uri>https://github.com/FreddyDylan/PDSAM/</uri>","PeriodicalId":13341,"journal":{"name":"IEEE Transactions on Instrumentation and Measurement","volume":"74 ","pages":"1-17"},"PeriodicalIF":5.6,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144550465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and Implementation of Eye-Safe Band LiDAR System Based on Solid-State Photomultiplier for Kilometer-Range Applications","authors":"Hajun Song;Hansol Jang;Heesuk Jang;Taehyun Yoon","doi":"10.1109/TIM.2025.3583360","DOIUrl":"https://doi.org/10.1109/TIM.2025.3583360","url":null,"abstract":"Light detection and ranging (LiDAR) technology is increasingly being applied to several fields, including defense surveillance and reconnaissance, where it is used to detect small objects at long ranges. In these scenarios, highly sensitive detection capabilities are essential, for which high-power lasers can be employed. In particular, the 1550-nm wavelength band offers the advantage of being eye safe, allowing for increased laser output. However, because increasing the laser output requires a larger system size and higher power consumption, this approach has inherent limitations. In this study, we implemented a small payload (<2 kg) LiDAR system based on a solid-state photomultiplier (SSPM), which has high sensitivity at 1550 nm. The experimental results demonstrated that the SSPM provides high sensitivity, detecting signals below −50.9 dBm with a low false alarm rate of 0.0014%. Although background light can potentially increase false alarms, this effect is mitigated by using optical filters. Therefore, the proposed detection scheme based on the SSPM can generate point cloud images without the need for complex postprocessing or calculations to mitigate speckle noise. We tested the feasibility of the SSPM as a high-sensitivity photodetector for the LiDAR system by designing and implementing a compact SSPM-based LiDAR system. The experimental results confirmed that an object with a reflectivity of 9% and located 801.22 m away could be detected using the proposed LiDAR system with only a peak power of 3.5 kW and a receiver aperture diameter of 2 cm. Moreover, a kilometer-range (1145.71 m) image was successfully acquired using the proposed LiDAR system with the same parameters.","PeriodicalId":13341,"journal":{"name":"IEEE Transactions on Instrumentation and Measurement","volume":"74 ","pages":"1-9"},"PeriodicalIF":5.6,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11052835","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144623977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fabrication and Testing of a 64-Element CMUT Ring Array for Underwater Ultrasound Imaging","authors":"Zhaodong Li;Wendong Zhang;Zhihao Wang;Shurui Liu;Jingwen Wang;Xiangcheng Zeng;Chenya Zhao;Mehmet Yilmaz;Changde He;Licheng Jia;Guojun Zhang;Li Qin;Renxin Wang","doi":"10.1109/TIM.2025.3583366","DOIUrl":"https://doi.org/10.1109/TIM.2025.3583366","url":null,"abstract":"Capacitive micromachined ultrasonic transducers (CMUTs) have demonstrated great potential in ultrasonic imaging due to their wide bandwidth, high electromechanical coupling coefficient, flexible design, and ease of integration. However, traditional CMUT devices require high drive voltages, which may pose potential safety risks to humans and limit their widespread application in imaging fields. To address this issue, this study innovatively designs and fabricates a large-diameter (20 cm) CMUT annular array with low drive voltage (25-V dc and 15-V ac). Finite element simulation results show that the collapse voltage of the CMUT is 52 V. Devices fabricated using wafer bonding technology exhibit excellent linear I–V characteristics and a “U” shaped C–V curve. In air, the resonant frequency is 4.71 MHz; after polydimethylsiloxane (PDMS) electrical insulation encapsulation, the resonant frequency drops to 2.74 MHz when submerged in water. The fabricated CMUT elements exhibit a −6-dB bandwidth of 118%, a −6-dB beamwidth of 13°, and a receive sensitivity of −205 dB @ 2.5 MHz. The maximum normalized consistency error among the 64 array elements is 0.3. In the underwater imaging experiment, five targets with varying sizes, positions, and sound speeds embedded in a tissue-mimicking phantom were successfully reconstructed. The maximum radial error in the reconstructed target center positions was 5%. These results demonstrate that the designed low-voltage CMUT ring array possesses excellent imaging performance and significant potential for underwater ultrasound imaging applications.","PeriodicalId":13341,"journal":{"name":"IEEE Transactions on Instrumentation and Measurement","volume":"74 ","pages":"1-12"},"PeriodicalIF":5.6,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144597937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FRISNET: A Fast Real-Time Instance Segmentation Network Fusing Frequency Domain and Multilevel Features","authors":"Ying Xie;Jingkai Shang;Ruixiang Deng;Xianlun Tang;Wuqiang Yang","doi":"10.1109/TIM.2025.3583292","DOIUrl":"https://doi.org/10.1109/TIM.2025.3583292","url":null,"abstract":"It is challenging to obtain accurate location information of instances and segmentation masks, considering the intricacy and diversity of practical scenarios. This article presents a fast real-time instance segmentation network (FRISNET) by fusing the information from the frequency domain and space domain. Based on you only look at coefficients (YOLACT), which is the fastest instance segmentation method, the frequency domain representation is introduced into a convolutional neural network (CNN). By fast Fourier transform (FFT), features of different frequencies extracted from the frequency domain are fused with the characteristic map of the spatial domain. Accurate global location information and clear semantic information are obtained using CNN. To take advantage of the high-resolution target location and feature information from the bottom level, as well as the supervision information at the same level, a brand-new bottom-up feature fusion branch and same-level skip connections are introduced based on the top-down feature pyramid network (FPN) feature fusion network, enabling the feature extraction network to possess diverse feature representations. The proposed instance segmentation model is trained on open standard datasets of PASCAL segmentation boundary detection (PASCAL SBD) and Microsoft Common Objects in Context (MS COCO). The results show that the proposed method improves instance segmentation accuracy. It achieves a mean average precision (mAP) of 34.5 at 31.77 frames/s (FPS) on MS COCO. This performance is 1.17% higher than YOLACT++ with the ResNet-50 architecture. The model’s speed also shows potential for use in motion planning and tactile sensors for robotic grasping tasks. This could further enhance execution efficiency and operational reliability.","PeriodicalId":13341,"journal":{"name":"IEEE Transactions on Instrumentation and Measurement","volume":"74 ","pages":"1-14"},"PeriodicalIF":5.6,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144572970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Road Surface Friction Estimation Based on LiDAR Reflectivity for Intelligent Vehicle","authors":"Hu Hongyu;Tang Minghong;Gao Fei;Bao Mingxi;Gao Zhenhai","doi":"10.1109/TIM.2025.3583367","DOIUrl":"https://doi.org/10.1109/TIM.2025.3583367","url":null,"abstract":"The road surface friction coefficient is a key factor in the decision-making and control strategies of autonomous driving systems. This study presents a groundbreaking method for estimating the road surface friction coefficient using light detection and ranging (LiDAR) point cloud data, enhancing autonomous vehicles’ prospective and high-precision perception. Data from eight road types formed a robust dataset. Cloth simulation filtering (CSF) and the random sample consensus (RANSAC) algorithm extracted road point clouds accurately. Gaussian filtering then removed reflectivity outliers. Given the correlation among reflectivity, distance, and incident angle, the road surface was segmented for comprehensive feature extraction. A designed deep neural network (DNN) model, trained rigorously with the dataset, achieved road recognition. Statistical knowledge of road materials and peak friction coefficients was then used to determine the road’s friction coefficient. Validation showed the algorithm identifies road types with over 99.62% accuracy, at 55 ms per cycle. This ensures real-time, high-precision estimation of the peak friction coefficient, a major boost for autonomous driving systems.","PeriodicalId":13341,"journal":{"name":"IEEE Transactions on Instrumentation and Measurement","volume":"74 ","pages":"1-12"},"PeriodicalIF":5.6,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144550643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multisource Ensemble Network-Based Learning for Knowledge-Informed FeO Prediction in Sintering","authors":"Xuehan Bai;Baocong Zhang;Wei Liu;Cailian Chen;Xuda Ding;Yehan Ma;Xinping Guan","doi":"10.1109/TIM.2025.3579835","DOIUrl":"https://doi.org/10.1109/TIM.2025.3579835","url":null,"abstract":"A lack of crucial state data is a common problem in processing industries, particularly in iron-making. Real-time measurements are often infeasible in harsh conditions such as high temperatures and heavy dust. In practice, technical experts rely on manual observations to make estimates; however, this knowledge is difficult to formalize and quantify. To date, few studies have effectively addressed these two challenges in a unified manner, hindering progress in data acquisition and process optimization. In this article, a novel knowledge-informed method that integrates a multisource fusion model with an ensemble network (KIMEN) is proposed to predict a key chemical indicator in the industry, the FeO content. First, a sParts-Pair comparing sorting (s-PCS) strategy was introduced for knowledge solidification. Furthermore, a knowledge-informed image processing scheme was proposed. In cases of data scarcity, we proposed a two-layer cascaded structure combining gradient boosting decision tree (GBDT) and gated recurrent unit (GRU), which functions as an ensemble recurrent network. When applied to the Guangxi Liuzhou Iron & Steel (Group) Company, the method demonstrates improved prediction performance and practical effectiveness. Experimental results show that the proposed KIMEN outperforms some conventional methods and state-of-the-art approaches. Ablation studies, small-scale experiments, and transfer learning experiments further validate the advantages of our method.","PeriodicalId":13341,"journal":{"name":"IEEE Transactions on Instrumentation and Measurement","volume":"74 ","pages":"1-10"},"PeriodicalIF":5.6,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144634826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Engineering Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}