Latest Literature in Information Engineering

Erratum to “A deep decentralized privacy-preservation framework for online social networks”
IF 6.9 · CAS Zone 3 · Computer Science
Blockchain: Research and Applications Pub Date: 2025-05-19 DOI: 10.1016/j.bcra.2025.100299
Samuel Akwasi Frimpong , Mu Han , Emmanuel Kwame Effah , Joseph Kwame Adjei , Isaac Hanson , Percy Brown
{"title":"Erratum to “A deep decentralized privacy-preservation framework for online social networks”","authors":"Samuel Akwasi Frimpong , Mu Han , Emmanuel Kwame Effah , Joseph Kwame Adjei , Isaac Hanson , Percy Brown","doi":"10.1016/j.bcra.2025.100299","DOIUrl":"10.1016/j.bcra.2025.100299","url":null,"abstract":"","PeriodicalId":53141,"journal":{"name":"Blockchain-Research and Applications","volume":"6 2","pages":"Article 100299"},"PeriodicalIF":6.9,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144084293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Antibacterial and anticancer potentials of graphene-silicon nitride nanomaterials-enhanced polymer nanocomposites: advanced characterization and optical behavior insights
Journal of Biosafety and Biosecurity Pub Date: 2025-05-12 DOI: 10.1016/j.jobb.2025.04.001
Rawaa A. Abdul-Nabi , Ehssan Al-Bermany
{"title":"Antibacterial and anticancer potentials of graphene-silicon nitride nanomaterials-enhanced polymer nanocomposites: advanced characterization and optical behavior insights","authors":"Rawaa A. Abdul-Nabi ,&nbsp;Ehssan Al-Bermany","doi":"10.1016/j.jobb.2025.04.001","DOIUrl":"10.1016/j.jobb.2025.04.001","url":null,"abstract":"<div><div>Hybrid nanomaterials (HNMs) have become more interesting to researchers for various optoelectronic and biological applications. In response, this investigation focuses on the impact of loading ratios of (0, 1 %, 3 %, and 5 %) of HNMs from graphene oxide (GO) and silicon nitride (Si<sub>3</sub>N<sub>4</sub>). HNMs are utilized to reinforce blended polymers, including polyethylene oxide (PEO), carboxymethyl cellulose (CMC), and nano-polyaniline (PANI) to fabricate (PEO<sub>100K</sub>–CMC–PANI/GO–Si<sub>3</sub>N<sub>4</sub>) using the developed sol–gel-ultrasonic procedure. X-ray diffraction revealed semi-crystalline behavior among all samples, while Fourier transform infrared spectroscopy showed strong physical interfacial interactions among the sample components. Meanwhile, field emission scanning electron and transmission electron microscopies showed a fine dispersion and a homogeneous matrix with significant changes. The optical absorption behavior revealed continuous high absorption peaks at 200–280-nm wavelengths, which strongly impacts (GO–Si<sub>3</sub>N<sub>4</sub>). Increases in concentration also strongly impact (GO–Si<sub>3</sub>N<sub>4</sub>), which results in an improved optical energy gap for the allowed and forbidden transitions from 3.5 eV for the blended polymer to 3 and 2.9 eV by increasing the HNM content. The contributions of HNMs notably enhance the ability to reduce the zones of the bacteria, especially <em>Escherichia coli</em>, from 18 to 26 mm. In effect, HNMs with a concentration higher than 5 % assist in inhibiting the growth of lung cancer (A549) cells. As such, these NCs present good optical behavior for multi-applications, such as biosensors and biological and optoelectronic devices.</div></div>","PeriodicalId":52875,"journal":{"name":"Journal of Biosafety and Biosecurity","volume":"7 2","pages":"Pages 55-68"},"PeriodicalIF":0.0,"publicationDate":"2025-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144106954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
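
The band-gap narrowing quoted above (3.5 eV down to 3.0 and 2.9 eV) is the kind of value usually read off a Tauc plot of the absorption spectrum. The abstract does not state the authors' exact procedure, so the following is only a generic sketch on synthetic data:

```python
import numpy as np

def tauc_gap(photon_ev, alpha, n=0.5, fit_window=(3.6, 4.4)):
    """Estimate an optical band gap from a Tauc plot: (alpha*h*nu)^(1/n)
    is linear in photon energy above the gap, so the x-intercept of a
    straight-line fit gives Eg. n = 1/2 is the direct allowed transition;
    forbidden and indirect transitions use other exponents."""
    y = (alpha * photon_ev) ** (1.0 / n)
    mask = (photon_ev >= fit_window[0]) & (photon_ev <= fit_window[1])
    slope, intercept = np.polyfit(photon_ev[mask], y[mask], 1)
    return -intercept / slope

# Synthetic absorption curve with a 3.5 eV direct gap, matching the
# blended-polymer value quoted in the abstract.
e = np.linspace(2.5, 4.5, 200)
alpha = np.sqrt(np.clip(e - 3.5, 0.0, None)) / e
print(f"estimated gap: {tauc_gap(e, alpha):.2f} eV")   # ~3.50 eV
```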
Frequency-informed transformer for real-time water pipeline leak detection
Autonomous Intelligent Systems Pub Date: 2025-04-28 DOI: 10.1007/s43684-025-00094-0
Fengnian Liu, Ding Wang, Junya Tang, Lei Wang
{"title":"Frequency-informed transformer for real-time water pipeline leak detection","authors":"Fengnian Liu,&nbsp;Ding Wang,&nbsp;Junya Tang,&nbsp;Lei Wang","doi":"10.1007/s43684-025-00094-0","DOIUrl":"10.1007/s43684-025-00094-0","url":null,"abstract":"<div><p>Water pipeline leaks pose significant risks to urban infrastructure, leading to water wastage and potential structural damage. Existing leak detection methods often face challenges, such as heavily relying on the manual selection of frequency bands or complex feature extraction, which can be both labour-intensive and less effective. To address these limitations, this paper introduces a Frequency-Informed Transformer model, which integrates the Fast Fourier Transform and self-attention mechanisms to enhance water pipe leak detection accuracy. Experimental results show that FiT achieves 99.9% accuracy in leak detection and 98.7% in leak type classification, surpassing other models in both accuracy and processing speed, with an efficient response time of 0.25 seconds. By significantly simplifying key features and frequency band selection and improving accuracy and response time, the proposed method offers a potential solution for real-time water leak detection, enabling timely interventions and more effective pipeline safety management.</p></div>","PeriodicalId":71187,"journal":{"name":"自主智能系统(英文)","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43684-025-00094-0.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143879689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
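
As a rough illustration of the "frequency-informed" idea (FFT features feeding self-attention), here is a minimal PyTorch sketch; the layer sizes, mean pooling, and two-class head are assumptions for illustration, not the published FiT architecture:

```python
import torch
import torch.nn as nn

class FrequencyInformedClassifier(nn.Module):
    """Minimal sketch: FFT magnitudes of an acoustic window become a
    token sequence for a small transformer encoder, followed by a
    leak / no-leak classification head."""

    def __init__(self, n_bins=512, d_model=64, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.n_bins = n_bins
        self.proj = nn.Linear(1, d_model)            # embed each frequency bin
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, signal):                       # signal: (batch, n_samples)
        spec = torch.fft.rfft(signal, dim=-1).abs()  # frequency magnitudes
        tokens = self.proj(spec[:, : self.n_bins].unsqueeze(-1))
        pooled = self.encoder(tokens).mean(dim=1)    # (batch, d_model)
        return self.head(pooled)

model = FrequencyInformedClassifier()
print(model(torch.randn(8, 2048)).shape)             # torch.Size([8, 2])
```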
Nonlinear optimal control for the five-axle and three-steering coupled-vehicle system
Autonomous Intelligent Systems Pub Date: 2025-04-23 DOI: 10.1007/s43684-025-00097-x
G. Rigatos, M. Abbaszadeh, K. Busawon, P. Siano, M. Al Numay, G. Cuccurullo, F. Zouari
{"title":"Nonlinear optimal control for the five-axle and three-steering coupled-vehicle system","authors":"G. Rigatos,&nbsp;M. Abbaszadeh,&nbsp;K. Busawon,&nbsp;P. Siano,&nbsp;M. Al Numay,&nbsp;G. Cuccurullo,&nbsp;F. Zouari","doi":"10.1007/s43684-025-00097-x","DOIUrl":"10.1007/s43684-025-00097-x","url":null,"abstract":"<div><p>Transportation of heavy loads is often performed by multi-axle multi-steered heavy duty vehicles In this article a novel nonlinear optimal control method is applied to the kinematic model of the five-axle and three-steering coupled vehicle system. First, it is proven that the dynamic model of this articulated multi-vehicle system is differentially flat. Next. the state-space model of the five-axle and three-steering vehicle system undergoes approximate linearization around a temporary operating point that is recomputed at each time-step of the control method. The linearization is based on Taylor series expansion and on the associated Jacobian matrices. For the linearized state-space model of the five-axle and three-steering vehicle system a stabilizing optimal (H-infinity) feedback controller is designed. This controller stands for the solution of the nonlinear optimal control problem under model uncertainty and external perturbations. To compute the controller’s feedback gains an algebraic Riccati equation is repetitively solved at each iteration of the control algorithm. The stability properties of the control method are proven through Lyapunov analysis. The proposed nonlinear optimal control approach achieves fast and accurate tracking of setpoints under moderate variations of the control inputs and minimal dispersion of energy by the propulsion and steering system of the five-axle and three-steering vehicle system.</p></div>","PeriodicalId":71187,"journal":{"name":"自主智能系统(英文)","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43684-025-00097-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143861351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
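
The per-time-step Riccati solve described above can be pictured with SciPy. This toy uses an LQR-style continuous-time algebraic Riccati equation and a unicycle-like linearization as stand-ins, since the paper's H-infinity Riccati equation and five-axle Jacobians are not reproduced here:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def riccati_feedback(A, B, Q, R):
    """Solve the continuous-time algebraic Riccati equation for the
    linearized model and return the state-feedback gain K = R^-1 B^T P."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Toy stand-in: unicycle kinematics (x, y, heading) linearized about a
# nominal speed and heading, NOT the paper's five-axle model.
v, theta0 = 1.0, 0.1
A = np.array([[0, 0, -v * np.sin(theta0)],
              [0, 0,  v * np.cos(theta0)],
              [0, 0,  0]])
B = np.array([[np.cos(theta0), 0],
              [np.sin(theta0), 0],
              [0,              1]])
K = riccati_feedback(A, B, np.eye(3), np.eye(2))
x_err = np.array([0.5, -0.2, 0.05])   # deviation from the current setpoint
u = -K @ x_err                        # gain recomputed at each time step
print(u)
```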
Multidimensional image morphing: fast image-based rendering of open 3D and VR environments
Virtual Reality Intelligent Hardware Pub Date: 2025-04-01 DOI: 10.1016/j.vrih.2023.06.007
Simon Seibt , Bastian Kuth , Bartosz von Rymon Lipinski , Thomas Chang , Marc Erich Latoschik
{"title":"Multidimensional image morphing-fast image-based rendering of open 3D and VR environments","authors":"Simon Seibt ,&nbsp;Bastian Kuth ,&nbsp;Bartosz von Rymon Lipinski ,&nbsp;Thomas Chang ,&nbsp;Marc Erich Latoschik","doi":"10.1016/j.vrih.2023.06.007","DOIUrl":"10.1016/j.vrih.2023.06.007","url":null,"abstract":"<div><h3>Background</h3><div>In recent years, the demand for interactive photorealistic three-dimensional (3D) environments has increased in various fields, including architecture, engineering, and entertainment. However, achieving a balance between the quality and efficiency of high-performance 3D applications and virtual reality (VR) remains challenging.</div></div><div><h3>Methods</h3><div>This study addresses this issue by revisiting and extending view interpolation for image-based rendering (IBR), which enables the exploration of spacious open environments in 3D and VR. Therefore, we introduce multimorphing, a novel rendering method based on the spatial data structure of 2D image patches, called the image graph. Using this approach, novel views can be rendered with up to six degrees of freedom using only a sparse set of views. The rendering process does not require 3D reconstruction of the geometry or per-pixel depth information, and all relevant data for the output are extracted from the local morphing cells of the image graph. The detection of parallax image regions during preprocessing reduces rendering artifacts by extrapolating image patches from adjacent cells in real-time. In addition, a GPU-based solution was presented to resolve exposure inconsistencies within a dataset, enabling seamless transitions of brightness when moving between areas with varying light intensities.</div></div><div><h3>Results</h3><div>Experiments on multiple real-world and synthetic scenes demonstrate that the presented method achieves high \"VR-compatible\" frame rates, even on mid-range and legacy hardware, respectively. While achieving adequate visual quality even for sparse datasets, it outperforms other IBR and current neural rendering approaches.</div></div><div><h3>Conclusions</h3><div>Using the correspondence-based decomposition of input images into morphing cells of 2D image patches, multidimensional image morphing provides high-performance novel view generation, supporting open 3D and VR environments. Nevertheless, the handling of morphing artifacts in the parallax image regions remains a topic for future research.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"7 2","pages":"Pages 155-172"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143864788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
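
To give a flavor of correspondence-based view interpolation (not the paper's per-cell multimorphing, which handles parallax and exposure per image-graph cell), a crude single-homography morph between two photos might look like the sketch below; `pts_a` and `pts_b` are matched keypoints from any feature matcher:

```python
import cv2
import numpy as np

def interpolate_views(img_a, img_b, pts_a, pts_b, t=0.5):
    """Crude two-view morph: move correspondences to their interpolated
    positions, warp both images there, and cross-blend. The paper's
    multimorphing operates per image-graph cell with parallax and
    exposure handling; this single-homography version is only the core idea.

    pts_a, pts_b: (N, 2) float32 arrays of matched keypoints, N >= 4,
    e.g. from SIFT matching with a ratio test."""
    pts_t = (1 - t) * pts_a + t * pts_b                  # interpolated positions
    h, w = img_a.shape[:2]
    H_a, _ = cv2.findHomography(pts_a, pts_t, cv2.RANSAC)
    H_b, _ = cv2.findHomography(pts_b, pts_t, cv2.RANSAC)
    warp_a = cv2.warpPerspective(img_a, H_a, (w, h))
    warp_b = cv2.warpPerspective(img_b, H_b, (w, h))
    return cv2.addWeighted(warp_a, 1 - t, warp_b, t, 0)  # cross-blend
```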
STDNet: Improved lip reading via short-term temporal dependency modeling
Virtual Reality Intelligent Hardware Pub Date: 2025-04-01 DOI: 10.1016/j.vrih.2024.07.003
Xiaoer Wu , Zhenhua Tan , Ziwei Cheng , Yuran Ru
{"title":"STDNet: Improved lip reading via short-term temporal dependency modeling","authors":"Xiaoer Wu ,&nbsp;Zhenhua Tan ,&nbsp;Ziwei Cheng ,&nbsp;Yuran Ru","doi":"10.1016/j.vrih.2024.07.003","DOIUrl":"10.1016/j.vrih.2024.07.003","url":null,"abstract":"<div><h3>Background</h3><div>Lip reading uses lip images for visual speech recognition. Deep-learning-based lip reading has greatly improved performance in current datasets; however, most existing research ignores the significance of short-term temporal dependencies of lip-shape variations between adjacent frames, which leaves space for further improvement in feature extraction.</div></div><div><h3>Methods</h3><div>This article presents a spatiotemporal feature fusion network (STDNet) that compensates for the deficiencies of current lip-reading approaches in short-term temporal dependency modeling. Specifically, to distinguish more similar and intricate content, STDNet adds a temporal feature extraction branch based on a 3D-CNN, which enhances the learning of dynamic lip movements in adjacent frames while not affecting spatial feature extraction. In particular, we designed a local–temporal block, which aggregates interframe differences, strengthening the relationship between various local lip regions through multiscale convolution. We incorporated the squeeze-and-excitation mechanism into the Global-Temporal Block, which processes a single frame as an independent unitto learn temporal variations across the entire lip region more effectively. Furthermore, attention pooling was introduced to highlight meaningful frames containing key semantic information for the target word.</div></div><div><h3>Results</h3><div>Experimental results demonstrated STDNet's superior performance on the LRW and LRW-1000, achieving word-level recognition accuracies of 90.2% and 53.56%, respectively. Extensive ablation experiments verified the rationality and effectiveness of its modules.</div></div><div><h3>Conclusions</h3><div>The proposed model effectively addresses short-term temporal dependency limitations in lip reading, and improves the temporal robustness of the model against variable-length sequences. These advancements validate the importance of explicit short-term dynamics modeling for practical lip-reading systems.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"7 2","pages":"Pages 173-187"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143864853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
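
The Local-Temporal Block idea (interframe differences aggregated by multiscale convolution, with squeeze-and-excitation channel reweighting) can be sketched in PyTorch as follows; the dimensions and residual wiring are illustrative guesses, not the published STDNet:

```python
import torch
import torch.nn as nn

class LocalTemporalBlock(nn.Module):
    """Sketch: aggregate interframe differences with multiscale 3D
    convolutions, then reweight channels squeeze-and-excitation style."""

    def __init__(self, channels=64):
        super().__init__()
        self.scales = nn.ModuleList([
            nn.Conv3d(channels, channels, (3, k, k), padding=(1, k // 2, k // 2))
            for k in (1, 3, 5)                         # multiscale spatial kernels
        ])
        self.se = nn.Sequential(                       # squeeze-and-excitation
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // 8, 1), nn.ReLU(),
            nn.Conv3d(channels // 8, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):                              # x: (B, C, T, H, W)
        diff = x[:, :, 1:] - x[:, :, :-1]              # interframe differences
        diff = torch.cat([diff, diff[:, :, -1:]], 2)   # pad back to T frames
        multi = sum(conv(diff) for conv in self.scales)
        return x + multi * self.se(multi)              # residual, channel-reweighted

block = LocalTemporalBlock()
print(block(torch.randn(2, 64, 16, 22, 22)).shape)     # 16-frame lip crops
```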
Segmentation of CAD models using hybrid representation
Virtual Reality Intelligent Hardware Pub Date: 2025-04-01 DOI: 10.1016/j.vrih.2025.01.001
Claude Uwimana , Shengdi Zhou , Limei Yang , Zhuqing Li , Norbelt Mutagisha , Edouard Niyongabo , Bin Zhou
{"title":"Segmentation of CAD models using hybrid representation","authors":"Claude Uwimana ,&nbsp;Shengdi Zhou ,&nbsp;Limei Yang ,&nbsp;Zhuqing Li ,&nbsp;Norbelt Mutagisha ,&nbsp;Edouard Niyongabo ,&nbsp;Bin Zhou","doi":"10.1016/j.vrih.2025.01.001","DOIUrl":"10.1016/j.vrih.2025.01.001","url":null,"abstract":"<div><div>In this paper, we introduce an innovative method for computer-aided design (CAD) segmentation by concatenating meshes and CAD models. Many previous CAD segmentation methods have achieved impressive performance using single representations, such as meshes, CAD, and point clouds. However, existing methods cannot effectively combine different three-dimensional model types for the direct conversion, alignment, and integrity maintenance of geometric and topological information. Hence, we propose an integration approach that combines the geometric accuracy of CAD data with the flexibility of mesh representations, as well as introduce a unique hybrid representation that combines CAD and mesh models to enhance segmentation accuracy. To combine these two model types, our hybrid system utilizes advanced-neural-network techniques to convert CAD models into mesh models. For complex CAD models, model segmentation is crucial for model retrieval and reuse. In partial retrieval, it aims to segment a complex CAD model into several simple components. The first component of our hybrid system involves advanced mesh-labeling algorithms that harness the digitization of CAD properties to mesh models. The second component integrates labelled face features for CAD segmentation by leveraging the abundant multisemantic information embedded in CAD models. This combination of mesh and CAD not only refines the accuracy of boundary delineation but also provides a comprehensive understanding of the underlying object semantics. This study uses the Fusion 360 Gallery dataset. Experimental results indicate that our hybrid method can segment these models with higher accuracy than other methods that use single representations.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"7 2","pages":"Pages 188-202"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143864854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
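
One simple way to exploit a hybrid CAD-mesh representation, in the spirit of the pipeline above, is to let each triangle remember the B-rep face that produced it during tessellation and lift per-triangle labels back onto CAD faces by majority vote. This tiny sketch assumes such face-id bookkeeping exists; it is far simpler than the paper's labeling algorithms:

```python
from collections import Counter

def labels_to_cad_faces(tri_labels, tri_face_ids):
    """Lift per-triangle segmentation labels onto the CAD faces they
    were tessellated from, by majority vote per face id. Hypothetical
    data layout: tessellators can generally record which B-rep face
    produced each triangle."""
    votes = {}
    for label, face in zip(tri_labels, tri_face_ids):
        votes.setdefault(face, Counter())[label] += 1
    return {face: c.most_common(1)[0][0] for face, c in votes.items()}

# Three CAD faces, seven mesh triangles produced by tessellation.
tri_labels = ["hole", "hole", "slot", "slot", "slot", "base", "hole"]
tri_face_ids = [0, 0, 1, 1, 1, 2, 0]
print(labels_to_cad_faces(tri_labels, tri_face_ids))
# {0: 'hole', 1: 'slot', 2: 'base'}
```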
Efficient and lightweight 3D building reconstruction from drone imagery using sparse line and point clouds
Virtual Reality Intelligent Hardware Pub Date: 2025-04-01 DOI: 10.1016/j.vrih.2025.02.001
Xiongjie Yin , Jinquan He , Zhanglin Cheng
{"title":"Efficient and lightweight 3D building reconstruction from drone imagery using sparse line and point clouds","authors":"Xiongjie Yin ,&nbsp;Jinquan He ,&nbsp;Zhanglin Cheng","doi":"10.1016/j.vrih.2025.02.001","DOIUrl":"10.1016/j.vrih.2025.02.001","url":null,"abstract":"<div><div>Efficient three-dimensional (3D) building reconstruction from drone imagery often faces data acquisition, storage, and computational challenges because of its reliance on dense point clouds. In this study, we introduced a novel method for efficient and lightweight 3D building reconstruction from drone imagery using line clouds and sparse point clouds. Our approach eliminates the need to generate dense point clouds, and thus significantly reduces the computational burden by reconstructing 3D models directly from sparse data. We addressed the limitations of line clouds for plane detection and reconstruction by using a new algorithm. This algorithm projects 3D line clouds onto a 2D plane, clusters the projections to identify potential planes, and refines them using sparse point clouds to ensure an accurate and efficient model reconstruction. Extensive qualitative and quantitative experiments demonstrated the effectiveness of our method, demonstrating its superiority over existing techniques in terms of simplicity and efficiency.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"7 2","pages":"Pages 111-126"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143864167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
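
The plane-detection step (project 3D lines to 2D, cluster the projections) can be approximated in a few lines of NumPy and scikit-learn. The (angle, offset) line parameterization and the DBSCAN choice are assumptions for illustration; near-vertical segments, which project to points, would need the paper's point-cloud refinement:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def candidate_walls(segments, eps=0.2):
    """Heuristic stand-in for the paper's algorithm: project each 3D
    segment onto the ground plane, describe its supporting 2D line by
    (orientation mod pi, signed distance to origin), and cluster with
    DBSCAN. Each cluster suggests one facade plane; -1 means noise."""
    feats = []
    for p, q in segments:                        # endpoints, shape (3,)
        d = (q - p)[:2]
        theta = np.arctan2(d[1], d[0]) % np.pi   # undirected 2D orientation
        n = np.array([-np.sin(theta), np.cos(theta)])
        feats.append([theta, n @ p[:2]])         # [angle, line offset]
    return DBSCAN(eps=eps, min_samples=2).fit_predict(np.array(feats))

segs = [(np.array([0., 0, 0]), np.array([0., 4, 0])),   # two horizontal edges
        (np.array([0., 0, 3]), np.array([0., 4, 3])),   # of the same x=0 facade
        (np.array([0., 0, 0]), np.array([5., 0, 1]))]   # edge along y=0
print(candidate_walls(segs))                             # [ 0  0 -1]
```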
Deconfounded fashion image captioning with transformer and multimodal retrieval
Virtual Reality Intelligent Hardware Pub Date: 2025-04-01 DOI: 10.1016/j.vrih.2024.08.002
Tao Peng, Weiqiao Yin, Junping Liu, Li Li, Xinrong Hu
{"title":"Deconfounded fashion image captioning with transformer and multimodal retrieval","authors":"Tao Peng,&nbsp;Weiqiao Yin,&nbsp;Junping Liu,&nbsp;Li Li,&nbsp;Xinrong Hu","doi":"10.1016/j.vrih.2024.08.002","DOIUrl":"10.1016/j.vrih.2024.08.002","url":null,"abstract":"<div><h3>Background</h3><div>The annotation of fashion images is a significantly important task in the fashion industry as well as social media and e-commerce. However, owing to the complexity and diversity of fashion images, this task entails multiple challenges, including the lack of fine-grained captions and confounders caused by dataset bias. Specifically, confounders often cause models to learn spurious correlations, thereby reducing their generalization capabilities.</div></div><div><h3>Method</h3><div>In this work, we propose the Deconfounded Fashion Image Captioning (DFIC) framework, which first uses multimodal retrieval to enrich the predicted captions of clothing, and then constructs a detailed causal graph using causal inference in the decoder to perform deconfounding. Multimodal retrieval is used to obtain semantic words related to image features, which are input into the decoder as prompt words to enrich sentence descriptions. In the decoder, causal inference is applied to disentangle visual and semantic features while concurrently eliminating visual and language confounding.</div></div><div><h3>Results</h3><div>Overall, our method can not only effectively enrich the captions of target images, but also greatly reduce confounders caused by the dataset. To verify the effectiveness of the proposed framework, the model was experimentally verified using the FACAD dataset.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"7 2","pages":"Pages 127-138"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143864168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
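
The multimodal retrieval stage can be pictured as nearest-neighbor search over a vocabulary of attribute words in a joint image-text embedding space. The toy vectors below are random stand-ins for CLIP-style embeddings, not the paper's retrieval module:

```python
import numpy as np

def retrieve_prompt_words(img_emb, word_embs, vocab, k=3):
    """Score attribute words against an image embedding by cosine
    similarity and return the top-k as prompt words for the decoder."""
    img = img_emb / np.linalg.norm(img_emb)
    words = word_embs / np.linalg.norm(word_embs, axis=1, keepdims=True)
    top = np.argsort(words @ img)[::-1][:k]
    return [vocab[i] for i in top]

vocab = ["denim", "floral", "sleeveless", "leather", "pleated"]
rng = np.random.default_rng(0)
word_embs = rng.normal(size=(5, 32))
img_emb = word_embs[1] + 0.1 * rng.normal(size=32)   # image "looks floral"
print(retrieve_prompt_words(img_emb, word_embs, vocab))
```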
DeepSafe: Two-level deep learning approach for disaster victims detection
Virtual Reality Intelligent Hardware Pub Date: 2025-04-01 DOI: 10.1016/j.vrih.2024.08.005
Amir Azizi , Panayiotis Charalambous , Yiorgos Chrysanthou
{"title":"DeepSafe:Two-level deep learning approach for disaster victims detection","authors":"Amir Azizi ,&nbsp;Panayiotis Charalambous ,&nbsp;Yiorgos Chrysanthou","doi":"10.1016/j.vrih.2024.08.005","DOIUrl":"10.1016/j.vrih.2024.08.005","url":null,"abstract":"<div><h3>Background</h3><div>Efficient disaster victim detection (DVD) in urban areas after natural disasters is crucial for minimizing losses. However, conventional search and rescue (SAR) methods often experience delays, which can hinder the timely detection of victims. SAR teams face various challenges, including limited access to debris and collapsed structures, safety risks due to unstable conditions, and disrupted communication networks.</div></div><div><h3>Methods</h3><div>In this paper, we present DeepSafe, a novel two-level deep learning approach for multilevel classification and object detection using a simulated disaster victim dataset. DeepSafe first employs YOLOv8 to classify images into victim and non-victim categories. Subsequently, Detectron2 is used to precisely locate and outline the victims.</div></div><div><h3>Results</h3><div>Experimental results demonstrate the promising performance of DeepSafe in both victim classification and detection. The model effectively identified and located victims under the challenging conditions presented in the dataset.</div></div><div><h3>Conclusion</h3><div>DeepSafe offers a practical tool for real-time disaster management and SAR operations, significantly improving conventional methods by reducing delays and enhancing victim detection accuracy in disaster-stricken urban areas.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"7 2","pages":"Pages 139-154"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143864786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
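
The two-level control flow (classify first, run the detector only on positives) might be wired up as below. The weight files and the "victim" class name are placeholders, not artifacts released by the paper; both libraries are used only through their standard entry points (ultralytics `YOLO`, detectron2 `DefaultPredictor`):

```python
import cv2
from ultralytics import YOLO
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

classifier = YOLO("victim-cls.pt")                 # stage 1: victim / non-victim (placeholder weights)

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = "victim_detector.pth"          # stage 2 weights (placeholder)
detector = DefaultPredictor(cfg)

def deepsafe(image_path):
    """Run the detector only on images the classifier flags as 'victim'."""
    img = cv2.imread(image_path)
    probs = classifier(img)[0].probs               # classification probabilities
    if classifier.names[probs.top1] != "victim":
        return []                                  # skip detection on negatives
    instances = detector(img)["instances"]
    return instances.pred_boxes.tensor.tolist()    # victim bounding boxes
```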