Exploring lightweight convolution neural networks for segmenting striation marks from firearm bullet images

IF 0.8 Q4 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Genevieve Chyrmang, Barun Barua, Kangkana Bora, R. Suresh
{"title":"探索从枪支子弹图像中分割条纹痕迹的轻量级卷积神经网络","authors":"Genevieve Chyrmang ,&nbsp;Barun Barua ,&nbsp;Kangkana Bora ,&nbsp;R. Suresh","doi":"10.1016/j.fri.2024.200611","DOIUrl":null,"url":null,"abstract":"<div><div>In the field of forensic ballistic science, the identification of firearms is accomplished by examining the class and individual distinctive marks of fired bullets or cartridge casings instances discovered at the crime scene. The distinctive striation mark, which is engraved on bullets by gun barrels while firing owing to rifling, is one of the important characteristics examined. These striation marks serve as “fingerprints” of firearms. However, manual identification is time-consuming and arduous, necessitating the need for automation. This study focuses on automating striation mark segmentation using novel lightweight deep-learning segmentation models. The motivation behind this study is two-fold: first to assess if lightweight models can replace larger models without sacrificing accuracy and secondly to leverage their efficiency for resource-limited hardware, paving the way for real-time solutions. Proposed models include Mobile Striation-Net (MSN), Attention Gatted MobileStriation-Net (AMSN), Depthwise Attention Gatted Mobile Striation-Net (D-AMSN), and two derivatives of it, which are a pruned version and a quantized variant named PD-AMSN and QD-AMSN respectively. Extensive evaluation of models includes metrics Accuracy, Dice coefficient, Intersection over Union (IOU), Precision, and Recall. A thorough comparative analysis takes into account all models based on parameter counts, frames per second, inference time, and size. Findings shows, all models attains above 95% Accuracy. Dice coefficient and IOU ranges from 0.48 to 0.54 &amp; 0.59 to 0.6 respectively. Precision and Recall consistently range between 63.42% to 64.26% and 73.6% to 77.68% respectively. The “Pruned” variant PD-AMSN performs notably better across metrics than the D-AMSN model demonstrating effective pruning. On the other hand, quantized QD-AMSN have only 6 MB size and 95.42% accuracy. Our models are positioned as forerunners in terms of lightweight design, attention gate integration, decreased parameter counts, and improved accuracy when compared to other previous models. The effectiveness of our models for segmentation tasks and their potential for developing into a portable, real-time automated striation mark segmentation systems in the future, are highlighted through the in-depth analysis.</div></div>","PeriodicalId":40763,"journal":{"name":"Forensic Imaging","volume":"39 ","pages":"Article 200611"},"PeriodicalIF":0.8000,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Exploring lightweight convolution neural networks for segmenting striation marks from firearm bullet images\",\"authors\":\"Genevieve Chyrmang ,&nbsp;Barun Barua ,&nbsp;Kangkana Bora ,&nbsp;R. Suresh\",\"doi\":\"10.1016/j.fri.2024.200611\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In the field of forensic ballistic science, the identification of firearms is accomplished by examining the class and individual distinctive marks of fired bullets or cartridge casings instances discovered at the crime scene. The distinctive striation mark, which is engraved on bullets by gun barrels while firing owing to rifling, is one of the important characteristics examined. 
These striation marks serve as “fingerprints” of firearms. However, manual identification is time-consuming and arduous, necessitating the need for automation. This study focuses on automating striation mark segmentation using novel lightweight deep-learning segmentation models. The motivation behind this study is two-fold: first to assess if lightweight models can replace larger models without sacrificing accuracy and secondly to leverage their efficiency for resource-limited hardware, paving the way for real-time solutions. Proposed models include Mobile Striation-Net (MSN), Attention Gatted MobileStriation-Net (AMSN), Depthwise Attention Gatted Mobile Striation-Net (D-AMSN), and two derivatives of it, which are a pruned version and a quantized variant named PD-AMSN and QD-AMSN respectively. Extensive evaluation of models includes metrics Accuracy, Dice coefficient, Intersection over Union (IOU), Precision, and Recall. A thorough comparative analysis takes into account all models based on parameter counts, frames per second, inference time, and size. Findings shows, all models attains above 95% Accuracy. Dice coefficient and IOU ranges from 0.48 to 0.54 &amp; 0.59 to 0.6 respectively. Precision and Recall consistently range between 63.42% to 64.26% and 73.6% to 77.68% respectively. The “Pruned” variant PD-AMSN performs notably better across metrics than the D-AMSN model demonstrating effective pruning. On the other hand, quantized QD-AMSN have only 6 MB size and 95.42% accuracy. Our models are positioned as forerunners in terms of lightweight design, attention gate integration, decreased parameter counts, and improved accuracy when compared to other previous models. The effectiveness of our models for segmentation tasks and their potential for developing into a portable, real-time automated striation mark segmentation systems in the future, are highlighted through the in-depth analysis.</div></div>\",\"PeriodicalId\":40763,\"journal\":{\"name\":\"Forensic Imaging\",\"volume\":\"39 \",\"pages\":\"Article 200611\"},\"PeriodicalIF\":0.8000,\"publicationDate\":\"2024-11-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Forensic Imaging\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2666225624000344\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Forensic Imaging","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666225624000344","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract


In forensic ballistics, firearms are identified by examining the class and individual characteristic marks on fired bullets or cartridge casings recovered at the crime scene. Among the most important of these are the striation marks engraved on a bullet by the rifling of the gun barrel during firing; these marks serve as the "fingerprints" of a firearm. Manual identification, however, is time-consuming and arduous, motivating automation. This study focuses on automating striation-mark segmentation using novel lightweight deep-learning segmentation models. The motivation is two-fold: first, to assess whether lightweight models can replace larger models without sacrificing accuracy, and second, to exploit their efficiency on resource-limited hardware, paving the way for real-time solutions. The proposed models are Mobile Striation-Net (MSN), Attention Gated Mobile Striation-Net (AMSN), Depthwise Attention Gated Mobile Striation-Net (D-AMSN), and two derivatives of the latter: a pruned version (PD-AMSN) and a quantized variant (QD-AMSN). The models are evaluated with Accuracy, Dice coefficient, Intersection over Union (IoU), Precision, and Recall, and compared on parameter count, frames per second, inference time, and model size. All models attain above 95% accuracy; the Dice coefficient ranges from 0.48 to 0.54 and IoU from 0.59 to 0.6, while Precision and Recall range from 63.42% to 64.26% and from 73.6% to 77.68%, respectively. The pruned variant PD-AMSN performs notably better across metrics than D-AMSN, demonstrating effective pruning, while the quantized QD-AMSN occupies only 6 MB and still reaches 95.42% accuracy. Compared with previous models, the proposed models lead in lightweight design, attention-gate integration, reduced parameter counts, and improved accuracy. The in-depth analysis highlights their effectiveness for segmentation tasks and their potential to develop into portable, real-time automated striation-mark segmentation systems.
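
The abstract reports pixel Accuracy, Dice coefficient, IoU, Precision, and Recall. As a point of reference, the following is a minimal sketch (not the authors' code) of how these metrics are commonly computed from binary ground-truth and predicted masks; the function name and the NumPy-based formulation are illustrative assumptions.

```python
# Minimal sketch of the segmentation metrics named in the abstract, assuming
# binary (0/1) predicted and ground-truth masks of the same shape.
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> dict:
    """Return pixel accuracy, Dice, IoU, precision and recall for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)

    tp = np.logical_and(pred, truth).sum()    # striation pixels found
    fp = np.logical_and(pred, ~truth).sum()   # background marked as striation
    fn = np.logical_and(~pred, truth).sum()   # striation pixels missed
    tn = np.logical_and(~pred, ~truth).sum()  # background correctly rejected

    return {
        "accuracy":  (tp + tn) / (tp + tn + fp + fn + eps),
        "dice":      2 * tp / (2 * tp + fp + fn + eps),
        "iou":       tp / (tp + fp + fn + eps),
        "precision": tp / (tp + fp + eps),
        "recall":    tp / (tp + fn + eps),
    }
```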
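
The abstract names depthwise convolutions and attention gates as the key ingredients of D-AMSN but does not detail the architecture. The sketch below is a hypothetical Keras illustration of these two building blocks; the layer arrangement, kernel sizes, and the assumption that the gating signal sits one resolution level below the skip connection are illustrative choices, not the paper's design.

```python
# Hypothetical building blocks: a depthwise-separable convolution block and an
# additive attention gate, assuming a U-Net-style encoder/decoder layout.
import tensorflow as tf
from tensorflow.keras import layers

def depthwise_block(x, filters: int):
    """Depthwise-separable convolution: cheap per-channel spatial filtering
    followed by a 1x1 convolution that mixes channels."""
    x = layers.DepthwiseConv2D(3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(filters, 1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def attention_gate(skip, gating, inter_channels: int):
    """Additive attention gate: re-weight encoder skip features using a coarser
    decoder (gating) signal, assumed here to be at half the skip resolution."""
    theta = layers.Conv2D(inter_channels, 1)(skip)
    phi = layers.UpSampling2D(2)(layers.Conv2D(inter_channels, 1)(gating))
    attn = layers.Activation("relu")(layers.Add()([theta, phi]))
    attn = layers.Conv2D(1, 1, activation="sigmoid")(attn)
    return skip * attn  # broadcast the single-channel attention map over all channels
```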
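The 6 MB size of QD-AMSN comes from quantization. The authors' toolchain is not stated in the abstract; the snippet below is a generic sketch of post-training quantization with TensorFlow Lite, one common way to obtain a few-megabyte deployable model. The output file name is hypothetical.

```python
# Generic post-training quantization sketch (toolchain assumed, not the authors').
import tensorflow as tf

def quantize_to_tflite(model: tf.keras.Model, out_path: str = "qd_amsn_quant.tflite") -> int:
    """Convert a trained Keras model to a quantized TFLite flatbuffer and
    return the resulting file size in bytes."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable weight quantization
    tflite_bytes = converter.convert()
    with open(out_path, "wb") as f:
        f.write(tflite_bytes)
    return len(tflite_bytes)
```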
Source journal
Forensic Imaging (RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING)
CiteScore: 2.20
Self-citation rate: 27.30%
Articles published: 39