Timing mark detection on nuclear detonation video

Daniel T. Schmitt, Gilbert L. Peterson
{"title":"核爆视频定时标记检测","authors":"Daniel T. Schmitt, Gilbert L. Peterson","doi":"10.1109/AIPR.2014.7041902","DOIUrl":null,"url":null,"abstract":"During the 1950s and 1960s the United States conducted and filmed over 200 atmospheric nuclear tests establishing the foundations of atmospheric nuclear detonation behavior. Each explosion was documented with about 20 videos from three or four points of view. Synthesizing the videos into a 3D video will improve yield estimates and reduce error factors. The videos were captured at a nominal 2500 frames per second, but range from 2300-3100 frames per second during operation. In order to combine them into one 3D video, individual video frames need to be correlated in time with each other. When the videos were captured a timing system was used that shined light in a video every 5 milliseconds creating a small circle exposed in the frame. This paper investigates several method of extracting the timing from images in the cases when the timing marks are occluded and washed out, as well as when the films are exposed as expected. Results show an improvement over past techniques. For normal videos, occluded videos, and washed out videos, timing is detected with 99.3%, 77.3%, and 88.6% probability with a 2.6%, 11.3%, 5.9% false alarm rate, respectively.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Timing mark detection on nuclear detonation video\",\"authors\":\"Daniel T. Schmitt, Gilbert L. Peterson\",\"doi\":\"10.1109/AIPR.2014.7041902\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"During the 1950s and 1960s the United States conducted and filmed over 200 atmospheric nuclear tests establishing the foundations of atmospheric nuclear detonation behavior. Each explosion was documented with about 20 videos from three or four points of view. Synthesizing the videos into a 3D video will improve yield estimates and reduce error factors. The videos were captured at a nominal 2500 frames per second, but range from 2300-3100 frames per second during operation. In order to combine them into one 3D video, individual video frames need to be correlated in time with each other. When the videos were captured a timing system was used that shined light in a video every 5 milliseconds creating a small circle exposed in the frame. This paper investigates several method of extracting the timing from images in the cases when the timing marks are occluded and washed out, as well as when the films are exposed as expected. Results show an improvement over past techniques. 
For normal videos, occluded videos, and washed out videos, timing is detected with 99.3%, 77.3%, and 88.6% probability with a 2.6%, 11.3%, 5.9% false alarm rate, respectively.\",\"PeriodicalId\":210982,\"journal\":{\"name\":\"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)\",\"volume\":\"2 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AIPR.2014.7041902\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIPR.2014.7041902","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

During the 1950s and 1960s, the United States conducted and filmed over 200 atmospheric nuclear tests, establishing the foundations of atmospheric nuclear detonation behavior. Each explosion was documented with about 20 videos from three or four points of view. Synthesizing the videos into a 3D video will improve yield estimates and reduce error factors. The videos were captured at a nominal 2,500 frames per second, but the actual rate ranges from 2,300 to 3,100 frames per second during operation. In order to combine them into one 3D video, individual video frames need to be correlated in time with each other. When the videos were captured, a timing system was used that shined a light every 5 milliseconds, exposing a small circle in the frame. This paper investigates several methods of extracting the timing from images when the timing marks are occluded or washed out, as well as when the films are exposed as expected. Results show an improvement over past techniques. For normal, occluded, and washed-out videos, timing is detected with 99.3%, 77.3%, and 88.6% probability, with false alarm rates of 2.6%, 11.3%, and 5.9%, respectively.
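The abstract describes the basic mechanism: a timing light fires every 5 ms, so counting how many frames elapse between consecutive marks recovers the true local frame rate (e.g. 13 frames between marks implies 13 / 0.005 = 2,600 fps, near the nominal 2,500), which is what allows frames from different cameras to be aligned in time. The sketch below is a minimal illustration of that idea, not the paper's method: it flags a bright blob in an assumed corner region of each grayscale frame and converts mark-to-mark frame counts into local frame rates. The region coordinates, intensity threshold, pixel count, and the synthetic demo data are all placeholder assumptions.

```python
"""Illustrative sketch (not the authors' algorithm): naive timing-mark
detection plus local frame-rate estimation. All region/threshold values
are assumptions for the example."""
import numpy as np

MARK_INTERVAL_S = 0.005  # timing light fires every 5 ms (from the paper)

def mark_present(frame: np.ndarray,
                 region=(slice(0, 40), slice(0, 40)),
                 intensity_thresh=200,
                 min_bright_pixels=30) -> bool:
    """Return True if a bright blob (candidate timing mark) appears in `region`.

    `frame` is a 2-D uint8 grayscale image; the region and thresholds are
    placeholders and would need tuning on real footage.
    """
    patch = frame[region]
    return int((patch > intensity_thresh).sum()) >= min_bright_pixels

def local_frame_rates(frames) -> list[float]:
    """Estimate the frame rate between consecutive detected timing marks.

    If N frames elapse between two marks that are 5 ms apart, the local
    rate is N / 0.005 fps (e.g. 13 frames -> 2,600 fps).
    """
    mark_frames = [i for i, f in enumerate(frames) if mark_present(f)]
    return [(b - a) / MARK_INTERVAL_S for a, b in zip(mark_frames, mark_frames[1:])]

if __name__ == "__main__":
    # Synthetic demo: dark noise frames with a bright disc every 13th frame.
    rng = np.random.default_rng(0)
    frames = []
    for i in range(40):
        f = rng.integers(0, 60, size=(100, 100), dtype=np.uint8)
        if i % 13 == 0:
            yy, xx = np.ogrid[:40, :40]
            f[:40, :40][(yy - 20) ** 2 + (xx - 20) ** 2 <= 36] = 255
        frames.append(f)
    print(local_frame_rates(frames))  # ~[2600.0, 2600.0, 2600.0]
```

A thresholded blob check like this only covers the "exposed as expected" case; the occluded and washed-out cases the paper targets would require the more robust detection methods it evaluates.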