Photography, Mobile, and Immersive Imaging: Latest Publications

Self-calibrated surface acquisition for integrated positioning verification in medical applications
Photography, Mobile, and Immersive Imaging Pub Date: 2019-01-13 DOI: 10.2352/issn.2470-1173.2019.4.pmii-353
S. Jörissen, M. Bleier, A. Nüchter
Abstract: This paper presents a novel approach to a position-verification system for medical applications. By replacing the existing cross-line laser projectors with galvo- or MEMS-based projectors and utilizing the surveillance cameras, a self-calibration of the system is performed and surface acquisition for positioning verification is demonstrated. The functionality is shown by analyzing the radii of calibration spheres and by determining the quality of the captured surface with respect to a reference model. The paper focuses on a demonstration with one camera-projector pair, but the approach can be extended to the multi-camera, multi-projector setups present in treatment rooms. Compared to other systems, this approach requires no external hardware and is thus space- and cost-efficient.

Introduction: Nowadays, a wide range of medical applications demands accurate patient positioning for successful treatment. While positioning for X-ray imaging allows tolerances of several millimeters, since typically rather large areas are imaged, the required accuracy for CT imaging, and especially for the radiation-therapy techniques used in cancer treatment (classical radiation therapy, Volumetric Arc Therapy (VMAT), Intensity-Modulated Radiation Therapy (IMRT), and 3D Conformal Radiation Therapy (3D CRT)), is much higher. The goal of radiation therapy is to damage the cancer cells as much as possible while keeping the radiation dose in the surrounding tissue to an absolute minimum. The de facto standard procedure for patient positioning in radiation therapy is as follows. An initial CT scan is performed to gather anatomical data for the treatment. Markers are placed on the patient's skin; they are later used to align the patient with the orthogonal line lasers in the treatment room. In a previous step, those line lasers are calibrated to intersect directly in the linear accelerator's (linac's) "isocenter", the point where the beams of the rotating linac intersect and where the radiation intensity is therefore at its peak. The isocenter is calibrated by performing the Winston-Lutz test. Once the isocenter is calibrated and the patient aligned, the treatment is started. Typically, the outcome of the initial CT scan is used for several radiation-therapy sessions, as are the markers. Fig. 1 shows a typical treatment room with patient couch, gantry, red room lasers for positioning, and a test phantom. The importance of precise patient positioning, and the potential of optical surface-imaging technologies for both positioning and respiratory gating, has become increasingly clear and was recently confirmed and discussed in publications such as [1], [2], and [3]. This paper provides a new method of verifying the patient's position with respect to the linac's isocenter. (Figure 1: Radiation-therapy room with gantry and positioning lasers, shown in red.) A typical treatment room already contains multiple cameras for surveillance and line lasers for calibration, isocenter visualization, and patient positioning. By replacing …
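The paper validates system functionality by analyzing the radii of calibration spheres captured by the camera-projector pair. One standard way to recover a sphere's center and radius from acquired surface points is an algebraic least-squares fit. The sketch below is a generic illustration of that technique, not the authors' implementation; all names are invented:

```python
import math
import numpy as np

def fit_sphere(points: np.ndarray):
    """Algebraic least-squares sphere fit.

    Expands |p - c|^2 = r^2 into the linear model
    x^2 + y^2 + z^2 = 2*cx*x + 2*cy*y + 2*cz*z + d,
    which is linear in (cx, cy, cz, d) with d = r^2 - |c|^2.
    """
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    rhs = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = sol[:3]
    radius = math.sqrt(sol[3] + center @ center)
    return center, radius

# Synthetic check: sample a sphere of radius 5 centered at (1, 2, 3).
t, p = np.meshgrid(np.linspace(0.1, 3.0, 12), np.linspace(0.0, 6.2, 12))
dirs = np.stack([np.sin(t) * np.cos(p),
                 np.sin(t) * np.sin(p),
                 np.cos(t)], axis=-1).reshape(-1, 3)
center, radius = fit_sphere(np.array([1.0, 2.0, 3.0]) + 5.0 * dirs)
```

Comparing the fitted radius against the known radius of the physical calibration sphere gives a scalar quality measure of the acquired surface, which is the kind of check the abstract describes.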
Citations: 0
Shuttering methods and the artifacts they produce
Photography, Mobile, and Immersive Imaging Pub Date: 2019-01-13 DOI: 10.2352/issn.2470-1173.2019.4.pmii-590
H. Dietz, P. Eberhart
Abstract: When exposure times were measured in minutes, the opening and closing of the shutter was essentially instantaneous. As more sensitive films and brighter optics became available, exposure times decreased, the travel time of the shutter mechanism became increasingly significant, and artifacts became visible. Perhaps the best-known shutter artifacts are the spatio-temporal distortions associated with photographing moving subjects using a focal-plane shutter or sequential electronic sampling of pixels (electronic rolling shutter). However, the shutter mechanism can also cause banding with flickering light sources and strange artifacts in out-of-focus regions (bokeh); it can even impact resolution. This paper experimentally evaluates and discusses the artifacts caused by leaf, focal-plane, electronic-first-curtain, and fully electronic sequential-readout shuttering.

Introduction: Capturing a properly exposed image requires balancing the various exposure parameters. Sensitivity to changes in exposure factors is in general logarithmic, so APEX (the Additive System of Photographic Exposure) encodes all parameters as log values, such that doubling or halving a parameter adds or subtracts one from its APEX value. The result is that equivalent exposures can be determined by the simple linear equation:

Ev = Bv + Sv = Tv + Av

The exposure value, Ev, represents the total amount of image-forming light; two exposures are expected to produce "equivalent" images as long as Ev is the same. The values Bv and Sv are essentially constants for a given scene and camera. The brightness value, Bv, is the metered luminance of the scene being photographed. The speed value, Sv, represents the light sensitivity of the film or sensor, that is, the ISO. In digital cameras, Sv is typically determined by the combination of quantum efficiency, analog gain, and digital gain. However, the quantum efficiency is not easily changed after manufacture, so manipulating the analog and/or digital gain to increase the ISO effectively reduces dynamic range. The remaining parameters, Tv and Av, are the ones the camera can control directly for each capture. The time value, Tv, represents the exposure integration period, commonly known as shutter speed even for systems that lack a mechanical shutter. This is the key parameter of concern in the current work; more precisely, the current work centers on characterizing the subtle differences caused by various implementations of shuttering. For example, some shuttering methods give all pixels the same duration of exposure but do not expose all pixels during the same time interval, thus causing specific types of artifacts. (Figure 1: Still image from a high-speed video of a leaf shutter.) The aperture value, Av, represents the rate of light transmission through the lens. With a perfect lens, Av is determined solely by the aperture f-number, which is simply the ratio of the len…
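The additive APEX relation can be checked numerically: Tv = log2(1/t) and Av = log2(N²), so trading one stop of shutter time for one stop of aperture leaves Ev unchanged. The sketch below is illustrative only (function names are invented, not from the paper):

```python
import math

def apex_tv(shutter_s: float) -> float:
    """Time value: Tv = log2(1 / t); halving the shutter time adds 1."""
    return math.log2(1.0 / shutter_s)

def apex_av(f_number: float) -> float:
    """Aperture value: Av = log2(N^2); stopping down by sqrt(2) adds 1."""
    return math.log2(f_number ** 2)

def exposure_value(shutter_s: float, f_number: float) -> float:
    """Ev = Tv + Av; two settings with equal Ev are 'equivalent' exposures."""
    return apex_tv(shutter_s) + apex_av(f_number)

# 1/125 s at f/8 versus 1/250 s at f/5.6: one stop traded each way,
# so Ev matches to within rounding (f/5.6 is only approximately f/8 / sqrt(2)).
ev_a = exposure_value(1 / 125, 8.0)
ev_b = exposure_value(1 / 250, 5.6)
```

The two Ev values agree to within a few hundredths of a stop, which is exactly the sense in which the paper calls such exposures "equivalent".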
Citations: 5
A methodology in setting auto-flash light activation level of mobile cameras
Photography, Mobile, and Immersive Imaging Pub Date: 2019-01-13 DOI: 10.2352/issn.2470-1173.2019.4.pmii-587
Abtin Ghelmansaraei
Citations: 0
Autofocus by deep reinforcement learning
Photography, Mobile, and Immersive Imaging Pub Date: 2019-01-13 DOI: 10.2352/issn.2470-1173.2019.4.pmii-577
Chin-Cheng Chan, Homer H. Chen
Citations: 8
Issues reproducing handshake on mobile phone cameras
Photography, Mobile, and Immersive Imaging Pub Date: 2019-01-13 DOI: 10.2352/issn.2470-1173.2019.4.pmii-586
François-Xavier Bucher, J. Park, Ari Partinen, P. Hubel
Citations: 1
Face skin tone adaptive automatic exposure control
Photography, Mobile, and Immersive Imaging Pub Date: 2019-01-13 DOI: 10.2352/issn.2470-1173.2019.4.pmii-578
N. El-Yamany, Jarno Nikkanen, Jihyeon Yi
Citations: 0
Fast restoring of high dynamic range image appearance for multi-partial reset sensor
Photography, Mobile, and Immersive Imaging Pub Date: 2019-01-13 DOI: 10.2352/issn.2470-1173.2019.4.pmii-589
Z. Youssfi, F. Hassan
Citations: 0
Credible repair of Sony main-sensor PDAF striping artifacts
Photography, Mobile, and Immersive Imaging Pub Date: 2019-01-13 DOI: 10.2352/issn.2470-1173.2019.4.pmii-585
H. Dietz
Citations: 0
Improved Image Selection for Stack-Based HDR Imaging
Photography, Mobile, and Immersive Imaging Pub Date: 2018-06-19 DOI: 10.2352/issn.2470-1173.2019.4.pmii-581
P. V. Beek
Abstract: Stack-based high dynamic range (HDR) imaging is a technique for achieving a larger dynamic range in an image by combining several low-dynamic-range images acquired at different exposures. Minimizing the set of images to combine, while ensuring that the resulting HDR image fully captures the scene's irradiance, is important to avoid long image-acquisition and post-processing times. The problem of selecting the set of images has received much attention; however, existing methods either are not fully automatic, can be slow, or can fail to fully capture more challenging scenes. In this paper, we propose a fully automatic method for selecting the set of exposures to acquire that is both fast and more accurate. We show on an extensive set of benchmark scenes that our proposed method leads to improved HDR images as measured against ground truth using the mean squared error, a pixel-based metric, as well as a visible-difference predictor and a quality score, both perception-based metrics.
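To make the selection problem concrete: the simplest (naive) baseline picks the fewest exposure offsets whose per-shot dynamic ranges tile the scene's luminance range. This sketch is only an illustration of the problem the abstract describes, not the paper's proposed method; all names and parameters are assumptions:

```python
import math

def select_exposures(min_lum: float, max_lum: float,
                     sensor_dr_stops: float, overlap_stops: float = 1.0):
    """Greedy baseline: return exposure offsets (in stops, with the
    brightest-scene shot at 0) so that the per-shot dynamic ranges,
    overlapping by `overlap_stops`, cover the scene's luminance range."""
    scene_stops = math.log2(max_lum / min_lum)
    if scene_stops <= sensor_dr_stops:
        return [0.0]                        # one shot already covers the scene
    step = sensor_dr_stops - overlap_stops  # new stops gained per extra shot
    extra = math.ceil((scene_stops - sensor_dr_stops) / step)
    return [i * step for i in range(extra + 1)]

# A roughly 16.6-stop scene with a 10-stop sensor needs two shots, 9 stops apart.
offsets = select_exposures(0.01, 1000.0, sensor_dr_stops=10.0)
```

A baseline of this kind is "fully automatic" but blind to the scene's actual irradiance histogram, which is precisely the kind of failure on challenging scenes that the paper's method aims to avoid.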
Citations: 5
Improving the Reliability of Phase Detection Autofocus
Photography, Mobile, and Immersive Imaging Pub Date: 2018-01-28 DOI: 10.2352/ISSN.2470-1173.2018.05.PMII-241
Chin-Cheng Chan, Homer H. Chen
Citations: 8