2020 35th International Conference on Image and Vision Computing New Zealand (IVCNZ): Latest Publications

A Multi-View Stereo Evaluation for Fine Object Reconstruction
2020 35th International Conference on Image and Vision Computing New Zealand (IVCNZ) Pub Date: 2020-11-25 DOI: 10.1109/IVCNZ51579.2020.9290742
C. Peat, O. Batchelor, R. Green
Abstract: Current stereo matching methods based on end-to-end learning frameworks have shown strong results in the field of depth estimation, bringing significant improvements in robustness as well as flexibility in the accuracy/evaluation-time trade-off. In this line of research we observe that the two sub-fields of binocular and multi-view stereo have converged and are based on fundamentally the same architectures. In this work we aim to perform an objective comparison of these methods, controlling for architecture and accounting for the rectification process typically used in binocular stereo; to our knowledge there is no prior work directly comparing the two. We measure the performance of matching between rectified pairs against plane-sweep-based multi-view stereo, testing a range of camera configurations and studying the effectiveness of additional cameras on a synthetic multi-view stereo dataset developed for evaluating 3D reconstruction in agriculture.
Citations: 0
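The plane-sweep matching referenced in the abstract above can be illustrated with a minimal sketch: warp a source view onto the reference view through a family of fronto-parallel depth planes and score photometric agreement at each depth. This is the generic textbook construction, not the authors' implementation; the SAD cost and all parameter names are assumptions.

```python
import numpy as np
import cv2

def plane_sweep_cost_volume(ref_img, src_img, K, R, t, depths):
    """Build a simple SAD cost volume by warping a source view onto the
    reference view via fronto-parallel plane homographies.
    K: (3, 3) intrinsics; R, t: relative pose of the source camera;
    ref_img/src_img: color images of equal size (3 channels assumed)."""
    h, w = ref_img.shape[:2]
    cost = np.zeros((len(depths), h, w), dtype=np.float32)
    K_inv = np.linalg.inv(K)
    n = np.array([0.0, 0.0, 1.0])  # plane normal in the reference frame
    for i, d in enumerate(depths):
        # Homography induced by the plane z = d: H = K (R - t n^T / d) K^-1
        H = K @ (R - np.outer(t, n) / d) @ K_inv
        warped = cv2.warpPerspective(src_img, H, (w, h))
        cost[i] = np.abs(ref_img.astype(np.float32)
                         - warped.astype(np.float32)).mean(axis=-1)
    return cost  # winner-take-all depth map: depths[cost.argmin(axis=0)]
```

With more source views, the per-view costs are typically averaged before the winner-take-all step; learned variants replace the SAD score with matched deep features.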
Progress towards imaging biological filaments using X-ray free-electron lasers
2020 35th International Conference on Image and Vision Computing New Zealand (IVCNZ) Pub Date: 2020-11-25 DOI: 10.1109/IVCNZ51579.2020.9290623
R. D. Arnal, David H. Wojtas, R. Millane
Abstract: X-ray free-electron lasers (XFELs) are opening new frontiers in structural biology. The extreme brilliance of these highly coherent X-ray sources allows ever smaller crystals to be used while still diffracting enough photons to provide sufficient data for structure determination. Biomolecules arranged into filaments are an important class of targets that are expected to benefit greatly from the continuous improvements in XFEL capabilities. Here we first review some of the state-of-the-art research in using XFELs for the imaging of biological filaments. Extrapolating current trends towards single-particle imaging, we consider an intermediate case where diffraction patterns from single filaments can be measured and oriented to form a 3D dataset. Prospects for using iterative projection algorithms (IPAs) for ab initio phase retrieval with such data collected from single filaments are illustrated by reconstructing the electron density of a B-DNA structure from simulated, noisy XFEL data.
Citations: 0
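As a sketch of the iterative projection algorithms (IPAs) mentioned above, the classic error-reduction scheme alternates projections between the measured Fourier magnitudes and a real-space support with positivity. The paper's actual algorithm and constraints may differ; this is the standard baseline form only.

```python
import numpy as np

def error_reduction(magnitudes, support, n_iters=200, seed=0):
    """Minimal error-reduction IPA: alternate projections onto the
    measured Fourier-magnitude set and a real-space support constraint.
    magnitudes: measured |F| array; support: boolean mask, same shape."""
    rng = np.random.default_rng(seed)
    x = rng.random(magnitudes.shape)  # random initial density estimate
    for _ in range(n_iters):
        # Fourier projection: keep current phases, impose measured magnitudes
        F = np.fft.fftn(x)
        F = magnitudes * np.exp(1j * np.angle(F))
        x = np.fft.ifftn(F).real
        # Real-space projection: zero outside support, enforce positivity
        x = np.where(support, np.clip(x, 0.0, None), 0.0)
    return x
```

In practice, variants such as hybrid input-output (HIO) are often interleaved with error reduction to avoid stagnation in local minima.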
Deep Sheep: kinship assignment in livestock from facial images
2020 35th International Conference on Image and Vision Computing New Zealand (IVCNZ) Pub Date: 2020-11-25 DOI: 10.1109/IVCNZ51579.2020.9290558
Lech Szymanski, Michael Lee
Abstract: To non-farmer folk all sheep might look the same, but they are in fact morphologically quite different, including in their facial features. Image analysis has already demonstrated that computer-based facial recognition in livestock is very accurate. We investigate the viability of deep learning for assigning kinship in livestock for use in genetic evaluation: given two images of sheep faces, our proposed model predicts their genetic relationship. In this work we present two CNN models: one for face detection (reporting 80% accuracy) and one for kinship detection (reporting 68% balanced accuracy).
Citations: 2
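A pairwise kinship classifier of the kind described above could take a Siamese form: a shared encoder embeds each face and a small head classifies the pair. The sketch below is a hypothetical architecture; the layer sizes, embedding dimension, and shared-encoder design are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class KinshipNet(nn.Module):
    """Hypothetical Siamese sketch: a shared CNN encoder embeds each sheep
    face, and an MLP head classifies the pair as kin / not kin."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single logit for "related"
        )

    def forward(self, face_a, face_b):
        za, zb = self.encoder(face_a), self.encoder(face_b)
        return self.head(torch.cat([za, zb], dim=1))
```

Training such a model would use pairs labelled by pedigree records, with a binary cross-entropy loss on the logit.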
Edge-Aware Convolution for RGB-D Image Segmentation
2020 35th International Conference on Image and Vision Computing New Zealand (IVCNZ) Pub Date: 2020-11-25 DOI: 10.1109/IVCNZ51579.2020.9290608
Rongsen Chen, Fang-Lue Zhang, Taehyun Rhee
Abstract: Convolutional neural networks using RGB-D images as input have shown superior performance in recent semantic segmentation research. In RGB-D data, the depth channel encodes information from the 3D spatial domain, which differs inherently from the color channels; it therefore needs special treatment rather than being processed as just another channel of the input signal. To this end, we propose a simple but non-trivial edge-aware convolutional kernel that uses the geometric information contained in the depth channel to extract feature maps more effectively. The edge-aware convolutional kernel is built upon the regular convolutional kernel, so it can be used to restructure existing CNN models to achieve stable and effective feature extraction for RGB-D data. We compare our results with a closely related previous method to show that our method provides more effective and stable feature extraction.
Citations: 2
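One plausible reading of an edge-aware kernel built on a regular convolution is to attenuate each neighbour's contribution by its depth difference from the centre pixel, so filters respond weakly across depth discontinuities. The Gaussian attenuation below is our assumption, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def edge_aware_conv(feat, depth, weight, sigma=0.1):
    """Sketch of a depth-edge-aware convolution (assumed form).
    feat: (B, C, H, W) features; depth: (B, 1, H, W) depth map;
    weight: (out_C, C, k, k) regular convolution weights."""
    b, c, h, w = feat.shape
    out_c, _, k, _ = weight.shape
    pad = k // 2
    # Unfold features and depth into k*k neighbourhoods per pixel
    f = F.unfold(feat, k, padding=pad).view(b, c, k * k, h * w)
    d = F.unfold(depth, k, padding=pad).view(b, 1, k * k, h * w)
    centre = depth.reshape(b, 1, 1, h * w)
    # Gaussian of depth difference: ~1 on flat regions, ~0 across edges
    mask = torch.exp(-((d - centre) ** 2) / (2 * sigma ** 2))
    f = (f * mask).reshape(b, c * k * k, h * w)
    out = torch.einsum('ok,bkl->bol', weight.reshape(out_c, -1), f)
    return out.reshape(b, out_c, h, w)
```

Because the masking wraps a standard weight tensor, a layer like this can drop into an existing CNN in place of `nn.Conv2d`, which matches the abstract's claim that existing models can be restructured.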
Voice Interaction for Augmented Reality Navigation Interfaces with Natural Language Understanding
2020 35th International Conference on Image and Vision Computing New Zealand (IVCNZ) Pub Date: 2020-11-25 DOI: 10.1109/IVCNZ51579.2020.9290643
Junhong Zhao, Christopher James Parry, R. K. D. Anjos, C. Anslow, Taehyun Rhee
Abstract: Voice interaction with natural language understanding (NLU) has been extensively explored on desktop computers, handheld devices, and in human-robot interaction, but there is limited research into voice interaction with NLU in augmented reality (AR). Voice interaction in AR offers benefits such as high naturalness and being hands-free. In this project we introduce VOARLA, an NLU-powered AR voice interface that guides a courier driver in delivering a package. A user study was completed to evaluate VOARLA against an AR voice interface without NLU, investigating the effectiveness of NLU in an AR navigation interface from three aspects: accuracy, productivity, and the command learning curve. Results found that using NLU in AR increases the accuracy of the interface by 15%; however, higher accuracy did not correlate with an increase in productivity. Results suggest that NLU helped users remember the commands on the first run, when they were unfamiliar with the system, indicating that NLU in a hands-free AR application can ease the learning curve for new users.
Citations: 3
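To make the NLU-versus-fixed-commands distinction concrete, a minimal intent-parsing sketch is shown below: free-form phrasings map onto a small set of intents with extracted slots. The intents, phrasings, and regular-expression grammar are entirely hypothetical and are not VOARLA's command set.

```python
import re

# Hypothetical intents for a delivery-navigation voice interface.
INTENTS = {
    "navigate": re.compile(
        r"\b(take|bring|navigate|guide)\b.*\bto\b\s+(?P<place>.+)", re.I),
    "confirm_delivery": re.compile(
        r"\b(delivered|dropped off|done)\b", re.I),
}

def parse_utterance(text):
    """Return (intent, slots) for a recognised utterance, else (None, {})."""
    for intent, pattern in INTENTS.items():
        m = pattern.search(text)
        if m:
            return intent, {k: v for k, v in m.groupdict().items() if v}
    return None, {}

print(parse_utterance("Hey, can you take me to 12 Queen Street?"))
# -> ('navigate', {'place': '12 Queen Street?'})
```

A non-NLU baseline, by contrast, would accept only exact command strings ("navigate to destination"), which is the memorisation burden the study measures.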
A fair comparison of the EEG signal classification methods for alcoholic subject identification
2020 35th International Conference on Image and Vision Computing New Zealand (IVCNZ) Pub Date: 2020-11-25 DOI: 10.1109/IVCNZ51579.2020.9290683
M. Awrangjeb, J. D. C. Rodrigues, Bela Stantic, V. Estivill-Castro
Abstract: The electroencephalogram (EEG) signal, which records the electrical activity in the brain, is useful for assessing the mental state of the alcoholic subject. Since the public release of an EEG dataset by the University of California, Irvine, there have been many attempts to classify the EEG signals of 'alcoholic' and 'healthy' subjects. These classification methods are hard to compare, as they use different subsets of the dataset and many of their algorithmic settings are unknown; comparing their published results using such inconsistent and unknown information is unfair. This paper attempts a fair comparison by presenting a level playing field where a public subset of the dataset is employed with known algorithmic settings. Two recently proposed high-performing EEG signal classification methods are implemented with different classifiers and cross-validation techniques. When compared, the wavelet packet decomposition method with the Naïve Bayes classifier and the k-fold cross-validation technique outperforms the other method.
Citations: 0
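The winning pipeline named above (wavelet packet decomposition features, a Naïve Bayes classifier, k-fold cross-validation) can be sketched as follows. The wavelet family, decomposition level, energy features, and placeholder data are assumptions, since the abstract does not specify them.

```python
import numpy as np
import pywt
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def wpd_features(signal, wavelet="db4", level=4):
    """Energy of each leaf node of a wavelet packet decomposition;
    one plausible feature set, with wavelet and level assumed."""
    wp = pywt.WaveletPacket(signal, wavelet=wavelet, maxlevel=level)
    leaves = wp.get_level(level, order="natural")
    return np.array([np.sum(node.data ** 2) for node in leaves])

# X_raw: (n_trials, n_samples) EEG epochs; y: 0 = healthy, 1 = alcoholic.
# Placeholder random data stands in for the UCI EEG dataset.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((60, 256))
y = rng.integers(0, 2, size=60)

X = np.stack([wpd_features(s) for s in X_raw])
scores = cross_val_score(GaussianNB(), X, y, cv=10)  # k-fold CV, k=10
print(f"mean accuracy: {scores.mean():.2f}")
```

Fixing the fold splits and reporting them alongside the classifier settings is exactly the kind of "known algorithmic settings" the paper argues a fair comparison requires.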
Incorporating Human Body Shape Guidance for Cloth Warping in Model to Person Virtual Try-on Problems
2020 35th International Conference on Image and Vision Computing New Zealand (IVCNZ) Pub Date: 2020-11-25 DOI: 10.1109/IVCNZ51579.2020.9290603
Debapriya Roy, Sanchayan Santra, B. Chanda
Abstract: The world of retail has witnessed a lot of change in the last few decades, and with a size of 2.4 trillion, the fashion industry is way ahead of others in this respect. With the blessing of technologies like virtual try-on (vton), even online shoppers can now virtually try a product before buying. However, current image-based virtual try-on methods still have a long way to go when it comes to producing realistic outputs. In general, vton methods work in two stages: the first stage warps the source cloth, and the second merges the cloth with the person image to predict the final try-on output. While the second stage is comparatively easy to handle with neural networks, predicting an accurate warp is difficult, as replicating actual human body deformation is challenging. A fundamental issue in the vton domain is data: although many images of cloth are available on the internet, on social media or e-commerce websites, most of them show a human wearing the cloth, whereas existing approaches are constrained to take separate cloth images as the input source clothing. To address these problems, we propose a model-to-person cloth warping strategy, where the objective is to align the cloth segmented from the model image so that it fits the target person, alleviating the need for separate cloth images. Compared to existing warping approaches, our method shows improvement especially for complex cloth patterns. Rigorous experiments on various public-domain datasets establish the efficacy of this method compared to benchmark methods.
Citations: 2
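A common way to realise landmark-guided cloth warping of the kind described above is a thin-plate spline fitted on corresponding body landmarks; the sketch below uses that standard technique as an assumption and is not the paper's method. The landmark arrays would come from a pose estimator run on the model and person images.

```python
import cv2
import numpy as np
from scipy.interpolate import RBFInterpolator

def warp_model_cloth(cloth, model_pts, person_pts, out_shape):
    """Hedged sketch of model-to-person cloth warping: fit a thin-plate
    spline on body landmarks and resample the cloth segmented from the
    model image into the target person's frame.
    model_pts/person_pts: (N, 2) float (x, y) landmark arrays."""
    h, w = out_shape
    # Backward map: for every output pixel (person frame), find its source
    # location in the model image, so we can sample with cv2.remap.
    tps = RBFInterpolator(person_pts, model_pts, kernel="thin_plate_spline")
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.column_stack([xs.ravel(), ys.ravel()]).astype(np.float64)
    src = tps(grid).astype(np.float32)
    map_x = src[:, 0].reshape(h, w)
    map_y = src[:, 1].reshape(h, w)
    return cv2.remap(cloth, map_x, map_y, cv2.INTER_LINEAR)
```

Fitting the spline on body landmarks rather than cloth corners is one way to encode the "human body shape guidance" of the title: the warp then follows the target person's pose instead of a rectangle-to-rectangle mapping.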
A rapid method of hypercube stitching for snapshot multi-camera system
2020 35th International Conference on Image and Vision Computing New Zealand (IVCNZ) Pub Date: 2020-11-25 DOI: 10.1109/IVCNZ51579.2020.9290723
Y. Dixit, M. Al-Sarayreh, C. Craigie, M. M. Reis
Abstract: Snapshot hyperspectral imaging (HSI) systems are rapid and ultra-compact, making them potential candidates for real-time food analysis. However, existing technology limits the working wavelength range of these cameras, requiring multiple cameras to cover a wider spectral range. We present a rapid hypercube-stitching method that generates an efficiently stitched hypercube from two different HSI cameras, providing a wider spectral range as well as spatial information, and showing reliability and robustness over manual stitching. The method successfully stitched the respective hypercubes from near-infrared (NIR) and visible (Vis) cameras, producing a much lower number of non-overlapping pixels between the hypercubes than would be possible with manual stitching. We demonstrate the application of our method by stitching the hypercubes (NIR and Vis) of 32 beef samples, analyzing the stitching efficiency and the reliability of the spectral information.
Citations: 0
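A minimal sketch of NIR and Vis hypercube stitching is given below, assuming ECC-based registration of one representative band per camera followed by concatenation along the spectral axis. The paper's actual registration step is not described in the abstract, so this pipeline is illustrative only.

```python
import cv2
import numpy as np

def stitch_hypercubes(cube_vis, cube_nir):
    """Assumed pipeline: register the two cameras' views using one
    representative grayscale band each, then concatenate the aligned
    cubes spectrally. cube_*: (H, W, bands) float32, same scene."""
    # Representative bands for registration: mean over the spectral axis
    g_vis = cv2.normalize(cube_vis.mean(-1), None, 0, 1, cv2.NORM_MINMAX)
    g_nir = cv2.normalize(cube_nir.mean(-1), None, 0, 1, cv2.NORM_MINMAX)
    warp = np.eye(2, 3, dtype=np.float32)
    _, warp = cv2.findTransformECC(
        g_nir.astype(np.float32), g_vis.astype(np.float32), warp,
        cv2.MOTION_AFFINE)
    h, w = cube_nir.shape[:2]
    # warpAffine handles at most 4 channels, so warp band by band
    aligned = np.stack(
        [cv2.warpAffine(cube_vis[..., i], warp, (w, h),
                        flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
         for i in range(cube_vis.shape[-1])], axis=-1)
    return np.concatenate([aligned, cube_nir], axis=-1)
```

The non-overlapping-pixel count the abstract reports can then be measured directly from the warped footprint, e.g. as the pixels where the aligned Vis cube has no valid data.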
Image Metrics for Deconvolution of Satellites in Low Earth Orbit
2020 35th International Conference on Image and Vision Computing New Zealand (IVCNZ) Pub Date: 2020-11-25 DOI: 10.1109/IVCNZ51579.2020.9290535
Sierra Hickman, Vishnu Anand Muruganandan, S. Weddell, R. Clare
Abstract: Satellites and space debris clutter low-Earth orbital paths, causing concern for future launches as the clutter increases the probability of in-orbit collisions; it is therefore important to track and characterise these objects. However, Earth's atmosphere distorts images collected by ground-based telescopes, a distortion that can be reduced through post-processing deconvolution to improve images of satellites and space debris. A metric is needed to quantify the quality of the images and the deconvolution of these extended objects at finite distances, as well as to characterise the structure and brightness of un-symmetrical satellites in low Earth orbit. This paper uses images of the International Space Station to investigate the structural similarity metric and regional properties as potential satellite imaging metrics. Our results show that the similarity metric can characterise the orientation of the satellite relative to the observer, while the regional properties serve to quantify the image quality and the improvement due to deconvolution.
Citations: 2
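Both metric families named in the abstract are available in scikit-image. The sketch below computes SSIM against a reference frame plus region properties of the brightest segmented blob; the Otsu threshold and largest-blob-is-satellite assumption are ours, not the paper's settings.

```python
from skimage.metrics import structural_similarity as ssim
from skimage.measure import label, regionprops
from skimage.filters import threshold_otsu

def satellite_metrics(img, reference):
    """Sketch of the two metric families: SSIM against a reference frame,
    and region properties (area, eccentricity, mean intensity) of the
    largest bright blob, assumed to be the satellite."""
    score = ssim(reference, img, data_range=img.max() - img.min())
    mask = img > threshold_otsu(img)  # segment bright structure
    regions = regionprops(label(mask), intensity_image=img)
    sat = max(regions, key=lambda r: r.area)
    return {
        "ssim": score,
        "area_px": sat.area,
        "eccentricity": sat.eccentricity,
        "mean_intensity": sat.intensity_mean,
    }
```

Tracking these values before and after deconvolution gives the kind of quantitative improvement measure the paper reports for ISS imagery.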
A Review of Emerging Video Codecs: Challenges and Opportunities
2020 35th International Conference on Image and Vision Computing New Zealand (IVCNZ) Pub Date: 2020-11-25 DOI: 10.1109/IVCNZ51579.2020.9290536
A. Punchihewa, D. Bailey
Abstract: This paper presents a review of video codecs that are in use or currently being developed, the codec development process, current trends, and the challenges and opportunities for the research community. There is a paradigm shift in video coding standards: multiple video standards are being standardised concurrently by standardising organisations, while royalty-free video compression standards are also being developed and standardised. The introduction of enhancement-layer-based coding standards will extend the lifetime of legacy video codecs, finding a middle ground between improved coding efficiency, computational complexity, and power requirements. The video coding landscape is changing, challenged by the emergence of multiple video coding standards for different use cases. These may offer opportunities for the coding industry, especially for New Zealand researchers serving niche markets in video games, computer-generated video, and animation.
Citations: 2