Machine Graphics and Vision: Latest Publications

Isocontouring with sharp corner features
Machine Graphics and Vision Pub Date : 2019-12-01 DOI: 10.22630/mgv.2018.27.1.2
S. Gong, Timothy S Newman
Abstract: A method that achieves closed boundary finding in images (including slice images) with sub-pixel precision, while enabling expression of sharp corners in that boundary, is described. The method is a new extension to the well-known Marching Squares (MS) 2D isocontouring method that recovers sharp corner features which MS usually renders as chamfered. The method has two major components: (1) detection of areas in the input image likely to contain sharp corner features, and (2) examination of image locations directly adjacent to the areas with likely corners. Results of applying the new method, as well as its performance analysis, are also shown.
Citations: 0
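The corner-recovery idea can be illustrated with a small sketch (our illustration, not the authors' implementation): where a cell is flagged as containing a likely corner, the sharp vertex can be estimated as the intersection of the two tangent lines through the edge-crossing points, instead of the chamfered segment that plain Marching Squares would draw. The point/normal inputs below are assumed for the example.

```python
def sharp_corner(p1, n1, p2, n2):
    """Estimate a sharp corner as the intersection of the tangent lines
    through p1 and p2 (with normals n1, n2).  Returns None when the
    normals are nearly parallel, i.e. no corner is present."""
    a, b = n1
    c, d = n2
    det = a * d - b * c
    if abs(det) < 1e-9:
        return None  # parallel tangents: keep the plain MS segment
    r1 = a * p1[0] + b * p1[1]
    r2 = c * p2[0] + d * p2[1]
    x = (r1 * d - b * r2) / det
    y = (a * r2 - r1 * c) / det
    return (x, y)

# A right-angle corner: tangent lines x = 1 and y = 1 meet at (1, 1).
corner = sharp_corner((1.0, 0.5), (1.0, 0.0), (0.5, 1.0), (0.0, 1.0))
```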
Constraint-based algorithm to estimate the line of a milling edge
Machine Graphics and Vision Pub Date : 2019-01-01 DOI: 10.22630/mgv.2019.28.1.6
Marcin Bator, K. Śmietańska
Abstract: Each practical task has its constraints, which limit the number of potential solutions. Incorporating the constraints into the structure of an algorithm makes it possible to speed up computations by reducing the search space and excluding wrong results. However, such an algorithm must be designed for one task only: its usefulness is limited to tasks that share the same set of constraints, so it is sometimes restricted to the single application for which it was designed and is difficult to generalise. An algorithm to estimate the straight line representing a milling edge is presented. The algorithm was designed for measurement purposes and meets the related precision requirements.
Citations: 1
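A minimal sketch of how a hard constraint can shrink the search space of a line estimator (our illustration under assumed constraints, not the published algorithm): only line orientations inside an allowed angular range are ever scored, so wrong results outside the range are excluded by construction.

```python
import math

def fit_constrained_line(points, angle_range, n_steps=180):
    """Grid-search only the allowed angle range (the task constraint)
    for the line through the centroid that minimises the summed
    point-to-line distance."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    lo, hi = angle_range
    best = None
    for i in range(n_steps + 1):
        theta = lo + (hi - lo) * i / n_steps
        # unit normal of a line whose direction angle is theta
        nx, ny = -math.sin(theta), math.cos(theta)
        cost = sum(abs(nx * (x - cx) + ny * (y - cy)) for x, y in points)
        if best is None or cost < best[0]:
            best = (cost, theta)
    return (cx, cy), best[1]

# Points roughly along a horizontal edge; allow only near-horizontal lines.
pts = [(0, 0.0), (1, 0.1), (2, -0.1), (3, 0.0)]
centre, theta = fit_constrained_line(pts, (-math.pi / 8, math.pi / 8))
```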
Data augmentation techniques for transfer learning improvement in drill wear classification using convolutional neural network
Machine Graphics and Vision Pub Date : 2019-01-01 DOI: 10.22630/mgv.2019.28.1.1
J. Kurek, J. Aleksiejuk-Gawron, Izabella Antoniuk, J. Górski, Albina Jegorowa, M. Kruk, A. Orłowski, J. Pach, B. Świderski, Grzegorz Wieczorek
Abstract: This paper presents an improved method for recognizing the drill state on the basis of images of holes drilled in a laminated chipboard, using a convolutional neural network (CNN) and data augmentation techniques. Three classes were used to describe the drill state: red, for a drill that is worn out and should be replaced; yellow, for a state in which the system should send a warning to the operator, indicating that the element should be checked manually; and green, denoting a drill that is still in good condition and can be used further in the production process. The presented method combines the advantages of transfer learning and data augmentation to improve the accuracy of the obtained evaluations. In contrast to classical deep learning methods, transfer learning requires much smaller training data sets to achieve acceptable results. At the same time, data augmentation customized for drill wear recognition makes it possible to expand the original dataset and to improve the overall accuracy. The experiments performed have confirmed the suitability of the presented approach for accurate class recognition in the given problem, even with a small original dataset.
Citations: 8
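The dataset-expansion step can be sketched in a few lines (a generic flip-based augmentation of our own choosing; the paper's customized transforms are not reproduced here). Each hole image yields several label-preserving variants, multiplying the training set size.

```python
def augment(image):
    """Derive extra training samples from one image (a nested list of
    pixel rows) via horizontal flip, vertical flip and a 180-degree
    rotation, multiplying the dataset by 4."""
    h_flip = [row[::-1] for row in image]        # mirror left-right
    v_flip = image[::-1]                         # mirror top-bottom
    rot180 = [row[::-1] for row in image[::-1]]  # both flips combined
    return [image, h_flip, v_flip, rot180]

img = [[1, 2],
       [3, 4]]
samples = augment(img)
```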
Context-based segmentation of the longissimus muscle in beef with a deep neural network
Machine Graphics and Vision Pub Date : 2019-01-01 DOI: 10.22630/mgv.2019.28.1.5
Karol Talacha, Izabella Antoniuk, L. Chmielewski, M. Kruk, J. Kurek, A. Orłowski, J. Pach, A. Półtorak, B. Świderski, Grzegorz Wieczorek
Abstract: The problem of segmenting the cross-section through the longissimus muscle in beef carcasses with computer vision methods was investigated. The available data were 111 cross-section images coming from 28 cows (typically four images per cow). The training data were the pixels of the muscles, marked manually. The AlexNet deep convolutional neural network was used as the classifier, and single pixels were the classified objects. Each pixel was presented to the network together with its small circular neighbourhood, and with its context represented by the further neighbourhood, darkened by halving the image intensity. The average classification accuracy was 96%. The accuracy without darkening the context was smaller, with a small but statistically significant difference. Segmentation of the longissimus muscle is the introductory stage for the subsequent steps of assessing the quality of beef for alimentary purposes.
Citations: 0
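The "darkened context" input can be sketched as follows (a simplified square-window version of our own; the paper uses circular neighbourhoods): pixels beyond an inner radius form the context and have their intensity halved before the patch is handed to the classifier.

```python
def context_patch(image, cx, cy, r_inner, r_outer):
    """Copy a square window around pixel (cx, cy); pixels farther than
    r_inner from the centre are treated as context and darkened by
    halving their intensity, as described in the abstract."""
    patch = []
    for y in range(cy - r_outer, cy + r_outer + 1):
        row = []
        for x in range(cx - r_outer, cx + r_outer + 1):
            v = image[y][x]
            if (x - cx) ** 2 + (y - cy) ** 2 > r_inner ** 2:
                v //= 2  # context pixel: halve the intensity
            row.append(v)
        patch.append(row)
    return patch

img = [[100] * 5 for _ in range(5)]
patch = context_patch(img, 2, 2, 1, 2)
```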
Textural features based on run length encoding in the classification of furniture surfaces with the orange skin defect
Machine Graphics and Vision Pub Date : 2019-01-01 DOI: 10.22630/mgv.2019.28.1.4
J. Pach, Izabella Antoniuk, L. Chmielewski, J. Górski, M. Kruk, J. Kurek, A. Orłowski, K. Śmietańska, B. Świderski, Grzegorz Wieczorek
Abstract: Textural features based upon thresholding and run length encoding have been successfully applied to the problem of classifying the quality of lacquered furniture surfaces exhibiting the defect known as orange skin. The feature set for one surface patch consists of 12 real numbers. The classifier used was the one-nearest-neighbour classifier without feature selection. The classification quality was tested on 808 images of 300 by 300 pixels, taken under controlled, close-to-tangential lighting, with three classes (good, acceptable and bad) in close-to-balanced numbers. The classification accuracy was not smaller than 98% when the tested surface was not rotated with respect to the training samples, 97% for rotations up to 20 degrees, and 95.5% in the worst case of arbitrary rotations.
Citations: 0
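Run-length features of this kind can be sketched briefly. The two statistics below (short-run and long-run emphasis) are classic run-length descriptors chosen by us for illustration; the paper's actual 12-number feature set is not reproduced.

```python
def run_lengths(row):
    """Lengths of consecutive runs of equal values in one row."""
    runs, count = [], 1
    for prev, cur in zip(row, row[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return runs

def rle_features(binary_image):
    """Short-run emphasis (large for noisy texture) and long-run
    emphasis (large for smooth texture) over all rows of a
    thresholded image."""
    runs = [r for row in binary_image for r in run_lengths(row)]
    n = len(runs)
    sre = sum(1.0 / (r * r) for r in runs) / n
    lre = sum(float(r * r) for r in runs) / n
    return sre, lre

smooth = rle_features([[0, 0, 0, 0]])  # one run of length 4
noisy = rle_features([[0, 1, 0, 1]])   # four runs of length 1
```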
BCT Boost Segmentation with U-net in TensorFlow
Machine Graphics and Vision Pub Date : 2019-01-01 DOI: 10.22630/mgv.2019.28.1.3
Grzegorz Wieczorek, Izabella Antoniuk, M. Kruk, J. Kurek, A. Orłowski, J. Pach, B. Świderski
Abstract: In this paper we present a new segmentation method for the boost area that remains after removing a tumour using BCT (breast conserving therapy). The selected area is the region that will later receive radiation treatment. An inaccurate designation of this region can result in the treatment missing its target or irradiating healthy breast tissue that could otherwise be spared. Needless to say, exact indication of the boost area is an extremely important aspect of the entire medical procedure: a better definition can optimize the coverage of the target volume and, as a result, spare normal breast tissue. A precise definition of this area has the potential both to improve local control of the disease and to ensure a better cosmetic outcome for the patient. In our approach we use U-net along with Keras and TensorFlow to tailor a precise solution for indicating the boost area. During training we used a set of CT images, each with a contour assigned by an expert, aiming at a segmentation result as close to the given contour as possible. With a rather small initial data set, we used data augmentation techniques to increase the number of training examples, while the final outcomes were evaluated according to their similarity to those produced by the experts, by calculating the mean square error and the structural similarity index (SSIM).
Citations: 1
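The two evaluation measures named in the abstract can be sketched as follows. Note the simplification: standard SSIM is computed over sliding local windows; the single-window version below is our shortened stand-in, not the exact procedure used in the paper.

```python
def mse(a, b):
    """Mean square error between two equally sized images."""
    n = len(a) * len(a[0])
    return sum((x - y) ** 2
               for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / n

def ssim_global(a, b, L=255.0):
    """Single-window SSIM: luminance, contrast and structure terms
    combined with the usual stabilising constants c1, c2."""
    fa = [x for row in a for x in row]
    fb = [x for row in b for x in row]
    n = len(fa)
    mu_a, mu_b = sum(fa) / n, sum(fb) / n
    var_a = sum((x - mu_a) ** 2 for x in fa) / n
    var_b = sum((x - mu_b) ** 2 for x in fb) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(fa, fb)) / n
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

img = [[10, 200],
       [30, 120]]
```

A perfect segmentation scores MSE 0 and SSIM 1 against the expert contour.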
Image annotating tools for agricultural purpose - A requirements study
Machine Graphics and Vision Pub Date : 2019-01-01 DOI: 10.22630/mgv.2019.28.1.7
Marcin Bator, Maciej Pankiewicz
Abstract: Images of natural scenes, such as those relevant for agriculture, are characterised by a variety of forms of the objects of interest and by similarities between objects that one might want to discriminate. This introduces uncertainty into the analysis of such images. Requirements for an image annotation tool to be used in pattern recognition design for agriculture are discussed. A selection of open-source annotation tools is presented, and advice is given on how to use the software to handle uncertainty and to work around missing functionalities.
Citations: 0
Classifiers ensemble of transfer learning for improved drill wear classification using convolutional neural network
Machine Graphics and Vision Pub Date : 2019-01-01 DOI: 10.22630/mgv.2019.28.1.2
J. Kurek, J. Aleksiejuk-Gawron, Izabella Antoniuk, J. Górski, Albina Jegorowa, M. Kruk, A. Orłowski, J. Pach, B. Świderski, Grzegorz Wieczorek
Abstract: In this paper we introduce an enhanced drill wear recognition method based on a classifier ensemble obtained using transfer learning and data augmentation. Red, green and yellow classes are used to describe the current drill state. The first corresponds to the case when the drill should be immediately replaced; the second denotes a tool that is still in good condition; the final class refers to the case when a drill is suspected of being worn out and a human expert evaluation would be required. The proposed algorithm uses three different pretrained network models and adjusts them to the drill wear classification problem. To ensure satisfactory results, each of the methods used was required to achieve an accuracy above 90% on the given classification task. The final evaluation is obtained by voting of all three classifiers. Since the initial data set was small (242 instances), data augmentation was used to artificially increase the total number of drill hole images. The experiments performed confirmed that the presented approach can achieve high accuracy even with such a limited set of training data.
Citations: 6
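The voting step can be sketched in a few lines. The tie-breaking rule below (fall back to "yellow", i.e. defer to a human expert) is our own cautious assumption; the abstract does not say how the authors resolve ties.

```python
from collections import Counter

def vote(predictions):
    """Majority vote over the per-classifier predictions; ties fall
    back to 'yellow' so that an unclear case reaches the operator."""
    counts = Counter(predictions)
    winner, n = counts.most_common(1)[0]
    if list(counts.values()).count(n) > 1:
        return "yellow"  # no clear majority: ask the human expert
    return winner

decision = vote(["red", "red", "green"])
```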
An ensemble feature method for food classification
Machine Graphics and Vision Pub Date : 2017-12-01 DOI: 10.22630/mgv.2017.26.1.2
N. Martinel, C. Micheloni, C. Piciarelli
Abstract: In recent years, several works on automatic image-based food recognition have been proposed, often based on texture feature extraction and classification. However, there is still a lack of proper comparisons to evaluate which approaches are better suited for this specific task. In this work, we adopt a Random Forest classifier to measure the performance of different texture filter banks and feature encoding techniques on three different food image datasets. Comparative results are given to show the performance of each considered approach, as well as to compare the proposed Random Forest classifiers with other feature-based state-of-the-art solutions.
Citations: 2
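As a concrete example of the kind of texture descriptor such comparisons feed into a classifier, here is an 8-neighbour local binary pattern (LBP) histogram, a common texture feature chosen by us for illustration; it is not necessarily among the filter banks compared in the paper.

```python
def lbp_histogram(image):
    """256-bin histogram of 8-neighbour local binary pattern codes:
    each interior pixel gets one bit per neighbour that is at least
    as bright as the centre."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    h, w = len(image), len(image[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = image[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if image[y + dy][x + dx] >= c:
                    code |= 1 << bit
            hist[code] += 1
    return hist

# On a flat patch every neighbour ties the centre, so all interior
# pixels produce the all-ones code 255.
hist = lbp_histogram([[5] * 4 for _ in range(4)])
```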
Applied inverse kinematics for bipedal characters moving on the diverse terrain
Machine Graphics and Vision Pub Date : 2017-12-01 DOI: 10.22630/mgv.2017.26.1.1
Ł. Burdka, P. Rohleder
Abstract: A solution to the problem of adjusting the pose of an animated video game character to diverse terrain and surroundings is proposed. This is an important task in every modern video game with a focus on animated characters. Not addressing this issue leads to major visual glitches, such as legs hovering above the ground surface or penetrating obstacles while moving. As presented in this work, the described problem can be effectively solved by examining the surroundings in real time and applying Inverse Kinematics (IK) as a procedural post-process to the currently used animation.
Citations: 0
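The core of a leg IK post-process of this kind is the standard analytic two-bone solver (law of cosines). The 2D sketch below is a generic textbook version under assumed bone lengths, not the paper's implementation: given the hip position and a foot target sampled from the terrain, it returns knee and foot positions, clamping out-of-reach targets.

```python
import math

def two_bone_ik(hip, foot_target, l1, l2):
    """Analytic two-bone IK in 2D: place the knee and foot so that the
    thigh (length l1) and shin (length l2) reach foot_target from hip.
    Targets beyond l1 + l2 are clamped to maximum leg extension."""
    dx, dy = foot_target[0] - hip[0], foot_target[1] - hip[1]
    dist = min(math.hypot(dx, dy), l1 + l2 - 1e-9)  # clamp to reach
    base = math.atan2(dy, dx)
    # law of cosines: angle at the hip between thigh and hip->target
    cos_a = (l1 * l1 + dist * dist - l2 * l2) / (2 * l1 * dist)
    a = math.acos(max(-1.0, min(1.0, cos_a)))
    knee = (hip[0] + l1 * math.cos(base + a),
            hip[1] + l1 * math.sin(base + a))
    foot = (hip[0] + dist * math.cos(base),
            hip[1] + dist * math.sin(base))
    return knee, foot

# Foot target 1.5 units below the hip, with unit-length thigh and shin.
knee, foot = two_bone_ik((0.0, 0.0), (0.0, -1.5), 1.0, 1.0)
```

In-game, the foot target would come from a real-time raycast against the terrain under the animated foot.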