2018 25th International Conference on Mechatronics and Machine Vision in Practice (M2VIP): Latest Publications

An OEE Improvement Method Based on TOC
Zhengquan Bai, Min Dai, Qiuyu Wei, Zhisheng Zhang
DOI: 10.1109/M2VIP.2018.8600875 | Published: 2018-11-01
Abstract: Overall Equipment Efficiency (OEE) is applied to measure the actual production capacity of equipment, and the Theory of Constraints (TOC) is adopted to improve system production efficiency. To obtain an OEE improvement method based on TOC, a bottleneck identification model and a buffer model are established. A multi-attribute bottleneck identification model is constructed based on the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) and the entropy method, and a time buffer model based on Drum-Buffer-Rope (DBR) theory is proposed. This OEE improvement method can significantly increase the OEE of bottleneck equipment. Moreover, because the bottlenecks are optimized, system production efficiency improves. The effectiveness of the method is verified on a semiconductor packaging process.
Citations: 3
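The bottleneck-identification step combines entropy weighting with TOPSIS ranking. As a rough illustration of how such a ranking can be computed (the attribute set, the benefit directions, and the numbers below are assumptions, not the authors' data), a minimal Python sketch:

```python
# A minimal sketch (not the authors' code) of entropy-weighted TOPSIS for
# ranking candidate bottleneck machines. The attribute matrix, attribute
# names, and benefit directions below are illustrative assumptions.
import numpy as np

def entropy_weights(X):
    """Entropy method: attributes with more dispersion get larger weights."""
    P = X / X.sum(axis=0)                      # column-normalised proportions
    P = np.where(P == 0, 1e-12, P)             # avoid log(0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
    d = 1.0 - E                                # degree of divergence
    return d / d.sum()

def topsis(X, weights, benefit):
    """Score alternatives by closeness to the ideal solution."""
    R = X / np.linalg.norm(X, axis=0)          # vector-normalised matrix
    V = R * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)             # higher = closer to ideal

# Assumed attributes per machine: utilisation, queue length, OEE loss, downtime (h).
X = np.array([[0.92, 35, 0.28, 4.1],
              [0.81, 12, 0.15, 2.3],
              [0.97, 60, 0.34, 6.8]])
benefit = np.array([True, True, True, True])   # larger value = more bottleneck-like
scores = topsis(X, entropy_weights(X), benefit)
print("Bottleneck candidate ranking:", np.argsort(-scores))
```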
Vehicle and Pedestrian Recognition Using Multilayer Lidar based on Support Vector Machine
Zhenyu Lin, M. Hashimoto, Kenta Takigawa, Kazuhiko Takahashi
DOI: 10.1109/M2VIP.2018.8600877 | Published: 2018-11-01
Abstract: Moving-object tracking (estimating the position and velocity of moving objects) is a key technology for autonomous driving and driving assistance systems in the mobile robotics and vehicle automation domains. To predict and avoid collisions, the tracking system has to recognize objects as accurately as possible. This paper presents a method for recognizing vehicles (cars and bicyclists) and pedestrians using multilayer lidar (3D lidar). Lidar data are clustered, and eight-dimensional features are extracted from each cluster, such as distance from the lidar, velocity, object size, number of lidar-measurement points, and the distribution of reflection intensities. A multiclass support vector machine is applied to classify cars, bicyclists, and pedestrians from these features. Experiments using the "Stanford Track Collection" data set allow the proposed method to be compared with a method based on the random forest algorithm and a conventional 26-dimensional feature-based method. The comparison shows that the proposed method improves recognition accuracy and processing time over the other methods and can therefore work well in environments with limited computational resources.
Citations: 8
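As a rough sketch of the classification step described above (the feature values are synthetic placeholders and the SVM hyperparameters are assumptions, not the authors' settings), an 8-dimensional multiclass SVM in scikit-learn might look like this:

```python
# A minimal sketch (assumptions, not the authors' pipeline): training a
# multiclass SVM on 8-dimensional per-cluster lidar features such as range,
# velocity, size, point count, and intensity statistics. The data here are
# synthetic and only demonstrate the API usage.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))           # 8-D feature vector per lidar cluster
y = rng.integers(0, 3, size=600)        # 0 = car, 1 = bicyclist, 2 = pedestrian

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# RBF-kernel SVM; scikit-learn handles the multiclass case one-vs-one internally.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```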
Camera based path planning for low quantity - high variant manufacturing with industrial robots
Peter Weßeler, Benjamin Kaiser, Jürgen te Vrugt, A. Lechler, A. Verl
DOI: 10.1109/M2VIP.2018.8600833 | Published: 2018-11-01
Abstract: The acquisition costs of industrial robots have been steadily decreasing in recent years. Nevertheless, preparing complex robot tasks still requires significant effort, which is why these systems are rarely found in small and medium-sized enterprises (SMEs) that focus mainly on small-volume, high-variant manufacturing. In this paper, we propose a camera-based path planning framework that allows the fast preparation and execution of robot tasks in dynamic environments, leading to less planning overhead, fast program generation, and reduced cost, and hence overcoming the major impediments to using industrial robots for automation in SMEs focused on low-volume, high-variant manufacturing. The framework resolves the existing problems in several steps. The exact position and orientation of the workpiece are determined from a 3D environment model scanned by an optical sensor. The retrieved information is used to plan a collision-free path that meets the boundary conditions of the specific robot task. Experiments on a case study show the potential and effectiveness of the framework presented here.
Citations: 3
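The paper determines the workpiece pose from a scanned 3D environment model. One common way to realise such a step, shown here purely as an assumed sketch (the abstract does not state which registration method is used, and the file names and threshold are illustrative), is point-cloud registration with Open3D:

```python
# A minimal sketch of a workpiece-localisation step under assumptions:
# the scanned scene and the workpiece reference model are available as
# point clouds, and ICP refines an initial guess of the workpiece pose.
# This is not the authors' implementation.
import numpy as np
import open3d as o3d

scene = o3d.io.read_point_cloud("scanned_scene.ply")     # 3D environment model
model = o3d.io.read_point_cloud("workpiece_cad.ply")     # reference workpiece

trans_init = np.eye(4)          # coarse initial pose (e.g. from a global matcher)
threshold = 0.01                # max correspondence distance in metres

result = o3d.pipelines.registration.registration_icp(
    model, scene, threshold, trans_init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("Estimated workpiece pose (4x4 homogeneous transform):")
print(result.transformation)    # this pose feeds the collision-free path planner
```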
Modification of Lab color model for minimizing blue-green illumination of underwater vision system
A. A. Ghani, A. Nasir, Muhammad Aizzat Bin Zakaria, A. N. Ibrahim
DOI: 10.1109/M2VIP.2018.8600832 | Published: 2018-11-01
Abstract: Deep underwater images suffer from low contrast and a blue-green illumination that restricts the visibility of objects. Most previously proposed enhancement techniques for underwater vision systems improve image contrast, but the blue-green illumination remains in the images. This paper discusses the improvement of underwater image contrast through a modification of the Lab color model. The modification shifts the pixel distribution of the image to new values that improve image contrast and minimize blue-green illumination based on the optimal information in the image. The proposed method integrates two main steps, namely dark-stretched image fusion (DSF) and pixel distribution shifting (PDS). In the DSF step, the dark-channel image is stretched and divided into two channels before being fused together. Next, the image pixel distribution in the Lab color model is shifted towards a more natural view based on the human visual system. Qualitative evaluation indicates a significant improvement of image contrast in the output images.
Citations: 4
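A minimal sketch of the general idea of chroma shifting in Lab space follows. It does not reproduce the paper's DSF and PDS steps; the shift-to-neutral rule, the simple contrast stretch, and the file names are assumptions:

```python
# Shift the a/b chroma distributions of an underwater image in Lab space so
# their means move toward neutral grey (128), reducing the blue-green cast,
# then stretch L for contrast. Illustrative only, not the paper's pipeline.
import cv2
import numpy as np

img = cv2.imread("underwater.jpg")                     # illustrative file name
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)
L, a, b = cv2.split(lab)

# Shift chroma channels so their means land on the neutral value 128.
a = np.clip(a - (a.mean() - 128.0), 0, 255)
b = np.clip(b - (b.mean() - 128.0), 0, 255)

# Simple contrast stretch on the lightness channel.
L = np.clip((L - L.min()) * 255.0 / max(L.max() - L.min(), 1e-6), 0, 255)

out = cv2.cvtColor(cv2.merge([L, a, b]).astype(np.uint8), cv2.COLOR_LAB2BGR)
cv2.imwrite("underwater_corrected.jpg", out)
```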
The effect of reheating layers in Metal Additive Manufacturing on the external surface finish of a printed part
Tanisha Pereira, J. Potgieter, John V. Kennedy, A. Mukhtar, M. Fry
DOI: 10.1109/M2VIP.2018.8600902 | Published: 2018-11-01
Abstract: Additive Manufacturing (AM) is expanding out of the rapid prototyping space into an end-user product manufacturing technology. Commercial interest in industry has created research opportunities to study how quality assurance can be provided for AM end-user parts. Several concerns still lack solutions at this stage, one of which is the surface finish quality of a metal AM part. The unmodified printed part extracted from a metal AM printer often has a rough finish on the exterior surface. When the support material is removed from the part, the prongs bonding the support layers to the part layers often leave an even rougher texture in the associated areas. For this reason, AM parts require further processing, often media polishing, to achieve the required surface finish. In cases where structural and mechanical quality is required, the surface finish can have a detrimental impact. While several beneficial methods are employed in subtractive, formative, and joining manufacturing processes to improve metal surface finish, the method of interest in this study is laser reheating. More specifically, this paper studies the effect laser reheating has on the surface finish of AM prints. A review of similar processes for other AM metals is used to determine the testing parameters of interest. The experimental work focuses on test specimens printed in a Direct Metal Printer (DMP) using Stainless Steel 17-4 PH powdered material. The aim of the experiment is to determine whether, through stronger bonding and further melting, a smoother and more polished surface can be achieved. The results show that high laser power and scan speed produce a better polish with each repetition, but repeated reheating of a single layer causes an inward collapse. Several quality inspection techniques are compared to determine which proves the most fruitful in studying the surface finish of the samples. The testing methods include visual inspection techniques (human eye and Scanning Electron Microscopy (SEM)), mechanical tests (compression and micro-hardness testing), and NDE roughness profiling techniques (dye-penetrant, AFM, and surface profilometer testing).
Citations: 0
Secure and Robust Color Image Watermarking for Copyright Protection Based on Lifting Wavelet Transform
Hao Chen, Weiliang Xu
DOI: 10.1109/M2VIP.2018.8600889 | Published: 2018-11-01
Abstract: This paper presents a secure and robust color image watermarking algorithm for copyright protection. The method uses the lifting wavelet transform (LWT) to decompose both the host image and the watermark into different sub-bands and performs watermark embedding in the transform domain. A security key is introduced into the algorithm for security purposes. Two color images are used to test the performance of the proposed algorithm. Results show that the proposed watermarking scheme not only has good imperceptibility but is also robust to various geometric and image-processing attacks.
Citations: 7
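A minimal sketch of transform-domain embedding with a security key follows. PyWavelets does not expose a lifting-scheme API, so the standard 2-D DWT stands in for the LWT here, and the embedding strength and key-based scrambling are illustrative assumptions rather than the paper's exact scheme:

```python
# Additive watermark embedding in the approximation sub-band of a 2-D wavelet
# decomposition, with a key-driven permutation of the watermark for security.
import numpy as np
import pywt

def embed(host, mark, alpha=0.05, key=42):
    LL, (LH, HL, HH) = pywt.dwt2(host.astype(np.float64), "haar")
    wLL, _ = pywt.dwt2(mark.astype(np.float64), "haar")

    # Key-driven permutation of the watermark sub-band adds a layer of security.
    rng = np.random.default_rng(key)
    perm = rng.permutation(wLL.size)
    wLL_scrambled = wLL.ravel()[perm].reshape(wLL.shape)

    LL_marked = LL + alpha * wLL_scrambled        # additive embedding in LL band
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

host = np.random.default_rng(0).integers(0, 256, (256, 256)).astype(np.float64)
mark = np.random.default_rng(1).integers(0, 256, (256, 256)).astype(np.float64)
watermarked = embed(host, mark)
```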
Optimized Characteristic Ratio Assignment for Low-Order Controller Design
Yue Qiao, Chengbin Ma
DOI: 10.1109/M2VIP.2018.8600878 | Published: 2018-11-01
Abstract: The polynomial method is known to be particularly suitable for designing low-order controllers. So far, most controller designs using the polynomial method have been based on a pre-defined standard form of the characteristic ratio assignment (CRA). Meanwhile, with the limited parameters of low-order controllers, an exact CRA following a standard form becomes impossible when the order of the control plant is sufficiently high. This paper proposes a systematic design scheme for an optimized CRA. First, the influences of the characteristic ratios are quantified as weight coefficients. Then the objective function of the optimization problem is constructed to minimize the difference between the actual CRA and its nominal form. In addition to damping (i.e., the CRA), requirements on stability, speed of response, and robustness are also considered as constraints. The resulting robust optimization problem is formulated and solved via an inner-outer optimization formulation. Finally, the controller design for a three-mass benchmark system is presented as a case study. The simulation results validate the proposed scheme, especially its robustness against parameter variation, unmodeled dynamics, and disturbance torque.
Citations: 0
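For readers unfamiliar with CRA: given a characteristic polynomial with coefficients a_0..a_n, the characteristic ratios are commonly defined as gamma_i = a_i^2 / (a_{i-1} a_{i+1}). The sketch below only computes these ratios for an illustrative polynomial; the paper's weighted optimization over them is not reproduced:

```python
# Compute characteristic ratios gamma_1..gamma_{n-1} from the coefficients of
# a characteristic polynomial. The example polynomial is illustrative only.
import numpy as np

def characteristic_ratios(a):
    """a[k] is the coefficient of s**k, k = 0..n."""
    a = np.asarray(a, dtype=float)
    return a[1:-1] ** 2 / (a[:-2] * a[2:])   # gamma_i = a_i^2 / (a_{i-1} a_{i+1})

# Illustrative 4th-order polynomial: s^4 + 8s^3 + 24s^2 + 32s + 16 = (s + 2)^4
a = [16.0, 32.0, 24.0, 8.0, 1.0]
print("characteristic ratios gamma_1..gamma_3:", characteristic_ratios(a))
```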
A Novel Method for Detecting Joint Angles Based on Inertial and Magnetic Sensors
Yuqiu Huang, Minghao Gou, Peihan Zhang, Xi Wang, Haoyu Xie, X. Sheng
DOI: 10.1109/M2VIP.2018.8600868 | Published: 2018-11-01
Abstract: The development of various robotic and biomedical devices has raised demand for accurate, cheap, and quick-responding joint angle sensors. To address this, a novel system and computation method for joint angle sensing based on inertial and magnetic sensors is proposed in this paper. The method exploits the uniform gravitational field and the uniform magnetic field in space. A low-pass filter is applied to reduce the interference of high-frequency noise, and the data are calibrated using an ellipsoidal calibration model. The system then combines the two filtered and calibrated measurements to calculate the angle between the arms of a joint. Compared with other existing methods, this method maintains good accuracy while greatly reducing the cost and complexity of the system. An example implementation is given in the paper, and the results of both static and dynamic measurement experiments validate the feasibility of the method.
Citations: 1
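A minimal sketch of an angle computation of this kind, under stated assumptions (a TRIAD-style frame built from the gravity and magnetic-field vectors on each arm, with the joint angle read off the relative rotation; the paper's exact fusion may differ, and the sensor readings below are illustrative):

```python
# Each arm of the joint carries an accelerometer + magnetometer; the gravity
# and magnetic-field vectors define an orthonormal frame per segment, and the
# joint angle is the angle of the relative rotation between the two frames.
import numpy as np

def segment_frame(acc, mag):
    """Orthonormal frame from a gravity vector and a magnetic-field vector."""
    z = acc / np.linalg.norm(acc)                  # 'down' axis
    x = np.cross(mag, z); x /= np.linalg.norm(x)   # horizontal, orthogonal to both
    y = np.cross(z, x)
    return np.column_stack([x, y, z])              # columns = frame axes

def joint_angle(acc1, mag1, acc2, mag2):
    R1, R2 = segment_frame(acc1, mag1), segment_frame(acc2, mag2)
    R_rel = R1.T @ R2                              # rotation of segment 2 w.r.t. 1
    # Angle of the relative rotation (axis-angle magnitude).
    return np.degrees(np.arccos(np.clip((np.trace(R_rel) - 1) / 2, -1.0, 1.0)))

# Illustrative, already low-pass-filtered and ellipsoid-calibrated readings:
acc1, mag1 = np.array([0.0, 0.0, 9.81]), np.array([22.0, 0.0, 40.0])
acc2, mag2 = np.array([0.0, 6.94, 6.94]), np.array([22.0, 28.3, 28.3])
print(joint_angle(acc1, mag1, acc2, mag2))         # prints roughly 45 degrees
```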
Impact Dynamics and Parametric Analysis of Planar Biped Robot
Long Li, Zhongqu Xie, Xiang Luo
DOI: 10.1109/M2VIP.2018.8600845 | Published: 2018-11-01
Abstract: In this paper, the impact dynamics of general planar biped walking robots are studied. Based on an integration method, explicit solutions for the external impulses and generalized velocity changes are obtained in detailed form for both single and double impact, and the conditions for single and double impact are proposed. Thus, given a state just before impact, one can predict the impact type and the state of the biped robot just after impact. The relation between single and double impact is developed, and a novel method is proposed to distinguish between them. A parametric analysis is then carried out for a planar five-link biped robot. The results show that double impact without slip hardly ever occurs, whereas single impact without slip frequently occurs for normal biped walking in practical applications.
Citations: 5
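The paper's closed-form solutions are not reproduced here, but the generic rigid-body impact law such analyses build on can be sketched numerically: for a perfectly plastic, no-slip impact, M(q)(qdot_plus - qdot_minus) = J^T Lambda together with J qdot_plus = 0. The matrices below are toy placeholders, not the five-link model:

```python
# Solve the linear system of a perfectly plastic, no-slip rigid-body impact:
#   M (qdot_plus - qdot_minus) = J^T Lambda,   J qdot_plus = 0
# for the post-impact generalized velocities and the contact impulses.
import numpy as np

def plastic_impact(M, J, qdot_minus):
    n, m = M.shape[0], J.shape[0]
    # KKT-style system [[M, -J^T], [J, 0]] [qdot_plus; Lambda] = [M qdot_minus; 0]
    A = np.block([[M, -J.T],
                  [J, np.zeros((m, m))]])
    b = np.concatenate([M @ qdot_minus, np.zeros(m)])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]          # post-impact velocities, contact impulses

M = np.diag([12.0, 12.0, 3.0])       # toy 3-DOF inertia matrix
J = np.array([[1.0, 0.0, 0.4],       # toy contact Jacobian (2 constraints)
              [0.0, 1.0, 0.6]])
qdot_plus, Lam = plastic_impact(M, J, qdot_minus=np.array([0.8, -0.5, 1.2]))
print("post-impact velocities:", qdot_plus, "impulses:", Lam)
```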
A Vision Aid for the Visually Impaired using Commodity Dual-Rear-Camera Smartphones
M. Nguyen, H. Le, W. Yan, Arpita Dawda
DOI: 10.1109/M2VIP.2018.8600857 | Published: 2018-11-01
Abstract: Dual- (or multiple-) rear cameras on hand-held smartphones are believed to be the future of mobile photography. Recently, many such devices have been released, mainly with dual rear cameras: one wide-angle and one telephoto. Notable examples are the Apple iPhone 7 and 8 Plus, iPhone X, Samsung Galaxy S9, LG V30, and Huawei Mate 10. With built-in dual-camera systems, these devices are capable not only of producing better-quality pictures but also of acquiring 3D stereo photos with depth information, capturing moments in life with depth just like our two-eye visual system. Thanks to this trend, these phones are getting cheaper while becoming more capable. In this paper, we describe a system that uses commercial dual rear-camera phones, such as the iPhone X, to provide aid for people who are visually impaired. We propose placing the phone on the centre of the user's chest, with one or two Bluetooth earphones to listen to the phone's audio output. Our system consists of three modules: (1) scene context recognition to audio, (2) 3D stereo reconstruction to audio, and (3) interactive audio/voice controls. In more detail, the wide-angle camera captures live photos that are analysed by a GPS-guided deep learning process to describe the scene in front of the user (module 1). The telephoto camera captures a narrower-angle view that is stereo-reconstructed with the aid of the wide-angle image to form a depth map (a dense area-based distance map). The map helps determine the distance to all visible objects and notify the user of critical ones (module 2); this module also makes the phone vibrate when an object is close enough to the user, e.g., within hand-reach distance. The user can also query the system by asking various questions and receive automatic voice answers (module 3). In addition, a manual rescue module (module 4) is included for when other things go wrong. An example of the vision-to-audio output could be "Overall, likely a corridor, one medium object is 0.5 m away - central left", or "Overall, city pathway, front cleared". An audio command input may be "read texts", and the phone will detect and read all text on the closest object. More details on the design and implementation are described in this paper.
Citations: 3
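A minimal sketch of the depth-map step in module 2, assuming rectified image pairs and known calibration; the focal length, baseline, file names, and alert threshold below are placeholders, since the abstract does not specify the actual reconstruction pipeline:

```python
# Dense stereo matching on the rectified wide/tele pair, then converting
# disparity to metric depth with Z = f * B / d and flagging close objects.
import cv2
import numpy as np

left = cv2.imread("wide_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("tele_rectified.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0   # SGBM output is fixed-point x16

f_px, baseline_m = 1400.0, 0.012        # assumed calibration values (pixels, metres)
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = f_px * baseline_m / disparity[valid]    # Z = f * B / d

# Warn the user (e.g. vibrate) if anything sits within hand-reach distance.
if np.any(depth_m[valid] < 0.5):
    print("object within 0.5 m - trigger vibration / audio alert")
```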