2016 23rd International Conference on Mechatronics and Machine Vision in Practice (M2VIP): Latest Publications

Humanoid action imitation learning via boosting sample DQN in virtual demonstrator environment
Authors: Rong Zhou, Zhisheng Zhang, Kunyyu Peng, Yang Mi, Xiangsheng Huang
DOI: 10.1109/M2VIP.2016.7827324 (https://doi.org/10.1109/M2VIP.2016.7827324)
Published: 2016-11-01
Abstract: With the growth of modern industrial automation, autonomous learning for robots has attracted considerable attention from researchers. Existing learning methods, however, typically require large training sets: collecting samples is time-consuming, sample validity can vary greatly, and training efficiency is therefore limited. In addition, the reinforcement learning used in such systems often rests on the assumption that every action in a sequence contributes equally to the outcome, which does not match how tasks actually unfold. In this paper, we propose boosting sample DQN, a method that optimizes the validity of the training sample set. Inspired by boosting, samples are extracted from replay memory hierarchically based on statistical results, improving the efficiency of network training. The algorithm has a small number of parameters and has been successfully transplanted to a dual-arm robot system. The approach learns a set of trajectories for reaching and grabbing target objects using real-time models obtained interactively through wearable sensing equipment, and a solution is proposed to distinguish the weights of different actions. The method proves adaptive in learning complicated tasks, including grabbing a bottle within the robot's reach, as presented in the paper.
Citations: 1

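The core idea of the abstract, extracting replay samples hierarchically according to statistics over the memory, resembles a stratified sampler over the DQN replay buffer. The sketch below illustrates that general idea only; the TD-error bucket edges, the mixture weights, and the `BoostedReplay` name are assumptions for illustration, not the paper's algorithm.

```python
# Hedged sketch: stratified ("boosted") sampling from a DQN replay memory.
# Transitions are bucketed by the magnitude of their last TD error, and each
# minibatch draws more heavily from the high-error buckets. The bucket edges
# and mixture weights are illustrative assumptions.
import random
from collections import deque

class BoostedReplay:
    def __init__(self, capacity=50_000, edges=(0.1, 1.0), weights=(0.2, 0.3, 0.5)):
        self.buckets = [deque(maxlen=capacity // len(weights)) for _ in weights]
        self.edges, self.weights = edges, weights

    def _bucket(self, td_error):
        # Index of the first edge the error falls below; last bucket otherwise.
        for i, edge in enumerate(self.edges):
            if abs(td_error) < edge:
                return i
        return len(self.edges)

    def add(self, transition, td_error):
        self.buckets[self._bucket(td_error)].append(transition)

    def sample(self, batch_size=32):
        # Draw a fixed share of the batch from each bucket (capped by its size).
        batch = []
        for bucket, weight in zip(self.buckets, self.weights):
            k = min(len(bucket), round(batch_size * weight))
            batch.extend(random.sample(list(bucket), k))
        return batch
```
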
Mobile machine vision development for urine patch detection
Authors: Akshaya Kumar, Hamid Sharifi, K. Arif
DOI: 10.1109/M2VIP.2016.7827315 (https://doi.org/10.1109/M2VIP.2016.7827315)
Published: 2016-11-01
Abstract: This paper discusses research on integrating and implementing current technologies to detect urine patches. Cattle urine is damaging the agricultural industry and poses a growing threat to the environment: farmers spend considerable money treating urine patches, and nitrate leaches into waterways, polluting drinking water and killing fish. Current methods are slow and inefficient at detecting urine patches across large areas of land. The proposed method consists of mounting a smartphone on a quadcopter and using the smartphone's camera with OpenCV image-processing libraries to visually detect urine patches. The quadcopter provides a fast way to move the smartphone, at a fixed height above the ground, to survey for patches. When a urine patch is detected, its GPS coordinates are obtained from the smartphone's GPS sensor and sent to the farmer so the patch can be located for treatment.
Citations: 2

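As a rough illustration of the kind of OpenCV step such a smartphone pipeline could run, the sketch below thresholds grass imagery in HSV and keeps large blobs. The HSV bounds, the area cut-off, and the `detect_patches` helper are assumptions for illustration, not the authors' implementation (the `findContours` unpacking assumes OpenCV 4.x).

```python
# Hedged sketch of a colour-threshold detector: urine patches tend to show up
# as yellowed or darker grass, so threshold in HSV and keep blobs large enough
# to be a patch at the flight height. All thresholds here are illustrative.
import cv2
import numpy as np

def detect_patches(bgr_frame, min_area_px=500):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([20, 60, 60]), np.array([35, 255, 255]))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Bounding boxes of blobs big enough to be worth geotagging for the farmer.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area_px]
```
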
Two opposite sides synchronous tracking X-ray based robotic system for welding inspection
Authors: Kai Zheng, Jie Li, Chun Lei Tu, Xing Song Wang
DOI: 10.1109/M2VIP.2016.7827334 (https://doi.org/10.1109/M2VIP.2016.7827334)
Published: 2016-11-01
Abstract: Inspecting the welding seams of large-scale equipment such as storage tanks and spherical tanks usually costs considerable manpower and material, whereas an automated inspection robot can achieve fast and accurate detection. Because an X-ray flat panel detector depends on specialized automated equipment, applying a Mecanum omnidirectional mobile robot to automated weld detection can greatly enhance X-ray inspection of large storage tanks. In this paper, a wall-climbing robotic system based on an X-ray flat panel detector is developed for intelligent detection of welding seams. The system consists of two Mecanum vehicles, one carrying the flat panel detector and the other the X-ray generator, climbing on opposite sides of the tank wall. The inspection robots carry the detector stably with reliable suction force and adapt to different surfaces. To let the X-ray flat panel detector work properly, a dual-camera positioning system is used to keep the two robots operating synchronously. Experiments were conducted and are reported.
Citations: 6

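The dual-camera positioning system implies some feedback loop that keeps the detector-carrying robot aligned with the generator-carrying robot on the opposite wall. A minimal proportional sketch follows; the gain, deadband, speed limit, and the `sync_correction` helper are hypothetical and not taken from the paper.

```python
# Hedged sketch: proportional velocity correction for the follower robot,
# driven by the lag measured by the camera pair. All numbers are illustrative.
def sync_correction(offset_mm, max_speed_mm_s=50.0, gain=0.8, deadband_mm=2.0):
    """Return a follow-up speed (mm/s) that drives the measured offset toward zero."""
    if abs(offset_mm) < deadband_mm:                 # close enough: no correction
        return 0.0
    v = gain * offset_mm                             # proportional term
    return max(-max_speed_mm_s, min(max_speed_mm_s, v))

print(sync_correction(12.0))                         # follower trails by 12 mm -> 9.6 mm/s
```
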
Dynamic analysis of a cable-climbing robot system
Authors: Wang Yue, Xu Fengyu, Yang Zhong
DOI: 10.1109/M2VIP.2016.7827285 (https://doi.org/10.1109/M2VIP.2016.7827285)
Published: 2016-11-01
Abstract: A cable-climbing robot is designed to inspect cable-stayed bridges, freeing workers from a dangerous environment. The vibration of a cable carrying such a robot must also be studied, since bridge cables can be seriously damaged when large-amplitude vibration occurs, and the inertial force due to the robot's acceleration must be considered as well. This paper first introduces the cable-climbing robot, then analyzes the vibration of the cable-robot system, treating the robot as a particle and accounting for wind effects. Two parameters, the robot's mass and its position, are expected to strongly influence the vibration. Referring to the vibrating-string equation, the system is treated as purely forced vibration with a lumped mass. A simulation of the cable without the robot is given for comparison; its result shows that the maximum amplitude occurs at the middle of the cable. The vibration caused by the lumped mass when the robot climbs the cable is then analyzed with a piecewise (subsection) method, imposing an additional matching condition at the robot's position. Solving the equations on both sides yields a piecewise function in terms of mass and position, which are important for further research.
Citations: 1

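For readers unfamiliar with the vibrating-string model the abstract refers to, the following is a minimal sketch assuming the robot acts as a lumped mass M at position x_r on a taut cable of length L, tension T, and linear density rho, driven by a distributed wind load f(x, t); the exact boundary and matching conditions used in the paper may differ.

```latex
% Hedged sketch (not the authors' exact model): forced vibration of a taut
% cable with the climbing robot treated as a lumped mass M at x = x_r.
\[
  \bigl[\rho + M\,\delta(x - x_r)\bigr]\,\frac{\partial^{2} y}{\partial t^{2}}
  = T\,\frac{\partial^{2} y}{\partial x^{2}} + f(x,t),
  \qquad y(0,t) = y(L,t) = 0 .
\]
% The piecewise (subsection) approach solves the plain string equation on
% [0, x_r] and [x_r, L], then joins the two solutions by requiring continuity
% of y and a jump in T * dy/dx equal to the inertial force of the mass at x_r.
```
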
Entity tracking within a Zigbee based smart home
Authors: Daniel Konings, A. Budel, F. Alam, Frazer K. Noble
DOI: 10.1109/M2VIP.2016.7827294 (https://doi.org/10.1109/M2VIP.2016.7827294)
Published: 2016-11-01
Abstract: Modern smart home automation (SMA) systems are predominantly based on wireless communication standards. New home-automation setups often include dozens of devices, ranging from thermostats, humidity sensors, light switches and digital locks to cooling systems, all communicating on a common network. This paper examines how SMA systems built on one of the most common standards, Zigbee, can be leveraged to provide a secondary benefit in the form of an indoor positioning system (IPS). An IPS can be implemented as device-free localization (DfL), as active tracking, or as a combination of both. A DfL implementation detects and tracks moving entities by monitoring changes in received signal strength (RSSI) between nodes in the wireless network; the tracked entity does not need to carry an electronic device or actively contribute to the localization process. In active tracking, the tracked entity does contribute to the tracking process. Both techniques are implemented individually and their combination is explored. With DfL we were able to localize a person within a 3 m × 3 m quadrant with 80% accuracy. Active tracking gave higher resolution, localizing a person within a 2 m × 2 m area with 95% accuracy, and coupling active tracking with DfL measurements further increased the accuracy to 98%.
Citations: 10

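A minimal sketch of the device-free localization step described above: each Zigbee link's live RSSI is compared with a calibrated baseline and the most disturbed quadrant is reported. The link-to-quadrant mapping, the threshold, and the `locate` helper are illustrative assumptions, not the paper's calibration or fusion scheme.

```python
# Hedged sketch of device-free localization over Zigbee links: a person
# crossing a link perturbs its RSSI relative to an empty-room baseline, and
# the perturbations vote for the quadrant each link crosses.
from statistics import mean

def locate(baseline, live, link_quadrant, threshold_db=3.0):
    """baseline/live: {link_id: [rssi_dBm, ...]}; link_quadrant: {link_id: (row, col)}."""
    scores = {}
    for link, samples in live.items():
        deviation = abs(mean(samples) - mean(baseline[link]))
        if deviation >= threshold_db:                     # ignore normal RSSI noise
            quad = link_quadrant[link]
            scores[quad] = scores.get(quad, 0.0) + deviation
    # Most disturbed quadrant, or None if no link exceeds the threshold.
    return max(scores, key=scores.get) if scores else None
```
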
Vision-based autonomous micro-air-vehicle control for odor source localization
Authors: K. Kurotsuchi, M. Tai, H. Takahashi
DOI: 10.1109/M2VIP.2016.7827276 (https://doi.org/10.1109/M2VIP.2016.7827276)
Published: 2016-11-01
Abstract: This paper presents a novel control method for autonomous odor-source localization using vision and odor sensing on micro air vehicles (MAVs). The method is based on biomimetics, which enables highly autonomous localization without any instruction signals, including global positioning system (GPS) signals. An experimenter simply blows a whistle, and the MAV starts to hover, seeks the odor source, and keeps hovering near it. The GPS-free, vision-based control enables indoor and underground use. Moreover, the MAV is lightweight (85 grams) and would not harm anyone even if it accidentally fell. Real-world experiments successfully localized an odor source using the MAV with a bio-inspired searching method; the localization distance error was 63 cm, more accurate than the 120 cm target distance required for individual identification. This odor-source localization is a first step toward a proof of concept for a danger-warning system, to which these results will be applied in order to enable a safer and more secure society.
Citations: 4

A novel double-layer, multi-channel soft pneumatic actuator that can achieve multiple motions
Authors: Qi Zhang, Zhisheng Zhang
DOI: 10.1109/M2VIP.2016.7827343 (https://doi.org/10.1109/M2VIP.2016.7827343)
Published: 2016-11-01
Abstract: This paper proposes a novel double-layer, multi-channel soft pneumatic actuator capable of multiple motions. The technical requirements of the actuator are presented, and a prototype has been designed and fabricated. The actuator's motion is analyzed through finite element simulation. Experiments show that the proposed actuator achieves multidirectional translational motions and various bending motions, and meets the technical requirements of the system.
Citations: 4

Design of end-effectors for a chemistry automation plant
Authors: Akshaya Kumar, Kamila Pillearachichige, Hamid Sharifi, Ben Shaw, Frazer K. Noble
DOI: 10.1109/M2VIP.2016.7827304 (https://doi.org/10.1109/M2VIP.2016.7827304)
Published: 2016-11-01
Abstract: Typical chemistry experiments involve many manual processes; chemistry automation is the process of automating these. This paper describes work to develop end-effectors that extend the current capabilities of chemistry automation plants. The hierarchy established, the design process employed, and four end-effectors, the "Claw", "Balloon", "Cross", and "Band", are presented, described, and discussed. The Claw, Balloon, and Band end-effectors were able to successfully pick and place bottles with diameters between 10 and 30 mm. After evaluating the designs, the Band end-effector was chosen as the working solution for future work.
Citations: 2

Physical animation simulation of robot kit
Authors: Haitao Gao, Lei Zhang, Wenzheng Ding, Fei Hao, Yinglu Zhou
DOI: 10.1109/M2VIP.2016.7827313 (https://doi.org/10.1109/M2VIP.2016.7827313)
Published: 2016-11-01
Abstract: A robot simulator integrating robot assembly, program design, and animation simulation is developed, and its physical simulation method is the main focus of study. To simulate the real world and robot behavior, an architecture for physical animation simulation is built, and multibody dynamics theory including unilateral constraints is applied to physical interaction. A complementarity-based method is proposed to model the unilateral constraints; it can handle dependent contacts, Coulomb friction, and non-elastic impulses. To solve the multibody dynamics equations with unilateral constraints efficiently, the mixed nonlinear complementarity equations are converted into linear complementarity equations by eliminating all equality constraints and linearizing the friction law. A direct correction algorithm is also established to avoid drift. Finally, the physical animation simulation is applied with good results.
Citations: 0

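The linear complementarity problems that such contact formulations reduce to are commonly solved with a projected Gauss-Seidel iteration. The sketch below shows that generic solver for w = Az + b, w >= 0, z >= 0, w·z = 0; it is not the paper's solver or its direct-correction algorithm.

```python
# Hedged sketch: projected Gauss-Seidel for the LCP  w = A z + b,
# w >= 0, z >= 0, w·z = 0  (assumes a positive diagonal, as in typical
# contact Delassus matrices). The iteration count is illustrative.
import numpy as np

def lcp_pgs(A, b, iters=100):
    z = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        for i in range(len(b)):
            r = b[i] + A[i] @ z - A[i, i] * z[i]   # residual without the diagonal term
            z[i] = max(0.0, -r / A[i, i])          # project onto the non-negative cone
    return z

A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, 1.0])
z = lcp_pgs(A, b)                                  # approx [0.5, 0.0]
print(z, A @ z + b)                                # w = [0.0, 1.5] >= 0 and w·z = 0
```
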
Vehicle detection in high resolution satellite images with joint-layer deep convolutional neural networks
Authors: Yanjun Liu, Na Liu, H. Huo, T. Fang
DOI: 10.1109/M2VIP.2016.7827266 (https://doi.org/10.1109/M2VIP.2016.7827266)
Published: 2016-11-01
Abstract: Vehicle detection can provide large volumes of useful data for city planning and transport management. It has always been a challenging task because of complicated backgrounds and the relatively small size of the targets, especially in high-resolution satellite images. This paper proposes joint-layer deep convolutional neural networks (JLDCNNs), a model that joins features from the higher and lower layers of deep convolutional neural networks (DCNNs). JLDCNNs aim to cover different scales and detect vehicles rapidly in complex satellite images by overcoming the insufficient feature extraction of traditional DCNNs. The model is evaluated and compared with traditional DCNNs and other methods on a challenging dataset of 20 high-resolution satellite images (containing over 2400 vehicles) collected from Google Earth. JLDCNNs improve the precision rate by 16% and the recall rate by 6% compared with traditional DCNNs, and also outperform other traditional methods.
Citations: 3

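A minimal PyTorch sketch of the joint-layer idea, upsampling a semantically strong deep feature map and concatenating it with a spatially fine shallow one before a detection head, follows. The layer widths, module names, and two-class head are assumptions for illustration, not the JLDCNN architecture from the paper.

```python
# Hedged sketch: join a lower (fine-detail) and a higher (semantic) feature
# map before scoring, so small vehicles keep spatial resolution while still
# benefiting from deeper semantics. Channel counts are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointLayerFeatures(nn.Module):
    def __init__(self):
        super().__init__()
        self.low = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2))                            # lower layer
        self.high = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                                  nn.MaxPool2d(2),
                                  nn.Conv2d(64, 128, 3, padding=1), nn.ReLU())  # higher layer
        self.head = nn.Conv2d(32 + 128, 2, kernel_size=1)                    # vehicle / background map

    def forward(self, x):
        low = self.low(x)                                            # (B, 32, H/2, W/2)
        high = self.high(low)                                        # (B, 128, H/4, W/4)
        high = F.interpolate(high, size=low.shape[-2:],
                             mode="bilinear", align_corners=False)   # match scales
        return self.head(torch.cat([low, high], dim=1))              # joint-layer scores

scores = JointLayerFeatures()(torch.randn(1, 3, 256, 256))           # (1, 2, 128, 128)
```
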