2022 International Conference on Advanced Robotics and Intelligent Systems (ARIS): Latest Publications

Evidential Sensory Fusion of 2D Feature and 3D Shape Information for 3D Occluded Object Recognition in Robotics Applications
2022 International Conference on Advanced Robotics and Intelligent Systems (ARIS) Pub Date : 2022-08-24 DOI: 10.1109/ARIS56205.2022.9910450
R. Luo, Chi-Tang Chen
An evidential sensory fusion method using 2D feature and 3D shape information is proposed to recognize occluded objects. In robotic object-fetching applications, conventional recognition methods typically apply 2D texture matching or 3D shape fitting separately, which often produces wrong results when objects are occluded. The motivation of this study is to improve occluded-object recognition through an estimate-fusion method based on an RGB-D sensor, which provides both a 2D image and 3D depth information. To associate the 3D shape with the 2D texture, the region of interest (ROI) is first captured in the 3D coordinate system and mapped onto the 2D image. Dempster-Shafer (DS) evidence theory is then applied to fuse the confidences from the 2D texture and 3D shape recognitions, increasing the recognition rate for occluded objects. The experimental results demonstrate that the proposed evidence fusion correctly recognizes sample objects that receive low confidences when the 2D and 3D recognition algorithms operate separately.
Citations: 0
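The fusion step described above can be illustrated with Dempster's rule of combination. The sketch below is a generic two-source combination over the frame {object, not-object}; the mass values are illustrative, and this is not the authors' exact implementation.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule.
    m1, m2: dicts mapping frozenset hypotheses -> mass (each sums to 1)."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass on contradictory evidence
    # Normalize by 1 - K, where K is the total conflict
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

OBJ = frozenset({"object"})
NOT = frozenset({"not_object"})
ANY = OBJ | NOT  # ignorance (undecided mass)

# Illustrative confidences from a 2D texture matcher and a 3D shape
# fitter; occlusion leaves part of the mass on ignorance.
m_2d = {OBJ: 0.55, NOT: 0.15, ANY: 0.30}
m_3d = {OBJ: 0.50, NOT: 0.20, ANY: 0.30}
fused = dempster_combine(m_2d, m_3d)
```

Because the two sources agree, the fused belief in `OBJ` ends up higher than either sensor's belief alone, which is the effect the abstract relies on for occluded objects.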
Rock Climbing Benchmark for Humanoid Robots
2022 International Conference on Advanced Robotics and Intelligent Systems (ARIS) Pub Date : 2022-08-24 DOI: 10.1109/ARIS56205.2022.9910449
J. Baltes, Saeed Saeedvand
In this paper, we present the humanoid-robot rock-climbing competition as a benchmark problem for complex motion planning under kinematic and dynamic constraints, and describe an advanced algorithm for motion planning in this domain. The algorithm finds stable configurations in which three limbs are anchored to the wall while the fourth limb moves. We suggest search techniques that sequence these control funnels into a path to the top of the climbing wall.
Citations: 0
A Computational Approach for Cam Design Parameters Optimization of Disk Cam Mechanisms with Oscillating Roller Follower
2022 International Conference on Advanced Robotics and Intelligent Systems (ARIS) Pub Date : 2022-08-24 DOI: 10.1109/ARIS56205.2022.9910454
Tung-Hsin Pan, Ching-Hsiang Chang, P. Lin, Kuan-Lun Hsu
This research develops a computational approach to optimizing the cam design parameters of disk cam mechanisms with an oscillating roller follower. The cam-geometry synthesis procedure presented in this paper is computed with the MATLAB function fmincon. An implementation of such a cam mechanism demonstrates the effectiveness of the procedure, which makes it convenient to synthesize optimal dimensions for a cam mechanism with an oscillating roller follower under any preferred constraints. Finally, the importance and irreplaceability of this research are discussed in the final section.
Citations: 0
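The fmincon-based synthesis above is, at its core, constrained nonlinear minimization. A minimal Python analogue using `scipy.optimize.minimize` is sketched below; the objective (a compact cam subject to a pressure-angle-style bound) and all numbers are stand-ins for illustration, not the paper's actual cam formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical design variables: x = [base_radius, roller_radius] in mm.
def objective(x):
    # Prefer a compact mechanism (small base + roller radius).
    return x[0] + x[1]

h = 10.0  # mm, assumed follower lift (illustrative)

def pressure_angle_proxy(x):
    # Crude pressure-angle-like proxy in degrees (illustrative formula).
    return np.degrees(np.arctan2(h, x[0] + x[1]))

# Inequality constraints are expressed as g(x) >= 0 for scipy.
cons = [{"type": "ineq", "fun": lambda x: 30.0 - pressure_angle_proxy(x)}]
bounds = [(5.0, 100.0), (2.0, 20.0)]  # mm, assumed manufacturing limits

res = minimize(objective, x0=[50.0, 10.0], bounds=bounds, constraints=cons)
```

At the optimum, the proxy constraint is active: the radii shrink until the 30-degree bound is met, mirroring how fmincon drives the cam dimensions to the edge of the feasible region.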
Spatially-Excited Attention Learning for Fine-Grained Visual Categorization
2022 International Conference on Advanced Robotics and Intelligent Systems (ARIS) Pub Date : 2022-08-24 DOI: 10.1109/ARIS56205.2022.9910447
Zhaozhi Luo, Min-Hsiang Hung, Yi-Wen Lu, Kuan-Wen Chen
Learning distinguishable feature embeddings plays an important role in fine-grained visual categorization. Existing methods either design a complex attention mechanism to boost overall classification performance or propose a specific training strategy that strengthens the backbone network to enable low-cost, backbone-only inference. Unlike both, this paper proposes an alternative approach called Spatially-Excited Attention Learning (SEAL). SEAL is trained much like existing methods but provides two alternative streams at inference time: one requires more computation but delivers higher performance; the other is a low-cost, backbone-only inference with lower yet still competitive performance. Both streams are trained simultaneously by SEAL. Experiments show that SEAL achieves state-of-the-art performance under both the complex-architecture and backbone-only inference conditions.
Citations: 0
Open Action Recognition by A 3D Convolutional Neural Network Combining with An Open Fuzzy Min-Max Neural Network
2022 International Conference on Advanced Robotics and Intelligent Systems (ARIS) Pub Date : 2022-08-24 DOI: 10.1109/ARIS56205.2022.9910444
Chia-Ying Wu, Y. Tsay, A. C. Shih
The 3D convolutional neural network (3D CNN) has demonstrated high predictive power for action recognition when the inputs belong to known classes. In real applications, however, inputs from unknown classes can falsely receive high softmax scores for known classes; this is the open-set recognition problem. Recently, a series of statistical methods based on the OpenMax approach has been proposed to solve the problem for 2D image data, but how to apply the approach to video data remains unknown. Without relying on a prior statistical model, this paper proposes a two-stage approach to open action recognition. A 3D CNN model is trained in the first stage. The activation vectors output by the activation layer are then extracted as feature data to train a fuzzy min-max neural network (FMMNN) classifier in the second stage. Because the components of an activation vector are not limited to the range 0 to 1, an open FMMNN with a new fuzzy membership function that requires no input normalization is proposed and constructed from the feature data. The prediction output is the class with the maximum membership value. In the experiments, two separate datasets of mouse-action videos were used for training and for prediction testing, respectively, and the proposed method indeed improved prediction performance. Moreover, using human-action and random-background videos as two unknown datasets, the predictions for known and unknown sets could be distinguished by a single threshold. In short, the proposed open FMMNN not only improves prediction performance on inputs from known classes but also detects inputs from unknown classes.
Citations: 0
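The hyperbox membership at the heart of a fuzzy min-max classifier can be sketched as follows. This is the classic Simpson-style membership averaged over dimensions, shown for illustration; the paper's "open" variant uses a new, unnormalized membership function whose details are not reproduced here.

```python
import numpy as np

def hyperbox_membership(x, v, w, gamma=1.0):
    """Membership of pattern x in the hyperbox with min point v and
    max point w. gamma controls how fast membership decays outside
    the box; inside the box the membership is exactly 1."""
    x, v, w = (np.asarray(a, dtype=float) for a in (x, v, w))
    below = np.clip(gamma * (v - x), 0.0, 1.0)  # penalty for falling below min
    above = np.clip(gamma * (x - w), 0.0, 1.0)  # penalty for exceeding max
    per_dim = 1.0 - below - above
    return float(per_dim.mean())

def classify(x, boxes):
    """boxes: list of (v, w, label); winner-take-all on membership."""
    v, w, label = max(boxes, key=lambda b: hyperbox_membership(x, b[0], b[1]))
    return label, hyperbox_membership(x, v, w)
```

An open-set decision then follows naturally: if even the winning membership falls below a single threshold, the input is flagged as belonging to an unknown class, which matches the thresholding experiment described in the abstract.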
A Swabbing Robot for Covid-19 Specimen Collection
2022 International Conference on Advanced Robotics and Intelligent Systems (ARIS) Pub Date : 2022-08-24 DOI: 10.1109/ARIS56205.2022.9910446
Cheng-Yen Chung, Yun-Chi Hsieh, Yi-Hau Lai, P. Yen
The Covid-19 pandemic placed large numbers of people at risk of infection and death during the early outbreak period. Precise screening for the novel coronavirus through PCR (polymerase chain reaction) testing of nasal or oral samples became critical for epidemic control. This study proposes a robotic remote-manipulation platform, operated by medical staff, for oral and nasal specimen collection. The oral-cavity image is captured by a compact camera and displayed on the human-machine interface so that the medical staff can confirm the target region for sample collection. The robot's wiping action is accomplished with a force controller that senses the contact force between the cotton swab and the soft tissue. A prototype of the swabbing robot has been implemented to verify the feasibility and safety of remote robot-assisted specimen collection.
Citations: 0
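Force-regulated wiping of the kind described above can be sketched as a simple proportional controller against a linear contact-stiffness model. Everything here (gains, stiffness, setpoint, the first-order tissue model) is an illustrative assumption, not the paper's actual controller.

```python
# Hedged sketch of proportional force regulation for swab contact.
K_CONTACT = 500.0   # N/m, assumed linear tissue stiffness
F_TARGET = 0.5      # N, assumed desired swab contact force
KP = 0.0005         # m per N, proportional gain on the force error
STEPS = 300         # control cycles simulated

def simulate_force_control(steps=STEPS):
    depth = 0.0      # swab penetration depth into tissue (m)
    history = []
    for _ in range(steps):
        force = K_CONTACT * max(depth, 0.0)  # sensed contact force
        error = F_TARGET - force
        depth += KP * error                  # advance or retract the swab
        history.append(force)
    return history

forces = simulate_force_control()
```

With `KP * K_CONTACT < 1` the closed loop is a stable geometric approach to the setpoint, so the simulated contact force rises monotonically toward 0.5 N without overshoot, which is the safety property a swabbing controller needs.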
Calibration of a Robot's Tool Center Point Using a Laser Displacement Sensor
2022 International Conference on Advanced Robotics and Intelligent Systems (ARIS) Pub Date : 2022-08-24 DOI: 10.1109/ARIS56205.2022.9910448
Chih-Jer Lin, Hsing-Cheng Wang
The conventional tool center point (TCP) calibration method requires the operator to align the actual tool center point by eye, a lengthy workflow with low accuracy. To enable accurate, efficient, and non-contact calibration, this paper proposes an enhanced automatic TCP calibration method based on a laser displacement sensor, implemented on a collaborative robot with six degrees of freedom. During calibration, the robot arm moves a given distance along the X and Y axes and collects data as the tool passes through the laser to calculate the tool's deflection, then repeats the motion at a second height. After the deflection angle is calculated by triangulation and compensated, a third X- and Y-axis pass locates the tool's exact position on the X and Y axes. Finally, the tool is moved above the laser and lowered until the laser is triggered, completing the procedure and yielding the calibrated tool center position. The method was first verified in a virtual simulation environment and then implemented on the actual collaborative robot. The proposed TCP calibration achieves a positioning accuracy of about 0.07 mm, an orientation accuracy of about 0.18 degrees, a positioning repeatability of ±0.083 mm, and an orientation repeatability of less than ±0.17 degrees. These results meet the requirements of TCP calibration while remaining simple, economical, and time-saving: the whole calibration process takes only 60 seconds.
Citations: 0
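The two-height triangulation step can be reconstructed generically: if the tool tip crosses the laser plane at two known heights, the lateral drift between the crossing points gives the tilt angle. The geometry below is a plausible reconstruction with made-up numbers, not the paper's exact routine.

```python
import math

def tool_deflection_angle(p_low, p_high):
    """Estimate tool tilt from the positions at which the tool crosses
    the laser at two heights. p_low, p_high: (x, y, z) crossing points
    in mm; returns the tilt angle from vertical in degrees."""
    dx = p_high[0] - p_low[0]
    dy = p_high[1] - p_low[1]
    dz = p_high[2] - p_low[2]
    lateral = math.hypot(dx, dy)        # lateral drift between the two passes
    return math.degrees(math.atan2(lateral, dz))

# Illustrative: 0.5 mm of lateral drift over a 20 mm height difference
angle = tool_deflection_angle((0.0, 0.0, 0.0), (0.5, 0.0, 20.0))
```

Once this angle is known, the controller can rotate the tool frame to cancel it before the final X/Y/Z passes that pin down the calibrated TCP position.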
AI Enhanced Visual Inspection of Post-Polished Workpieces Using You Only Look Once Vision System for Intelligent Robotics Applications
2022 International Conference on Advanced Robotics and Intelligent Systems (ARIS) Pub Date : 2022-08-24 DOI: 10.1109/ARIS56205.2022.9910441
R. Luo, Zheng-Lun Yu
The objective of this paper is to provide an automated optical inspection solution for post-polished workpieces using the You Only Look Once (YOLOv5) vision system, intended to assist human workers who would otherwise inspect workpieces visually for long periods. Robots have become essential in industrial applications, an example being automated polishing tasks; however, polishing still requires human involvement in post-processing, especially in product inspection. YOLOv5 can be applied in many scenarios because of its high accuracy and fast real-time image detection. In this paper, the YOLOv5 approach is implemented for robotic faucet surface inspection, and the success of the process is demonstrated.
Citations: 0
A Deep Comparison Network for Visual Prognosis of a Linear Slide
2022 International Conference on Advanced Robotics and Intelligent Systems (ARIS) Pub Date : 2022-08-24 DOI: 10.1109/ARIS56205.2022.9910443
Chia-Jui Yang, Bo Wen, Chih-Hung G. Li
Linear slides are important components widely adopted in the manufacturing sector, particularly in automated production lines. Damage to a linear slide can cause abnormal machine vibration and lead to production-line failure. Common failure modes include ball-slider wear and severe rail-surface contamination or abrasion, so monitoring the condition of the linear slide is of great value in the Industry 4.0 era, and there is an emerging need for a prognosis method that prevents unexpected production-line breakdowns. The most common online inspection method uses an accelerometer to monitor the system's vibration; however, because abnormal vibration is the result, not the cause, of slide damage, it does not serve well as a prognostic signal. This article proposes an innovative prognosis method that uses low-resolution cameras to monitor the rail-surface condition. We conducted endurance tests on several linear slides, determined end of life from the vibration measurements and pre-compression values, and then annotated the rail-surface images with service percentages to form the training set for a deep convolutional neural network (CNN). The CNN architecture is a dual-input comparison network that compares the initial image with the current image to predict the service percentage of the linear slide. The preliminary test results appear promising, although the prediction accuracy needs further improvement before actual application. The comparison network generalizes well to various illumination conditions, and low-resolution cameras also cost much less than accelerometers.
Citations: 1
A Deep Learning Approach to Predict Dissolved Oxygen in Aquaculture
2022 International Conference on Advanced Robotics and Intelligent Systems (ARIS) Pub Date : 2022-08-24 DOI: 10.1109/ARIS56205.2022.9910453
Simon Peter Khabusi, Yonggui Huang
Fish is one of the major sources of protein for people. Most fish supply comes from natural habitats, including rivers, lakes, seas, and oceans, but high demand has necessitated fish farming in man-made lakes, ponds, and swamps. Various issues pose risks to fish survival and growth, among them the level of dissolved oxygen (DO) in the water, an essential environmental condition whose scarcity leads to suffocation and ultimately death. This study aimed to design a prediction model for DO in aquatic environments. To achieve this objective, time-series data consisting of 70,374 records and 15 attributes, collected over more than five years from Mumford Cove in Connecticut, USA, were preprocessed and used to train a long short-term memory (LSTM) recurrent neural network (RNN) for DO prediction. The training and testing data were obtained by splitting the dataset 70%/30%. Regression models, including linear regression (LR), support vector regression (SVR), and decision tree regression (DTR), were also created for comparison. Model performance was evaluated using the mean absolute percentage error (MAPE), mean squared error (MSE), mean absolute error (MAE), and coefficient of determination (R² score). The LSTM achieved superior performance compared with the regression models. In conclusion, DO prediction on such multivariate time-series data can be achieved well with an LSTM RNN.
Citations: 3
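The four evaluation metrics named in the abstract (MAPE, MSE, MAE, R²) can be computed directly; a minimal NumPy sketch with illustrative DO values (not the Mumford Cove data):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return MAPE (%), MSE, MAE, and the R^2 score for a prediction."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    mae = float(np.mean(np.abs(err)))
    # MAPE assumes no zero targets; DO in mg/L is normally positive.
    mape = float(np.mean(np.abs(err / y_true)) * 100.0)
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    return {"MAPE": mape, "MSE": mse, "MAE": mae, "R2": r2}

# Illustrative dissolved-oxygen readings in mg/L
y_true = [7.9, 8.1, 7.6, 8.4, 8.0]
y_pred = [7.8, 8.0, 7.7, 8.3, 8.1]
metrics = regression_metrics(y_true, y_pred)
```

Comparing models on all four metrics at once, as the study does, guards against a model that looks good on one error measure (e.g. MAE) while fitting the variance of the series poorly (low R²).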