{"title":"Automated face recognition system for smart attendance application using convolutional neural networks","authors":"Lakshmi Narayana Thalluri, Kiranmai Babburu, Aravind Kumar Madam, K. V. V. Kumar, G. V. Ganesh, Konari Rajasekhar, Koushik Guha, Md. Baig Mohammad, S. S. Kiran, Addepalli V. S. Y. Narayana Sarma, Vegesna Venkatasiva Naga Yaswanth","doi":"10.1007/s41315-023-00310-1","DOIUrl":"https://doi.org/10.1007/s41315-023-00310-1","url":null,"abstract":"<p>In this paper, a touch less automated face recognition system for smart attendance application was designed using convolutional neural network (CNN). The presented touch less smart attendance system is useful for offices and college’s attendance applications with this the spread of covid-19 type viruses can be restrict. The CNN was trained with dedicated database of 1890 faces with different illumination levels and rotate angles of total 30 targeted classes. A CNN performance analysis was done with 9-layer and 11-layer with different activation functions i.e., Step, Sigmoid, Tanh, softmax, and ReLu. An 11-layer CNN with ReLu activation function offers an accuracy of 96.2% for the designed face database. The system is capable to detect multiple faces from test images using Viola Jones algorithm. Eventually, a web application was designed which helps to monitor the attendance and to generate the report.</p>","PeriodicalId":44563,"journal":{"name":"International Journal of Intelligent Robotics and Applications","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139420852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A review of robotic charging for electric vehicles","authors":"Hendri Maja Saputra, Nur Safwati Mohd Nor, Estiko Rijanto, Mohd Zarhamdy Md Zain, Intan Zaurah Mat Darus, Edwar Yazid","doi":"10.1007/s41315-023-00306-x","DOIUrl":"https://doi.org/10.1007/s41315-023-00306-x","url":null,"abstract":"<p>This paper reviews the technical aspects of robotic charging for Electric Vehicles (EVs), aiming to identify research trends, methods, and challenges. It implemented the Systematic Literature Review (SLR), starting with the formulation of research question; searching and collecting articles from databases, including Web of Science, Scopus, Dimensions, and Lens; selecting articles; and data extraction. We reviewed the articles published from 2012 to 2022 and found that the number of publications increased exponentially. The top five keywords were electric vehicle, robotic, automatic charging, pose estimation, and computer vision. We continued an in-depth review from the points of view of autonomous docking, charging socket detection-pose estimation, plug insertion, and robot manipulator. No article used a camera, Lidar, or Laser as the sensor that reported successful autonomous docking without position error. Furthermore, we identified two problems when using computer vision for the socket pose estimation and the plug insertion: low robustness against different socket shapes and light conditions; inability to monitor excessive plugging force. Using infrared to locate the socket yielded more robustness. However, it requires modification of the socket on the vehicle. A few articles used a camera and force/torque sensors to control the plug insertion based on different control approaches: model-based control and data-driven machine learning. The challenges were to increase the success rate and shorten the time. Most researchers used commercial 6-DOF robot manipulators, whereas a few designed lower-DOF robot manipulators. Another research challenge was developing a 4-DOF robot manipulator with compliance that ensures a 100% success rate of plug insertion.</p>","PeriodicalId":44563,"journal":{"name":"International Journal of Intelligent Robotics and Applications","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138495137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Experiments with cooperative robots that can detect object’s shape, color and size to perform tasks in industrial workplaces","authors":"Md Fahim Shahoriar Titu, S. M. Rezwanul Haque, Rifad Islam, Akram Hossain, Mohammad Abdul Qayum, Riasat Khan","doi":"10.1007/s41315-023-00305-y","DOIUrl":"https://doi.org/10.1007/s41315-023-00305-y","url":null,"abstract":"<p>Automation and human-robot collaboration are increasing in modern workplaces such as industrial manufacturing. Nowadays, humans rely heavily on advanced robotic devices to perform tasks quickly and accurately. Modern robots with computer vision and artificial intelligence are gaining attention and popularity rapidly. This paper demonstrates how a robot can automatically detect an object’s shape, color, and size using computer vision techniques and act based on information feedback. In this work, a powerful computational model for a robot has been developed that distinguishes an object’s shape, size, and color in real time with high accuracy. Then it can integrate a robotic arm to pick a specific object. A dataset of 6558 images of various monochromatic objects has been developed, containing three colors against a white background and five shapes for the research. The designed system for detection has achieved 99.8% success in an object’s shape detection. Also, the system demonstrated 100% success in the object’s color and size detection with the OpenCV image processing framework. On the other hand, the prototype robotic system based on Raspberry Pi-4B has achieved 80.7% accuracy for geometrical shape detection and 81.07%, and 59.77% accuracy for color recognition and distance measurement, respectively. Moreover, the system guided a robotic arm to pick up the object based on its color and shape with a mean response time of 19 seconds. The idea is to simulate a workplace environment where a worker will ask the robotic systems to perform a task on a specific object. Our robotic system can accurately identify the object’s attributes (e.g., 100%) and is able to perform the task reliably (81%). However, reliability can be improved by using a more powerful computing system, such as the robotic prototype. The article’s contribution is to use a cutting-edge computer vision technique to detect and categorize objects with the help of a small private dataset to shorten the training duration and enable the suggested system to adapt to components that may be needed for creating a new industrial product in a shorter period. The source code and images of the collected dataset can be found at: https://github.com/TituShahoriar/cse499B_Hardware_Proposed_System.</p>","PeriodicalId":44563,"journal":{"name":"International Journal of Intelligent Robotics and Applications","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138495136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Survey on learning-based scene extrapolation in robotics","authors":"Selma Güzel, Sırma Yavuz","doi":"10.1007/s41315-023-00303-0","DOIUrl":"https://doi.org/10.1007/s41315-023-00303-0","url":null,"abstract":"<p>Human’s imagination capability provides recognition of unseen environment which should be improved in robots in order to have better mapping, planning, navigation and exploration capabilities in the fields where the robots are utilized such as military, disasters, and industry. The task of completion of a partial scene via estimating the unobserved parts relied on the known information is called scene extrapolation. It increases performance and satisfies a valid approximation of unseen content even if it is impossible or hard to obtain it due to the issues related with security, environment, etc. In this survey paper, the studies related to learning-based scene extrapolation in robotics are presented and evaluated taking the efficiencies and limitations of the methods into account to provide researchers in this field a general overview on this task and encourage them to improve the current studies for higher success. In addition, the methods which use common datasets and metrics are compared. To the best of our knowledge, there isn’t any survey on this essential topic and we hope this survey will compensate this.\u0000</p>","PeriodicalId":44563,"journal":{"name":"International Journal of Intelligent Robotics and Applications","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138495118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D LiDAR-based obstacle detection and tracking for autonomous navigation in dynamic environments","authors":"Arindam Saha, Bibhas Chandra Dhara","doi":"10.1007/s41315-023-00302-1","DOIUrl":"https://doi.org/10.1007/s41315-023-00302-1","url":null,"abstract":"","PeriodicalId":44563,"journal":{"name":"International Journal of Intelligent Robotics and Applications","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134953639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image-based visual servoing control of remotely operated vehicle for underwater pipeline inspection","authors":"Xiongfeng Yi, Zheng Chen","doi":"10.1007/s41315-023-00301-2","DOIUrl":"https://doi.org/10.1007/s41315-023-00301-2","url":null,"abstract":"","PeriodicalId":44563,"journal":{"name":"International Journal of Intelligent Robotics and Applications","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136352749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on safety design and optimization of collaborative robots","authors":"Mingwei Hu","doi":"10.1007/s41315-023-00299-7","DOIUrl":"https://doi.org/10.1007/s41315-023-00299-7","url":null,"abstract":"","PeriodicalId":44563,"journal":{"name":"International Journal of Intelligent Robotics and Applications","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136308258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance analysis of path planning techniques for autonomous robots","authors":"Lidia G. S. Rocha, Pedro H. C. Kim, Kelen C. Teixeira Vivaldini","doi":"10.1007/s41315-023-00298-8","DOIUrl":"https://doi.org/10.1007/s41315-023-00298-8","url":null,"abstract":"","PeriodicalId":44563,"journal":{"name":"International Journal of Intelligent Robotics and Applications","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135306545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}