{"title":"Inverse Kinematics Solution of Programmable Universal Machine for Assembly (PUMA) Robot","authors":"Gurjeet Singh, V. Banga, T. Yingthawornsuk","doi":"10.1109/SITIS.2019.00088","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00088","url":null,"abstract":"This present paper focused on the PUMA 560 robot arm's kinematics problem or the connection between angle in each joint and the end-effectors' position. In this paper the Forward and Inverse Kinematic solution of PUMA Robot is solved by analytical approach. All the values of theta are solved by Denavit-Hartenberg analysis (DH). In other words, it deals with finding the homogeneous transformation matrix that describes the position and orientation of the tool frame with respect to the global reference frame. On the other hand, inverse kinematics is used to calculate the joint angles required to achieve the desired position and orientation. The same transformation matrix which resulted from the forward kinematics in order to describe the position and the orientation of the tool frame relative to the robot base frame is used here in the inverse kinematics to solve for the joint angles.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129927565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CAD3A: A Web-Based Application to Visualize and Semantically Enhance CAD Assembly Models","authors":"Katia Lupinetti, D. Cabiddu, F. Giannini, M. Monti","doi":"10.1109/SITIS.2019.00080","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00080","url":null,"abstract":"Nowadays, there is a significant interest in new media types such as 3D models. Thanks to the computer graphics advances, 3D content can be rendered in real-time on desktop and mobile devices improving the user experience. In addition, 3D contents may be endowed with heterogeneous metadata to improve the user understanding. This paper aims at improving the fruition of 3D models of industrial products. This type of content is designed as CAD (Computer-Aided Design) models, whose representation is not suitable for visualization over the web. We present an online processor of 3D models able to extract semantic content. The extracted data are represented in a meaningful and structured manner, such that the user can first visualize them on demand through any device, and then export the results for further analysis or to be processed by other external applications if necessary. The exploitation of Web technologies makes the framework easy to use both from desktop and mobile devices, no matter their specific hardware and software equipment.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121372715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ubiquitous Face-Ear Recognition Based on Frames Sequence Capture and Analysis","authors":"Liberato Iannitelli, S. Ricciardi","doi":"10.1109/SITIS.2019.00115","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00115","url":null,"abstract":"Unimodal biometric systems performance is known to be easily affected by intra-class variations, noisy samples, spoofing techniques and environmental conditions. These problems get even more challenging whenever biometric data acquisition is performed \"in-the-wild\". Some of these limitations can notably be addressed by means of multi-biometric approaches, exploiting different biometric traits, multiple samples and multiple algorithms to establish the identity of an individual. To this regard, the present study describes a face+ear biometric system requiring just a single combined video capture of the subject's face to work in a ubiquitous operative scenario. Exploiting the video capture capabilities provided by most smartphones' built-in cameras, the proposed method acquires subject's face both frontally and sideways within a single video sample. The resulting frames sequence is then analyzed to find the ones most suited, quality wise, to feed the two parallel biometric pipelines. Different data-fusion strategies, working either at score level with quality-based adaptive weighting or at decision level, have been applied to the output of face and ear matching stages to the aim of improving system's accuracy and reliability. 
Preliminary experimental results show good recognition accuracy coupled to an unusual easiness of operation for a ubiquitous multimodal biometric system.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126614980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessment of OLED Head Mounted Display for Vision Research with Virtual Reality","authors":"M. Toscani, R. G. Rodríguez, Dar’ya Guarnera, G. Guarnera, Assim Kalouaz, K. Gegenfurtner","doi":"10.1109/SITIS.2019.00120","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00120","url":null,"abstract":"Vision researchers often rely on visual display technology to present observers with controlled stimuli, usually by means of a calibrated computer screen. Virtual Reality (VR) may allow a similar level of control, together with higher realism of the stimulation and a visual field larger than what is achievable on a standard computer monitor. To produce the desired luminance and color of the stimuli, accurate characterization of the spectral properties of the display is necessary. However, this process might not be trivial on VR displays, because 1) the Head Mounted Displays (HMD) used in VR are typically designed to be light-weight and low energy consuming, thus they might not meet some of the standard assumptions in display calibration, 2) the VR software might affect the color and luminance signal in a complex way, further complicating the calibration process. Here we show that 1) a common, off-the-shelf display used in our experiments behaves similarly to a standard OLED monitor, 2) the VR gaming engine we tested (Unreal Engine 4) introduces a complex behavior, 3) which can be disabled. 
This allows to accurately control colors and luminance emitted by the display, thus enabling its use for perceptual experiments.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133305682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deterministic vs. Random Initializations for K-Means Color Image Quantization","authors":"H. Palus, M. Frackiewicz","doi":"10.1109/SITIS.2019.00020","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00020","url":null,"abstract":"We present six methods for initialising the K-means clustering algorithm used for color image quantization. We test these initialization methods on a few quantization levels and on 24 color images contained in the Kodak image dataset. In the vast majority of the examined cases the best results were obtained for the initialization of KM++. The evaluation of the results was carried out using the MSE and several new perceptual quality indices.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116060758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image Sharpening by Grid Warping with Curvature Analysis","authors":"A. Nasonov, A. Krylov","doi":"10.1109/SITIS.2019.00051","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00051","url":null,"abstract":"The paper proposes an improvement of the grid warping algorithm for solving the edge sharpening problem. The idea of the grid warping is to transform the neighborhood of the edges in order to make the edge transient area thinner. This approach does not amplify the noise and does not introduce ringing artifact. The idea of the improvement is to analyze the curvature of the gradient field at edge point and adjust warping vectors.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"354 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116130853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A System for Collecting and Analyzing Road Accidents Big Data","authors":"H. E. A. E. Abdallaoui, A. E. Fazziki, F. Z. Ennaji, M. Sadgal","doi":"10.1109/SITIS.2019.00108","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00108","url":null,"abstract":"Many factors explain traffic accidents, such as the type of the accident site, its environment, the driver's behavior, and other uncertain complex factors. As a result, the occurrence of road accidents is non-linear, so it is necessary to explore the correlation between data from many aspects to minimize the risk. After data preprocessing following a classification using the datamining tools, relevant information can be deduced about the causes of the high-frequency accidents. Depending on the results obtained, we can verify the accuracy of the extracted information, and this can help predict new situations with similar data in the future. The aim is to choose the most accurate extraction process, by analyzing the characteristics of the data and their relationship with the analysis and the extraction process. In this paper, we propose a decision-making system for the traffic accident data analysis in order to extract information relevant to the prevention of the road risk. 
This system is based on appropriate datamining techniques for collecting, pre-processing and exploring accident data to categorize road accidents and identify the most problematic sites.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122701606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image Patch Similarity Through a Meta-Learning Metric Based Approach","authors":"Patricia L. Suárez, A. Sappa, B. Vintimilla","doi":"10.1109/SITIS.2019.00087","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00087","url":null,"abstract":"This paper proposes a novel approach to learn the best representation of the image patches to determine the similarity degree between cross-spectral regions (patches). The present work tackles this problem using a few-shot metric based meta-learning framework able to compare image regions and determining a similarity measure to decide if there is similarity between the compared patches. Our model is training end-to-end from scratch. Experimental results have shown that the proposed approach effectively estimates the similarity of the patches and, comparing it with the state of the art approaches, shows better results.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124881780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proposition of Convolutional Neural Network Based System for Skin Cancer Detection","authors":"Esther Chabi Adjobo, A. T. S. Mahama, P. Gouton, J. Tossa","doi":"10.1109/SITIS.2019.00018","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00018","url":null,"abstract":"Skin cancer automated diagnosis tools play a vital role in timely screening, helping dermatologists focus on melanoma cases. Best arts on automated melanoma screening use deep learning-based approaches, especially deep convolutional neural networks (CNN) to improve performances. Because of the large number of parameters that could be involved during training in CNN many training samples are needed to avoid overfitting problem. Gabor filtering can efficiently extract spatial information including edges and textures, which may reduce the features extraction burden to CNN. In this paper, we proposed a Gabor Convolutional Network (GCN) model to improve the performance of automated diagnosis of skin cancer systems. The model combines a CNN model and Gabor filtering and serves three functions: generation of Gabor filter banks, CNN construction and filter injection. We performed experiments with dermoscopic images and results were interpreted according to classification accuracy. 
The results we have obtained show that our GCN offers the best classification accuracy with a value of 96.39% against 94.02% for the CNN model.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126015086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented Reality for Tissue Converting Maintenance","authors":"S. Coscetti, D. Moroni, G. Pieri, M. Tampucci","doi":"10.1109/SITIS.2019.00098","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00098","url":null,"abstract":"Tissue converting lines represent one of the key plants in the paper production field: thanks to them, paper tissue is converted into its final form for domestic and sanitary usage. One of the critical points of the tissue converting lines is the productivity and the possibility to follow the conversion process at a relatively low cost. Although the actual lines have high productivity yet, the study of state of the art has shown that choke-points still exist, caused by inadequate automation. In this paper, we present the preliminary results of a project which aims at removing such obstacles towards complete automation, by introducing a set of innovations based on ICT solutions applied to advanced automation. In detail, advanced computer vision and video analytics methods will be applied to monitor converting lines pervasively and to extract automatically process information to self-regulate either specific machine and global parameters. Augmented reality interfaces are being designed and developed to support converting line monitoring and maintenance, both ordinary and extraordinary. An Artificial Intelligence module provides suggestions and instructions to the operators in order to guarantee the production level even in the case of unskilled staff. 
The automation of such processes will improve factory safety, decrease manual interventions, and, thus, will increase production line up-time and efficiency.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126108591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}