{"title":"Water Stress Detection in Pearl Millet Canopy with Selected Wavebands using UAV Based Hyperspectral Imaging and Machine Learning","authors":"Adduru U. G. Sankararao, P. Rajalakshmi, Sivasakthi Kaliamoorthy, Sunitha Choudhary","doi":"10.1109/SAS54819.2022.9881337","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881337","url":null,"abstract":"The major bottleneck in plant phenotyping is the assessment of thousands of genotypes under field conditions, which can be accelerated through Unmanned Aerial Vehicle (UAV) based sensing. Phenotyping for complex traits such as abiotic stress (drought) adaptation can be explored more precisely through the rich spectral information acquired by Hyperspectral Imaging (HSI) sensors. HSI sensors can identify plant water stress early by observing the changes in canopy reflectance due to drought. This study used a UAV-based HSI sensor in the 400-1000 nm range to identify canopy water stress in the pearl millet crop. Five machine learning-based Feature Selection (FS) methods were used to identify the ten top-ranked wavebands sensitive to canopy water stress. Wavelengths around 692, 714-716, 763-769, 774-882, 870, and 949 nm were repeatedly selected by two or more FS methods. The Recursive Feature Elimination method with a Support Vector Machine (SVM) classifier outperformed the other FS methods in selecting the best band subset. An SVM classifier with a linear kernel on the selected bands classified two water-stress levels with 95.38% accuracy and detected stress early with 80.76% accuracy in the pearl millet canopy. This study will benefit the agriculture sector by accelerating crop phenotyping using UAV-based HSI.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"20 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132192224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cough Classification Using Audio Spectrogram Transformer","authors":"K. Habashy, J. J. Valdés, Madison Cohen-McFarlane, Pengcheng Xi, Bruce Wallace, R. Goubran, F. Knoefel","doi":"10.1109/SAS54819.2022.9881344","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881344","url":null,"abstract":"A variety of technologies can support aging in place, including smart home sensing that can enable independent living through real-time data analysis. In this work, we study cough sound analysis, as coughing is a key symptom of many respiratory illnesses and conditions. Based on a data set of cough recordings, we propose a two-pronged approach: the first prong leverages unsupervised learning to compute the intrinsic dimensions of the data and maps the raw data for visualization, and the second uses these insights to train machine learning models through transfer learning on Vision Transformer models. Data augmentation approaches are implemented to improve the performance of the models, and our top-performing model achieves an F1-score of 0.804. This study suggests the feasibility of using smart sensing and deep learning for gaining insights into the health of older adults.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117221093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of a neural network to identify plastics using Fluorescence Lifetime Imaging Microscopy","authors":"Georgekutty Jose Maniyattu, Eldho Geegy, N. Leiter, Maximilian Wohlschlager, M. Versen, C. Laforsch","doi":"10.1109/SAS54819.2022.9881372","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881372","url":null,"abstract":"Plastics have become a major part of humans' daily lives. If not recycled correctly, uncontrolled plastic usage leads to accumulation in the environment, posing a threat to flora and fauna. The correct sorting and recycling of the most common plastic types and the identification of plastics in the environment are therefore important. Fluorescence lifetime imaging microscopy shows high potential for sorting and identifying plastic types. A data-based and an image-based classification are investigated using the Python programming language to demonstrate the potential of a neural network based on fluorescence lifetime images to identify plastic types. The results indicate that the data-based classification has a higher identification accuracy than the image-based classification.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115546176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feasibility of Measuring Shot Group Using LoRa Technology and YOLO V5","authors":"Sanghyun Park, Dongheon Lee, Jisoo Choi, Dohyeon Ko, Minji Lee, Zack Murphy, Nowf Binhowidy, Anthony H. Smith","doi":"10.1109/SAS54819.2022.9881356","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881356","url":null,"abstract":"Shooting is a common activity all over the world for both military and recreational purposes. Shooting performance can be measured from the size of the shot group (grouping). Traditionally, shooters have calculated the size of the group by measuring the distance between bullet impacts by hand. This paper aims to create a practical automated module for measuring shot-group size that can be operated from several kilometers away. It includes an IoT (Internet of Things) system and a mobile application that users can access. LoRa technology is adopted to cover long distances, and YOLO V5 is implemented to detect bullet impacts. Mathematical methods for calculating accurate distances and the supporting engineering techniques are described, with experiments on various parameters and conditions. In indoor tests, the proposed module measured the shot group with a mean accuracy of 91.8%. Outdoor tests, which were affected by uncontrolled environmental variables, are expected to achieve better accuracy in future work.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121584681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Live Migration of a 3D Flash LiDAR System between two Independent Data Processing Systems with Redundant Design","authors":"Philipp Stelzer, Sebastian Reicher, Georg Macher, C. Steger, Raphael Schermann","doi":"10.1109/SAS54819.2022.9881255","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881255","url":null,"abstract":"Self-driving and self-flying vehicles can drive or fly independently without the intervention of an operator. For this purpose, these vehicles need sensors for environment perception and safety-critical data processing systems to process the raw data obtained from these sensors. If such safety-critical systems fail, the consequences can be fatal, affecting human lives and/or the environment, especially in highly automated vehicles. A total failure of these systems is one of the worst scenarios in an automated vehicle. Therefore, such safety-critical systems are often designed redundantly in order to prevent a total failure of environment perception. To ensure that the vehicle can continue to operate safely, however, the live migration from one system to the other must be carried out with as little downtime as possible. In this publication, we present a concept for the live migration of a 3D Flash LiDAR between two independent data processing systems with a redundant design. This concept provides a solution for highly automated vehicles to remain fail-operational in case one of the redundant data processing systems fails. The results obtained from the implemented concept, without specifically addressing performance, are also provided to demonstrate feasibility.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122721611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Forestry Crane Automation using Learning-based Visual Grasping Point Prediction","authors":"Harald Gietler, Christoph Böhm, Stefan Ainetter, Christian Schöffmann, F. Fraundorfer, S. Weiss, H. Zangl","doi":"10.1109/SAS54819.2022.9881370","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881370","url":null,"abstract":"This paper presents an approach to automate the log grasping of a forestry crane. A common hydraulically actuated log crane is converted into a robotic device by retrofitting it with various sensors, yielding perception of internal and environmental states. The approach uses learning-based visual grasp detection. Once a suitable grasping candidate is determined, the crane starts its kinematically controlled operation. The system's design process is based on a real-sim-real transfer to avoid crane behavior that could harm humans or the machine itself. Firstly, the grasping-position prediction network is trained with real-world images. Secondly, an accurate simulation model of the crane, including photo-realistic synthetic images, is established. Note that in simulation, the prediction network trained on real-world data can be used without re-training. The simulation is used to design and verify the crane's control and path-planning schemes. In this stage, potentially dangerous maneuvers or insufficient quality of sensory information become visible. Thirdly, the elaborated closed-loop system configuration is transferred to the real-world forestry crane. The pick-and-place capabilities are verified in simulation as well as experimentally. A comparison shows that simulation and real-world scenarios perform equally well, validating the proposed real-sim-real design procedure.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128605776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of the quality of LiDAR data in the varying ambient light","authors":"Bhaskar Anand, Harshal Verma, A. Thakur, Parvez Alam, P. Rajalakshmi","doi":"10.1109/SAS54819.2022.9881373","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881373","url":null,"abstract":"Light detection and ranging (LiDAR) is a widely used sensor in Intelligent Transportation Systems (ITS). It precisely determines the depth of objects present around a vehicle. In this paper, the effect of ambient light on the quality of acquired LiDAR data is presented. The data was captured at different times of day under varied light conditions: partial light in the early morning and evening, no light at night, and full light at mid-day. The data was acquired at these four times. Segmentation of an object (a person, in this experiment) was performed on the acquired point cloud data. The number of object points and the point density were observed to examine whether light affects the quality of LiDAR data. The results of the experiments suggest that the variation of light has little or no effect on the quality of LiDAR data.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120978146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improvement of Door Recognition Algorithm using Lidar and RGB-D camera for Mobile Manipulator","authors":"Taehyeon Kim, Minwoo Kang, Sumin Kang, D. Kim","doi":"10.1109/SAS54819.2022.9881249","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881249","url":null,"abstract":"Mobile manipulators, which can perform various tasks in dynamic environments, have the advantage of driving and carrying out diverse tasks in large indoor environments with complex structures such as high-rise buildings. However, in order to navigate efficiently in such environments, a mapping process containing information about the diverse objects the robot can interact with is essential. Among these objects, doors are of great importance, but door recognition is challenging because doors of various structures and sizes exist even within a single indoor environment. This paper proposes an improved door recognition algorithm for a mobile manipulator robot using an RGB-D camera attached to the end effector of the manipulator and the Lidar of the mobile platform. Laser scan data from the Lidar is processed by a line-fitting algorithm, and vision data from the RGB-D camera is processed by YOLOv3. The laser scan data enables the first door recognition, and additional recognition through vision data is made possible by controlling the manipulator according to the weights given by the first recognition. The proposed algorithm has been verified in a simulation environment based on the real world, and we confirmed that it has a higher recognition success rate than traditional algorithms.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115541372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hydrogen Induced Dipole Layer in Pd-SiO2 Based Gas Sensors","authors":"Idan Shem Tov, B. Mukherjee, J. Hayon, Laura Hargreaves, A. Shluger, Y. Rosenwaks","doi":"10.1109/SAS54819.2022.9881358","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881358","url":null,"abstract":"A palladium (Pd) functionalized electrostatically formed nanowire (EFN) sensor, a silicon-on-insulator (SOI) based multi-gate transistor, has proven to be an ultra-sensitive platform for hydrogen (H<inf>2</inf>) sensing. This EFN includes a Pd–SiO<inf>2</inf>–silicon metal-oxide-semiconductor (MOS) structure, which is studied here in detail. We compare the EFN threshold voltage shift (∆V<inf>TH</inf>) due to H<inf>2</inf> adsorption to the calculated ∆V<inf>TH</inf> due to dipoles placed at the Pd/SiO<inf>2</inf> interface of the EFN device. We show that the potential drop at the Pd/SiO<inf>2</inf> interface is responsible for the ultra-sensitive hydrogen sensing of the EFN.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125731749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}