{"title":"A Standardized Representation of Convolutional Neural Networks for Reliable Deployment of Machine Learning Models in the Manufacturing Industry","authors":"M. Ferguson, Seongwoon Jeong, K. Law, Svetlana Levitan, Anantha Narayanan Narayanan, Rainer Burkhardt, T. Jena, Y. T. Lee","doi":"10.1115/DETC2019-97095","DOIUrl":"https://doi.org/10.1115/DETC2019-97095","url":null,"abstract":"\u0000 The use of deep convolutional neural networks is becoming increasingly popular in the engineering and manufacturing sectors. However, managing the distribution of trained models is still a difficult task, partially due to the limitations of standardized methods for neural network representation. This paper seeks to address this issue by proposing a standardized format for convolutional neural networks, based on the Predictive Model Markup Language (PMML). A number of pre-trained ImageNet models are converted to the proposed PMML format to demonstrate the flexibility and utility of this format. These models are then fine-tuned to detect casting defects in Xray images. Finally, a scoring engine is developed to evaluate new input images against models in the proposed format. The utility of the proposed format and scoring engine is demonstrated by benchmarking the performance of the defect-detection models on a range of different computation platforms. The scoring engine and trained models are made available at https://github.com/maxkferg/python-pmml.","PeriodicalId":352702,"journal":{"name":"Volume 1: 39th Computers and Information in Engineering Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115086836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prediction of Nitrogen Concentration in Fuel Cells Using Data-Driven Modeling","authors":"Tong Lin, Leiming Hu, S. Litster, L. Kara","doi":"10.1115/detc2019-98477","DOIUrl":"https://doi.org/10.1115/detc2019-98477","url":null,"abstract":"\u0000 This paper presents a set of data-driven methods for predicting nitrogen concentration in proton exchange membrane fuel cells (PEMFCs). The nitrogen that accumulates in the anode channel is a critical factor giving rise to significant inefficiency in fuel cells. While periodically purging the gases in the anode channel is a common strategy to combat nitrogen accumulation, such open-loop strategies also create sub-optimal purging decisions. Instead, an accurate prediction of nitrogen concentration can help devise optimal purging strategies. However, model based approaches such as CFD simulations for nitrogen prediction are often unavailable for long-stack fuel cells due to the complexity of the chemical environment, or are inherently slow preventing them from being used for real-time nitrogen prediction on deployed fuel cells. As one step toward addressing this challenge, we explore a set of data-driven techniques for learning a regression model from the input parameters to the nitrogen build-up using a model-based fuel cell simulator as an offline data generator. This allows the trained machine learning system to make fast decisions about nitrogen concentration during deployment based on other parameters that can be obtained through sensors. We describe the various methods we explore, compare the outcomes, and provide future directions in utilizing machine learning for fuel cell physics modeling in general.","PeriodicalId":352702,"journal":{"name":"Volume 1: 39th Computers and Information in Engineering Conference","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114631300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessment of Joint Parameters in a Kinect Sensor Based Rehabilitation Game","authors":"S. Manna, V. Dubey","doi":"10.1115/detc2019-97519","DOIUrl":"https://doi.org/10.1115/detc2019-97519","url":null,"abstract":"\u0000 A Kinect sensor based basketball game is developed for delivering post-stroke exercises in association with a newly developed elbow exoskeleton. Few interesting features such as audio-visual feedback and scoring have been added to the game platform to enhance patient’s engagement during exercises. After playing the game, the performance score has been calculated based on their reachable points and reaching time to measure their current health conditions. During exercises, joint parameters are measured using the motion capture technique of Kinect sensor. The measurement accuracy of Kinect sensor is validated by two comparative studies where two healthy subjects were asked to move elbow joint in front of Kinect sensor wearing the developed elbow exoskeleton. In the first study, the joint information collected from Kinect sensor was compared with the exoskeleton based sensor. In the next study, the length of upperarm and forearm measured by Kinect were compared with the standard anthropometric data. The measurement errors between Kinect and exoskeleton are turned out to be in the acceptable range; 1% for subject 1 and 0.44% for subject 2 in case of joint angle; 5.55% and 3.58% for subject 1 and subject 2 respectively in case of joint torque. The average errors of Kinect measurement as compared to the anthropometric data of the two subjects are 16.52% for upperarm length and 9.87% for forearm length. It shows that Kinect sensor can measure the activity of joint movement with a minimum margin of error.","PeriodicalId":352702,"journal":{"name":"Volume 1: 39th Computers and Information in Engineering Conference","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115194971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mobile Based Real-Time Occlusion Between Real and Digital Objects in Augmented Reality","authors":"Pranav Jain, Conrad S. Tucker","doi":"10.1115/detc2019-98440","DOIUrl":"https://doi.org/10.1115/detc2019-98440","url":null,"abstract":"\u0000 In this paper, a mobile-based augmented reality (AR) method is presented that is capable of accurate occlusion between digital and real-world objects in real-time. AR occlusion is the process of hiding or showing virtual objects behind physical ones. Existing approaches that address occlusion in AR applications typically require the use of markers or depth sensors, coupled with compute machines (e.g., laptop or desktop). Furthermore, real-world environments are cluttered and contain motion artifacts that result in occlusion errors and improperly rendered virtual objects, relative to the real world environment. These occlusion errors can lead users to have an incorrect perception of the environment around them while using an AR application, namely not knowing a real-world object is present. Moving the technology to mobile-based AR environments is necessary to reduce the cost and complexity of these technologies. This paper presents a mobile-based AR method that brings real and virtual objects into a similar coordinate system so that virtual objects do not obscure nearby real-world objects in an AR environment. This method captures and processes visual data in real-time, allowing the method to be used in a variety of non-static environments and scenarios. The results of the case study show that the method has the potential to reduce compute complexity, maintain high frame rates to run in real-time, and maintain occlusion efficacy.","PeriodicalId":352702,"journal":{"name":"Volume 1: 39th Computers and Information in Engineering Conference","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129419358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}