{"title":"Actuation Confirmation and Negation via Facial-Identity and -Expression Recognition","authors":"A. L. Cheng, H. Bier, Galoget Latorre","doi":"10.1109/ETCM.2018.8580319","DOIUrl":null,"url":null,"abstract":"This paper presents the implementation of a facial-identity and -expression recognition mechanism that confirms or negates physical and/or computational actuations in an intelligent built-environment. Said mechanism is built via Google Brain’s TensorFlow (as regards facial identity recognition) and Google Cloud Platform’s Cloud Vision API (as regards facial gesture recognition); and it is integrated into the ongoing development of an intelligent built-environment framework, viz., Design-to-Robotic-Production & -Operation (D2RP&O), conceived at Delft University of Technology (TUD). The present work builds on the inherited technological ecosystem and technical functionality of the Design-to-Robotic-Operation (D2RO) component of said framework; and its implementation is validated via two scenarios (physical and computational). In the first scenario—and building on an inherited adaptive mechanism—if building-skin components perceive a rise in interior temperature levels, natural ventilation is promoted by increasing degrees of aperture. This measure is presently confirmed or negated by a corresponding facial expression on the part of the user in response to said reaction, which serves as an intuitive override / feedback mechanism to the intelligent building-skin mechanism’s decision-making process. In the second scenario—and building on another inherited mechanism—if an accidental fall is detected and the user remains consciously or unconsciously collapsed, a series of automated emergency notifications (e.g., SMS, email, etc.) are sent to family and/or care-takers by particular mechanisms in the intelligent built-environment. The precision of this measure and its execution are presently confirmed by (a) identity detection of the victim, and (b) recognition of a reflexive facial gesture of pain and/or displeasure. The work presented in this paper promotes a considered relationship between the architecture of the built-environment and the Information and Communication Technologies (ICTs) embedded and/or deployed.","PeriodicalId":334574,"journal":{"name":"2018 IEEE Third Ecuador Technical Chapters Meeting (ETCM)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE Third Ecuador Technical Chapters Meeting (ETCM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ETCM.2018.8580319","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
This paper presents the implementation of a facial-identity and -expression recognition mechanism that confirms or negates physical and/or computational actuations in an intelligent built-environment. The mechanism is built with Google Brain's TensorFlow (for facial-identity recognition) and Google Cloud Platform's Cloud Vision API (for facial-expression recognition), and it is integrated into the ongoing development of an intelligent built-environment framework, viz., Design-to-Robotic-Production & -Operation (D2RP&O), conceived at Delft University of Technology (TUD). The present work builds on the inherited technological ecosystem and technical functionality of the Design-to-Robotic-Operation (D2RO) component of that framework, and its implementation is validated via two scenarios, one physical and one computational. In the first scenario, which builds on an inherited adaptive mechanism, if building-skin components perceive a rise in interior temperature, natural ventilation is promoted by increasing their degree of aperture. In the present work, this measure is confirmed or negated by the user's facial expression in response to the actuation, which serves as an intuitive override/feedback mechanism for the intelligent building-skin's decision-making process. In the second scenario, which builds on another inherited mechanism, if an accidental fall is detected and the user remains collapsed (whether conscious or unconscious), a series of automated emergency notifications (e.g., SMS, email, etc.) are sent to family members and/or caretakers by dedicated mechanisms in the intelligent built-environment. The precision of this measure and its execution are confirmed by (a) identity detection of the victim and (b) recognition of a reflexive facial expression of pain and/or displeasure. The work presented in this paper promotes a considered relationship between the architecture of the built-environment and the Information and Communication Technologies (ICTs) embedded and/or deployed within it.
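The paper itself does not publish code. As a rough illustration of the expression-based confirm/negate step described in the first scenario, the following sketch shows how the Cloud Vision API's face-detection response, which reports joy/sorrow/anger/surprise likelihoods, could be mapped to a confirmation signal for a pending actuation. All function names, thresholds, and file paths below are hypothetical and are not taken from the D2RP&O implementation.

```python
# Hypothetical sketch: confirm or negate a pending actuation from the user's
# facial expression using Google Cloud Vision face detection. Names, thresholds,
# and the decision policy are illustrative assumptions, not the paper's code.
from google.cloud import vision

# Likelihood values treated here as a positive signal (assumption).
POSITIVE = {vision.Likelihood.LIKELY, vision.Likelihood.VERY_LIKELY}


def expression_verdict(image_bytes: bytes) -> str:
    """Return 'confirm', 'negate', or 'no-face' for a pending actuation."""
    client = vision.ImageAnnotatorClient()
    response = client.face_detection(image=vision.Image(content=image_bytes))
    faces = response.face_annotations
    if not faces:
        return "no-face"

    face = faces[0]  # assume the first detected face belongs to the user
    displeased = (face.sorrow_likelihood in POSITIVE
                  or face.anger_likelihood in POSITIVE)
    pleased = face.joy_likelihood in POSITIVE

    if displeased:
        return "negate"   # e.g., revert the building-skin aperture change
    if pleased:
        return "confirm"  # e.g., keep the increased aperture
    return "confirm"      # neutral expression: let the adaptive behaviour stand


if __name__ == "__main__":
    # "user_reaction.jpg" is a hypothetical frame captured after the actuation.
    with open("user_reaction.jpg", "rb") as f:
        print(expression_verdict(f.read()))
```

In such a design, the verdict would simply gate the actuation already chosen by the adaptive building-skin logic; the expression check acts as a lightweight veto rather than a replacement for the underlying control loop.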