{"title":"Temporal-Sound based User Interface for Smart Home","authors":"K. Tani, Nobuyuki Umezu","doi":"10.5121/csit.2021.112107","DOIUrl":null,"url":null,"abstract":"We propose a gesture-based interface to control a smart home. Our system replaces existing physical controls with our temporal sound commands using accelerometer. In our preliminary experiments, we recorded the sounds generated by six different gestures (knocking the desk, mouse clicking, and clapping) and converted them into spectrogram images. Classification learning was performed on these images using a CNN. Due to the difference between the microphones used, the classification results are not successful for most of the data. We then recorded acceleration values, instead of sounds, using a smart watch. 5 types of motions were performed in our experiments to execute activity classification on these acceleration data using a machine learning library named Core ML provided by Apple Inc.. These results still have much room to be improved.","PeriodicalId":190330,"journal":{"name":"Web, Internet Engineering & Signal Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Web, Internet Engineering & Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5121/csit.2021.112107","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
We propose a gesture-based interface for controlling a smart home. Our system replaces existing physical controls with temporal sound commands captured using an accelerometer. In our preliminary experiments, we recorded the sounds generated by six different gestures (such as knocking on a desk, clicking a mouse, and clapping) and converted them into spectrogram images. Classification learning was performed on these images using a CNN. Due to differences between the microphones used, classification was unsuccessful for most of the data. We then recorded acceleration values, instead of sounds, using a smartwatch. Five types of motions were performed in our experiments, and activity classification was carried out on these acceleration data using Core ML, a machine learning library provided by Apple Inc. These results still leave much room for improvement.
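
The first pipeline described in the abstract (gesture sounds converted to spectrogram images, then classified with a CNN) can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the file name, label count, and network shape are hypothetical, with librosa assumed for the spectrogram and PyTorch for the CNN.

```python
# Sketch: gesture sound -> log-mel spectrogram "image" -> small CNN classifier.
# File name, label count, and architecture are assumptions for illustration.
import numpy as np
import librosa
import torch
import torch.nn as nn

def wav_to_spectrogram(path, sr=16000, n_mels=64):
    """Load a recorded gesture sound and convert it to a log-mel spectrogram."""
    y, _ = librosa.load(path, sr=sr)
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(S, ref=np.max)

class GestureCNN(nn.Module):
    """Small CNN over spectrogram images; 3 output classes are illustrative."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),  # fixed 8x8 map regardless of clip length
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Usage: classify one recording (hypothetical file name).
spec = wav_to_spectrogram("knock_desk_01.wav")
x = torch.from_numpy(spec).float().unsqueeze(0).unsqueeze(0)  # (1, 1, mels, frames)
print(GestureCNN()(x).argmax(dim=1))
```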
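For the second pipeline (smartwatch acceleration values classified with Core ML), the abstract does not say how the model was built. One hedged Python route is Turi Create's activity classifier, which trains on windowed accelerometer sessions and exports a Core ML model; the CSV file name, column names, and prediction window below are assumptions, and the authors may instead have used Apple's Create ML tooling directly.

```python
# Sketch: train an activity classifier on smartwatch accelerometer data and
# export it as a Core ML model. Data layout is a hypothetical CSV with one
# row per accelerometer sample: session_id, x, y, z, motion (the label).
import turicreate as tc

data = tc.SFrame.read_csv("watch_accel.csv")

# Split whole recording sessions (not individual rows) into train/test.
train, test = tc.activity_classifier.util.random_split_by_session(
    data, session_id="session_id", fraction=0.8)

# prediction_window = samples aggregated per prediction (assumed value).
model = tc.activity_classifier.create(
    train, session_id="session_id", target="motion",
    features=["x", "y", "z"], prediction_window=50)

print(model.evaluate(test))
model.export_coreml("MotionClassifier.mlmodel")  # loadable from an iOS/watchOS app
```

Splitting by session rather than by row matters here: adjacent accelerometer samples are highly correlated, so a row-level split would leak training data into the test set and overstate accuracy.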