Guided Learning of Pronunciation by Visualizing Tongue Articulation in Ultrasound Image Sequences

M. Mozaffari, Shenyong Guan, Shuangyue Wen, Nan Wang, Won-Sook Lee

2018 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), June 2018. DOI: 10.1109/CIVEMSA.2018.8440000
Abstract
Ultrasound is one of the primary technologies used widely for clinical diagnosis due to its affordability, non-invasiveness, portability, and fast acquisition. Recently, it has begun to be used as a visual feedback tool for tongue articulation, thanks to its ability to visualize and capture video of structures inside the mouth in real time. When an ultrasound transducer is placed along the midline under the chin, it shows tongue motion in the sagittal view during speech. Because the structures in ultrasound images are still quite difficult to interpret, we propose a guided learning system for pronunciation that visualizes tongue articulation in ultrasound image sequences. A video image registration technique is employed to project the sagittal section of the tongue back onto the corresponding position on the subject's head. The proposed system targets speech therapy and foreign-language pronunciation lessons. Its two main technical components are (i) ultrasound tongue image segmentation and tracking, and (ii) registration of ultrasound image sequences onto video of the subject during speech. Our experiments with Chinese learners of English show that the proposed system provides beneficial improvement in English pronunciation.
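The abstract names two pipeline stages: extracting the tongue contour from the ultrasound sequence, and registering that contour onto a video of the speaker's head. Below is a minimal sketch of how such an overlay could be wired together with OpenCV. It is not the paper's method: the Otsu-threshold contour extraction and the three-point affine mapping are simplistic stand-ins for the authors' segmentation/tracking and registration algorithms, and all file names, anchor points, and helper names are hypothetical.

```python
# Illustrative sketch only: crude stand-ins for the two stages described
# in the abstract (tongue segmentation/tracking and ultrasound-to-face
# registration), not the paper's actual algorithms.
import cv2
import numpy as np

def extract_tongue_contour(us_frame: np.ndarray) -> np.ndarray:
    """Return the largest bright contour in a grayscale ultrasound frame.

    In mid-sagittal B-mode images the tongue surface appears as a bright
    arc; taking the largest high-intensity contour is a crude proxy for
    a proper tongue tracker.
    """
    blurred = cv2.GaussianBlur(us_frame, (5, 5), 0)
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)  # shape (N, 1, 2)

def register_contour_to_face(contour: np.ndarray,
                             us_anchors: np.ndarray,
                             face_anchors: np.ndarray) -> np.ndarray:
    """Map ultrasound-space contour points into face-video coordinates.

    us_anchors / face_anchors: three corresponding 2D landmarks (e.g.
    chin tip plus two probe-edge points) picked in each view. These are
    hypothetical correspondences chosen here for illustration.
    """
    A = cv2.getAffineTransform(us_anchors.astype(np.float32),
                               face_anchors.astype(np.float32))
    pts = contour.reshape(-1, 1, 2).astype(np.float32)
    return cv2.transform(pts, A).astype(np.int32)

# Usage: overlay the warped tongue contour on one face-video frame.
us = cv2.imread("us_frame.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files
face = cv2.imread("face_frame.png", cv2.IMREAD_COLOR)
contour = extract_tongue_contour(us)
us_pts = np.array([[64, 200], [256, 40], [448, 200]])    # example anchors
face_pts = np.array([[300, 520], [360, 380], [420, 520]])
warped = register_contour_to_face(contour, us_pts, face_pts)
cv2.polylines(face, [warped], isClosed=False,
              color=(0, 255, 255), thickness=2)
cv2.imwrite("overlay.png", face)
```

Repeating the same affine warp per frame pair yields the real-time overlay effect the system is built around; in practice the registration would be driven by tracked head and probe landmarks rather than fixed anchor points.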