{"title":"The Relationship between Pragmatic Language Competence, Parental Child Abuse, and Teacher-Child Relationships in School Bullying: Focusing on Gender Differences in Fourth Graders in Elementary School Children","authors":"Eun Ju Lee","doi":"10.12963/csd.23978","DOIUrl":"https://doi.org/10.12963/csd.23978","url":null,"abstract":"Objectives: The relationship between child pragmatic language competence, parental child abuse, and teacher-child relationships in school bullying was examined by gender group. Methods: 340 fourth-grade boys (34.41%) and 648 girls (65.59%) who participated in the Panel Study on Korean Children (PSKC) were analyzed for child pragmatic language competence (CPLC), parent-child abuse (PCAQ), teacher-child relationship (STRS), Revised Olweus Bully/Victim Questionnaire (OBVQ-R). Results: In the case of male students, controlling their Communication function and Discourse management according to contextual variation can help maintain intimacy in teacher-child relationships, but it does not affect school bullying. In addition, physical or mental abuse by parents was not related to the damage of male students’ peer school bullying. On the other hand, in the case of female students, emotional abuse of parents was a major variable that significantly affected the degree of damage (frequency) of peer school violence, as well as nonverbal communication skills and communication functions. Conclusion: If the results of this study are applied to the development and application of school bullying prevention programs, it would be desirable to distinguish between male and female students. First, in the case of female students, it was confirmed that non-verbal communication skills were especially important in the relationship between teachers, peers, and parents. And in the case of male students, direct verbal skills such as discourse management, communication function, and contextual variation ability according to the situation seem to affect intimacy with teachers and peer school bullying.","PeriodicalId":45124,"journal":{"name":"Communication Sciences and Disorders-CSD","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135081434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Developing the Korean Version of a Semantic Feature Database for Semantic Feature Analysis Treatment","authors":"Sujin Choi, Ju Eun Kim, Jee Eun Sung","doi":"10.12963/csd.23963","DOIUrl":"https://doi.org/10.12963/csd.23963","url":null,"abstract":"Objectives: A naming deficit is a common linguistic issue for individuals with aphasia. Semantic Feature Analysis (SFA) is a widely used approach to improving the naming abilities of individuals with aphasia. The purpose of this study was to develop a Korean version of the semantic feature database used in SFA treatment. Methods: The item lists and semantic features of nouns and verbs were modified to reflect the linguistic characteristics and cultural context of Korea. To assess the semantic relatedness of the items and semantic features, the researchers conducted two validation studies. In the first study, forty young participants were recruited, and the semantic features were revised if the agreement rate was less than 80%. In the second study, sixteen speech language pathologists participated. Results: The final list included 213 nouns and 159 verbs, with 24 semantic features for each word. The researchers composed 5,112 noun semantic features and 3,816 verb semantic features. In the first validation, the matching rate of semantic features was over 80%, except for 21 noun semantic features and 167 verb semantic features. In the second validation, 21 noun semantic features and 161 verb semantic features showed a matching rate of over 80%. The researchers modified six verb semantic features with a matching rate of less than 80%. Conclusion: This study successfully developed a Korean version of the semantic feature database for noun and verb naming treatment. This database provides a valuable resource for improving the naming abilities of individuals with aphasia.","PeriodicalId":45124,"journal":{"name":"Communication Sciences and Disorders-CSD","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135081437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Survey on Communication Needs and Functions of Adults Using Personal Assistance Services","authors":"Eunhye Lee, Sangeun Shin","doi":"10.12963/csd.23991","DOIUrl":"https://doi.org/10.12963/csd.23991","url":null,"abstract":"Objectives: This study aimed to examine the communication needs and communication functions required by adults with cerebral palsy and developmental disabilities in various activity support areas, with the goal of enhancing the effective operation of personal assistance services. Methods: A total of 30 personal assistants, who had experience assisting adults with communication difficulties due to cerebral palsy or developmental disabilities, were divided into two groups: 15 adults with cerebral palsy and 15 adults with developmental disabilities. They were assessed for communication needs using a 5-point Likert scale across nine areas of activity support (Personal hygiene, Body functions, Meal support, Moving indoors, Cleaning, Laundry, Cooking, Commuting, and Going out) and six communication functions (Requests, Responds, Objective comments, Statements, Acknowledge, and Organization devices). Results: The communication needs of the cerebral palsy group were significantly higher than those of the developmental disability group, with a main effect of the activity support domain and a significant interaction effect between disability type and activity support domain. While the cerebral palsy group did not differ in communication needs across the activity support domains, the developmental disability group had higher communication needs in the going out domain than in the body functions and moving indoors domains. The cerebral palsy group had higher scores in the communication functions than the developmental disability group, but neither group showed significant differences in scores between types of communication functions. Conclusion: Communication needs and functions and the need for communication support were discussed according to disability type and activity support domain.","PeriodicalId":45124,"journal":{"name":"Communication Sciences and Disorders-CSD","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135081432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Meta-analysis of Stuttering Prevalence and Incidence","authors":"Kyungjae Lee","doi":"10.12963/csd.23974","DOIUrl":"https://doi.org/10.12963/csd.23974","url":null,"abstract":"Objectives: Stuttering prevalence and incidence can be measured in very diverse ways and such differences in methodologies may contribute to variability of the results. Moreover stuttering prevalence and incidence can be different according to factors such as gender and age. The current study tried to provide a comprehensive view on stuttering prevalence and incidence through meta-analysis of the research results. We also tried to determine whether stuttering prevalence and incidence would be different according to gender, age, and region. Methods: A total of four databases (two Korean and two English databases) were used in the current study for article search. A total of 27 articles (26 English, 1 Korean) met the inclusion/exclusion criteria and were analyzed in the current study. Results: Overall stuttering prevalence was about 1.5% and overall stuttering incidence was about 3.9%. There was a statistically significant difference in stuttering prevalence according to gender and age. Stuttering prevalence for males and preschoolers was almost twice as much as that of females and other age groups. However there was no significant difference in prevalence according to regions. Furthermore there was no significant difference in stuttering incidence according to gender. Conclusion: The meta-analysis results of the current study showed very similar, but still somewhat different stuttering prevalence and incidence compared to the commonly held belief. Such differences may be due to the typical characteristics of the studies analyzed in the current study. There may be future studies on more diverse factors that influence stuttering prevalence and incidence.","PeriodicalId":45124,"journal":{"name":"Communication Sciences and Disorders-CSD","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135082552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Relationship among Cognition, Receptive Vocabulary and Speech Production Skills in Children with Cerebral Palsy","authors":"Pil Yeon Jeong, Hyun Sub Sim","doi":"10.12963/csd.23977","DOIUrl":"https://doi.org/10.12963/csd.23977","url":null,"abstract":"Objectives: The purpose of this study was to identify the differences of cognition in children with cerebral palsy (CP) based on a Speech, Language Profile Group (SLPG), and explore the relationship among cognition, receptive vocabulary, and speech production skills. Methods: Forty children aged 4-16 years with CP, 10 with no speech motor involvement and age-appropriate language ability (NSMI-LCT), 7 with no speech motor involvement and impaired language ability (NSMI-LCI), 11 with speech motor involvement and age-appropriate language ability (SMI-LCT), and 12 with speech motor involvement and impaired language ability (SMI-LCI) participated in the study (spastic 31, dyskinetic 3, ataxic 2, mixed 4). To evaluate cognitive ability, language ability, and speech production skill, data were collected from the K-WISC-III or K-WIPPSI, receptive vocabulary test, prolonged vowel /a/, Assessment of Articulation and Phonology for Children, and carrier phrases repetition task. Results: The results showed significant differences between the NSMI-LCT and SMI-LCI groups in cognitive abilities. Moreover, cognitive abilities in children with CP were significantly related to receptive vocabulary and speech rate. Conclusion: This study revealed that cognition has an internal relationship with receptive vocabulary and speech production skills. Therefore cognition, language, and speech ability are important factors in the assessment and intervention for children with CP. This study suggest that multidimensional considerations are crucial in evaluating and intervening in children with CP.","PeriodicalId":45124,"journal":{"name":"Communication Sciences and Disorders-CSD","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135081427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Verbal Imitation in 1 to 2-Year-Old Children","authors":"Seunghee Ha, Jiyoon Kwon, Yulim Jeong","doi":"10.12963/csd.23979","DOIUrl":"https://doi.org/10.12963/csd.23979","url":null,"abstract":"Objectives: This study aimed to examine the verbal imitation of words and nonwords in children aged 1 to 2 years old. Methods: The study involved four age groups, each six months apart, ranging from 1 to 2 years of age. The children were asked to repeat 12 real words and 12 nonwords, with corresponding reference materials. The responses were classified as correct, incorrect, or no responses. The incorrect responses were further analyzed to identify patterns, including whole-word errors, segmental errors, babbling, and different vocabulary responses. Verbal imitation performances were compared in terms of age and word types. Results: The ratio of no responses in verbal imitation decreased significantly, while the ratio of correct responses increased significantly between the late 1-year and early 2-year age groups. The interaction effect between word types and age in the correct responses of verbal imitation was significant, indicating that 1-year-old and early 2-year-old children did not show significant differences between words and nonwords, whereas late 2-year-old children exhibited better imitation performances for words compared to nonwords. Children produced significantly higher ratios of incorrect responses in verbal imitation for nonwords compared to words. Babbling accounted for more than half of the incorrect responses produced by the early 1-year-olds, which dramatically decreased in older children groups and was seldom observed among 2-year-olds. The late 1-year-olds and 2-year-olds primarily demonstrated whole-word error patterns in both word and non-word imitations. Conclusion: This study confirmed that verbal imitation abilities significantly increased in accordance with children’s growing stage of phonological and lexical development.","PeriodicalId":45124,"journal":{"name":"Communication Sciences and Disorders-CSD","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135081431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development and Applied Research of Korean-Index of Phonetic Complexity-Revision (K-IPC-R)","authors":"Ran Lee, Jinsoon Han, Eun-ju Lee","doi":"10.12963/csd.23973","DOIUrl":"https://doi.org/10.12963/csd.23973","url":null,"abstract":"Objectives: In this paper, the Korean-Index of phonetic complexity-revision (K-IPC-R) was revised and supplemented to use the Korean-Index of phonetic complexity (K-IPC; Lee, Han, Shim, 2004) as a phonological evaluation index of the independent analysis method. Methods: Speech samples of 20 typically developing children of 18 to 23 months, 24 to 29 months, and 30 to 35 months of age were collected from each child during spontaneous play. The analizing and scoring criteria of K-IPC-R were devised on the basis of the phonological characteristics found in the speech samples and the results of previous studies related to speech sound development. Results: The K-IPC-R was composed of nine sub-indices: consonant by place class, consonant by manner class, consonant by phonation type, vowel type, shape of syllables, length of syllables, singleton consonants by place variegation, presence of consonant chain, and consonant chain type. The scoring criteria were determined to weight 1 point to velars, fricatives, affricates, liquids, aspirated, diphthongs, closed syllable words, and three or more syllable words. And 1 point is weighted when two or more consonants with different place of articulation are present within a word. Also, 1 point is weighted if a consonant chain appeared and another 1 point is weighted if the place of articulation of the chained consonants are different. Conclusion: This study is meaningful in proposing the K-IPC-R as a phonological evaluation index of an independent analysis method that can evaluate the phonological ability of children at the initial level of phonological development, whose phonological ability is difficult to evaluate with a relational analysis method.","PeriodicalId":45124,"journal":{"name":"Communication Sciences and Disorders-CSD","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135081573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comprehension of Indirect Speech Act according to Emoticon Expression Types in Instant Messenger Conversation of School-Aged Children","authors":"Soo Jung Jun, EunBin Ha, Eun-Ju Lee","doi":"10.12963/csd.23989","DOIUrl":"https://doi.org/10.12963/csd.23989","url":null,"abstract":"Objectives: This study aims to examine if there is a difference in indirect speech comprehension ability based on grade level and to identify which type of emoticon is most helpful in the students’ understanding of indirect speech. Methods: A total of 60 students were gathered, consisting of 20 children each from the second, fourth, and sixth grades of elementary school, who lived in Seoul and Gyeongi-do. The indirect speech comprehension task divided the emoticons into four types: facial expression clue, motion clue, object clue, and no clue. The experiment was conducted using a slideshow, where students read sentences and answered clues. They were then required to select the meaning of the underlined part in the last sentence of the article among three answer choices: the correct answer, example of surface meaning answer, and example with weak connection answer. The task consisted of a total of 32 questions, with 8 questions for each emoticon type and an additional 16 questions involving direct conversations as filler questions. Results: A statistically significant difference was observed in indirect speech comprehension scores between second and fourth graders, as well as between second and sixth graders. Statistical significance was found in the indirect speech comprehension task with regard to grade level and emoticon types. In addition, the effect of interaction between grade level and emoticon type was statistically significant. There was a decreasing trend in the overall frequency of error types as grade level increased. Conclusion: This study revealed that, as the grade level increases, individuals develop the ability to comprehend indirect speech accurately, even when the meaning is not explicit on the surface, by understanding the social context and utilizing their knowledge of the world. For future research, it is essential to examine the tendencies of children with disabilities or conversational language difficulties compared to non-disabled children.","PeriodicalId":45124,"journal":{"name":"Communication Sciences and Disorders-CSD","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135082548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantity and Quality of Parental Linguistic Input to Young Children with Cochlear Implants: A Longitudinal Study","authors":"Dayea Lee, Youngmee Lee, Youjin Lee","doi":"10.12963/csd.23981","DOIUrl":"https://doi.org/10.12963/csd.23981","url":null,"abstract":"Objectives: This study aims to investigate longitudinal changes in the quantity and quality of the parental linguistic input of children with cochlear implants (CIs) and children with typical hearing (TH) and find the significant parental linguistic variables that might positively be related to children’s language development in the CI group. Methods: Participants were 33 parent-child dyads, including 16 children with CIs and 17 with TH at the initial visit. They participated in a 20-minute free-play task at three-time points (initial, 6-month, and 12-month visits). Results: There were no significant differences between the CI and TH groups in the total number of utterances (NTU) and the total number of words (NTW). However, the two groups significantly differed in the different number of words (NDW) and higher-level facilitative language techniques (FLTs). In addition, the CI group produced more utterances than TH parents at the initial visit. However, there were no significant differences in the NTU between CI and TH groups at 6- and 12-month visits. Furthermore, both groups used fewer NDW and mental state words at the initial visit than at the 12-month visit. Nevertheless, CI and TH groups used fewer higher-level FLTs at the initial visit than at the 6- and 12-month visits. Lastly, qualitative parental linguistic input is a critical factor, continuously affecting language development in children with CIs. Conclusion: These findings suggest that early intervention programs should be designed to enable parents to use more qualitative linguistic features in daily routines to build their child’s language skills.","PeriodicalId":45124,"journal":{"name":"Communication Sciences and Disorders-CSD","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135082549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of Audio-visual and Contextual Information on Auditory Word Recognition Performance in Normal Aging","authors":"Yoo-Jung Cho, Ji-Wan Ha","doi":"10.12963/csd.23995","DOIUrl":"https://doi.org/10.12963/csd.23995","url":null,"abstract":"Objectives: The present study compared performance on an auditory word recognition task with different visual, auditory, and contextual information across age groups to determine how these variables affect auditory comprehension in normal aging. Methods: We conducted an auditory word recognition task with 20 normal adults from each of the following age groups: young, middle-aged, old, and elderly, and compared their correct response scores and analyzed their error responses. We also conducted a correlation analysis between the score of auditory comprehension in Reading Diagnostic Assessment, RDA and the performance of the auditory word recognition task. Results: There was a significant between-group main effect, as well as within-group main effects of auditory and contextual information, and interaction effects between auditory information and group, contextual information and group, and auditory information and contextual information. The old and elderly groups made more errors in the following order of error types: unrelated errors, formal errors, mixed errors, and semantic errors; and there were no significant differences in error patterns between the two groups. There was a significant positive correlation between the RDA and the auditory word recognition task. Conclusion: This study confirms that auditory word recognition performance declines with aging in normal adults, and that difficulties may be more pronounced under certain conditions, such as noise and low contextual sentences.","PeriodicalId":45124,"journal":{"name":"Communication Sciences and Disorders-CSD","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135081439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}