O. Hoshino, K. Mitsunaga, M. Miyamoto, K. Kuroiwa
Proceedings of the 9th International Conference on Neural Information Processing (ICONIP '02), published 2002-11-18
DOI: 10.1109/ICONIP.2002.1198156
Dynamic cell assemblies and vowel sound categorization
By simulating a neural network model, we investigated the role of the background spectral components of vowel sounds in their neuronal representation. The model consists of two networks that process vowel sounds hierarchically. The first network, which is tonotopically organized, detects the spectral peaks known as the first and second formant frequencies (F1 and F2). The second network has a two-dimensional tonotopic structure and receives convergent input from the first network; it detects the combined information of the first (F1) and second (F2) formant frequencies of vowel sounds. We trained the model on five Japanese vowels spoken by different people, modifying the synaptic connection strengths of the second network according to the Hebbian learning rule, which organized dynamic cell assemblies expressing the vowel categories. We show that the background components around the two formant peaks (F1, F2) are not necessary for creating the dynamic cell assemblies, but they are advantageous for their formation.
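The core idea of the second network — a two-dimensional tonotopic map whose synapses are strengthened by a Hebbian rule as (F1, F2) pairs are presented — can be sketched in a few lines. This is a minimal illustration, not the authors' model: the formant values, grid size, learning rate, and Gaussian activity bump are all illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): a 2D tonotopic grid
# whose connection strengths grow under a simple Hebbian rule when
# (F1, F2) pairs for the five Japanese vowels are presented repeatedly.
import numpy as np

# Rough formant frequencies (Hz) for the five Japanese vowels;
# these values are illustrative, not taken from the paper.
VOWELS = {
    "a": (800, 1300),
    "i": (300, 2300),
    "u": (350, 1300),
    "e": (500, 1900),
    "o": (500, 900),
}

GRID = 20                       # 20x20 tonotopic map
F1_RANGE = (200, 1000)          # F1 axis (Hz) mapped to grid rows
F2_RANGE = (800, 2500)          # F2 axis (Hz) mapped to grid columns
rng = np.random.default_rng(0)
W = rng.uniform(0.0, 0.1, size=(GRID, GRID))  # initial connection strengths

def to_grid(f1, f2):
    """Map an (F1, F2) pair to grid coordinates."""
    r = int((f1 - F1_RANGE[0]) / (F1_RANGE[1] - F1_RANGE[0]) * (GRID - 1))
    c = int((f2 - F2_RANGE[0]) / (F2_RANGE[1] - F2_RANGE[0]) * (GRID - 1))
    return min(max(r, 0), GRID - 1), min(max(c, 0), GRID - 1)

def activity(f1, f2, width=2.0):
    """Gaussian activity bump centred on the (F1, F2) site, standing in
    for the convergent input the second network gets from the first."""
    rows, cols = np.indices((GRID, GRID))
    r, c = to_grid(f1, f2)
    return np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * width ** 2))

def hebbian_step(W, act, lr=0.05):
    """Hebbian update: strengthen weights in proportion to local
    activity, with a (1 - W) factor so strengths stay bounded in [0, 1]."""
    return W + lr * act * (1.0 - W)

# Repeated presentations with speaker variability (jittered formants).
for _ in range(50):
    for f1, f2 in VOWELS.values():
        jf1, jf2 = rng.normal(0, 30, size=2)
        W = hebbian_step(W, activity(f1 + jf1, f2 + jf2))

# After training, each vowel's site on the map holds a strengthened
# cluster of weights: a crude analogue of a vowel-category assembly.
for v, (f1, f2) in VOWELS.items():
    r, c = to_grid(f1, f2)
    print(v, round(float(W[r, c]), 2))
```

The `(1 - W)` saturation term is one common way to keep a plain Hebbian rule from growing without bound; the paper itself does not specify this form, so it is an assumption of the sketch.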