{"title":"语音和音乐欣赏的计算机模型","authors":"P. Denes, M. Mathews","doi":"10.1145/1476589.1476633","DOIUrl":null,"url":null,"abstract":"Computers have been used extensively in speech research for over 10 years now; they have been applied to the synthesis of musical sounds for a slightly shorter period. The results have produced models of sound production and perception which are intimately related to the synthesis rules programmed in the computer and indeed the program is often the best available model of the production or perception process.","PeriodicalId":294588,"journal":{"name":"Proceedings of the December 9-11, 1968, fall joint computer conference, part I","volume":"32 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1968-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Computer models for speech and music appreciation\",\"authors\":\"P. Denes, M. Mathews\",\"doi\":\"10.1145/1476589.1476633\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Computers have been used extensively in speech research for over 10 years now; they have been applied to the synthesis of musical sounds for a slightly shorter period. The results have produced models of sound production and perception which are intimately related to the synthesis rules programmed in the computer and indeed the program is often the best available model of the production or perception process.\",\"PeriodicalId\":294588,\"journal\":{\"name\":\"Proceedings of the December 9-11, 1968, fall joint computer conference, part I\",\"volume\":\"32 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1968-12-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the December 9-11, 1968, fall joint computer conference, part I\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/1476589.1476633\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the December 9-11, 1968, fall joint computer conference, part I","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/1476589.1476633","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Computers have been used extensively in speech research for over 10 years now; they have been applied to the synthesis of musical sounds for a slightly shorter period. This work has produced models of sound production and perception that are intimately related to the synthesis rules programmed into the computer, and indeed the program itself is often the best available model of the production or perception process.
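The abstract does not spell out any particular synthesis rules, but the kind of programmed sound synthesis it refers to can be illustrated with a minimal sketch: a single sinusoidal "instrument" shaped by an amplitude envelope and driven by a short score of (frequency, duration) pairs, written out as a WAV file. All names, parameters, and the envelope shape below are assumptions made for illustration; this is not the authors' system.

```python
# Illustrative sketch of computer tone synthesis (assumed parameters,
# not the synthesis rules described by Denes and Mathews).
import math
import struct
import wave

SAMPLE_RATE = 8000  # samples per second (assumed value)

def tone(freq_hz, dur_s, amp=0.5):
    """Generate one note: a sine wave shaped by a simple attack/decay envelope."""
    n = int(SAMPLE_RATE * dur_s)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        # Envelope: linear 10% attack, 10% decay, flat sustain in between.
        env = min(1.0, i / (0.1 * n), (n - i) / (0.1 * n))
        samples.append(amp * env * math.sin(2 * math.pi * freq_hz * t))
    return samples

def write_wav(path, samples):
    """Write mono 16-bit PCM samples to a WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        w.writeframes(frames)

if __name__ == "__main__":
    # A short ascending phrase: the "score" is just (frequency, duration) pairs.
    score = [(262, 0.4), (330, 0.4), (392, 0.4), (523, 0.8)]
    out = []
    for f, d in score:
        out.extend(tone(f, d))
    write_wav("phrase.wav", out)
```

In this toy example the "model" of sound production is simply the program that maps score parameters to samples, which echoes the abstract's point that the synthesis program itself often serves as the working model of the production process.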