{"title":"论文摘要","authors":"Matthew Lampert","doi":"10.2307/jstudpaullett.2.2.0171","DOIUrl":null,"url":null,"abstract":"My research interests are in natural language processing and machine learning. I am interested in developing techniques that would make computers learn to robustly understand and process natural languages. Building computing systems that can process and converse in natural languages has been a long-standing goal of artificial intelligence and researchers have approached this goal from two opposing directions. One of the directions can be described as “broad and shallow” in which researchers have focused on tasks like information extraction, word sense disambiguation, semantic role labeling etc., that involve analyzing open domain natural language text but the analysis done is typically shallow which is suitable just enough for inferring some simple properties about the text. The second direction can be described as “narrow and deep” in which researchers have focused on deeper analysis of natural language text but restricted to specific domains. The topic of my dissertation research, learning for semantic parsing, is an example task from this direction. It is the task of learning to map domain-specific natural language sentences into their complete, formal meaning representations which a computer program can execute to perform some domain-related task, like answering database queries or controlling a robot.","PeriodicalId":29841,"journal":{"name":"Journal for the Study of Paul and His Letters","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Dissertation Summary\",\"authors\":\"Matthew Lampert\",\"doi\":\"10.2307/jstudpaullett.2.2.0171\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"My research interests are in natural language processing and machine learning. 
I am interested in developing techniques that would make computers learn to robustly understand and process natural languages. Building computing systems that can process and converse in natural languages has been a long-standing goal of artificial intelligence and researchers have approached this goal from two opposing directions. One of the directions can be described as “broad and shallow” in which researchers have focused on tasks like information extraction, word sense disambiguation, semantic role labeling etc., that involve analyzing open domain natural language text but the analysis done is typically shallow which is suitable just enough for inferring some simple properties about the text. The second direction can be described as “narrow and deep” in which researchers have focused on deeper analysis of natural language text but restricted to specific domains. The topic of my dissertation research, learning for semantic parsing, is an example task from this direction. It is the task of learning to map domain-specific natural language sentences into their complete, formal meaning representations which a computer program can execute to perform some domain-related task, like answering database queries or controlling a robot.\",\"PeriodicalId\":29841,\"journal\":{\"name\":\"Journal for the Study of Paul and His Letters\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal for the Study of Paul and His 
Letters\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2307/jstudpaullett.2.2.0171\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal for the Study of Paul and His Letters","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2307/jstudpaullett.2.2.0171","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
My research interests are in natural language processing and machine learning. I am interested in developing techniques that enable computers to learn to robustly understand and process natural languages. Building computing systems that can process and converse in natural languages has been a long-standing goal of artificial intelligence, and researchers have approached this goal from two opposing directions.

One direction can be described as "broad and shallow": researchers focus on tasks like information extraction, word sense disambiguation, and semantic role labeling, which involve analyzing open-domain natural language text, but the analysis is typically shallow, sufficient only for inferring simple properties of the text. The second direction can be described as "narrow and deep": researchers perform deeper analysis of natural language text but restrict themselves to specific domains.

The topic of my dissertation research, learning for semantic parsing, is an example task from this second direction. It is the task of learning to map domain-specific natural language sentences into complete, formal meaning representations that a computer program can execute to perform some domain-related task, such as answering database queries or controlling a robot.
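To make the task concrete, the sketch below maps a sentence to a formal meaning representation and then executes it against a database. This is only a hand-written illustration of the input/output behavior of a semantic parser: the regex pattern, the query tuples, and the tiny capitals database are all hypothetical stand-ins, whereas the dissertation's point is to *learn* this mapping from data rather than write it by hand.

```python
import re

# Hypothetical toy database: state -> capital.
CAPITALS = {"texas": "austin", "ohio": "columbus"}

def parse(sentence):
    """Map a domain-specific sentence to a formal meaning representation.

    A real semantic parser learns this mapping; a single hand-written
    pattern stands in for it here.
    """
    m = re.match(r"what is the capital of (\w+)\??", sentence.lower())
    if m:
        return ("answer", ("capital", m.group(1)))
    return None

def execute(query):
    """Execute the meaning representation against the database."""
    op, (pred, arg) = query
    if op == "answer" and pred == "capital":
        return CAPITALS.get(arg)
    return None

mr = parse("What is the capital of Texas?")
print(mr)           # ('answer', ('capital', 'texas'))
print(execute(mr))  # austin
```

The key property illustrated is that the output of parsing is not an annotation of the sentence but a complete, executable representation of its meaning.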