A backpropagation network for classifying auditory brainstem evoked potentials: input level biasing, temporal and spectral inputs and learning patterns
Dogan Alpsan, Can Ozdamar
International 1989 Joint Conference on Neural Networks (IJCNN 1989)
DOI: 10.1109/IJCNN.1989.118422
Publication date: 1989-12-01
Citations: 6
Abstract
Summary form only given, as follows. The results of an investigation conducted to examine the effects of various input data forms on learning of a neural network for classifying auditory evoked potentials are presented. The long-term objective is to use the classification in an automated device for hearing threshold testing. Feedforward multilayered neural networks trained with the backpropagation method are used. The effects of presenting the data to the neural network in various temporal and spectral modes are explored. Results indicate that temporal and spectral information complement one another and increase performance when used together. Learning curves and dot graphs as they are used in this study may reveal network learning strategies. The nature of such learning patterns found in this study is discussed.
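The abstract describes a feedforward network trained with backpropagation that receives the evoked-potential waveform in temporal and/or spectral form. The paper's exact architecture, data, and preprocessing are not given here, so the following is only a minimal sketch of that general setup: synthetic stand-in waveforms (hypothetical data, not the study's recordings), a magnitude spectrum computed with an FFT as the spectral input mode, and a small one-hidden-layer network trained by plain gradient-descent backpropagation on the concatenated temporal + spectral input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: each "evoked potential" is a short waveform;
# class 1 traces contain a response-like peak, class 0 traces are noise only.
n, t = 200, 64
y = rng.integers(0, 2, n)
X_time = rng.normal(0.0, 0.3, (n, t))
peak = np.exp(-0.5 * ((np.arange(t) - 20) / 3.0) ** 2)
X_time += y[:, None] * peak  # add the peak to positive-class traces

# Spectral input mode: magnitude spectrum of the same waveform.
X_spec = np.abs(np.fft.rfft(X_time, axis=1))

# Combined temporal + spectral input (the complementary mode the paper
# reports performing best), standardized per feature.
X = np.hstack([X_time, X_spec])
X = (X - X.mean(0)) / (X.std(0) + 1e-8)

# One-hidden-layer feedforward network trained with backpropagation.
h = 8
W1 = rng.normal(0, 0.1, (X.shape[1], h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, (h, 1));          b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(300):
    a1 = sigmoid(X @ W1 + b1)           # hidden-layer activations
    out = sigmoid(a1 @ W2 + b2)[:, 0]   # network output in (0, 1)
    err = out - y                       # cross-entropy gradient at the output
    dW2 = a1.T @ err[:, None] / n
    d1 = (err[:, None] * W2.T) * a1 * (1 - a1)  # backpropagated hidden error
    dW1 = X.T @ d1 / n
    W2 -= lr * dW2; b2 -= lr * err.mean()
    W1 -= lr * dW1; b1 -= lr * d1.mean(0)

acc = ((out > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

On this easily separable synthetic data the network should fit the training set well; the point of the sketch is only the data flow (waveform, spectrum, concatenation, backpropagation), not the study's reported performance.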