Generic neural architecture for LVQ artificial neural networks

G. Marwa, B. Mohamed, C. Najoua, Bedoui Mohamed Hédi

2017 International Conference on Engineering & MIS (ICEMIS), May 2017. DOI: 10.1109/ICEMIS.2017.8272996
This paper reports an approach for implementing a learning vector quantization (LVQ) neural network with several generic architectures and reduced latency. The approach is based on a hardware/software (HW/SW) co-design for on-chip, on-line learning with generic architectures. It also relies on a variable topology (the number of hidden-layer neurons and the number of inputs are scalable), which makes it generic and reusable across many applications without hardware modification. In this contribution, the parallelism rate is integrated into the data path responsible for computing the minimum distance, the weights, and the labels, in order to reduce application latency. The approach therefore allows a trade-off between latency, power, and parallelism. These generic architectures also guide designers toward the architecture that best suits their needs. The resulting designs can serve a range of applications, including vigilance-state detection, image processing, and EEG and ECG signal analysis.
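The data path described above centers on the core LVQ operations: computing the distance from an input to each prototype, selecting the minimum, and updating weights and labels. As a point of reference only (the paper's contribution is a parallel hardware realization, not this software form), a minimal software sketch of one LVQ1 training step might look like the following; the function name and learning rate are illustrative, not taken from the paper:

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, lr=0.1):
    """One LVQ1 update: find the nearest prototype (minimum Euclidean
    distance) and move it toward the sample if its label matches,
    away from it otherwise. Modifies `prototypes` in place."""
    dists = np.linalg.norm(prototypes - x, axis=1)  # distance to every prototype
    w = int(np.argmin(dists))                       # winner = minimum distance
    direction = 1.0 if proto_labels[w] == y else -1.0
    prototypes[w] += direction * lr * (x - prototypes[w])
    return w

# Toy example: two prototypes, one per class, 2-D inputs.
protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
winner = lvq1_step(protos, labels, np.array([0.9, 0.8]), 1)
```

In a hardware data path, the distance computations for all prototypes (the `dists` line above) are the part that can run in parallel, which is where the paper's tunable parallelism rate trades latency against power and area.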