2020 31st Irish Signals and Systems Conference (ISSC): Latest Publications

Genetic Algorithmic Parameter Optimisation of a Recurrent Spiking Neural Network Model
2020 31st Irish Signals and Systems Conference (ISSC) | Pub Date: 2020-03-30 | DOI: 10.1109/ISSC49989.2020.9180185
Ifeatu Ezenwe, Alok Joshi, KongFatt Wong-Lin
Abstract: Neural networks are complex algorithms that loosely model the behaviour of the human brain. They play a significant role in computational neuroscience and artificial intelligence. The next generation of neural network models is based on the spike-timing activity of neurons: spiking neural networks (SNNs). However, model parameters in SNNs are difficult to search and optimise. Previous studies applying genetic algorithm (GA) optimisation to SNNs focused mainly on simple, feedforward, or oscillatory networks; little work has been done on optimising cortex-like recurrent SNNs. In this work, we investigated the use of GAs to search for optimal parameters in recurrent SNNs so as to reach targeted neuronal population firing rates, e.g. as in experimental observations. We considered a cortical-column-based SNN comprising 1000 Izhikevich spiking neurons, chosen for computational efficiency and biological realism. The model parameters explored were the neuronal bias input currents. First, we found, for this particular SNN, the optimal parameter values for targeted population-averaged firing activities, with the algorithm converging by ~100 generations. We then showed that the optimal GA population size was ~16-20, while the crossover rate that returned the best fitness value was ~0.95. Overall, we have successfully demonstrated the feasibility of implementing a GA to optimise model parameters in a recurrent cortical SNN.
Citations: 1
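The optimisation loop this abstract describes (a GA searching per-neuron bias currents so that a recurrent Izhikevich network reaches a target population firing rate) can be sketched as follows. This is a minimal illustration, not the authors' code: the network size, connectivity, target rate, and mutation settings are assumptions, while the population size, crossover rate, and generation budget follow the values reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100            # neurons (the paper uses 1000; smaller here for speed)
T = 1000           # simulated milliseconds per fitness evaluation
TARGET_RATE = 5.0  # target population-averaged firing rate in Hz (illustrative)

# Sparse random recurrent weights; the paper uses a cortical-column topology.
W = rng.normal(0.0, 0.5, (N, N)) * (rng.random((N, N)) < 0.1)

def simulate(bias):
    """Run the Izhikevich network with per-neuron bias currents; return mean rate (Hz)."""
    a, b, c, d = 0.02, 0.2, -65.0, 8.0       # regular-spiking Izhikevich parameters
    v = np.full(N, -65.0)
    u = b * v
    spikes = 0
    for _ in range(T):
        fired = v >= 30.0
        spikes += fired.sum()
        v[fired] = c
        u[fired] += d
        I = bias + W @ fired.astype(float)   # bias plus recurrent synaptic input
        v += 0.5 * (0.04 * v**2 + 5*v + 140 - u + I)   # two 0.5 ms half-steps
        v += 0.5 * (0.04 * v**2 + 5*v + 140 - u + I)   # for numerical stability
        u += a * (b * v - u)
    return spikes / N / (T / 1000.0)         # population-averaged rate in Hz

def fitness(bias):
    return -abs(simulate(bias) - TARGET_RATE)    # closer to target = higher fitness

# Population size, crossover rate, and generations follow the abstract's reported
# optima; the mutation rate is an assumption. Reduce GENS for a quick run.
POP, GENS, CROSSOVER, MUT = 18, 100, 0.95, 0.05

pop = rng.uniform(0.0, 10.0, (POP, N))       # candidate bias-current vectors
for gen in range(GENS):
    fit = np.array([fitness(ind) for ind in pop])
    pop = pop[np.argsort(fit)[::-1]]         # sort best-first
    children = [pop[0].copy()]               # elitism: keep the best individual
    while len(children) < POP:
        p1, p2 = pop[rng.integers(0, POP // 2, 2)]   # parents from the top half
        child = np.where(rng.random(N) < 0.5, p1, p2) if rng.random() < CROSSOVER else p1.copy()
        mask = rng.random(N) < MUT           # per-gene Gaussian mutation
        child[mask] += rng.normal(0.0, 0.5, mask.sum())
        children.append(child)
    pop = np.array(children)

# pop[0] holds the elite carried over from the last sorted generation.
print("best rate:", simulate(pop[0]), "Hz (target", TARGET_RATE, "Hz)")
```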
Re-Training StyleGAN - A First Step Towards Building Large, Scalable Synthetic Facial Datasets
2020 31st Irish Signals and Systems Conference (ISSC) | Pub Date: 2020-03-24 | DOI: 10.1109/ISSC49989.2020.9180189
Viktor Varkarakis, S. Bazrafkan, P. Corcoran
Abstract: StyleGAN is a state-of-the-art generative adversarial network architecture that generates random 2D high-quality synthetic facial data samples. In this paper we recap the StyleGAN architecture and training methodology, and present our experiences of retraining it on a number of alternative public datasets. Practical issues and challenges arising from the retraining process are discussed. Tests and validation results are presented, and a comparative analysis of several different re-trained StyleGAN weightings is provided. The role of this tool in building large, scalable datasets of synthetic facial data is also discussed.
Citations: 3
High-Accuracy Facial Depth Models derived from 3D Synthetic Data
2020 31st Irish Signals and Systems Conference (ISSC) | Pub Date: 2020-03-13 | DOI: 10.1109/ISSC49989.2020.9180166
Faisal Khan, Shubhajit Basak, Hossein Javidnia, M. Schukat, P. Corcoran
Abstract: In this paper, we explore how synthetically generated 3D face models can be used to construct high-accuracy ground truth for depth, allowing us to train convolutional neural networks (CNNs) for facial depth estimation. These models provide sophisticated control over image variations including pose, illumination, facial expression, and camera position. 2D training samples, typically in RGB format, can be rendered from these models together with depth information. Using synthetic facial animations, dynamic facial expression or facial action data can be rendered for a sequence of image frames together with ground-truth depth and additional metadata such as head pose, light direction, etc. The synthetic data is used to train a CNN-based facial depth estimation system, which is validated on both synthetic and real images. Potential fields of application include 3D reconstruction, driver monitoring systems, robotic vision systems, and advanced scene understanding.
Citations: 4
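The training pipeline this abstract outlines (render synthetic RGB frames with ground-truth depth, then train a CNN to regress per-pixel depth from RGB) can be sketched as follows. The tiny encoder-decoder and the random tensors standing in for rendered data are illustrative assumptions, not the paper's architecture or dataset.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class DepthNet(nn.Module):
    """Tiny encoder-decoder: RGB image in, per-pixel depth map out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Stand-ins for synthetically rendered samples: RGB frames plus the matching
# ground-truth depth maps that a 3D face model can render exactly.
rgb = torch.rand(64, 3, 64, 64)
depth = torch.rand(64, 1, 64, 64)
loader = DataLoader(TensorDataset(rgb, depth), batch_size=8, shuffle=True)

model = DepthNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()   # L1 is a common choice for depth regression

for epoch in range(3):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)   # compare predicted vs. ground-truth depth
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: L1 loss {loss.item():.4f}")
```

With synthetic rendering, the ground-truth depth is exact by construction, which is the advantage the paper exploits; validation on real images then checks how well the learned model transfers.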
Chairs Address
2020 31st Irish Signals and Systems Conference (ISSC) | Pub Date: 2013-10-01 | DOI: 10.1109/3dtv.2013.6676630
J. Watson
Abstract: I am delighted that the first visit of 3DTV-CON to the United Kingdom is to the city of Aberdeen, Scotland, the heart of Europe's oil and gas industry. 3D technologies are beginning to play a crucial role in the energy industry, both in its traditional oil and gas activities and in the blossoming renewable energy sector.
Citations: 0