{"title":"Performance evaluation of Dynamic Neural Networks for mobile radio path loss prediction","authors":"A. Bhuvaneshwari, R. Hemalatha, T. Satyasavithri","doi":"10.1109/UPCON.2016.7894698","DOIUrl":null,"url":null,"abstract":"The prediction of path loss for the mobile radio signals is an important part in the design phase of the wireless cellular networks. In the process of modelling the path loss, the GSM 900 MHz signals are collected experimentally using Test Mobile System (TEMS) tool in the dense urban environment of Hyderabad city. In this paper, the best suited Cost 231 Hata empirical propagation model is implemented using three major dynamic neural networks namely, Focused Time Delay Neural Network (FTDNN), Distributed Time Delay Neural Network (DTDNN) which are feed forward dynamic neural networks and Layer Recurrent Neural Network (LRNN) which is a feedback dynamic neural network. The aim of these implementations is to minimise the errors between simulations and measurements. The dynamic neural networks are trained using Levenberg-Marquardt and Scaled Conjugate Gradient training algorithms. Comparisons are made by varying the number of neurons in the hidden layer and changing the training epochs. The performance is analysed in terms of correlation with the measured data, standard deviation, mean error between the targets and outputs and computation times. From the results it is inferred that, the best correlation between simulations and measurements is 0.9972, standard deviation of error (0.04) and mean error (−5.379e-5) are least for Layer Recurrent Neural Network, trained by Levenberg method, but at the cost of increased computation time. With respect to the feed forward dynamic networks, the results show that FTDNN trained by Levenberg algorithm has a better performance compared to DTDNN.","PeriodicalId":151809,"journal":{"name":"2016 IEEE Uttar Pradesh Section International Conference on Electrical, Computer and Electronics Engineering (UPCON)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE Uttar Pradesh Section International Conference on Electrical, Computer and Electronics Engineering (UPCON)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/UPCON.2016.7894698","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10
Abstract
The prediction of path loss for mobile radio signals is an important part of the design phase of wireless cellular networks. To model the path loss, GSM 900 MHz signals are collected experimentally using the Test Mobile System (TEMS) tool in the dense urban environment of Hyderabad city. In this paper, the COST 231 Hata empirical propagation model, found to be the best suited for this environment, is implemented using three major dynamic neural networks: the Focused Time Delay Neural Network (FTDNN) and the Distributed Time Delay Neural Network (DTDNN), which are feed-forward dynamic neural networks, and the Layer Recurrent Neural Network (LRNN), which is a feedback dynamic neural network. The aim of these implementations is to minimise the errors between simulations and measurements. The dynamic neural networks are trained using the Levenberg-Marquardt and Scaled Conjugate Gradient training algorithms. Comparisons are made by varying the number of neurons in the hidden layer and changing the number of training epochs. The performance is analysed in terms of correlation with the measured data, standard deviation of error, mean error between targets and outputs, and computation time. From the results it is inferred that the Layer Recurrent Neural Network trained with the Levenberg-Marquardt algorithm achieves the best correlation between simulations and measurements (0.9972) and the lowest standard deviation of error (0.04) and mean error (−5.379e-5), but at the cost of increased computation time. With respect to the feed-forward dynamic networks, the results show that the FTDNN trained with the Levenberg-Marquardt algorithm performs better than the DTDNN.
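The abstract refers to the COST 231 Hata empirical model and to time-delay networks that see a sliding window of past samples. The sketch below is a minimal illustration of both ideas in Python, not code from the paper: the standard COST 231 Hata formula with the large-city mobile-antenna correction, and a simple tapped-delay windowing helper of the kind an FTDNN input layer operates on. The default parameters (900 MHz carrier, 30 m base-station antenna, 1.5 m mobile antenna, 3 dB dense-urban correction) and the helper names cost231_hata and tapped_delay_inputs are assumptions for illustration only.

```python
import numpy as np

def cost231_hata(d_km, f_mhz=900.0, h_base=30.0, h_mobile=1.5, metro_correction_db=3.0):
    """Median path loss (dB) from the COST 231 Hata empirical model.

    d_km                : link distance(s) in km
    f_mhz               : carrier frequency in MHz (illustrative default: GSM 900)
    h_base              : effective base-station antenna height in m (assumed value)
    h_mobile            : mobile antenna height in m (assumed value)
    metro_correction_db : 3 dB for dense urban / metropolitan centres, 0 dB otherwise
    """
    d_km = np.asarray(d_km, dtype=float)
    # Mobile-antenna correction term for large cities (f >= 400 MHz)
    a_hm = 3.2 * (np.log10(11.75 * h_mobile)) ** 2 - 4.97
    return (46.3
            + 33.9 * np.log10(f_mhz)
            - 13.82 * np.log10(h_base)
            - a_hm
            + (44.9 - 6.55 * np.log10(h_base)) * np.log10(d_km)
            + metro_correction_db)

def tapped_delay_inputs(series, n_taps=3):
    """Stack n_taps consecutive samples into each input row, mimicking the
    sliding window of past values a focused time-delay network is fed."""
    series = np.asarray(series, dtype=float)
    rows = [series[i:len(series) - n_taps + i + 1] for i in range(n_taps)]
    return np.column_stack(rows)

# Example: model predictions along a hypothetical drive-test route sampled every 100 m
distances = np.arange(0.1, 2.1, 0.1)           # 0.1 km ... 2.0 km
pl_cost231 = cost231_hata(distances)           # COST 231 Hata prediction per sample
X = tapped_delay_inputs(pl_cost231, n_taps=3)  # windowed inputs for a time-delay network
print(pl_cost231[:3])
print(X.shape)                                 # (18, 3)
```

In the paper's setup such windowed inputs would be fed to FTDNN/DTDNN/LRNN structures trained with Levenberg-Marquardt or Scaled Conjugate Gradient; the measurement data, network sizes, and training details are those reported in the paper, not reproduced here.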