Latest Publications in ERN: Neural Networks & Related Topics (Topic)
Detecting Edgeworth Cycles
ERN: Neural Networks & Related Topics (Topic) Pub Date : 2021-10-01 DOI: 10.2139/ssrn.3934367
Timothy Holt, Mitsuru Igami, S. Scheidegger
Abstract: We propose algorithms to detect "Edgeworth cycles," asymmetric price movements that have caused antitrust concerns in many countries. We formalize four existing methods and propose six new methods based on spectral analysis and machine learning. We evaluate their accuracy on station-level gasoline-price data from Western Australia, New South Wales, and Germany. Most methods achieve high accuracy in the first two regions, but only a few can detect the nuanced cycles in the third. The results suggest that whether researchers find a positive or negative statistical relationship between cycles and markups, and hence what they conclude for competition policy, depends crucially on the choice of method.
Citations: 2
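One simple family of detection heuristics compares the distributions of daily price rises and falls. The sketch below is illustrative only, not one of the paper's formalized methods: the mean-minus-median statistic, the threshold, and the synthetic sawtooth data are all invented for demonstration. It flags the many-small-declines, few-large-jumps shape characteristic of an Edgeworth cycle.

```python
import numpy as np

def asymmetry_stat(prices):
    """Mean minus median of daily price changes.

    Edgeworth cycling (many small decreases, occasional large
    increases) pushes the mean of the changes above their median,
    so a clearly positive value hints at cycling.
    """
    d = np.diff(np.asarray(prices, dtype=float))
    return d.mean() - np.median(d)

def looks_cyclical(prices, threshold=0.5):
    # The threshold is illustrative, not calibrated to any dataset.
    return asymmetry_stat(prices) > threshold

# Synthetic sawtooth: price jumps up, then decays by 1 unit per day.
cycle = np.tile(np.arange(10, 0, -1, dtype=float), 5)
flat = np.full(50, 5.0)
```

On the sawtooth, most changes are -1 and a few are +9, so the median sits well below the mean and the statistic is positive; a flat series scores zero.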
Forecasting High-Dimensional Covariance Matrices of Asset Returns with Hybrid GARCH-LSTMs
ERN: Neural Networks & Related Topics (Topic) Pub Date : 2021-08-25 DOI: 10.2139/ssrn.3912782
L. Boulet
Abstract: Several studies have examined the ability of hybrid models that combine univariate Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models with neural networks to deliver better volatility predictions than purely econometric models. Despite very promising results, the generalization of such models to the multivariate case has yet to be studied. Moreover, very few papers have examined the ability of neural networks to predict the covariance matrix of asset returns, and all use a rather small number of assets, thus not addressing the curse of dimensionality. This paper investigates the ability of hybrid models mixing GARCH processes and neural networks to forecast covariance matrices of asset returns. To do so, we propose a new model, based on multivariate GARCHs, that decomposes the prediction into volatilities and correlations. The volatilities are forecast with hybrid neural networks, while the correlations follow a traditional econometric process. After implementing the models in a minimum-variance portfolio framework, our results are as follows. First, adding GARCH parameters as inputs benefits the proposed model. Second, one-hot encoding that helps the neural network differentiate between stocks improves performance. Third, the new model is very promising: it not only outperforms the equally weighted portfolio, but also beats by a significant margin its econometric counterpart that uses univariate GARCHs to predict the volatilities.
Citations: 0
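The paper's hybrid architecture is not reproduced here, but the univariate GARCH(1,1) recursion whose parameters and outputs would feed such a network is short enough to sketch. The parameter values below are illustrative, not fitted.

```python
import numpy as np

def garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.9):
    """Conditional variance path from the GARCH(1,1) recursion:
        sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}.
    Initialized at the sample variance; parameters are illustrative."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty(len(r) + 1)
    sigma2[0] = r.var() if len(r) > 1 else omega / (1.0 - alpha - beta)
    for t in range(len(r)):
        sigma2[t + 1] = omega + alpha * r[t] ** 2 + beta * sigma2[t]
    return sigma2

# A large return shock raises next-period conditional variance.
s = garch11_variance(np.array([0.0, 0.1]))
```

In a hybrid GARCH-LSTM setup, sequences like `s` (and the fitted `omega`, `alpha`, `beta`) become inputs to the neural network rather than the final forecast.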
Improve the Prediction of Wind Speed using Hyperbolic Tangent Function with Artificial Neural Network
ERN: Neural Networks & Related Topics (Topic) Pub Date : 2021-07-11 DOI: 10.2139/ssrn.3884548
Tabassum Jahan, Ameenuddin Ahmad
Abstract: India is struggling to meet the electric power demands of a fast-expanding economy, and restructuring of the power industry has only added challenges for power system engineers. The two largest challenges facing the Indian power sector are fuel supply uncertainty and the deteriorating finances of distribution companies (discoms). Given the dominance of coal in India's fuel mix, coal shortages can severely impede investment in the generation segment. India aimed to attain 175 GW of renewable energy by 2020, consisting of 100 GW from solar energy, 60 GW from wind power, 10 GW from bio-power, and 5 GW from small hydropower plants; investors have pledged more than 270 GW, significantly above these ambitious targets. Wind energy generation currently faces problems such as wind speed prediction. This paper aims to improve wind speed prediction by using the hyperbolic tangent activation function in an artificial neural network, which performs brain-like computations on real-time data; the function's output lies between -1 and 1 and is approximately linear near zero.
Citations: 0
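The hyperbolic tangent activation squashes pre-activations into (-1, 1) and behaves almost linearly near zero. A minimal one-hidden-layer forward pass shows where it sits in the computation; the shapes (three lagged wind-speed inputs, eight hidden units) and random weights are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def tanh_mlp_forward(x, w1, b1, w2, b2):
    """One hidden layer with tanh activation, then a linear readout.
    tanh bounds each hidden unit's output to (-1, 1)."""
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

# Illustrative shapes: 3 lagged wind-speed features -> 8 hidden -> 1 output.
w1, b1 = 0.5 * rng.normal(size=(3, 8)), np.zeros(8)
w2, b2 = 0.5 * rng.normal(size=(8, 1)), np.zeros(1)
y = tanh_mlp_forward(rng.normal(size=(5, 3)), w1, b1, w2, b2)
```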
Using Deep Q-Networks to Train an Agent to Navigate the Unity ML-Agents Banana Environment
ERN: Neural Networks & Related Topics (Topic) Pub Date : 2021-07-07 DOI: 10.2139/ssrn.3881878
Oluwaseyi (Tony) Awoga CPA, PRM
Abstract: Deep Q-learning combines the Q-learning process with a function approximation technique such as a neural network. According to Zai and Brown (2020), the main idea behind Q-learning is to use an algorithm to predict the value of a state-action pair, compare the result of this prediction to the accumulated rewards observed at some later time, and then update the algorithm's parameters so that it makes better predictions next time. While this technique has advantages that make it very useful for solving reinforcement learning problems, it falls short on complex problems with large state spaces. Google DeepMind supported this conclusion in its seminal paper "Human-level control through deep reinforcement learning", in which Mnih et al. (2015) asserted that "to use reinforcement learning successfully in situations approaching real-world complexity, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experiences to new situations". To achieve this objective, they stated further, "we developed a novel agent, a deep Q-network (DQN), which is able to combine reinforcement learning with a class of artificial neural network known as deep neural networks". While Q-learning as a tool for solving reinforcement learning problems enjoyed some remarkable successes in the past, it was not until the introduction of DQN that practitioners were able to use it to solve large-scale problems. Prior to that, Mnih et al. (2015) argued, reinforcement learning was limited to "applications and domains in which useful features could be handcrafted, or to domains with fully observed, low-dimensional state spaces".
Citations: 1
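The tabular update that DQN generalizes fits in a few lines; in a DQN, a neural network replaces the table and is fit to the same bootstrapped target. All numbers below (state/action counts, learning rate, reward) are illustrative, not from the paper's Banana-environment setup.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99, done=False):
    """One tabular Q-learning step:
        Q(s,a) += alpha * (target - Q(s,a)),
    where target = r + gamma * max_a' Q(s',a'), or just r at a
    terminal state. DQN fits a network to this same target."""
    target = r if done else r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

Q = np.zeros((3, 2))                      # 3 toy states, 2 actions
Q = q_update(Q, s=0, a=1, r=1.0, s_next=2)
```

With an all-zero table, the target is just the reward 1.0, so the updated entry moves a fraction `alpha` of the way toward it.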
What Can Analysts Learn from Artificial Intelligence about Fundamental Analysis?
ERN: Neural Networks & Related Topics (Topic) Pub Date : 2020-11-01 DOI: 10.2139/ssrn.3745078
Oliver Binz, K. Schipper, Kevin Standridge
Abstract: We apply a machine learning algorithm to estimate Nissim and Penman's (2001) structural framework, which decomposes profitability into increasingly disaggregated profitability drivers. Our approach explicitly accommodates the non-linearities that precluded Nissim and Penman from estimating their framework. We find that out-of-sample profitability forecasts from our approach are generally more accurate than those of benchmark models. We use the profitability forecasts to estimate intrinsic values using the financial statement analysis design choices in Nissim and Penman's framework and find that hypothetical investing strategies based on these value estimates generate risk-adjusted returns. Design choices that improve performance include increasingly granular disaggregation, a focus on core items, and long-horizon forecasts of operating performance. Perhaps surprisingly, we find only mixed evidence of benefits from incorporating historical financial statement information from beyond the current period.
Citations: 7
Review of EEG Feature Selection by Neural Networks
ERN: Neural Networks & Related Topics (Topic) Pub Date : 2020-08-17 DOI: 10.2139/ssrn.3675950
I. Rakhmatulin
Abstract: Electroencephalography (EEG) is based on registering the brain's electrical impulses with special sensors or electrodes, and is used to diagnose and treat various diseases. In the past few years, owing to the development of neural network technologies, researchers' interest in EEG has noticeably increased. Training neural network models requires data with minimal noise distortion, so EEG processing pipelines apply signal filtering and various feature extraction methods to eliminate noise (artifacts). This manuscript provides a detailed analysis of modern methods for extracting EEG features used in studies of the last decade. The information presented will help researchers understand how to process EEG signals more carefully before using neural networks to classify them. Because there is no standard method for extracting EEG features, the most important contribution of this manuscript is a detailed description of the steps necessary for recognizing artifacts, which will allow researchers to maximize the potential of neural networks in EEG classification tasks.
Citations: 7
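A typical first preprocessing step in pipelines like those the review surveys is isolating a frequency band before computing features. The sketch below uses a crude FFT mask for clarity; real EEG pipelines usually use proper IIR/FIR filters, and the sampling rate, band, and synthetic signals here are invented.

```python
import numpy as np

def bandpass_fft(signal, low, high, fs):
    """Crude band-pass: zero out FFT bins outside [low, high] Hz.
    Illustrative only; production code would use a designed filter."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 256                                   # Hz, illustrative
t = np.arange(0, 2, 1 / fs)
alpha = np.sin(2 * np.pi * 10 * t)         # 10 Hz "alpha" rhythm
mains = 2 * np.sin(2 * np.pi * 50 * t)     # 50 Hz mains artifact
cleaned = bandpass_fft(alpha + mains, 8, 12, fs)
```

Keeping only the 8-12 Hz band strips the 50 Hz artifact and recovers the alpha component, whose band power could then serve as a classification feature.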
A Data-Driven Market Simulator for Small Data Environments
ERN: Neural Networks & Related Topics (Topic) Pub Date : 2020-06-21 DOI: 10.2139/ssrn.3632431
Hans Bühler, Blanka Horvath, Terry Lyons, Imanol Perez Arribas, Ben Wood
Abstract: Neural-network-based, data-driven market simulation unveils a new and flexible way of modelling financial time series without imposing assumptions on the underlying stochastic dynamics. Though in this sense generative market simulation is model-free, the concrete modelling choices are nevertheless decisive for the features of the simulated paths. We give a brief overview of currently used generative modelling approaches and performance evaluation metrics for financial time series, and address some of the challenges in achieving good results with the latter. We also contrast some classical approaches to market simulation with simulation based on generative modelling, and highlight some advantages and pitfalls of the new approach. While most generative models tend to rely on large amounts of training data, we present here a generative model that works reliably in environments where the amount of available training data is notoriously small. Furthermore, we show how a rough-paths perspective combined with a parsimonious Variational Autoencoder framework provides a powerful way of encoding and evaluating financial time series in such environments where available training data is scarce. Finally, we propose a suitable performance evaluation metric for financial time series and discuss some connections between our Market Generator and deep hedging.
Citations: 51
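The rough-paths perspective encodes a path through its signature, the sequence of iterated integrals. A minimal numpy sketch of the first two signature levels of a piecewise-linear path is below; the paper's actual encoding pipeline is richer, and this function is only an illustration of the object involved.

```python
import numpy as np

def signature_level2(path):
    """First two signature levels of a piecewise-linear path.

    Level 1 is the total increment; level 2 collects the iterated
    integrals int dX^i dX^j, whose antisymmetric part is the Levy
    area capturing order-of-movement information.
    """
    path = np.asarray(path, dtype=float)
    dX = np.diff(path, axis=0)               # per-segment increments
    level1 = dX.sum(axis=0)
    run = np.cumsum(dX, axis=0) - dX         # increment accrued before each segment
    level2 = sum(np.outer(run[k], dX[k]) + 0.5 * np.outer(dX[k], dX[k])
                 for k in range(len(dX)))
    return level1, level2

# For a straight line, level 2 is exactly half the outer product of level 1.
line = np.outer(np.linspace(0.0, 1.0, 11), np.array([1.0, 2.0]))
l1, l2 = signature_level2(line)
```

The straight-line check is a standard sanity test: with no area swept out, only the symmetric part of level 2 survives.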
Deep-CAPTCHA: A Deep Learning Based CAPTCHA Solver for Vulnerability Assessment
ERN: Neural Networks & Related Topics (Topic) Pub Date : 2020-06-15 DOI: 10.2139/ssrn.3633354
Zahra Noury, Mahdi Rezaei
Abstract: CAPTCHA is a human-centred test to distinguish a human operator from bots, attacking programs, or other computerised agents that try to imitate human intelligence. In this research, we investigate a way to crack visual CAPTCHA tests with an automated deep-learning-based solution. The goal is to investigate the weaknesses and vulnerabilities of CAPTCHA generator systems and hence to develop more robust CAPTCHAs, without the risks of manual trial-and-error efforts. To this end, we develop a Convolutional Neural Network called Deep-CAPTCHA. The proposed platform can investigate both numerical and alphanumerical CAPTCHAs. To train and develop an efficient model, we generated a dataset of 500,000 CAPTCHAs. In this paper, we present our customised deep neural network model, review the research gaps and existing challenges, and discuss solutions to cope with the issues. Our network's cracking accuracy reaches 98.94% and 98.31% on the numerical and alphanumerical test datasets, respectively, which means more work is required to make CAPTCHAs robust against automated artificial agents. As the outcome of this research, we identify some efficient techniques to improve the security of CAPTCHAs, based on the performance analysis conducted on the Deep-CAPTCHA model.
Citations: 30
Optimal Deep Neural Networks by Maximization of the Approximation Power
ERN: Neural Networks & Related Topics (Topic) Pub Date : 2020-06-10 DOI: 10.2139/ssrn.3578850
Hector F. Calvo-Pardo, Tullio Mancini, Jose Olmo
Abstract: We propose an optimal architecture for deep neural networks of a given size. The optimal architecture is obtained by maximizing the minimum number of linear regions approximated by a deep neural network with a ReLU activation function. The accuracy of the approximation relies on the neural network structure, characterized by the number of nodes and the dependence and hierarchy between nodes within and across layers. For a given number of nodes, we show how the accuracy of the approximation improves as we optimally choose the width and depth of the network. More complex datasets naturally call for bigger architectures, which perform better under our optimization procedure. A Monte Carlo simulation exercise illustrates the outperformance of the optimized architecture against cross-validation methods and grid search for linear and nonlinear prediction models. An application to the Boston Housing dataset confirms empirically the outperformance of our method against state-of-the-art machine learning models.
Citations: 6
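For a scalar input, the linear-region count that drives the authors' objective can be checked empirically by sampling the network densely and counting slope changes. The sketch below uses an invented one-hidden-layer ReLU network with kinks at x = 1, 2, 3, giving four regions; it is a numerical illustration of the quantity, not the paper's optimization procedure.

```python
import numpy as np

def count_linear_regions_1d(net, lo=-5.0, hi=5.0, n=100001):
    """Empirically count the linear pieces of a scalar-input ReLU
    network by sampling on a dense grid and counting slope changes."""
    x = np.linspace(lo, hi, n)
    slope = np.diff(net(x)) / np.diff(x)
    flags = np.abs(np.diff(slope)) > 1e-6
    # Consecutive flagged intervals straddle the same kink: count runs.
    kinks = int(flags[0]) + int(np.sum(flags[1:] & ~flags[:-1]))
    return 1 + kinks

def one_layer_relu(w, b, v):
    """x -> v . relu(w * x + b) for a single hidden layer."""
    return lambda x: np.maximum(0.0, np.outer(x, w) + b) @ v

# Three hidden units with kinks at x = 1, 2, 3 -> four linear regions.
net = one_layer_relu(np.array([1.0, 1.0, 1.0]),
                     np.array([-1.0, -2.0, -3.0]),
                     np.array([1.0, -2.0, 1.5]))
```

Deeper networks can multiply rather than merely add regions per unit, which is why the optimal width/depth trade-off in the paper is non-trivial.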
Estimating Parameters of Structural Models Using Neural Networks
ERN: Neural Networks & Related Topics (Topic) Pub Date : 2020-04-23 DOI: 10.2139/ssrn.3496098
Y. Wei, Zhenling Jiang
Abstract: Machine learning tools such as neural networks see increasing applications in marketing and economics for predictive tasks, such as classifying images and forecasting choices. Instead of these predictive tasks, we explore using neural nets to estimate the parameter values of an economic model. The neural net is trained with model-generated datasets. Through training, the neural net learns a direct mapping from (the moments of) a dataset to the parameter values under which the dataset was generated. We show that this Neural Net Estimator (NNE) converges to the Bayesian parameter posterior when the number of training datasets is sufficiently large. We examine the performance of NNE in two Monte Carlo studies. NNE incurs substantially smaller simulation costs than simulated MLE and GMM, while achieving no worse estimation accuracy. NNE is also easy to implement with the wide availability of neural net training packages.
Citations: 9
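The NNE recipe (simulate datasets under known parameters, then learn a map from dataset moments back to parameters) can be sketched on a toy model. Everything below is invented for illustration: the structural model, the moment vector, and the use of an ordinary least-squares fit standing in for the neural net.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=200):
    """Toy structural model: y = theta * x + noise."""
    x = rng.normal(size=n)
    y = theta * x + rng.normal(scale=0.5, size=n)
    return x, y

def moments(x, y):
    """Summary statistics fed to the estimator (constant included)."""
    return np.array([1.0, np.mean(x * y), np.mean(y), np.mean(y ** 2)])

# Training: draw parameters from a prior, simulate, record (moments, theta).
thetas = rng.uniform(-2, 2, size=2000)
M = np.stack([moments(*simulate(t)) for t in thetas])
# A linear map from moments to theta stands in for the neural net here.
w, *_ = np.linalg.lstsq(M, thetas, rcond=None)

# "Estimation": apply the learned map to a fresh dataset with theta = 1.3.
theta_hat = moments(*simulate(1.3)) @ w
```

Since E[xy] equals theta in this toy model, the learned map essentially recovers that moment condition; the appeal of NNE is that the same recipe works when no such closed-form mapping is known.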