{"title":"Visualizing The Implicit Model Selection Tradeoff","authors":"Zezhen He, Yaron Shaposhnik","doi":"10.2139/ssrn.3946701","DOIUrl":"https://doi.org/10.2139/ssrn.3946701","url":null,"abstract":"The recent rise of machine learning (ML) has been leveraged by practitioners and researchers to provide new solutions to an ever-growing number of business problems. As with other ML applications, these solutions rely on model selection, which is typically achieved by evaluating certain metrics on models separately and selecting the model whose evaluations (i.e., accuracy-related loss and/or certain interpretability measures) are optimal. However, empirical evidence suggests that, in practice, multiple models often attain competitive results. Therefore, while models’ overall performance may be similar, they can operate quite differently. This results in an implicit tradeoff in models’ performance throughout the feature space, whose resolution requires new model selection tools. This paper explores methods for comparing predictive models in an interpretable manner to uncover this tradeoff and help resolve it. To this end, we propose various methods that synthesize ideas from supervised learning, unsupervised learning, dimensionality reduction, and visualization to demonstrate how they can be used to inform model developers about the model selection process.
Using various datasets and a simple Python interface, we demonstrate how practitioners and researchers could benefit from applying these approaches to better understand the broader impact of their model selection choices.","PeriodicalId":406435,"journal":{"name":"CompSciRN: Other Machine Learning (Topic)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116016895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Troubleshooting: a Dynamic Solution for Achieving Reliable Fault Detection by Combining Augmented Reality and Machine Learning","authors":"S. Scheffer, Nick Limmen, R. Damgrave, A. Martinetti, B. Rosic, L. V. van Dongen","doi":"10.2139/ssrn.3945964","DOIUrl":"https://doi.org/10.2139/ssrn.3945964","url":null,"abstract":"Today’s perplexing maintenance operations and rapid technological development require an understanding of the complex working environment and the processing of dynamic, real-time information. However, the complexity of the environment and an exponential increase in data volume create new challenges and demands, making troubleshooting extremely difficult. To overcome these issues and provide the operator with real-time access to fast-flowing information, we propose a hybrid solution that combines augmented reality with machine learning software. In particular, we present a dynamic reference map of all the required modules and relations that connect machine learning with augmented reality, using the example of adaptive fault detection. The proposed dynamic reference map is applied to a pilot case study for immediate validation.
To highlight the effectiveness of the proposed solution, we also address the more challenging task of measuring the impact of combining augmented reality with machine learning for fault analysis on maintenance decisions.","PeriodicalId":406435,"journal":{"name":"CompSciRN: Other Machine Learning (Topic)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126523728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Policy Optimization Using Semiparametric Models for Dynamic Pricing","authors":"Jianqing Fan, Yongyi Guo, Mengxin Yu","doi":"10.2139/ssrn.3922825","DOIUrl":"https://doi.org/10.2139/ssrn.3922825","url":null,"abstract":"In this paper, we study the contextual dynamic pricing problem, where the market value of a product is linear in its observed features plus some market noise. Products are sold one at a time, and only a binary response indicating the success or failure of a sale is observed. Our model setting is similar to that of \cite{JN19}, except that we expand the demand curve to a semiparametric model and need to dynamically learn both its parametric and nonparametric components. We propose a dynamic statistical learning and decision-making policy that combines semiparametric estimation from a generalized linear model with an unknown link function with online decision making to minimize regret (maximize revenue). Under mild conditions, we show that for a market noise c.d.f. $F(\cdot)$ with an $m$-th order derivative, our policy achieves a regret upper bound of $\tilde{\mathcal{O}}_{d}(T^{\frac{2m+1}{4m-1}})$ for $m \geq 2$, where $T$ is the time horizon and $\tilde{\mathcal{O}}_{d}$ denotes the order hiding logarithmic terms and the feature dimensionality $d$. The upper bound is further reduced to $\tilde{\mathcal{O}}_{d}(\sqrt{T})$ if $F$ is super smooth, i.e., its Fourier transform decays exponentially. In terms of dependence on the horizon $T$, these upper bounds are close to $\Omega(\sqrt{T})$, the lower bound that holds when the market noise distribution belongs to a parametric class.
We further generalize these results to the case when the product features are dynamically dependent, satisfying some strong mixing conditions.","PeriodicalId":406435,"journal":{"name":"CompSciRN: Other Machine Learning (Topic)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125734015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Policy Gradient Methods Find the Nash Equilibrium in N-player General-sum Linear-quadratic Games","authors":"B. Hambly, Renyuan Xu, Huining Yang","doi":"10.2139/ssrn.3894471","DOIUrl":"https://doi.org/10.2139/ssrn.3894471","url":null,"abstract":"We consider a general-sum N-player linear-quadratic game with stochastic dynamics over a finite horizon and prove the global convergence of the natural policy gradient method to the Nash equilibrium. In order to prove convergence of the method we require a certain amount of noise in the system. We give a condition, essentially a lower bound on the covariance of the noise in terms of the model parameters, in order to guarantee convergence. We illustrate our results with numerical experiments to show that even in situations where the policy gradient method may not converge in the deterministic setting, the addition of noise leads to convergence.","PeriodicalId":406435,"journal":{"name":"CompSciRN: Other Machine Learning (Topic)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129715304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learning under Model Uncertainty","authors":"M. Merz, Mario V. Wuthrich","doi":"10.2139/ssrn.3875151","DOIUrl":"https://doi.org/10.2139/ssrn.3875151","url":null,"abstract":"Deep learning has proven to lead to very powerful predictive models, often outperforming classical regression models such as generalized linear models. Deep learning models perform representation learning, which means that they do covariate engineering themselves so that explanatory variables are optimally transformed for the predictive problem at hand. A crucial object in deep learning is the loss function (objective function) used for model fitting, which implicitly reflects the distributional properties of the observed samples. The purpose of this article is to discuss the choice of this loss function; in particular, we give a specific proposal for a loss function choice under model uncertainty. This proposal turns out to robustify representation learning and prediction.","PeriodicalId":406435,"journal":{"name":"CompSciRN: Other Machine Learning (Topic)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129704894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Are Agent-based Models Universal Approximators? [Extended Pre-print]","authors":"Joseph A. E. Shaheen","doi":"10.2139/ssrn.3867586","DOIUrl":"https://doi.org/10.2139/ssrn.3867586","url":null,"abstract":"Universal approximation functions are well known and studied in canonical mathematics. Here we theorize the existence of an independent class of universal approximation agents and agent-based models. We draw upon historical references from mathematical analysis, the development of machine learning and the agent-based modeling lines of inquiry.","PeriodicalId":406435,"journal":{"name":"CompSciRN: Other Machine Learning (Topic)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124491041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Explainable AI (XAI) Models Applied to Planning in Financial Markets","authors":"E. Benhamou, J. Ohana, D. Saltiel, B. Guez, S. Ohana","doi":"10.2139/ssrn.3862437","DOIUrl":"https://doi.org/10.2139/ssrn.3862437","url":null,"abstract":"Regime change planning in financial markets is well known to be hard to explain and interpret. Can an asset manager clearly explain the intuition behind his regime change predictions on the equity market? To answer this question, we consider a gradient boosting decision trees (GBDT) approach to plan regime changes on the S&P 500 from a set of 150 technical, fundamental, and macroeconomic features. We report improved accuracy of GBDT over other machine learning (ML) methods on S&P 500 futures prices. We show that retaining fewer, carefully selected features provides improvements across all ML approaches. Shapley values have recently been introduced from game theory to the field of ML. This approach allows a robust identification of the most important variables for planning stock market crises, as well as a local explanation of the crisis probability at each date, through consistent feature attribution. We apply this methodology to analyse in detail the March 2020 financial meltdown, for which the model offered a timely out-of-sample prediction.
This analysis unveils in particular the contrarian predictive role of the tech equity sector before and after the crash.","PeriodicalId":406435,"journal":{"name":"CompSciRN: Other Machine Learning (Topic)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128134483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Multi-Domain Solution","authors":"Sébastien Roussel-Konan","doi":"10.2139/ssrn.3839738","DOIUrl":"https://doi.org/10.2139/ssrn.3839738","url":null,"abstract":"Negative Interest Loan Funded Utopianism World-Saving Socio-Economic Development. Cultivating the population. Offering the population custom jobs funded by negative interest loans. Negative interest loan subsidized wage adjustment as well as negative interest loan subsidized inflation mitigation. Refinancing individual (personal), corporate (business, charity) and national (federal, provincial, state, ..., and municipal) debts through loans with negative interest.<br><br>Research, development, innovation, implementation, integration, indoctrination and ideation on the human condition, while I test advanced medical biotechnology I've developed with the Holy Ghost during my ongoing custom studies in omniology, omniosophy and omnimatics focused on human and artificial intelligence, along with pharmaceutical grade nutrition for physical health and mental clarity, have allowed me to go beyond a standard. <br><br>Jyske Bank in Denmark has negative interest mortgages at around 80K$ for the public. The European Central Bank issued 1.3 trillion euros in negative interest loans at -1% to banks. Science is at risk. It is a mistake to dismiss technological potential due to limited resources. We accomplish these goals through negative interest loans, as there is insufficient funding available through private and public finances. <br><br>Before the automation of our economy goes any further.
The ideological goal, the ultimate way of thinking about making the world a better place, is rendering the world the best place, where we have the technology and systems to be happy and free indefinitely, with negative interest loan funded, democratic, ultrainternationalistic, totalitarian, utopianism-oriented systems for immortality, freedom and euphoria.","PeriodicalId":406435,"journal":{"name":"CompSciRN: Other Machine Learning (Topic)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123843299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Bermudans","authors":"Riccardo Aiolfi, N. Moreni, M. Bianchetti, Marco Scaringi, Filippo Fogliani","doi":"10.2139/ssrn.3837499","DOIUrl":"https://doi.org/10.2139/ssrn.3837499","url":null,"abstract":"American and Bermudan-type financial instruments are often priced with specific Monte Carlo techniques whose efficiency critically depends on the effective dimensionality of the problem and the available computational power. In our work we focus on Bermudan Swaptions, well-known interest rate derivatives embedded in callable debt instruments or traded in the OTC market for hedging or speculation purposes, and we adopt an original pricing approach based on Supervised Learning (SL) algorithms. In particular, we link the price of a Bermudan Swaption to its natural hedges, i.e. the underlying European Swaptions, and other sound financial quantities through SL non-parametric regressions. We test different algorithms, from linear models to decision tree-based models and Artificial Neural Networks (ANN), analyzing their predictive performance. All the SL algorithms prove to be reliable and fast, allowing us to overcome the computational bottleneck of standard Monte Carlo simulations; the best-performing algorithms for our problem are Ridge, ANN, and Gradient Boosted Regression Trees.
Moreover, using feature importance techniques, we are able to rank the most important driving factors of a Bermudan Swaption price, confirming that the value of the maximum underlying European Swaption is the prevailing feature.","PeriodicalId":406435,"journal":{"name":"CompSciRN: Other Machine Learning (Topic)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124974287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neural Networks in Finance: A Descriptive Systematic Review","authors":"Dr. K. Riyazahmed","doi":"10.46281/IJFB.V5I2.997","DOIUrl":"https://doi.org/10.46281/IJFB.V5I2.997","url":null,"abstract":"Traditional statistical methods pose challenges in data analysis due to irregularities in financial data. To improve accuracy, financial researchers have used machine learning architectures for the past two decades. Neural Networks (NN) are a widely used architecture in financial research. Despite this wide usage, NN application in finance is yet to be well defined. Hence, this descriptive study classifies and examines NN applications in finance under four broad categories, i.e., investment prediction, credit evaluation, financial distress, and other financial applications. Likewise, the review classifies the NN methods used under each category into standard, optimized, and hybrid NN. Further, the accuracy measures used across these works differ widely, which in turn poses challenges for comparing NNs within each category and reduces the scope for formalizing a theory to choose the optimal network model for each category.","PeriodicalId":406435,"journal":{"name":"CompSciRN: Other Machine Learning (Topic)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131272510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}