{"title":"Troll Farms and Voter Disinformation","authors":"Philipp Denter, Boris Ginzburg","doi":"10.2139/ssrn.3919032","DOIUrl":"https://doi.org/10.2139/ssrn.3919032","url":null,"abstract":"Political agents often attempt to influence elections through \"troll farms\" that flood social media platforms with messages from fake accounts that emulate genuine information. We study the ability of troll farms to manipulate elections. We show that such disinformation tactics is more effective when voters are otherwise well-informed. Thus, for example, societies with high-quality media are more vulnerable to electoral manipulation.","PeriodicalId":175553,"journal":{"name":"Informatics eJournal","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124264656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Image Quality Assessment System for Evaluating MR Reconstruction Pipeline Using Single Image Acquisition Method","authors":"Sunny Sibi, T. S. Pillai, P. S. Chandran, N. NishaKumari, O. RanjithK, M. Deepak, P. Devanand","doi":"10.2139/ssrn.3736531","DOIUrl":"https://doi.org/10.2139/ssrn.3736531","url":null,"abstract":"Image quality assessment becomes critical for measuring the performance of Magnetic Resonance (MR) image reconstruction algorithms. The raw data acquired from MR scanner is processed through a series of components in the MR reconstruction pipeline to generate an image. The MR reconstruction pipeline includes the algorithms for reconstruction, quality enhancement, artifact reduction, and so on. The implementation of the pipeline requires evaluation of large number of filters and reconstruction algorithms, which demands a simple and effective mechanism to assess the quality of output at each stage. The main goal of the proposed work is to provide research groups and MR algorithm developers with protocols and tools necessary to conveniently assess the quality of MR images and to be able to compare the reconstruction methods among each other. This paper focuses on the implementation of an image quality assessment mechanism using the single image method by specifying Region of Interest (ROI) from object and background as suggested by National Electrical Manufacturers Association (NEMA). The single image method does not produce consistent results as the SNR values change drastically based on the background noise. It has been observed that, by careful selection of noise region, the drastic changes in values of SNR can be avoided with single acquisition techniques. This mechanism is supported by a visual aiding tool to carefully select signal and noise regions and avoid the errors introduced in blind selection of ROI. Using the single acquisition method and visual aiding tool, SNR comparison of images from two different reconstruction methods can be easily evaluated.","PeriodicalId":175553,"journal":{"name":"Informatics eJournal","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115986075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stockgram: Deep Learning Model for Digitizing Financial Communications via Natural Language Generation","authors":"Purva Singh","doi":"10.5121/ijnlc.2020.9401","DOIUrl":"https://doi.org/10.5121/ijnlc.2020.9401","url":null,"abstract":"This paper proposes a deep learning model, StockGram, to automate financial communications via natural language generation. StockGram is a seq2seq model that generates short and coherent versions of financial news reports based on the client's point of interest from numerous pools of verified resources. The proposed model is developed to mitigate the pain points of advisors who invest numerous hours while scanning through these news reports manually. StockGram leverages bi-directional LSTM cells that allows a recurrent system to make its prediction based on both past and future word sequences and hence predicts the next word in the sequence more precisely. The proposed model utilizes custom word-embeddings, GloVe, which incorporates global statistics to generate vector representations of news articles in an unsupervised manner and allows the model to converge faster. StockGram is evaluated based on the semantic closeness of the generated report to the provided prime words.","PeriodicalId":175553,"journal":{"name":"Informatics eJournal","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129876685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Reasonable Robot Standard: Bringing Artificial Intelligence Law into the 21st Century","authors":"Michael Conklin","doi":"10.2139/ssrn.3675181","DOIUrl":"https://doi.org/10.2139/ssrn.3675181","url":null,"abstract":"This is a review of The Reasonable Robot: Artificial Intelligence and the Law, by Ryan Abbott. The book does an excellent job providing insights into the legal challenges that arise from the proliferation of artificial intelligence (AI). It is well organized, divided into the four main areas of AI legal impact: tax, tort, intellectual property, and criminal. While each area could be read on its own, it is interesting to note the underlying theme all these areas have in common. Namely, as AI increasingly occupies the roles once held by people, it will need to be treated under the law more like a person. Overall, the book does an outstanding job discussing proposed solutions for AI technology and the law. However, some of Abbott’s proposals are based on a faulty assumption. Like many modern-tech analysts, Abbott overemphasizes the threat that adopting new technology will displace human workers.","PeriodicalId":175553,"journal":{"name":"Informatics eJournal","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128830522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficacy of Non-Negative Matrix Factorization for Feature Selection in Cancer Data","authors":"Parth Patel, K. Passi, Chakresh Kumar Jain","doi":"10.5121/ijdkp.2020.10401","DOIUrl":"https://doi.org/10.5121/ijdkp.2020.10401","url":null,"abstract":"Over the past few years, there has been a considerable spread of micro-array technology in many biological patterns, particularly in those pertaining to cancer diseases like leukemia, prostate, colon cancer, etc. The primary bottleneck that one experiences in the proper understanding of such datasets lies in their dimensionality, and thus for an efficient and effective means of studying the same, a reduction in their dimension to a large extent is deemed necessary. This study is a bid to suggesting different algorithms and approaches for the reduction of dimensionality of such micro-array datasets.This study exploits the matrix-like structure of such micro-array data and uses a popular technique called Non-Negative Matrix Factorization (NMF) to reduce the dimensionality, primarily in the field of biological data. Classification accuracies are then compared for these algorithms.This technique gives an accuracy of 98%.","PeriodicalId":175553,"journal":{"name":"Informatics eJournal","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124373841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Challenges Facing IFRS for Accounting of Cryptocurrencies","authors":"Feras Shehada, M. Shehada","doi":"10.2139/ssrn.3664571","DOIUrl":"https://doi.org/10.2139/ssrn.3664571","url":null,"abstract":"The study aimed to explain and analyse the challenges of accounting for cryptocurrencies in light of the current accounting framework of the International Financial Reporting Standards (IFRS) and identify an appropriate model for accounting of cryptocurrencies. The study sample included the academicians in the accounting department of Palestinian Universities located in Gaza strip. For the purpose of measuring the variables, the study designed a questionnaire to achieve this purpose.<br><br>The findings of the study concluded that there are deficiencies in the IFRS for accounting of cryptocurrencies compared with traditional IFRS framework. It also concluded that using business models of enterprises, the differences in the usual activity of enterprises and the economic substance, leading to different use for accounting forms of cryptocurrencies compared with traditional IFRS framework.<br>","PeriodicalId":175553,"journal":{"name":"Informatics eJournal","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126337745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Emerging Role of Nano-informatics in America","authors":"Rahul Reddy Nadikattu","doi":"10.2139/ssrn.3614535","DOIUrl":"https://doi.org/10.2139/ssrn.3614535","url":null,"abstract":"The rapid expansion of nanotechnology coupled with its integration into different scientific domains has led to a new era. In recent decades, there has been an emergence of nano-informatics in the USA and other European countries which deals with information science and nanotechnology. The present study reports the initiation and applicative properties of nano-informatics in accordance to its scientific roles in America. In order to achieve the required milestone in nano-informatics, there are major challenges that need to be addressed as scanty reports are available on nano-informatics. Hence, the study highlights the importance of nano-informatics in data curation and mining along with the role of governing bodies and the establishment of databases. These establishments are well organized to implement the untapped role of nano-informatics which is rapidly growing. The information provided in the present mini-review adds scientific inputs towards the growing knowledge of nanotechnology and information science along with their future prospects.","PeriodicalId":175553,"journal":{"name":"Informatics eJournal","volume":"157 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114737200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine Learning or Econometrics for Credit Scoring: Let's Get the Best of Both Worlds","authors":"E. Dumitrescu, Sullivan Hué, Christophe Hurlin, S. Tokpavi","doi":"10.2139/ssrn.3553781","DOIUrl":"https://doi.org/10.2139/ssrn.3553781","url":null,"abstract":"Decision trees and related ensemble methods like random forest are state-of-the-art tools in the field of machine learning for credit scoring. Although they are shown to outperform logistic regression, they lack interpretability and this drastically reduces their use in the credit risk management industry, where decision-makers and regulators need transparent score functions. This paper proposes to get the best of both worlds, introducing a new, simple and interpretable credit scoring method which uses information from decision trees to improve the performance of logistic regression. Formally, rules extracted from various short-depth decision trees built with couples of predictive variables are used as predictors in a penalized or regularized logistic regression. By modeling such univariate and bivariate threshold effects, we achieve significant improvement in model performance for the logistic regression while preserving its simple interpretation. Applications using simulated and four real credit defaults datasets show that our new method outperforms traditional logistic regressions. Moreover, it compares competitively to random forest, while providing an interpretable scoring function. JEL Classification: G10 C25, C53","PeriodicalId":175553,"journal":{"name":"Informatics eJournal","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121531198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measuring the Effect of the Digitalization","authors":"Elvin Mammadli, Vsevolod Klivak","doi":"10.2139/ssrn.3524823","DOIUrl":"https://doi.org/10.2139/ssrn.3524823","url":null,"abstract":"Digitalization has changed the rules both in the private and public sectors of the economy. Therefore, the study of its effects has become more relevant. In previous studies, authors mainly focused on the definition of the term, its boundaries and the creation of indexes. The main downside of such papers devoted to this topic is the lack of a quantitative approach. The impacts of digitalization on economic indicators have not been quantitatively investigated in depth. This paper studies the impact of digitalization on the economy, and more specifically on GDP. The first part consists of creating a synthetic index, the Index of Digitalization (ID), which reflects the state of digitalization at the country level. The second part is dedicated to validating the ID using a Panel Data Model, where GDP in previous years is set as a dependent variable that defines a direct connection.","PeriodicalId":175553,"journal":{"name":"Informatics eJournal","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129602622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Technology, Adaptation and the Efficient Market Hypothesis","authors":"Daniel Grabowski","doi":"10.2139/ssrn.3649446","DOIUrl":"https://doi.org/10.2139/ssrn.3649446","url":null,"abstract":"Tests of the efficient market hypothesis (EMH) suffer from a known flaw that has nevertheless received insufficient attention in the literature. Just as in Zeno’s famous paradox of Achilles and the tortoise, while we are testing market efficiency the measure of efficiency has already moved on. Prediction technology and data availability perpetually improve beyond the level that was available to market participants during the sample period. This introduces a bias into the tests. If one believes in the existence of deterministic patterns and of technological progress, one has to expect to discover inefficiencies in historical data. We can only expect prices to reflect all available in- formation to the degree that current technology enables. From this argument follows a necessity to adjust the EMH. This article discusses possible adjustments and develops the concept of technological efficiency. Based on a review of empirical evidence, this concept is shown to resolve many anomalies and to help bridge the gap between the financial economics literature and the rapidly advancing machine learning literature.","PeriodicalId":175553,"journal":{"name":"Informatics eJournal","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132069628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}