Log. J. IGPL, Pub Date: 2019-12-13, DOI: 10.1093/jigpal/jzz056
Missing data imputation over academic records of electrical engineering students
Esteban Jove, P. Blanco-Rodríguez, J. Casteleiro-Roca, Héctor Quintián-Pardo, Francisco Javier Moreno Arboleda, J. López-Vázquez, B. A. Rodríguez-Gómez, M. Meizoso-López, A. P. Pazos, F. J. D. C. Juez, Sung-Bae Cho, J. Calvo-Rolle
Abstract: Nowadays, the quality standards of higher education institutions pay special attention to the performance and evaluation of students. Having a complete academic record for each student, including the number of attempts, the average grade and so on, therefore plays a key role. In this context, missing data, which can arise for different reasons, adversely affects later analyses. The use of imputation techniques is therefore presented as a helpful tool for estimating the values of missing data. This work applies imputation techniques to the academic records of engineering students. More specifically, it assesses and compares the performance of the multivariate imputation by chained equations (MICE) methodology, the adaptive assignation algorithm (AAA) based on multivariate adaptive regression splines, and a hybridization of self-organizing maps with Mahalanobis distances and the AAA algorithm. The results show that, in general terms, the proposed methods perform well regardless of the number of missing values.
Log. J. IGPL, Pub Date: 2019-12-13, DOI: 10.1093/jigpal/jzz040
Knowledge in action: logico-philosophical approach to linguistic evidentiality
Cristina Barés Gómez, M. Fontaine, A. Nepomuceno
Abstract: The present study focuses on the grammatical category of evidentiality. The primary meaning of evidentiality concerns the source of information; that is, it expresses whether something has been seen, heard or inferred. The aim here is to conduct a conceptual study of evidentiality with formal tools. The fundamental intuition is that the distinction between 'evidence' as 'proof' and 'evidentiality' as 'to do with proof' is a crucial one. Evidentiality is a dynamic notion to be analysed through the agents' use of knowledge, a knowledge in action, which involves an incoming state and an outgoing state, as is typical of the transmission of information. We propose our own approach in which the dynamics of knowledge in action is captured in the context of a dynamic epistemic logic.
Log. J. IGPL, Pub Date: 2019-12-09, DOI: 10.1093/jigpal/jzz052
Non-removal strategy for outliers in predictive models: The PAELLA algorithm case
M. C. Limas, H. Alaiz-Moretón, Laura Fernández-Robles, Javier Alfonso-Cendón, C. F. Llamas, Lidia Sánchez-González, H. Pérez
Abstract: This paper reports the experience of using the PAELLA algorithm as a helper tool in robust regression, rather than for its originally intended purpose of outlier identification and removal. This novel usage takes advantage of the occurrence vector calculated by the algorithm to strengthen the effect of the more reliable samples and lessen the impact of those that would otherwise be considered outliers. To that end, a series of experiments is conducted in order to learn how best to use the information contained in the occurrence vector. Using a deliberately difficult artificial dataset, a reference predictive model is first fitted on the whole raw dataset. The second experiment reports the results of fitting a similar predictive model while discarding the samples marked as outliers by PAELLA. The third experiment uses the occurrence vector provided by PAELLA to classify the observations into multiple bins and fits every possible model obtained by varying which bins are used for fitting and which are discarded. The fourth experiment introduces a sampling process before fitting, in which the occurrence vector represents the likelihood of a sample being included in the training dataset. The fifth experiment performs this sampling as an internal step interleaved between training epochs. The last experiment compares our approach, using weighted neural networks, to a state-of-the-art method.
Log. J. IGPL, Pub Date: 2019-12-09, DOI: 10.1093/jigpal/jzz049
Self-Organizing Maps to Validate Anti-Pollution Policies
Ángel Arroyo, Carlos Cambra, Álvaro Herrero, V. Tricio, E. Corchado
Abstract: This study presents the application of self-organizing maps to air-quality data in order to analyse episodes of high pollution in Madrid (Spain's capital city). The goal of this work is to explore the dataset and then compare several scenarios with similar atmospheric conditions (periods of high nitrogen dioxide concentration): some in which no actions were taken and some in which traffic restrictions were imposed. The levels of the main pollutants, recorded at the monitoring stations on eleven days at four different times of day between 2015 and 2018, are analysed in order to determine the effectiveness of the anti-pollution measures. The visualization of trajectories on the self-organizing map clearly shows the evolution of pollution levels and consequently allows us to evaluate the effectiveness of the measures taken during and after the protocol activation period.
Log. J. IGPL, Pub Date: 2019-12-09, DOI: 10.1093/jigpal/jzz053
Evolutionary Reinforcement Learning for Adaptively Detecting Database Intrusions
Seul-Gi Choi, Sung-Bae Cho
Abstract: The relational database management system (RDBMS) is the most popular type of database system, and it is important to protect its data from information leakage and corruption. An RDBMS can be attacked by an outsider or an insider; insider attacks are difficult to detect because their patterns constantly change and evolve. In this paper, we propose an adaptive database intrusion detection system that is resistant to potential insider misuse, based on evolutionary reinforcement learning, which combines reinforcement learning and evolutionary learning. The model consists of two neural networks: an evaluation network and an action network. The action network detects the intrusion, and the evaluation network provides feedback on the action network's detections. Evolutionary learning is effective for dynamic and atypical patterns, while reinforcement learning enables online learning. Experimental results on virtual query data based on the Transaction Processing Performance Council-E scenario show that the performance in detecting abnormal queries improves as the proposed model learns the intrusions adaptively. The proposed method achieves the highest performance, at 94.86%, and we demonstrate its usefulness by performing 5-fold cross-validation.
Log. J. IGPL, Pub Date: 2019-12-07, DOI: 10.1093/jigpal/jzz050
On Fingerprinting of Public Malware Analysis Services
Alvaro Botas, R. Rodríguez, Vicente Matellán Olivera, Juan Felipe García Sierra, M. T. Trobajo, M. Carriegos
Abstract: Automatic public malware analysis services (PMAS; e.g. VirusTotal, Jotti or ClamAV, to name a few) provide controlled, isolated and virtual environments in which to analyse malicious software (malware) samples. Unfortunately, malware currently incorporates techniques to recognize execution in a virtual or sandbox environment; when an analysis environment is detected, the malware behaves as a benign application or even shows no activity. In this work, we present an empirical study and characterization of automatic PMAS, considering 26 different services. We also present a set of features that make it easy to fingerprint these services as analysis environments; the lower the unlikeability of these features, the easier it is for us (and thus for malware) to fingerprint the analysis service they belong to. Finally, we propose a method for these analysis services to counter, or at least mitigate, our proposal.
Log. J. IGPL, Pub Date: 2019-12-05, DOI: 10.1093/jigpal/jzz042
Pragmatic logics for hypotheses and evidence
Massimiliano Carrara, D. Chiffi, C. Florio
Abstract: The present paper presents two pragmatic logics, together with their intended interpretations, according to which an illocutionary act of (scientific) hypothesis-making is justified by a scintilla of evidence. The paper first introduces a general pragmatic frame for assertions, expanded to hypotheses, $\mathsf{AH}$, and a hypothetical pragmatic logic for evidence, $\mathsf{HLP}$. Both $\mathsf{AH}$ and $\mathsf{HLP}$ are extensions of the Logic for Pragmatics, $\mathcal{L}^P$. We compare $\mathsf{AH}$ and $\mathsf{HLP}$, and then underline the expressive and inferential richness of both systems in dealing with hypothetical judgements, especially those based on different, sometimes conflicting, evidence.
Log. J. IGPL, Pub Date: 2019-11-25, DOI: 10.1093/jigpal/jzz024
Corrigendum: The Keisler–Shelah theorem for QmbC through semantical atomization
T. M. Ferguson
Log. J. IGPL, Pub Date: 2019-11-25, DOI: 10.1093/jigpal/jzz008
Propositional quantifiers in labelled natural deduction for normal modal logic
Matteo Pascucci
Abstract: This article concerns the treatment of propositional quantification in a framework of labelled natural deduction for modal logic developed by Basin, Matthews and Viganò. We provide a detailed analysis of a basic calculus that can be used for a proof-theoretic rendering of minimal normal multimodal systems with quantification over stable domains of propositions. Furthermore, we consider variations of the basic calculus, obtained via relational theories and domain theories, that allow for quantification over possibly unstable domains of propositions. The main result of the article is that the fragments of the labelled calculi that do not exploit reductio ad absurdum enjoy the Church–Rosser property and the strong normalization property; this result is obtained by combining Girard's method of reducibility candidates with labelled lambda-calculus languages codifying the structure of modal proofs.
Log. J. IGPL, Pub Date: 2019-11-25, DOI: 10.1093/jigpal/jzz009
Universal first-order logic is superfluous in the second level of the polynomial-time hierarchy
N. Borges, Edwin Pin
Abstract: In this paper we prove that $\forall\mathrm{FO}$, the universal fragment of first-order logic, is superfluous in $\varSigma_2^p$ and $\varPi_2^p$. As an example, we show that this yields a syntactic proof of the $\varSigma_2^p$-completeness of value-cost satisfiability. The superfluity method is interesting because it gives a way to prove the completeness of problems involving numerical data such as lengths, weights and costs, and it also adds to the programme started by Immerman and Medina on the syntactic approach to the study of completeness.