{"title":"Comparison of Machine Learning Models to Predict Risk of Falling in Osteoporosis Elderly","authors":"German Cuaya-Simbro, A. Pérez-Sanpablo, A. Muñoz-Meléndez, Ivett Quiñones Urióstegui, Eduardo-F. Morales-Manzanares, Lidia Nuñez-Carrera","doi":"10.2478/fcds-2020-0005","DOIUrl":"https://doi.org/10.2478/fcds-2020-0005","url":null,"abstract":"Abstract Falls are a multifactorial cause of injuries for older people. Subjects with osteoporosis are particularly vulnerable to falls. The focus of this study is to investigate the performance of different machine learning models built on spatiotemporal gait parameters to predict falls, particularly in subjects with osteoporosis. Spatiotemporal gait parameters and prospective registration of falls were obtained from a sample of 110 community-dwelling older women with osteoporosis (age 74.3 ± 6.3) and 143 without osteoporosis (age 68.7 ± 6.8). We built four different models, Support Vector Machines, Neural Networks, Decision Trees, and Dynamic Bayesian Networks (DBN), for each specific set of parameters used, and compared their accuracy, precision, recall and F-score in predicting fall risk. The F-score values show that DBN-based models are the most effective at predicting fall risk; the best result is obtained by a DBN model combining the experts’ variables with the FSMC’s variables (the mixed variable set), achieving an accuracy of 80% and a recall of 73%. The results confirm the feasibility of computational methods to complement experts’ knowledge in predicting risk of falling within a period as long as 12 months.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":"45 1","pages":"66 - 77"},"PeriodicalIF":1.1,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48129698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Division-by-q dichotomization for interval uncertainty reduction by cutting off equal parts from the left and right based on expert judgments under short-termed observations","authors":"V. Romanuke","doi":"10.2478/fcds-2020-0008","DOIUrl":"https://doi.org/10.2478/fcds-2020-0008","url":null,"abstract":"Abstract A problem of reducing interval uncertainty is considered via an approach of cutting off equal parts from the left and right. The interval contains admissible values of an observed object’s parameter. The object’s parameter cannot be measured directly or deductively computed, so it is estimated by expert judgments. Terms of observations are short, and the object’s statistical data are poor. Thus an algorithm for flexibly reducing interval uncertainty is designed, which adjusts the parameter through expert procedures and allows the cutting off to be controlled. While the parameter is adjusted forward, the interval becomes progressively narrower after every next expert procedure. The narrowing is performed via division-by-q dichotomization, cutting off the q−1-th parts from the left and right. If the current parameter’s value falls outside of the interval, forward adjustment is canceled. Then backward adjustment is executed, where one of the endpoints is moved backwards. Adjustment is not executed when the current parameter’s value, while enclosed within the interval, is simultaneously too close to both the left and right endpoints. If the value is “trapped” in this way a definite number of times in succession, an early stop is triggered.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":"45 1","pages":"125 - 155"},"PeriodicalIF":1.1,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41470727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An enhanced differential evolution algorithm with adaptation of switching crossover strategy for continuous optimization","authors":"P. Puphasuk, J. Wetweerapong","doi":"10.2478/fcds-2020-0007","DOIUrl":"https://doi.org/10.2478/fcds-2020-0007","url":null,"abstract":"Abstract An optimization method that is both efficient and simple in structure is generally desired by users for application to a wide range of practical problems. In this research, an enhanced differential evolution algorithm with adaptation of switching crossover strategy (DEASC) is proposed as a general-purpose population-based optimization method for continuous optimization problems. DEASC extends the solving ability of the basic differential evolution algorithm (DE), whose performance depends significantly on the user’s selection of the control parameters: scaling factor, crossover rate and population size. Like the original DE, the proposed method aims at efficiency, simplicity and robustness. An appropriate population size is selected to work in accordance with good choices of the scaling factors. Then, the switching crossover strategy of using low or high crossover rates is incorporated and adapted to suit the problem being solved. In this manner, the adaptation strategy is just a convenient add-on mechanism. To verify the performance of DEASC, it is tested on several benchmark problems of various types and difficulties, and compared with some well-known methods from the literature. It is also applied to solve some practical systems of nonlinear equations. Despite its much simpler algorithmic structure, the experimental results show that DEASC greatly enhances the basic DE. It solves all the test problems with fast convergence speed and overall outperforms the compared methods, which have more complicated structures. In addition, DEASC also shows promising results on high-dimensional test functions.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":"45 1","pages":"124 - 97"},"PeriodicalIF":1.1,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45193583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recent Results on Computational Molecular Modeling of The Origins of Life","authors":"Juan Francisco Carrascoza Mayen, J. Błażewicz","doi":"10.2478/fcds-2020-0003","DOIUrl":"https://doi.org/10.2478/fcds-2020-0003","url":null,"abstract":"Abstract In the last decade of research into the origins of life, there has been increased interest in theoretical molecular modeling methods aimed at improving the accuracy and speed of the algorithms that model molecular mechanics and chemical reactions of matter. Research on the scenarios of prebiotic chemistry has also advanced. The presented work discusses the latest computational techniques and trends implemented so far. Although it is difficult to cover the full extent of the current publications, we have tried to orient the reader to the modern tendencies and challenges faced by those working in the origins-of-life field.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":"45 1","pages":"35 - 46"},"PeriodicalIF":1.1,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46315767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scheduling High Multiplicity Coupled Tasks","authors":"W. Wojciechowicz, M. Gabay","doi":"10.2478/fcds-2020-0004","DOIUrl":"https://doi.org/10.2478/fcds-2020-0004","url":null,"abstract":"Abstract The coupled tasks scheduling problem is a class of scheduling problems in which each task consists of two operations and a separation gap between them. High multiplicity is a compact encoding in which identical tasks are grouped together and the group is specified instead of each individual task; consequently, the size of a problem instance encoding decreases significantly. In this article we derive a lower bound for this problem variant and propose an asymptotically optimal algorithm. The theoretical results are complemented with a computational experiment in which the new algorithm is compared with three other implemented algorithms.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":"45 1","pages":"47 - 61"},"PeriodicalIF":1.1,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42819109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Morality, protection, security and gain: lessons from a minimalistic, economically inspired multi-agent model","authors":"Maciej Komosinski, T. Zok","doi":"10.2478/fcds-2020-0002","DOIUrl":"https://doi.org/10.2478/fcds-2020-0002","url":null,"abstract":"Abstract In this work, we introduce a simple multi-agent simulation model with two roles of agents that correspond to moral and immoral attitudes. The model is given explicitly by a set of mathematical equations with continuous variables and is characterized by four parameters: morality, protection, and two efficiency parameters. Agents are free to adjust their roles to maximize individual gains. The model is analyzed theoretically to find conditions for its stability, i.e., the fractions of agents of both roles that lead to an equilibrium in their gains. A multi-agent simulation is also developed to verify the dynamics of the model for all values of morality and protection parameters, and to identify potential discrepancies with the theoretical analysis.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":"45 1","pages":"17 - 33"},"PeriodicalIF":1.1,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45171765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sovereign Rating Analysis through the Dominance-Based Rough Set Approach","authors":"Ayrton Benedito Gaia do Couto, L. Gomes","doi":"10.2478/fcds-2020-0001","DOIUrl":"https://doi.org/10.2478/fcds-2020-0001","url":null,"abstract":"Abstract The classifications of risk made by international rating agencies aim at guiding investors regarding the capacity and disposition of the evaluated countries to honor their public debt commitments. In this study, the analysis of economic variables of sovereign rating, in a context of vagueness and uncertainty, leads to the inference of patterns (multi-criteria rules) by following the Dominance-based Rough Set Approach (DRSA). The discovery of patterns in data may be useful for subsidizing foreign investment decisions in countries, and this knowledge base may be used in rule-based expert systems (learning from training examples). The present study seeks to complement the analysis produced by an international credit rating agency, Standard & Poor’s (S&P), for the year 2018.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":"45 1","pages":"16 - 3"},"PeriodicalIF":1.1,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48076484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A blockchain based Trusted Persistent Identifier system for Big Data in Science","authors":"E. Bellini","doi":"10.2478/fcds-2019-0018","DOIUrl":"https://doi.org/10.2478/fcds-2019-0018","url":null,"abstract":"Abstract A stable reference for Internet resources is crucial not only to identify a resource in a trustworthy and certified way but also to guarantee continuous access to it over time. The current practice in scientific publishing of using a Persistent Identifier (PID), such as a DOI or Handle, is becoming attractive for datasets as well. In fact, in the era of Big Data, replicability and verification of scientific results are paramount. In this paper we verify the functional feasibility of permissioned blockchain technology as a tool to implement a Trustworthy Persistent Identifier (T-PID) system for datasets in the scientific domain.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":"44 1","pages":"351 - 377"},"PeriodicalIF":1.1,"publicationDate":"2019-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43508496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mutants as Patches: Towards a formal approach to Mutation Testing","authors":"T. Lewowski, L. Madeyski","doi":"10.2478/fcds-2019-0019","DOIUrl":"https://doi.org/10.2478/fcds-2019-0019","url":null,"abstract":"Abstract Background: Mutation testing is a widely explored technique used to evaluate the quality of software tests, but little attention has been given to its mathematical foundations. Aim: We provide a formal description of the core concepts in mutation testing, the relations between them, and conclusions that can be drawn from the presented model. Method: We introduce the concepts of mutant space and patch space, and refer to the patch merging procedure from patch theory. We explicitly present constraints, such as location-dependence, that affect mutation operators. We also present a way to use the introduced formalism with traditional operators proposed in other papers. Results: The proposed formalism makes it possible to describe interactions between separate mutations using well-known abstract algebra notation. Conclusion: The presented formalism may substantially decrease the number of tested weak mutants and increase the number of valuable ones, while providing tools to partially address the problem of equivalent mutants, particularly for higher-order mutation testing. However, additional empirical evaluation is still needed.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":"44 1","pages":"379 - 405"},"PeriodicalIF":1.1,"publicationDate":"2019-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43398612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cloud Brokering with Bundles: Multi-objective Optimization of Services Selection","authors":"Jedrzej Musial, Emmanuel Kieffer, Mateusz Guzek, Grégoire Danoy, Shyam S. Wagle, P. Bouvry, J. Błażewicz","doi":"10.2478/fcds-2019-0020","DOIUrl":"https://doi.org/10.2478/fcds-2019-0020","url":null,"abstract":"Abstract Cloud computing has become one of the major computing paradigms. Not only has the number of offered cloud services grown exponentially, but many different providers also compete by proposing very similar services. This situation should eventually benefit customers, but since these services differ slightly in functional and non-functional aspects (e.g., performance, reliability, security), consumers may be confused and unable to make an optimal choice. The emergence of cloud service brokers addresses these issues. A broker gathers information about services from providers and about the needs and requirements of the customers, with the final goal of finding the best match. In this paper, we formalize and study a novel problem that arises in the area of cloud brokering. In its simplest form, brokering is a trivial assignment problem, but in more complex and realistic cases this no longer holds. The novelty of the presented problem lies in considering services which can be sold in bundles. Bundling is a common business practice in which a set of services is sold together for a lower price than the sum of the prices of the included services. This work introduces a multi-criteria optimization problem which could help customers determine the best IT solutions according to several criteria. The Cloud Brokering with Bundles (CBB) problem models the different IT packages (or bundles) found on the market while minimizing (maximizing) different criteria. A proof of complexity is given for the single-objective case, and experiments have been conducted with a special case of two criteria: the first being the cost and the second artificially generated. We also designed and developed a benchmark generator based on real data gathered from 19 cloud providers. The problem is solved using an exact optimizer relying on a dichotomic search method. The results show that the dichotomic search can be successfully applied to small instances corresponding to typical cloud-brokering use cases, returning results within seconds. For larger problem instances, solving times are not prohibitive, and solutions could be obtained for large corporate clients within minutes.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":"44 1","pages":"407 - 426"},"PeriodicalIF":1.1,"publicationDate":"2019-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41561647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}