{"title":"Mitigating Disaster using Secure Threshold-Cloud Architecture","authors":"Elochukwu A. Ukwandu, W. Buchanan, Gordon Russell","doi":"10.32474/ctcsa.2018.01.000107","DOIUrl":null,"url":null,"abstract":"With the introduction of cloud services for disaster management on a scalable rate, there appears to be the needed succour by small business owners to get a cheaper and more secure disaster recovery mechanism to provide business continuity and remain competitive with other large businesses. But that is not to be so, as cloud outages became a nightmare. Recent statistics by Ponemon Institute [1] on Cost of Data Centre Outages, shows an increasing rate of 38% from $505,502 in 2010 to $740,357 as at January 2016. Using activity-based costing they were able to capture direct and indirect cost to: Damage to mission-critical data; Impact of downtime on organizational productivity; Damages to equipment and other assets and so on. The statistics were derived from 63 data centres based in the United States of America. These events may have encouraged the adoption of multi-cloud services so as to divert customers traffic in the event of cloud outage. Some finegrained proposed solutions on these are focused on Redundancy and Backup such as: Local Backup by [2]; Geographical Redundancy and Backup [3]; The use of Inter-Private Cloud Storage [4]; Resource Management for data recovery in storage clouds [5], and so on. But in all these, cloud service providers see disaster recovery as a way of getting the system back online and making data available after a service disruption, and not on contending disaster by providing robustness that is capable of mitigating shocks and losses resulting from these disasters.","PeriodicalId":303860,"journal":{"name":"Current Trends in Computer Sciences & Applications","volume":"105 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Current Trends in Computer Sciences & Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.32474/ctcsa.2018.01.000107","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
With the introduction of cloud services for disaster management at scale, such services appear to offer small business owners the succour they need: a cheaper and more secure disaster recovery mechanism that provides business continuity and keeps them competitive with larger businesses. This has not proved to be the case, however, as cloud outages have become a nightmare. Recent statistics from the Ponemon Institute [1] on the cost of data centre outages show a 38% increase, from $505,502 in 2010 to $740,357 as of January 2016. Using activity-based costing, they captured direct and indirect costs, including damage to mission-critical data, the impact of downtime on organizational productivity, and damage to equipment and other assets. The statistics were derived from 63 data centres based in the United States of America. These events may have encouraged the adoption of multi-cloud services, so that customer traffic can be diverted in the event of a cloud outage. Some fine-grained proposed solutions focus on redundancy and backup, such as local backup [2], geographical redundancy and backup [3], the use of inter-private cloud storage [4], and resource management for data recovery in storage clouds [5]. In all of these, however, cloud service providers treat disaster recovery as a way of getting the system back online and making data available after a service disruption, rather than as contending with disaster by providing robustness capable of mitigating the shocks and losses that such disasters cause.
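The abstract does not spell out the mechanism behind the "threshold-cloud" architecture, but threshold designs of this kind are commonly realised with (k, n) secret sharing across independent cloud providers, so that an outage of a few providers causes neither data loss nor data exposure. The sketch below is an illustrative Shamir-style example only, not the authors' implementation; the field prime, the share counts, and the function names are assumptions chosen for brevity.

```python
# A minimal (k, n) threshold secret-sharing sketch in the spirit of a
# "threshold-cloud" design: a secret (e.g. a data-encryption key) is split
# into n shares, one per cloud provider, and any k shares reconstruct it,
# so the loss of up to n - k providers is tolerated.
import secrets

PRIME = 2**127 - 1  # Mersenne prime; all arithmetic is over this field


def split_secret(secret: int, k: int, n: int):
    """Split `secret` into n shares, any k of which reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for power, c in enumerate(coeffs):
            y = (y + c * pow(x, power, PRIME)) % PRIME
        shares.append((x, y))
    return shares


def reconstruct(shares):
    """Recover the secret from k shares via Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


if __name__ == "__main__":
    key = secrets.randbelow(PRIME)           # hypothetical encryption key
    shares = split_secret(key, k=3, n=5)     # one share per cloud provider
    assert reconstruct(shares[:3]) == key    # any 3 of 5 shares recover it
    assert reconstruct(shares[2:]) == key    # outage of 2 providers tolerated
```

Under these assumptions, availability and confidentiality come from the same primitive: fewer than k colluding or compromised providers learn nothing about the secret, while any k surviving providers keep the data recoverable during an outage.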