{"title":"A firewalling scheme for securing MPOA-based enterprise networks","authors":"Jun Xu, M. Singhal","doi":"10.1109/HASE.1998.731613","DOIUrl":"https://doi.org/10.1109/HASE.1998.731613","url":null,"abstract":"A well-known security problem with MPOA is that cut-through connections generally bypass firewall routers if there are any. None of the previously proposed approaches solved the problem properly. We propose a novel firewalling scheme for MPOA that nicely fixes the security hole. Our firewalling scheme has three outstanding advantages that make it ideal for securing MPOA-based enterprise networks. First, based on our novel concept of logical chokepoints, our firewalling scheme does not require the existence of physical chokepoints inside the network. Second, the scheme is nicely embedded into the MPOA protocol so that its cost, performance overhead, and protocol complexity are reduced to a minimum. Third, the scheme is centrally administrated so that it scales well to very large networks.","PeriodicalId":340424,"journal":{"name":"Proceedings Third IEEE International High-Assurance Systems Engineering Symposium (Cat. No.98EX231)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114980742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Software component independence","authors":"Denise M. Woit, David V. Mason","doi":"10.1109/HASE.1998.731597","DOIUrl":"https://doi.org/10.1109/HASE.1998.731597","url":null,"abstract":"Independence is a fundamental requirement for calculating system reliability from component reliabilities, whether in hardware or software systems. Markov analysis is often used in such calculation; however, procedures as conventionally used do not qualify as nodes in a Markov system. We outline the requirements for several classes of component independence and use the CPS (continuation passing style) transformation to convert conventional procedures into fragments that are appropriate to Markov analysis.","PeriodicalId":340424,"journal":{"name":"Proceedings Third IEEE International High-Assurance Systems Engineering Symposium (Cat. No.98EX231)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125929880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Experience in developing system requirements specification for a sensor failure detection and identification scheme","authors":"Diego Del Gobbo, M. Napolitano, J. Callahan, B. Cukic","doi":"10.1109/HASE.1998.731614","DOIUrl":"https://doi.org/10.1109/HASE.1998.731614","url":null,"abstract":"This paper presents insights gained while developing the system requirements specification of a flight control system within a formal framework. SCR methodology has been used for the description of the requirements of the sensor failure detection and identification scheme. The emphasis is on the practical aspects and experience gained through the application of a formal method in developing the system level requirements for the given application.","PeriodicalId":340424,"journal":{"name":"Proceedings Third IEEE International High-Assurance Systems Engineering Symposium (Cat. No.98EX231)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126633618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human-centered virtual machine of problem solving agents, software agents, intelligent agents and objects","authors":"R. Khosla","doi":"10.1109/HASE.1998.731636","DOIUrl":"https://doi.org/10.1109/HASE.1998.731636","url":null,"abstract":"This paper outlines a human-centered virtual machine of problem solving agents, intelligent agents, software agents and objects. It deals with issues related to high-assurance (e.g. reliability, availability, real-time and others) through design of human-centered system architecture in which technology is a primitive. The human-centered virtual machine is based on a number of human-centered perspectives including the distributed cognition approach. The human-centered virtual machine has been applied in complex data intensive time critical problems like real-time alarm processing and fault diagnosis, air combat simulation and business (decision support).","PeriodicalId":340424,"journal":{"name":"Proceedings Third IEEE International High-Assurance Systems Engineering Symposium (Cat. No.98EX231)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114724096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Matching software fault tolerance and application needs","authors":"E. Shokri, H. Hecht","doi":"10.1109/HASE.1998.731622","DOIUrl":"https://doi.org/10.1109/HASE.1998.731622","url":null,"abstract":"The designation \"fault tolerant software\" has been used for techniques ranging from roll-back and retry to N-version programming, from data mirroring to functional redundancy. If the term is to be meaningful, qualifying definitions are required. This paper attempts to provide these by analyzing the capabilities of representative software fault tolerance techniques described in prior literature and matching these with the needs of representative environments in which fault tolerance may be applied. This paper suggests five categories for comparison of application needs and fault-tolerance capabilities: accuracy, deadline, state preservation, coverage, and economy of resources. It then shows how representative needs and capabilities can be characterized in identical terms by these categories. Algorithms are developed for either ranking (ordering) the importance of categories or assigning weighting factors to them. The algorithms suggest partially-suitable matches where there is no complete match between the application needs and the capabilities of fault-tolerance techniques. Examples of the selection technique are presented.","PeriodicalId":340424,"journal":{"name":"Proceedings Third IEEE International High-Assurance Systems Engineering Symposium (Cat. No.98EX231)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131024427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast antirandom (FAR) test generation","authors":"A. Andrews, Andre Bai, Tom Chen, Charles Anderson, A. Hajjar","doi":"10.1109/HASE.1998.731625","DOIUrl":"https://doi.org/10.1109/HASE.1998.731625","url":null,"abstract":"Anti-random testing has proved useful in a series of empirical evaluations. The basic premise of anti-random testing is to choose new test vectors that are as far away from existing test inputs as possible. The distance measure is the Hamming or Cartesian distance. Unfortunately, this method essentially requires emuneration of the input space and computation of each input vector when used on an arbitrary set of existing test data. This prevents scale-up to a large test sets and/or long input vectors. We present and empirically evaluate a technique to generate anti-random vectors that is computationally feasible for large input vectors and long sequences of tests. We also show how this fast anti-random test generation (FAR) can consider retained state (i.e. effects of subsequent inputs on each other). We evaluate effectiveness using branch coverage as the testing criterion.","PeriodicalId":340424,"journal":{"name":"Proceedings Third IEEE International High-Assurance Systems Engineering Symposium (Cat. No.98EX231)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131669934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Formal specification in collaborative design of critical software tools","authors":"D. Coppit, K. Sullivan","doi":"10.1109/HASE.1998.731590","DOIUrl":"https://doi.org/10.1109/HASE.1998.731590","url":null,"abstract":"Engineers use software tools to analyze designs for critical systems. Because important decisions are based on tool results, tools must provide valid modeling constructs, engineers must understand them to validate their models; and tools must be implemented without major error. Such tools thus demand careful conceptual and software design. One aspect of such design is the use of rigorous specification and design techniques. This paper contributes a case study on the use of such techniques in the collaborative development of a dynamic fault tree analysis tool. The collaboration involved software engineering researchers knowledgeable in software specification and design and reliability engineering researchers expert in fault tree analysis. Our work revealed conceptual and implementation errors in an earlier version of the tool. Our study supports the position that there is a need for rigorous software specification and design in developing novel analysis tools, and that collaboration between software engineers and domain experts is feasible and profitable.","PeriodicalId":340424,"journal":{"name":"Proceedings Third IEEE International High-Assurance Systems Engineering Symposium (Cat. No.98EX231)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129110120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design considerations in Boeing 777 fly-by-wire computers","authors":"Y. C. Yeh","doi":"10.1109/HASE.1998.731596","DOIUrl":"https://doi.org/10.1109/HASE.1998.731596","url":null,"abstract":"The new technologies in flight control avionics systems selected for the Boeing 777 airplane program consist of the following: fly-by-wire (FBW), the ARINC 629 data bus, and deferred maintenance. The FBW must meet extremely high levels of functional integrity and availability. The heart of the FBW concept is the use of triple redundancy for all hardware resources: the computing system, airplane electrical power, hydraulic power and communication paths. The multiple redundant hardware is required to meet the numerical safety requirements. Hardware redundancy can be relied upon only if hardware faults can be contained; fail-passive electronics are necessary building blocks for the FBW systems. In addition, the FBW computer architecture must consider other fault tolerance issues: generic errors, common mode faults, near-coincidence faults and dissimilarity.","PeriodicalId":340424,"journal":{"name":"Proceedings Third IEEE International High-Assurance Systems Engineering Symposium (Cat. No.98EX231)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129299614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On-chip cache memory resilience","authors":"S. Hwang, G. Choi","doi":"10.1109/HASE.1998.731620","DOIUrl":"https://doi.org/10.1109/HASE.1998.731620","url":null,"abstract":"This paper investigates the system-level impact of soft errors occurring in cache memory and proposes a novel cache-memory design approach for improving the soft-error resilience. Radiation experiments are conducted to quantify the severity of errors attributed to transients occurring in a cache memory subsystem. Simulation-based fault injections are then conducted to determine major failure modes and to assess the cost/benefits in cache memory designs/configuration alternatives. The performance, reliability, and overhead for each design configuration, e.g., cache block-size and write policy, are studied. The results indicate that the performance enhancement approaches using large cache block-sizes can adversely affect the soft-error sensitivity of the system. Write-through cache design is more susceptible to incomplete/incorrect program termination, while write-back cache design is more prone to data corruptions. A resilient cache design scheme, selective set invalidation (SSI), that better scrubs the cache-memory errors is proposed and evaluated.","PeriodicalId":340424,"journal":{"name":"Proceedings Third IEEE International High-Assurance Systems Engineering Symposium (Cat. No.98EX231)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129237522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fault and leak tolerance in firewall engineering","authors":"Robert N. Smith, S. Bhattacharya","doi":"10.1109/HASE.1998.731603","DOIUrl":"https://doi.org/10.1109/HASE.1998.731603","url":null,"abstract":"The idea and associated benefits of a Firewall cascade, with the firewalls (FWs) placed across a large complex network, distributed system has been proposed and evaluated by the authors (R.N. Smith and S. Bhattacharya, 1997). The paper extends the FW cascade approach to illustrate its applicability in a perspective of FW fault tolerance. We target the class of FW faults that are due to design errors, e.g., FW leaks. Given that most large complex FW designs are likely to contain design errors or leaks, the end-to-end security objective is how best to deploy a set of such potentially leaky FWs in a way that their net effect can seal or eliminate a majority of the FW leaks. The key idea of a FW cascade adding leak tolerance is due to the heterogeneity of different COTS FWs, as well as a higher assurance that not all distinct FWs are likely to contain identical leaks. The proposed capability in the paper enables a prudent design of a secure network that can scale along the levels of security needs, while maximizing performance, reducing cost and enhancing leak tolerance.","PeriodicalId":340424,"journal":{"name":"Proceedings Third IEEE International High-Assurance Systems Engineering Symposium (Cat. No.98EX231)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114879906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}