{"title":"Testing coverage analysis for software component validation","authors":"J. Gao, Raquel Espinoza, Jingsha He","doi":"10.1109/COMPSAC.2005.150","DOIUrl":"https://doi.org/10.1109/COMPSAC.2005.150","url":null,"abstract":"Constructing component-based software using reusable components is becoming a widely used approach. Since the quality of a component-based system is highly dependent on the quality of its components, component quality validation becomes very critical to both component vendors and users. Effectively validating component quality needs adequate test models and testing coverage criteria. This paper proposes an adequate test model and test coverage criteria for component validation. The paper discusses a dynamic approach to analyze component test coverage based on the proposed test model and test coverage criteria. The major contribution of this paper is its dynamic test coverage analysis solution to monitor API-based component validation and reuse. The paper reports the recent development efforts of a component test coverage analysis tool, and presents an application example.","PeriodicalId":419267,"journal":{"name":"29th Annual International Computer Software and Applications Conference (COMPSAC'05)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114990330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trust management in a distributed environment","authors":"B. Thuraisingham","doi":"10.1109/COMPSAC.2005.163","DOIUrl":"https://doi.org/10.1109/COMPSAC.2005.163","url":null,"abstract":"Cybercrime as well as threats to national security are costing U.S. organizations billions of dollars each year. These organizations could be government organizations, financial corporations, medical hospitals and academic institutions. There is a critical need for organizations to share data within and across the organizations so that analysts could analyze the data, mine the data, and make effective decisions. Each organization could share information within the infosphere of that organization. An infosphere may consist of the data, applications and services that are needed for the operation of the organization. Organizations may share data with one another across what is called a global infosphere that spans multiple infospheres. While access control is an important security concern for organizational data sharing, managing trust is also an important consideration. For example, A may have the authorization to share the data with B, but A may not trust B. Trust management and negotiation has been studied extensively by Winslett et al. and Bertino et al. in the systems TrustBuilder and TrustX. In this paper we will discuss the issues on managing trust in a distributed environment. Much of the discussion is based on the work on secure knowledge management (Bertino et al. 2005).","PeriodicalId":419267,"journal":{"name":"29th Annual International Computer Software and Applications Conference (COMPSAC'05)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130543347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing and simplifying software metric models constructed using maximum likelihood methods","authors":"Victor K. Y. Chan, W. E. Wong","doi":"10.1109/COMPSAC.2005.116","DOIUrl":"https://doi.org/10.1109/COMPSAC.2005.116","url":null,"abstract":"A software metric model can be used to predict a target metric (e.g., the development work effort) for a future release of a software system based on the project's predictor metrics (e.g., the project team size). However, missing or incomplete data often appear in the data samples used to construct the model. So far, the least biased and thus the most recommended software metric models for dealing with the missing/incomplete data are those constructed by using the maximum likelihood methods. It is true that the inclusion of a particular predictor metric in the model construction is initially based on an intuitive or experience-based assumption that the predictor metric impacts significantly the target metric. Nevertheless, this assumption has to be verified. Previous research on metric models constructed by using the maximum likelihood methods simply took this verification for granted. This can result in probable inclusion of superfluous predictor metric(s) and/or unnecessary predictor metric complexity. In this paper, we propose a methodology to optimize and simplify such models based on the results of appropriate hypothesis tests. An experiment is also reported to demonstrate the use of our methodology in trimming redundant predictor metric(s) and/or unnecessary predictor metric complexity.","PeriodicalId":419267,"journal":{"name":"29th Annual International Computer Software and Applications Conference (COMPSAC'05)","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123319151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A productivity metric based on statistical pattern recognition","authors":"J. L. Sharpe, João W. Cangussu","doi":"10.1109/COMPSAC.2005.31","DOIUrl":"https://doi.org/10.1109/COMPSAC.2005.31","url":null,"abstract":"The generally accepted calculation to measure the productivity of a software engineer is based on economic theory and is borrowed from traditional product manufacturing environments. Managers often measure the productivity of a worker to determine merit-based raises or to provide feedback to workers with poor productivity. The assumption is that this calculation of a worker's productivity is directly proportional to a worker's value to the company. The motivation for the approach proposed here is that such relationship may not be algebraically captured with respect to the productivity of software engineers. To better capture the productivity of a software engineer and his value to a company, the productivity problem is reformulated here as a pattern recognition problem and solved using clustering. By defining a general productivity operator, clustering has been used to map the domain of the productivity operator to a range of productivity classes. This new approach has been successfully applied to randomly generated project data and actual project data from the NASA SATC software metric database.","PeriodicalId":419267,"journal":{"name":"29th Annual International Computer Software and Applications Conference (COMPSAC'05)","volume":"560 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116239655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SecureC: control-flow protection against general buffer overflow attack","authors":"H. Nishiyama","doi":"10.1109/COMPSAC.2005.136","DOIUrl":"https://doi.org/10.1109/COMPSAC.2005.136","url":null,"abstract":"Increasing damage from computer virus or worms creating significant problems worldwide. These malicious programs take advantage of computer vulnerabilities to distort the control-flow of the target system. Among these vulnerabilities, buffer overflow is most frequently used as a means of intrusion. To protect against buffer overflow attacks, we have developed a source-to-source translator called SecureC. It incorporates two novel protection methods, \"shadow stack\" and \"code pointer protection\" that prevent control-flow transfer caused by buffer overflow attacks. Evaluation using 11 SPEC CPU2000 benchmark programs showed that SecureC prevents buffer overflow attacks with only 6.1% performance penalty.","PeriodicalId":419267,"journal":{"name":"29th Annual International Computer Software and Applications Conference (COMPSAC'05)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123884809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parallel changes: detecting semantic interferences","authors":"G. L. Thione, D. Perry","doi":"10.1109/COMPSAC.2005.117","DOIUrl":"https://doi.org/10.1109/COMPSAC.2005.117","url":null,"abstract":"Parallel changes are a basic fact of modern software development. Where previously we looked at prima facie interference, here we investigate a less direct form that we call semantic interference. We reduce the forms of semantic interference that we are interested in to overlapping defuse pairs. Using program slicing and data flow analysis, we present algorithms for detecting semantic interference for both concurrent changes (allowed in optimistic version management systems) and sequential parallel changes (supported in pessimistic version management systems), and for changes that are both immediate and distant in time. We provide these algorithms for changes that are additions, showing that interference caused by deletions can be detected by considering the two sets of changes in reverse-time order.","PeriodicalId":419267,"journal":{"name":"29th Annual International Computer Software and Applications Conference (COMPSAC'05)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121070944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Incremental Class Testing from a Class Test Order","authors":"Y. Labiche","doi":"10.1109/COMPSAC.2005.89","DOIUrl":"https://doi.org/10.1109/COMPSAC.2005.89","url":null,"abstract":"Many approaches exist to decide the order in which classes should be integrated during (integration) testing. Most of them, based on an analysis of class dependencies (for instance described in a UML class diagram) aim at producing a partial order indicating which classes should be tested in sequence and which ones can be tested in parallel. We argue in this article that, thanks to the specifics of such a class test order, it is possible to define an incremental strategy for testing classes that promotes reuse during testing, not only along class inheritance hierarchies.","PeriodicalId":419267,"journal":{"name":"29th Annual International Computer Software and Applications Conference (COMPSAC'05)","volume":"17 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121010311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Handling high-bandwidth traffic aggregates by receiver-driven feedback control","authors":"C. Tan, D. Chiu, John C.S. Lui, David K. Y. Yau","doi":"10.1109/COMPSAC.2005.83","DOIUrl":"https://doi.org/10.1109/COMPSAC.2005.83","url":null,"abstract":"High-bandwidth traffic aggregates may occur during times of flooding-based distributed denial-of-service attacks or flash crowds. Congestion control of these traffic aggregates is important to avoid congestion collapse of network services. This paper presents a class of feedback-control algorithms that proactively protect a network server from overload by installing rate throttles in a set of upstream routers. A control-theoretical framework is proposed to optimize the control setting such that throttling can be achieved in a distributed and fair manner. We develop control-theoretic algorithms that (1) are highly adaptive by avoiding the configuration of unnecessary control parameters, (2) provide max-min fairness for any number of throttling routers, (3) respond very quickly to network changes, (4) are extremely robust against extrinsic factors beyond the system control, and (5) are stable under given delay bounds.","PeriodicalId":419267,"journal":{"name":"29th Annual International Computer Software and Applications Conference (COMPSAC'05)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121309122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tridirectional computed chaining: an efficient hashing algorithm for limited space applications","authors":"M. Loeb, A. L. Tharp","doi":"10.1109/COMPSAC.2005.162","DOIUrl":"https://doi.org/10.1109/COMPSAC.2005.162","url":null,"abstract":"Hardware innovations such as motes, RFIDs, embedded microprocessors, and wireless sensors, have provided the foundation for a plethora of new applications for business, government, industry, and individuals. These applications include connected cities, smart homes and appliances, improved security and surveillance, smart vehicles, business integration, and e-commerce. Rarely does a user complain about a software system being too fast. Even though processor speeds have increased significantly, access times for external storage have lagged, causing performance bottlenecks. This paper presents a new hashing scheme, tridirectional computed chaining, which is especially relevant for applications in which space may be limited but fast access is important. The performance enhancements result primarily from using a pseudolink field with three-way branching rather than one allowing only a linear chain.","PeriodicalId":419267,"journal":{"name":"29th Annual International Computer Software and Applications Conference (COMPSAC'05)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115144327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Structured object-oriented co-analysis/co-design of hardware/software for the FACTS power system","authors":"M. Ryan, Sojan Markose, Xiaoqing Frank Liu, Ying Cheng","doi":"10.1109/COMPSAC.2005.147","DOIUrl":"https://doi.org/10.1109/COMPSAC.2005.147","url":null,"abstract":"There are several approaches to the hardware/software design in embedded systems, ranging from the traditional sequential methods which focus on the determination of the hardware architecture prior to software design, to newer object-oriented approaches that attempt to apply software engineering methods to hardware design without a systematic process. This paper discusses a structured object-oriented methodology for the integrated co-analysis and co-design of hardware/software systems using an extended high order object-oriented modeling technique (HOOMT). This methodology offers a uniform method for hardware and software developers to jointly develop the specifications for and partitioning of the hardware and software components of a system, as well as developing the interfaces between components, and allows easy design migration of components between hardware and software. In this paper it is applied to the co-analysis/co-design of the hardware and software of a simulated advanced power grid control system.","PeriodicalId":419267,"journal":{"name":"29th Annual International Computer Software and Applications Conference (COMPSAC'05)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123168098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}