{"title":"Reliable health monitoring and fault management infrastructure based on embedded instrumentation and IEEE 1687","authors":"A. Jutman, K. Shibin, S. Devadze","doi":"10.1109/AUTEST.2016.7589605","DOIUrl":"https://doi.org/10.1109/AUTEST.2016.7589605","url":null,"abstract":"Semiconductor products manufactured with latest and emerging processes are increasingly prone to wear out and aging. While the fault occurrence rate in such systems increases, the fault tolerance techniques are becoming even more expensive and one cannot rely on them alone. Rapid emergence of embedded instrumentation as an industrial paradigm and adoption of respective IEEE 1687 standard by key players of semiconductor industry opens up new horizons in developing efficient on-line health monitoring frameworks for prognostics and fault management. The paper describes a cross-layer framework capable of handling soft and hard faults as well as the system's degradation. In addition to mitigating/correcting the faults, the system may systematically monitor, detect, localize, diagnose and classify them (manage faults). As a result of such fault management approach, the system may continue operating and degrade gracefully even in case if some of the system's resources become unusable due to intolerable faults. The main focus of this paper is however to discuss the dependability properties of the Fault Management framework itself and related infrastructure.","PeriodicalId":314357,"journal":{"name":"2016 IEEE AUTOTESTCON","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114714496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Planning tomorrow's test data today: Simple tips for future-proofing your test data","authors":"Tom Armes","doi":"10.1109/AUTEST.2016.7589631","DOIUrl":"https://doi.org/10.1109/AUTEST.2016.7589631","url":null,"abstract":"The complexity and volume for manufacturing and engineering test data in modern electronics manufacturing is growing every year. The proliferation of formats and structures of data means that test data storage, retrieval and analysis becomes increasingly difficult. Each silo of test data (like subcomponent Supply, R&D, RMA, Manufacturing, and Repair) has different parameters, attributes, and testing procedures than the others. In addition, each business unit within an organization may have different procedures, testing nomenclature, and processes than other business units even if they're making the same products. Question: With these challenges in mind, if you were asked to choose a way to store your data for the next 30 years and make it usable and integrated with enterprise data, how would you do it? And with smart sensors expected to heavily impact manufacturing, how will you be able to gather additional product testing data and make it integrate with legacy data? In this paper and discussion, we'll talk about the available technologies and file formats that Test Engineers should consider when preparing to write out test data from complex manufacturing and engineering test beds. By storing and structuring your test data with the best practices discussed in this discussion track, you'll be able to do efficiently store, quickly analyze, and futureproof your test output to work with other enterprise data.","PeriodicalId":314357,"journal":{"name":"2016 IEEE AUTOTESTCON","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114835604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of the efficiency of two methods on RF/MW power amplifier gain compression test","authors":"Esra Nurgun","doi":"10.1109/AUTEST.2016.7589600","DOIUrl":"https://doi.org/10.1109/AUTEST.2016.7589600","url":null,"abstract":"In this study, our aim is to present the improvement gained in manufacturing throughput by comparing two measurement methods on a Receiver Module. Namely, the methods compared are classical SA/SG sweep method and NA-GCA option method.","PeriodicalId":314357,"journal":{"name":"2016 IEEE AUTOTESTCON","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129602309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A simplified overview of handling faults in systems around us","authors":"H. Al-Asaad","doi":"10.1109/AUTEST.2016.7589642","DOIUrl":"https://doi.org/10.1109/AUTEST.2016.7589642","url":null,"abstract":"Faults cause the observed behavior of a system to deviate from the expected behavior. They occur throughout the lifetime of a typical system. They may occur in the design phase of the system, the manufacturing phase, or during normal operation. In this paper, we present a simplified overview of handling faults in electronic as well as nonelectronic systems such as systems in engineering, science, biology, etc. The paper first defines the concepts of faults, errors, and failures. It then demonstrates the high cost of failure via several examples. The goal of this paper is to present a detailed and simplified discussion of the methods of handling faults in systems around us including fault avoidance; fault dismissal; error detection; fault location; error correction; fault masking; fault tolerance; and reconfiguration. Faults in a system can be divided into three categories: Faults that can be avoided (at a substantial cost), faults that can be dismissed due to various reasons, and faults that must be handled correctly before they become errors and ultimately lead to the overall failure of the system. The paper discusses the various techniques that prevent the faults in the system from leading the system to failure. The paper also discusses the requirements for fault tolerance and the methods used to achieve the desired fault tolerant capabilities in a typical system. The paper also presents two case studies to illustrate the concepts described above. The first system is the personal computer and the second system is the human digestive system. The case studies demonstrate the significant similarities between handling faults in electronic and non-electronic systems.","PeriodicalId":314357,"journal":{"name":"2016 IEEE AUTOTESTCON","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122175075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Benefits of universal switching in ATE: Exploring the benefits of COTS Universal Switching solutions","authors":"Robert C. Waldeck","doi":"10.1109/AUTEST.2016.7589582","DOIUrl":"https://doi.org/10.1109/AUTEST.2016.7589582","url":null,"abstract":"The commercial offerings in ATE over the last 20-30 years has shown a strong disparity between COTS solutions available to the integrator vs the solutions offered by the large turnkey ATE manufacturers. This disparity is primarily focused in the area of switching. For the system integrators, the choice has been a wide variety of switching products, primarily in formats such as VXI or more recently PXI. These offerings are various unrelated switches in Matrix, Tree and SPDT or SPST formats. There has been minimal effort among the card providers to offer a set of cards which work together to create a unified switching system. The Turnkey system providers on the other hand have primarily focused on providing systems with highly integrated Universal Switching architectures. The reason for the disparity is puzzling as the Turnkey system providers clearly see strong advantage in the Universal Switching Architecture, strong enough to spend significant resources in developing a proprietary switching system of their own. Surprisingly, the commercial card providers have not jumped onto this bandwagon and developed commercial alternatives for System Integrators until recently. Recently we have been spending time with more customers wrestling with Legacy ATE challenges. It has been our experience that Universal Switching can be both a way forward as well as providing a future platform more suited for TPS transportability. This paper will explore both the functionality as well as the merits of Universal Switching systems in an effort to help the reader make an informed decision. Considering that the fielding of any group of TPS's on a system platform frequently exceeds the cost of the Test System itself, sometimes to a large degree, the ability to reduce the cost of the TPS development as well as future transition to a next generation system is a significant driver in the reduction of the overall cost of ownership of the program. We will also explore how Universal Switching can be used to replace non-Universal Switching in fielded ATE Systems, and how it more readily supports new technology insertions and creates a next generation test platform which more readily supports TPS transportability across platforms.","PeriodicalId":314357,"journal":{"name":"2016 IEEE AUTOTESTCON","volume":"448 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121433670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BER test time optimization","authors":"S. Shinde, J. S. Knudsen","doi":"10.1109/AUTEST.2016.7589610","DOIUrl":"https://doi.org/10.1109/AUTEST.2016.7589610","url":null,"abstract":"In commerce, time to market (TTM) is defined as the length of time it takes from a product being conceived until its being available for sale. There are no standards for measuring TTM and it varies from product to product. The product life cycle of PC's is 2-3 years whereas for mobile products its 1-2 years. Major part of product's life cycle time is taken by testing phase. Hence it becomes essential to reduce number of tests to very critical ones and also to reduce test time. For high speed serial interfaces one of the quality measures of digital transmission is bit error ratio (BER). BER is defined as the ratio of number of received bits in error to the total number of bits transmitted. BER testing for high speed serial interfaces requires long string of bits to be sent and hence requires very long test time, usually in minutes and hours. This test time finally translates to money and hence should be shortened. This paper explains how hypothesis definition and testing can reduce BER test time and cost cutting can be achieved. Further research goes in to verification phase of this test methodology with test lab exercise.","PeriodicalId":314357,"journal":{"name":"2016 IEEE AUTOTESTCON","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115164214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Construction of chaotic sensing matrix for fractional bandlimited signal associated by fractional fourier transform","authors":"Haoran Zhao, Liyan Qiao, Libao Deng, Y. Chen","doi":"10.1109/AUTEST.2016.7589640","DOIUrl":"https://doi.org/10.1109/AUTEST.2016.7589640","url":null,"abstract":"Fractional Fourier transform (FrFT) is a powerful tool for the non-stationary signals because of its additional degree of freedom in the time-frequency plane. Due to the importance of the FrFT in signal processing, most of the bandlimited sampling theorems in traditional frequency domain have been extended to fractional Fourier bandlimited signals based on the relationship between the FrFT and regular integer order Fourier transform (FT). However, the implementations of those existing extensions are not efficient because of the high sampling rate which is related to the maximum fractional Fourier frequency of the signal. Compressed Sensing (CS) is a useful tool to collect information directly which reduces sampling pressure, computational load as well as saving the storage space. The construction of sensing matrix is the basic issue. Most of CS demand that the sensing matrix is constructed by random under-sampling which is uncontrollable and hard to be realized by hardware. This paper proposes a deterministic construction of sensing matrix for the multiband signals in the fractional Fourier domain (FrFD). We give the sparse basis of the signal and derive the sensing matrix based on the analog to information conversion technology. The sensing matrix is constructed by random sign matrix and Toeplitzed matrix. The sub-sampling method is used to obtain the structural signal. Theoretically, the matrix satisfies the incoherent condition and the entire structure of system is practical. We show in this paper that the sampling rate is much lower than the Nyquist rate. The signal reconstruction is studied based on the framework of compressed sensing. The performance of the proposed sampling method is verified by the simulation. The probability of the successful reconstruction and the mean squared error (MSE) are both analyzed. The numerical results suggest that proposed system is effective for a spectrum-blind sparse multiband signal in the FrFD and demonstrate its promising potentials.","PeriodicalId":314357,"journal":{"name":"2016 IEEE AUTOTESTCON","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132796936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recommended practice for insuring reuse and integrity of ATS data by the application of IEEE SCC20/ATML standards","authors":"M. Modi, J. Stanco, Patrick Verbovsky","doi":"10.1109/AUTEST.2016.7589581","DOIUrl":"https://doi.org/10.1109/AUTEST.2016.7589581","url":null,"abstract":"The IEEE SCC20/Automatic Test Markup Language (ATML) standards are currently being used to describe a host of Automatic Test Equipment (ATE) related documents. These standards cover test descriptions, requirements and specifications of ATE instruments and UUTs in an all-encompassing test environment. These standards provide the necessary elements needed to achieve the goals of reducing the logistic footprint associated with complex system testing through data portability and reuse. The IEEE SCC20/ATML standards provide ability to capture electronic products design/specification test data required for life cycle support. However, in order to achieve the full benefits of these standards one must recognize the tasks of implementing the standards to provide the information necessary to achieve the goal of reduced support equipment proliferation and cost reduction. While these standards go a long way in achieving these objectives, a number of issues must be addressed. In order to support this environment, the IEEE SCC20/ATML standards provide for a number of ways to develop IEEE compliant documents. However, without a set of comprehensive procedures and supporting tools the optimum reuse and data integrity of these products may not be achieved. This situation is caused by the scope of the testing environment which utilizes the integration of many elements and events that occur over a products life cycle [1]. This situation leads to a data provenance issue resulting in data that may be inconsistent with IEEE SCC20/ATML documents. This paper will discuss how to handle the data issues by describing an approach and methodology addressing the data reuse and portability issues. The recommended methods focus on insuring that the IEEE SCC20/ATML developed products results in the highest degree of reuse, interchangeability and data integrity throughout the different use cases of both government and industry. The way to apply these methods starts with the source of the data. In this case the source would be a semantical taxonomy that describes how the IEEE SCC20/ATML documents should be structured for supporting the data required by the use cases. Due to the large scope of this effort, this paper will concentrate on a specific example use case utilizing select standards and tools to aid in producing compliant IEEE SCC20/ATML standard products that will result in the reuse and interoperability of these products. It will focus on the data needed to test a UUT and how that data is defined and utilized in the resulting documentation. The activities requiring this data, the events and resources acting on this data will be covered. The intent is to maintain the integrity and validity of the data throughout the products (UUT) testing life cycle. It is intended that paper will lead to improved use and enhancements of these standards. 
This information is intended to be used in developing a recommended practice approach that will support the use of these standards in the","PeriodicalId":314357,"journal":{"name":"2016 IEEE AUTOTESTCON","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132213139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Valuation and optimization for performance based logistics using continuous time Bayesian networks","authors":"L. Perreault, Monica Thornton, John W. Sheppard","doi":"10.1109/AUTEST.2016.7589568","DOIUrl":"https://doi.org/10.1109/AUTEST.2016.7589568","url":null,"abstract":"When awarding contracts in the private sector, there are a number of logistical concerns that agencies such as the Department of Defense (DoD) must address. In an effort to maximize the operational effectiveness of the resources provided by these contracts, the DoD and other government agencies have altered their approach to contracting through the adoption of a performance based logistics (PBL) strategy. PBL contracts allow the client to purchase specific levels of performance, rather than providing the contractor with the details of the desired solution in advance. For both parties, the difficulty in developing and adhering to a PBL contract lies in the quantification of performance, which is typically done using one or more easily evaluated objectives. In this work, we address the problem of evaluating PBL performance objectives through the use of continuous time Bayesian networks (CTBNs). The CTBN framework allows for the representation of complex performance objectives, which can be evaluated quickly using a mathematically sound approach. Additionally, the method introduced here can be used in conjunction with an optimization algorithm to aid in the process of selecting a design alternative that will best meet the needs of the contract, and the goals of the contracting agency. Finally, the CTBN models used to evaluate PBL objectives can also be used to predict likely system behavior, making this approach extremely useful for PHM as well.","PeriodicalId":314357,"journal":{"name":"2016 IEEE AUTOTESTCON","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121764311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cost model for verifying requirements","authors":"Edward Dou","doi":"10.1109/AUTEST.2016.7589570","DOIUrl":"https://doi.org/10.1109/AUTEST.2016.7589570","url":null,"abstract":"Testable requirements are the foundation to any development program. The number of requirements and the technical difficulty of satisfying those requirements are factors that drive program cost and schedule. Being able to quickly assess the scope of requirement verification and costing that activity is essential to the proposal process. For awarded programs, controlling and costing requirements volatility is critical to ensuring sufficient resources to execute the program and meet customer need dates. When considering requirements verification, to include regression testing, a balance is often needed between the cost and the coverage provided. These challenges are commonly encountered during program startup and execution. This paper presents a cost model, Cost Model for Verifying Requirements (CMVR), to assist program managers in quickly assessing the financial impact of verifying requirements as a result of changing (e.g. adding, modifying, and deleting) requirements. Of note, this paper focuses on more formal testing and verification activities, but does not address development and integration aspects. For the CMVR model to provide accurate results, the test team should first fully map requirements to test events. In doing so, requirements should be traced from the stakeholder (e.g. customer requirements) through derived requirements to test objectives and ultimately to test scripts/procedures. Each test script and procedure will need to be assessed to determine the cost (man-hours and duration) to complete the test objective. With the linkage between requirements and test events established, programs can then use the cost model for bidding, evaluating requirements volatility, and developing test sets that optimize the cost-benefit ratio. Bidding: During bidding, requirements are often not fully developed. The CMVR model addresses these ambiguities by providing a portfolio mix (easy, moderate, difficult) based on historical data, enabling program managers to select or alter - similar to tailoring ones 401K plan. Requirements Volatility: Evaluating the impact of requirements volatility on test costs requires assessing development, test setup, execution, and analysis of potential efficiencies that can be leveraged from overlapping tests. Developing Test Sets: With limited time and resources, programs may need to identify a subset of tests to execute (such as for regression testing). Programs will need to determine the focus areas of requirements (Depth), the test requirement coverage (breadth), and the critical must-test requirements. This paper also includes examples of utilizing the CMVR model and demonstrating how this capability enables quickly assessing cost and schedule impacts due to a change in requirements. 
In summary, The CMVR cost model provides program managers with an important tool to quickly assess the testing cost of requirements.","PeriodicalId":314357,"journal":{"name":"2016 IEEE AUTOTESTCON","volume":"358 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122810896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}