{"title":"A Novel Hardened Design of a CMOS Memory Cell at 32nm","authors":"Sheng Lin, Yong-Bin Kim, F. Lombardi","doi":"10.1109/DFT.2009.18","DOIUrl":"https://doi.org/10.1109/DFT.2009.18","url":null,"abstract":"This paper proposes a new design for hardening a CMOS memory cell at the nano feature size of 32nm. By separating the circuitry for the write and read operations, the static stability of the proposed cell configuration increases more than 4.4 times at typical process corner, respectively compared to previous designs. Simulation shows that by appropriately sizing the pull-down transistors, the proposed cell results in a 40% higher critical charge and 13% less delay than the conventional design. Simulation results are provided using the predictive technology file for 32nm feature size in CMOS to show that the proposed hardened memory cell is best suited when designing memories for both high performance and soft error tolerance.","PeriodicalId":405651,"journal":{"name":"2009 24th IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124939457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analyzing Formal Verification and Testing Efforts of Different Fault Tolerance Mechanisms","authors":"Meng Zhang, Anita Lungu, Daniel J. Sorin","doi":"10.1109/DFT.2009.23","DOIUrl":"https://doi.org/10.1109/DFT.2009.23","url":null,"abstract":"Pre-fabrication design verification and post-fabrication chip testing are two important stages in the product realization process. These two stages consume a large part of resources in the form of time, money, and engineering effort during the process [1]. Therefore, it is important to take into account the design verification (such as through formal verification) effort and chip testing effort when we design a system. This paper analyzes the impact on formal verification effort and testing effort due to adding different fault tolerance mechanisms to baseline systems. By comparing the experimental results of different designs, we conclude that re-execution (time redundancy) is the most efficient mechanism when considering formal verification and testing efforts together, followed by parity code, dual modular redundancy (DMR), and triple modular redundancy (TMR). We also present the ratio of verification effort to testing effort to assist designers in their trade-off analysis when deciding how to allocate their budget between formal verification and testing. Particularly, we find even for a designated fault tolerance mechanism, some small change in structure can lead to dramatic changes in the efforts. These findings have implications for practical industrial production.","PeriodicalId":405651,"journal":{"name":"2009 24th IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125141535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Future of Test -- Product Integration and its Impact on Test","authors":"Michael Campbell","doi":"10.1109/DFT.2009.67","DOIUrl":"https://doi.org/10.1109/DFT.2009.67","url":null,"abstract":"Driving leading edge products with high quality while designs, flows, and processes advance with Moore’s Law will require the semiconductor industry to continue to drive for increasing innovative DFT strategies. The test industry will need to drive for new ideas in the areas of: yield analysis, modeling, test techniques, and defect / fault tolerance. To continue cost effective products while costs escalate, yield analysis will need to take far greater consideration of advanced statistical techniques including consideration of spatial randomness. As UDSM processes become more sensitive to variations in lithography, random particle defects, overlay errors, and printability, it is inevitable that new methods will need to be developed to address the economics of Moore’s Law. At the same time, this convergence will put more demand on design/ circuit techniques as well as the need for advanced yield and process control techniques. IP integration drives intersection of dissimilar IP (EG: low power, high speed, RF, etc) requirements where the need for fault tolerant design will be required to achieve HVM. The key area for new DFT development is analog like methods to accommodate defect tolerance will be required HSIO, integrated RF cores, as well as the introduction of non-conventional fabrication methods are required to meet cost, quality and reliability demands.","PeriodicalId":405651,"journal":{"name":"2009 24th IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122204979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of Resistive Open Defects in a Synchronizer","authors":"Hyoung-Kook Kim, W. Jone, Laung-Terng Wang","doi":"10.1109/DFT.2009.34","DOIUrl":"https://doi.org/10.1109/DFT.2009.34","url":null,"abstract":"This paper presents fault modeling and analysis for open defects in a synchronizer that is implemented by two D flip-flops. Open defects are injected into any node of the synchronizer, and HSPICE is used to perform circuit analysis. The major purpose of this analysis is to find all possible faults that might occur in the synchronizer by open defects. The results obtained can be used to develop methods for testing the interfacing circuits between different clock domains which are implemented with the synchronizer.","PeriodicalId":405651,"journal":{"name":"2009 24th IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems","volume":"150 45","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114004446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Incremental Approach to Functional Diagnosis","authors":"Luca Amati, C. Bolchini, L. Frigerio, F. Salice, B. Eklow, Arnold Suvatne, E. Brambilla, F. Franzoso, Michele Martin","doi":"10.1109/DFT.2009.29","DOIUrl":"https://doi.org/10.1109/DFT.2009.29","url":null,"abstract":"This paper presents a methodology for an incremental approach to functional fault diagnosis of complex boards, used to identify candidate failing components based on the results of the executed tests, once a misbehavior has been detected but not localized. The proposal aims at reducing both time and effort during the diagnostic phase, by executing a subset of the available tests, analyzing the achieved results, and then supporting the operator by suggesting what tests should be run next, to identify the faulty component, should the already gathered information be insufficient. A methodology has been defined to analyze the available results, and to evaluate the effectiveness of the remaining tests to find the most probable cause of failure in a reduced number of additional test runs. The approach has been validated on a portion of a real life board and other circuits, to tune parameters and to evaluate the performance of the proposed methodology.","PeriodicalId":405651,"journal":{"name":"2009 24th IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127305593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Software-Based Hardware Fault Tolerance for Many-Core Architectures","authors":"H. Wunderlich","doi":"10.1109/DFT.2009.36","DOIUrl":"https://doi.org/10.1109/DFT.2009.36","url":null,"abstract":"Software-based hardware fault tolerance describes a class of techniques which allows software to detect and correct errors introduced by unreliable hardware. With the advent of many-core architectures, the already existing reliability issues, like temporal and structural variations or the sensitivity against soft-errors, are becoming an even more serious problem. Software-based hardware fault tolerance is able to provide cost-effective solutions. This presentation will point out the new opportunities and challenges for applying software-based hardware fault tolerance to emerging many-core architectures. We will discuss the tradeoff between the application of these techniques and the classical hardware-based fault tolerance in terms of fault coverage, overhead, and performance.","PeriodicalId":405651,"journal":{"name":"2009 24th IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122914451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Testing of Switch Blocks in Three-Dimensional FPGA","authors":"Takumi Hoshi, K. Namba, Hideo Ito","doi":"10.1109/DFT.2009.26","DOIUrl":"https://doi.org/10.1109/DFT.2009.26","url":null,"abstract":"In recent years, programmable interconnects in Field Programmable Gate Arrays (FPGAs)become a bottleneck of improving performance. So, for improving performance of FPGAs, a design of programmable interconnects is a key element, and innovative routing architecture is being desired. From this viewpoint, three dimensional FPGAs (3D-FPGAs) were proposed and focused. 3D-FPGAs have multiple layers which connected by vertical wires through 3D- Switch Block (SB). The main difference between the structures of the traditional two dimensional (2D) FPGAs and 3D-FPGAs is in 3D-SBs, and thus parts in 3D-FPGAs other than the SBs can be tested using existing methods for 2D-FPGAs. However, 3D-SBs cannot be tested by traditional testing for 2D-FPGAs. This paper presents testing for 3D-SBs in 3D-FPGAs. The proposed testing can detect stuck-at, bridging, stuck-open and stuck-on faults on three-dimensional switch blocks, and requires five test configurations to detect these catastrophic faults.","PeriodicalId":405651,"journal":{"name":"2009 24th IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116564091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An ILP formulation to Unify Power Efficiency and Fault Detection at Register-Transfer Level","authors":"Yu Liu, Kaijie Wu","doi":"10.1109/DFT.2009.19","DOIUrl":"https://doi.org/10.1109/DFT.2009.19","url":null,"abstract":"As the integration level and clock speed of VLSI devices keep rising, power consumption of those devices increases dramatically. At the same time, shrinking size of transistors that enables denser and smaller chips running at faster clock speeds makes devices more susceptible to environment-induced faults. Both power reduction and concurrent error detection are becoming enabling technologies in Very Deep Sub Micron and nanometer technology domains. However, existing techniques either minimize power of “fault-free” devices, or improve fault tolerance without concerning power. Little work has been proposed to optimize the two objectives simultaneously. In this paper we attack this problem by unifying power efficiency and fault tolerance in a comprehensive Integer Linear Programming formulation. The proposed approach is tested using known benchmarks.","PeriodicalId":405651,"journal":{"name":"2009 24th IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132586845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Functional Qualification of a Platform Model","authors":"G. D. Guglielmo, F. Fummi, G. Pravadelli, M. Hampton, Florian Letombe","doi":"10.1109/DFT.2009.15","DOIUrl":"https://doi.org/10.1109/DFT.2009.15","url":null,"abstract":"This work focuses on the use of functional qualification for measuring the quality of co-verification environments for hardware/software (HW/SW) platform models. Modeling and verifying complex embedded platforms requires co-simulating one or more CPUs running embedded applications on top of an operating system, and connected to some hardware devices. The paper describes first a HW/SW co-simulation framework which supports all mechanisms used by software, in particular by device drivers, to access hardware devices so that the target CPU’s machine code can be simulated. In particular, synchronization between hardware and software is performed by the co-simulation framework and, therefore, no adaptation is required in device drivers and hardware models to handle synchronization messages. Then, CertitudeTM, a flexible functional qualification tool, is introduced. Functional qualification is based on the theory of mutation analysis, but it is extended by considering a mutation to be killed only if a testcase fails. Certitude(TM) automatically inserts mutants into the HW/SW models and determines if the verification environment can detect these mutations. A known mutant that cannot be detected points to a verification weakness. If a mutant cannot be detected, there is evidence that actual design errors would also not be detected by the co-verification environment. This is an iterative process and functional qualification solution provides the verifier with information to improve the co-verification environment quality. The proposed approach has been successfully applied on an industrial platform as shown in the experimental result section.","PeriodicalId":405651,"journal":{"name":"2009 24th IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130184956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving the Effectiveness of XOR-based Decompressors through Horizontal/Vertical Move of Stimulus Fragments","authors":"N. Alawadhi, O. Sinanoglu","doi":"10.1109/DFT.2009.9","DOIUrl":"https://doi.org/10.1109/DFT.2009.9","url":null,"abstract":"While test stimulus compression helps reduce test time and data volume, and thus alleviates test costs, the delivery of certain test vectors may not be possible, leading to test quality degradation. Whether a test vector is encodable in the presence of a decompressor strongly hinges on the distribution of its care bits. Utilization of stimulus manipulation techniques improves test pattern encodability as the distribution of care bits can be judiciously controlled. The desired test vector is delivered by resolving the stimulus conflicts that would have otherwise lead to pattern unencodability. Stimulus manipulation in the form of horizontal move of stimulus fragments has been shown to improve fan-out decompressors. In this work, we propose manipulation techniques in the form of horizontal or vertical move of stimulus fragments in order to improve XOR-based decompressors. Improvement in test pattern encodability reflects into savings in test costs and/or increase in test quality. The hardware and algorithmic support for each solution are also elaborated on, demonstrating the practicality of the proposed manipulation techniques.","PeriodicalId":405651,"journal":{"name":"2009 24th IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127174731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}