Title: Guest Editors' Introduction: Silicon Debug and Diagnosis
Authors: N. Nicolici, B. Benware
Journal: IEEE Design & Test of Computers
DOI: 10.1109/MDAT.2013.2279324
Publication date: 2013-11-08
Abstract
Troubleshooting how and why circuits and systems fail is important and rapidly growing in industry significance. Debug and diagnosis may be needed for yield improvement, process monitoring, correcting the design function, failure-mode learning for research and development, or simply getting a working first prototype. This detective work is, however, very tricky. Sources of difficulty include circuit and system complexity, packaging, limited physical access, and a shortened product creation cycle driven by time-to-market pressure. New and efficient solutions for debug and diagnosis therefore have a much-needed and highly visible impact on productivity. This special section of IEEE Design & Test includes extended versions of the three best contributions presented at the Silicon Debug and Diagnosis (SDD) Workshop, held in Anaheim, CA, USA, in November 2012. It was the eighth in a series of highly successful technical workshops on the debug and diagnosis of semiconductor circuits and systems, from prototype bring-up to volume production. The first paper, "Linking the verification and validation of complex integrated circuits through shared coverage metrics" by Hung et al., discusses how to bridge pre-implementation (also commonly referred to as "pre-silicon") verification and post-implementation validation in an emulation environment. Given the inherent flexibility offered by field-programmable gate arrays (FPGAs), the authors discuss how embedded instrumentation can aid data acquisition and coverage measurement in FPGA designs. They trace the evolution of FPGA trace-collection methods, showing how recent tools allow a set of predetermined cover points to be observed without requiring recompilation. Further, recent research aims to enable any cover point to be measured in FPGA prototypes.
In the second paper, entitled "Evolution of graphics Northbridge test and debug architectures across four generations of AMD ASICs," Margulis et al. present the evolution of design-for-test-and-debug (commonly referred to as DFx) architectures over four generations of AMD designs. The paper covers different aspects of DFx, ranging from scan architecture, to control (centralized, modular, hierarchical), to debug buses (asynchronous/synchronous, source-synchronous). The key points are that a DFx methodology must be physical-design friendly, must account for the high clock frequencies needed to acquire and dump trace data, and must be aware of power-saving features such as clock and power gating. In the last paper of this special section, entitled "Deriving feature fail rate from silicon volume diagnostics data," Malik et al. address the challenge of identifying layout geometries that lead to systematic yield loss. As the subwavelength lithography gap continues to widen, this class of defect is becoming an increasingly dominant source of failures. Design-for-manufacturability (DFM) tools can identify potential weaknesses in a design, but it remains extremely difficult to assess which DFM features will actually cause yield loss. The authors of this paper present a methodology to quantify the