{"title":"ICCAD roundtable the many challenges of triple patterning [ICCAD Roundtable]","authors":"W. Joyner, J. Kawa, L. Liebmann, D. Pan, Martin D. F. Wong, David Yeh","doi":"10.1109/MDAT.2014.2337471","DOIUrl":"https://doi.org/10.1109/MDAT.2014.2337471","url":null,"abstract":"","PeriodicalId":50392,"journal":{"name":"IEEE Design & Test of Computers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/MDAT.2014.2337471","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62451442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"“Our Original 3D Idea Still Has To Happen; And It Will”","authors":"E. Marinissen","doi":"10.1109/MDAT.2013.2286547","DOIUrl":"https://doi.org/10.1109/MDAT.2013.2286547","url":null,"abstract":"Ivo Bolsens is senior vice president (SVP) and chief technology officer (CTO) at Xilinx, a leading supplier of field-programmable gate arrays (FPGAs). At Xilinx, he is responsible for advanced technology development, as well as the company's research laboratories and university program. Design & Test's Erik Jan Marinissen met with Dr. Bolsens in November 2012 during the IEEE International Test Conference in Anaheim, CA, USA, where Bolsens was the opening keynote speaker at the co-located 3D-TEST Workshop. Shortly before his keynote talk, this interview took place over lunch at a Disneyland restaurant, covering Bolsens' career from IMEC to Xilinx, current and future FPGAs, the differences between research and company environments, and Xilinx's recent 3D-FPGA product.","PeriodicalId":50392,"journal":{"name":"IEEE Design & Test of Computers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/MDAT.2013.2286547","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62451195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Look at Variability and Aging","authors":"A. Ivanov","doi":"10.1109/MDAT.2014.2298356","DOIUrl":"https://doi.org/10.1109/MDAT.2014.2298356","url":null,"abstract":"THE NEED FOR more densely packed, faster, and more energy-efficient devices has forced significant evolution in device architectures in recent years. A challenging byproduct of these trends is increased variability in the performance parameters of manufactured devices, together with increased electrical stress imposed on devices during field use. In turn, increased electrical stress causes further growth in performance variability over time. In short, modern IC designers face two major obstacles: variability and aging. The focus of this special issue is to highlight the need to mitigate the impacts of variability and aging across the many stages of IC and integrated-system design. Developing innovative yet feasible solutions to these problems is an urgent concern for future computing systems surging forward on the cusp of innovation. This month we bring you a collection of articles that thoroughly examine how the impacts of variability and aging are seen by experts and how they can be dealt with. To begin this special-themed issue, a paper by Bowman et al. discusses the effects of variability on microprocessor performance through an analysis of error-detection and recovery circuits, among other circuit types and monitors. An article by Wang et al. then provides a detailed look at the variability and reliability of 6T-SRAM memory systems, showing the criticality of variability effects on SRAM in computing systems. In our third article, Gupta and Roy continue the SRAM focus, specifically regarding FinFET technology, with a cost-benefit analysis using specific device and circuit co-design methods. Next, Chen et al. shed light on mitigation strategies to offset NBTI effects in current circuit-optimization methods. We follow this with a paper by Stott et al. that looks at the impacts of variability and aging in FPGAs; this article proposes an adaptive system that can reconfigure its own architecture to counter variability and aging effects. A sixth entry, by Debashi and Fey, presents the capabilities of Boolean satisfiability in automating speedpath debugging under timing variations. We have also included, in this final issue of 2013, three general-interest articles that step away from our detailed look at variability and aging. The first of the three is an examination by Villacorta et al. of a dominant failure mechanism in nanometer technology: open defects in vias. Results of risk and reliability analyses show that new electromigration design rules are needed in light of resistive vias. The following article, provided by Laraba et al. from the TIMA Laboratory in Grenoble, demonstrates a low-cost digital monitoring alternative in reduced-code testing. The last featured article is a contribution from Sayil et al. on transient noise effects caused by single-event particles. The authors focus on coupling-induced soft-error mechanisms in combinational logic. We conclude this issue with “The Last Byte” by Scott Dav","PeriodicalId":50392,"journal":{"name":"IEEE Design & Test of Computers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/MDAT.2014.2298356","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62451862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Guest Editors' Introduction: Special Issue on Variability and Aging","authors":"A. Rubio, Antonio González","doi":"10.1109/MDAT.2013.2297040","DOIUrl":"https://doi.org/10.1109/MDAT.2013.2297040","url":null,"abstract":"The articles in this special section focus on new technological innovations in EDA design. The constant evolution of electronic systems has been fueled by the continuous and tremendous progress of silicon manufacturing technology. Since 1960, when the first MOS transistor was manufactured with dimensions around 50 μm, process technology has advanced continuously to the current 22-nm MOS technology. Every two years a new process generation roughly doubles device density, following what is known as Moore's law. Moreover, each new generation offers faster devices that consume less energy per operation. This has put more powerful and energy-efficient building blocks in the hands of architects, on top of which they have designed more effective architectures with ever-increasing capabilities. Silicon MOSFETs have been the workhorse devices of information technology throughout these decades. However, these technology advances must contend with major challenges arising from the physical limitations of the underlying transistors, which suffer severe manufacturing-process parameter variability as well as aging caused by electrical degradation of materials under the intense electrical stress of operation.","PeriodicalId":50392,"journal":{"name":"IEEE Design & Test of Computers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/MDAT.2013.2297040","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62451852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Guest Editors' Introduction: Silicon Debug and Diagnosis","authors":"N. Nicolici, B. Benware","doi":"10.1109/MDAT.2013.2279324","DOIUrl":"https://doi.org/10.1109/MDAT.2013.2279324","url":null,"abstract":"TROUBLESHOOTING HOW AND why circuits and systems fail is important and is rapidly growing in industry significance. Debug and diagnosis may be needed for yield improvement, process monitoring, correcting the design function, failure-mode learning for research and development, or just getting a working first prototype. This detective work is, however, very tricky. Sources of difficulty include circuit and system complexity, packaging, limited physical access, shortened product-creation cycles, and time to market. New and efficient solutions for debug and diagnosis have a much-needed and highly visible impact on productivity. This special section of IEEE Design & Test includes extended versions of the three best contributions presented at the Silicon Debug and Diagnosis (SDD) Workshop, held in Anaheim, CA, USA, in November 2012. It was the eighth in a series of highly successful technical workshops on the debug and diagnosis of semiconductor circuits and systems, from prototype bring-up to volume production. The first paper, “Linking the verification and validation of complex integrated circuits through shared coverage metrics” by Hung et al., discusses how to bridge pre-implementation (commonly also referred to as “pre-silicon”) verification to post-implementation validation in an emulation environment. Considering the inherent flexibility offered by field-programmable gate arrays (FPGAs), the authors discuss how embedded instrumentation can aid data acquisition and coverage measurement in FPGA designs. The evolution of FPGA trace-collection methods is elaborated, showing how recent tools allow a set of predetermined cover points to be observed without requiring recompilation. Further, recent research aims to enable any cover point to be measured in FPGA prototypes. In the second paper, entitled “Evolution of graphics Northbridge test and debug architectures across four generations of AMD ASICs,” Margulis et al. present the evolution of design-for-test-and-debug (commonly referred to as DFx) architectures over four generations of AMD designs. The paper covers different aspects of DFx, ranging from scan architecture to control (centralized, modular, hierarchical) to debug buses (asynchronous/synchronous, source-synchronous). The key points are that a DFx methodology must be physical-design friendly, account for the high clock frequencies needed to acquire and dump trace data, and be aware of power-saving features such as clock and power gating. In the last paper of this special section, entitled “Deriving feature fail rate from silicon volume diagnostics data,” Malik et al. address the challenge of identifying layout geometries that lead to systematic yield loss. As the subwavelength lithography gap continues to widen, this class of defect is becoming an increasingly dominant source of failures. With design-for-manufacturability (DFM) tools, it is possible to identify pot","PeriodicalId":50392,"journal":{"name":"IEEE Design & Test of Computers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/MDAT.2013.2279324","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62450976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Most Important DFT Tool","authors":"S. Davidson","doi":"10.1109/MDAT.2013.2283589","DOIUrl":"https://doi.org/10.1109/MDAT.2013.2283589","url":null,"abstract":"I WENT TO a lot of talks on system test at the recent 2013 International Test Conference. Excellent progress is being made on new standards, some of which are discussed in the special section of this issue of Design & Test. However, there don't seem to have been many fundamental changes. The quality of system test is still hard to measure. One speaker mentioned tracking quality by collecting the number of field failures. This reminded me of the early days of IC test, when functional test writers measured their coverage this way. This strategy had some big flaws. First, we can seldom collect all field fails, and the ones we do get are usually “no trouble found,” so it is hard to tell whether the fail was the result of a test escape or a misdiagnosis. But a bigger problem was that the time between test writing and a failure is so long that it is usually too late either to improve the test or to learn from it. System test is even worse, since the time between a factory test and product installation is even longer than that between IC test and board test, products are spread all over the world, and diagnosis is even harder than for IC fails. For ICs, this problem got solved when we began fault-simulating functional tests. We received an immediate estimate of test quality. Test writers soon found that their tests were nowhere near as good as they imagined. More importantly, management got a single number that was easy to understand. When this value was too far from 100%, the product team was told to improve it. Scan and other forms of DFT became a lot more attractive, especially when the alternative was spending long nights improving coverage by hand. This is why I maintain that the fault simulator is the most important DFT tool. Without a fault-coverage number, it would be hard to motivate designers to add DFT, and thus make ATPG and BIST possible. So all we have to do to improve system test is to start fault-simulating it. The need to improve coverage will drive innovations in system-level DFT and in automating test generation. It might take 10 or 20 years, but the problem will be solved. “But wait,” I hear the cries, “there is no fault model for system test. How do we do fault simulation?” We did have a fault model, the stuck-at fault, for IC test. But it modeled defects that seldom occurred. Its benefit was to force people to generate a larger number of more diverse patterns, patterns that did detect the defects. ATPG can be considered weighted random-pattern test generation: random because it does not target real defects, and weighted to detect stuck-at faults. If we can simulate a system, we can use software fault-insertion methods to insert many faults. A lot of excellent work has been done using these for electronic test. If we insert too few, we'll think we are done before covering everything. It will require different ways of speeding up high-level fault simulation, but it can be done. DFT and test automation will follow naturally; fault s","PeriodicalId":50392,"journal":{"name":"IEEE Design & Test of Computers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/MDAT.2013.2283589","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62451051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Privacy Through Obscurity","authors":"S. Davidson","doi":"10.1109/MDAT.2013.2283595","DOIUrl":"https://doi.org/10.1109/MDAT.2013.2283595","url":null,"abstract":"I RECENTLY READ an editorial in an electronics magazine about license-plate readers, devices used by police and government to scan license plates on cars, look them up in a database, and report whether the car is stolen or the owner is wanted by the law. Information about the location of cars whose plates are read is kept a long time, a potential privacy problem. Even plates of cars not involved in nefarious activities are scanned. Because license plates are displayed in public, it is perfectly legal to record them. This is not the only way in which we are being recorded. In the old days, if the police wanted to find out what happened at a particular location, they had to find witnesses. Today, police can also consult footage from the large number of surveillance cameras in the area. In England, these are owned by the government, but in the United States there seem to be just as many owned by businesses, not to mention the prevalence of cell-phone cameras. In Russia, many cars use dashboard-mounted cameras, which is how the recent meteorite event was captured. Even meteorite privacy is not safe. Anyone with the slightest involvement in computer security knows that “security through obscurity” is one of the worst policies to follow. This policy tries to keep security holes secret and hopes that no one finds out. This might have worked when access to computers was controlled by a small set of professionals, but today even the slightest flaw will be broadcast around the world as fast as a video of a cute kitten. Those of us well out of college grew up in a time of what we can call “privacy through obscurity.” Perhaps people could read your license plate, but unless your car was very suspicious and you were unlucky, it was unlikely that anyone would record it or even notice it. Unless you were famous, no one but friends would take your picture. Politicians and Hollywood stars learned to live with constant exposure and loss of privacy, but at least they were well compensated. One unexpected side effect of the work of engineers and computer scientists is that we are all Hollywood stars. But we don't make the big bucks. Technology has made it possible for our public presence to be recorded and stored. Today, at least, a person has to watch the videos to see if you are in them; work is being done on automating this as well. Our privacy through obscurity is no more. Gordon Bell has a project of recording his entire life. Today, we are all Gordon Bell. I've often wondered when he'd have time to look at it all. However, I can imagine software that could look through streams of video and other information and go right to the moments you want to relive, or the moments some observer wants to look at more closely. We tell our kids to be careful of their online presences, because someone might be watching. Perhaps they are well ahead of us. Someone will always be watching, in real life as well as online, and our kids are just getting ready for a world of li","PeriodicalId":50392,"journal":{"name":"IEEE Design & Test of Computers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/MDAT.2013.2283595","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62451094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Look at IEEE P1687 Internal JTAG (IJTAG)","authors":"A. Ivanov","doi":"10.1109/MDAT.2013.2283590","DOIUrl":"https://doi.org/10.1109/MDAT.2013.2283590","url":null,"abstract":"IN LIGHT OF the recent introduction of IEEE's P1687 Internal JTAG (IJTAG) standard, the first half of this special issue is dedicated to familiarizing our readers with some of the experiences and drawbacks, implementations and troubleshooting challenges, and other economic and technical implications that the proposed standard raises. Such an evolution from older established technologies and practices necessitates that we draw on relevant work by industry professionals and high-profile researchers to help bring our readers up to speed. This is what we hope to have accomplished here. In true Design & Test form, we have also included some variety at the back end of the issue, rounding out the P1687 articles with an international vantage on different approaches to addressing faults and defects at different levels of abstraction and integration, including the threats posed by hardware Trojans. We begin our issue on the topic of IJTAG with an article by authors from industry and academia in Texas and California who provide a new perspective on FPGA-based testers in light of IJTAG. They show that embedded FPGA testers must be reconfigured for each IC or board design, which makes it difficult to create standardized observation systems. The authors propose a methodology that effectively allows embedded vectors to be retargeted automatically to the TAP port of the FPGA. Ultimately, they show how to create a standardized Command, Control, and Observation system. Second, we present a paper from the Alcatel-Lucent Labs in Villarceaux, France, and New Jersey, which outlines some shortcomings in traditional vector-level control of dynamic IJTAG operations. The authors introduce state-machine-level control, which forms a more comprehensive solution allowing greater potential usage of IJTAG. Our next article, compiled by researchers in Tallinn, Estonia, proposes a new instrumentation infrastructure, based on IJTAG, that supports fault management by automatically collecting and delivering detection information to operating systems. The authors specifically demonstrate the efficiency of this infrastructure. Next, we take a breather from the deeply technical side of things in favor of viewing the effects of the latest IJTAG on the fiscal element of our industry. Martin Keim, from Mentor Graphics, explores three industry scenarios, showing the cost of adopting IJTAG versus the cost of sticking with older, traditional systems. Then we step out of the IJTAG box and begin the latter half of the issue with an analysis of the clustering of defects on silicon wafers. A group of authors from Southeast Asia present a new, flexible three-stage automation tool aimed at improving cluster analysis while also offering sufficient device-specific customization to accommodate a wide variety of product types. Next, researchers from Huazhong University, China, reiterate concerns about the security of high-capacity storage of critical sensitive informat","PeriodicalId":50392,"journal":{"name":"IEEE Design & Test of Computers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/MDAT.2013.2283590","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62451080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A look at silicon debug and diagnosis [From the EIC]","authors":"A. Ivanov","doi":"10.1109/MDAT.2013.2283588","DOIUrl":"https://doi.org/10.1109/MDAT.2013.2283588","url":null,"abstract":"The Editor-in-Chief is pleased to present this July-August issue, which brings you a combination of articles covering a rich set of academic and industrial work addressing state-of-the-art issues in the design, verification, and test of different IC-based systems represented at different levels of abstraction. The figure of merit for the different approaches varies, depending on the specific case, but is generally some variation of cost, performance, quality, yield, and feasibility. An overview of each of the technical articles and features is presented.","PeriodicalId":50392,"journal":{"name":"IEEE Design & Test of Computers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/MDAT.2013.2283588","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62451039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A look at trusted SoC with untrusted components","authors":"A. Ivanov","doi":"10.1109/MDAT.2013.2258103","DOIUrl":"https://doi.org/10.1109/MDAT.2013.2258103","url":null,"abstract":"","PeriodicalId":50392,"journal":{"name":"IEEE Design & Test of Computers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/MDAT.2013.2258103","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62450717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}