{"title":"The Most Important DFT Tool","authors":"S. Davidson","doi":"10.1109/MDAT.2013.2283589","DOIUrl":null,"url":null,"abstract":"h I WENT TO a lot of talks on system test at the recent 2013 International Test Conference. Excellent progress is being made on new standards, some of which are discussed in the special section of this issue of Design & Test. However, there don’t seem to have been many fundamental changes. The quality of system test is still hard to measure. One speaker mentioned tracking quality by collecting the number of field failures. This reminded me of the early days of IC test, when functional test writers measured their coverage this way. This strategy had some big flaws. First, we can seldom collect all field fails, and the ones we do get are usually ‘‘no trouble found,’’ so it is hard to tell if the fail was the result of a test escape or misdiagnosis. But a bigger problem was that the time between test writing and a failure is so long that it is usually too late to either improve the test or learn from it. System test is even worse since the time between a factory test and product installation is even longer than that between IC test and board test, products are spread all over the world, and diagnosis is even harder than for IC fails. For ICs, this problem got solved when we began fault-simulating functional tests. We received an immediate estimate of test quality. Test writers soon found that their tests were nowhere near as good as they imagined. More importantly, management got a single number that was easy to understand. When this value was too far away from 100% the product team was told to improve it. Scan and other forms of DFT became a lot more attractiveVespecially when the alternative was spending long nights improving coverage by hand. This is why I maintain that the fault simulator is the most important DFT tool. Without a faultcoverage number, it would be hard to motivate designers to add DFT, and thus make ATPG and BIST possible. So all we have to do to improve system test is to start to fault simulate it. The need to improve coverage will drive innovations in systemlevel DFT and in automating test generation. It might take 10 or 20 years, but the problem will be solved. ‘‘But wait,’’ I hear the cries, ‘‘there is no fault model for system test. How do we do fault simulation?’’ We did have a fault model, the stuck-at fault, for IC test. But it modeled defects which seldom occurred. Its benefit was to force people to generate a larger number of more diverse patterns, patterns which did detect the defects. ATPG can be considered a weighted random pattern test generation; random because it does not target real defects, and weighted to detect stuck-at faults. If we can simulate a system, we can use software fault-insertion methods to insert many faults. A lot of excellent work has been done using these for electronic test. If we insert too few, we’ll think we are done before covering everything. It will require different ways of speeding up high-level fault simulation, but it can be done. DFT and test automation will follow naturallyVfault simulation, after all, is the most important DFT tool. 
h","PeriodicalId":50392,"journal":{"name":"IEEE Design & Test of Computers","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2013-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/MDAT.2013.2283589","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Design & Test of Computers","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MDAT.2013.2283589","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
I went to a lot of talks on system test at the recent 2013 International Test Conference. Excellent progress is being made on new standards, some of which are discussed in the special section of this issue of Design & Test. However, there don't seem to have been many fundamental changes. The quality of system test is still hard to measure. One speaker mentioned tracking quality by collecting the number of field failures. This reminded me of the early days of IC test, when functional test writers measured their coverage this way. This strategy had some big flaws. First, we can seldom collect all field fails, and the ones we do get are usually "no trouble found," so it is hard to tell whether the fail was the result of a test escape or a misdiagnosis. A bigger problem was that the time between test writing and a failure is so long that it is usually too late to either improve the test or learn from it. System test is even worse: the time between a factory test and product installation is even longer than that between IC test and board test, products are spread all over the world, and diagnosis is even harder than for IC fails.

For ICs, this problem got solved when we began fault-simulating functional tests. We received an immediate estimate of test quality. Test writers soon found that their tests were nowhere near as good as they had imagined. More importantly, management got a single number that was easy to understand. When this value fell too far below 100%, the product team was told to improve it. Scan and other forms of DFT became a lot more attractive, especially when the alternative was spending long nights improving coverage by hand.

This is why I maintain that the fault simulator is the most important DFT tool. Without a fault-coverage number, it would be hard to motivate designers to add DFT, and thus make ATPG and BIST possible. So all we have to do to improve system test is to start fault-simulating it. The need to improve coverage will drive innovations in system-level DFT and in automating test generation. It might take 10 or 20 years, but the problem will be solved.

"But wait," I hear the cries, "there is no fault model for system test. How do we do fault simulation?" We did have a fault model, the stuck-at fault, for IC test. But it modeled defects that seldom occurred. Its benefit was to force people to generate a larger number of more diverse patterns, patterns which did detect the defects. ATPG can be considered a form of weighted random pattern test generation: random because it does not target real defects, and weighted to detect stuck-at faults.

If we can simulate a system, we can use software fault-insertion methods to insert many faults. A lot of excellent work has been done using these methods for electronic test. If we insert too few faults, we will think we are done before covering everything. Inserting enough of them will require new ways of speeding up high-level fault simulation, but it can be done. DFT and test automation will follow naturally; fault simulation, after all, is the most important DFT tool.
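To make the closing idea concrete, here is a minimal sketch of software fault insertion producing the kind of single coverage number the column argues for. This is my illustration, not anything from the column: the "system" is a hypothetical checksum routine, each named fault is an invented perturbation of its behavior, and coverage is simply the fraction of inserted faults that at least one test detects.

```python
# A minimal, hypothetical sketch of software fault insertion for a
# simulated "system". None of these names come from the column; the
# system here is a trivial additive checksum standing in for a real
# system model.

def checksum(data):
    """The fault-free system under test: a simple additive checksum."""
    return sum(data) % 256

# Fault-insertion wrappers: each returns a faulty variant of the system.
# These faults are invented for illustration.
FAULTS = {
    "drop_first_byte": lambda data: checksum(data[1:]),
    "stuck_at_zero":   lambda data: 0,
    "off_by_one":      lambda data: (checksum(data) + 1) % 256,
    "ignore_overflow": lambda data: sum(data),  # missing the mod 256
}

# The test set: input stimuli whose fault-free responses we can predict.
TESTS = [
    [0, 0, 0, 0],
    [1, 2, 3, 4],
    [255, 255],
    [200, 100],
]

def fault_coverage(tests):
    """Run every test against every faulty variant; a fault counts as
    detected when any test's faulty response differs from the fault-free
    response. Returns detected / total inserted faults."""
    detected = set()
    for name, faulty in FAULTS.items():
        for data in tests:
            if faulty(data) != checksum(data):
                detected.add(name)
                break
    return len(detected) / len(FAULTS)

print(f"fault coverage: {fault_coverage(TESTS):.0%}")
```

With too few or too-easy faults the number saturates early, which is exactly the "we'll think we are done" trap warned about above; a richer fault list and faster high-level simulation are what keep the number honest.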