{"title":"RNA: Advanced phase tracking method for digital waveform reconstruction","authors":"Takashi Ito, H. Okawara, Jinlei Liu","doi":"10.1109/TEST.2012.6401592","DOIUrl":"https://doi.org/10.1109/TEST.2012.6401592","url":null,"abstract":"This paper describes how to measure an eye diagram by using ATE (Automated Test Equipment) digital channel and to do a correlation with an oscilloscope for over-Giga-bps class high speed digital interfaces. The novel method named RNA (Recovering aNAlysis) is a post processing method to perform “phase tracking” in the eye diagram measurement on an ATE which does not have a CDR (Clock Data Recovery) hardware integrated. The RNA is an enhancement of the method named DNA (Data aNAlysis) that constructs an eye diagram by coherent waveform reconstruction with utilizing an ATE digital channel. The DNA is an elegant method to reconstruct digital signal waveform and its eye diagram; however it is not immune to slow jitter or wander of signal. In recent high speed digital interfaces, jitter tolerance is very critical so that the DNA is insufficient for coping with test devices containing slow jitter. The RNA is an enhanced DNA by implementing software quasi-CDR described in this paper. It significantly expands the application coverage. The eye diagram processed by this method is improved and good for parametric measurement such as rise time/fall time tests. Especially, it allows to do an easy correlation to the eye measured by an oscilloscope. 
Therefore the transition from bench systems to ATE for production test becomes smooth and efficient.","PeriodicalId":353290,"journal":{"name":"2012 IEEE International Test Conference","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126468028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scan test of die logic in 3D ICs using TSV probing","authors":"Brandon Noia, Shreepad Panth, K. Chakrabarty, S. Lim","doi":"10.1109/TEST.2012.6401568","DOIUrl":"https://doi.org/10.1109/TEST.2012.6401568","url":null,"abstract":"Pre-bond testing of TSVs and die logic is a significant challenge and a potential roadblock for 3D integration. BIST solutions introduce considerable die area overhead. Oversized probe pads on TSVs to provide pre-bond test access limit both test bandwidth and TSV density. This paper presents a solution to these problems, allowing a probe card to contact TSVs without the need for probe pads, enabling both TSV and pre-bond scan test. Two possible pre-bond scan test configurations are shown - they provide varying degrees of test parallelism. HSPICE simulations are performed on a logic-on-logic 3D benchmark. Results show that the ratio of the number of probe needles available for test access to the number of pre-bond scan chains determines which pre-bond scan configuration results in the shortest test time. Maximum pre-bond scan-in and scan-out shift-clock speeds are determined for dies in a benchmark 3D design. These clock speeds show that pre-bond scan test can be performed quickly, at a speed that is comparable to scan testing of packaged dies. The maximum clock speed can also be tuned by changing the drive strength of the probe and on-die drivers of the TSV network. Estimates are also provided for peak and average power consumption during pre-bond scan test. 
On-die area overhead for the proposed method is estimated to be between 1.0% and 2.2% for three dies in the 3D stack.","PeriodicalId":353290,"journal":{"name":"2012 IEEE International Test Conference","volume":"34 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132056718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Low power programmable PRPG with enhanced fault coverage gradient","authors":"J. Solecki, J. Tyszer, Grzegorz Mrugalski, N. Mukherjee, J. Rajski","doi":"10.1109/TEST.2012.6401559","DOIUrl":"https://doi.org/10.1109/TEST.2012.6401559","url":null,"abstract":"This paper describes a low power programmable generator capable of producing pseudorandom test patterns with desired toggling levels and enhanced fault coverage gradient compared to best-to-date BIST-based PRPGs. We introduce a method to automatically select several controls of the generator allowing easy and precise tuning. The same technique is subsequently employed to deterministically guide the generator toward test sequences with improved fault-coverage-to-pattern-count ratios. Experimental results obtained for industrial designs illustrate feasibility of the proposed test scheme and are reported herein.","PeriodicalId":353290,"journal":{"name":"2012 IEEE International Test Conference","volume":"52 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132433221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"In-system constrained-random stimuli generation for post-silicon validation","authors":"A. Kinsman, Ho Fai Ko, N. Nicolici","doi":"10.1109/TEST.2012.6401541","DOIUrl":"https://doi.org/10.1109/TEST.2012.6401541","url":null,"abstract":"When generating the verification stimuli in a pre-silicon environment, the primary objectives are to reduce the simulation time and the pattern count for achieving the target coverage goals. In a hardware environment, because an increase in the number of stimuli is inherently compensated by the advantage of real-time execution, the objective augments to considering hardware complexity when designing in-system stimuli generators that must operate according to user-programmable constraints. In this paper we introduce a structured methodology for porting in-system the constrained-random stimuli generation aspect from a pre-silicon verification environment.","PeriodicalId":353290,"journal":{"name":"2012 IEEE International Test Conference","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131108403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FPGA-based synthetic instrumentation for board test","authors":"I. Aleksejev, A. Jutman, S. Devadze, S. Odintsov, T. Wenzel","doi":"10.1109/TEST.2012.6401571","DOIUrl":"https://doi.org/10.1109/TEST.2012.6401571","url":null,"abstract":"This paper studies a new approach for board-level test based on synthesizable embedded instruments implemented on FPGA. This very recent methodology utilizes programmable logic devices (FPGA) that are usually available on modern PCBs to a large extent. The purpose of an embedded instrument is to carry out a vast portion of test application related procedures, perform measurement and configuration of system components thus minimizing the usage of external test equipment. By replacing traditional test and measurement equipment with embedded synthetic instruments it is possible not only to achieve the significant reduction of test costs but also facilitate high-speed and at-speed testing. We detail the motivation and classify the FPGA-based instrumentation into different categories based on the implementation and application domains. Experimental results show the efficiency of this approach.","PeriodicalId":353290,"journal":{"name":"2012 IEEE International Test Conference","volume":"43 3-4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132811882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An ATE architecture for implementing very high efficiency concurrent testing","authors":"Takahiro Nakajima, Takeshi Yaguchi, Hajime Sugimura","doi":"10.1109/TEST.2012.6401551","DOIUrl":"https://doi.org/10.1109/TEST.2012.6401551","url":null,"abstract":"With the spread of SOC and SIP devices, the independence of IP core operations inside devices have recently been increasing, and there has been growing demand for concurrent testing. In this paper, we propose an Automatic Test Equipment (ATE) architecture that implements concurrent testing with true parallel execution. This architecture makes concurrent testing easy to develop and achieves very high concurrent efficiency. It also exhibits very high multi-site efficiency when used in combination with multi-site testing. It is therefore expected to substantially reduce the Cost of Test (CoT). To confirm these effects, we present experimental results using four mixed-signal devices in both multi-site testing and concurrent testing. We also discuss some applications of the proposed scheme.","PeriodicalId":353290,"journal":{"name":"2012 IEEE International Test Conference","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133511154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Experiences with non-intrusive sensors for RF built-in test","authors":"L. Abdallah, H. Stratigopoulos, S. Mir, C. Kelma","doi":"10.1109/TEST.2012.6401587","DOIUrl":"https://doi.org/10.1109/TEST.2012.6401587","url":null,"abstract":"This paper discusses a new type of sensors to enable a built-in test in RF circuits. The proposed sensors provide DC or low-frequency measurements, thus they can reduce drastically the testing cost. Their key characteristic is that they are nonintrusive, e.g. they are not connected electrically to the RF circuit. Thus, the performances of the RF circuit are unaffected by the monitoring operation. The sensors function as process monitors and share the same environment with the RF circuit. The underlying principle is that the sensors and the RF circuit are subject to the same process variations, thus shifts in the performances of the RF circuit can be inferred implicitly by shifts in the outputs of the sensors. We present experimental results on fabricated samples that include an LNA with embedded sensors. The samples are collected from different sites of a wafer such that they exhibit process variations. We demonstrate that the performances of the RF circuit can be predicted with sufficient accuracy through the sensors by employing the alternate test paradigm.","PeriodicalId":353290,"journal":{"name":"2012 IEEE International Test Conference","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114803283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An experiment of burn-in time reduction based on parametric test analysis","authors":"N. Sumikawa, Li-C. Wang, M. Abadir","doi":"10.1109/TEST.2012.6401595","DOIUrl":"https://doi.org/10.1109/TEST.2012.6401595","url":null,"abstract":"Burn-in is a common test approach to screen out unreliable parts. The cost of burn-in can be significant due to long burn-in periods and expensive equipment. This work studies the potential of using parametric test data to reduce the time of burn-in. The experiment focuses on developing parametric test models based on test data collected after 10 hours of burn-in to predict parts likely-to-fail after 24 and 48 hours of burn-in. Our study shows that 24-hour and 48-hour burn-in failures behave abnormally in multivariate parametric test spaces after 10 hours of burn-in. Hence, it is possible to develop multivariate test models to identify these likely-to-fail parts early in a burn-in cycle. This study is carried out on 8 lots of test data from a burn-in experiment based on a 3-axis accelerometer design. The study shows that after 10 hours of burn-in, it is possible to identify a large portion of all parts that do not require longer burn-in time, potentially providing significant cost saving.","PeriodicalId":353290,"journal":{"name":"2012 IEEE International Test Conference","volume":"241 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116144972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improved volume diagnosis throughput using dynamic design partitioning","authors":"Xiaoxin Fan, Huaxing Tang, Yu Huang, Wu-Tung Cheng, S. Reddy, B. Benware","doi":"10.1109/TEST.2012.6401564","DOIUrl":"https://doi.org/10.1109/TEST.2012.6401564","url":null,"abstract":"A method based on dynamic design partition is presented to increase the throughput of volume diagnosis by increasing the number of failing dies diagnosed within a given time T using given constrained computational resources C. Recently we proposed a static design partitioning method to reduce the diagnosis memory footprint for large designs [1] to achieve this objective. The method in [1] is applied once for each design without using the information of test patterns and failure files, and then diagnosis is performed on an appropriate block(s) of the design partition for a failure file. Even though the memory footprint of diagnosis is reduced the diagnosis quality is impacted to unacceptable levels for some types of defects such as bridges. In this paper, we propose a new failure dependent design partitioning method to improve volume diagnosis throughput with a minimal impact on diagnosis quality. For each failure file, the proposed method first determines the small partition needed to diagnose this failure, and then performs the diagnosis on this partition instead of the complete design. Since the partition is far smaller, both the run time and the memory usage of diagnosis can be significantly reduced better than when earlier proposed static partition is used. Extensive experiments were conducted on several large industrial designs to validate the proposed method. It has been observed that the typical partition size for various defects is less than 3% of the size of the original design. Also diagnosis runs much faster (>;2X) on the partition. 
Combining these two factors, the throughput of volume diagnosis can be improved by an order of magnitude.","PeriodicalId":353290,"journal":{"name":"2012 IEEE International Test Conference","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123149075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A design flow to maximize yield/area of physical devices via redundancy","authors":"M. Mirza-Aghatabar, M. Breuer, S. Gupta","doi":"10.1109/TEST.2012.6401582","DOIUrl":"https://doi.org/10.1109/TEST.2012.6401582","url":null,"abstract":"This paper deals with using redundancy to maximize the number of “workable” die one can produce from a silicon wafer. When redundant modules are used to enhance yield, several issues need to be addressed, such as power, performance degradation, testability, area, and partitioning the original logic design into modules. The focus of this paper is on the long ignored issue of partitioning and clustering to form modules that are to be replicated. For this purpose we propose a design flow with two phases. The first phase consists of a partitioning process that generates all combinational logic blocks (CLBs) of a given logic circuit. CLB partitioning addresses design and test constraints such as timing closure and testing complexity, by using redundancy at finer levels of granularity. In the second phase we carry out an overall optimization of the generated CLBs to find the optimal level of granularity for replication to maximize yield/area. Using a real design (OpenSPARC T2) and defect densities projected in the near future, the experimental results show that the output of our design flow outperforms the traditional redundant design with spare core, e.g. 
we achieved 1.1 to 13.3 times better yield/area as a function of defect density.","PeriodicalId":353290,"journal":{"name":"2012 IEEE International Test Conference","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125619120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}