{"title":"超过45nm节点的技术建模和表征","authors":"S. Nassif","doi":"10.1109/ASPDAC.2008.4483944","DOIUrl":null,"url":null,"abstract":"The semiconductor industry is unique in that it produces products with little or no prototyping! While a car company will build (and crash) many prototypes before converging on a final design, integrated circuits are built almost entirely on a basis of computer predictions. These predictions are based on models of performance based on simulation performed at multiple hierarchical levels, but always rooted in the end in classical circuit simulation using tools like the venerable Spice [1]. But as we continue to scale technology further, we observe a diminishing rate of performance return which is in turn causing a spiral of increasing manufacturing process complexity in an attempt to maintain performance per historical trends. This increase in technology complexity is introducing a number of systematic (i.e. design dependent) sources of design variability which demand modeling and characterization resources. At the same time, we are entering a regime where the averaging effect of the law of large numbers is becoming weaker, resulting in an increase in influence of fundamental atomistic variations. Phenomena like channel dopant fluctuations [2] and line-edge roughness [3] are creating a random variability noise floor which is difficult to get around without significant process impact. The result of the increase in these, and other sources of variability is a corresponding increase in important circuit phenomena like SRAM stability and leakage power variations. The net result is a gradual breakdown of the traditional \";device model + design rule\"; contact between design and manufacturing, and a corresponding lack of predictability in fabrication outcome that is endangering the profitability of Silicon semiconductor manufacturing as we enter what may be the last handful of generations of CMOS. This lack of predictability is happening because of two important factors. ldr The overall CMOS technology slowdown has led to rapidly increasing complexity in the process and in its interaction with design. This has in turn caused an increase in the number and magnitude of systematic sources of mismatch between simulation models (both at the circuit simulation and timing levels) and hardware measurements. ldr Manufacturing variability, both systematic and random, -long a source of concern only for analog design- is becoming important for digital designs as well and thus its prediction is now a first order priority. However, it is competing for the attention of researchers and CAD developers with a host of other so-called nm effects, thus slowing down the delivery of needed solutions. The result is (a) our ability to arbitrarily compose a design out of disparate components is compromised because of a high degree of interaction between these components , and (b) our ability to predict the nominal performance of a design as well as its tolerances and sensitivities is in danger. In this talk, we will review these issues and show how they are all related to the core issue of model to hardware matching. 
We will also show examples of potential solutions to this problem some of which are currently being developed in IBM, and some which are longer term and would benefit greatly from the attention of the academic community.","PeriodicalId":277556,"journal":{"name":"2008 Asia and South Pacific Design Automation Conference","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"Technology modeling and characterization beyond the 45nm node\",\"authors\":\"S. Nassif\",\"doi\":\"10.1109/ASPDAC.2008.4483944\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The semiconductor industry is unique in that it produces products with little or no prototyping! While a car company will build (and crash) many prototypes before converging on a final design, integrated circuits are built almost entirely on a basis of computer predictions. These predictions are based on models of performance based on simulation performed at multiple hierarchical levels, but always rooted in the end in classical circuit simulation using tools like the venerable Spice [1]. But as we continue to scale technology further, we observe a diminishing rate of performance return which is in turn causing a spiral of increasing manufacturing process complexity in an attempt to maintain performance per historical trends. This increase in technology complexity is introducing a number of systematic (i.e. design dependent) sources of design variability which demand modeling and characterization resources. At the same time, we are entering a regime where the averaging effect of the law of large numbers is becoming weaker, resulting in an increase in influence of fundamental atomistic variations. Phenomena like channel dopant fluctuations [2] and line-edge roughness [3] are creating a random variability noise floor which is difficult to get around without significant process impact. The result of the increase in these, and other sources of variability is a corresponding increase in important circuit phenomena like SRAM stability and leakage power variations. The net result is a gradual breakdown of the traditional \\\";device model + design rule\\\"; contact between design and manufacturing, and a corresponding lack of predictability in fabrication outcome that is endangering the profitability of Silicon semiconductor manufacturing as we enter what may be the last handful of generations of CMOS. This lack of predictability is happening because of two important factors. ldr The overall CMOS technology slowdown has led to rapidly increasing complexity in the process and in its interaction with design. This has in turn caused an increase in the number and magnitude of systematic sources of mismatch between simulation models (both at the circuit simulation and timing levels) and hardware measurements. ldr Manufacturing variability, both systematic and random, -long a source of concern only for analog design- is becoming important for digital designs as well and thus its prediction is now a first order priority. However, it is competing for the attention of researchers and CAD developers with a host of other so-called nm effects, thus slowing down the delivery of needed solutions. 
The result is (a) our ability to arbitrarily compose a design out of disparate components is compromised because of a high degree of interaction between these components , and (b) our ability to predict the nominal performance of a design as well as its tolerances and sensitivities is in danger. In this talk, we will review these issues and show how they are all related to the core issue of model to hardware matching. We will also show examples of potential solutions to this problem some of which are currently being developed in IBM, and some which are longer term and would benefit greatly from the attention of the academic community.\",\"PeriodicalId\":277556,\"journal\":{\"name\":\"2008 Asia and South Pacific Design Automation Conference\",\"volume\":\"4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2008-01-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2008 Asia and South Pacific Design Automation Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ASPDAC.2008.4483944\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2008 Asia and South Pacific Design Automation Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASPDAC.2008.4483944","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Technology modeling and characterization beyond the 45nm node
The semiconductor industry is unique in that it ships products with little or no prototyping! While a car company will build (and crash) many prototypes before converging on a final design, integrated circuits are built almost entirely on the basis of computer predictions. These predictions rest on performance models derived from simulation at multiple levels of the design hierarchy, but they are ultimately rooted in classical circuit simulation using tools like the venerable Spice [1].

As we continue to scale technology, however, we observe diminishing performance returns, which in turn drive a spiral of increasing manufacturing-process complexity in an attempt to keep performance on its historical trend. This increase in technology complexity is introducing a number of systematic (i.e. design-dependent) sources of design variability which demand modeling and characterization resources. At the same time, we are entering a regime where the averaging effect of the law of large numbers is becoming weaker, so that fundamental atomistic variations exert a growing influence. Phenomena like channel dopant fluctuations [2] and line-edge roughness [3] are creating a random-variability noise floor that is difficult to circumvent without significant process impact. The growth of these and other sources of variability causes a corresponding increase in important circuit-level effects such as SRAM stability loss and leakage-power variation. The net result is a gradual breakdown of the traditional "device model + design rule" contract between design and manufacturing, and a corresponding loss of predictability in fabrication outcomes that endangers the profitability of silicon semiconductor manufacturing as we enter what may be the last handful of CMOS generations. This loss of predictability stems from two important factors:

- The overall slowdown of CMOS technology has led to rapidly increasing complexity in the process and in its interaction with design. This has in turn increased the number and magnitude of systematic sources of mismatch between simulation models (at both the circuit-simulation and timing levels) and hardware measurements.
- Manufacturing variability, both systematic and random, long a source of concern only for analog design, is becoming important for digital designs as well, and its prediction is now a first-order priority. It is, however, competing for the attention of researchers and CAD developers with a host of other so-called nanometer effects, which slows the delivery of needed solutions.

The result is that (a) our ability to compose a design freely out of disparate components is compromised by the high degree of interaction between those components, and (b) our ability to predict the nominal performance of a design, as well as its tolerances and sensitivities, is in jeopardy. In this talk, we will review these issues and show how they all relate to the core issue of model-to-hardware matching. We will also show examples of potential solutions to this problem, some of which are currently being developed at IBM, and some of which are longer term and would benefit greatly from the attention of the academic community.
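To make the law-of-large-numbers point concrete, here is a minimal Monte Carlo sketch (our illustration, not part of the talk) of how random dopant fluctuation turns into leakage spread. The Pelgrom-style matching coefficient, nominal threshold voltage, subthreshold slope, and leakage prefactor are all assumed, uncalibrated values chosen only to show the trend: threshold-voltage sigma grows as device area shrinks, and the exponential Vt dependence of subthreshold leakage amplifies that Gaussian spread into a large, skewed leakage distribution.

```python
# Monte Carlo sketch of leakage spread induced by random dopant fluctuation.
# All parameters are illustrative assumptions, not from any real process.
import numpy as np

rng = np.random.default_rng(0)

A_VT = 2.5e-3          # assumed Pelgrom matching coefficient, V*um
VT_NOM = 0.30          # assumed nominal threshold voltage, V
N_KT_Q = 1.5 * 0.0259  # subthreshold slope factor n*kT/q at 300 K, V
I0 = 1e-7              # assumed leakage prefactor at Vt = VT_NOM, A

def leakage_spread(w_um, l_um, n_samples=100_000):
    """Return (mean, sigma/mean) of subthreshold leakage for a W x L device."""
    sigma_vt = A_VT / np.sqrt(w_um * l_um)         # Vt sigma ~ 1/sqrt(area)
    vt = rng.normal(VT_NOM, sigma_vt, n_samples)   # per-device Vt samples
    i_leak = I0 * np.exp(-(vt - VT_NOM) / N_KT_Q)  # exponential Vt sensitivity
    return i_leak.mean(), i_leak.std() / i_leak.mean()

for w, l in [(1.0, 1.0), (0.09, 0.045)]:  # a "large" device vs. a 45nm-class one
    mean, rel_sigma = leakage_spread(w, l)
    print(f"W={w}um L={l}um: mean leakage {mean:.3e} A, sigma/mean {rel_sigma:.2f}")
```

Under these assumptions the large device shows a relative leakage spread of a few percent, while the 45nm-class device's sigma/mean is of order one: the averaging that designers once relied on is gone, which is precisely the random-variability noise floor described above.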