{"title":"SPOT: Fast and Optimal Built-In Redundancy Analysis Using Smart Potential Case Collection","authors":"Donghyun Han;Sunghoon Kim;Dayoung Kim;Sungho Kang","doi":"10.1109/TVLSI.2024.3499955","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3499955","url":null,"abstract":"With advancements in manufacturing and design technology, memory integration density has improved. However, as integration density increases, the cost of testing and repairing memory has also risen, posing a significant challenge in memory production. To address this challenge, built-in self-repair (BISR) has been proposed. Traditional built-in redundancy analysis (BIRAs) performs limited analysis of faults during the fault collection process, resulting in a significant delay in generating a repair solution after the test sequence is completed. This inefficiency arises from the time required to repair the memory posttest. This article proposes a new fast and optimal BIRA using smart potential case collection. The proposed BIRA conducts a detailed analysis of detected faults during the test process. Using this novel fault collection results, a potential case is generated. This is a repair case that can repair the memory with a high probability and is generated immediately after the test sequence ends. If the memory cannot be repaired by the potential case, an exhaustive search is conducted for the faults requiring further analysis to generate an optimal repair solution. 
Compared to previous studies, the proposed BIRA demonstrates extremely low analysis time with an optimal repair rate.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 3","pages":"780-792"},"PeriodicalIF":2.8,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143489105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A 0.09-pJ/Bit Logic-Compatible Multiple-Time Programmable (MTP) Memory-Based PUF Design for IoT Applications","authors":"Shuming Guo;Yinyin Lin;Hao Wang;Yao Li;Chongyan Gu;Weiqiang Liu;Yijun Cui","doi":"10.1109/TVLSI.2024.3496735","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3496735","url":null,"abstract":"The Internet of Things (IoT) allows devices to interact for real-time data transfer and remote control. However, IoT hardware devices have been shown security vulnerabilities. Edge device authentications, as a crucial process for IoT systems, generate and use unique IDs for secure data transmissions. Conventional authentication techniques, computational and heavyweight, are challenging and infeasible in IoT due to limited resources in IoTs. Physical unclonable functions (PUFs), a lightweight hardware-based security primitive, were proposed for resource-constrained applications. We propose a new PUF design for resource-constrained IoT devices based on low-cost logic-compatible multiple-time programmable (MTP) memory cells. The structure includes an array of MTP differential memory cells and a PUF extraction circuit. The extraction method uses the random distribution of BL current after programming each memory cell in logic-compatible MTP memory as the entropy source of PUF. Responses are obtained by comparing the current values of two memory cells under a certain address by challenge, forming challenge–response pairs (CRPs). This scheme does not increase hardware consumption and circuit differences on edge devices and is intrinsic PUF. Finally, 200 PUF chips were fabricated by CSMC based on the 0.153-\u0000<inline-formula> <tex-math>$mu $ </tex-math></inline-formula>\u0000m MCU single-gate CMOS process. The performance of the logic-compatible MTP memory cell and its PUF was evaluated. A logic-compatible MTP cell has good programming erase efficiency and good durability and retention. 
The uniqueness of the proposed PUF is 50.29%, the uniformity is 51.82%, and the reliability is 93.61%.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 1","pages":"248-260"},"PeriodicalIF":2.8,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An RISC-V PPA-Fusion Cooperative Optimization Framework Based on Hybrid Strategies","authors":"Tianning Gao;Yifan Wang;Ming Zhu;Xiulong Wu;Dian Zhou;Zhaori Bi","doi":"10.1109/TVLSI.2024.3496858","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3496858","url":null,"abstract":"The optimization of RISC-V designs, encompassing both microarchitecture and CAD tool parameters, is a great challenge due to an extensive and high-dimensional search space. Conventional optimization methods, such as case-specific approaches and black-box optimization approaches, often fall short of addressing the diverse and complex nature of RISC-V designs. To achieve optimal results across various RISC-V designs, we propose the cooperative optimization framework (COF) that integrates multiple black-box optimizers, each specializing in different optimization problems. The COF introduces the landscape knowledge exchange mechanism (LKEM) to direct the optimizers to share their knowledge of the optimization problem. Moreover, the COF employs the dynamic computational resource allocation (DCRA) strategies to dynamically allocate computational resources to the optimizers. The DCRA strategies are guided by the optimizer efficiency evaluation (OEE) mechanism and a time series forecasting (TSF) model. The OEE provides real-time performance evaluations. The TSF model forecasts the optimization progress made by the optimizers, given the allocated computational resources. 
In our experiments, the COF reduced the cycle per instruction (CPI) of the Berkeley out-of-order machine (BOOM) by 15.36% and the power of Rocket-Chip by 12.84% without constraint violation compared to the respective initial designs.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 1","pages":"140-153"},"PeriodicalIF":2.8,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ArXrCiM: Architectural Exploration of Application-Specific Resonant SRAM Compute-in-Memory","authors":"Dhandeep Challagundla;Ignatius Bezzam;Riadul Islam","doi":"10.1109/TVLSI.2024.3502359","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3502359","url":null,"abstract":"While general-purpose computing follows von Neumann’s architecture, the data movement between memory and processor elements dictates the processor’s performance. The evolving compute-in-memory (CiM) paradigm tackles this issue by facilitating simultaneous processing and storage within static random-access memory (SRAM) elements. Numerous design decisions taken at different levels of hierarchy affect the figures of merit (FoMs) of SRAM, such as power, performance, area, and yield. The absence of a rapid assessment mechanism for the impact of changes at different hierarchy levels on global FoMs poses a challenge to accurately evaluating innovative SRAM designs. This article presents an automation tool designed to optimize the energy and latency of SRAM designs incorporating diverse implementation strategies for executing logic operations within the SRAM. The tool structure allows easy comparison across different array topologies and various design strategies to result in energy-efficient implementations. Our study involves a comprehensive comparison of over 6900+ distinct design implementation strategies for École Polytechnique Fédérale de Lausanne (EPFL) combinational benchmark circuits on the energy-recycling resonant CiM (rCiM) architecture designed using Taiwan Semiconductor Manufacturing Company (TSMC) 28-nm technology. When provided with a combinational circuit, the tool aims to generate an energy-efficient implementation strategy tailored to the specified input memory and latency constraints. 
The tool reduces 80.9% of energy consumption on average across all benchmarks while using the six-topology implementation compared with the baseline implementation of single-macro topology by considering the parallel processing capability of rCiM cache size ranging from 4 to 192 kB.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 1","pages":"179-192"},"PeriodicalIF":2.8,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MASL-AFU: A High Memory Access Efficiency 2-D Scalable LUT-Based Activation Function Unit for On-Device DNN Training","authors":"Zhaoteng Meng;Lin Shu;Jianing Zeng;Zhan Li;Kailin Lv;Haoyue Yang;Jie Hao","doi":"10.1109/TVLSI.2024.3488782","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3488782","url":null,"abstract":"On-device deep neural network (DNN) training faces constraints in storage capacity and energy supply. Existing works primarily focus on optimizing the training of convolutional and batch normalization (BN) layers to improve the compute-to-communication (CTC) ratio and reduce the energy cost of off-chip memory access (MA). However, the training of activation layers remains challenging due to the additional off-chip MA required for derivative calculations. This article proposes MASL-AFU, an architecture designed to accelerate the activation layer in on-device DNN training. MASL-AFU leverages nonuniform piecewise linear (NUPWL) functions to speed up the forward propagation (FP) in the activation layer. During the error propagation (EP) process, retrieving derivatives from a lookup table (LUT) eliminates the need for redundant retrieval of the input data used in FP. By storing LUT indices instead of the original activation inputs, MASL-AFU significantly reduces and accelerates MA. Compared to other activation function units, MASL-AFU offers up to a <inline-formula> <tex-math>$5.8times $ </tex-math></inline-formula> increase in computational and off-chip MA efficiency. In addition, MASL-AFU incorporates two dimensions of scalability: data precision and the number of LUT entries. 
These scalable, hardware-friendly methods enhance MASL-AFU’s area efficiency by up to <inline-formula> <tex-math>$3.24times $ </tex-math></inline-formula> and energy efficiency by up to <inline-formula> <tex-math>$3.85times $ </tex-math></inline-formula>.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 3","pages":"707-719"},"PeriodicalIF":2.8,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143489185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A 28-Gb/s Single-Ended PAM-4 Transceiver With Active-Inductor Equalizer and Amplitude- Detection LSB Decoder for Memory Interfaces","authors":"Hwaseok Shin;Hyoshin Kang;Yoonjae Choi;Jincheol Sim;Jonghyuck Choi;Youngwook Kwon;Seungwoo Park;Seongcheol Kim;Changmin Sim;Junseob So;Taehwan Kim;Chulwoo Kim","doi":"10.1109/TVLSI.2024.3496878","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3496878","url":null,"abstract":"This study proposes a power-efficient 28-Gb/s single-ended four-level pulse amplitude modulation (PAM-4) transceiver (TRX) for next-generation memory interfaces. In the transmitter (TX), an active-inductor equalizer (EQAI) is utilized, while in the receiver (RX), an amplitude-detection least significant bit (LSB) decoder is employed. In the TX, conventional equalization techniques consume substantial power owing to the inclusion of additional components and strong driving power required to mitigate channel-induced intersymbol interference (ISI). However, the proposed EQAI achieves a bandwidth extension up to the Nyquist frequency through gain boosting while reducing hardware costs and minimizing the driving strength. This results in a simple structure with operational efficiency, facilitating low power consumption and a compact area compared with conventional TX equalizers. In PAM-4 RX, the power dissipation is proportional to the clock buffer and the number of comparators used for data decoding. To improve the hardware cost and the power usage in the RX, the proposed RX design utilizes an amplitude-detection LSB decoder, which reduces the number of comparators and comprises a one-stage structure by detecting the amplitude differences between the reference and input voltages during LSB decoding. This ensures the hardware cost and power consumption improvement while implementing a one-tap direct decision feedback equalizer (DFE). 
The TRX for memory interfaces is optimized for low-power performance by employing these methods, resulting in a notable energy efficiency of 0.96 pJ/bit. This structure is fabricated using a 28-nm CMOS technology, and the core area of the TRX occupies 0.0053 mm2.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 3","pages":"662-672"},"PeriodicalIF":2.8,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143489250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Securet3d: An Adaptive, Secure, and Fault-Tolerant Aware Routing Algorithm for Vertically–Partially Connected 3D-NoC","authors":"Alexandre Almeida da Silva;Lucas Nogueira;Alexandre Coelho;Jarbas A. N. Silveira;César Marcon","doi":"10.1109/TVLSI.2024.3500575","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3500575","url":null,"abstract":"Multiprocessor systems-on-chip (MPSoCs) based on 3-D networks-on-chip (3D-NoCs) are crucial architectures for robust parallel computing, efficiently sharing resources across complex applications. To ensure the secure operation of these systems, it is essential to implement adaptive, fault-tolerant mechanisms capable of protecting sensitive data. This work proposes the Securet3d routing algorithm, which establishes secure data paths in fault-tolerant 3D-NoCs. Our approach enhances the Reflect3d algorithm by introducing a detailed scheme for mapping secure paths and improving the system’s ability to withstand faults. To validate its effectiveness, we compare Securet3d with three other fault-tolerant routing algorithms for vertically-partially connected 3D-NoCs. All algorithms were implemented in SystemVerilog and evaluated through simulation using ModelSim and hardware synthesis with Cadence’s Genus tool. Experimental results show that Securet3d reduces latency and enhances cost-effectiveness compared with other approaches. When implemented with a 28-nm technology library, Securet3d demonstrates minimal area and energy overhead, indicating scalability and efficiency. Under denial-of-service (DoS) attacks, Securet3d maintains basically unaltered average packet latencies on 70, 90, and 29 clock cycles for uniform random, bit-complement, and shuffle traffic, significantly lower than those of other algorithms without including security mechanisms (5763, 4632, and 3712 clock cycles in average, respectively). 
These results highlight the superior security, scalability, and adaptability of Securet3d for complex communication systems.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 1","pages":"275-287"},"PeriodicalIF":2.8,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient Hardware Accelerator Based on Medium Granularity Dataflow for SpTRSV","authors":"Qian Chen;Xiaofeng Yang;Shengli Lu","doi":"10.1109/TVLSI.2024.3497166","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3497166","url":null,"abstract":"Sparse triangular solve (SpTRSV) is widely used in various domains. Numerous studies have been conducted using CPUs, GPUs, and specific hardware accelerators, where dataflows can be categorized into coarse and fine granularity. Coarse dataflows offer good spatial locality but suffer from low parallelism, while fine dataflows provide high parallelism but disrupt the spatial structure, leading to increased nodes and poor data reuse. This article proposes a novel hardware accelerator for SpTRSV or SpTRSV-like directed acyclic graphs (DAGs). The accelerator implements a medium granularity dataflow through hardware-software codesign and achieves both excellent spatial locality and high parallelism. In addition, a partial sum caching mechanism is introduced to reduce the blocking frequency of processing elements (PEs), and a reordering algorithm of intranode edges’ computation is developed to enhance data reuse. Experimental results on 245 benchmarks with node counts reaching up to 85392 demonstrate that this work achieves average performance improvements of <inline-formula> <tex-math>$7.0times $ </tex-math></inline-formula> (up to <inline-formula> <tex-math>$27.8times $ </tex-math></inline-formula>) over CPUs and <inline-formula> <tex-math>$5.8times $ </tex-math></inline-formula> (up to <inline-formula> <tex-math>$98.8times $ </tex-math></inline-formula>) over GPUs. 
Compared with the state-of-the-art technique (DPU-v2), this work shows a <inline-formula> <tex-math>$2.5times $ </tex-math></inline-formula> (up to <inline-formula> <tex-math>$5.9times $ </tex-math></inline-formula>) average performance improvement and <inline-formula> <tex-math>$1.7times $ </tex-math></inline-formula> (up to <inline-formula> <tex-math>$4.1times $ </tex-math></inline-formula>) average energy efficiency enhancement.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 3","pages":"807-820"},"PeriodicalIF":2.8,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143489104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learning-Based Performance Testing for Analog Integrated Circuits","authors":"Jiawei Cao;Chongtao Guo;Houjun Wang;Zhigang Wang;Hao Li;Geoffrey Ye Li","doi":"10.1109/TVLSI.2024.3496777","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3496777","url":null,"abstract":"In this brief, we propose a deep learning-based performance testing framework to minimize the number of required test modules while guaranteeing the accuracy requirement, where a test module corresponds to a combination of one circuit and one stimulus. First, we apply a deep neural network (DNN) to establish the mapping from the response of the circuit under test (CUT) in each module to all specifications to be tested. Then, the required test modules are selected by solving a 0–1 integer programming problem. Finally, the predictions from the selected test modules are combined by a DNN to form the specification estimations. The simulation results validate the proposed approach in terms of testing accuracy and cost.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 4","pages":"1187-1191"},"PeriodicalIF":2.8,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143667705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FPGA-Based Low-Bit and Lightweight Fast Light Field Depth Estimation","authors":"Jie Li;Chuanlun Zhang;Wenxuan Yang;Heng Li;Xiaoyan Wang;Chuanjun Zhao;Shuangli Du;Yiguang Liu","doi":"10.1109/TVLSI.2024.3496751","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3496751","url":null,"abstract":"The 3-D vision computing is a key application in unmanned systems, satellites, and planetary rovers. Learning-based light field (LF) depth estimation is one of the major research directions in 3-D vision computing. However, conventional learning-based depth estimation methods involve a large number of parameters and floating-point operations, making it challenging to achieve low-power, fast, and high-precision LF depth estimation on a field-programmable gate array (FPGA). Motivated by this issue, an FPGA-based low-bit, lightweight LF depth estimation network (L\u0000<inline-formula> <tex-math>$^{3}text {FNet}$ </tex-math></inline-formula>\u0000) is proposed. First, a hardware-friendly network is designed, which has small weight parameters, low computational load, and a simple network architecture with minor accuracy loss. Second, we apply efficient hardware unit design and software-hardware collaborative dataflow architecture to construct an FPGA-based fast, low-bit acceleration engine. Experimental results show that compared with the state-of-the-art works with lower mean-square error (mse), L\u0000<inline-formula> <tex-math>$^{3}text {FNet}$ </tex-math></inline-formula>\u0000 can reduce the computational load by more than 109 times and weight parameters by approximately 78 times. Moreover, on the ZCU104 platform, it requires 95.65% lookup tables (LUTs), 80.67% digital signal processors (DSPs), 80.93% BlockRAM (BRAM), 58.52% LUTRAM, and 9.493-W power consumption to achieve an efficient acceleration engine with a latency as low as 272 ns. 
The code and model of the proposed method are available at \u0000<uri>https://github.com/sansi-zhang/L3FNet</uri>\u0000.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 1","pages":"88-101"},"PeriodicalIF":2.8,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}