{"title":"Manipulated Lookup Table Method for Efficient High-Performance Modular Multiplier","authors":"Anawin Opasatian;Makoto Ikeda","doi":"10.1109/TVLSI.2024.3505920","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3505920","url":null,"abstract":"Modular multiplication is a fundamental operation in many cryptographic systems, with its efficiency playing a crucial role in the overall performance of these systems. Since many cryptographic systems operate with a fixed modulus, we propose an enhancement to the fixed modulus lookup table (LuT) method used for modular reduction, which we refer to as the manipulated LuT (MLuT) method. Our approach applies to any modulus and has demonstrated comparable performance compared with some specialized reduction algorithms designed for specific moduli. The strength of our proposed method in terms of circuit performance is shown by implementing it on Virtex7 and Virtex Ultrascale+ FPGA as the LUT-based MLuT modular multiplier (LUT-MLuTMM) with generalized parallel counters (GPCs) used in the summation step. In one-stage implementations, our proposed method achieves up to a 90% reduction in area and a 50% reduction in latency compared with the generic LuT method. In multistage implementations, our approach offers the best area-interleaved time product, with improvements of 39%, 13%, and 29% over the current state-of-the-art for ~256-bit, SIKE434, and BLS12-381 modular multipliers, respectively. 
These results demonstrate the potential of our method for high-performance cryptographic accelerators employing a fixed modulus.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 1","pages":"114-127"},"PeriodicalIF":2.8,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10777922","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
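The generic fixed-modulus LuT reduction that the MLuT method improves on can be sketched in a few lines of software: the high half of a double-width product is split into w-bit digits, each digit indexes a precomputed table of (digit · 2^offset) mod N, and the table entries are summed with the low half. This is a minimal sketch of the generic baseline only, not the paper's manipulated tables or GPC summation; the Curve25519-style modulus and digit width below are illustrative choices.

```python
def build_tables(N, k, w):
    # Precompute T[i][d] = (d * 2^(k + i*w)) mod N for every w-bit digit d
    # of the high half of a (up to) 2k-bit product, for a fixed k-bit N.
    num_digits = (k + w - 1) // w
    return [[(d << (k + i * w)) % N for d in range(1 << w)]
            for i in range(num_digits)]

def lut_mod_reduce(x, N, k, w, tables):
    # Reduce x modulo the fixed modulus N using the precomputed tables.
    lo = x & ((1 << k) - 1)   # low k bits pass through unchanged
    hi = x >> k               # high part is handled digit by digit
    acc = lo
    i = 0
    while hi:
        acc += tables[i][hi & ((1 << w) - 1)]
        hi >>= w
        i += 1
    # The accumulated sum is only slightly larger than N; hardware would
    # finish with a few conditional subtractions, here we just use %.
    return acc % N
```

In hardware the table lookups happen in parallel and the additions collapse into a compression tree, which is where the paper's GPC-based summation step comes in.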
{"title":"A 0.875–0.95-pJ/b 40-Gb/s PAM-3 Baud-Rate Receiver With One-Tap DFE","authors":"Jhe-En Lin;Shen-Iuan Liu","doi":"10.1109/TVLSI.2024.3507714","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3507714","url":null,"abstract":"This article presents a 40-Gb/s (25.6-GBaud) three-level pulse amplitude modulation (PAM-3) baud-rate receiver with one-tap decision-feedback equalize (DFE). A baud-rate phase detector (BRPD) that locks at the point with zero first postcursor is proposed. In addition, by reusing the BRPD’s error samplers, a weighting coefficient calibration is presented to select the DFE weighting coefficient that maximizes the top level of the eye diagram, thereby improving eye height across different channel losses. An inductorless continuous-time linear equalizer (CTLE) and a variable gain amplifier (VGA) are also included. The VGA adjusts the output common-mode resistance to control data swing, reducing power consumption when the required swing is small. Furthermore, by using the modified summer-merged slicers, the capacitance from the slicers to the VGA is reduced. Finally, a digital clock/data recovery (CDR) circuit is presented, which includes a demultiplexer (DeMUX) with a short delay time to reduce the loop latency. The 40-Gb/s PAM-3 receiver is fabricated in 28-nm CMOS technology. For a 25.6-Gbaud pseudorandom ternary sequence of \u0000<inline-formula> <tex-math>$3^{7}$ </tex-math></inline-formula>\u0000–1, the measured bit error rate (BER) is below \u0000<inline-formula> <tex-math>$10^{-12}$ </tex-math></inline-formula>\u0000 for channel losses of 9 and 17.5 dB. At a 9-dB loss, total power consumption is 35-mW with a calculated FoM of 0.875-pJ/bit. 
At 17.5-dB loss, total power consumption is 38-mW with a calculated FoM of 0.95-pJ/bit.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 1","pages":"168-178"},"PeriodicalIF":2.8,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VSAGE: An End-to-End Automated VCO-Based ΔΣ ADC Generator","authors":"Ken Li;Tian Xie;Tzu-Han Wang;Shaolan Li","doi":"10.1109/TVLSI.2024.3507567","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3507567","url":null,"abstract":"This article presents VSAGE, an agile end-to-end automated voltage-controlled oscillator (VCO)-based \u0000<inline-formula> <tex-math>$Delta Sigma $ </tex-math></inline-formula>\u0000 analog-to-digital converter (ADC) generator. It exploits time-domain architectures and design mindset, so that the design flow is highly oriented around digital standard cells in contrast to the transistor-level-focused approach in conventional analog design. Through this, it speeds up and simplifies both the synthesis phase and layout phase. Combined with an efficient knowledge-machine learning (ML)-guided synthesis flow, it can translate input specifications to a full system layout with reliable performance within minutes. This work also features a compact oscillator and system modeling method that facilitates light-resource accurate computation and network training. The generator is verified with 12 design cases in 65-nm and 28-nm processes, proving its capability of generating competitive design with good process portability.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 1","pages":"128-139"},"PeriodicalIF":2.8,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MCM-SR: Multiple Constant Multiplication-Based CNN Streaming Hardware Architecture for Super-Resolution","authors":"Seung-Hwan Bae;Hyuk-Jae Lee;Hyun Kim","doi":"10.1109/TVLSI.2024.3504513","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3504513","url":null,"abstract":"Convolutional neural network (CNN)-based super-resolution (SR) methods have become prevalent in display devices due to their superior image quality. However, the significant computational demands of CNN-based SR require hardware accelerators for real-time processing. Among the hardware architectures, the streaming architecture can significantly reduce latency and power consumption by minimizing external dynamic random access memory (DRAM) access. Nevertheless, this architecture necessitates a considerable hardware area, as each layer needs a dedicated processing engine. Furthermore, achieving high hardware utilization in this architecture requires substantial design expertise. In this article, we propose methods to reduce the hardware resources of CNN-based SR accelerators by applying the multiple constant multiplication (MCM) algorithm. We propose a loop interchange method for the convolution (CONV) operation to reduce the logic area by 23% and an adaptive loop interchange method for each layer that considers both the static random access memory (SRAM) and logic area simultaneously to reduce the SRAM size by 15%. 
In addition, we improve the MCM graph exploration speed by \u0000<inline-formula> <tex-math>$5.4times $ </tex-math></inline-formula>\u0000 while maintaining the SR quality through beam search when CONV weights are approximated to reduce the hardware resources.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 1","pages":"75-87"},"PeriodicalIF":2.8,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
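The core idea of MCM, replacing one multiplier per constant with a shared shift-and-add network, is easy to illustrate in software. The constants and the shared intermediate term below are arbitrary examples invented for illustration, not values from the paper; they mirror the common-subexpression sharing that MCM graph exploration searches for.

```python
def mcm_outputs(x):
    # Multiply x by the constant set {7, 23, 57} using only shifts and
    # adds, sharing the intermediate term t7 = 7*x across outputs.
    t7 = (x << 3) - x                  # 7x  = 8x - x
    y7 = t7                            # 7x
    y23 = (t7 << 1) + (x << 3) + x     # 23x = 14x + 8x + x
    y57 = (t7 << 3) + x                # 57x = 56x + x
    return y7, y23, y57
```

In an ASIC or FPGA datapath, each shift is free wiring, so the three constant multiplications cost only four adders/subtractors in total.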
{"title":"A 0.09-pJ/Bit Logic-Compatible Multiple-Time Programmable (MTP) Memory-Based PUF Design for IoT Applications","authors":"Shuming Guo;Yinyin Lin;Hao Wang;Yao Li;Chongyan Gu;Weiqiang Liu;Yijun Cui","doi":"10.1109/TVLSI.2024.3496735","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3496735","url":null,"abstract":"The Internet of Things (IoT) allows devices to interact for real-time data transfer and remote control. However, IoT hardware devices have been shown security vulnerabilities. Edge device authentications, as a crucial process for IoT systems, generate and use unique IDs for secure data transmissions. Conventional authentication techniques, computational and heavyweight, are challenging and infeasible in IoT due to limited resources in IoTs. Physical unclonable functions (PUFs), a lightweight hardware-based security primitive, were proposed for resource-constrained applications. We propose a new PUF design for resource-constrained IoT devices based on low-cost logic-compatible multiple-time programmable (MTP) memory cells. The structure includes an array of MTP differential memory cells and a PUF extraction circuit. The extraction method uses the random distribution of BL current after programming each memory cell in logic-compatible MTP memory as the entropy source of PUF. Responses are obtained by comparing the current values of two memory cells under a certain address by challenge, forming challenge–response pairs (CRPs). This scheme does not increase hardware consumption and circuit differences on edge devices and is intrinsic PUF. Finally, 200 PUF chips were fabricated by CSMC based on the 0.153-\u0000<inline-formula> <tex-math>$mu $ </tex-math></inline-formula>\u0000m MCU single-gate CMOS process. The performance of the logic-compatible MTP memory cell and its PUF was evaluated. A logic-compatible MTP cell has good programming erase efficiency and good durability and retention. 
The uniqueness of the proposed PUF is 50.29%, the uniformity is 51.82%, and the reliability is 93.61%.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 1","pages":"248-260"},"PeriodicalIF":2.8,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
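The CRP extraction described in this abstract, comparing the post-programming bit-line currents of two challenge-addressed cells, can be mimicked with a toy behavioral model. The Gaussian current distribution and all parameters below are illustrative assumptions, not measured MTP-cell behavior.

```python
import random

def make_puf(num_cells, seed):
    # Model each MTP cell's post-programming BL current as a random draw;
    # the seed stands in for per-chip process variation (entropy source).
    rng = random.Random(seed)
    return [rng.gauss(10.0, 1.0) for _ in range(num_cells)]

def respond(puf, challenge_pairs):
    # A challenge selects pairs of cell addresses; each response bit says
    # which of the two addressed cells draws the larger current.
    return [1 if puf[a] > puf[b] else 0 for a, b in challenge_pairs]
```

In this model, uniqueness corresponds to different seeds (chips) yielding different response vectors, and reliability to the same chip yielding the same vector on repeated reads; the silicon figures quoted above measure exactly those properties.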
{"title":"An RISC-V PPA-Fusion Cooperative Optimization Framework Based on Hybrid Strategies","authors":"Tianning Gao;Yifan Wang;Ming Zhu;Xiulong Wu;Dian Zhou;Zhaori Bi","doi":"10.1109/TVLSI.2024.3496858","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3496858","url":null,"abstract":"The optimization of RISC-V designs, encompassing both microarchitecture and CAD tool parameters, is a great challenge due to an extensive and high-dimensional search space. Conventional optimization methods, such as case-specific approaches and black-box optimization approaches, often fall short of addressing the diverse and complex nature of RISC-V designs. To achieve optimal results across various RISC-V designs, we propose the cooperative optimization framework (COF) that integrates multiple black-box optimizers, each specializing in different optimization problems. The COF introduces the landscape knowledge exchange mechanism (LKEM) to direct the optimizers to share their knowledge of the optimization problem. Moreover, the COF employs the dynamic computational resource allocation (DCRA) strategies to dynamically allocate computational resources to the optimizers. The DCRA strategies are guided by the optimizer efficiency evaluation (OEE) mechanism and a time series forecasting (TSF) model. The OEE provides real-time performance evaluations. The TSF model forecasts the optimization progress made by the optimizers, given the allocated computational resources. 
In our experiments, the COF reduced the cycle per instruction (CPI) of the Berkeley out-of-order machine (BOOM) by 15.36% and the power of Rocket-Chip by 12.84% without constraint violation compared to the respective initial designs.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 1","pages":"140-153"},"PeriodicalIF":2.8,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
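A heavily simplified stand-in for the cooperative allocation idea above can be written in a few lines: several black-box optimizers share a fixed evaluation budget per round, and the budget shifts toward whichever optimizer improves the incumbent. The credit-based scoring rule and the toy optimizers below are invented for illustration; the actual DCRA is driven by OEE and a TSF model, neither of which is reproduced here.

```python
import random

def cooperative_minimize(f, optimizers, rounds, per_round, seed=0):
    # optimizers: dict name -> propose(rng, incumbent) returning a candidate.
    # Each round splits `per_round` evaluations in proportion to each
    # optimizer's accumulated credit, rewarding whoever finds improvements.
    rng = random.Random(seed)
    best_x, best_y = None, float("inf")
    credit = {name: 1.0 for name in optimizers}
    for _ in range(rounds):
        total = sum(credit.values())
        for name, propose in optimizers.items():
            budget = max(1, round(per_round * credit[name] / total))
            for _ in range(budget):
                x = propose(rng, best_x)
                y = f(x)
                if y < best_y:
                    best_x, best_y = x, y
                    credit[name] += 1.0  # reward the improving optimizer
    return best_x, best_y
```

Pairing a global (uniform random) proposer with a local (Gaussian perturbation) proposer already shows the intended behavior: the global one dominates early, the local one earns the budget once an incumbent exists.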
{"title":"ArXrCiM: Architectural Exploration of Application-Specific Resonant SRAM Compute-in-Memory","authors":"Dhandeep Challagundla;Ignatius Bezzam;Riadul Islam","doi":"10.1109/TVLSI.2024.3502359","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3502359","url":null,"abstract":"While general-purpose computing follows von Neumann’s architecture, the data movement between memory and processor elements dictates the processor’s performance. The evolving compute-in-memory (CiM) paradigm tackles this issue by facilitating simultaneous processing and storage within static random-access memory (SRAM) elements. Numerous design decisions taken at different levels of hierarchy affect the figures of merit (FoMs) of SRAM, such as power, performance, area, and yield. The absence of a rapid assessment mechanism for the impact of changes at different hierarchy levels on global FoMs poses a challenge to accurately evaluating innovative SRAM designs. This article presents an automation tool designed to optimize the energy and latency of SRAM designs incorporating diverse implementation strategies for executing logic operations within the SRAM. The tool structure allows easy comparison across different array topologies and various design strategies to result in energy-efficient implementations. Our study involves a comprehensive comparison of over 6900+ distinct design implementation strategies for École Polytechnique Fédérale de Lausanne (EPFL) combinational benchmark circuits on the energy-recycling resonant CiM (rCiM) architecture designed using Taiwan Semiconductor Manufacturing Company (TSMC) 28-nm technology. When provided with a combinational circuit, the tool aims to generate an energy-efficient implementation strategy tailored to the specified input memory and latency constraints. 
The tool reduces 80.9% of energy consumption on average across all benchmarks while using the six-topology implementation compared with the baseline implementation of single-macro topology by considering the parallel processing capability of rCiM cache size ranging from 4 to 192 kB.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 1","pages":"179-192"},"PeriodicalIF":2.8,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
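The CiM principle this tool explores, evaluating logic directly between SRAM rows instead of shuttling operands to a processor, can be sketched as a behavioral model of a small macro. Everything below is a toy abstraction for illustration, not the rCiM circuit or the tool's cost model; the class name and word width are invented.

```python
class CimArray:
    # Behavioral model of a tiny CiM SRAM macro: words live in rows, and
    # logic ops are evaluated "in place" between two row addresses.
    def __init__(self, word_bits=8):
        self.word_bits = word_bits
        self.rows = {}

    def write(self, addr, value):
        self.rows[addr] = value & ((1 << self.word_bits) - 1)

    def op(self, kind, a, b):
        # Bitwise ops a real CiM macro computes on shared bit lines.
        x, y = self.rows[a], self.rows[b]
        mask = (1 << self.word_bits) - 1
        return {"and": x & y, "or": x | y, "xor": x ^ y,
                "nand": ~(x & y) & mask}[kind]
```

Mapping a whole combinational benchmark onto such macros, and deciding how many macros and which topology to use, is the search space the article's tool automates.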
{"title":"Securet3d: An Adaptive, Secure, and Fault-Tolerant Aware Routing Algorithm for Vertically–Partially Connected 3D-NoC","authors":"Alexandre Almeida da Silva;Lucas Nogueira;Alexandre Coelho;Jarbas A. N. Silveira;César Marcon","doi":"10.1109/TVLSI.2024.3500575","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3500575","url":null,"abstract":"Multiprocessor systems-on-chip (MPSoCs) based on 3-D networks-on-chip (3D-NoCs) are crucial architectures for robust parallel computing, efficiently sharing resources across complex applications. To ensure the secure operation of these systems, it is essential to implement adaptive, fault-tolerant mechanisms capable of protecting sensitive data. This work proposes the Securet3d routing algorithm, which establishes secure data paths in fault-tolerant 3D-NoCs. Our approach enhances the Reflect3d algorithm by introducing a detailed scheme for mapping secure paths and improving the system’s ability to withstand faults. To validate its effectiveness, we compare Securet3d with three other fault-tolerant routing algorithms for vertically-partially connected 3D-NoCs. All algorithms were implemented in SystemVerilog and evaluated through simulation using ModelSim and hardware synthesis with Cadence’s Genus tool. Experimental results show that Securet3d reduces latency and enhances cost-effectiveness compared with other approaches. When implemented with a 28-nm technology library, Securet3d demonstrates minimal area and energy overhead, indicating scalability and efficiency. Under denial-of-service (DoS) attacks, Securet3d maintains basically unaltered average packet latencies on 70, 90, and 29 clock cycles for uniform random, bit-complement, and shuffle traffic, significantly lower than those of other algorithms without including security mechanisms (5763, 4632, and 3712 clock cycles in average, respectively). 
These results highlight the superior security, scalability, and adaptability of Securet3d for complex communication systems.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 1","pages":"275-287"},"PeriodicalIF":2.8,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FPGA-Based Low-Bit and Lightweight Fast Light Field Depth Estimation","authors":"Jie Li;Chuanlun Zhang;Wenxuan Yang;Heng Li;Xiaoyan Wang;Chuanjun Zhao;Shuangli Du;Yiguang Liu","doi":"10.1109/TVLSI.2024.3496751","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3496751","url":null,"abstract":"The 3-D vision computing is a key application in unmanned systems, satellites, and planetary rovers. Learning-based light field (LF) depth estimation is one of the major research directions in 3-D vision computing. However, conventional learning-based depth estimation methods involve a large number of parameters and floating-point operations, making it challenging to achieve low-power, fast, and high-precision LF depth estimation on a field-programmable gate array (FPGA). Motivated by this issue, an FPGA-based low-bit, lightweight LF depth estimation network (L\u0000<inline-formula> <tex-math>$^{3}text {FNet}$ </tex-math></inline-formula>\u0000) is proposed. First, a hardware-friendly network is designed, which has small weight parameters, low computational load, and a simple network architecture with minor accuracy loss. Second, we apply efficient hardware unit design and software-hardware collaborative dataflow architecture to construct an FPGA-based fast, low-bit acceleration engine. Experimental results show that compared with the state-of-the-art works with lower mean-square error (mse), L\u0000<inline-formula> <tex-math>$^{3}text {FNet}$ </tex-math></inline-formula>\u0000 can reduce the computational load by more than 109 times and weight parameters by approximately 78 times. Moreover, on the ZCU104 platform, it requires 95.65% lookup tables (LUTs), 80.67% digital signal processors (DSPs), 80.93% BlockRAM (BRAM), 58.52% LUTRAM, and 9.493-W power consumption to achieve an efficient acceleration engine with a latency as low as 272 ns. 
The code and model of the proposed method are available at \u0000<uri>https://github.com/sansi-zhang/L3FNet</uri>\u0000.","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"33 1","pages":"88-101"},"PeriodicalIF":2.8,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A 22-nm All-Digital Time-Domain Neural Network Accelerator for Precision In-Sensor Processing","authors":"Ahmed M. Mohey;Jelin Leslin;Gaurav Singh;Marko Kosunen;Jussi Ryynänen;Martin Andraud","doi":"10.1109/TVLSI.2024.3496090","DOIUrl":"https://doi.org/10.1109/TVLSI.2024.3496090","url":null,"abstract":"Deep neural network (DNN) accelerators are increasingly integrated into sensing applications, such as wearables and sensor networks, to provide advanced in-sensor processing capabilities. Given wearables’ strict size and power requirements, minimizing the area and energy consumption of DNN accelerators is a critical concern. In that regard, computing DNN models in the time domain is a promising architecture, taking advantage of both technology scaling friendliness and efficiency. Yet, time-domain accelerators are typically not fully digital, limiting the full benefits of time-domain computation. In this work, we propose an all-digital time-domain accelerator with a small size and low energy consumption to target precision in-sensor processing like human activity recognition (HAR). The proposed accelerator features a simple and efficient architecture without dependencies on analog nonidealities such as leakage and charge errors. An eight-neuron layer (core computation layer) is implemented in 22-nm FD-SOI technology. The layer occupies \u0000<inline-formula> <tex-math>$70 times ,70,mu $ </tex-math></inline-formula>\u0000m while supporting multibit inputs (8-bit) and weights (8-bit) with signed accumulation up to 18 bits. 
The power dissipation of the computation layer is 576\u0000<inline-formula> <tex-math>$mu $ </tex-math></inline-formula>\u0000W at 0.72-V supply and 500-MHz clock frequency achieving an average area efficiency of 24.74 GOPS/mm2 (up to 544.22 GOPS/mm2), an average energy efficiency of 0.21 TOPS/W (up to 4.63 TOPS/W), and a normalized energy efficiency of 13.46 1b-TOPS/W (up to 296.30 1b-TOPS/W).","PeriodicalId":13425,"journal":{"name":"IEEE Transactions on Very Large Scale Integration (VLSI) Systems","volume":"32 12","pages":"2220-2231"},"PeriodicalIF":2.8,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
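The arithmetic this layer implements, 8-bit signed inputs times 8-bit signed weights accumulated into an 18-bit signed register, is easy to model bit-accurately in software. The saturating overflow behavior below is an assumption for illustration; the article specifies signed accumulation up to 18 bits but not how overflow is handled.

```python
def mac_neuron(inputs, weights, acc_bits=18):
    # Multiply-accumulate modeling an 8-bit x 8-bit MAC feeding a signed
    # acc_bits-wide accumulator, with saturation assumed on overflow.
    lo, hi = -(1 << (acc_bits - 1)), (1 << (acc_bits - 1)) - 1
    acc = 0
    for x, w in zip(inputs, weights):
        assert -128 <= x <= 127 and -128 <= w <= 127  # 8-bit signed operands
        acc = min(hi, max(lo, acc + x * w))
    return acc
```

Eight such neurons evaluated in parallel correspond to the core computation layer; in the accelerator itself, the products and sums are encoded as time delays rather than binary words.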