{"title":"EdgeHydra: Fault-Tolerant Edge Data Distribution Based on Erasure Coding","authors":"Qiang He;Guobiao Zhang;Jiawei Wang;Ruikun Luo;Xiaohai Dai;Yuchong Hu;Feifei Chen;Hai Jin;Yun Yang","doi":"10.1109/TPDS.2024.3493034","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3493034","url":null,"abstract":"In the edge computing environment, app vendors can distribute popular data from the cloud to edge servers to provide low-latency data retrieval. A key problem is how to distribute these data from the cloud to edge servers cost-effectively. Under current schemes, a file is divided into multiple data blocks for parallel transmission from the cloud to target edge servers. Edge servers can then combine the received data blocks to reconstruct the file. While this method expedites the data distribution process, it is sensitive to transmission delays and transmission failures caused by runtime exceptions such as network fluctuations and server failures. This paper presents EdgeHydra, the first edge data distribution scheme that tackles this challenge through fault tolerance based on erasure coding. Under EdgeHydra, a file is encoded into data blocks and parity blocks for parallel transmission from the cloud to target edge servers. An edge server can reconstruct the file upon receipt of a sufficient number of these blocks, without having to wait for all the blocks in transmission. EdgeHydra also innovatively employs a leaderless block supplement mechanism to ensure the receipt of sufficient blocks by individual target edge servers. These mechanisms significantly improve the robustness of the data distribution process. 
Extensive experiments show that EdgeHydra can tolerate delays and failures in individual transmission links effectively, outperforming the state-of-the-art scheme by up to 50.54% in distribution time.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 1","pages":"29-42"},"PeriodicalIF":5.6,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10746622","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142672047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
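The abstract's key idea is that a receiver needs only a sufficient subset of blocks, not all of them. A minimal sketch of that property, using a single XOR parity block (k data blocks + 1 parity, tolerating one lost block) rather than EdgeHydra's actual, unspecified coding parameters:

```python
# Toy erasure-coding sketch: split a file into k data blocks plus one XOR
# parity block; any single missing block can be rebuilt from the survivors,
# so a receiver need not wait for every transmission to finish.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int):
    """Split data into k equal blocks and compute one XOR parity block."""
    padded = data + b"\x00" * ((-len(data)) % k)
    size = len(padded) // k
    blocks = [padded[i * size:(i + 1) * size] for i in range(k)]
    parity = blocks[0]
    for blk in blocks[1:]:
        parity = xor_bytes(parity, blk)
    return blocks, parity

def reconstruct(blocks, parity, missing: int) -> bytes:
    """Recover the block at index `missing` from the survivors and parity."""
    out = parity
    for i, blk in enumerate(blocks):
        if i != missing:
            out = xor_bytes(out, blk)
    return out
```

Production erasure codes (e.g. Reed-Solomon) generalize this to tolerate m lost blocks out of k + m, which is what makes the distribution robust to several slow or failed links at once.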
{"title":"Real Relative Encoding Genetic Algorithm for Workflow Scheduling in Heterogeneous Distributed Computing Systems","authors":"Junqiang Jiang;Zhifang Sun;Ruiqi Lu;Li Pan;Zebo Peng","doi":"10.1109/TPDS.2024.3492210","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3492210","url":null,"abstract":"This paper introduces a novel Real Relative encoding Genetic Algorithm (R$^{2}$GA) to tackle the workflow scheduling problem in heterogeneous distributed computing systems (HDCS). R$^{2}$GA employs a unique encoding mechanism, using real numbers to represent the relative positions of tasks in the schedulable task set. Decoding is performed by interpreting these real numbers in relation to the directed acyclic graph (DAG) of the workflow. This approach ensures that any sequence of randomly generated real numbers, produced by cross-over and mutation operations, can always be decoded into a valid solution, as the precedence constraints between tasks are explicitly defined by the DAG. The proposed encoding and decoding mechanism simplifies genetic operations and facilitates efficient exploration of the solution space. This inherent flexibility also allows R$^{2}$GA to be easily adapted to various optimization scenarios in workflow scheduling within HDCS. Additionally, R$^{2}$GA overcomes several issues associated with traditional genetic algorithms (GAs) and existing real-number encoding GAs, such as the generation of chromosomes that violate task precedence constraints and the strict limitations on gene value ranges. 
Experimental results show that R$^{2}$GA consistently delivers superior performance in terms of solution quality and efficiency compared to existing techniques.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 1","pages":"1-14"},"PeriodicalIF":5.6,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142672036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"H5Intent: Autotuning HDF5 With User Intent","authors":"Hariharan Devarajan;Gerd Heber;Kathryn Mohror","doi":"10.1109/TPDS.2024.3492704","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3492704","url":null,"abstract":"The complexity of data management in HPC systems stems from the diversity in I/O behavior exhibited by new workloads, multistage workflows, and multitiered storage systems. The HDF5 library is a popular interface to interact with storage systems in HPC workloads. The library manages the complexity of diverse I/O behaviors by providing user-level configurations to optimize the I/O for HPC workloads. The HDF5 library exposes hundreds of configuration properties that can be set to alter how HDF5 manages I/O requests for better performance. However, determining which properties to set is quite challenging for users who lack expertise in HDF5 library internals. We propose a paradigm change through our H5Intent software, where users specify the intent of I/O operations and the software can set various HDF5 properties automatically to optimize the I/O behavior. This work demonstrates several use cases where mapping user-defined intents to HDF5 properties can be exploited to optimize I/O. In this study, we make three observations. First, I/O intents can accurately define HDF5 properties while managing conflicts between various properties and improving the I/O performance of microbenchmarks by up to 22×. Second, I/O intents can be efficiently passed to HDF5 with a small footprint of 6.74MB per node for thousands of intents per process. Third, an H5Intent VOL connector can dynamically map I/O intents to HDF5 properties for various I/O behaviors exhibited by our microbenchmark and improve I/O performance by up to 8.8×. 
Overall, the H5Intent software improves the I/O performance of the complex large-scale workloads we studied by up to 11×.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 2","pages":"108-119"},"PeriodicalIF":5.6,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
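The core mechanism here, expanding user-stated intents into low-level property settings and resolving conflicts between them, can be sketched in a few lines. The intent names and property values below are invented for illustration; the real connector maps onto actual HDF5 property lists:

```python
# Hypothetical intent-to-property mapping in the spirit of H5Intent: named
# I/O intents expand to property settings, and a fixed priority order
# resolves conflicts when two intents set the same property.

RULES = {
    "write_once_read_many": {"chunk_cache_mb": 64, "sieve_buffer_kb": 1024},
    "streaming_append":     {"chunk_cache_mb": 4, "alignment_kb": 4096},
    "collective_io":        {"use_collective": True},
}

def resolve(intents, priority):
    """Merge property sets; intents later in `priority` win conflicts."""
    props = {}
    for intent in sorted(intents, key=priority.index):
        props.update(RULES[intent])   # later intent overrides earlier one
    return props
```

The point of the design is that users reason about workload behavior ("write once, read many") instead of hundreds of library internals, and the mapping layer owns the conflict-resolution policy.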
{"title":"Dissecting the Software-Based Measurement of CPU Energy Consumption: A Comparative Analysis","authors":"Guillaume Raffin;Denis Trystram","doi":"10.1109/TPDS.2024.3492336","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3492336","url":null,"abstract":"Information and Communications Technologies (ICT) are an increasingly important contributor to the environmental crisis. Computer scientists need tools for measuring the footprint of the code they produce and for optimizing it. Running Average Power Limit (RAPL) is a low-level interface designed by Intel that provides a measure of the energy consumption of a CPU (and more) without the need for additional hardware. Since 2017, it has been available on most x86 processors, including AMD processors. More and more people use RAPL for energy measurement, often as a black box without deep knowledge of its behavior. Unfortunately, this causes mistakes when implementing measurement tools. In this article, we revisit the basic mechanisms that expose RAPL measurements and present a critical analysis of their operation. In addition to long-established mechanisms, we explore the suitability of the recent eBPF technology (formerly an abbreviation for extended Berkeley Packet Filter) for working with RAPL. We release an implementation in Rust that avoids the pitfalls we detected in existing tools, improving correctness, timing accuracy, and performance, with desirable properties for monitoring and profiling parallel applications. We provide an experimental study with multiple benchmarks and processor models to evaluate the efficiency of the various mechanisms and their impact on parallel software. We show that no mechanism provides a significant performance advantage over the others. However, they differ significantly in terms of ease of use and resiliency. 
We believe that this work will help the community to develop correct, resilient, and lightweight measurement tools.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 1","pages":"96-107"},"PeriodicalIF":5.6,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142736584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
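One concrete, well-known pitfall of the kind this article dissects: RAPL energy counters are fixed-width and wrap around, so a naive `curr - prev` goes negative across a wrap. A correct tool computes the delta modulo the counter width (32 bits for the MSR energy-status registers; the powercap sysfs interface wraps at a range it reports separately):

```python
# Wrap-safe delta between two reads of a fixed-width energy counter.
# Assumes at most one wrap between reads (i.e., a sufficiently short
# sampling interval), which is the standard operating assumption.

def energy_delta(prev: int, curr: int, counter_bits: int = 32) -> int:
    """Energy consumed between two counter reads, robust to wraparound."""
    return (curr - prev) % (1 << counter_bits)
```

The raw value is in model-specific energy units (exposed by a separate register), so a real tool multiplies the delta by the unit before reporting joules.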
{"title":"DyLaClass: Dynamic Labeling Based Classification for Optimal Sparse Matrix Format Selection in Accelerating SpMV","authors":"Zheng Shi;Yi Zou;Xianfeng Song;Shupeng Li;Fangming Liu;Quan Xue","doi":"10.1109/TPDS.2024.3488053","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3488053","url":null,"abstract":"Sparse matrix-vector multiplication (SpMV) is crucial in many scientific and engineering applications. Regarding the effectiveness of different sparse matrix storage formats on different architectures, no single format excels across all hardware. Previous research has focused on trying different algorithms to build predictors for the best format, yet it overlooked both how to handle the best format changing within the same hardware environment and how to reduce prediction overhead, rather than merely considering the overhead of building predictors. This paper proposes DyLaClass, a novel classification algorithm for optimizing sparse matrix storage format selection, based on dynamic labeling and flexible feature selection. In particular, we introduce mixed labels and features with strong correlations, allowing us to achieve ultra-high prediction accuracy with minimal feature inputs, significantly reducing feature extraction overhead. For the first time, we propose the concept of the most suitable storage format rather than the best storage format, which can stably predict changes in the best format for the same matrix across multiple SpMV executions. We further demonstrate the proposed method on the University of Florida’s public sparse matrix collection dataset. Experimental results show that, compared to existing work, our method achieves up to 91% classification accuracy. Using two different hardware platforms for verification, the proposed method outperforms existing methods by 1.26 to 1.43 times. 
Most importantly, the stability of the proposed prediction model is 25.5% higher than that of previous methods, greatly increasing the feasibility of the model in practical field applications.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"35 12","pages":"2624-2639"},"PeriodicalIF":5.6,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142595927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
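For context on the kernel being tuned, here is SpMV over the CSR format, one of the candidate storage formats a selector like DyLaClass chooses among (others, such as COO, ELL, and HYB, favor different sparsity patterns and hardware):

```python
# SpMV in Compressed Sparse Row (CSR) form: indptr[row] .. indptr[row+1]
# delimits the nonzeros of each row; indices holds their column positions.

def spmv_csr(indptr, indices, data, x):
    """Compute y = A @ x for a CSR matrix (indptr, indices, data)."""
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        for j in range(indptr[row], indptr[row + 1]):
            y[row] += data[j] * x[indices[j]]
    return y
```

Which format wins depends on row-length variance, nonzero clustering, and the target hardware's memory hierarchy, which is exactly why a learned per-matrix selector pays off.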
{"title":"PeakFS: An Ultra-High Performance Parallel File System via Computing-Network-Storage Co-Optimization for HPC Applications","authors":"Yixiao Chen;Haomai Yang;Kai Lu;Wenlve Huang;Jibin Wang;Jiguang Wan;Jian Zhou;Fei Wu;Changsheng Xie","doi":"10.1109/TPDS.2024.3485754","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3485754","url":null,"abstract":"Emerging high-performance computing (HPC) applications with diverse workload characteristics impose greater demands on parallel file systems (PFSs). PFSs also require more efficient software designs to fully utilize the performance of modern hardware, such as multi-core CPUs, Remote Direct Memory Access (RDMA), and NVMe SSDs. However, existing PFSs show severe limitations under these requirements due to limited multi-core scalability, unawareness of HPC workload characteristics, and disjointed network-storage optimizations. In this article, we present PeakFS, an ultra-high performance parallel file system via computing-network-storage co-optimization for HPC applications. PeakFS introduces a shared-nothing scheduling system based on link-reduced task dispatching with lock-free queues to reduce concurrency overhead. Besides, PeakFS improves I/O performance with flexible distribution strategies, memory-efficient indexing, and metadata caching according to HPC I/O characteristics. Finally, PeakFS shortens the critical path of request processing through network-storage co-optimizations. Experimental results show that the metadata and data performance of PeakFS reaches more than 90% of the hardware limits. 
For metadata throughput, PeakFS achieves a 3.5–19× improvement over GekkoFS and outperforms BeeGFS by three orders of magnitude.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"35 12","pages":"2578-2595"},"PeriodicalIF":5.6,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142595929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
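The shared-nothing idea the abstract names can be illustrated with a tiny dispatcher: every request for a given path hashes to one fixed worker, so per-worker state needs no cross-core locks and each queue can be single-producer/single-consumer. The hash choice is illustrative, not PeakFS's actual function:

```python
# Shared-nothing dispatch sketch: a path always maps to the same worker,
# so that worker owns the path's metadata exclusively and no lock is
# shared across cores on the request path.
import zlib

def dispatch(path: str, num_workers: int) -> int:
    """Pick the owning worker for a path, deterministically."""
    return zlib.crc32(path.encode()) % num_workers
```

Deterministic ownership is what removes the locking from the critical path; the cost is that a hot path can skew load toward one worker, which real systems mitigate with finer-grained keys.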
{"title":"Algorithms for Data Sharing-Aware Task Allocation in Edge Computing Systems","authors":"Sanaz Rabinia;Niloofar Didar;Marco Brocanelli;Daniel Grosu","doi":"10.1109/TPDS.2024.3486184","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3486184","url":null,"abstract":"Edge computing has been developed as a low-latency, data-driven computation paradigm close to the end user to maximize profit and/or minimize energy consumption. Edge computing allows each user’s task to analyze locally acquired sensor data at the edge to reduce resource congestion and improve the efficiency of data processing. To reduce application latency and the amount of data transferred to edge servers, it is essential to consider data sharing for user tasks that operate on the same data items. In this article, we formulate the data sharing-aware allocation problem, whose objectives are the maximization of profit and the minimization of network traffic, by considering the data-sharing characteristics of tasks on servers. Because the problem is NP-hard, we design the DSTA algorithm to find a feasible solution in polynomial time. We investigate the approximation guarantees of DSTA by determining the approximation ratios with respect to the total profit and the amount of total data traffic in the edge network. We also design a variant of DSTA, called DSTAR, that uses a smart rearrangement of tasks to allocate some of the unallocated tasks for increased total profit. 
We perform extensive experiments to investigate the performance of DSTA and DSTAR, and compare them with a representative greedy baseline that only maximizes profit. Our experimental analysis shows that, compared to the baseline, DSTA reduces the total data traffic in the edge network by up to 20% across 45 case study instances at a small profit loss. In addition, DSTAR increases the total profit by up to 27% and the number of allocated tasks by 25% compared to DSTA, all while limiting the increase of total data traffic in the network.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 1","pages":"15-28"},"PeriodicalIF":5.6,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142672115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
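The data-sharing intuition behind the problem can be sketched greedily: place each task on the server that already caches the most of its data items, so shared items are transferred once. This is an illustration of the objective only, not the paper's DSTA algorithm or its approximation guarantees; capacities here are simply task slots:

```python
# Greedy data-sharing-aware placement sketch: a task prefers the server
# that needs the fewest NEW data items transferred for it.

def allocate(tasks, capacity):
    """tasks: list of (task_id, data_items); capacity: server_id -> free slots."""
    cached = {s: set() for s in capacity}
    placement, traffic = {}, 0
    for tid, items in tasks:
        candidates = [s for s, c in capacity.items() if c > 0]
        if not candidates:
            break
        best = min(candidates, key=lambda s: len(set(items) - cached[s]))
        new_items = set(items) - cached[best]
        placement[tid] = best
        traffic += len(new_items)          # only un-cached items travel
        cached[best].update(items)
        capacity[best] -= 1
    return placement, traffic
```

Co-locating tasks that share items is exactly the traffic reduction the experiments quantify; the hard part the paper addresses is doing it with provable bounds on both profit and traffic.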
{"title":"Design and Performance Evaluation of Linearly Extensible Cube-Triangle Network for Multicore Systems","authors":"Savita Gautam;Abdus Samad;Mohammad S. Umar","doi":"10.1109/TPDS.2024.3486219","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3486219","url":null,"abstract":"High-performance interconnection networks are currently used to design massively parallel computers. Selecting the set of nodes on which parallel tasks execute plays a vital role in the performance of such systems. When deployed to run large parallel applications, these networks suffer from communication latencies that ultimately affect system throughput. Mesh and Torus are primary examples of topologies used in such systems. However, they are being replaced with more efficient and complicated hybrid topologies such as the ZMesh and x-Folded TM networks. This paper presents a new topology, named Linearly Extensible Cube-Triangle (LECΔ), which offers low latency, a shorter average distance, and improved throughput. It is symmetrical and exhibits the desirable properties of similar networks with lower complexity and cost. For an N x N network, the LECΔ topology has lower network latency than Mesh, ZMesh, Torus, and x-Folded networks. The proposed LECΔ network achieves reduced average distance, diameter, and cost. It has a high bisection width and good scalability. The simulation results show that the performance of the LECΔ network is comparable to that of the Mesh, ZMesh, Torus, and x-Folded networks. 
The results verify the efficiency of the LECΔ network when evaluated against similar networks.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"35 12","pages":"2596-2607"},"PeriodicalIF":5.6,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142595892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
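The comparison metrics the abstract cites (diameter, average distance, bisection width) are standard graph quantities; diameter and average distance can be computed for any candidate topology by BFS over its adjacency list. The 8-node ring below is only a stand-in for the LECΔ/Mesh/Torus adjacencies, which are not given here:

```python
# Diameter and average shortest-path distance of an unweighted topology.
from collections import deque

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def diameter_and_avg(adj):
    n = len(adj)
    total, diam = 0, 0
    for s in adj:
        d = bfs_dist(adj, s)
        total += sum(d.values())
        diam = max(diam, max(d.values()))
    return diam, total / (n * (n - 1))

# Example topology: an 8-node ring (each node linked to its two neighbors).
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
```

Plugging in the adjacency of each candidate network reproduces exactly the kind of average-distance and diameter comparison the paper reports.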
{"title":"Breaking the Memory Wall for Heterogeneous Federated Learning via Model Splitting","authors":"Chunlin Tian;Li Li;Kahou Tam;Yebo Wu;Cheng-Zhong Xu","doi":"10.1109/TPDS.2024.3480115","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3480115","url":null,"abstract":"Federated Learning (FL) enables multiple devices to collaboratively train a shared model while preserving data privacy. Ever-increasing model complexity coupled with limited memory resources on the participating devices severely bottlenecks the deployment of FL in real-world scenarios. Thus, a framework that can effectively break the memory wall while jointly taking into account the hardware and statistical heterogeneity in FL is urgently required. In this article, we propose SmartSplit, a framework that effectively reduces the memory footprint on the device side while guaranteeing the training progress and model accuracy for heterogeneous FL through model splitting. Towards this end, SmartSplit employs a hierarchical structure to adaptively guide the overall training process. In each training round, the central manager, hosted on the server, dynamically selects the participating devices and sets the cutting layer by jointly considering the memory budget, training capacity, and data distribution of each device. The MEC manager, deployed within the edge server, proceeds to split the local model and perform training of the server-side portion. Meanwhile, it fine-tunes the splitting points based on the time-evolving statistical importance. The on-device manager, embedded inside each mobile device, continuously monitors the local training status while employing cost-aware checkpointing to match the runtime dynamic memory budget. Extensive experiments with representative datasets are conducted on commercial off-the-shelf mobile device testbeds. 
The experimental results show that SmartSplit excels in FL training on highly memory-constrained mobile SoCs, offering up to a 94% peak latency reduction and 100-fold memory savings. It improves accuracy by 1.49%-57.18% and adaptively adjusts to dynamic memory budgets through cost-aware recomputation.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"35 12","pages":"2513-2526"},"PeriodicalIF":5.6,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142524102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
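The cut-layer selection the abstract describes reduces, in its simplest form, to a prefix-sum check against the device memory budget: keep early layers on the device until their cumulative memory would exceed the budget, and train the rest on the edge server. The per-layer costs are invented for illustration; the real central manager also weighs training capacity and data distribution:

```python
# Minimal cut-layer sketch: split the model before the first layer whose
# cumulative memory cost would exceed the device budget.

def choose_cut(layer_mem_mb, budget_mb):
    """Return the index of the first layer to offload to the server."""
    used = 0
    for i, m in enumerate(layer_mem_mb):
        if used + m > budget_mb:
            return i            # layers [0, i) stay on the device
        used += m
    return len(layer_mem_mb)    # whole model fits on the device
```

Because the budget on a mobile SoC changes at runtime, the on-device manager must re-evaluate this choice (and fall back to checkpointing/recomputation) as other apps claim memory.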
{"title":"Mitosis: A Scalable Sharding System Featuring Multiple Dynamic Relay Chains","authors":"Keyuan Wang;Linpeng Jia;Zhaoxiong Song;Yi Sun","doi":"10.1109/TPDS.2024.3480223","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3480223","url":null,"abstract":"Sharding is a prevalent approach for addressing performance issues in blockchain. To reduce governance complexities and ensure system security, a common practice involves a relay chain to coordinate cross-shard transactions. However, with a growing number of shards and cross-shard transactions, the single relay chain is usually the first to become a performance bottleneck and shows poor scalability, making the relay chain's scalability vital for sharding systems. To solve this, we propose Mitosis, the first multi-relay architecture to improve the relay chain's scalability by sharding the relay chain itself. Our proposed relay sharding algorithm dynamically adjusts the number of relays or optimizes the topology between relays and shards to adaptively scale up the relay chain's performance. Furthermore, to guarantee the security of the multi-relay architecture, a new validator reconfiguration scheme is designed, accompanied by a comprehensive security analysis of Mitosis. 
Through simulation experiments on two mainstream relay chain paradigms, we demonstrate that Mitosis achieves high scalability and outperforms state-of-the-art baselines in terms of relay workload, relay chain throughput, and transaction latency.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"35 12","pages":"2497-2512"},"PeriodicalIF":5.6,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10716349","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142518166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
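The dynamic relay-adjustment idea that gives the system its name can be caricatured in a few lines: when a relay's workload crosses a threshold, split it into two relays that share the load. The real algorithm also re-optimizes the relay-shard topology and reconfigures validators, which this sketch omits:

```python
# Toy "mitosis" rebalancing: an overloaded relay divides into two relays,
# each inheriting half the load; relays under the threshold are untouched.

def rebalance(relay_loads, threshold):
    out = []
    for load in relay_loads:
        if load > threshold:
            out.extend([load / 2, load / 2])   # split the overloaded relay
        else:
            out.append(load)
    return out
```

Run iteratively, this keeps every relay's load bounded as cross-shard traffic grows, which is the scalability property the experiments measure.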