{"title":"Predicting mechanical properties of concrete structures using metaheuristic-optimization-based machine learning models","authors":"Ngoc-Mai Nguyen","doi":"10.1016/j.asoc.2025.112893","DOIUrl":"10.1016/j.asoc.2025.112893","url":null,"abstract":"<div><div>More powerful machine learning (ML) models for predicting the behavior of concrete structures could considerably improve the efficiency and safety of civil engineering. This study presents a Metalearning system for systematically integrating metaheuristic optimization algorithms with ML models to create robust hybrid models. The system combines 15 advanced metaheuristic algorithms with 11 powerful ML models, generating over 150 hybrid ensemble models. This research is distinguished by the creation of a comprehensive hybridized ML framework, the development of novel hybrid models not previously explored in the literature, and the demonstrated superiority of metaheuristic-optimized homogeneous ensemble models over traditional ensemble and single hybrid models. The effectiveness of the proposed system was validated through three real-world case studies, showcasing superior predictive performance compared to existing ML models and traditional structural formulas. In particular, a least-squares support vector regression (LSSVR) model optimized with forensic-based investigation (FBI) achieved the highest accuracy for predicting shear strength in two-way flat reinforced concrete slabs; its root mean square error was 73.06 kN, mean absolute error was 42.82 kN, mean absolute percentage error was 10.58 %, and R<sup>2</sup> was 0.970. The FBI-(ANN+LSSVR) outperformed other models in predicting the ultimate bearing capacity of shallow foundations. The FBI-LSSVR-FS model achieved outstanding predictive performance for the construction cost index with MAPE values of 1.23 %. The key contributions of this study are the establishment of a reliable system for advanced ML hybridization to enhance generalizability, development of numerous innovative ensemble models that have not been previously used, and development of a user-friendly interface to support structural engineers in effectively applying ML-based inference models to solve practical problems in the civil engineering.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"172 ","pages":"Article 112893"},"PeriodicalIF":7.2,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An ensemble deep learning network based on 2D convolutional neural network and 1D LSTM with self-attention for bearing fault diagnosis","authors":"Liying Wang, Weiguo Zhao","doi":"10.1016/j.asoc.2025.112889","DOIUrl":"10.1016/j.asoc.2025.112889","url":null,"abstract":"<div><div>Intelligent classification methods based on deep learning (DL) have become widely adopted for bearing fault diagnosis (BFD). However, it is acknowledged that relying on single feature extraction methods may not yield comprehensive representations of the information features. Additionally, DL-based approaches for extracting features from vibration signals typically utilize either one-dimensional (1D) or two-dimensional (2D) networks, which can restrict the network's ability to extract features effectively. In this paper, a time series data representation method called the relative angle matrix (RAM) method is firstly proposed. This method converts 1D time series into 2D images by calculating the angle differences between multiple vectors and a central vector, thereby extracting the hidden spatial features present in the original data. Then, this paper introduces an ensemble deep learning network called 1D2D-EDL, which integrates 1D-based and 2D-based DL mechanisms for feature extraction and classification, leveraging the strengths of each approach. The 1D2D-EDL comprises two channels: the 1D channel combines long short-term memory (LSTM) and multi-head self-attention (MSA) to process raw 1D time series data, facilitating feature extraction in both the time and frequency domains. Meanwhile, the 2D channel employs convolutional neural network (CNN) components to process 2D images for spatial feature extraction, which are derived from the original time series data using the RAM method. Finally, the feature information from these two channels is fused using a feature fusion method. To preliminarily validate the effectiveness of the RAM method, three competitive 2D conversion methods are employed, including Gramian angular difference field (GADF), Gramian angular sum field (GASF), and Markov transition field (MTF). These methods are applied alongside the proposed RAM method within the same CNN network for fault diagnosis testing. The results indicate that the RAM method significantly enhances the diagnostic accuracy of the CNN compared to the other 2D conversion methods. Furthermore, the bearing fault dataset from the University of Ottawa is utilized to validate the performance of the 1D2D-EDL. A comparative analysis with other DL methods using multiple statistical metrics demonstrates the superiority of the 1D2D-EDL. Specifically, when diagnosing faults under four different speed conditions, the 1D2D-EDL attains accuracy rates of 100 %, 99.33 %, 100 %, and 100 %, respectively. This study proposes the incorporation of a novel perspective classifier to enhance DL models for bearing fault diagnosis. 
The source code of RAM is available at <span><span>https://ww2.mathworks.cn/matlabcentral/fileexchange/180197-relative-angle-matrix-ram</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"172 ","pages":"Article 112889"},"PeriodicalIF":7.2,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143474054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
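The exact RAM formula is given only in the linked MATLAB code, so a minimal sketch of one of the well-specified 2D conversion baselines the paper compares against, the Gramian angular summation field (GASF), illustrates the general idea of mapping a 1D series to a 2D angular image.

```python
# Hedged sketch: GASF, one of the 2D conversion baselines named in the paper.
# The series is rescaled to [-1, 1], encoded as polar angles, and expanded
# into a matrix of pairwise angular sums.
import numpy as np

def gasf(x):
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1  # rescale so arccos is defined
    phi = np.arccos(x)                               # polar-angle encoding
    return np.cos(phi[:, None] + phi[None, :])       # pairwise angular sums

image = gasf(np.sin(np.linspace(0, 4 * np.pi, 64)))
print(image.shape)  # (64, 64) image, the kind fed to the 2D CNN channel
```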
{"title":"Comparative analysis of deep learning algorithms for predicting construction project delays in Saudi Arabia","authors":"Saleh Alsulamy","doi":"10.1016/j.asoc.2025.112890","DOIUrl":"10.1016/j.asoc.2025.112890","url":null,"abstract":"<div><div>Construction projects in Saudi Arabia often encounter delays, which present significant challenges to project managers and result in financial losses and stakeholder dissatisfaction. Effectively managing these delays is essential for maintaining project timelines and optimizing resource use. This study explores the hypothesis that advanced deep learning algorithms can significantly improve the prediction and management of construction project delays in Saudi Arabia. The research focuses on three algorithms: Generative Adversarial Networks (GAN), Long Short-Term Memory (LSTM), and Multilayer Perceptron (MLP), evaluating their effectiveness across datasets with varying class imbalances. A structured methodology was employed to assess the algorithms based on key performance metrics, including accuracy, precision, sensitivity, specificity, and misclassification errors. GAN, LSTM, and MLP were trained and tested using real-world construction project data, incorporating tools such as k-fold cross-validation for validation. The GAN model achieved the highest accuracy at 91 %, with a misclassification error of 9 %, outperforming both LSTM (accuracy: 88 %, error: 12 %) and MLP (accuracy: 83 %, error: 17 %). GAN also demonstrated superior precision (90 %) and sensitivity (87 %), making it the most reliable algorithm for delay risk assessment. While LSTM was effective, it had slightly lower precision (88 %) but exhibited strong generalization to unseen data. MLP showed the weakest performance, with higher misclassification rates and lower robustness. These findings suggest that deep learning models, particularly GAN, can significantly improve decision-making and delay mitigation in construction projects.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"172 ","pages":"Article 112890"},"PeriodicalIF":7.2,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143464776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient fuzzy-based high utility pattern computing and analyzing approach with temporal properties","authors":"Unil Yun, Hyeonmo Kim , Hanju Kim , Seungwan Park","doi":"10.1016/j.asoc.2025.112902","DOIUrl":"10.1016/j.asoc.2025.112902","url":null,"abstract":"<div><div>Fuzzy logic in soft computing deals with intuitive and comprehensive intelligence to find solutions to problems in the uncertain real world. Considering the fuzzy set concept and knowledge discovery of utility-driven patterns simultaneously, quantities of sets of items hidden within vast data can be represented in an easy-to-understand linguistic representation. This can lead to more reasonable decision-making. Together with these fascinating results, temporal fuzzy utility pattern analysis has emerged as a significant area in the last few years to consider the duration of transactions in temporal quantitative data. The latest temporal approaches have improved resource efficiency by storing information on patterns with efficient data structures. However, although a list-based approach is known to be robust and follows a mechanism that does not generate candidates, it requires explosive comparison operations that are unsuitable for processing long-length patterns, especially in big data analysis. To solve this issue, we present a novel indexed list-based structure along with a data analysis method designed to allow rapid pattern growth as well as prevent the generation of candidates for discovering high temporal fuzzy utility patterns. Performance tests on real and synthetic datasets demonstrate that the proposed approach exhibits superior time efficiency and scalability relative to state-of-the-art methods with minimal compromise in memory, all while extracting accurate results. Moreover, comprehensive experiments demonstrate the capability of the proposed method for practical use cases and its effectiveness in search space pruning.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"172 ","pages":"Article 112902"},"PeriodicalIF":7.2,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning efficient branch-and-bound for solving Mixed Integer Linear Programs","authors":"Shuhan Du, Junbo Tong, Wenhui Fan","doi":"10.1016/j.asoc.2025.112863","DOIUrl":"10.1016/j.asoc.2025.112863","url":null,"abstract":"<div><div>Mixed Integer Linear Programs (MILPs) are widely used to model various real-world optimization problems, traditionally solved using the branch-and-bound (B&B) algorithm framework. Recent advances in Machine Learning (ML) have inspired enhancements in B&B by enabling data-driven decision-making. Two critical decisions in B&B are node selection and variable selection, which directly influence computational efficiency. While prior studies have applied ML to enhance these decisions, they have predominantly focused on either node selection or variable selection, addressing the decision individually and overlooking the significant interdependence between the two. This paper introduces a novel ML-based approach that integrates both decisions within the B&B framework using a unified neural network architecture. By leveraging a bipartite graph representation of MILPs and employing Graph Neural Networks, the model learns adaptive strategies tailored to different problem types through imitation of expert-designed policies. Experiments on various benchmarks show that the integrated policy adapts better to different problem classes than models targeting individual decisions, delivering strong performance in solving time, search tree size, and optimization dynamics across various configurations. It also surpasses competitive baselines, including the state-of-the-art open-source solver SCIP and a recent reinforcement learning-based approach, demonstrating its potential for broader application in MILP solving.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"172 ","pages":"Article 112863"},"PeriodicalIF":7.2,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143429859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Domain Adaptation via Feature Disentanglement for cross-domain image classification","authors":"Zhi-Ze Wu , Chang-Jiang Du , Xin-Qi Wang , Le Zou , Fan Cheng , Teng Li , Fu-Dong Nian , Thomas Weise , Xiao-Feng Wang","doi":"10.1016/j.asoc.2025.112868","DOIUrl":"10.1016/j.asoc.2025.112868","url":null,"abstract":"<div><div>Image classification is an important application area of soft computing. In many real-world application scenarios, image classifiers are applied to domains that differ from the original training data. This so-called domain shift significantly reduces classification accuracy. To tackle this issue, unsupervised domain adaptation (UDA) techniques have been developed to bridge the gap between source and target domains. These techniques achieve this by transferring knowledge from a labeled source domain to an unlabeled target domain. We develop a novel and effective coarse-to-fine domain adaptation method called Domain Adaptation via Feature Disentanglement (DAFD), which has two new key components: First, our Class-Relevant Feature Selection (CRFS) module disentangles class-relevant features from class-irrelevant ones. This prevents the network from overfitting to irrelevant data and enhances its focus on crucial information for accurate classification. This reduces the complexity of domain alignment, which improves the classification accuracy on the target domain. Second, our Dynamic Local Maximum Mean Discrepancy module DLMMD achieves a fine-grained feature alignment by minimizing the discrepancy among class-relevant features from different domains. The alignment process now becomes more adaptive and contextually sensitive, enhancing the ability of the model to recognize domain-specific patterns and characteristics. The combination of the CRFS and DLMMD modules results in an effective alignment of class-relevant features. Domain knowledge is successfully transferred from the source to the target domain. Our comprehensive experiments on four standard datasets demonstrate that DAFD is robust and highly effective in cross-domain image classification tasks.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"172 ","pages":"Article 112868"},"PeriodicalIF":7.2,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143445320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Solving an imperfect EPQ model with safety stock for type-I and type-II screening error under constrained fuzzy Newton interpolation approach","authors":"Mou Jana , Sujit Kumar De , Adrijit Goswami","doi":"10.1016/j.asoc.2025.112866","DOIUrl":"10.1016/j.asoc.2025.112866","url":null,"abstract":"<div><div>This article deals with an industrial production process of a single item with safety stock and deterioration over time. First of all, we have considered an economic production quantity (EPQ) inventory model where the items are screened multiple times and the screening process itself has Type-I and Type-II errors. Some parts of the imperfect items are reworkable (serviceable) and the unusable items are discarded from the inventory instantly. Incorporating rework cost, disposal cost and screening cost in the inventory process, a total average system cost function has been studied and it has been optimized analytically. But to capture the flexibilities of the demand rate and all unit cost components (for comparative analysis) a fuzzy model has been developed. Indeed, we know that the defuzzification is a crucial step in any fuzzy inferential system, aimed at converting fuzzy outputs into equivalent crisp values for final decision-making. To get the model optimum we optimize the fuzzy membership function developed with the help of Newton’s general interpolation formula for the proposed constrained non-linear optimization problem. The major novelties of this work include the construction of a new fuzzy membership function and techniques of decision making by means of a solution algorithm. For model validation, a numerical example has been analyzed on the basis of a case study and it has been compared with some of the existing methods. Findings reveal that the proposed method dominates others and up to 39.36% cost reduction is possible as a whole. Finally, sensitivity analysis and graphical illustrations have been carried out, followed by scope of future work.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"172 ","pages":"Article 112866"},"PeriodicalIF":7.2,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parallel attribute reduction in high-dimensional data: An efficient MapReduce strategy with fuzzy discernibility matrix","authors":"Pandu Sowkuntla , P.S.V.S. Sai Prasad","doi":"10.1016/j.asoc.2025.112870","DOIUrl":"10.1016/j.asoc.2025.112870","url":null,"abstract":"<div><div>The hybrid paradigm of fuzzy-rough set theory, which combines fuzzy and rough sets, has proven effective in attribute reduction for hybrid decision systems encompassing both numerical and categorical attributes. However, current parallel/distributed approaches are limited to handling datasets with either categorical or numerical attributes and often rely on fuzzy dependency measures. There exists little research on parallel/distributed attribute reduction for large-scale hybrid decision systems. The challenge of handling high-dimensional data in hybrid decision systems necessitates efficient distributed computing techniques to ensure scalability and performance. MapReduce, a widely used framework for distributed processing, provides an organized approach to handling large-scale data. Despite its potential, there is a noticeable lack of attribute reduction techniques that leverage MapReduce’s capabilities with a fuzzy discernibility matrix, which can significantly improve the efficiency of processing high-dimensional hybrid datasets. This paper introduces a vertically partitioned fuzzy discernibility matrix within the MapReduce computation model to address the high dimensionality of hybrid datasets. The proposed MapReduce strategy for attribute reduction minimizes data movement during the shuffle and sort phase, overcoming limitations present in existing approaches. Furthermore, the method’s efficiency is enhanced by integrating a feature known as SAT-region removal, which removes matrix entries that satisfy the maximum satisfiability conditions during the attribute reduction process. Extensive experimental analysis validates the proposed method, demonstrating its superior performance compared to recent parallel/distributed methods in attribute reduction.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"172 ","pages":"Article 112870"},"PeriodicalIF":7.2,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143455106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-driven evolutionary algorithms based on initialization selection strategies, POX crossover and multi-point random mutation for flexible job shop scheduling problems","authors":"Ruxin Zhao , Lixiang Fu , Jiajie Kang , Chang Liu , Wei Wang , Haizhou Wu , Yang Shi , Chao Jiang , Rui Wang","doi":"10.1016/j.asoc.2025.112901","DOIUrl":"10.1016/j.asoc.2025.112901","url":null,"abstract":"<div><div>In the fields of manufacturing and production, the precise solution of the flexible job shop scheduling problem (FJSP) is crucial for improving production efficiency and optimizing resource allocation. However, the complexity of FJSP often leads traditional optimization methods to face high computational costs and lengthy processing times. To address this problem, we propose a data-driven evolutionary algorithm based on initialization selection strategies, POX crossover, and multi-point random mutation (DDEA-PMI). This algorithm replaces the real objective function by constructing a radial basis function (RBF) surrogate model to reduce expensive computational costs and shorten solution time. In the process of solving FJSP, we use global selection (GS), local selection (LS), and random selection (RS) initialization selection strategies to obtain an initial population with high diversity. In order to reduce the generation of infeasible solutions, we use the POX crossover operator, which selects partial gene sequences from the parent generation and maps them to the offspring to preserve excellent features and ensure the feasibility of the solution. In addition, we design a multi-point random mutation operation to enhance the diversity of the population. Through the multi-point mutation strategy, it is able to explore more comprehensively in the solution space to increase the possibility of finding the optimal solution. To verify the effectiveness of DDEA-PMI, we compare it with three same types of data-driven evolutionary algorithms. We compare and analyze the DDEA-PMI with three algorithms after removing one of our proposed strategies. The experimental results show that DDEA-PMI is effective and has advantages in solving FJSP.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"172 ","pages":"Article 112901"},"PeriodicalIF":7.2,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143464774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On generalized Sugeno’s class generator and parametrized intuitionistic fuzzy approach for enhancing low-light images","authors":"Maheshkumar C.V. , David Raj M. , Saraswathi D.","doi":"10.1016/j.asoc.2025.112865","DOIUrl":"10.1016/j.asoc.2025.112865","url":null,"abstract":"<div><div>Enhancing low-light images poses a significant challenge in terms of pixel distortion, color degradation, detail loss, over enhancement and noise amplification, particularly in images that have both low light and normal light region. In recent years, researchers have increasingly turned their attention to intuitionistic fuzzy set based approaches for low light image enhancement due to their flexibility in the representation of a pixel. In this work, the generalized Sugeno’s class of generating function is proposed. Since the parameter value in the existing generating functions lies in an unbounded interval, it is difficult to find the best parameter value. By using the proposed generalized version, a few intuitionistic generating functions are analyzed where the parameter value lies in a bounded interval. A searching algorithm is also proposed to find the parameter value that maximizes the entropy of an image for any membership and generating function. Regardless of the number of decimals, the proposed approach finds the best parameter value iteratively. Then, in HSI color space, an enhancement model is designed utilizing the intuitionistic fuzzy image achieved using best parameter value and contrast-limited adaptive histogram equalization. The proposed method performs better compared to the state-of-the-art models. Also, seven image quality mathematical metrics — entropy, SSIM, correlation coefficient <span><math><mrow><mo>(</mo><mi>r</mi><mo>)</mo></mrow></math></span>, PSNR, AMBE, number of edge pixels <span><math><mrow><mo>(</mo><msub><mrow><mi>N</mi></mrow><mrow><mi>g</mi></mrow></msub><mo>)</mo></mrow></math></span> and the fitness function are implemented to compare the proposed and state-of-the-art models.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"172 ","pages":"Article 112865"},"PeriodicalIF":7.2,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143471604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}