Guangao Wang , Yongzhen Ke , Shuai Yang , Kai Wang , Wen Guo , Fan Qin
{"title":"A novel framework for aesthetic assessment of portrait sketches via multi-feature integration and self-supervised learning","authors":"Guangao Wang , Yongzhen Ke , Shuai Yang , Kai Wang , Wen Guo , Fan Qin","doi":"10.1016/j.eswa.2025.128659","DOIUrl":"10.1016/j.eswa.2025.128659","url":null,"abstract":"<div><div>Image Aesthetic Assessment (IAA) has developed rapidly in recent years, but the automated assessment of sketch portraits, a core part of formal art examinations, remains largely unexplored. To fill this gap, we construct the Sketch Head Portrait Dataset (SHPD), the first large-scale, publicly available dataset containing 14,084 sketch portraits, of which 1,339 are rated by experts. Based on SHPD, we propose the Sketch Paintings Aesthetic Assessment Network (SPAAN), which aims to provide accurate and efficient aesthetic assessment. SPAAN integrates three complementary feature streams: a general feature network captures global compositional cues, while two self-supervised sketch feature networks learn contour-line and value-scale features through an aesthetic quality degradation pretext task. These feature streams are re-weighted and aggregated by a lightweight multi-feature optimization and fusion module that combines a channel attention mechanism with a multi-layer perceptron-based weighting algorithm, simulating the multi-dimensional scoring criteria of real sketch evaluation scenarios. Extensive experiments on SHPD show that SPAAN outperforms mainstream general aesthetic assessment methods, verifying the effectiveness of the adopted self-supervised learning method and fusion strategy. This work advances large-scale portrait sketch assessment and provides a new research direction for artistic image aesthetic assessment (AIAA). 
Dataset and code are available at: <span><span>https://gitee.com/yongzhenke/SPAAN</span></span>.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"292 ","pages":"Article 128659"},"PeriodicalIF":7.5,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144330802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alejandro Rodriguez Dominguez , Muhammad Shahzad , Xia Hong
{"title":"Multi-hypothesis prediction for portfolio optimization: A structured ensemble learning approach to risk diversification","authors":"Alejandro Rodriguez Dominguez , Muhammad Shahzad , Xia Hong","doi":"10.1016/j.eswa.2025.128633","DOIUrl":"10.1016/j.eswa.2025.128633","url":null,"abstract":"<div><div>This work proposes a unified framework for portfolio allocation, covering both asset selection and optimization, based on a multiple-hypothesis predict-then-optimize approach. The portfolio is modeled as a structured ensemble, where each predictor corresponds to a specific asset or hypothesis. Structured ensembles formally link predictors’ diversity, captured via ensemble loss decomposition, to out-of-sample risk diversification. A structured data set of predictor output is constructed with a parametric diversity control, which influences both the training process and the diversification outcomes. This data set is used as input for a supervised ensemble model, the target portfolio of which must align with the ensemble combiner rule implied by the loss. For squared loss, the arithmetic mean applies, yielding the equal-weighted portfolio as the optimal target. For asset selection, a novel method is introduced which prioritizes assets from more diverse predictor sets, even at the expense of lower average predicted returns, through a diversity-quality trade-off. This form of diversity is applied before the portfolio optimization stage and is compatible with a wide range of allocation techniques. Experiments conducted on the full S&P 500 universe and a data set of 1300 global bonds of various types over more than two decades validate the theoretical framework. 
Results show that both sources of diversity effectively extend the boundaries of achievable portfolio diversification, delivering strong performance across both one-step and multi-step allocation tasks.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"292 ","pages":"Article 128633"},"PeriodicalIF":7.5,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144331235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
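The squared-loss combiner rule stated in the abstract (arithmetic-mean combiner, hence an equal-weighted target portfolio) can be illustrated with a short sketch; the function name and array shapes below are our own assumptions, not the authors' code.

```python
import numpy as np

def equal_weight_combiner(predictions: np.ndarray) -> np.ndarray:
    """Arithmetic-mean ensemble combiner implied by squared loss.

    predictions: shape (n_assets, horizon), one row per asset-level predictor.
    Under squared loss the optimal ensemble combiner is the arithmetic mean,
    which corresponds to the equal-weighted (1/N) target portfolio.
    """
    n_assets = predictions.shape[0]
    weights = np.full(n_assets, 1.0 / n_assets)  # 1/N portfolio weights
    return weights @ predictions                 # combined (portfolio) forecast
```

For example, two predictors forecasting [1, 3] and [3, 5] combine to the portfolio forecast [2, 4].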
Jinshan Zeng , Yan Zhang , Yiyang Yuan , Ling Tu , Yefei Wang
{"title":"Few-shot font generation via stroke prompt and hierarchical representation learning","authors":"Jinshan Zeng , Yan Zhang , Yiyang Yuan , Ling Tu , Yefei Wang","doi":"10.1016/j.eswa.2025.128656","DOIUrl":"10.1016/j.eswa.2025.128656","url":null,"abstract":"<div><div>Few-shot font generation aims to generate a specified font from a few reference characters. Existing few-shot font generation methods are broadly categorized into glyph-driven and deep-driven approaches: glyph-driven methods preserve local glyph details well but yield unsatisfactory overall glyphs, while deep-driven methods preserve the overall glyph well but fail to reproduce varied glyph details. In this paper, we propose a novel few-shot font generation model that combines stroke and deep font priors to inherit the merits of both glyph-driven and deep-driven models. Specifically, we incorporate the stroke prior into the model via prompt learning to control the generation of local glyph details represented by strokes, and capture the shape and size variations of characters using the deep font prior via hierarchical representation learning to improve the generation of the overall glyph. We conduct extensive experiments on a dataset of 148 fonts collected by ourselves, which show that the proposed model outperforms state-of-the-art models for Chinese font generation and achieves good performance in zero-shot cross-lingual font generation. 
The codes of the proposed model are available from the link <span><span>https://github.com/JinshanZeng/SPH-Font</span></span>.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"293 ","pages":"Article 128656"},"PeriodicalIF":7.5,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144338262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic-Preserved Generative Adversarial Network optimized with Tyrannosaurus Optimization Algorithm for Liver Disorder Classification","authors":"T. Haritha , A.V. Santhosh Babu","doi":"10.1016/j.eswa.2025.128675","DOIUrl":"10.1016/j.eswa.2025.128675","url":null,"abstract":"<div><div>Liver disease causes around two million deaths annually, accounting for 4% of global deaths. The use of algorithms for early disease prediction in large clinical datasets shows promise, but is often constrained by the complexity and variability of the data. Therefore, a Semantic-Preserved Generative Adversarial Network optimized with the Tyrannosaurus Optimization Algorithm for Liver Disorder Classification (SPGAN<strong>-</strong>TOA-LDC) is proposed in this paper. Input data is collected from the Indian Liver Patient Dataset (ILPD). The gathered data is fed into a preprocessing stage that uses Distributed Set-Membership Fusion Filtering (DSMFF) to remove data redundancy. The Hiking Optimization Algorithm (HOA) is employed to select the optimal features from the dataset. The selected features are supplied to the Semantic-Preserved Generative Adversarial Network (SPGAN) to classify samples as liver disease or non-liver disease. SPGAN alone, however, provides no optimization mechanism for identifying the parameters that ensure precise categorization of liver disorders. The Tyrannosaurus Optimization Algorithm (TOA) is therefore used to optimize the weight parameters of SPGAN. 
The proposed SPGAN<strong>-</strong>TOA-LDC approach achieves 20.58%, 18.73% and 25.62% higher accuracy, and 24.58%, 28.73% and 15.62% higher precision than the existing techniques: a CNN with machine learning models under an Extra Trees Classifier with MRMR feature selection for liver disease identification on the cloud (CNN-ILPD-LDC), early identification of liver disease utilizing hybrid soft computing methods for optimal feature selection and categorization (DNN-ILPD-EDL), and liver disease patient prediction based on integrated projection-based statistical feature extraction and machine learning approaches (KNN-ILPD-LDC), respectively.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"293 ","pages":"Article 128675"},"PeriodicalIF":7.5,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144338261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tauana Ohland dos Santos , Luís Alvaro de Lima Silva , Alfredo Cossetin Neto , Edison Pignaton de Freitas
{"title":"Solving pathfinding problems in cubic grids using 3D neighborhood extension","authors":"Tauana Ohland dos Santos , Luís Alvaro de Lima Silva , Alfredo Cossetin Neto , Edison Pignaton de Freitas","doi":"10.1016/j.eswa.2025.128663","DOIUrl":"10.1016/j.eswa.2025.128663","url":null,"abstract":"<div><div>Pathfinding in three-dimensional environments is essential for solving various application problems. Pathfinding in 3D spaces presents significantly greater complexity than in two-dimensional environments, primarily due to the increased number of potential paths an agent can traverse. This complexity is further compounded by three-dimensional obstacles, which introduce an additional layer of difficulty to pathfinding and necessitate solutions capable of efficiently navigating complex scenarios. Additionally, when considering the third dimension, different movement directions become relevant for identifying low-cost and smooth routes in 3D space. To address these challenges, this work investigates an innovative technique called <em>3D Neighborhood Expansion</em>, which uniformly expands the neighborhood search in three-dimensional space. The proposed 3D neighborhood expansion is then integrated into relevant path-smoothing algorithms. The primary goal is to analyze the impact of expanding the neighborhood search, controlled by the parameter <span><math><mi>k</mi></math></span>, on the performance of pathfinding algorithms in 3D environments. Specifically, this work examines whether increasing <span><math><mi>k</mi></math></span> results in more direct and smoother paths. The technique is tested using voxel-based maps, which offer realistic representations of 3D space. Based on a statistical analysis of various path search metrics, experiments conducted with the A*, Theta*, and JPS algorithms demonstrate that expanding the neighborhood significantly improves the quality of the resulting paths as <span><math><mi>k</mi></math></span> increases. 
These findings are crucial for enhancing practical applications in computer games, robotics, and simulation systems.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"292 ","pages":"Article 128663"},"PeriodicalIF":7.5,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144330801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
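The k-controlled uniform neighborhood expansion described above can be sketched as follows; this is our own minimal illustration, not the authors' implementation (the function name and voxel representation are assumptions). Expanding with parameter k means considering every integer offset inside a (2k+1)³ cube around the current cell, so k=1 recovers the classic 26-connected neighborhood and larger k adds longer, more direct candidate moves.

```python
from itertools import product

def neighborhood_offsets_3d(k: int) -> list[tuple[int, int, int]]:
    """All integer moves within a (2k+1)^3 cube around a voxel, origin excluded.

    k=1 gives the standard 26-connected 3D neighborhood; increasing k
    uniformly expands the search neighborhood, enabling more direct and
    smoother candidate paths for A*-, Theta*-, or JPS-style searches.
    """
    return [(dx, dy, dz)
            for dx, dy, dz in product(range(-k, k + 1), repeat=3)
            if (dx, dy, dz) != (0, 0, 0)]
```

A search algorithm would still need to check each offset for obstacle collisions along the movement segment before accepting it as a valid successor.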
Jinghao Lu , Fan Zhang , Xiaofeng Zhang , Yujuan Sun , Hua Wang
{"title":"MCNR: Multiscale feature-based latent data component extraction linear regression model","authors":"Jinghao Lu , Fan Zhang , Xiaofeng Zhang , Yujuan Sun , Hua Wang","doi":"10.1016/j.eswa.2025.128634","DOIUrl":"10.1016/j.eswa.2025.128634","url":null,"abstract":"<div><div>Time series forecasting is of great significance in various fields and is widely applied in industries such as finance and energy management. However, time series data often contains rich periodic features, and there is a high correlation between these features, with the core trend often being implicit in certain features. Therefore, effectively separating and extracting core information from complex multidimensional data, while avoiding noise interference, has become an urgent problem to be solved. To address this, we propose a multi-scale latent feature extraction model, MCNR, for specific periods. It assigns weighted labels to periodic data points through a backward-weighted periodic module, thereby giving more weight to the periodic points and allowing the model to focus on key periodic features. Another core innovation of the MCNR model is the division of the retrospective window into different scales to capture long-term, mid-term, and short-term time features. For data of different scales, the model uses the Regularized Latent Component Regression (RLC) module for latent component extraction and regularization. By focusing on the correlation between each dimension and the predicted value, it uses principal component analysis to extract linear combinations of multivariate features, thus effectively separating the core regions of the data. This process significantly improves the model’s adaptability to different time series structures. Additionally, MCNR introduces a Multilevel Data Normalization (MDN) module. 
Through the reversibility of MDN, the model adapts to distribution differences in the data, normalizing features by their mean and standard deviation; this removes trend and seasonal components and further enhances the model’s stability, robustness, and generalization ability. Compared to the latest mainstream models, MCNR achieves comparable experimental results with only approximately 10k parameters, and in experiments on multiple datasets it improved the Mean Squared Error (MSE) by 6.58 %.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"292 ","pages":"Article 128634"},"PeriodicalIF":7.5,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144322748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic pricing and inventory control of perishable products by a deep reinforcement learning algorithm","authors":"Alireza Kavoosi , Reza Tavakkoli-Moghaddam , Hedieh Sajedi , Nazanin Tajik , Keivan Tafakkori","doi":"10.1016/j.eswa.2025.128570","DOIUrl":"10.1016/j.eswa.2025.128570","url":null,"abstract":"<div><div>Using continuous action spaces to set prices and order quantities simultaneously, this study proposes a unified deep reinforcement learning (DRL) framework for dynamic pricing and perishable-inventory control in a vendor-managed environment. Sales revenue is combined with penalties for spoilage, returns, and transport costs to create a multi-component reward that reflects profit. We incorporate a potential-based shaping term <span><math><mrow><mstyle><mi>Φ</mi></mstyle><mo>(</mo><mi>s</mi><mo>)</mo></mrow></math></span> constructed from inventory heuristics to direct exploration and shorten training time, guaranteeing no change in policy optimality. Our empirical study, which includes seasonal demand and random returns, shows that an agent based on proximal policy optimization achieves higher cumulative reward and service level than other DRL algorithms and classical benchmarks.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"291 ","pages":"Article 128570"},"PeriodicalIF":7.5,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144296786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
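The potential-based shaping term mentioned in the abstract has a standard form with a well-known policy-invariance guarantee (Ng, Harada & Russell, 1999): the shaped reward is r + γΦ(s′) − Φ(s). A minimal sketch, with `phi` standing in for the paper's inventory-heuristic potential (the function signature is our assumption):

```python
from typing import Callable, TypeVar

S = TypeVar("S")

def shaped_reward(r: float, s: S, s_next: S,
                  phi: Callable[[S], float], gamma: float = 0.99) -> float:
    """Potential-based shaping: r' = r + gamma * phi(s') - phi(s).

    Because the shaping term telescopes along any trajectory, it changes
    returns only by a state-dependent constant, leaving the optimal policy
    unchanged while providing denser feedback during exploration.
    """
    return r + gamma * phi(s_next) - phi(s)
```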
Qiuzhan Zhou, Yinggang Li, Cong Wang, Jingsong Wang
{"title":"A deep reinforcement learning framework for optimized dummy pad placement in PCB electroplating","authors":"Qiuzhan Zhou, Yinggang Li, Cong Wang, Jingsong Wang","doi":"10.1016/j.eswa.2025.128639","DOIUrl":"10.1016/j.eswa.2025.128639","url":null,"abstract":"<div><div>The placement of dummy pads is critical in large-scale printed circuit board (PCB) design to ensure uniform copper plating. However, space constraints make dense arrangement difficult in blank areas. Thus, an efficient layout strategy is urgently needed to achieve optimal plating uniformity with limited dummy pads. Existing algorithms struggle with large search spaces, high evaluation costs, and limited accuracy due to the complexity and scale of full-board PCB optimization. To address these challenges, we propose a model framework that integrates neural network prediction with reinforcement learning optimization. The PCB is first partitioned into smaller sub-regions to reduce computational complexity. A reward network is then constructed, incorporating spatial pyramid pooling and external attention mechanisms to capture both local and global spatial features. This network enables accurate prediction of plating outcomes under different dummy pad configurations. Guided by these predictions, we employ the proximal policy optimization (PPO) algorithm to train a dummy pad layout network that autonomously explores optimal design strategies. Importantly, a real-time reward function is introduced to decompose the contribution of each dummy pad, effectively mitigating the sparse reward problem and accelerating convergence. Experimental results demonstrate that our approach outperforms traditional engineering methods in terms of copper thickness uniformity and layout efficiency. 
Furthermore, it reveals hidden design patterns and underlying physical principles, providing both practical guidance and theoretical insight into PCB layout optimization.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"292 ","pages":"Article 128639"},"PeriodicalIF":7.5,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144330877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Addressing multi-scale temporal variability: deep integration and application of the CNN and transformer model in monthly streamflow prediction","authors":"Jinsheng Fan , Guo-An Yu , Mingmeng Zhao , Hucheng Zong","doi":"10.1016/j.eswa.2025.128658","DOIUrl":"10.1016/j.eswa.2025.128658","url":null,"abstract":"<div><div>Accurate monthly streamflow prediction is essential for effective water resource management, hydropower operation, and ecological sustainability. However, streamflow processes are inherently nonlinear and exhibit considerable multiscale temporal variability, driven by both natural conditions and potential anthropogenic influences. To address these challenges, we propose a novel hybrid deep learning model, ISVM-CovTransformer, which integrates the Improved Sparrow Search Algorithm (ISSA), Variational Mode Decomposition (VMD), Mutual Information (MI), and a composite CovTransformer architecture. Within this framework, ISSA is utilized to optimize the parameters of VMD for efficient signal decomposition, while MI is employed to identify informative input features with strong predictive relevance. The CovTransformer model, combining Convolutional Neural Networks (CNN) and Transformer layers, enables the simultaneous extraction of localized temporal patterns and long-range dependencies, thereby enhancing the model’s ability to capture complex runoff dynamics and improve prediction accuracy.</div><div>Using monthly precipitation and streamflow data from the Tangnaihai, Toudaoguai, and Huayuankou hydrological stations, experimental results demonstrate that the proposed model outperforms baseline approaches. 
Specifically, during the testing phase, the model achieved an NSC of 0.9686, RMSE of 91.99 m<sup>3</sup>/s, MAE of 70.90 m<sup>3</sup>/s, R<sup>2</sup> of 0.9702, and a PBIAS of −1.198 % at Tangnaihai; an NSC of 0.9498, RMSE of 90.35 m<sup>3</sup>/s, MAE of 724.63 m<sup>3</sup>/s, R<sup>2</sup> of 0.9554, and a PBIAS of 2.573 % at Toudaoguai; and an NSC of 0.9302, RMSE of 174.36 m<sup>3</sup>/s, MAE of 44.42 m<sup>3</sup>/s, R<sup>2</sup> of 0.9393, and a PBIAS of 3.309 % at Huayuankou. These findings confirm the proposed model’s effectiveness for monthly streamflow forecasting and suggest that it provides a theoretically sound and generalizable framework, with potential extensions to related hydrological applications such as sediment transport modeling.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"292 ","pages":"Article 128658"},"PeriodicalIF":7.5,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144322777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
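For reference, the NSC (Nash–Sutcliffe coefficient) and PBIAS figures quoted above follow standard hydrological definitions; a minimal sketch of those formulas (our own code, using one common sign convention for PBIAS):

```python
import numpy as np

def nsc(obs, sim) -> float:
    """Nash-Sutcliffe coefficient: 1 - SSE / variance-about-mean of obs."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim) -> float:
    """Percent bias: 100 * sum(obs - sim) / sum(obs); sign conventions vary."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)
```

A perfect forecast gives NSC = 1 and PBIAS = 0; predicting the observed mean everywhere gives NSC = 0.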
You Keshun , Liu Chenlu , Lin Yanghui , Qiu Guangqi , Gu Yingku
{"title":"DTMPI-DIVR: A digital twins for multi-margin physical information via dynamic interaction of virtual and real sound-vibration signals for bearing fault diagnosis without real fault samples","authors":"You Keshun , Liu Chenlu , Lin Yanghui , Qiu Guangqi , Gu Yingku","doi":"10.1016/j.eswa.2025.128592","DOIUrl":"10.1016/j.eswa.2025.128592","url":null,"abstract":"<div><div>Traditional methods for bearing fault diagnosis have limitations, such as relying on unimodal modelling and requiring a large amount of labelled fault data. To address these issues, a multi-margin physical information digital twin framework (DTMPI-DIVR) is proposed. This framework is based on the dynamic interaction of real and virtual signals and can realize bearing fault diagnosis without real fault samples. A 15-degree-of-freedom nonlinear sound-vibration coupled dynamics model is constructed to simulate the complex behaviour of rotating machinery, and a signal decoupling algorithm is introduced to extract independent fault-related margin information from multimodal signals. Gaussian process regression (GPR) is used to construct a reduced-order agent model, and six significance parameters are screened by sensitivity analysis to achieve efficient evaluation and optimization of physical hyperparameters. Moreover, virtual fault signals are generated based on the initial hyperparameters and compared with the actual signals, and the hyperparameters are optimized using the Firefly algorithm with the time–frequency domain relevance error threshold as the objective function. The time–frequency domain relevance error is continuously calculated through real-time simulation, and the saliency parameters are dynamically updated to ensure that the physical and actual working conditions are consistent. 
The experiments show that the diagnosis accuracy reaches 94.5 % when learning without real fault data and remains at 92.38 % under −2 dB noise, comprehensively surpassing existing methods and verifying the advantages of the sound-vibration signal fusion strategy and the digital twinning of multi-margin physical information.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"292 ","pages":"Article 128592"},"PeriodicalIF":7.5,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144297354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}