Expert Systems: Latest Articles

Imbalanced survival prediction for gastric cancer patients based on improved XGBoost with cost sensitive and focal loss
IF 3.0 | Q4 (Computer Science)
Expert Systems | Pub Date: 2024-07-02 | DOI: 10.1111/exsy.13666
Authors: Liangchen Xu, Chonghui Guo
Abstract: Accurate prediction of gastric cancer survival status is a task of great significance for clinical decision-making. Many advanced machine learning classification techniques have been applied to predict the three- or five-year survival status of cancer patients; however, many of them have low sensitivity because of class imbalance. This is a non-negligible problem given the poor prognosis of gastric cancer patients. Furthermore, models in the medical domain require strong interpretability to increase their applicability. Owing to the better performance and interpretability of the XGBoost model, we design a loss function for XGBoost that incorporates cost-sensitive and focal loss at the algorithm level to deal with the imbalance problem. We apply the improved model to the prediction of the survival status of gastric cancer patients and analyse the important related features. We use two types of indicators to evaluate the model, and we also compare the two models through the confusion matrices of their predictions. The results show that the improved model performs better. Furthermore, we calculate the importance of survival-related features over three different time periods and analyse their evolution; the findings are consistent with existing clinical research or further extend its conclusions. All of this supports clinically relevant decision-making and has the potential to be extended to survival prediction for other cancer patients.
Citations: 0
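The core algorithmic idea above, reweighting XGBoost's loss so that errors on the minority (poor-prognosis) class cost more, can be prototyped through XGBoost's custom-objective hook. The sketch below implements only a cost-sensitive weighted logistic objective; the focal-loss term and the paper's exact weighting scheme are omitted, and the `alpha` value is an illustrative assumption.

```python
import numpy as np
import xgboost as xgb

def cost_sensitive_logloss(alpha: float = 4.0):
    """Weighted binary logistic objective: errors on the minority class (y = 1)
    cost `alpha` times more than errors on the majority class."""
    def objective(preds, dtrain):
        y = dtrain.get_label()
        p = 1.0 / (1.0 + np.exp(-preds))   # sigmoid of the raw margin
        w = np.where(y == 1, alpha, 1.0)   # per-sample cost weight
        grad = w * (p - y)                 # first derivative w.r.t. the margin
        hess = w * p * (1.0 - p)           # second derivative w.r.t. the margin
        return grad, hess
    return objective

# Illustrative usage (X_train, y_train are assumed to exist):
# dtrain = xgb.DMatrix(X_train, label=y_train)
# booster = xgb.train({"max_depth": 4, "eta": 0.1}, dtrain,
#                     num_boost_round=200, obj=cost_sensitive_logloss(alpha=4.0))
```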
Portfolio construction using explainable reinforcement learning
IF 3.0 | Q4 (Computer Science)
Expert Systems | Pub Date: 2024-07-02 | DOI: 10.1111/exsy.13667
Authors: Daniel González Cortés, Enrique Onieva, Iker Pastor, Laura Trinchera, Jian Wu
Abstract: While machine learning's role in financial trading has advanced considerably, challenges of algorithmic transparency and explainability still exist. This research enriches prior studies focused on high-frequency financial data prediction by introducing an explainable reinforcement learning model for portfolio management. This model transcends basic asset prediction, formulating concrete, actionable trading strategies. The methodology is applied in a custom trading environment mimicking the financial conditions of the CAC-40 index, allowing the model to adapt dynamically to market changes based on iterative learning from historical data. Empirical findings reveal that the model outperforms an equally weighted portfolio in out-of-sample tests. The study offers a dual contribution: it elevates algorithmic planning while significantly boosting transparency and interpretability in financial machine learning. This approach tackles the enduring ‘black-box’ issue and provides a holistic, transparent framework for managing investment portfolios.
Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/exsy.13667
Citations: 0
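To make the "custom trading environment" idea concrete, the sketch below shows a minimal Gymnasium-style long-only portfolio environment: the agent emits one score per asset, the scores become weights through a softmax, and the reward is the portfolio log-return. The asset universe, observation window, and reward definition are illustrative assumptions, not the authors' CAC-40 setup.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class PortfolioEnv(gym.Env):
    """Toy long-only portfolio environment over a matrix of asset returns."""

    def __init__(self, returns: np.ndarray, window: int = 20):
        super().__init__()
        self.returns = returns.astype(np.float64)   # shape: (T, n_assets)
        self.window = window
        n_assets = returns.shape[1]
        self.action_space = spaces.Box(-1.0, 1.0, shape=(n_assets,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf,
                                            shape=(window, n_assets), dtype=np.float32)

    def _obs(self):
        # Observation: the most recent `window` rows of asset returns.
        return self.returns[self.t - self.window:self.t].astype(np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t = self.window
        return self._obs(), {}

    def step(self, action):
        # Softmax turns raw scores into long-only weights that sum to one.
        w = np.exp(action - action.max())
        w /= w.sum()
        step_return = float(self.returns[self.t] @ w)
        reward = float(np.log1p(step_return))       # log growth of wealth
        self.t += 1
        terminated = self.t >= len(self.returns)
        return self._obs(), reward, terminated, False, {}

# Illustrative usage: env = PortfolioEnv(daily_returns); obs, info = env.reset()
```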
An efficient object tracking based on multi-head cross-attention transformer
IF 3.3 | Q4 (Computer Science)
Expert Systems | Pub Date: 2024-07-01 | DOI: 10.1111/exsy.13650
Authors: Jiahai Dai, Huimin Li, Shan Jiang, Hongwei Yang
Abstract: Object tracking is an essential component of computer vision and plays a significant role in various practical applications. Recently, transformer-based trackers have become the predominant approach to tracking because of their robustness and efficiency. However, existing transformer-based trackers typically focus solely on the template features, neglecting the interactions between the search features and the template features during tracking. To address this issue, this article introduces a multi-head cross-attention transformer for visual tracking (MCTT), which effectively enhances the interaction between the template branch and the search branch, enabling the tracker to prioritize discriminative features. Additionally, an auxiliary segmentation mask head is designed to produce a pixel-level feature representation, enhancing tracking accuracy by predicting a set of binary masks. Comprehensive experiments on benchmark datasets such as LaSOT, GOT-10k, UAV123 and TrackingNet against various advanced methods demonstrate that the approach achieves promising tracking performance. MCTT achieves an AO score of 72.8 on GOT-10k.
Citations: 0
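The operation described here, letting search-region tokens attend to template tokens, maps directly onto standard multi-head cross-attention. The sketch below uses PyTorch's `nn.MultiheadAttention` with the search features as queries and the template features as keys and values; the embedding size, head count, and residual-plus-norm wiring are illustrative assumptions rather than MCTT's exact architecture.

```python
import torch
import torch.nn as nn

class TemplateSearchCrossAttention(nn.Module):
    """Search tokens (queries) attend to template tokens (keys/values)."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, search_tokens: torch.Tensor, template_tokens: torch.Tensor):
        # search_tokens: (B, N_search, dim), template_tokens: (B, N_template, dim)
        fused, attn_weights = self.attn(query=search_tokens,
                                        key=template_tokens,
                                        value=template_tokens)
        # Residual connection keeps the original search features in the mix.
        return self.norm(search_tokens + fused), attn_weights

# Example shapes: a 16x16 search grid attending to an 8x8 template grid.
# x = torch.randn(2, 256, 256); z = torch.randn(2, 64, 256)
# out, w = TemplateSearchCrossAttention()(x, z)   # out: (2, 256, 256)
```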
Crowdfunding performance prediction using feature-selection-based machine learning models
IF 3.0 | Q4 (Computer Science)
Expert Systems | Pub Date: 2024-06-27 | DOI: 10.1111/exsy.13646
Authors: Yuanyue Feng, Yuhong Luo, Nianjiao Peng, Ben Niu
Abstract:
Background: Crowdfunding is increasingly favoured by entrepreneurs for online financing. Predicting crowdfunding success can provide valuable guidance for stakeholders. Evaluating the relative performance of different machine learning algorithms for crowdfunding prediction is a new attempt.
Objectives: This study aims to identify the key factors of crowdfunding and to compare the performance and usage of machine learning algorithms for crowdfunding prediction.
Method: We crawled data from MoDian.com, a Chinese crowdfunding platform, and predicted crowdfunding performance using four machine learning algorithms, which is a new exploration in this area. Most of the existing literature focuses on empirical analysis. This work predicts crowdfunding performance from a dataset with a minimal number of highly contributive features and achieves higher accuracy than regression analysis.
Results: The experimental results show that feature-selection-based machine learning models are effective and beneficial in crowdfunding prediction.
Conclusion: Feature selection can significantly improve the prediction performance of the machine learning models. KNN achieved the best prediction results with five features: number of backers, target amount, number of project likes, number of project comments, and sponsor fans. Accuracy improved by 16%, precision by 13.23%, recall by 22.66%, F-score by 18.48%, and AUC by 14.9%.
Citations: 0
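The reported pipeline, selecting a handful of highly contributive features and then fitting KNN, is straightforward to express with scikit-learn. The sketch below keeps five features and classifies with KNN; the selector, scaler, and k value are illustrative assumptions, and the feature columns are assumed to include the five named in the abstract.

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# X: project-level features (backers, target amount, likes, comments, sponsor fans, ...)
# y: binary crowdfunding success label.
pipeline = Pipeline([
    ("scale", StandardScaler()),                         # KNN is distance-based, so scale first
    ("select", SelectKBest(score_func=f_classif, k=5)),  # keep the 5 most contributive features
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])

# Illustrative evaluation (X, y are assumed to exist):
# scores = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy")
# print(scores.mean())
```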
AES software and hardware system co-design for resisting side channel attacks
IF 3.0 | Q4 (Computer Science)
Expert Systems | Pub Date: 2024-06-26 | DOI: 10.1111/exsy.13664
Authors: Liguo Dong, Xinliang Ye, Libin Zhuang, Ruidian Zhan, M. Shamim Hossain
Abstract: The threat of side-channel attacks poses a significant risk to the security of cryptographic algorithms. To counter this threat, we have designed an AES system capable of defending against such attacks, supporting the AES-128, AES-192, and AES-256 encryption standards. In our system, the CPU oversees the AES hardware via the AHB bus and employs true random number generation to provide secure random inputs for computations. The hardware implementation of the AES S-box utilizes complex domain inversion techniques, while intermediate data is shielded using full-time masking. Furthermore, the system incorporates double-path error detection mechanisms to thwart fault propagation. Our results demonstrate that the system effectively conceals key power information, provides robust resistance against CPA attacks, and is capable of detecting injected faults, thereby mitigating fault-based attacks.
Citations: 0
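One software-level building block behind "full-time masking" is first-order Boolean masking of table look-ups: the S-box table is recomputed so that it accepts a masked input and returns a masked output, and the true intermediate value never appears in the computation. The sketch below demonstrates the idea on a toy 4-bit S-box; it is a didactic illustration of masked table recomputation, not the paper's hardware design and not the real AES S-box.

```python
import secrets

# Toy 4-bit S-box (NOT the AES S-box), used only to illustrate the masking scheme.
TOY_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
            0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def masked_sbox_lookup(x: int) -> int:
    """Compute S(x) via a masked table; in a real design x would already
    arrive as a masked share rather than being masked inside this function."""
    m_in = secrets.randbelow(16)             # fresh input mask
    m_out = secrets.randbelow(16)            # fresh output mask
    # Recompute a masked table: T[v ^ m_in] = S(v) ^ m_out for every v.
    masked_table = [0] * 16
    for v in range(16):
        masked_table[v ^ m_in] = TOY_SBOX[v] ^ m_out
    x_masked = x ^ m_in                      # masked share of the input
    y_masked = masked_table[x_masked]        # equals S(x) ^ m_out
    return y_masked ^ m_out                  # unmask only at the end, for the demo

# Sanity check: the masked look-up agrees with the plain S-box for all inputs.
assert all(masked_sbox_lookup(x) == TOY_SBOX[x] for x in range(16))
```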
Facial emotion recognition: A comprehensive review
IF 3.0 | Q4 (Computer Science)
Expert Systems | Pub Date: 2024-06-26 | DOI: 10.1111/exsy.13670
Authors: Manmeet Kaur, Munish Kumar
Abstract: Facial emotion recognition (FER) represents a significant outcome of the rapid advancements in artificial intelligence (AI) technology. In today's digital era, the ability to decipher emotions from facial expressions has evolved into a fundamental mode of human interaction and communication. As a result, FER has penetrated diverse domains, including but not limited to medical diagnosis, customer feedback analysis, the automation of automobile driver systems, and the evaluation of student comprehension. Furthermore, it has matured into a captivating and dynamic research field, capturing the attention and curiosity of contemporary scholars and scientists. The primary objective of this paper is to provide an exhaustive review of FER systems. Its significance goes beyond offering a comprehensive resource; it also serves as a valuable guide for emerging researchers in the FER domain. Through a meticulous examination of existing FER systems and methodologies, this review equips them with essential insights and guidance for their future research pursuits. Moreover, this comprehensive review contributes to the expansion of their knowledge base, facilitating a profound understanding of this rapidly evolving field. In a world increasingly dependent on technology for communication and interaction, the study of FER holds a pivotal role in human-computer interaction (HCI). It not only provides valuable insights but also unlocks a multitude of possibilities for future innovations and applications. As we continue to integrate AI and facial emotion recognition into our daily lives, the importance of comprehending and enhancing FER systems becomes increasingly evident. This paper serves as a stepping stone for researchers, nurturing their involvement in this exciting and ever-evolving field.
Citations: 0
A multi-focus image fusion network deployed in smart city target detection
IF 3.3 | Q4 (Computer Science)
Expert Systems | Pub Date: 2024-06-26 | DOI: 10.1111/exsy.13662
Authors: Haojie Zhao, Shuang Guo, Gwanggil Jeon, Xiaomin Yang
Abstract: In the global monitoring of smart cities, the demands of object detection systems based on cloud and fog computing in intelligent systems can be satisfied by photographs with globally focused properties. Nevertheless, conventional techniques are constrained by the imaging depth of field and can produce artefacts or indistinct borders, which can be disastrous for accurately detecting objects. In light of this, this paper proposes an artificial intelligence-based gradient learning network that gathers and enhances domain information at different scales in order to produce globally focused fusion results. Gradient features, which carry abundant boundary information, can eliminate the problem of border artefacts and blur in multi-focus fusion. The multiple-receptive module (MRM) facilitates effective information sharing and enables the capture of object properties at different scales. In addition, with the assistance of the global enhancement module (GEM), the network can effectively combine the scale features and gradient data from various receptive fields and reinforce the features to produce precise decision maps. Numerous experiments demonstrate that the approach outperforms seven of the most advanced algorithms currently in use.
Citations: 0
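The emphasis on gradient features, which carry the boundary information that decides which source image is in focus at each pixel, can be illustrated with a fixed Sobel operator. The sketch below extracts per-pixel gradient magnitude in PyTorch; it is an illustrative front end only, not the paper's gradient learning network, MRM, or GEM modules, and the crude focus map in the usage comment is an assumption.

```python
import torch
import torch.nn.functional as F

# Fixed 3x3 Sobel kernels shaped for conv2d: (out_channels, in_channels, H, W).
SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)

def gradient_magnitude(gray: torch.Tensor) -> torch.Tensor:
    """gray: (B, 1, H, W) grayscale batch -> (B, 1, H, W) Sobel gradient magnitude."""
    gx = F.conv2d(gray, SOBEL_X, padding=1)
    gy = F.conv2d(gray, SOBEL_Y, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)

# A crude focus decision map: the source with the larger local gradient "wins" each pixel.
# focus_map = (gradient_magnitude(img_a) > gradient_magnitude(img_b)).float()
# fused = focus_map * img_a + (1.0 - focus_map) * img_b
```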
Dual resource constrained flexible job shop scheduling with sequence-dependent setup time
IF 3.0 | Q4 (Computer Science)
Expert Systems | Pub Date: 2024-06-25 | DOI: 10.1111/exsy.13669
Authors: Sasan Barak, Shima Javanmard, Reza Moghdani
Abstract: This study addresses the imperative need for efficient solutions to the dual resource constrained flexible job shop scheduling problem with sequence-dependent setup times (DRCFJS-SDST). We introduce a pioneering tri-objective mixed-integer linear mathematical model tailored to this complex challenge. Our model optimizes the assignment of operations to candidate multi-skilled machines and operators, with the primary goals of minimizing operators' idleness cost and sequence-dependent setup time-related expenses. Additionally, it aims to mitigate total tardiness and earliness penalties while regulating the maximum machine workload. Given the NP-hard nature of the proposed DRCFJS-SDST, we employ the epsilon constraint method to derive exact optimal solutions for small-scale problems. For larger instances, we develop a modified variant of the multi-objective invasive weed optimization (MOIWO) algorithm, enhanced by a fuzzy sorting algorithm for competitive exclusion. In the absence of established benchmarks in the literature, we validate our solutions against those generated by multi-objective particle swarm optimization (MOPSO) and the non-dominated sorting genetic algorithm (NSGA-II). Through comparative analysis, we demonstrate the superior performance of MOIWO. Specifically, compared with NSGA-II, MOIWO achieves a success rate of 90.83% and shows similar performance in 4.17% of cases. Moreover, compared with MOPSO, MOIWO achieves a success rate of 84.17% and exhibits similar performance in 9.17% of cases. These findings contribute significantly to the advancement of scheduling optimization methodologies.
Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/exsy.13669
Citations: 0
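Comparing MOIWO, NSGA-II, and MOPSO on a tri-objective minimization problem ultimately rests on Pareto dominance between candidate schedules. The helper below filters a set of objective vectors (for example idle/setup cost, tardiness-plus-earliness penalty, and maximum machine workload, all minimized) down to its non-dominated front; it is a generic utility sketch, not part of the authors' algorithm, and the example numbers are made up.

```python
import numpy as np

def non_dominated(objectives: np.ndarray) -> np.ndarray:
    """objectives: (n_solutions, n_objectives), all to be minimized.
    Returns a boolean mask marking the Pareto-optimal (non-dominated) rows."""
    n = objectives.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue
        # Solution j dominates i if j is <= i in every objective and < in at least one.
        dominates_i = (np.all(objectives <= objectives[i], axis=1) &
                       np.any(objectives < objectives[i], axis=1))
        if dominates_i.any():
            mask[i] = False
    return mask

# Illustrative usage with three candidate schedules and three objectives:
# fronts = np.array([[10, 3, 7], [12, 2, 7], [11, 4, 9]])
# print(non_dominated(fronts))   # [True, True, False]: the third is dominated by the first
```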
ImageVeriBypasser: An image verification code recognition approach based on Convolutional Neural Network
IF 3.0 | Q4 (Computer Science)
Expert Systems | Pub Date: 2024-06-25 | DOI: 10.1111/exsy.13658
Authors: Tong Ji, Yuxin Luo, Yifeng Lin, Yuer Yang, Qian Zheng, Siwei Lian, Junjie Li
Abstract: Recent years have witnessed automated crawlers designed to crack passwords automatically, which puts many aspects of our lives at great risk. To prevent passwords from being cracked, image verification codes have been implemented to accomplish human–machine verification. It is important to note, however, that the most widely used image verification codes, especially the visual reasoning Completely Automated Public Turing tests to tell Computers and Humans Apart (CAPTCHAs), are still susceptible to attacks by artificial intelligence. Taking visual reasoning CAPTCHAs as representative image verification codes, this study introduces an enhanced approach for generating image verification codes and proposes an improved Convolutional Neural Network (CNN)-based recognition system. After adding a fully connected layer and briefly addressing the edge-of-stability issue, the improved CNN model smoothly approaches 98.40% accuracy within 50 epochs on four-digit verification codes using a large initial learning rate of 0.01. Compared with the baseline model, it is approximately 37.82% better in accuracy without obvious curve oscillation. The improved CNN model also smoothly reaches 99.00% accuracy within 7500 epochs on six-character verification codes comprising digits, upper-case letters, lower-case letters, and symbols. A detailed comparison between the proposed approach and the baseline is presented, and the relationship between time consumption and seed length is compared theoretically. Subsequently, we determine the threat assignments for visual reasoning CAPTCHAs of different lengths based on four machine learning models, and Kaplan-Meier (KM) curves are computed from these threat assignments.
Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/exsy.13658
Citations: 0
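A fixed-length verification code is usually treated as a multi-head classification problem: one shared convolutional trunk and one softmax head per character position. The sketch below is a minimal PyTorch version for four-digit codes; the layer sizes, pooling choices, and the absence of the paper's extra fully connected layer and learning-rate schedule are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class CaptchaCNN(nn.Module):
    """Shared CNN trunk with one 10-way classification head per digit position."""

    def __init__(self, n_positions: int = 4, n_classes: int = 10):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 8)), nn.Flatten(),
            nn.Linear(64 * 4 * 8, 256), nn.ReLU(),
        )
        self.heads = nn.ModuleList(nn.Linear(256, n_classes) for _ in range(n_positions))

    def forward(self, x: torch.Tensor):
        # x: (B, 1, H, W) grayscale CAPTCHA image -> list of (B, n_classes) logits.
        feats = self.trunk(x)
        return [head(feats) for head in self.heads]

# Training loss is simply the sum of per-position cross-entropies
# (labels assumed to be a (B, 4) integer tensor):
# logits = model(images)
# loss = sum(nn.functional.cross_entropy(l, labels[:, i]) for i, l in enumerate(logits))
```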
Marine predators optimization with deep learning model for video-based facial expression recognition
IF 3.0 | Q4 (Computer Science)
Expert Systems | Pub Date: 2024-06-24 | DOI: 10.1111/exsy.13657
Authors: Mal Hari Prasad, P. Swarnalatha
Abstract: Video-based facial expression recognition (VFER) aims to categorize an input video into different kinds of emotions. It remains a challenging problem because of the gap between visual features and emotions, the difficulty of handling delicate muscle movements, and restricted datasets. One effective solution is to exploit efficient features that characterize facial expressions. VFER is useful in several areas such as unmanned driving, venue management, urban safety management, and contactless attendance. Recent advances in computer vision and deep learning (DL) enable the design of automated VFER models. In this context, this study establishes a new Marine Predators Optimization with Deep Learning model for Video-based Facial Expression Recognition (MPODL-VFER). The presented MPODL-VFER technique mainly aims to classify different kinds of facial emotions in video. To accomplish this, it derives features using the densely connected convolutional network (DenseNet) model and employs the MPO algorithm for hyperparameter tuning of DenseNet. Finally, an Elman Neural Network (ENN) is used for emotion recognition. To verify the enhanced recognition performance of the MPODL-VFER approach, a comparative study was conducted on a benchmark dataset. The comprehensive results show a significant improvement of the MPODL-VFER model over other approaches.
Citations: 0
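The feature-extraction front end of this pipeline can be sketched with a pretrained DenseNet from torchvision; the MPO hyperparameter search and the Elman classifier are not reproduced here, and the choice of DenseNet-121 with ImageNet weights, 224x224 normalized frames, and average pooling over frames are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained DenseNet-121 as a frozen per-frame feature extractor (illustrative choice).
backbone = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
backbone.classifier = nn.Identity()   # drop the ImageNet head, keep the 1024-d features
backbone.eval()

@torch.no_grad()
def frame_features(frames: torch.Tensor) -> torch.Tensor:
    """frames: (N, 3, 224, 224) ImageNet-normalized video frames -> (N, 1024) features.
    A video-level descriptor can then be obtained by pooling over frames before a
    downstream (Elman-style) classifier."""
    return backbone(frames)

# Illustrative usage: video_descriptor = frame_features(frames).mean(dim=0)
```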