Latest Articles in Big Data and Cognitive Computing

Improving Clothing Product Quality and Reducing Waste Based on Consumer Review Using RoBERTa and BERTopic Language Model
Big Data and Cognitive Computing Pub Date: 2023-10-25 DOI: 10.3390/bdcc7040168
Andry Alamsyah, Nadhif Ditertian Girawan
{"title":"Improving Clothing Product Quality and Reducing Waste Based on Consumer Review Using RoBERTa and BERTopic Language Model","authors":"Andry Alamsyah, Nadhif Ditertian Girawan","doi":"10.3390/bdcc7040168","DOIUrl":"https://doi.org/10.3390/bdcc7040168","url":null,"abstract":"The disposability of clothing has emerged as a critical concern, precipitating waste accumulation due to product quality degradation. Such consequences exert significant pressure on resources and challenge sustainability efforts. In response, this research focuses on empowering clothing companies to elevate product excellence by harnessing consumer feedback. Beyond insights, this research extends to sustainability by providing suggestions on refining product quality by improving material handling, gradually mitigating waste production, and cultivating longevity, therefore decreasing discarded clothes. Managing a vast influx of diverse reviews necessitates sophisticated natural language processing (NLP) techniques. Our study introduces a Robustly optimized BERT Pretraining Approach (RoBERTa) model calibrated for multilabel classification and BERTopic for topic modeling. The model adeptly distills vital themes from consumer reviews, exhibiting astounding accuracy in projecting concerns across various dimensions of clothing quality. NLP’s potential lies in endowing companies with insights into consumer review, augmented by the BERTopic to facilitate immersive exploration of harvested review topics. This research presents a thorough case for integrating machine learning to foster sustainability and waste reduction. The contribution of this research is notable for its integration of RoBERTa and BERTopic in multilabel classification tasks and topic modeling in the fashion industry. The results indicate that the RoBERTa model exhibits remarkable performance, as demonstrated by its macro-averaged F1 score of 0.87 and micro-averaged F1 score of 0.87. Likewise, BERTopic achieves a coherence score of 0.67, meaning the model can form an insightful topic.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":"16 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135113017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Identifying Probable Dementia in Undiagnosed Black and White Americans Using Machine Learning in Veterans Health Administration Electronic Health Records
Big Data and Cognitive Computing Pub Date: 2023-10-19 DOI: 10.3390/bdcc7040167
Yijun Shao, Kaitlin Todd, Andrew Shutes-David, Steven P. Millard, Karl Brown, Amy Thomas, Kathryn Chen, Katherine Wilson, Qing T. Zeng, Debby W. Tsuang
{"title":"Identifying Probable Dementia in Undiagnosed Black and White Americans Using Machine Learning in Veterans Health Administration Electronic Health Records","authors":"Yijun Shao, Kaitlin Todd, Andrew Shutes-David, Steven P. Millard, Karl Brown, Amy Thomas, Kathryn Chen, Katherine Wilson, Qing T. Zeng, Debby W. Tsuang","doi":"10.3390/bdcc7040167","DOIUrl":"https://doi.org/10.3390/bdcc7040167","url":null,"abstract":"The application of natural language processing and machine learning (ML) in electronic health records (EHRs) may help reduce dementia underdiagnosis, but models that are not designed to reflect minority populations may instead perpetuate underdiagnosis. To improve the identification of undiagnosed dementia, particularly in Black Americans (BAs), we developed support vector machine (SVM) ML models to assign dementia risk scores based on features identified in unstructured EHR data (via latent Dirichlet allocation and stable topic extraction in n = 1 M notes) and structured EHR data. We hypothesized that separate models would show differentiation between racial groups, so the models were fit separately for BAs (n = 5 K with dementia ICD codes, n = 5 K without) and White Americans (WAs; n = 5 K with codes, n = 5 K without). To validate our method, scores were generated for separate samples of BAs (n = 10 K) and WAs (n = 10 K) without dementia codes, and the EHRs of 1.2 K of these patients were reviewed by dementia experts. All subjects were age 65+ and drawn from the VA, which meant that the samples were disproportionately male. A strong positive relationship was observed between SVM-generated risk scores and undiagnosed dementia. BAs were more likely than WAs to have undiagnosed dementia per chart review, both overall (15.3% vs. 9.5%) and among Veterans with >90th percentile cutoff scores (25.6% vs. 15.3%). With chart reviews as the reference standard and varied cutoff scores, the BA model performed slightly better than the WA model (AUC = 0.86 with negative predictive value [NPV] = 0.98, positive predictive value [PPV] = 0.26, sensitivity = 0.61, specificity = 0.92 and accuracy = 0.91 at >90th percentile cutoff vs. AUC = 0.77 with NPV = 0.98, PPV = 0.15, sensitivity = 0.43, specificity = 0.91 and accuracy = 0.89 at >90th). Our findings suggest that race-specific ML models can help identify BAs who may have undiagnosed dementia. Future studies should examine model generalizability in settings with more females and test whether incorporating these models into clinical settings increases the referral of undiagnosed BAs to specialists.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135779428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
HAMCap: A Weak-Supervised Hybrid Attention-Based Capsule Neural Network for Fine-Grained Climate Change Debate Analysis
Big Data and Cognitive Computing Pub Date: 2023-10-17 DOI: 10.3390/bdcc7040166
Kun Xiang, Akihiro Fujii
{"title":"HAMCap: A Weak-Supervised Hybrid Attention-Based Capsule Neural Network for Fine-Grained Climate Change Debate Analysis","authors":"Kun Xiang, Akihiro Fujii","doi":"10.3390/bdcc7040166","DOIUrl":"https://doi.org/10.3390/bdcc7040166","url":null,"abstract":"Climate change (CC) has become a central global topic within the multiple branches of social disciplines. Natural Language Processing (NLP) plays a superior role since it has achieved marvelous accomplishments in various application scenarios. However, CC debates are ambiguous and complicated to interpret even for humans, especially when it comes to the aspect-oriented fine-grained level. Furthermore, the lack of large-scale effective labeled datasets is always a plight encountered in NLP. In this work, we propose a novel weak-supervised Hybrid Attention Masking Capsule Neural Network (HAMCap) for fine-grained CC debate analysis. Specifically, we use vectors with allocated different weights instead of scalars, and a hybrid attention mechanism is designed in order to better capture and represent information. By randomly masking with a Partial Context Mask (PCM) mechanism, we can better construct the internal relationship between the aspects and entities and easily obtain a large-scale generated dataset. Considering the uniqueness of linguistics, we propose a Reinforcement Learning-based Generator-Selector mechanism to automatically update and select data that are beneficial to model training. Empirical results indicate that our proposed ensemble model outperforms baselines on downstream tasks with a maximum of 50.08% on accuracy and 49.48% on F1 scores. Finally, we draw interpretable conclusions about the climate change debate, which is a widespread global concern.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135993146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ZeroTrustBlock: Enhancing Security, Privacy, and Interoperability of Sensitive Data through ZeroTrust Permissioned Blockchain
Big Data and Cognitive Computing Pub Date: 2023-10-17 DOI: 10.3390/bdcc7040165
Pratik Thantharate, Anurag Thantharate
{"title":"ZeroTrustBlock: Enhancing Security, Privacy, and Interoperability of Sensitive Data through ZeroTrust Permissioned Blockchain","authors":"Pratik Thantharate, Anurag Thantharate","doi":"10.3390/bdcc7040165","DOIUrl":"https://doi.org/10.3390/bdcc7040165","url":null,"abstract":"With the digitization of healthcare, an immense amount of sensitive medical data are generated and shared between various healthcare stakeholders—however, traditional health data management mechanisms present interoperability, security, and privacy challenges. The centralized nature of current health information systems leads to single points of failure, making the data vulnerable to cyberattacks. Patients also have little control over their medical records, raising privacy concerns. Blockchain technology presents a promising solution to these challenges through its decentralized, transparent, and immutable properties. This research proposes ZeroTrustBlock, a comprehensive blockchain framework for secure and private health information exchange. The decentralized ledger enhances integrity, while permissioned access and smart contracts enable patient-centric control over medical data sharing. A hybrid on-chain and off-chain storage model balances transparency with confidentiality. Integration gateways bridge ZeroTrustBlock protocols with existing systems like EHRs. Implemented on Hyperledger Fabric, ZeroTrustBlock demonstrates substantial security improvements over mainstream databases via cryptographic mechanisms, formal privacy-preserving protocols, and access policies enacting patient consent. Results validate the architecture’s effectiveness in achieving 14,200 TPS average throughput, 480 ms average latency for 100,000 concurrent transactions, and linear scalability up to 20 nodes. However, enhancements around performance, advanced cryptography, and real-world pilots are future work. Overall, ZeroTrustBlock provides a robust application of blockchain capabilities to transform security, privacy, interoperability, and patient agency in health data management.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136033313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Cognitive Assessment Based on Electroencephalography Analysis in Virtual and Augmented Reality Environments, Using Head Mounted Displays: A Systematic Review
Big Data and Cognitive Computing Pub Date: 2023-10-13 DOI: 10.3390/bdcc7040163
Foteini Gramouseni, Katerina D. Tzimourta, Pantelis Angelidis, Nikolaos Giannakeas, Markos G. Tsipouras
{"title":"Cognitive Assessment Based on Electroencephalography Analysis in Virtual and Augmented Reality Environments, Using Head Mounted Displays: A Systematic Review","authors":"Foteini Gramouseni, Katerina D. Tzimourta, Pantelis Angelidis, Nikolaos Giannakeas, Markos G. Tsipouras","doi":"10.3390/bdcc7040163","DOIUrl":"https://doi.org/10.3390/bdcc7040163","url":null,"abstract":"The objective of this systematic review centers on cognitive assessment based on electroencephalography (EEG) analysis in Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) environments, projected on Head Mounted Displays (HMD), in healthy individuals. A range of electronic databases were searched (Scopus, ScienceDirect, IEEE Explore and PubMed), using PRISMA research method and 82 experimental studies were included in the final report. Specific aspects of cognitive function were evaluated, including cognitive load, immersion, spatial awareness, interaction with the digital environment and attention. These were analyzed based on various aspects of the analysis, including the number of participants, stimuli, frequency bands range, data preprocessing and data analysis. Based on the analysis conducted, significant findings have emerged both in terms of the experimental structure related to cognitive neuroscience and the key parameters considered in the research. Also, numerous significant avenues and domains requiring more extensive exploration have been identified within neuroscience and cognition research in digital environments. These encompass factors such as the experimental setup, including issues like narrow participant populations and the feasibility of using EEG equipment with a limited number of sensors to overcome the challenges posed by the time-consuming placement of a multi-electrode EEG cap. There is a clear need for more in-depth exploration in signal analysis, especially concerning the α, β, and γ sub-bands and their role in providing more precise insights for evaluating cognitive states. Finally, further research into augmented and mixed reality environments will enable the extraction of more accurate conclusions regarding their utility in cognitive neuroscience.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135855609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MM-EMOR: Multi-Modal Emotion Recognition of Social Media Using Concatenated Deep Learning Networks
Big Data and Cognitive Computing Pub Date: 2023-10-13 DOI: 10.3390/bdcc7040164
Omar Adel, Karma M. Fathalla, Ahmed Abo ElFarag
{"title":"MM-EMOR: Multi-Modal Emotion Recognition of Social Media Using Concatenated Deep Learning Networks","authors":"Omar Adel, Karma M.Fathalla, Ahmed Abo ElFarag","doi":"10.3390/bdcc7040164","DOIUrl":"https://doi.org/10.3390/bdcc7040164","url":null,"abstract":"Emotion recognition is crucial in artificial intelligence, particularly in the domain of human–computer interaction. The ability to accurately discern and interpret emotions plays a critical role in helping machines to effectively decipher users’ underlying intentions, allowing for a more streamlined interaction process that invariably translates into an elevated user experience. The recent increase in social media usage, as well as the availability of an immense amount of unstructured data, has resulted in a significant demand for the deployment of automated emotion recognition systems. Artificial intelligence (AI) techniques have emerged as a powerful solution to this pressing concern in this context. In particular, the incorporation of multimodal AI-driven approaches for emotion recognition has proven beneficial in capturing the intricate interplay of diverse human expression cues that manifest across multiple modalities. The current study aims to develop an effective multimodal emotion recognition system known as MM-EMOR in order to improve the efficacy of emotion recognition efforts focused on audio and text modalities. The use of Mel spectrogram features, Chromagram features, and the Mobilenet Convolutional Neural Network (CNN) for processing audio data are central to the operation of this system, while an attention-based Roberta model caters to the text modality. The methodology of this study is based on an exhaustive evaluation of this approach across three different datasets. Notably, the empirical findings show that MM-EMOR outperforms competing models across the same datasets. This performance boost is noticeable, with accuracy gains of an impressive 7% on one dataset and a substantial 8% on another. Most significantly, the observed increase in accuracy for the final dataset was an astounding 18%.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135856260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Contemporary Art Authentication with Large-Scale Classification
Big Data and Cognitive Computing Pub Date: 2023-10-09 DOI: 10.3390/bdcc7040162
Todd Dobbs, Abdullah-Al-Raihan Nayeem, Isaac Cho, Zbigniew Ras
{"title":"Contemporary Art Authentication with Large-Scale Classification","authors":"Todd Dobbs, Abdullah-Al-Raihan Nayeem, Isaac Cho, Zbigniew Ras","doi":"10.3390/bdcc7040162","DOIUrl":"https://doi.org/10.3390/bdcc7040162","url":null,"abstract":"Art authentication is the process of identifying the artist who created a piece of artwork and is manifested through events of provenance, such as art gallery exhibitions and financial transactions. Art authentication has visual influence via the uniqueness of the artist’s style in contrast to the style of another artist. The significance of this contrast is proportional to the number of artists involved and the degree of uniqueness of an artist’s collection. This visual uniqueness of style can be captured in a mathematical model produced by a machine learning (ML) algorithm on painting images. Art authentication is not always possible as provenance can be obscured or lost through anonymity, forgery, gifting, or theft of artwork. This paper presents an image-only art authentication attribute marker of contemporary art paintings for a very large number of artists. The experiments in this paper demonstrate that it is possible to use ML-generated models to authenticate contemporary art from 2368 to 100 artists with an accuracy of 48.97% to 91.23%, respectively. This is the largest effort for image-only art authentication to date, with respect to the number of artists involved and the accuracy of authentication.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135095107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Empirical Study on Core Data Asset Identification in Data Governance
Big Data and Cognitive Computing Pub Date: 2023-10-07 DOI: 10.3390/bdcc7040161
Yunpeng Chen, Ying Zhao, Wenxuan Xie, Yanbo Zhai, Xin Zhao, Jiang Zhang, Jiang Long, Fangfang Zhou
{"title":"An Empirical Study on Core Data Asset Identification in Data Governance","authors":"Yunpeng Chen, Ying Zhao, Wenxuan Xie, Yanbo Zhai, Xin Zhao, Jiang Zhang, Jiang Long, Fangfang Zhou","doi":"10.3390/bdcc7040161","DOIUrl":"https://doi.org/10.3390/bdcc7040161","url":null,"abstract":"Data governance aims to optimize the value derived from data assets and effectively mitigate data-related risks. The rapid growth of data assets increases the risk of data breaches. One key solution to reduce this risk is to classify data assets according to their business value and criticality to the enterprises, allocating limited resources to protect core data assets. The existing methods rely on the experience of professionals and cannot identify core data assets across business scenarios. This work conducts an empirical study to address this issue. First, we utilized data lineage graphs with expert-labeled core data assets to investigate the experience of data users on core data asset identification from a scenario perspective. Then, we explored the structural features of core data assets on data lineage graphs from an abstraction perspective. Finally, one expert seminar was conducted to derive a set of universal indicators to identify core data assets by synthesizing the results from the two perspectives. User and field studies were conducted to demonstrate the effectiveness of the indicators.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135251931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Defining Semantically Close Words of Kazakh Language with Distributed System Apache Spark
Big Data and Cognitive Computing Pub Date: 2023-09-27 DOI: 10.3390/bdcc7040160
Dauren Ayazbayev, Andrey Bogdanchikov, Kamila Orynbekova, Iraklis Varlamis
{"title":"Defining Semantically Close Words of Kazakh Language with Distributed System Apache Spark","authors":"Dauren Ayazbayev, Andrey Bogdanchikov, Kamila Orynbekova, Iraklis Varlamis","doi":"10.3390/bdcc7040160","DOIUrl":"https://doi.org/10.3390/bdcc7040160","url":null,"abstract":"This work focuses on determining semantically close words and using semantic similarity in general in order to improve performance in information retrieval tasks. The semantic similarity of words is an important task with many applications from information retrieval to spell checking or even document clustering and classification. Although, in languages with rich linguistic resources, the methods and tools for this task are well established, some languages do not have such tools. The first step in our experiment is to represent the words in a collection in a vector form and then define the semantic similarity of the terms using a vector similarity method. In order to tame the complexity of the task, which relies on the number of word (and, consequently, of the vector) pairs that have to be combined in order to define the semantically closest word pairs, A distributed method that runs on Apache Spark is designed to reduce the calculation time by running comparison tasks in parallel. Three alternative implementations are proposed and tested using a list of target words and seeking the most semantically similar words from a lexicon for each one of them. In a second step, we employ pre-trained multilingual sentence transformers to capture the content semantics at a sentence level and a vector-based semantic index to accelerate the searches. The code is written in MapReduce, and the experiments and results show that the proposed methods can provide an interesting solution for finding similar words or texts in the Kazakh language.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135580073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Pruning Method Based on Feature Map Similarity Score
Big Data and Cognitive Computing Pub Date: 2023-09-26 DOI: 10.3390/bdcc7040159
Jihua Cui, Zhenbang Wang, Ziheng Yang, Xin Guan
{"title":"A Pruning Method Based on Feature Map Similarity Score","authors":"Jihua Cui, Zhenbang Wang, Ziheng Yang, Xin Guan","doi":"10.3390/bdcc7040159","DOIUrl":"https://doi.org/10.3390/bdcc7040159","url":null,"abstract":"As the number of layers of deep learning models increases, the number of parameters and computation increases, making it difficult to deploy on edge devices. Pruning has the potential to significantly reduce the number of parameters and computations in a deep learning model. Existing pruning methods frequently require a specific distribution of network parameters to achieve good results when measuring filter importance. As a result, a feature map similarity score-based pruning method is proposed. We calculate the similarity score of each feature map to measure the importance of the filter and guide filter pruning using the similarity between the filter output feature maps to measure the redundancy of the corresponding filter. Pruning experiments on ResNet-56 and ResNet-110 networks on Cifar-10 datasets can compress the model by more than 70% while maintaining a higher compression ratio and accuracy than traditional methods.","PeriodicalId":36397,"journal":{"name":"Big Data and Cognitive Computing","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134886460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0