Information Processing & Management: Latest Publications

Global and local hypergraph learning method with semantic enhancement for POI recommendation
IF 7.4, Q1 (Management)
Information Processing & Management. Pub Date: 2024-09-04. DOI: 10.1016/j.ipm.2024.103868
Jun Zeng, Hongjin Tao, Haoran Tang, Junhao Wen, Min Gao
{"title":"Global and local hypergraph learning method with semantic enhancement for POI recommendation","authors":"Jun Zeng ,&nbsp;Hongjin Tao ,&nbsp;Haoran Tang ,&nbsp;Junhao Wen ,&nbsp;Min Gao","doi":"10.1016/j.ipm.2024.103868","DOIUrl":"10.1016/j.ipm.2024.103868","url":null,"abstract":"<div><p>The deep semantic information mining extracts deep semantic features from textual data and effectively utilizes the world knowledge embedded in these features, so it is widely researched in recommendation tasks. In spite of the extensive utilization of contextual information in prior Point-of-Interest research, the insufficient and non-informative textual content has led to the neglect of deep semantic study. Besides, effectively integrating the deep semantic information into the trajectory modeling process is also an open question for further exploration. Therefore, this paper proposes HyperSE, to leverage prompt engineering and pre-trained language models for deep semantic enhancement. Besides, HyperSE effectively extracts higher-order collaborative signals from global and local hypergraphs, seamlessly integrating topological and semantic information to enhance trajectory modeling. Experimental results show that HyperSE outperforms the strong baseline, demonstrating the effectiveness of the deep semantic information and the model’s efficiency.</p></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"62 1","pages":"Article 103868"},"PeriodicalIF":7.4,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0306457324002279/pdfft?md5=328e43038a8c794bb1c90f66aafb0929&pid=1-s2.0-S0306457324002279-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142136512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
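To make the higher-order signal extraction concrete, here is a minimal Python sketch of one generic HGNN-style propagation step over a hypergraph whose hyperedges are user check-in trajectories. This is not the HyperSE implementation; the toy trajectories, POI count, and embedding size are assumptions for illustration.

```python
# Generic hypergraph smoothing step (illustrative only, not the paper's code).
import numpy as np

def hypergraph_propagate(trajectories, num_pois, emb_dim=16, seed=0):
    """Each user trajectory is one hyperedge connecting the POIs it visits."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(num_pois, emb_dim))          # initial POI embeddings (assumed)
    H = np.zeros((num_pois, len(trajectories)))       # incidence matrix: node x hyperedge
    for e, traj in enumerate(trajectories):
        H[traj, e] = 1.0
    Dv = np.clip(H.sum(axis=1), 1, None)              # node degrees
    De = np.clip(H.sum(axis=0), 1, None)              # hyperedge degrees
    Dv_inv_sqrt = np.diag(Dv ** -0.5)
    # X' = Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} X : one round of higher-order smoothing
    return Dv_inv_sqrt @ H @ np.diag(1.0 / De) @ H.T @ Dv_inv_sqrt @ X

# Toy usage: three check-in trajectories over six POIs.
smoothed = hypergraph_propagate([[0, 1, 2], [2, 3], [3, 4, 5]], num_pois=6)
print(smoothed.shape)  # (6, 16)
```

Each POI embedding is smoothed toward the other POIs it co-occurs with across whole trajectories, which is the kind of higher-order collaborative signal a pairwise graph would miss.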
Leveraging sensory knowledge into Text-to-Text Transfer Transformer for enhanced emotion analysis
IF 7.4, Q1 (Management)
Information Processing & Management. Pub Date: 2024-09-04. DOI: 10.1016/j.ipm.2024.103876
Qingqing Zhao, Yuhan Xia, Yunfei Long, Ge Xu, Jia Wang
{"title":"Leveraging sensory knowledge into Text-to-Text Transfer Transformer for enhanced emotion analysis","authors":"Qingqing Zhao ,&nbsp;Yuhan Xia ,&nbsp;Yunfei Long ,&nbsp;Ge Xu ,&nbsp;Jia Wang","doi":"10.1016/j.ipm.2024.103876","DOIUrl":"10.1016/j.ipm.2024.103876","url":null,"abstract":"<div><p>This study proposes an innovative model (i.e., SensoryT5), which integrates sensory knowledge into the T5 (Text-to-Text Transfer Transformer) framework for emotion classification tasks. By embedding sensory knowledge within the T5 model’s attention mechanism, SensoryT5 not only enhances the model’s contextual understanding but also elevates its sensitivity to the nuanced interplay between sensory information and emotional states. Experiments on four emotion classification datasets, three sarcasm classification datasets one subjectivity analysis dataset, and one opinion classification dataset (ranging from binary to 32-class tasks) demonstrate that our model outperforms state-of-the-art baseline models (including the baseline T5 model) significantly. Specifically, SensoryT5 achieves a maximal improvement of 3.0% in both the accuracy and the F1 score for emotion classification. In sarcasm classification tasks, the model surpasses the baseline models by the maximal increase of 1.2% in accuracy and 1.1% in the F1 score. Furthermore, SensoryT5 continues to demonstrate its superior performances for both subjectivity analysis and opinion classification, with increases in ACC and the F1 score by 0.6% for the subjectivity analysis task and increases in ACC by 0.4% and the F1 score by 0.6% for the opinion classification task, when compared to the second-best models. These improvements underscore the significant potential of leveraging cognitive resources to deepen NLP models’ comprehension of emotional nuances and suggest an interdisciplinary research between the areas of NLP and neuro-cognitive science.</p></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"62 1","pages":"Article 103876"},"PeriodicalIF":7.4,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0306457324002358/pdfft?md5=010384bf6159d75304020042a5bf9441&pid=1-s2.0-S0306457324002358-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142136469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
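As a rough illustration of injecting sensory knowledge into attention, the sketch below adds a per-token sensory-strength bias to scaled dot-product attention logits. It is not the SensoryT5 code; the additive-bias formulation, the alpha weight, and the toy sensory scores are assumptions.

```python
# Scaled dot-product attention with an additive sensory bias (illustrative only).
import torch
import torch.nn.functional as F

def attention_with_sensory_bias(q, k, v, sensory_scores, alpha=0.1):
    """q, k, v: (batch, seq, dim); sensory_scores: (batch, seq) per-token sensory strength."""
    d = q.size(-1)
    logits = q @ k.transpose(-2, -1) / d ** 0.5          # standard attention logits
    bias = alpha * sensory_scores.unsqueeze(1)            # broadcast over query positions
    weights = F.softmax(logits + bias, dim=-1)            # sensory-aware attention weights
    return weights @ v

# Toy usage: tokens with higher sensory scores attract more attention mass.
q = k = v = torch.randn(2, 5, 8)
sensory = torch.tensor([[0.0, 0.9, 0.1, 0.8, 0.0],
                        [0.2, 0.0, 0.7, 0.0, 0.3]])
out = attention_with_sensory_bias(q, k, v, sensory)
print(out.shape)  # torch.Size([2, 5, 8])
```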
Prototype-oriented hypergraph representation learning for anomaly detection in tabular data
IF 7.4, Q1 (Management)
Information Processing & Management. Pub Date: 2024-09-04. DOI: 10.1016/j.ipm.2024.103877
Shu Li, Yi Lu, Shicheng Jiu, Haoxiang Huang, Guangqi Yang, Jiong Yu
{"title":"Prototype-oriented hypergraph representation learning for anomaly detection in tabular data","authors":"Shu Li ,&nbsp;Yi Lu ,&nbsp;Shicheng Jiu ,&nbsp;Haoxiang Huang ,&nbsp;Guangqi Yang ,&nbsp;Jiong Yu","doi":"10.1016/j.ipm.2024.103877","DOIUrl":"10.1016/j.ipm.2024.103877","url":null,"abstract":"<div><p>Anomaly detection in tabular data holds significant importance across various industries such as manufacturing, healthcare, and finance. However, existing methods are constrained by the size and diversity of datasets, leading to poor generalization. Moreover, they primarily concentrate on feature correlations while overlooking interactions among data instances. Furthermore, the vulnerability of these methods to noisy data hinders their deployment in practical engineering applications. To tackle these issues, this paper proposes prototype-oriented hypergraph representation learning for anomaly detection in tabular data (PHAD). Specifically, PHAD employs a diffusion-based data augmentation strategy tailored for tabular data to enhance both the size and diversity of the training data. Subsequently, it constructs a hypergraph from the combined augmented and original training data to capture higher-order correlations among data instances by leveraging hypergraph neural networks. Lastly, PHAD utilizes an adaptive fusion of local and global data representations to derive the prototype of latent normal data, serving as a benchmark for detecting anomalies. Extensive experiments on twenty-six public datasets across various engineering fields demonstrate that our proposed PHAD outperforms other state-of-the-art methods in terms of performance, robustness, and efficiency.</p></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"62 1","pages":"Article 103877"},"PeriodicalIF":7.4,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S030645732400236X/pdfft?md5=e59b23608cc5adebfe7da6af514044f4&pid=1-s2.0-S030645732400236X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142136513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
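The prototype-as-benchmark idea can be pictured in a few lines of Python: fuse two summaries of normal training embeddings into a prototype and score test rows by their distance to it. This is only a sketch of the general pattern, not PHAD; the median/mean fusion, the fusion weight, and the synthetic data are assumptions.

```python
# Prototype-based anomaly scoring (illustrative pattern only).
import numpy as np

def prototype_scores(train_emb, test_emb, fuse_weight=0.5):
    local_proto = np.median(train_emb, axis=0)             # robust "local" summary (assumed)
    global_proto = train_emb.mean(axis=0)                  # "global" summary (assumed)
    prototype = fuse_weight * local_proto + (1 - fuse_weight) * global_proto
    return np.linalg.norm(test_emb - prototype, axis=1)    # higher = more anomalous

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(200, 8))                   # latent representations of normal rows
test = np.vstack([rng.normal(0, 1, (5, 8)), rng.normal(6, 1, (2, 8))])
scores = prototype_scores(normal, test)
print(scores.round(2))                                      # the last two rows score far higher
```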
Does usage scenario matter? Investigating user perceptions, attitude and support for policies towards ChatGPT
IF 7.4, Q1 (Management)
Information Processing & Management. Pub Date: 2024-08-31. DOI: 10.1016/j.ipm.2024.103867
Wenjia Yan, Bo Hu, Yu-li Liu, Changyan Li, Chuling Song
{"title":"Does usage scenario matter? Investigating user perceptions, attitude and support for policies towards ChatGPT","authors":"Wenjia Yan ,&nbsp;Bo Hu ,&nbsp;Yu-li Liu ,&nbsp;Changyan Li ,&nbsp;Chuling Song","doi":"10.1016/j.ipm.2024.103867","DOIUrl":"10.1016/j.ipm.2024.103867","url":null,"abstract":"<div><p>ChatGPT's impressive performance enables users to increasingly apply it to a variety of scenarios. However, previous studies investigating people's perceptions or attitudes towards ChatGPT have not considered the effects of the usage scenario. This paper aims to extract the representative scenarios of ChatGPT, explore differences in user perceptions for each scenario, and provide a policy support model. We extracted five scenarios by collecting 50 open-ended responses from Mturk, including “Scenario 1: Daily life tasks,” “Scenario 2: Enhance efficiency (work and education purposes),” “Scenario 3: Replace manpower (work and education purposes),” “Scenario 4: Browsing and general information seeking,” “Scenario 5: Enjoyment.” Subsequently, we identified four key variables to be tested (i.e., information quality, perceived risk, attitude, and policy support), and classified usage scenarios into different categories according to the perception variables measured via an online survey (<em>n</em> = 514). Finally, we built a model including the four variables and tested it for each scenario. The results of this study provide deep insights into user perceptions towards ChatGPT in distinct scenarios.</p></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"61 6","pages":"Article 103867"},"PeriodicalIF":7.4,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142094770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Keywords-enhanced Contrastive Learning Model for travel recommendation
IF 7.4, Q1 (Management)
Information Processing & Management. Pub Date: 2024-08-31. DOI: 10.1016/j.ipm.2024.103874
Lei Chen, Guixiang Zhu, Weichao Liang, Jie Cao, Yihan Chen
{"title":"Keywords-enhanced Contrastive Learning Model for travel recommendation","authors":"Lei Chen ,&nbsp;Guixiang Zhu ,&nbsp;Weichao Liang ,&nbsp;Jie Cao ,&nbsp;Yihan Chen","doi":"10.1016/j.ipm.2024.103874","DOIUrl":"10.1016/j.ipm.2024.103874","url":null,"abstract":"<div><p>Travel recommendation aims to infer travel intentions of users by analyzing their historical behaviors on Online Travel Agencies (OTAs). However, crucial keywords in clicked travel product titles, such as destination and itinerary duration, indicating tourists’ intentions, are often overlooked. Additionally, most previous studies only consider stable long-term user interests or temporary short-term user preferences, making the recommendation performance unreliable. To mitigate these constraints, this paper proposes a novel <strong>K</strong>eywords-enhanced <strong>C</strong>ontrastive <strong>L</strong>earning <strong>M</strong>odel (KCLM). KCLM simultaneously implements personalized travel recommendation and keywords generation tasks, integrating long-term and short-term user preferences within both tasks. Furthermore, we design two kinds of contrastive learning tasks for better user and travel product representation learning. The preference contrastive learning aims to bridge the gap between long-term and short-term user preferences. The multi-view contrastive learning focuses on modeling the coarse-grained commonality between clicked products and their keywords. Extensive experiments are conducted on two tourism datasets and a large-scale e-commerce dataset. The experimental results demonstrate that KCLM achieves substantial gains in both metrics compared to the best-performing baseline methods. Specifically, HR@20 improved by 5.79%–14.13%, MRR@20 improved by 6.57%–18.50%. Furthermore, to have an intuitive understanding of the keyword generation by the KCLM model, we provide a case study for several randomized examples.</p></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"61 6","pages":"Article 103874"},"PeriodicalIF":7.4,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142094771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
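A common way to realize the preference contrastive learning described above is an InfoNCE-style objective that pulls each user's long-term and short-term preference vectors together and pushes apart those of other users in the batch. The sketch below shows that generic objective, not KCLM's exact loss; the batch size, dimensionality, and temperature are assumptions.

```python
# InfoNCE-style contrast between long-term and short-term preference embeddings
# (generic objective, illustrative only).
import torch
import torch.nn.functional as F

def preference_contrastive_loss(long_term, short_term, temperature=0.2):
    """long_term, short_term: (batch, dim) embeddings for the same batch of users."""
    z1 = F.normalize(long_term, dim=-1)
    z2 = F.normalize(short_term, dim=-1)
    logits = z1 @ z2.T / temperature                     # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0))                   # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

loss = preference_contrastive_loss(torch.randn(32, 64), torch.randn(32, 64))
print(float(loss))
```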
SelfCP: Compressing over-limit prompt via the frozen large language model itself
IF 7.4, Q1 (Management)
Information Processing & Management. Pub Date: 2024-08-30. DOI: 10.1016/j.ipm.2024.103873
Jun Gao, Ziqiang Cao, Wenjie Li
{"title":"SelfCP: Compressing over-limit prompt via the frozen large language model itself","authors":"Jun Gao ,&nbsp;Ziqiang Cao ,&nbsp;Wenjie Li","doi":"10.1016/j.ipm.2024.103873","DOIUrl":"10.1016/j.ipm.2024.103873","url":null,"abstract":"<div><p>Long prompt leads to huge hardware costs when using transformer-based Large Language Models (LLMs). Unfortunately, many tasks, such as summarization, inevitably introduce long documents, and the wide application of in-context learning easily makes the prompt length explode. This paper proposes a Self-Compressor (SelfCP), which adopts the target LLM itself to compress over-limit prompts into dense vectors on top of a sequence of learnable embeddings (<strong>memory tags</strong>) while keeping the allowed prompts unmodified. Dense vectors are then projected into <strong>memory tokens</strong> via a learnable connector, allowing the same LLM to understand them. The connector and the memory tag are supervised-tuned under the language modeling objective of the LLM on relatively long texts selected from publicly accessed datasets involving an instruction dataset to make SelfCP respond to various prompts, while the target LLM keeps frozen during training. We build the lightweight SelfCP upon 2 different backbones with merely 17M learnable parameters originating from the connector and a learnable embedding. Evaluation on both English and Chinese benchmarks demonstrate that SelfCP effectively substitutes 12<span><math><mo>×</mo></math></span> over-limit prompts with memory tokens to reduce memory costs and booster inference throughputs, yet improving response quality. The outstanding performance brings an efficient solution for LLMs to tackle long prompts without training LLMs from scratch.</p></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"61 6","pages":"Article 103873"},"PeriodicalIF":7.4,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142094769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
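The compression mechanism described above can be sketched as: append learnable memory tags to the over-limit segment, run the frozen backbone, and project the hidden states at the tag positions into memory tokens through a learnable connector. The code below uses a tiny frozen Transformer encoder as a stand-in for the LLM; the PromptCompressor class, sizes, and stand-in backbone are assumptions, not SelfCP's code.

```python
# Prompt compression via a frozen backbone plus learnable memory tags (illustrative only).
import torch
import torch.nn as nn

class PromptCompressor(nn.Module):
    def __init__(self, d_model=64, num_memory=4):
        super().__init__()
        self.frozen_lm = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        for p in self.frozen_lm.parameters():          # the backbone stays frozen
            p.requires_grad_(False)
        self.memory_tags = nn.Parameter(torch.randn(num_memory, d_model))  # learnable tags
        self.connector = nn.Linear(d_model, d_model)   # learnable projection

    def forward(self, over_limit_embs):
        """over_limit_embs: (batch, seq, d_model) embeddings of the over-limit prompt part."""
        tags = self.memory_tags.expand(over_limit_embs.size(0), -1, -1)
        hidden = self.frozen_lm(torch.cat([over_limit_embs, tags], dim=1))
        memory_states = hidden[:, -tags.size(1):]      # states at the memory-tag positions
        return self.connector(memory_states)           # dense memory tokens for the same LLM

compressor = PromptCompressor()
memory_tokens = compressor(torch.randn(2, 100, 64))    # 100 over-limit tokens -> 4 vectors
print(memory_tokens.shape)                              # torch.Size([2, 4, 64])
```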
IDC-CDR: Cross-domain Recommendation based on Intent Disentanglement and Contrast Learning
IF 7.4, Q1 (Management)
Information Processing & Management. Pub Date: 2024-08-29. DOI: 10.1016/j.ipm.2024.103871
Jing Xu, Mingxin Gan, Hang Zhang, Shuhao Zhang
{"title":"IDC-CDR: Cross-domain Recommendation based on Intent Disentanglement and Contrast Learning","authors":"Jing Xu,&nbsp;Mingxin Gan,&nbsp;Hang Zhang,&nbsp;Shuhao Zhang","doi":"10.1016/j.ipm.2024.103871","DOIUrl":"10.1016/j.ipm.2024.103871","url":null,"abstract":"<div><p>Using the user’s past activity across different domains, the cross-domain recommendation (CDR) predicts the items that users are likely to click. Most recent studies on CDR model user interests at the item level. However because items in other domains are inherently heterogeneous, direct modeling of past interactions from other domains to augment user representation in the target domain may limit the effectiveness of recommendation. Thus, in order to enhance the performance of cross-domain recommendation, we present a model called Cross-domain Recommendation based on Intent Disentanglement and Contrast Learning (IDC-CDR) that performs contrastive learning at the intent level between domains and disentangles user interaction intents in various domains. Initially, user–item interaction graphs were created for both single-domain and cross-domain scenarios. Then, by modeling the intention distribution of each user–item interaction, the interaction intention graph and its representation were updated repeatedly. The comprehensive local intent is then obtained by fusing the local domain intents of the source domain and the target domain using the attention technique. In order to enhance representation learning and knowledge transfer, we ultimately develop a cross-domain intention contrastive learning method. Using three pairs of cross-domain scenarios from Amazon and the KuaiRand dataset, we carry out comprehensive experiments. The experimental findings demonstrate that the recommendation performance can be greatly enhanced by IDC-CDR, with an average improvement of 20.62% and 25.32% for HR and NDCG metrics, respectively.</p></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"61 6","pages":"Article 103871"},"PeriodicalIF":7.4,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142088396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
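As a small illustration of intent disentanglement, the sketch below softly assigns each interaction embedding to K latent intent prototypes via a temperature-scaled softmax, yielding the kind of per-interaction intention distribution the abstract mentions. It is not the IDC-CDR implementation; the prototype count, embedding size, and temperature are assumptions.

```python
# Soft assignment of interactions to latent intent prototypes (illustrative only).
import numpy as np

def intent_distribution(interaction_emb, intent_prototypes, temperature=0.5):
    """interaction_emb: (n, d); intent_prototypes: (K, d). Returns (n, K) distributions."""
    sims = interaction_emb @ intent_prototypes.T / temperature
    sims -= sims.max(axis=1, keepdims=True)            # numerical stability
    weights = np.exp(sims)
    return weights / weights.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
dist = intent_distribution(rng.normal(size=(5, 16)), rng.normal(size=(4, 16)))
print(dist.round(2), dist.sum(axis=1))                  # each row sums to 1
```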
Patent transformation prediction: When a patent can be transformed
IF 7.4, Q1 (Management)
Information Processing & Management. Pub Date: 2024-08-27. DOI: 10.1016/j.ipm.2024.103872
Weidong Liu, Yu Zhang, Xiangfeng Luo, Yan Cao, Keqin Gan, Fuming Ye, Wei Tang, Minglong Zhang
{"title":"Patent transformation prediction: When a patent can be transformed","authors":"Weidong Liu ,&nbsp;Yu Zhang ,&nbsp;Xiangfeng Luo ,&nbsp;Yan Cao ,&nbsp;Keqin Gan ,&nbsp;Fuming Ye ,&nbsp;Wei Tang ,&nbsp;Minglong Zhang","doi":"10.1016/j.ipm.2024.103872","DOIUrl":"10.1016/j.ipm.2024.103872","url":null,"abstract":"<div><p>Patent transformation is a pivotal pathway for realizing technological advancements, and patent transformation prediction is a potential strategy for improving the patent transformation rate. Existing automated patent transformation prediction models do not predict the transformation time, causing invalid conclusions for these valid patents. In this study, we propose a patent transformation prediction model to predict patent transformation time. (1) To obtain patent features in different time periods, the years elapsed since the patent application are segmented into multiple time slots; (2) For each patent, we extract static features and dynamic features of each time slot after constructing and embedding a dynamic graph of the patent; (3) The features for each time slot are concatenated as the input of the dynamic model which utilizes a neural network to predict the patent transformation of the time slot. We measure the model in diverse domains, each of which includes 10,000 patent transformation data. The experimental results show that precision, recall, and F1 scores are approximately 80% for predicting patent transformation in the next 3 years. Additionally, our study yields some novel findings: (1) later applied patents have a higher transformation speed; (2) over 90% of patent transformations occur within 13 years since the patent application; (3) dynamic features, especially dynamic structured features, have a significantly greater impact on patent transformation prediction compared to static features; (4) our model performs stably on different experiment data.</p></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"61 6","pages":"Article 103872"},"PeriodicalIF":7.4,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142083416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
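The per-time-slot prediction step can be pictured as concatenating a patent's static features with each slot's dynamic features and scoring each slot with a small neural network. The sketch below shows only that shape of computation; it is not the authors' model, and the feature sizes, the 13 one-year slots, and the MLP architecture are assumptions.

```python
# Per-time-slot transformation scoring from static + dynamic features (illustrative only).
import torch
import torch.nn as nn

class SlotTransformationScorer(nn.Module):
    def __init__(self, static_dim=8, dynamic_dim=12, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(static_dim + dynamic_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, static_feats, dynamic_feats):
        """static_feats: (batch, static_dim); dynamic_feats: (batch, slots, dynamic_dim)."""
        slots = dynamic_feats.size(1)
        static_rep = static_feats.unsqueeze(1).expand(-1, slots, -1)
        x = torch.cat([static_rep, dynamic_feats], dim=-1)   # per-slot concatenation
        return self.mlp(x).squeeze(-1)                        # (batch, slots) probabilities

scorer = SlotTransformationScorer()
probs = scorer(torch.randn(4, 8), torch.randn(4, 13, 12))     # 13 one-year slots (assumed)
print(probs.shape)                                             # torch.Size([4, 13])
```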
Enhancing protection in high-dimensional data: Distributed differential privacy with feature selection
IF 7.4, Q1 (Management)
Information Processing & Management. Pub Date: 2024-08-27. DOI: 10.1016/j.ipm.2024.103870
I Made Putrama, Péter Martinek
{"title":"Enhancing protection in high-dimensional data: Distributed differential privacy with feature selection","authors":"I Made Putrama ,&nbsp;Péter Martinek","doi":"10.1016/j.ipm.2024.103870","DOIUrl":"10.1016/j.ipm.2024.103870","url":null,"abstract":"<div><p>The computational cost for implementing data privacy protection tends to rise as the dimensions increase, especially on correlated datasets. For this reason, a faster data protection mechanism is needed to handle high-dimensional data while balancing utility and privacy. This study introduces an innovative framework to improve the performance by leveraging distributed computing strategies. The framework integrates specific feature selection algorithms and distributed mutual information computation, which is crucial for sensitivity assessment. Additionally, it is optimized using a hyperparameter tuning technique based on Bayesian optimization, which focuses on minimizing either a combined score of the Bayesian information criterion (BIC) and Akaike’s Information Criterion (AIC) or by minimizing the Maximal Information Coefficient (MIC) score individually. Extensive testing on 12 datasets with tens to thousands of features was conducted for classification and regression tasks. With our method, the sensitivity of the resulting data is lower than alternative approaches, requiring less perturbation for an equivalent level of privacy. Using a novel Privacy Deviation Coefficient (PDC) metric, we assess the performance disparity between original and perturbed data. Overall, there is a significant execution time improvement of 64.30% on the computation, providing valuable insights for practical applications.</p></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"61 6","pages":"Article 103870"},"PeriodicalIF":7.4,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142083417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
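At a high level, the recipe combines mutual-information-based feature selection with noise calibrated to sensitivity. The sketch below shows a plain, non-distributed version of that recipe, not the paper's framework: rank features by mutual information with the label, keep the top k, and add Laplace noise scaled to sensitivity / epsilon. The synthetic dataset, k, epsilon, and the per-feature range used as a sensitivity bound are assumptions.

```python
# Mutual-information feature selection followed by Laplace perturbation (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=50, n_informative=8, random_state=0)

mi = mutual_info_classif(X, y, random_state=0)          # relevance of each feature to the label
top_k = np.argsort(mi)[::-1][:8]                        # keep the 8 most informative features
X_sel = X[:, top_k]

epsilon = 1.0
sensitivity = X_sel.max(axis=0) - X_sel.min(axis=0)     # per-feature range as a simple bound
X_private = X_sel + rng.laplace(scale=sensitivity / epsilon, size=X_sel.shape)
print(X_sel.shape, X_private.shape)                      # (500, 8) (500, 8)
```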
Multi-granularity attribute similarity model for user alignment across social platforms under pre-aligned data sparsity
IF 7.4, Q1 (Management)
Information Processing & Management. Pub Date: 2024-08-23. DOI: 10.1016/j.ipm.2024.103866
Yongqiang Peng, Xiaoliang Chen, Duoqian Miao, Xiaolin Qin, Xu Gu, Peng Lu
{"title":"Multi-granularity attribute similarity model for user alignment across social platforms under pre-aligned data sparsity","authors":"Yongqiang Peng ,&nbsp;Xiaoliang Chen ,&nbsp;Duoqian Miao ,&nbsp;Xiaolin Qin ,&nbsp;Xu Gu ,&nbsp;Peng Lu","doi":"10.1016/j.ipm.2024.103866","DOIUrl":"10.1016/j.ipm.2024.103866","url":null,"abstract":"<div><p>Cross-platform User Alignment (UA) aims to identify accounts belonging to the same individual across multiple social network platforms. This study seeks to enhance the performance of UA tasks while reducing the required sample data. Previous research has focused excessively on model design, lacking optimization throughout the entire process, making it challenging to achieve performance without heavy reliance on labeled data. This paper proposes a semi-supervised Multi-Granularity Attribute Similarity Model (MGASM). First, MGASM optimizes the embedding process through multi-granularity modeling at the levels of characters, words, articles, structures, and labels, and enhances missing data by leveraging adjacent text attributes. Next, MGASM quantifies the correlation between attributes of the same granularity by constructing Multi-Granularity Attribute Cosine Distance Distribution Vectors (MA-CDDVs). These vectors form the basis for a binary classification similarity model trained to calculate similarity scores for user pairs. Additionally, an attribute reappearance score correction (ARSC) mechanism is introduced to further refine the ranking of candidate users. Extensive experiments on the Weibo-Douban and DBLP17-DBLP19 datasets demonstrate that compared to state-of-the-art methods, The hit-precision of the MGASM series has significantly improved by 68.15% and 27.02%, almost reaching 100% precision. The F1 score has increased by 37.6% and 21.4%.</p></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"61 6","pages":"Article 103866"},"PeriodicalIF":7.4,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142044545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
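One way to picture a cosine-distance distribution vector is as a normalized histogram of cosine distances between matched attribute embeddings at one granularity, with the histograms for all granularities concatenated and fed to a binary classifier. The sketch below illustrates that construction generically; it is not the MGASM code, and the granularities, embedding sizes, and bin count are assumptions.

```python
# Cosine-distance distribution vector for a candidate user pair (illustrative only).
import numpy as np

def cosine_distance_histogram(emb_a, emb_b, bins=5):
    """emb_a, emb_b: (n, d) embeddings of matched attribute units at one granularity."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    distances = 1.0 - np.sum(a * b, axis=1)                 # cosine distances in [0, 2]
    hist, _ = np.histogram(distances, bins=bins, range=(0.0, 2.0))
    return hist / max(len(distances), 1)                    # normalized distribution vector

rng = np.random.default_rng(0)
granularities = {"char": 32, "word": 64, "article": 128}    # assumed embedding sizes
pair_vector = np.concatenate([
    cosine_distance_histogram(rng.normal(size=(20, d)), rng.normal(size=(20, d)))
    for d in granularities.values()])
print(pair_vector.shape)                                     # (15,) = 3 granularities x 5 bins
```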