{"title":"IEEE Transactions on Computational Social Systems Information for Authors","authors":"","doi":"10.1109/TCSS.2025.3608423","DOIUrl":"https://doi.org/10.1109/TCSS.2025.3608423","url":null,"abstract":"","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"C4-C4"},"PeriodicalIF":4.5,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11194050","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145230025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Guest Editorial: Special Issue on Trends in Social Multimedia Computing: Models, Methodologies, and Applications","authors":"Amit Kumar Singh;Jungong Han;Stefano Berretti","doi":"10.1109/TCSS.2025.3606570","DOIUrl":"https://doi.org/10.1109/TCSS.2025.3606570","url":null,"abstract":"","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"3747-3750"},"PeriodicalIF":4.5,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11193968","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145230028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Transactions on Computational Social Systems Publication Information","authors":"","doi":"10.1109/TCSS.2025.3608419","DOIUrl":"https://doi.org/10.1109/TCSS.2025.3608419","url":null,"abstract":"","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"C2-C2"},"PeriodicalIF":4.5,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11194049","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145315350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Systems, Man, and Cybernetics Society Information","authors":"","doi":"10.1109/TCSS.2025.3608421","DOIUrl":"https://doi.org/10.1109/TCSS.2025.3608421","url":null,"abstract":"","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"C3-C3"},"PeriodicalIF":4.5,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11193967","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145230019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised Video Summarization Based on Spatiotemporal Semantic Graph and Enhanced Attention Mechanism","authors":"Xin Cheng;Lei Yang;Rui Li","doi":"10.1109/TCSS.2025.3579570","DOIUrl":"https://doi.org/10.1109/TCSS.2025.3579570","url":null,"abstract":"Generative adversarial networks (GANs) have demonstrated potential in enhancing keyframe selection and video reconstruction via adversarial training among unsupervised approaches. Nevertheless, GANs struggle to encapsulate the intricate spatiotemporal dynamics in videos, which is essential for producing coherent and informative summaries. To address these challenges, we introduce an unsupervised video summarization framework that synergistically integrates temporal–spatial semantic graphs (TSSGraphs) with a bilinear additive attention (BAA) mechanism. TSSGraphs are designed to effectively model temporal and spatial relationships among video frames by combining temporal convolution and dynamic edge convolution, thereby extracting salient features while mitigating model complexity. The BAA mechanism enhances the framework’s ability to capture critical motion information by addressing feature sparsity and eliminating redundant parameters, ensuring robust attention to significant motion dynamics. Experimental assessments on the SumMe and TVSum benchmark datasets reveal that our method attains improvements of up to 4.0% and 3.3% in F-score, respectively, compared to current methodologies. 
Moreover, our system demonstrates diminished parameter overhead throughout training and inference stages, particularly excelling in contexts with significant motion content.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"3751-3764"},"PeriodicalIF":4.5,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145230072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimodal Disentangled Fusion Network via VAEs for Multimodal Zero-Shot Learning","authors":"Yutian Li;Zhuopan Yang;Zhenguo Yang;Xiaoping Li;Wenyin Liu;Qing Li","doi":"10.1109/TCSS.2025.3575939","DOIUrl":"https://doi.org/10.1109/TCSS.2025.3575939","url":null,"abstract":"Addressing the bias problem in multimodal zero-shot learning tasks is challenging due to the domain shift between seen and unseen classes, as well as the semantic gap across different modalities. To tackle these challenges, we propose a multimodal disentangled fusion network (MDFN) that unifies the class embedding space for multimodal zero-shot learning. MDFN exploits feature disentangled variational autoencoder (FD-VAE) in two branches to distangle unimodal features into modality-specific representations that are semantically consistent and unrelated, where semantics are shared within classes. In particular, semantically consistent representations and unimodal features are integrated to retain the semantics of the original features in the form of residuals. Furthermore, multimodal conditional VAE (MC-VAE) in two branches is adopted to learn cross-modal interactions with modality-specific conditions. Finally, the complementary multimodal representations achieved by MC-VAE are encoded into a fusion network (FN) with a self-adaptive margin center loss (SAMC-loss) to predict target class labels in embedding forms. By learning the distance among domain samples, SAMC-loss promotes intraclass compactness and interclass separability. 
Experiments on zero-shot and news event datasets demonstrate the superior performance of MDFN, with the harmonic mean improved by 27.2% on the MMED dataset and 5.1% on the SUN dataset.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"3684-3697"},"PeriodicalIF":4.5,"publicationDate":"2025-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145230085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Coordinate System Transformation Method for Comparing Different Types of Data in Different Dataset Using Singular Value Decomposition","authors":"Emiko Uchiyama;Wataru Takano;Yoshihiko Nakamura;Tomoki Tanaka;Katsuya Iijima;Gentiane Venture;Vincent Hernandez;Kenta Kamikokuryo;Ken-ichiro Yabu;Takahiro Miura;Kimitaka Nakazawa;Bo-Kyung Son","doi":"10.1109/TCSS.2025.3561078","DOIUrl":"https://doi.org/10.1109/TCSS.2025.3561078","url":null,"abstract":"In the current era of AI technology, where systems increasingly rely on big data to process vast amounts of societal information, efficient methods for integrating and utilizing diverse datasets are essential. This article presents a novel approach for transforming the feature space of different datasets through singular value decomposition (SVD) to extract common and hidden features as using the prior domain knowledge. Specifically, we apply this method to two datasets: 1) one related to physical and cognitive frailty in the elderly; and 2) another focusing on identifying <italic>IKIGAI</i> (happiness, self-efficacy, and sense of contribution) in volunteer staff of a civic health promotion activity. Both datasets consist of multiple sub-datasets measured using different modalities, such as facial expressions, sound, activity, and heart rates. By defining feature extraction methods for each subdataset, we compare and integrate the overlapping data. The results demonstrated that our method could effectively preserve common characteristics across different data types, offering a more interpretable solution than traditional dimensionality reduction methods based on linear and nonlinear transformation. 
This approach has significant implications for data integration in multidisciplinary fields and opens the door for future applications to a wide range of datasets.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"3610-3626"},"PeriodicalIF":4.5,"publicationDate":"2025-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11073557","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145255998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Impact of Listening to Music on Stress Level for Anxiety, Depression, and PTSD: Mixed-Effect Models and Propensity Score Analysis","authors":"Mazin Abdalla;Parya Abadeh;Zeinab Noorian;Amira Ghenai;Fattane Zarrinkalam;Soroush Zamani Alavijeh","doi":"10.1109/TCSS.2025.3561073","DOIUrl":"https://doi.org/10.1109/TCSS.2025.3561073","url":null,"abstract":"The intersection of music and mental health has gained increasing attention, with previous studies highlighting music’s potential to reduce stress and anxiety. Despite these promising findings, many of these studies are limited by small sample sizes and traditional observational methods, leaving a gap in our understanding of music’s broader impact on mental health. In response to these limitations, this study introduces a novel approach that combines generalized linear mixed models (GLMM) with propensity score matching (PSM) to explore the relationship between music listening and stress levels among social media users diagnosed with anxiety, depression, and posttraumatic stress disorder (PTSD). Our research not only identifies associative patterns between music listening and stress but also provides a more rigorous examination of potential causal effects, taking into account demographic factors such as education level, gender, and age. Our findings reveal that across all mental health conditions, music listening is significantly associated with reduced stress levels, with an observed 21.3% reduction for anxiety, 15.4% for depression, and 19.3% for PTSD. Additionally, users who listened to music were more likely to report a zero stress score, indicating a stronger relaxation effect. Further, our analysis of demographic variations shows that age and education level influence the impact of music on stress reduction, highlighting the potential for personalized interventions. 
These findings contribute to a deeper understanding of music’s therapeutic potential, particularly in crafting interventions tailored to the diverse needs of different populations.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"3816-3830"},"PeriodicalIF":4.5,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145230080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Differentiable Prior-Driven Data Augmentation for Sensor-Based Human Activity Recognition","authors":"Ye Zhang;Qing Gao;Rong Hu;Qingtang Ding;Boyang Li;Yulan Guo","doi":"10.1109/TCSS.2025.3565414","DOIUrl":"https://doi.org/10.1109/TCSS.2025.3565414","url":null,"abstract":"Sensor-based human activity recognition (HAR) usually suffers from the problem of insufficient annotated data, due to the difficulty in labeling the intuitive signals of wearable sensors. To this end, recent advances have adopted handcrafted operations or generative models for data augmentation. The handcrafted operations are driven by some physical priors of human activities, e.g., action distortion and strength fluctuations. However, these approaches may face challenges in maintaining semantic data properties. Although the generative models have better data adaptability, it is difficult for them to incorporate important action priors into data generation. This article proposes a differentiable prior-driven data augmentation framework for HAR. First, we embed the handcrafted augmentation operations into a differentiable module, which adaptively selects and optimizes the operations to be combined together. Then, we construct a generative module to add controllable perturbations to the data derived by the handcrafted operations and further improve the diversity of data augmentation. By integrating the handcrafted operation module and the generative module into one learnable framework, the generalization performance of the recognition models is enhanced effectively. Extensive experimental results with three different classifiers on five public datasets demonstrate the effectiveness of the proposed framework. 
Project page: <uri>https://github.com/crocodilegogogo/DriveData-Under-Review</uri>.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"12 5","pages":"3778-3790"},"PeriodicalIF":4.5,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145230069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}