Visual Informatics, Pub Date: 2025-12-01, Epub Date: 2025-07-29, DOI: 10.1016/j.visinf.2025.100258
Fan Yan, Yong Wang, Xuanwu Yue, Kam-Kwai Wong, Ketian Mao, Rong Zhang, Huamin Qu, Haiyang Zhu, Minfeng Zhu, Wei Chen
{"title":"FundSelector: A visual analysis system for mutual fund selection","authors":"Fan Yan , Yong Wang , Xuanwu Yue , Kam-Kwai Wong , Ketian Mao , Rong Zhang , Huamin Qu , Haiyang Zhu , Minfeng Zhu , Wei Chen","doi":"10.1016/j.visinf.2025.100258","DOIUrl":"10.1016/j.visinf.2025.100258","url":null,"abstract":"<div><div>Mutual funds are one of the most important and popular investment ways for ordinary investors to maintain and increase the value of their assets. However, it is challenging for ordinary investors to select optimal mutual funds from thousands of fund choices managed by different managers. Various investors often have different personal investment preferences and it is difficult to characterize their preferences quickly. Also, mutual fund performance relies on various factors (e.g., the economic market and the management of fund managers), and most of these factors are dynamically changing, making it difficult to efficiently compare different mutual funds in detail. To address these challenges, we propose FundSelector, an interactive multi-view visual analytics system that quantifies user preferences to rank mutual funds and allows ordinary investors to explore mutual fund performance in terms of multiple factors and scales. Two novel visual designs are proposed to enable detailed comparisons of mutual funds. Rank-informed bipartite contribution bar chart provides interpretable fund ranking results by explicitly showing both positive and negative factors. Elastic trend chart allows investors to analyze and compare the temporal evolution of the mutual funds’ performances in a customizable way. We evaluated FundSelector through two case studies and interviews with eight ordinary investors. The results highlight its effectiveness and utility.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"9 4","pages":"Article 100258"},"PeriodicalIF":3.8,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145420022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Informatics, Pub Date: 2025-12-01, Epub Date: 2025-08-22, DOI: 10.1016/j.visinf.2025.100263
Tianyi Huang, Zhengjun Zhang, Shenghui Cheng
{"title":"Hamiltonian cycle clustering with asymmetric correlation","authors":"Tianyi Huang , Zhengjun Zhang , Shenghui Cheng","doi":"10.1016/j.visinf.2025.100263","DOIUrl":"10.1016/j.visinf.2025.100263","url":null,"abstract":"<div><div>Analysts who explore high-dimensional data usually want three answers at once: Which samples belong together, how close the resulting groups are, and who influences whom accordingly. Classical clustering provides only hard labels, hiding both inter-cluster affinities and correlation flow. We introduce Hamiltonian Cycle Clustering with Asymmetric Correlation HCC-AC, a framework that converts the clustering task into an interpretable map where structure and directionality are visible at a single glance. HCC-AC first learns soft memberships by optimizing a joint global–local loss, preserving manifold structure while turning each label into a probability. These probabilities drive a Hamiltonian-cycle embedding: cluster anchors are ordered by affinity and placed evenly on a circle; samples fall radially towards their most-likely anchor, so clusters, their similarities (arc lengths), and outliers emerge immediately. Directed arrows connect anchors, their lengths showing correlation strength, transforming the map into a legible narrative of influence. Experiments on five benchmark datasets demonstrate that HCC-AC improves the knowledge discovery in clustering, i.e., indexes the clustering results, flags outliers reliably, and uncovers correlation pathways.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"9 4","pages":"Article 100263"},"PeriodicalIF":3.8,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145420023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Informatics, Pub Date: 2025-12-01, Epub Date: 2025-08-21, DOI: 10.1016/j.visinf.2025.100266
Teerapord Lin, Paisit Khanarsa
{"title":"Sequential pattern recognition in CAD operations: A deep learning framework for next-action prediction","authors":"Teerapord Lin, Paisit Khanarsa","doi":"10.1016/j.visinf.2025.100266","DOIUrl":"10.1016/j.visinf.2025.100266","url":null,"abstract":"<div><div>Computer-Aided Design (CAD) systems have become essential tools in engineering and design fields. However, the complexity of these systems can create a steep learning curve and reduce efficiency for users. To address this challenge, a deep learning-based approach for predicting the next CAD command in a design sequence is proposed, leveraging sentence embeddings and sequential pattern recognition to enhance prediction accuracy. The method utilizes the Multilingual Universal Sentence Encoder (MUSE) to generate dense vector representations of CAD commands, effectively capturing semantic relationships between different design operations. These embeddings are then combined with distance features that encode the sequential patterns between consecutive commands to create comprehensive representation of design workflows. Two neural architectures are implemented and evaluated: the Convolutional Sequence Embedding Recommendation (Caser) model and the Tiny-Transformer model, each tested with four different feature configurations (random embeddings, random embeddings with distance features, MUSE embeddings, and MUSE embeddings with distance features), resulting in eight model variants total. Experimental results demonstrate that the CNN-based Caser model consistently outperforms the attention-based Tiny-Transformer across all configurations. The best performing model, Caser with MUSE embeddings and distance features, achieves the highest accuracy at 0.5902 and precision at 0.5917, representing a 7.6% improvement over traditional methods and a 2.5% improvement over the best Transformer variant. Our analysis of the training dynamics reveals that models with distance features converge faster and demonstrate more stable validation loss, highlighting the complementary roles of semantic understanding and sequential pattern recognition in CAD command prediction. However, while Transformer models showed competitive baseline performance, they failed to benefit from additional feature engineering, unlike the Caser models which effectively leveraged both semantic and sequential information. These findings show that incorporating both semantic understanding of commands and their sequential relationships significantly improves prediction accuracy, potentially enhancing user experience by providing intelligent command suggestions during the CAD design process.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"9 4","pages":"Article 100266"},"PeriodicalIF":3.8,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145420021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From perception to reflection: A layered framework for aesthetic education in the digital design of ancient painting","authors":"Xiaojiao Chen, Wenru Qi, Yulian Yang, Xiaosong Wang, Wei Chen","doi":"10.1016/j.visinf.2025.100290","DOIUrl":"10.1016/j.visinf.2025.100290","url":null,"abstract":"","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"9 4","pages":"Article 100290"},"PeriodicalIF":3.8,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145736222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Informatics, Pub Date: 2025-12-01, Epub Date: 2025-08-29, DOI: 10.1016/j.visinf.2025.100271
Yijie Lian, Jianing Hao, Wei Zeng, Qiong Luo
{"title":"A survey of visual insight mining: Connecting data and insights via visualization","authors":"Yijie Lian , Jianing Hao , Wei Zeng , Qiong Luo","doi":"10.1016/j.visinf.2025.100271","DOIUrl":"10.1016/j.visinf.2025.100271","url":null,"abstract":"<div><div>Insight mining transforms complex data into actionable knowledge, enabling effective decision-making across diverse domains. Given the richness and interpretative power of visualizations, visual insight mining – the process of extracting meaningful insights from raw data through intuitive visual representations – has become increasingly vital. This survey systematically reviews the current landscape of visual insight mining, addressing the critical questions: <em>“How can visualizations be generated from data?”</em> and <em>“How can insights be extracted from visualizations?”</em>. Specifically, we delve into six distinct tasks (i.e., task decomposition, visualization generation, visualization recommendation, chart parsing, chart question answering, and insight generation) in the process of visual insight mining, and provide a comprehensive analysis of rule-based, learning-based, and large-model-based methods for each task. Based on the survey, we discuss current research challenges and outline future opportunities. By viewing visualization as a bridge in the data-to-insight path, this survey offers a structured foundation for further exploration in visual insight mining.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"9 4","pages":"Article 100271"},"PeriodicalIF":3.8,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145680949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Informatics, Pub Date: 2025-12-01, Epub Date: 2025-08-22, DOI: 10.1016/j.visinf.2025.100268
Xiaojia Zhu, Chunyu Li, Rui Chen, Zhiwen Shao
{"title":"Self-similarity guided regression with contrast enhancement for spine segmentation","authors":"Xiaojia Zhu , Chunyu Li , Rui Chen , Zhiwen Shao","doi":"10.1016/j.visinf.2025.100268","DOIUrl":"10.1016/j.visinf.2025.100268","url":null,"abstract":"<div><div>Accurate spine segmentation is critical for scoliosis diagnosis and treatment. For instance, automatic Cobb angle measurement for scoliosis relies on precisely localized vertebral masks. However, it remains a challenging task due to low tissue contrast, blurred vertebral edges, and overlapping anatomical structures. In this paper, we propose SRNet, a pure segmentation network that produces binary masks of each vertebra. SRNet integrates two novel components, a Self-similarity Guided Dynamic Convolution (SGDC) module and a Contrast-Enhanced Boundary Decoder (CEBD). SGDC exploits the repetitive structure of vertebrae by leveraging non-local attention to compute self-similarity across feature maps and dynamic convolution to combine multiple convolution kernels adaptively. CEBD sharpens segmentation boundaries via a reverse-attention mechanism that erases the coarse prediction and focuses on missing edge details, combined with a spectral-residual filter that amplifies high-frequency edge information. Extensive experiments on the AASCE spine X-ray dataset show that our SRNet achieves a high Dice score of 92.37%, outperforming state-of-the-art approaches. While our primary focus here is mask segmentation, the accurate vertebral masks produced by SRNet could readily support future tasks such as scoliosis Cobb angle estimation.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"9 4","pages":"Article 100268"},"PeriodicalIF":3.8,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145568875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A methodological approach towards human-centered visual analytics","authors":"Emmanouil Adamakis , George Margetis , Stavroula Ntoa , Constantine Stephanidis","doi":"10.1016/j.visinf.2025.100269","DOIUrl":"10.1016/j.visinf.2025.100269","url":null,"abstract":"<div><div>Visual analytics focuses on amplifying users’ reasoning and understanding by enhancing data analysis procedures with the efficient incorporation of information visualization and data processing techniques. In this study, we conduct an overview of this multidisciplinary field, focusing on both the process that formalizes its primary concepts and the affiliated research areas. We identify key developments in each area, as well as the challenges that arise when these areas are interconnected under the visual analytics process. We consider that to address the identified challenges, an appropriate representation of key user needs is essential. Therefore, inspired by human-centered design and its principles, we propose a novel methodological approach comprising a human-centered definition of visual analytics that expands on models of the field and quantifies the intermediate states of a data analysis. In addition to the theoretical aspects of the definition, we also provide a set of directions that align the process with technical aspects of the development cycle. In this respect, our research endeavor aims to transform the visual analytics process into an essential method for both conceptualizing data analysis systems capable of anticipating user needs and for streamlining their technical implementation.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"9 4","pages":"Article 100269"},"PeriodicalIF":3.8,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145615118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Informatics, Pub Date: 2025-12-01, Epub Date: 2025-09-08, DOI: 10.1016/j.visinf.2025.100270
Li Liu, Jing Duan, Xiaodong Fu, Wei Peng, Lijun Liu
{"title":"Unified 3D Gaussian splatting for motion and defocus blur reconstruction","authors":"Li Liu , Jing Duan , Xiaodong Fu , Wei Peng , Lijun Liu","doi":"10.1016/j.visinf.2025.100270","DOIUrl":"10.1016/j.visinf.2025.100270","url":null,"abstract":"<div><div>This paper proposes a unified 3D Gaussian splatting framework consisting of three key components for motion and defocus blur reconstruction. First, a dual-blur perception module is designed to generate pixel-wise masks and predict the types of motion and defocus blur, guiding structural feature extraction. Second, a blur-aware Gaussian splatting integrates blur-aware features into the splatting process for accurate modeling of the global and local scene structure. Third, an Unoptimized Gaussian Ratio (UGR)-opacity joint optimization strategy is proposed to refine under-optimized regions, improving reconstruction accuracy under complex blur conditions. Experiments on a newly constructed motion and defocus blur dataset demonstrate the effectiveness of the proposed method for novel view synthesis. Compared with state-of-the-art methods, our framework achieves improvements of 0.28 dB, 2.46% and 39.88% on PSNR, SSIM, and LPIPS, respectively. For deblurring tasks, it achieves improvements of 0.36 dB, 3.24% and 28.96% on the same metrics. These results highlight the robustness and effectiveness of this approach. Additional visual results and video renderings are available on our project webpage: <span><span>https://sunbeam-217.github.io/Dual-blur-reconstruction/</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"9 4","pages":"Article 100270"},"PeriodicalIF":3.8,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145615119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Informatics, Pub Date: 2025-12-01, Epub Date: 2025-08-21, DOI: 10.1016/j.visinf.2025.100267
Yu Xu, Ziang Wang, Fan Tang, Juan Cao, Xirong Li, Jintao Li
{"title":"Attribute guided adversarial editing for face privacy protection","authors":"Yu Xu , Ziang Wang , Fan Tang , Juan Cao , Xirong Li , Jintao Li","doi":"10.1016/j.visinf.2025.100267","DOIUrl":"10.1016/j.visinf.2025.100267","url":null,"abstract":"<div><div>Nowadays, the proliferation of portraits or photographs containing human faces on the internet has created significant risks of illegal privacy collection and analysis by intelligent systems. Previous attempts to protect against unauthorized identification by face recognition models have primarily involved manipulating or adding adversarial perturbations to photos. However, it remains a challenge to balance privacy protection effectiveness and maintaining image visual quality. That is, to successfully attack real-world black-box face recognition models, significant manipulation is required for the source image, which will obviously damage the image visual quality. To address these issues, we propose an attribute-guided face identity protection (AG-FIP) approach that can protect facial privacy effectively without introducing meaningless or conspicuous artifacts into the source image. The proposed method involves mapping the images to latent space and subsequently implementing an adversarial attack through attribute editing. An attribute selection module followed by an attribute adversarially editing module is proposed to enhance the efficiency and effectiveness of adversarial attacks. Experimental results demonstrate that our approach outperforms SOTAs in terms of confusing black-box face recognition models, commercial face recognition APIs, and image visual quality.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"9 4","pages":"Article 100267"},"PeriodicalIF":3.8,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145680951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}