{"title":"IEEE Transactions on Big Data","authors":"","doi":"10.1109/mcg.2024.3403463","DOIUrl":"https://doi.org/10.1109/mcg.2024.3403463","url":null,"abstract":"","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"15 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141526724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual Computing for Autonomous Driving","authors":"Siming Chen, Liang Gou, Michael Kamp, Dong Sun","doi":"10.1109/mcg.2024.3397581","DOIUrl":"https://doi.org/10.1109/mcg.2024.3397581","url":null,"abstract":"Autonomous driving (AD) technology has experienced unprecedented growth in recent years, propelled by advancements in artificial intelligence. The transition from theoretical concepts to tangible implementations of self-driving cars holds immense promise in revolutionizing transportation, with the potential to significantly reduce traffic accidents and associated costs. However, despite this rapid progress, the field still grapples with underutilization of the vast datasets generated by autonomous vehicles, particularly in the realm of visualization and visual analytics, or in a broader sense, visual computing.","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"214 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141528793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Annals of the History of Computing","authors":"","doi":"10.1109/mcg.2024.3403459","DOIUrl":"https://doi.org/10.1109/mcg.2024.3403459","url":null,"abstract":"","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"65 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141532213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Computer Society Career Center","authors":"","doi":"10.1109/mcg.2024.3403409","DOIUrl":"https://doi.org/10.1109/mcg.2024.3403409","url":null,"abstract":"","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"1 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141532212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EVCSeer: An Exploratory Study on Electric Vehicle Charging Stations Utilization Via Visual Analytics","authors":"Yutian Zhang, Shuxian Gu, Quan Li, Haipeng Zeng","doi":"10.1109/mcg.2024.3396451","DOIUrl":"https://doi.org/10.1109/mcg.2024.3396451","url":null,"abstract":"","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"102 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140830703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human-in-the-Loop: Visual Analytics for Building Models Recognizing Behavioral Patterns in Time Series.","authors":"Natalia Andrienko, Gennady Andrienko, Alexander Artikis, Periklis Mantenoglou, Salvatore Rinzivillo","doi":"10.1109/MCG.2024.3379851","DOIUrl":"10.1109/MCG.2024.3379851","url":null,"abstract":"<p><p>Detecting complex behavioral patterns in temporal data, such as moving object trajectories, often relies on precise formal specifications derived from vague domain concepts. However, such methods are sensitive to noise and minor fluctuations, leading to missed pattern occurrences. Conversely, machine learning (ML) approaches require abundant labeled examples, posing practical challenges. Our visual analytics approach enables domain experts to derive, test, and combine interval-based features to discriminate patterns and generate training data for ML algorithms. Visual aids enhance recognition and characterization of expected patterns and discovery of unexpected ones. Case studies demonstrate feasibility and effectiveness of the approach, which offers a novel framework for integrating human expertise and analytical reasoning with ML techniques, advancing data analytics.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"14-29"},"PeriodicalIF":1.7,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140177816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"To Authenticity, and Beyond! Building Safe and Fair Generative AI Upon the Three Pillars of Provenance.","authors":"John Collomosse, Andy Parsons, Mike Potel","doi":"10.1109/MCG.2024.3380168","DOIUrl":"10.1109/MCG.2024.3380168","url":null,"abstract":"<p><p>Provenance facts, such as who made an image and how, can provide valuable context for users to make trust decisions about visual content. Against a backdrop of inexorable progress in generative AI for computer graphics, over two billion people will vote in public elections this year. Emerging standards and provenance enhancing tools promise to play an important role in fighting fake news and the spread of misinformation. In this article, we contrast three provenance enhancing technologies-metadata, fingerprinting, and watermarking-and discuss how we can build upon the complementary strengths of these three pillars to provide robust trust signals to support stories told by real and generative images. Beyond authenticity, we describe how provenance can also underpin new models for value creation in the age of generative AI. In doing so, we address other risks arising with generative AI such as ensuring training consent, and the proper attribution of credit to creatives who contribute their work to train generative models. We show that provenance may be combined with distributed ledger technology to develop novel solutions for recognizing and rewarding creative endeavor in the age of generative AI.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"44 3","pages":"82-90"},"PeriodicalIF":1.7,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141437836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualization and Visual Analytics in Autonomous Driving.","authors":"Sudhir K Routray","doi":"10.1109/MCG.2024.3381450","DOIUrl":"10.1109/MCG.2024.3381450","url":null,"abstract":"<p><p>Autonomous driving is no longer a topic of science fiction. Advancements in autonomous driving technologies are now reliable. Effectively harnessing the information is essential for enhancing the safety, reliability, and efficiency of autonomous vehicles. In this article, we explore the pivotal role of visualization and visual analytics (VA) techniques used in autonomous driving. By employing sophisticated data visualization methods, VA researchers and practitioners transform intricate datasets into intuitive visual representations, providing valuable insights for decision-making processes. This article delves into various visualization approaches, including spatial-temporal mapping, interactive dashboards, and machine learning-driven analytics, tailored specifically for autonomous driving scenarios. Furthermore, it investigates the integration of real-time sensor data, sensor coordination with VA, and machine learning algorithms to create comprehensive visualizations. This research advocates for the pivotal role of visualization and VA in shaping the future of autonomous driving systems, fostering innovation, and ensuring the safe integration of self-driving vehicles.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"43-53"},"PeriodicalIF":1.7,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140289722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IdMotif: An Interactive Motif Identification in Protein Sequences.","authors":"Ji Hwan Park, Vikash Prasad, Sydney Newsom, Fares Najar, Rakhi Rajan","doi":"10.1109/MCG.2023.3345742","DOIUrl":"10.1109/MCG.2023.3345742","url":null,"abstract":"<p><p>This article presents a visual analytics framework, idMotif, to support domain experts in identifying motifs in protein sequences. A motif is a short sequence of amino acids usually associated with distinct functions of a protein, and identifying similar motifs in protein sequences helps us to predict certain types of disease or infection. idMotif can be used to explore, analyze, and visualize such motifs in protein sequences. We introduce a deep-learning-based method for grouping protein sequences and allow users to discover motif candidates of protein groups based on local explanations of the decision of a deep-learning model. idMotif provides several interactive linked views for between and within protein cluster/group and sequence analysis. Through a case study and experts' feedback, we demonstrate how the framework helps domain experts analyze protein sequences and motif identification.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"114-125"},"PeriodicalIF":1.7,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138833089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"@theSource: Welcome.","authors":"Nicholas F Polys","doi":"10.1109/MCG.2024.3384728","DOIUrl":"10.1109/MCG.2024.3384728","url":null,"abstract":"<p><p>This inaugural article sets the stage and scope for a new department in IEEE Computer Graphics and Applications: @theSource. In this department, we set out to address the questions, \"How have open source projects and open Standards driven graphics innovations and applications?\" and \"What can we learn from them?\" Thus, we are broadly concerned with how open communities and ecosystems have (and are) impacting computer graphics. The intent is to highlight: open source software (such as architectures, engines, frameworks, libraries, services); open Standards and open source data and models; and applications as well as the impacts of open graphics technologies. We also consider historical and summative reviews on the cultural and economic aspects of open source and open Standards graphics ecosystems, such as visualization and mixed reality.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"44 3","pages":"69-73"},"PeriodicalIF":1.7,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141437834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}