IEEE Computer Graphics and Applications: Latest Publications

Testing the Capability of AI Art Tools for Urban Design.
IF 1.8 · CAS Quartile 4 · Computer Science
IEEE Computer Graphics and Applications Pub Date : 2024-03-01 Epub Date: 2024-03-25 DOI: 10.1109/MCG.2024.3356169
Connor Phillips, Junfeng Jiao, Emmalee Clubb
{"title":"Testing the Capability of AI Art Tools for Urban Design.","authors":"Connor Phillips, Junfeng Jiao, Emmalee Clubb","doi":"10.1109/MCG.2024.3356169","DOIUrl":"10.1109/MCG.2024.3356169","url":null,"abstract":"<p><p>This study aimed to evaluate the performance of three artificial intelligence (AI) image synthesis models, Dall-E 2, Stable Diffusion, and Midjourney, in generating urban design imagery based on scene descriptions. A total of 240 images were generated and evaluated by two independent professional evaluators using an adapted sensibleness and specificity average metric. The results showed significant differences between the three AI models, as well as differing scores across urban scenes, suggesting that some projects and design elements may be more challenging for AI art generators to represent visually. Analysis of individual design elements showed high accuracy in common features like skyscrapers and lawns, but less frequency in depicting unique elements such as sculptures and transit stops. AI-generated urban designs have potential applications in the early stages of exploration when rapid ideation and visual brainstorming are key. Future research could broaden the style range and include more diverse evaluative metrics. The study aims to guide the development of AI models for more nuanced and inclusive urban design applications, enhancing tools for architects and urban planners.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"37-45"},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139503006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
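For readers who want to see how an adapted sensibleness and specificity average (SSA) might be tallied in practice, the following is a minimal Python sketch. The abstract does not spell out the exact adaptation, so the Rating fields, the per-image averaging of the two binary judgments, and the grouping by model are illustrative assumptions.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical rating record: one evaluator's binary judgments for one generated image.
@dataclass
class Rating:
    model: str        # e.g., "Dall-E 2", "Stable Diffusion", "Midjourney"
    scene: str        # the urban scene description used as the prompt
    sensible: bool    # does the image plausibly depict the described scene?
    specific: bool    # does it include the specific design elements requested?

def ssa_by_model(ratings):
    """Average the per-image mean of sensibleness and specificity, grouped by model."""
    per_model = {}
    for r in ratings:
        per_model.setdefault(r.model, []).append((r.sensible + r.specific) / 2)
    return {model: mean(scores) for model, scores in per_model.items()}

# Two independent evaluators rating the same hypothetical image.
ratings = [
    Rating("Midjourney", "transit plaza with public sculptures", sensible=True, specific=False),
    Rating("Midjourney", "transit plaza with public sculptures", sensible=True, specific=True),
]
print(ssa_by_model(ratings))  # {'Midjourney': 0.75}
```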
NeRF-In: Free-Form Inpainting for Pretrained NeRF With RGB-D Priors.
IF 1.8 · CAS Quartile 4 · Computer Science
IEEE Computer Graphics and Applications Pub Date : 2024-03-01 Epub Date: 2024-03-25 DOI: 10.1109/MCG.2023.3336224
I-Chao Shen, Hao-Kang Liu, Bing-Yu Chen
{"title":"NeRF-In: Free-Form Inpainting for Pretrained NeRF With RGB-D Priors.","authors":"I-Chao Shen, Hao-Kang Liu, Bing-Yu Chen","doi":"10.1109/MCG.2023.3336224","DOIUrl":"10.1109/MCG.2023.3336224","url":null,"abstract":"<p><p>Neural radiance field (NeRF) has emerged as a versatile scene representation. However, it is still unintuitive to edit a pretrained NeRF because the network parameters and the scene appearance are often not explicitly associated. In this article, we introduce the first framework that enables users to retouch undesired regions in a pretrained NeRF scene without accessing any training data and category-specific data prior. The user first draws a free-form mask to specify a region containing the unwanted objects over an arbitrary rendered view from the pretrained NeRF. Our framework transfers the user-drawn mask to other rendered views and estimates guiding color and depth images within transferred masked regions. Next, we formulate an optimization problem that jointly inpaints the image content in all masked regions by updating NeRF's parameters. We demonstrate our framework on diverse scenes and show it obtained visually plausible and structurally consistent results using less user manual efforts.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"100-109"},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138453100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
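The core step described above, updating a pretrained NeRF's parameters so that masked pixels match guiding color and depth images, can be sketched as a simple fine-tuning loop. The snippet below is a heavily simplified, self-contained illustration: ToyNeRF stands in for a real volumetric NeRF renderer, and the combined photometric-plus-depth loss and its weighting are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained NeRF: the real model maps 3-D points and view
# directions to density and color via volume rendering; here a tiny MLP maps a
# pixel's (view_id, u, v) coordinates straight to predicted RGB and depth so
# the update loop stays self-contained and runnable.
class ToyNeRF(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, coords):             # coords: (N, 3) -> rgb (N, 3), depth (N, 1)
        out = self.mlp(coords)
        return out[:, :3], out[:, 3:]

def inpaint_update(nerf, coords, guide_rgb, guide_depth, steps=200, lam=0.1):
    """Fine-tune the NeRF so masked pixels match the guiding color/depth images."""
    opt = torch.optim.Adam(nerf.parameters(), lr=1e-3)
    for _ in range(steps):
        rgb, depth = nerf(coords)                       # render the masked pixels
        loss = ((rgb - guide_rgb) ** 2).mean() \
             + lam * ((depth - guide_depth) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return nerf

# Masked pixels gathered from several views, with guiding RGB-D targets (random stand-ins).
coords = torch.rand(512, 3)
inpaint_update(ToyNeRF(), coords, torch.rand(512, 3), torch.rand(512, 1))
```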
How Text-to-Image Generative AI Is Transforming Mediated Action.
IF 1.8 · CAS Quartile 4 · Computer Science
IEEE Computer Graphics and Applications Pub Date : 2024-03-01 Epub Date: 2024-03-25 DOI: 10.1109/MCG.2024.3355808
Henriikka Vartiainen, Matti Tedre
{"title":"How Text-to-Image Generative AI Is Transforming Mediated Action.","authors":"Henriikka Vartiainen, Matti Tedre","doi":"10.1109/MCG.2024.3355808","DOIUrl":"10.1109/MCG.2024.3355808","url":null,"abstract":"<p><p>This article examines the intricate relationship between humans and text-to-image generative models (generative artificial intelligence/genAI) in the realm of art. The article frames that relationship in the theory of mediated action-a well-established theory that conceptualizes how tools shape human thoughts and actions. The article describes genAI systems as learning, cocreating, and communicating, multimodally capable hybrid systems that distill and rely on the wisdom and creativity of massive crowds of people and can sometimes surpass them. Those systems elude the theoretical description of the role of tools and locus of control in mediated action. The article asks how well the theory can accommodate both the transformative potential of genAI tools in creative fields and art, and the ethics of the emergent social dynamics it generates. The article concludes by discussing the fundamental changes and broader implications that genAI brings to the realm of mediated action and, ultimately, to the very fabric of our daily lives.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"12-22"},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139576009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Databiting: Lightweight, Transient, and Insight Rich Exploration of Personal Data.
IF 1.8 · CAS Quartile 4 · Computer Science
IEEE Computer Graphics and Applications Pub Date : 2024-03-01 DOI: 10.1109/MCG.2024.3353888
Bradley Rey, Bongshin Lee, Eun Kyoung Choe, Pourang Irani, Theresa-Marie Rhyne
{"title":"Databiting: Lightweight, Transient, and Insight Rich Exploration of Personal Data.","authors":"Bradley Rey, Bongshin Lee, Eun Kyoung Choe, Pourang Irani, Theresa-Marie Rhyne","doi":"10.1109/MCG.2024.3353888","DOIUrl":"https://doi.org/10.1109/MCG.2024.3353888","url":null,"abstract":"<p><p>As mobile and wearable devices are becoming increasingly powerful, access to personal data is within reach anytime and anywhere. Currently, methods of data exploration while on-the-go and in-situ are, however, often limited to glanceable and micro visualizations, which provide narrow insight. In this article, we introduce the notion of databiting, the act of interacting with personal data to obtain richer insight through lightweight and transient exploration. We focus our discussion on conceptualizing databiting and arguing its potential values. We then discuss five research considerations that we deem important for enabling databiting: contextual factors, interaction modalities, the relationship between databiting and other forms of exploration, personalization, and evaluation challenges. We envision this line of work in databiting could enable people to easily gain meaningful personal insight from their data anytime and anywhere.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"44 2","pages":"65-72"},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140289723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Integrated Augmented and Virtual Reality Technologies for Realistic Fire Drill Training.
IF 1.8 · CAS Quartile 4 · Computer Science
IEEE Computer Graphics and Applications Pub Date : 2024-03-01 Epub Date: 2024-03-25 DOI: 10.1109/MCG.2023.3303028
Hosan Kang, Jinseong Yang, Beom-Seok Ko, Bo-Seong Kim, Oh-Young Song, Soo-Mi Choi
{"title":"Integrated Augmented and Virtual Reality Technologies for Realistic Fire Drill Training.","authors":"Hosan Kang, Jinseong Yang, Beom-Seok Ko, Bo-Seong Kim, Oh-Young Song, Soo-Mi Choi","doi":"10.1109/MCG.2023.3303028","DOIUrl":"10.1109/MCG.2023.3303028","url":null,"abstract":"<p><p>In this article, we propose a novel fire drill training system designed specifically to integrate augmented reality (AR) and virtual reality (VR) technologies into a single head-mounted display device to provide realistic as well as safe and diverse experiences. Applying hybrid AR/VR technologies in fire drill training may be beneficial because they can overcome limitations such as space-time constraints, risk factors, training costs, and difficulties in real environments. The proposed system can improve training effectiveness by transforming arbitrary real spaces into real-time, realistic virtual fire situations, and by interacting with tangible training props. Moreover, the system can create intelligent and realistic fire effects in AR by estimating not only the object type but also its physical properties. Our user studies demonstrated the potential of integrated AR/VR for improving training and education.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"89-99"},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10010747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
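As a rough illustration of how estimated object types and physical properties might parameterize an AR fire effect, here is a small Python sketch. The material table, property names, and scaling formulas are hypothetical stand-ins; the abstract does not describe the paper's actual estimation and effect pipeline at this level of detail.

```python
# Hypothetical mapping from a recognized object's class to material properties
# that could drive a fire effect; all class names and values are illustrative.
MATERIALS = {
    "wooden_desk":   {"flammability": 0.80, "heat_output": 0.6, "smoke": 0.5},
    "paper_stack":   {"flammability": 0.95, "heat_output": 0.4, "smoke": 0.7},
    "metal_cabinet": {"flammability": 0.05, "heat_output": 0.1, "smoke": 0.1},
}
DEFAULT = {"flammability": 0.3, "heat_output": 0.3, "smoke": 0.3}

def fire_effect_params(object_class, size_m2):
    """Scale flame spread, flame height, and smoke density by material and object size."""
    props = MATERIALS.get(object_class, DEFAULT)
    return {
        "spread_rate": props["flammability"] * size_m2,
        "flame_height": props["heat_output"] * size_m2 ** 0.5,
        "smoke_density": props["smoke"],
    }

print(fire_effect_params("wooden_desk", 1.5))
```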
Jon McCormack: Art Infused With [Artificial] Intelligence.
IF 1.8 · CAS Quartile 4 · Computer Science
IEEE Computer Graphics and Applications Pub Date : 2024-03-01 DOI: 10.1109/MCG.2023.3348588
Jon McCormack, Francesca Samsel, Bruce D Campbell
{"title":"Jon McCormack: Art Infused With [Artificial] Intelligence.","authors":"Jon McCormack, Francesca Samsel, Bruce D Campbell, Francesca Samsel","doi":"10.1109/MCG.2023.3348588","DOIUrl":"https://doi.org/10.1109/MCG.2023.3348588","url":null,"abstract":"<p><p>We requested an interview with Jon McCormack after we encountered his work when looking for artists doing compelling work at the intersection of art and artificial intelligence (AI).</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"44 2","pages":"46-54"},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140289725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Generative AI for Visualization: Opportunities and Challenges.
IF 1.8 · CAS Quartile 4 · Computer Science
IEEE Computer Graphics and Applications Pub Date : 2024-03-01 DOI: 10.1109/MCG.2024.3362168
Rahul C Basole, Timothy Major, Francesco Ferrise
{"title":"Generative AI for Visualization: Opportunities and Challenges.","authors":"Rahul C Basole, Timothy Major, Rahul C Basole, Francesco Ferrise","doi":"10.1109/MCG.2024.3362168","DOIUrl":"https://doi.org/10.1109/MCG.2024.3362168","url":null,"abstract":"<p><p>Recent developments in artificial intelligence (AI) and machine learning (ML) have led to the creation of powerful generative AI methods and tools capable of producing text, code, images, and other media in response to user prompts. Significant interest in the technology has led to speculation about what fields, including visualization, can be augmented or replaced by such approaches. However, there remains a lack of understanding about which visualization activities may be particularly suitable for the application of generative AI. Drawing on examples from the field, we map current and emerging capabilities of generative AI across the different phases of the visualization lifecycle and describe salient opportunities and challenges.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"44 2","pages":"55-64"},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140289724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Virtual Access to STEM Careers: In the Field Experiments.
IF 1.8 · CAS Quartile 4 · Computer Science
IEEE Computer Graphics and Applications Pub Date : 2024-03-01 DOI: 10.1109/MCG.2024.3361002
David C Hollock, Nicholas J Brunsink, Austin B Whittaker, Andrew Lawson, Toni B Pence, Brittany Morago, Elham Ebrahimi, James Stocker, Amelia Moody, Amy Taylor, Beatriz Sousa Santos, Alejandra J Magana
{"title":"Virtual Access to STEM Careers: In the Field Experiments.","authors":"David C Hollock, Nicholas J Brunsink, Austin B Whittaker, Andrew Lawson, Toni B Pence, Brittany Morago, Elham Ebrahimi, James Stocker, Amelia Moody, Amy Taylor, Beatriz Sousa Santos, Alejandra J Magana","doi":"10.1109/MCG.2024.3361002","DOIUrl":"10.1109/MCG.2024.3361002","url":null,"abstract":"<p><p>The Virtual Access to STEM Careers (VASC) project is an intertwined classroom and virtual reality (VR) curricular program for third through fourth graders. Elementary school students learn about and take on the roles and responsibilities of STEM occupations through authentic, problem-based tasks with physical kits and immersive VR environments. This article reports on a round of curriculum and virtual environment development and in-classroom experimentation that was guided by preliminary results gathered from our initial VASC prototyping and testing. This specific iteration focuses on curriculum for learning about sea turtles and tasks regularly completed by park rangers and marine biologists who work with these creatures and a new backend data collection component to analyze participant behavior. Our results showed that educators were able to setup and integrate VASC into their classrooms with relative ease. Elementary school students were able to learn how to interface with our system quickly and enjoyed being in the environment, making a positive link to STEM education.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"44 2","pages":"73-80"},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140289727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Lightweight 3-D Convolutional Occupancy Networks for Virtual Object Reconstruction.
IF 1.8 · CAS Quartile 4 · Computer Science
IEEE Computer Graphics and Applications Pub Date : 2024-03-01 Epub Date: 2024-03-25 DOI: 10.1109/MCG.2024.3359822
Claudia Melis Tonti, Lorenzo Papa, Irene Amerini
{"title":"Lightweight 3-D Convolutional Occupancy Networks for Virtual Object Reconstruction.","authors":"Claudia Melis Tonti, Lorenzo Papa, Irene Amerini","doi":"10.1109/MCG.2024.3359822","DOIUrl":"10.1109/MCG.2024.3359822","url":null,"abstract":"<p><p>The increasing demand for edge devices causes the necessity for recent technologies to be adaptable to nonspecialized hardware. In particular, in the context of augmented, virtual reality, and computer graphics, the 3-D object reconstruction task from a sparse point cloud is highly computationally demanding and for this reason, it is difficult to accomplish on embedded devices. In addition, the majority of earlier works have focused on mesh quality at the expense of speeding up the creation process. In order to find the best balance between time for mesh generation and mesh quality, we aim to tackle the object reconstruction process by developing a lightweight implicit representation. To achieve this goal, we leverage the use of convolutional occupancy networks. We show the effectiveness of the proposed approach through extensive experiments on the ShapeNet dataset using systems with different resources such as GPU, CPU, and an embedded device.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"PP ","pages":"23-36"},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139698969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
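To make the general idea of a convolutional occupancy network concrete, here is a compact PyTorch sketch: a sparse point cloud is splatted into a coarse voxel grid, refined by a small 3-D CNN, and then queried at arbitrary 3-D points through an MLP decoder. The grid resolution, layer widths, and nearest-voxel splatting are illustrative choices and do not reproduce the paper's lightweight architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyConvOccNet(nn.Module):
    """Minimal convolutional occupancy network: point cloud -> voxel features -> occupancy."""
    def __init__(self, res=16, feat=8):
        super().__init__()
        self.res = res
        self.conv = nn.Sequential(
            nn.Conv3d(1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(nn.Linear(feat + 3, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, points, queries):
        # points, queries: (N, 3) and (M, 3), both assumed to lie in the unit cube [0, 1]^3
        grid = torch.zeros(1, 1, self.res, self.res, self.res)
        idx = (points.clamp(0, 1 - 1e-6) * self.res).long()
        grid[0, 0, idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0     # nearest-voxel splat
        feats = self.conv(grid)                                # (1, C, R, R, R)
        # grid_sample expects coords in [-1, 1], ordered (x, y, z), so flip and rescale
        coords = queries.flip(-1) * 2 - 1
        sampled = F.grid_sample(
            feats, coords.view(1, -1, 1, 1, 3), align_corners=True
        ).view(feats.shape[1], -1).t()                         # (M, C) per-query features
        return torch.sigmoid(self.decoder(torch.cat([sampled, queries], dim=-1)))

points = torch.rand(1024, 3)    # sparse input point cloud
queries = torch.rand(4096, 3)   # locations at which to evaluate occupancy
occ = TinyConvOccNet()(points, queries)   # (4096, 1) occupancy probabilities
```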
Sitting or Standing in VR: About Comfort, Conflicts, and Hazards.
IF 1.8 · CAS Quartile 4 · Computer Science
IEEE Computer Graphics and Applications Pub Date : 2024-03-01 DOI: 10.1109/MCG.2024.3352349
Daniel Zielasko, Bernhard E Riecke, Mark Billinghurst, Michele Fiorentino, Kyle Johnsen
{"title":"Sitting or Standing in VR: About Comfort, Conflicts, and Hazards.","authors":"Daniel Zielasko, Bernhard E Riecke, Mark Billinghurst, Michele Fiorentino, Kyle Johnsen","doi":"10.1109/MCG.2024.3352349","DOIUrl":"https://doi.org/10.1109/MCG.2024.3352349","url":null,"abstract":"<p><p>This article examines the choices between sitting and standing in virtual reality (VR) experiences, addressing conflicts, challenges, and opportunities. It explores issues such as the risk of motion sickness in stationary users and virtual rotations, the formation of mental models, consistent authoring, affordances, and the integration of embodied interfaces for enhanced interactions. Furthermore, it delves into the significance of multisensory integration and the impact of postural mismatches on immersion and acceptance in VR. Ultimately, the article underscores the importance of aligning postural choices and embodied interfaces with the goals of VR applications, be it for entertainment or simulation, to enhance user experiences.</p>","PeriodicalId":55026,"journal":{"name":"IEEE Computer Graphics and Applications","volume":"44 2","pages":"81-88"},"PeriodicalIF":1.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140289726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0