Editorial issue 34.6

IF 0.9 · JCR Q4, Computer Science, Software Engineering
Nadia Magnenat Thalmann, Daniel Thalmann
Computer Animation and Virtual Worlds · DOI: 10.1002/cav.2227 · Published 2023-12-28 · Citations: 0

This issue contains 12 regular papers. In the first paper, Hong Li et al. present an animation translation method based on edge enhancement and coordinate attention, called FAEC-GAN. They design a novel edge discrimination network to identify the edge features of images, so that the generated anime images present clear and coherent lines. A coordinate attention module is introduced in the encoder to adapt the model to geometric changes during translation and produce more realistic animation images. In addition, the method combines a focal frequency loss with a pixel loss, attending to both the frequency-domain and pixel-level information of the generated image to improve its visual quality.
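The editorial only names the building blocks of FAEC-GAN; as an illustration, a heavily simplified, single-channel sketch of the coordinate-attention idea (pooling separately along each spatial axis and reweighting features per row and per column) might look as follows. The function names and the sigmoid gating are assumptions for illustration, not the paper's implementation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def coordinate_attention(fmap):
    """Toy coordinate attention on a single-channel H x W feature map.

    Pools along each axis separately, so the attention weights keep
    positional (row/column) information, then rescales the features.
    """
    H, W = len(fmap), len(fmap[0])
    row_pool = [sum(row) / W for row in fmap]                             # one value per row
    col_pool = [sum(fmap[i][j] for i in range(H)) / H for j in range(W)]  # one per column
    row_att = [sigmoid(v) for v in row_pool]   # gate per row
    col_att = [sigmoid(v) for v in col_pool]   # gate per column
    return [[fmap[i][j] * row_att[i] * col_att[j] for j in range(W)]
            for i in range(H)]
```

A real coordinate-attention module operates on multi-channel tensors with learned 1×1 convolutions; the sketch only shows why axis-wise pooling preserves coordinate information that global pooling discards.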

In the second paper, Rahul Jain et al. propose an algorithm that converts a depth video into a single dynamic image known as a linked motion image (LMI). The LMI is fed to a classifier consisting of an ensemble of three modified pre-trained convolutional neural networks (CNNs). Experiments were conducted on two datasets: the multimodal large-scale EgoGesture dataset and the MSR Gesture 3D dataset. On EgoGesture, the proposed method achieves an accuracy of 92.91%, better than state-of-the-art methods; on MSR Gesture 3D it reaches 100% accuracy, again outperforming the state of the art. The recognition accuracy and precision of each individual gesture are also reported.
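The editorial does not specify how the three CNNs' outputs are combined; a common ensembling scheme, shown here purely as an assumed illustration, averages the class-probability vectors of the member networks and picks the top class:

```python
def ensemble_predict(prob_lists):
    """Average class probabilities from several classifiers, return the argmax class.

    prob_lists: one probability vector (list of floats) per classifier.
    """
    n = len(prob_lists)
    k = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n for c in range(k)]
    return max(range(k), key=lambda c: avg[c])

# Three classifiers disagree on a 2-class gesture; averaging settles it.
vote = ensemble_predict([[0.6, 0.4], [0.2, 0.8], [0.3, 0.7]])  # -> 1
```

Soft voting of this kind tends to be more robust than majority voting when member networks output calibrated probabilities.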

In the third paper, Rustam Akhunov et al. propose a set of experiments to aid the evaluation of the main categories of fluid-boundary interaction that matter in computer animation: a resting (no-motion) fluid, tangential and normal motion of a fluid with respect to the boundary, and a fluid impacting a corner. They propose 10 experiments, each comprising an experimental setup and a quantitative evaluation with optional visual inspection, arranged in four groups that each focus on one of the main categories of fluid-boundary interaction. The authors use these experiments to evaluate three particle-based boundary handling methods, namely Pressure Mirroring (PM), Pressure Boundaries (PB), and Moving Least Squares Pressure Extrapolation (MLS), in combination with two incompressible SPH fluid simulation methods, IISPH and DFSPH.

In the fourth paper, Shenghuan Zhao et al. present three Extended Reality (XR) apps (AR, MR, and VR) for interactively visualizing façade fenestration geometries and indoor illuminance simulations. The XR technologies are then assessed by 120 students and young architects on two aspects: task performance and engagement level. Task performance is measured by two indicators, correct rate and time consumption, while engagement is measured by usability and interest. The evaluation shows that, compared to AR and VR, MR is the best XR technology for this purpose, and VR outperforms AR on every indicator except usability. By comparing how three different XR technologies aid fenestration design, this study increases the practical value of applying XR in the building design field.
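The two task-performance indicators used in the study, correct rate and time consumption, are straightforward to compute; a minimal sketch (the per-task tuple format is an assumption for illustration):

```python
def task_performance(results):
    """Compute the study's two task-performance indicators.

    results: one (correct, seconds) pair per completed task.
    Returns (correct_rate, mean_time_seconds).
    """
    correct_rate = sum(1 for ok, _ in results if ok) / len(results)
    mean_time = sum(t for _, t in results) / len(results)
    return correct_rate, mean_time

# Four tasks: three correct, completion times 30/50/40/40 seconds.
rate, secs = task_performance([(True, 30.0), (False, 50.0), (True, 40.0), (True, 40.0)])
# rate == 0.75, secs == 40.0
```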

The fifth paper, by Jing Zhao et al., focuses on a multiple-fluid coupling simulation algorithm based on the material point method (MPM) and the phase-field method (PFM). First, building on the MPM, they model multiphase flow on Eulerian grids and, combined with the PFM, capture the sharp interfaces between immiscible fluids; the gas phase is also treated as a fluid during gas–liquid interaction. Second, to reproduce the natural evolution of a fluid from a high-energy to a low-energy state, the paper proposes a local bulk-energy minimization function to steer the system toward the low-energy state. Finally, the paper designs and carries out several groups of multiple-fluid coupling comparison experiments. The results show that the proposed approach can simulate various rapid-diffusion effects in multiple-fluid coupling, such as complete dissolution, mutual solubility, and extraction.

In the sixth paper, Jiwei Zhang et al. propose a novel method that fuses multiple heterogeneous features through a multi-feature subspace representation network (MFSRN), maximizing classification performance while keeping the disparity among features as small as possible via common-subspace constraints. The authors conducted comparative experiments against state-of-the-art models on the bird's-eye view person dataset, and extensive results demonstrate that the proposed MFSRN achieves better recognition performance. The validity and stability of the method are also confirmed.
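The editorial states the two competing objectives (classification performance versus small inter-feature disparity) without giving the loss; a generic way to couple them, assumed here purely for illustration, is a weighted sum of a classification loss and the mean pairwise disparity between the subspace features:

```python
def disparity(u, v):
    """Mean squared difference between two subspace feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)

def total_loss(cls_loss, features, lam=0.5):
    """Classification loss plus lam times the mean pairwise disparity
    (a stand-in for the common-subspace penalty)."""
    pairs = [(i, j) for i in range(len(features)) for j in range(i + 1, len(features))]
    d = sum(disparity(features[i], features[j]) for i, j in pairs) / len(pairs)
    return cls_loss + lam * d

# Two feature vectors far apart raise the penalty:
loss = total_loss(1.0, [[0.0, 0.0], [2.0, 2.0]], lam=0.5)  # 1.0 + 0.5 * 4.0 = 3.0
```

During training, lowering this combined loss simultaneously improves classification and pulls the heterogeneous features toward a shared subspace.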

In the seventh paper, Sahadeb Shit et al. propose a convolutional neural network (CNN)-based image dehazing and detection approach, called End-to-End Dehaze and Detection Network (EDD-N), for proper image visualization and detection. The network is trained on real hazy images and recovers dehazed images directly, without estimating a transmission map. The authors report that EDD-N is robust and more accurate than competing models, and they conducted extensive experiments on real foggy images. Quantitative and qualitative evaluations on the hazy dataset verify the method's superiority over other dehazing methods. Moreover, the method is validated on real-time object detection tasks in adverse weather conditions, which can benefit intelligent transportation systems.
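For context, EDD-N's distinguishing claim is that it skips the transmission map; classical dehazing instead inverts the atmospheric scattering model I = J·t + A·(1 − t), recovering the scene radiance J from the hazy intensity I, the airlight A, and an estimated transmission t. A per-pixel sketch of that classical inversion (the clamping threshold is a common heuristic, not from this paper):

```python
def dehaze_pixel(I, A, t, t_min=0.1):
    """Invert the scattering model I = J*t + A*(1-t) for scene radiance J.

    t is clamped to t_min so dense haze (t -> 0) does not blow up the division.
    """
    t = max(t, t_min)
    return (I - A * (1.0 - t)) / t

# A pixel hazed with J=0.5, A=1.0, t=0.8 gives I = 0.5*0.8 + 0.2 = 0.6;
# inverting the model recovers J:
J = dehaze_pixel(0.6, 1.0, 0.8)  # 0.5
```

End-to-end networks like EDD-N learn this mapping implicitly, which avoids the error introduced by an inaccurate estimate of t.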

In the eighth paper, Chaehan So et al. designed a virtual being from a deep learning-generated face and a conversational AI model, acting as a virtual conversation partner in online conferencing software, and evaluated it on 11 perceived social attributes. Compared to prior expectations, participants perceived the virtual being as distinctly higher in warmth (engaging, empathic, and approachable) but lower in realism and credibility after 5 days of 10-minute daily conversations (Study 1). The authors then explored simplifying the technical setup to lower the entry barrier for such experiments (Study 2): they ran several trials of fine-tuning a small conversational model with 90 million parameters until its performance metrics improved. Testing the fine-tuned model with users revealed that it was not perceived differently from a large conversational model.

In the ninth paper, Di Qi et al. propose a novel split-and-join approach to simulate a side-to-side stapled intestinal anastomosis in virtual reality. They model the intestine with a new hybrid representation: a grid-linked particle model for physics simulation and a surface mesh for rendering. The proposed split and join operations handle the updates of both the grid-linked particle model and the surface mesh during the anastomosis procedure. Simulation results demonstrate the feasibility of the approach for simulating intestine models and the side-to-side anastomosis operation.

The tenth paper, by Lanfeng Zhou et al., presents a novel method that mixes graph convolution with point-cloud deep learning. The skinned multi-person linear model is treated as a graph-structured input, and a coarsened graph is obtained by graph convolution. After the coarsened graph is fed into a PointNet network, the coordinates of the Dazhui acupoint are output. Unlike existing methods, the proposed method can label the results directly on the adapted model, improving accuracy across different body models. An optimization method based on the graph structure is introduced to better fit the predicted acupoints to the body surface, and a dataset annotated with the Dazhui acupoint is constructed for training. Experiments show that the positioning accuracy can meet the requirements of needle application under certain circumstances.

In the eleventh paper, Jian Lu et al. propose to reduce interference factors in skeleton-based action recognition by using the joint-coordinate modality of the 2D skeleton to represent changes in human posture. First, joint coordinates are obtained from RGB video or images using a detector. Then a feature extraction network performs multi-level feature learning to establish correspondences between actions and their multi-level features. Finally, a hierarchical attention mechanism is introduced to design the CHAN model: by computing the associations between elements, the weights used for action classification are redistributed. The proposed method performs well on the UT-Kinect, KTH, and NTU RGB+D datasets.
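The editorial describes the attention step only as redistributing the weight of the action classification; a generic form of that idea, assumed here for illustration rather than taken from the CHAN paper, is a softmax over relevance scores used to rescale per-element scores:

```python
import math

def softmax(xs):
    m = max(xs)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def reweight(scores, relevance):
    """Redistribute classification weight via element-wise attention."""
    att = softmax(relevance)
    return [s * a for s, a in zip(scores, att)]

# Equal relevance spreads the weight evenly:
w = reweight([1.0, 1.0], [0.0, 0.0])  # [0.5, 0.5]
```

In a hierarchical attention mechanism this reweighting is applied at several levels (joints, body parts, frames), so more informative elements contribute more to the final classification.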

The last paper, by Numan Ali et al., first reports a subjective study with field experts investigating the practical implementation of their existing virtual chemistry laboratory (VCL). Following the experts' suggestions, the authors propose a task-specific-aids-based virtual reality chemistry laboratory (TSA-VRCL) to minimize students' cognitive load and enhance their performance. The task-specific aids consist of arrow, animation, and audio aids, rendered separately with each step of the experimental tasks. In the evaluation, 80 students performed the experiments in four groups under four different experimental conditions. The evaluation revealed that the proposed TSA-VRCL minimizes students' cognitive load and enhances their performance.
