Computational Visual Media: Latest Publications

MusicFace: Music-driven expressive singing face synthesis
IF 6.9, Zone 3, Computer Science
Computational Visual Media, Pub Date: 2023-11-30, DOI: 10.1007/s41095-023-0343-7
Pengfei Liu, Wenjin Deng, Hengda Li, Jintai Wang, Yinglin Zheng, Yiwei Ding, Xiaohu Guo, Ming Zeng
{"title":"MusicFace: Music-driven expressive singing face synthesis","authors":"Pengfei Liu, Wenjin Deng, Hengda Li, Jintai Wang, Yinglin Zheng, Yiwei Ding, Xiaohu Guo, Ming Zeng","doi":"10.1007/s41095-023-0343-7","DOIUrl":"https://doi.org/10.1007/s41095-023-0343-7","url":null,"abstract":"<p>It remains an interesting and challenging problem to synthesize a vivid and realistic singing face driven by music. In this paper, we present a method for this task with natural motions for the lips, facial expression, head pose, and eyes. Due to the coupling of mixed information for the human voice and backing music in common music audio signals, we design a decouple-and-fuse strategy to tackle the challenge. We first decompose the input music audio into a human voice stream and a backing music stream. Due to the implicit and complicated correlation between the two-stream input signals and the dynamics of the facial expressions, head motions, and eye states, we model their relationship with an attention scheme, where the effects of the two streams are fused seamlessly. Furthermore, to improve the expressivenes of the generated results, we decompose head movement generation in terms of speed and direction, and decompose eye state generation into short-term blinking and long-term eye closing, modeling them separately. We have also built a novel dataset, SingingFace, to support training and evaluation of models for this task, including future work on this topic. Extensive experiments and a user study show that our proposed method is capable of synthesizing vivid singing faces, qualitatively and quantitatively better than the prior state-of-the-art.\u0000</p>","PeriodicalId":37301,"journal":{"name":"Computational Visual Media","volume":"47 1","pages":""},"PeriodicalIF":6.9,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139079090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
3D hand pose and shape estimation from monocular RGB via efficient 2D cues
IF 6.9, Zone 3, Computer Science
Computational Visual Media, Pub Date: 2023-11-30, DOI: 10.1007/s41095-023-0346-4
Fenghao Zhang, Lin Zhao, Shengling Li, Wanjuan Su, Liman Liu, Wenbing Tao
{"title":"3D hand pose and shape estimation from monocular RGB via efficient 2D cues","authors":"Fenghao Zhang, Lin Zhao, Shengling Li, Wanjuan Su, Liman Liu, Wenbing Tao","doi":"10.1007/s41095-023-0346-4","DOIUrl":"https://doi.org/10.1007/s41095-023-0346-4","url":null,"abstract":"<p>Estimating 3D hand shape from a single-view RGB image is important for many applications. However, the diversity of hand shapes and postures, depth ambiguity, and occlusion may result in pose errors and noisy hand meshes. Making full use of 2D cues such as 2D pose can effectively improve the quality of 3D human hand shape estimation. In this paper, we use 2D joint heatmaps to obtain spatial details for robust pose estimation. We also introduce a depth-independent 2D mesh to avoid depth ambiguity in mesh regression for efficient hand-image alignment. Our method has four cascaded stages: 2D cue extraction, pose feature encoding, initial reconstruction, and reconstruction refinement. Specifically, we first encode the image to determine semantic features during 2D cue extraction; this is also used to predict hand joints and for segmentation. Then, during the pose feature encoding stage, we use a hand joints encoder to learn spatial information from the joint heatmaps. Next, a coarse 3D hand mesh and 2D mesh are obtained in the initial reconstruction step; a mesh squeeze-and-excitation block is used to fuse different hand features to enhance perception of 3D hand structures. Finally, a global mesh refinement stage learns non-local relations between vertices of the hand mesh from the predicted 2D mesh, to predict an offset hand mesh to fine-tune the reconstruction results. Quantitative and qualitative results on the FreiHAND benchmark dataset demonstrate that our approach achieves state-of-the-art performance.\u0000</p>","PeriodicalId":37301,"journal":{"name":"Computational Visual Media","volume":"26 1","pages":""},"PeriodicalIF":6.9,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139078754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A causal convolutional neural network for multi-subject motion modeling and generation
IF 6.9, Zone 3, Computer Science
Computational Visual Media, Pub Date: 2023-11-30, DOI: 10.1007/s41095-022-0307-3
Shuaiying Hou, Congyi Wang, Wenlin Zhuang, Yu Chen, Yangang Wang, Hujun Bao, Jinxiang Chai, Weiwei Xu
{"title":"A causal convolutional neural network for multi-subject motion modeling and generation","authors":"Shuaiying Hou, Congyi Wang, Wenlin Zhuang, Yu Chen, Yangang Wang, Hujun Bao, Jinxiang Chai, Weiwei Xu","doi":"10.1007/s41095-022-0307-3","DOIUrl":"https://doi.org/10.1007/s41095-022-0307-3","url":null,"abstract":"<p>Inspired by the success of WaveNet in multi-subject speech synthesis, we propose a novel neural network based on causal convolutions for multi-subject motion modeling and generation. The network can capture the intrinsic characteristics of the motion of different subjects, such as the influence of skeleton scale variation on motion style. Moreover, after fine-tuning the network using a small motion dataset for a novel skeleton that is not included in the training dataset, it is able to synthesize high-quality motions with a personalized style for the novel skeleton. The experimental results demonstrate that our network can model the intrinsic characteristics of motions well and can be applied to various motion modeling and synthesis tasks.\u0000</p>","PeriodicalId":37301,"journal":{"name":"Computational Visual Media","volume":"83 2","pages":""},"PeriodicalIF":6.9,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138494583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A visual modeling method for spatiotemporal and multidimensional features in epidemiological analysis: Applied COVID-19 aggregated datasets
IF 6.9, Zone 3, Computer Science
Computational Visual Media, Pub Date: 2023-11-30, DOI: 10.1007/s41095-023-0353-5
Yu Dong, Christy Jie Liang, Yi Chen, Jie Hua
{"title":"A visual modeling method for spatiotemporal and multidimensional features in epidemiological analysis: Applied COVID-19 aggregated datasets","authors":"Yu Dong, Christy Jie Liang, Yi Chen, Jie Hua","doi":"10.1007/s41095-023-0353-5","DOIUrl":"https://doi.org/10.1007/s41095-023-0353-5","url":null,"abstract":"<p>The visual modeling method enables flexible interactions with rich graphical depictions of data and supports the exploration of the complexities of epidemiological analysis. However, most epidemiology visualizations do not support the combined analysis of objective factors that might influence the transmission situation, resulting in a lack of quantitative and qualitative evidence. To address this issue, we developed a portrait-based visual modeling method called <i>+msRNAer</i>. This method considers the spatiotemporal features of virus transmission patterns and multidimensional features of objective risk factors in communities, enabling portrait-based exploration and comparison in epidemiological analysis. We applied <i>+msRNAer</i> to aggregate COVID-19-related datasets in New South Wales, Australia, combining COVID-19 case number trends, geo-information, intervention events, and expert-supervised risk factors extracted from local government area-based censuses. We perfected the <i>+msRNAer</i> workflow with collaborative views and evaluated its feasibility, effectiveness, and usefulness through one user study and three subject-driven case studies. Positive feedback from experts indicates that <i>+msRNAer</i> provides a general understanding for analyzing comprehension that not only compares relationships between cases in time-varying and risk factors through portraits but also supports navigation in fundamental geographical, timeline, and other factor comparisons. By adopting interactions, experts discovered functional and practical implications for potential patterns of long-standing community factors regarding the vulnerability faced by the pandemic. Experts confirmed that <i>+msRNAer</i> is expected to deliver visual modeling benefits with spatiotemporal and multidimensional features in other epidemiological analysis scenarios.\u0000</p>","PeriodicalId":37301,"journal":{"name":"Computational Visual Media","volume":"146 1","pages":""},"PeriodicalIF":6.9,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139078759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
6DOF pose estimation of a 3D rigid object based on edge-enhanced point pair features
IF 6.9, Zone 3, Computer Science
Computational Visual Media, Pub Date: 2023-11-30, DOI: 10.1007/s41095-022-0308-2
Chenyi Liu, Fei Chen, Lu Deng, Renjiao Yi, Lintao Zheng, Chenyang Zhu, Jia Wang, Kai Xu
{"title":"6DOF pose estimation of a 3D rigid object based on edge-enhanced point pair features","authors":"Chenyi Liu, Fei Chen, Lu Deng, Renjiao Yi, Lintao Zheng, Chenyang Zhu, Jia Wang, Kai Xu","doi":"10.1007/s41095-022-0308-2","DOIUrl":"https://doi.org/10.1007/s41095-022-0308-2","url":null,"abstract":"<p>The point pair feature (PPF) is widely used for 6D pose estimation. In this paper, we propose an efficient 6D pose estimation method based on the PPF framework. We introduce a well-targeted down-sampling strategy that focuses on edge areas for efficient feature extraction for complex geometry. A pose hypothesis validation approach is proposed to resolve ambiguity due to symmetry by calculating the edge matching degree. We perform evaluations on two challenging datasets and one real-world collected dataset, demonstrating the superiority of our method for pose estimation for geometrically complex, occluded, symmetrical objects. We further validate our method by applying it to simulated punctures.\u0000</p>","PeriodicalId":37301,"journal":{"name":"Computational Visual Media","volume":"83 1","pages":""},"PeriodicalIF":6.9,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138494584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A survey on facial image deblurring
IF 6.9, Zone 3, Computer Science
Computational Visual Media, Pub Date: 2023-11-30, DOI: 10.1007/s41095-023-0336-6
Bingnan Wang, Fanjiang Xu, Quan Zheng
{"title":"A survey on facial image deblurring","authors":"Bingnan Wang, Fanjiang Xu, Quan Zheng","doi":"10.1007/s41095-023-0336-6","DOIUrl":"https://doi.org/10.1007/s41095-023-0336-6","url":null,"abstract":"<p>When a facial image is blurred, it significantly affects high-level vision tasks such as face recognition. The purpose of facial image deblurring is to recover a clear image from a blurry input image, which can improve the recognition accuracy, etc. However, general deblurring methods do not perform well on facial images. Therefore, some face deblurring methods have been proposed to improve performance by adding semantic or structural information as specific priors according to the characteristics of the facial images. In this paper, we survey and summarize recently published methods for facial image deblurring, most of which are based on deep learning. First, we provide a brief introduction to the modeling of image blurring. Next, we summarize face deblurring methods into two categories: model-based methods and deep learning-based methods. Furthermore, we summarize the datasets, loss functions, and performance evaluation metrics commonly used in the neural network training process. We show the performance of classical methods on these datasets and metrics and provide a brief discussion on the differences between model-based and learning-based methods. Finally, we discuss the current challenges and possible future research directions.\u0000</p>","PeriodicalId":37301,"journal":{"name":"Computational Visual Media","volume":"14 1","pages":""},"PeriodicalIF":6.9,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138517018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards robustness and generalization of point cloud representation: A geometry coding method and a large-scale object-level dataset
IF 6.9, Zone 3, Computer Science
Computational Visual Media, Pub Date: 2023-11-30, DOI: 10.1007/s41095-022-0305-5
Mingye Xu, Zhipeng Zhou, Yali Wang, Yu Qiao
{"title":"Towards robustness and generalization of point cloud representation: A geometry coding method and a large-scale object-level dataset","authors":"Mingye Xu, Zhipeng Zhou, Yali Wang, Yu Qiao","doi":"10.1007/s41095-022-0305-5","DOIUrl":"https://doi.org/10.1007/s41095-022-0305-5","url":null,"abstract":"<p>Robustness and generalization are two challenging problems for learning point cloud representation. To tackle these problems, we first design a novel geometry coding model, which can effectively use an invariant eigengraph to group points with similar geometric information, even when such points are far from each other. We also introduce a large-scale point cloud dataset, PCNet184. It consists of 184 categories and 51,915 synthetic objects, which brings new challenges for point cloud classification, and provides a new benchmark to assess point cloud cross-domain generalization. Finally, we perform extensive experiments on point cloud classification, using ModelNet40, ScanObjectNN, and our PCNet184, and segmentation, using ShapeNetPart and S3DIS. Our method achieves comparable performance to state-of-the-art methods on these datasets, for both supervised and unsupervised learning. Code and our dataset are available at https://github.com/MingyeXu/PCNet184.\u0000</p>","PeriodicalId":37301,"journal":{"name":"Computational Visual Media","volume":"83 3","pages":""},"PeriodicalIF":6.9,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138494571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A survey of urban visual analytics: Advances and future directions
IF 17.3, Zone 3, Computer Science
Computational Visual Media, Pub Date: 2023-01-01, Epub Date: 2022-10-18, DOI: 10.1007/s41095-022-0275-7
Zikun Deng, Di Weng, Shuhan Liu, Yuan Tian, Mingliang Xu, Yingcai Wu
{"title":"A survey of urban visual analytics: Advances and future directions.","authors":"Zikun Deng, Di Weng, Shuhan Liu, Yuan Tian, Mingliang Xu, Yingcai Wu","doi":"10.1007/s41095-022-0275-7","DOIUrl":"10.1007/s41095-022-0275-7","url":null,"abstract":"<p><p>Developing effective visual analytics systems demands care in characterization of domain problems and integration of visualization techniques and computational models. Urban visual analytics has already achieved remarkable success in tackling urban problems and providing fundamental services for smart cities. To promote further academic research and assist the development of industrial urban analytics systems, we comprehensively review urban visual analytics studies from four perspectives. In particular, we identify 8 urban domains and 22 types of popular visualization, analyze 7 types of computational method, and categorize existing systems into 4 types based on their integration of visualization techniques and computational models. We conclude with potential research directions and opportunities.</p>","PeriodicalId":37301,"journal":{"name":"Computational Visual Media","volume":"9 1","pages":"3-39"},"PeriodicalIF":17.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9579670/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40655032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Message from the Best Paper Award Committee
IF 6.9, Zone 3, Computer Science
Computational Visual Media, Pub Date: 2022-04-20, DOI: 10.1007/s41095-022-0285-5
Ming C. Lin, Xin Tong, Wenping Wang
{"title":"Message from the Best Paper Award Committee","authors":"Ming C. Lin,Xin Tong,Wenping Wang","doi":"10.1007/s41095-022-0285-5","DOIUrl":"https://doi.org/10.1007/s41095-022-0285-5","url":null,"abstract":"","PeriodicalId":37301,"journal":{"name":"Computational Visual Media","volume":"92 4","pages":"329-329"},"PeriodicalIF":6.9,"publicationDate":"2022-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138494565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unsupervised random forest for affinity estimation
IF 6.9, Zone 3, Computer Science
Computational Visual Media, Pub Date: 2022-01-01, Epub Date: 2021-12-06, DOI: 10.1007/s41095-021-0241-9
Yunai Yi, Diya Sun, Peixin Li, Tae-Kyun Kim, Tianmin Xu, Yuru Pei
{"title":"Unsupervised random forest for affinity estimation.","authors":"Yunai Yi, Diya Sun, Peixin Li, Tae-Kyun Kim, Tianmin Xu, Yuru Pei","doi":"10.1007/s41095-021-0241-9","DOIUrl":"10.1007/s41095-021-0241-9","url":null,"abstract":"<p><p>This paper presents an unsupervised clustering random-forest-based metric for affinity estimation in large and high-dimensional data. The criterion used for node splitting during forest construction can handle rank-deficiency when measuring cluster compactness. The binary forest-based metric is extended to continuous metrics by exploiting both the common traversal path and the smallest shared parent node. The proposed forest-based metric efficiently estimates affinity by passing down data pairs in the forest using a limited number of decision trees. A pseudo-leaf-splitting (PLS) algorithm is introduced to account for spatial relationships, which regularizes affinity measures and overcomes inconsistent leaf assign-ments. The random-forest-based metric with PLS facilitates the establishment of consistent and point-wise correspondences. The proposed method has been applied to automatic phrase recognition using color and depth videos and point-wise correspondence. Extensive experiments demonstrate the effectiveness of the proposed method in affinity estimation in a comparison with the state-of-the-art.</p>","PeriodicalId":37301,"journal":{"name":"Computational Visual Media","volume":"8 2","pages":"257-272"},"PeriodicalIF":6.9,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8645415/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39720010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0