Latest Articles: ACM Transactions on Multimedia Computing, Communications and Applications

Bridging the Domain Gap in Scene Flow Estimation via Hierarchical Smoothness Refinement
IF 5.1 | CAS Tier 3 | Computer Science
Dejun Zhang, Mian Zhang, Xuefeng Tan, Jun Liu
DOI: 10.1145/3661823 | Published: 2024-04-27
Abstract: This paper introduces SmoothFlowNet3D, an encoder-decoder architecture designed to bridge the domain gap in scene flow estimation. SmoothFlowNet3D divides the task into two stages: initial scene flow estimation and smoothness refinement. It comprises a hierarchical encoder that extracts multi-scale point cloud features from two consecutive frames, and a hierarchical decoder that predicts the initial scene flow and refines it toward a smoother estimate. To generate the initial scene flow, a cross-frame nearest-neighbor search is performed between the features extracted from the two consecutive frames, producing forward and backward flow embeddings; these are combined into a bidirectional flow embedding that serves as input for predicting the initial scene flow. A flow smoothing module based on the self-attention mechanism then predicts the smoothing error and refines the initial scene flow for more accurate and smoother results. Extensive experiments demonstrate that SmoothFlowNet3D achieves state-of-the-art performance on both synthetic datasets and real LiDAR point clouds, confirming its effectiveness in enhancing scene flow smoothness.
Citations: 0
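The cross-frame nearest-neighbor search and bidirectional flow embedding described in the abstract above can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, feature shapes, and the simple concatenation scheme are assumptions for illustration.

```python
# Minimal sketch of a cross-frame nearest-neighbor flow embedding;
# all names here are hypothetical, not the SmoothFlowNet3D code.
import torch

def nn_flow_embedding(feat_a, pts_a, feat_b, pts_b):
    """For each point in frame A, find its nearest neighbor in frame B
    and concatenate the two features with the point offset.
    feat_*: (N, C) point features; pts_*: (N, 3) coordinates."""
    dists = torch.cdist(pts_a, pts_b)       # (Na, Nb) pairwise distances
    idx = dists.argmin(dim=1)               # nearest neighbor in frame B
    offset = pts_b[idx] - pts_a             # coarse motion cue
    return torch.cat([feat_a, feat_b[idx], offset], dim=-1)

def bidirectional_embedding(feat_a, pts_a, feat_b, pts_b):
    # Forward (A->B) and backward (B->A) embeddings, combined point-wise
    # on frame A via a nearest-neighbor mapping.
    fwd = nn_flow_embedding(feat_a, pts_a, feat_b, pts_b)
    bwd = nn_flow_embedding(feat_b, pts_b, feat_a, pts_a)
    idx = torch.cdist(pts_a, pts_b).argmin(dim=1)
    return torch.cat([fwd, bwd[idx]], dim=-1)
```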
EdiTor: Edge-guided Transformer for Ghost-free High Dynamic Range Imaging
IF 5.1 | CAS Tier 3 | Computer Science
Yuanshen Guan, Ruikang Xu, Mingde Yao, Jie Huang, Zhiwei Xiong
DOI: 10.1145/3657293 | Published: 2024-04-27
Abstract: Synthesizing a high dynamic range (HDR) image from multi-exposure images has recently been studied extensively using convolutional neural networks (CNNs). Despite remarkable progress, existing CNN-based methods suffer from the intrinsic limitation of a local receptive field, which hinders their ability to capture long-range correspondence and large motions across under-/over-exposed images, resulting in ghosting artifacts in dynamic scenes. To address this challenge, we propose a novel Edge-guided Transformer framework (EdiTor) customized for ghost-free HDR reconstruction, in which long-range motions across different exposures are delicately modeled by incorporating an edge prior. Specifically, EdiTor calculates patch-wise correlation maps on both the image and edge domains, enabling the network to effectively model global movements as well as fine-grained shifts across multiple exposures. On top of this framework, we further propose an exposure-masked loss that adaptively compensates for severely distorted regions (e.g., highlights and shadows). Experiments demonstrate that EdiTor outperforms state-of-the-art methods both quantitatively and qualitatively, achieving appealing HDR visualization with unified textures and colors.
Citations: 0
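The patch-wise correlation on image and edge domains described above might look like the following sketch, with the patch size, normalization, and mixing weight all invented for illustration rather than taken from the paper.

```python
# Minimal sketch of patch-wise correlation across exposures on both the
# image and edge domains; shapes and the alpha fusion are assumptions.
import torch
import torch.nn.functional as F

def patch_correlation(q_feat, k_feat, patch=8):
    """Split feature maps into non-overlapping patches and compute the
    normalized correlation between every patch pair.
    q_feat, k_feat: (B, C, H, W) with H, W divisible by `patch`."""
    unfold = torch.nn.Unfold(kernel_size=patch, stride=patch)
    q = unfold(q_feat).transpose(1, 2)   # (B, N, C*patch*patch)
    k = unfold(k_feat).transpose(1, 2)
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    return q @ k.transpose(1, 2)         # (B, N, N) patch-to-patch similarity

def edge_guided_correlation(img_q, img_k, edge_q, edge_k, alpha=0.5):
    # Fuse the image-domain and edge-domain correlation maps.
    return alpha * patch_correlation(img_q, img_k) + \
           (1 - alpha) * patch_correlation(edge_q, edge_k)
```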
Learned Video Compression with Adaptive Temporal Prior and Decoded Motion-aided Quality Enhancement
IF 5.1 | CAS Tier 3 | Computer Science
Jiayu Yang, Chunhui Yang, Fei Xiong, Yongqi Zhai, Ronggang Wang
DOI: 10.1145/3661824 | Published: 2024-04-27
Abstract: Learned video compression has recently drawn great attention and shown promising compression performance. In this paper, we focus on two components of the learned video compression framework, the conditional entropy model and the quality enhancement module, to improve compression performance. Specifically, we propose an adaptive spatial-temporal entropy model for image, motion, and residual compression, which introduces a temporal prior to reduce the temporal redundancy of latents and an additional modulated mask to evaluate similarity and perform refinement. In addition, a quality enhancement module is proposed for the predicted and reconstructed frames to improve frame quality and reduce the bitrate cost of residual coding. The module reuses the decoded optical flow as a motion prior and utilizes deformable convolution to mine high-quality information from the reference frame in a bit-free manner. The two proposed coding tools are integrated into a pixel-domain residual-coding-based compression framework to evaluate their effectiveness. Experimental results demonstrate that our framework achieves competitive compression performance in the low-delay scenario in terms of PSNR and MS-SSIM, compared with recent learning-based methods and the traditional H.265/HEVC. The code is available at OpenLVC.
Citations: 0
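Reusing decoded optical flow to steer deformable convolution, as the abstract sketches, could be realized along these lines. The module sizes and the offset-prediction head are assumptions, not the paper's configuration.

```python
# Minimal sketch of flow-guided deformable alignment of the reference
# frame; hypothetical module, not the authors' implementation.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class FlowGuidedAlign(nn.Module):
    def __init__(self, channels=64, kernel=3):
        super().__init__()
        # Predict per-kernel-position offsets from [reference feat, flow].
        self.offset_conv = nn.Conv2d(channels + 2, 2 * kernel * kernel,
                                     kernel_size=3, padding=1)
        self.weight = nn.Parameter(
            torch.randn(channels, channels, kernel, kernel) * 0.01)

    def forward(self, ref_feat, decoded_flow):
        # decoded_flow: (B, 2, H, W), already available at the decoder,
        # so the motion prior costs no extra bits.
        offsets = self.offset_conv(torch.cat([ref_feat, decoded_flow], dim=1))
        return deform_conv2d(ref_feat, offsets, self.weight, padding=1)
```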
Domain-invariant and Patch-discriminative Feature Learning for General Deepfake Detection
IF 5.1 | CAS Tier 3 | Computer Science
Jian Zhang, Jiangqun Ni, Fan Nie, Jiwu Huang
DOI: 10.1145/3657297 | Published: 2024-04-27
Abstract: Hyper-realistic avatars in the metaverse have raised security concerns about deepfake techniques: a generated video "recording" may be mistaken for a real recording of the people it depicts. As a result, deepfake detection has drawn considerable attention in the multimedia forensics community. Although existing methods achieve fairly good performance in the intra-dataset scenario, many of them yield unsatisfactory results in the more practically relevant cross-dataset setting, where the forged faces in the training and testing datasets come from different domains. To tackle this issue, we propose a novel Domain-Invariant and Patch-Discriminative feature learning framework, DI&PD. For image-level feature learning, single-side adversarial domain generalization is introduced to eliminate domain variances and learn domain-invariant features across training samples from different manipulation methods, together with a global and local random-crop augmentation strategy that generates additional views of forged images at various scales. A graph structure is then built by splitting the learned image-level feature maps, with each spatial location corresponding to a local patch, which facilitates patch representation learning through message passing among similar nodes. Two types of center losses are utilized to learn more discriminative features in both the image-level and patch-level embedding spaces. Extensive experimental results on several datasets demonstrate the effectiveness and generalization of the proposed method compared with other state-of-the-art methods.
Citations: 0
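One common way to realize adversarial domain generalization of the kind mentioned above is a gradient reversal layer in front of a domain classifier; the sketch below shows that standard construction, while the paper's "single-side" variant may differ in detail.

```python
# Minimal sketch of domain-adversarial feature learning via gradient
# reversal; a generic construction, not the DI&PD code.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing into the feature extractor,
        # pushing it to discard domain-discriminative information.
        return -ctx.lambd * grad_output, None

class DomainAdversarialHead(nn.Module):
    def __init__(self, feat_dim=512, num_domains=4, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.classifier = nn.Linear(feat_dim, num_domains)

    def forward(self, feat):
        # Train with cross-entropy on domain labels; the reversed gradient
        # makes features less separable by domain.
        return self.classifier(GradReverse.apply(feat, self.lambd))
```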
Integrated Sensing, Communication, and Computing for Cost-effective Multimodal Federated Perception
IF 5.1 | CAS Tier 3 | Computer Science
Ning Chen, Zhipeng Cheng, Xuwei Fan, Zhang Liu, Bangzhen Huang, Yifeng Zhao, Lianfen Huang, Xiaojiang Du, Mohsen Guizani
DOI: 10.1145/3661313 | Published: 2024-04-26
Abstract: Federated learning (FL) is a prominent paradigm of 6G edge intelligence (EI) that mitigates the privacy breaches and high communication pressure caused by conventional centralized model training in the artificial intelligence of things (AIoT). The execution of multimodal federated perception (MFP) services comprises three sub-processes: sensing-based multimodal data generation, communication-based model transmission, and computing-based model training. These sub-processes ultimately compete for underlying multi-domain physical resources such as time, frequency, and computing power. How to reasonably coordinate multi-domain resource scheduling among sensing, communication, and computing is therefore vital to MFP networks. To address these issues, this paper explores service-oriented resource management with integrated sensing, communication, and computing (ISCC). Specifically, employing the incentive mechanism of the MFP service market, the resource management problem is formulated as a social welfare maximization problem, in which the notions of "expanding resources" and "reducing costs" are used to enhance the learning performance gain and reduce resource costs. Experimental results demonstrate the effectiveness and robustness of the proposed resource scheduling mechanisms.
Citations: 0
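As a toy illustration of the social-welfare view of resource scheduling, one can allocate a shared budget across tasks to maximize concave utility minus linear cost. The utility form, cost model, and parameters below are invented for illustration and are not the paper's formulation.

```python
# Toy water-filling sketch: maximize sum_i w_i*log(1+x_i) - cost*sum_i x_i
# subject to sum_i x_i <= budget. KKT gives x_i = max(0, w_i/(cost+lam) - 1);
# bisect on the multiplier lam >= 0. Purely illustrative assumptions.
import numpy as np

def allocate(weights, budget, cost=0.1, iters=60):
    w = np.asarray(weights, dtype=float)

    def total(lam):
        return np.maximum(0.0, w / (cost + lam) - 1.0).sum()

    if total(0.0) <= budget:          # budget constraint not binding
        lam = 0.0
    else:                             # bisection for the binding multiplier
        lo, hi = 0.0, w.max()         # at lam = w.max(), all x_i are zero
        for _ in range(iters):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if total(mid) > budget else (lo, mid)
        lam = (lo + hi) / 2
    return np.maximum(0.0, w / (cost + lam) - 1.0)

print(allocate(weights=[3.0, 1.0, 0.5], budget=4.0))
```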
High Efficiency Deep-learning Based Video Compression
IF 5.1 | CAS Tier 3 | Computer Science
Lv Tang, Xinfeng Zhang
DOI: 10.1145/3661311 | Published: 2024-04-23
Abstract: Although deep learning has achieved significant improvements in image compression, its advantages have not been fully explored in video compression, and the performance of deep-learning-based video compression (DLVC) remains clearly inferior to that of the hybrid video coding framework. In this paper, we propose a novel network that improves DLVC in its most important modules: Motion Process (MP), Residual Compression (RC), and Frame Reconstruction (FR). In MP, we design a split second-order attention and multi-scale feature extraction module to fully remove warping artifacts from both the multi-scale feature space and the pixel space, which helps reduce distortion in subsequent processing. In RC, we propose a channel selection mechanism that gradually drops redundant information while preserving informative channels for better rate-distortion performance. Finally, in FR, we introduce a residual multi-scale recurrent network that improves the quality of the current reconstructed frame by progressively exploiting temporal context between it and several previously reconstructed frames. Extensive experiments on three widely used video compression datasets (HEVC, UVG, and MCL-JCV) demonstrate the superiority of our approach over state-of-the-art methods.
Citations: 0
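A channel selection mechanism of the kind named in the RC module could take the form of a learned per-channel gate that suppresses redundant latents before entropy coding. The sketch below is a generic gating module under that assumption, not the paper's design.

```python
# Minimal sketch of a learned channel-selection gate; architecture
# details (squeeze-excite-style gate, threshold) are assumptions.
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    def __init__(self, channels=128, keep_threshold=0.05):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels), nn.Sigmoid())
        self.keep_threshold = keep_threshold

    def forward(self, latent):
        gate = self.fc(latent)                  # (B, C) soft gates in (0,1)
        if not self.training:
            # Hard-drop channels whose gate falls below the threshold,
            # so they contribute no bits at inference time.
            gate = gate * (gate > self.keep_threshold).float()
        return latent * gate[:, :, None, None]
```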
Recurrent Appearance Flow for Occlusion-Free Virtual Try-On
IF 5.1 | CAS Tier 3 | Computer Science
Xiaoling Gu, Junkai Zhu, Yongkang Wong, Zizhao Wu, Jun Yu, Jianping Fan, Mohan S. Kankanhalli
DOI: 10.1145/3659581 | Published: 2024-04-23
Abstract: Image-based virtual try-on aims to transfer a target in-shop garment onto a reference person, and has recently garnered significant attention from the research community. However, previous methods face severe challenges in handling occlusion. To address this limitation, we classify occlusion into three types based on the reference person's arm posture: single-arm occlusion, two-arm non-crossed occlusion, and two-arm crossed occlusion. We then propose a novel Occlusion-Free Virtual Try-On Network (OF-VTON) that effectively overcomes these challenges. The OF-VTON framework consists of two core components: (i) a new Recurrent Appearance Flow based Deformation (RAFD) model that robustly aligns the in-shop garment to the reference person by adopting a multi-task learning strategy, jointly producing a dense appearance flow to warp the garment and predicting a human segmentation map that provides semantic guidance for the subsequent image synthesis model; and (ii) a powerful Multi-mask Image SynthesiS (MISS) model that generates photo-realistic try-on results by introducing a new mask generation and selection mechanism. Experimental results demonstrate that OF-VTON significantly outperforms existing state-of-the-art methods by mitigating the impact of occlusion. Our code is available at https://github.com/gxl-groups/OF-VTON.
Citations: 0
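The core idea of appearance-flow warping, iteratively refining a dense flow field and warping the garment with it, can be sketched as follows. The update network is a placeholder, and the whole sketch is an assumption-level illustration rather than the RAFD model.

```python
# Minimal sketch of recurrent appearance-flow warping; `update_net` is
# any module mapping a warped garment (B, C, H, W) to a flow residual
# (B, 2, H, W). Hypothetical, not the OF-VTON code.
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Backward-warp `image` (B, C, H, W) by `flow` (B, 2, H, W) in pixels."""
    B, _, H, W = image.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack([xs, ys], dim=0).float().to(image)   # (2, H, W)
    coords = grid[None] + flow                               # displaced coords
    # Normalize to [-1, 1] for grid_sample (x first, then y).
    coords_x = 2 * coords[:, 0] / (W - 1) - 1
    coords_y = 2 * coords[:, 1] / (H - 1) - 1
    sample_grid = torch.stack([coords_x, coords_y], dim=-1)  # (B, H, W, 2)
    return F.grid_sample(image, sample_grid, align_corners=True)

def recurrent_warp(garment, update_net, steps=3):
    flow = torch.zeros(garment.size(0), 2, *garment.shape[-2:],
                       device=garment.device)
    for _ in range(steps):
        flow = flow + update_net(warp(garment, flow))  # refine flow each step
    return warp(garment, flow), flow
```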
Multi-grained Representation Aggregating Transformer with Gating Cycle for Change Captioning
IF 5.1 | CAS Tier 3 | Computer Science
Shengbin Yue, Yunbin Tu, Liang Li, Shengxiang Gao, Zhengtao Yu
DOI: 10.1145/3660346 | Published: 2024-04-22
Abstract: Change captioning aims to describe the difference between an image pair in natural language, combining visual comprehension and language generation. Although significant progress has been achieved, perceiving the actual object change from different perspectives remains a key challenge, especially under drastic viewpoint changes. In this paper, we propose a novel fully attentive network, the Multi-grained Representation Aggregating Transformer (MURAT), to distinguish the actual change from viewpoint change. Specifically, the Pair Encoder first captures similar semantics between pairwise objects in a multi-level manner, which are regarded as semantic cues for ruling out irrelevant change. Next, a novel Multi-grained Representation Aggregator (MRA) constructs a reliable difference representation by employing both coarse- and fine-grained semantic cues. Finally, the language decoder generates a description of the change based on the output of the MRA. In addition, a Gating Cycle Mechanism is introduced to enforce semantic consistency between difference representation learning and language generation through a reverse manipulation, bridging the semantic gap between change features and text features. Extensive experiments demonstrate that MURAT greatly improves the ability to describe the actual change amid the distraction of irrelevant change, and achieves state-of-the-art performance on three benchmarks: CLEVR-Change, CLEVR-DC, and Spot-the-Diff.
Citations: 0
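To make the idea of aggregating coarse- and fine-grained difference cues concrete, here is a minimal sketch in which a coarse (global) difference attends over per-patch differences; every module name and design choice here is an illustrative assumption, not the MURAT architecture.

```python
# Minimal sketch of coarse/fine difference aggregation for change
# captioning; hypothetical module, not the authors' MRA.
import torch
import torch.nn as nn

class DifferenceAggregator(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feat_before, feat_after):
        """feat_*: (B, N, D) patch features of the two images."""
        # Coarse cue: difference of global averages; fine cue: per-patch.
        coarse = feat_after.mean(1, keepdim=True) - \
                 feat_before.mean(1, keepdim=True)        # (B, 1, D)
        fine = feat_after - feat_before                   # (B, N, D)
        # Let the coarse cue attend over fine-grained differences so that
        # viewpoint-induced shifts can be down-weighted.
        diff, _ = self.attn(coarse, fine, fine)
        return diff.squeeze(1)                            # (B, D)
```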
Seventeen Years of the ACM Transactions on Multimedia Computing, Communications and Applications: A Bibliometric Overview
IF 5.1 | CAS Tier 3 | Computer Science
Walayat Hussain, Honghao Gao, Rafiul Karim, Abdulmotaleb El Saddik
DOI: 10.1145/3660347 | Published: 2024-04-18
Abstract: ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) has been dedicated to advancing multimedia research, fostering discoveries, innovations, and practical applications since 2005. The journal consistently publishes high-quality, original research in emerging fields through open submissions, calls for papers, special issues, rigorous review processes, and diverse research topics. This study presents an extensive bibliometric analysis of the journal using a range of bibliometric indicators, seeking to unveil the latent patterns in the journal's scholarly landscape from 2005 to 2022. The data are drawn primarily from the Web of Science (WoS) Core Collection database. The analysis covers diverse viewpoints, including yearly publication and citation rates, highly cited papers, and the most prolific authors, institutions, and countries. The paper employs VOSviewer-generated graphical maps to illustrate networks of co-citations, keyword co-occurrences, and institutional and national bibliographic couplings. Furthermore, the study conducts a comprehensive global and temporal examination of co-occurrences of author keywords, revealing the emergence of numerous novel keywords over the past decades.
Citations: 0
C2: ABR Streaming in Cognizant of Consumption Context for Improved QoE and Resource Usage Tradeoffs
IF 5.1 | CAS Tier 3 | Computer Science
Cheonjin Park, Chinmaey Shende, Subhabrata Sen, Bing Wang
DOI: 10.1145/3652517 | Published: 2024-04-18
Abstract: Smartphones have emerged as ubiquitous platforms for consuming content in a wide range of consumption contexts (C2), e.g., over cellular or WiFi, playing back audio and video directly on the phone or through peripheral devices such as external screens or speakers. In this paper, we argue that a user's specific C2 is an important factor to consider in Adaptive Bitrate (ABR) streaming. We examine the current practices of using C2 in five popular ABR players and identify various limitations in existing treatments that have a detrimental impact on network resource usage and user experience. We then formulate C2-cognizant ABR streaming as an optimization problem and develop practical best-practice guidelines to realize it. Instantiating these guidelines, we develop a proof-of-concept implementation in the widely used state-of-the-art ExoPlayer platform and demonstrate that it leads to significantly better tradeoffs in terms of user experience and resource usage. Finally, we show that the guidelines also benefit the dash.js player, whose ABR logic differs significantly from ExoPlayer's.
Citations: 0
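The intuition behind C2-cognizant ABR, capping the useful bitrate by how the content is actually consumed, can be illustrated with a toy selector. The ladder values, context labels, and safety factor below are invented for illustration and do not reflect the paper's guidelines or ExoPlayer's API.

```python
# Toy sketch of folding consumption context (C2) into bitrate selection;
# all numbers and context names are illustrative assumptions.
LADDER = [(426, 0.4), (640, 1.0), (1280, 2.5), (1920, 5.0)]  # (width, Mbps)

def select_bitrate(throughput_mbps, context):
    # Context caps: audio-only playback needs no video; a phone screen
    # rarely benefits from 1080p; an external display may use it all.
    width_cap = {"audio_only": 0, "phone_screen": 1280,
                 "external_display": 1920}[context]
    candidates = [(w, r) for w, r in LADDER if w <= width_cap]
    if not candidates:
        return 0.0                    # video track can be dropped entirely
    # Pick the highest sustainable rung under the estimated throughput,
    # with a 0.8 safety factor; fall back to the lowest rung otherwise.
    feasible = [r for _, r in candidates if r <= 0.8 * throughput_mbps]
    return max(feasible) if feasible else min(r for _, r in candidates)

print(select_bitrate(4.0, "phone_screen"))   # -> 2.5
```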