Proceedings of the 31st ACM International Conference on Information & Knowledge Management: Latest Publications

Shoe Size Resolution in Search Queries and Product Listings using Knowledge Graphs
Petar Ristoski, Aritra Mandal, Simon Becker, Anu Mandalam, Ethan Hart, Sanjika Hewavitharana, Zhe Wu, Qunzhi Zhou
DOI: 10.1145/3511808.3557519 | Published: 2022-10-17
Abstract: The Fashion domain is one of the most profitable domains in most e-commerce shops, with shoes being one of the top-selling categories within this domain. When shopping for shoes, one of the most important aspects for buyers is the shoe size. Shoe size charts differ between brands, geographical regions, genders, and age groups. Not providing some of these details, as a buyer or a seller, could lead to a mismatch between query intent and inventory, and to reduced or wrong search results. Furthermore, buying the wrong shoe size is one of the top reasons for product returns, which causes shipping delays and loss in revenue. To address this issue, we propose an approach for shoe size resolution and normalization in search queries and product listings using Knowledge Graphs.
Citations: 0
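The abstract above describes normalizing sizes across brand, region, gender, and age-group size charts, but not the mechanics. Below is a minimal sketch of that idea under my own assumptions: the toy SIZE_CHART mapping, the DEFAULTS fallback, and the normalize_size helper are hypothetical and are not the paper's ontology, data, or API.

```python
from typing import Optional

# Toy "knowledge graph" of size-chart facts:
# (brand, region, gender, local_size) -> canonical EU size.
SIZE_CHART = {
    ("nike", "US", "men", "9"): "42.5",
    ("nike", "UK", "men", "8"): "42.5",
    ("adidas", "US", "women", "7.5"): "38.5",
}

# Fallback facets for under-specified queries (purely illustrative).
DEFAULTS = {"brand": "nike", "region": "US", "gender": "men"}


def normalize_size(query: dict) -> Optional[str]:
    """Resolve a possibly under-specified size mention to a canonical size."""
    facets = {**DEFAULTS, **{k: v for k, v in query.items() if v is not None}}
    key = (facets["brand"], facets["region"], facets["gender"], facets["size"])
    return SIZE_CHART.get(key)


if __name__ == "__main__":
    # A query like "nike men size 9" issued without a region resolves to EU 42.5 via the defaults.
    print(normalize_size({"brand": "nike", "gender": "men", "size": "9"}))
```

In a real system the missing facets would be inferred from user context or from the listing's structured attributes rather than fixed defaults.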
Cross-domain Recommendation via Adversarial Adaptation
Hongzu Su, Yifei Zhang, Xuejiao Yang, Hua Hua, Shuangyang Wang, Jingjing Li
DOI: 10.1145/3511808.3557277 | Published: 2022-10-17
Abstract: Data scarcity, e.g., labeled data being either unavailable or too expensive, is a perpetual challenge for recommendation systems. Cross-domain recommendation leverages the label information in the source domain to facilitate the task in the target domain. However, in many real-world cross-domain recommendation systems, the source domain and the target domain are sampled from different data distributions, which obstructs cross-domain knowledge transfer. In this paper, we propose to explicitly align the data distributions between the source domain and the target domain to alleviate the imbalanced sample distribution and thus address the data scarcity issue in the target domain. Technically, our proposed approach builds an adversarial adaptation (AA) framework to adversarially train the target model together with a pre-trained source model. A domain discriminator plays a two-player minimax game with the target model and guides the target model to learn domain-invariant features that can be transferred across domains. At the same time, the target model is calibrated to learn domain-specific information of the target domain. With such a formulation, the target model not only learns domain-invariant features for knowledge transfer, but also preserves domain-specific information for target recommendation. We apply the proposed method to address the issues of insufficient data and imbalanced sample distribution in real-world Click-Through Rate (CTR) / Conversion Rate (CVR) predictions on a large-scale dataset. Specifically, we formulate our approach as a plug-and-play module to boost existing recommendation systems. Extensive experiments verify that the proposed method is able to significantly improve prediction performance on the target domain. For instance, our method can boost PLE with a performance improvement of 13.88% in terms of Area Under Curve (AUC) compared with single-domain PLE.
Citations: 8
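To make the adversarial adaptation idea concrete, here is a hedged PyTorch sketch of a domain discriminator playing a minimax game with a target encoder against a frozen, pre-trained source encoder. The layer sizes, the BCE discriminator loss, and the alternating update schedule are my assumptions; the paper's full AA framework, including its calibration for domain-specific information and the task loss, is not reproduced here.

```python
import torch
import torch.nn as nn

feat_dim = 32
source_encoder = nn.Sequential(nn.Linear(16, feat_dim), nn.ReLU())  # pre-trained, frozen
target_encoder = nn.Sequential(nn.Linear(16, feat_dim), nn.ReLU())  # trained adversarially
discriminator = nn.Sequential(nn.Linear(feat_dim, 16), nn.ReLU(), nn.Linear(16, 1))

for p in source_encoder.parameters():
    p.requires_grad_(False)

opt_t = torch.optim.Adam(target_encoder.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

source_x = torch.randn(64, 16)  # stand-ins for source/target mini-batches
target_x = torch.randn(64, 16)

for step in range(100):
    # 1) Discriminator: tell source features (label 1) from target features (label 0).
    with torch.no_grad():
        s_feat = source_encoder(source_x)
        t_feat = target_encoder(target_x)
    d_loss = bce(discriminator(s_feat), torch.ones(64, 1)) + \
             bce(discriminator(t_feat), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Target encoder: fool the discriminator so its features look source-like,
    #    i.e., become domain-invariant; a recommendation task loss would be added here.
    t_feat = target_encoder(target_x)
    g_loss = bce(discriminator(t_feat), torch.ones(64, 1))
    opt_t.zero_grad(); g_loss.backward(); opt_t.step()
```

The alternating update is one common way to run the minimax game; gradient-reversal layers are an equivalent single-pass alternative.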
Risk-Aware Bid Optimization for Online Display Advertisement
Rui Fan, E. Delage
DOI: 10.1145/3511808.3557436 | Published: 2022-10-17
Abstract: This research focuses on the bid optimization problem in the real-time bidding setting for online display advertisements, where an advertiser, or the advertiser's agent, has access to the features of the website visitor and the type of ad slots, to decide the optimal bid prices given a predetermined total advertisement budget. We propose a risk-aware data-driven bid optimization model that maximizes the expected profit for the advertiser by exploiting historical data to design upfront a bidding policy, mapping the type of advertisement opportunity to a bid price, and accounting for the risk of violating the budget constraint during a given period of time. After employing a Lagrangian relaxation, we derive a parametrized closed-form expression for the optimal bidding strategy. Using a real-world dataset, we demonstrate that our risk-averse method can effectively control the risk of overspending the budget while achieving a competitive level of profit compared with the risk-neutral model and a state-of-the-art data-driven risk-aware bidding approach.
Citations: 1
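The abstract mentions a Lagrangian relaxation that yields a closed-form bidding strategy. The sketch below illustrates that general mechanism under standard textbook assumptions: value-scaled bid shading of the form value / (1 + lambda), a toy win-probability curve, and a bisection search for the multiplier that meets the budget in expectation. It is not the paper's derived expression, nor does it model the risk-averse objective.

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.gamma(2.0, 1.0, size=10_000)   # predicted impression values (e.g., CTR * item value)
budget = 2_000.0


def win_prob(bid):
    # Toy market model: probability of winning grows with the bid and saturates.
    return 1.0 - np.exp(-bid)


def expected_spend(lam):
    bids = values / (1.0 + lam)              # Lagrangian-relaxed bid shading
    return np.sum(win_prob(bids) * bids)     # expected cost under first-price payment


# Bisection on the multiplier lambda so the expected spend matches the budget.
lo, hi = 0.0, 100.0
for _ in range(60):
    mid = (lo + hi) / 2
    if expected_spend(mid) > budget:
        lo = mid          # spending too much: increase the multiplier, shade bids harder
    else:
        hi = mid

lam = (lo + hi) / 2
print(f"lambda={lam:.3f}, expected spend={expected_spend(lam):.1f}")
```

A risk-aware variant would replace the expectation constraint with a constraint on the probability (or a risk measure) of overspending.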
Accelerating CNN via Dynamic Pattern-based Pruning Network
Gwanghan Lee, Saebyeol Shin, Simon S. Woo
DOI: 10.1145/3511808.3557225 | Published: 2022-10-17
Abstract: Recently, dynamic pruning methods have been actively researched, as they have shown remarkable effectiveness in reducing the computation complexity of deep neural networks. Nevertheless, most dynamic pruning methods fail to achieve actual acceleration due to the extra overheads caused by indexing and weight-copying to implement the dynamic sparse patterns for every input sample. To address this issue, we propose the Dynamic Pattern-based Pruning Network (DPPNet), which preserves the advantages of both static and dynamic networks. First, our method statically prunes the weight kernel into various sparse patterns. Then, the dynamic convolution kernel is generated by aggregating input-dependent attention weights and static kernels. Unlike previous dynamic pruning methods, our novel method dynamically fuses static kernel patterns, enhancing the kernel's representational power without additional overhead. Moreover, our dynamic sparse pattern enables efficient processing using BLAS libraries, accomplishing actual acceleration. We demonstrate the effectiveness of the proposed DPPNet on CIFAR and ImageNet, outperforming state-of-the-art methods and achieving better accuracy with lower computational cost. For example, on ImageNet classification, ResNet34 with the DPP module achieves state-of-the-art performance with a 65.6% FLOPs reduction and a 35.9% increase in inference speed without loss in accuracy. Code is available at https://github.com/lee-gwang/DPPNet.
Citations: 0
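A minimal sketch of the "dynamic fusion of static sparse kernel patterns" idea follows: input-conditioned attention weights mix a few statically pruned kernels into one convolution kernel per sample. The pattern shapes, the pooled attention head, and the grouped-conv trick for per-sample kernels are my assumptions, not the exact DPPNet architecture (see the released code for the real thing).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicPatternConv(nn.Module):
    def __init__(self, in_ch, out_ch, num_patterns=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3) * 0.1)
        # Static binary sparsity patterns over the 3x3 positions (fixed after "pruning").
        masks = torch.zeros(num_patterns, 1, 1, 3, 3)
        for i in range(num_patterns):
            masks[i].view(-1)[torch.randperm(9)[:4]] = 1.0   # keep 4 of 9 taps per pattern
        self.register_buffer("masks", masks)
        self.attn = nn.Linear(in_ch, num_patterns)            # input-dependent pattern weights

    def forward(self, x):
        # One attention vector per sample, from globally pooled input features.
        a = F.softmax(self.attn(x.mean(dim=(2, 3))), dim=-1)               # (B, P)
        # Fuse masked kernels: (B, P) x (P, O, I, 3, 3) -> (B, O, I, 3, 3)
        fused = torch.einsum("bp,poikl->boikl", a, self.masks * self.weight.unsqueeze(0))
        # Grouped-conv trick to apply a different fused kernel to each sample in the batch.
        b, c, h, w = x.shape
        out = F.conv2d(x.reshape(1, b * c, h, w),
                       fused.reshape(-1, c, 3, 3), padding=1, groups=b)
        return out.reshape(b, -1, h, w)


if __name__ == "__main__":
    layer = DynamicPatternConv(8, 16)
    print(layer(torch.randn(2, 8, 32, 32)).shape)   # torch.Size([2, 16, 32, 32])
```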
End-to-end Multi-task Learning Framework for Spatio-Temporal Grounding in Video Corpus
Yingqi Gao, Zhiling Luo, Shiqian Chen, Wei Zhou
DOI: 10.1145/3511808.3557596 | Published: 2022-10-17
Abstract: In this paper, we consider a novel task, Video Corpus Spatio-Temporal Grounding (VCSTG), for material selection and spatio-temporal adaptation in intelligent video editing. Given a text query depicting an object and a corpus of untrimmed and unsegmented videos, VCSTG aims to localize a sequence of spatio-temporal object tubes from the video corpus. Existing methods tackle the VCSTG task in a multi-stage manner, encoding the query and video representations independently for each task, which leads to local optima. In this paper, we propose a novel one-stage multi-task learning framework named MTSTG for the VCSTG task. MTSTG learns unified query and video representations for the video retrieval, temporal grounding, and spatial grounding tasks. Video-level, frame-level, and object-level contrastive learning are introduced to measure the mutual information between the query and the video at different granularities. Comprehensive experiments demonstrate that our newly proposed framework outperforms the state-of-the-art multi-stage methods on the VidSTG dataset.
Citations: 0
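As a hedged illustration of multi-granularity query-video contrastive learning, the sketch below applies an InfoNCE loss between query embeddings and video-level, frame-level, and object-level embeddings. The embedding dimensions, the shared temperature, and the equal loss weighting are assumptions and not MTSTG's exact objective.

```python
import torch
import torch.nn.functional as F


def info_nce(query, positives, temperature=0.07):
    """query: (B, D); positives: (B, D); other batch items act as negatives."""
    q = F.normalize(query, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = q @ p.t() / temperature         # (B, B) similarity matrix
    labels = torch.arange(q.size(0))         # the diagonal pairs are the positives
    return F.cross_entropy(logits, labels)


B, D = 8, 256
query_emb = torch.randn(B, D, requires_grad=True)   # text query embeddings
video_emb = torch.randn(B, D)                        # pooled video-level features
frame_emb = torch.randn(B, D)                        # pooled features of matched frames
object_emb = torch.randn(B, D)                       # pooled features of matched object tubes

loss = (info_nce(query_emb, video_emb)       # retrieval: which video matches the query
        + info_nce(query_emb, frame_emb)     # temporal grounding: which frames match
        + info_nce(query_emb, object_emb))   # spatial grounding: which object tubes match
loss.backward()
print(float(loss))
```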
Cross-Domain Product Search with Knowledge Graph
Rui Zhu, Yiming Zhao, Weixiong Qu, Zhongyi Liu, Chenliang Li
DOI: 10.1145/3511808.3557116 | Published: 2022-10-17
Abstract: The notion of personalization lies at the core of a real-world product search system, whose aim is to understand the user's search intent at a fine-grained level. Existing solutions mainly achieve this purpose through coarse-grained semantic matching between the query and the item's description, or through collective click correlations. Besides the issued query, a user's historical search behaviors cover many of her personalized interests, which is a promising avenue to alleviate the semantic gap between users, items, and queries. However, in a specific domain, a user's search behaviors are generally sparse or even unavailable (i.e., cold-start users). How to exploit the search behaviors from another relevant domain and enable effective fine-grained intent understanding remains largely unexplored for product search. Moreover, the semantic gap could be further aggravated since the properties of an item can evolve over time (e.g., a price adjustment for a mobile phone or a business plan update for a financial item), which is also mainly overlooked by existing solutions. To this end, we are interested in bridging the semantic gap via a marriage between cross-domain transfer learning and knowledge graphs. Specifically, we propose a simple yet effective knowledge graph based information propagation framework for cross-domain product search (named KIPS). In KIPS, we first utilize a shared knowledge graph relevant to both the source and target domains as a semantic backbone to facilitate information propagation across domains. Then, we build individual collaborative knowledge graphs to model both the long-term and short-term interests/characteristics of each user/item. To harness cross-domain interest correlations, two unsupervised strategies are introduced to guide interest learning and alignment: maximum mean discrepancy (MMD) and KG-aware contrastive learning. In detail, MMD is utilized to support a coarse-grained domain alignment over the user's long-term interests across the two domains. Then, the KG-aware contrastive learning process conducts a fine-grained interest alignment based on the shared knowledge graph. Experiments over two real-world large-scale datasets demonstrate the effectiveness of KIPS over a series of strong baselines. Our online A/B test also shows substantial performance gains on multiple metrics. Currently, KIPS has been deployed in AliPay for financial product search. Both the code implementation and the two datasets used for evaluation will be released publicly online.
Citations: 1
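The coarse-grained alignment step named above is maximum mean discrepancy. Below is a minimal sketch of an RBF-kernel MMD loss between source- and target-domain user interest embeddings; the kernel bandwidths and embedding shapes are illustrative assumptions, and the KG-aware contrastive term is not shown.

```python
import torch


def rbf_mmd(x, y, bandwidths=(1.0, 2.0, 4.0)):
    """Maximum Mean Discrepancy with a small mixture of RBF kernels."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return sum(torch.exp(-d2 / (2 * bw ** 2)) for bw in bandwidths)

    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()


source_interest = torch.randn(128, 64)                      # long-term interests, source domain
target_interest = torch.randn(128, 64, requires_grad=True)  # long-term interests, target domain

mmd_loss = rbf_mmd(source_interest, target_interest)
mmd_loss.backward()   # in a full system this term would be added to the retrieval/ranking loss
print(float(mmd_loss))
```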
Demonstrating SubStrat: A Subset-Based Strategy for Faster AutoML on Large Datasets
T. Lazebnik, Amit Somech
DOI: 10.1145/3511808.3557160 | Published: 2022-10-17
Abstract: Automated machine learning (AutoML) frameworks are gaining popularity among data scientists as they dramatically reduce the manual work devoted to the construction of ML pipelines, while obtaining similar and sometimes even better results than manually-built models. Such frameworks intelligently search among millions of possible ML pipeline configurations to finally retrieve an optimal pipeline in terms of predictive accuracy. However, when the training dataset is large, the construction and evaluation of a single ML pipeline take longer, which makes the overall AutoML running time increasingly high. To this end, in this work we demonstrate SubStrat, an AutoML optimization strategy that tackles the dataset size rather than the configuration search space. SubStrat wraps existing AutoML tools, and instead of executing them directly on the large dataset, it uses a genetic-based algorithm to find a small yet representative data subset that preserves the characteristics of the original one. SubStrat then employs the AutoML tool on the generated subset, resulting in an intermediate ML pipeline, which is later refined by executing a restricted, much shorter AutoML process on the large dataset. We demonstrate SubStrat on AutoSklearn, TPOT, and H2O, three popular AutoML frameworks, using several real-life datasets.
Citations: 1
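Here is a schematic sketch of the wrapper flow described above: a tiny genetic search for a representative row subset, followed by AutoML on that subset and a short refinement pass on the full data. The fitness function (column-mean distance), population sizes, and the run_automl / refine_automl stubs are illustrative assumptions, not SubStrat's actual algorithm or API.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 20))        # stand-in for a large training matrix
subset_size, pop_size, generations = 2_000, 20, 30


def fitness(idx):
    # Smaller distance between subset statistics and full-data statistics is better.
    return -np.linalg.norm(X[idx].mean(axis=0) - X.mean(axis=0))


# Genetic search over candidate row subsets.
population = [rng.choice(len(X), subset_size, replace=False) for _ in range(pop_size)]
for _ in range(generations):
    population.sort(key=fitness, reverse=True)
    survivors = population[: pop_size // 2]
    children = []
    for parent in survivors:
        child = parent.copy()
        # Mutation: swap a few rows of the subset for random replacements.
        child[rng.choice(subset_size, 20, replace=False)] = rng.choice(len(X), 20, replace=False)
        children.append(child)
    population = survivors + children

best_subset = max(population, key=fitness)
# pipeline = run_automl(X[best_subset])                    # hypothetical: AutoML on the small subset
# pipeline = refine_automl(pipeline, X, budget="short")    # hypothetical short refinement on full data
```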
Multi-agent Transformer Networks for Multimodal Human Activity Recognition
Jingcheng Li, L. Yao, Binghao Li, Xianzhi Wang, C. Sammut
DOI: 10.1145/3511808.3557402 | Published: 2022-10-17
Abstract: Human activity recognition has been an important and still unresolved challenge for years, with promising benefits in various applications. Existing approaches have made great progress by applying deep-learning and attention-based methods. However, deep learning-based approaches may not fully exploit the features needed to resolve multimodal human activity recognition tasks. Also, the potential of attention-based methods has not been fully explored for better extracting the multimodal spatial-temporal relationships and producing robust results. In this work, we propose the Multi-agent Transformer Network (MATN), a multi-agent attention-based deep learning algorithm, to address the above issues in multimodal human activity recognition. We first design a unified representation learning layer to encode the multimodal data, which preprocesses the data in a generalized and efficient way. Then we develop a multimodal spatial-temporal transformer module that applies the attention mechanism to extract the salient spatial-temporal features. Finally, we use a multi-agent training module to collaboratively select the informative modalities and predict the activity labels. We have conducted extensive experiments to evaluate MATN's performance on two public multimodal human activity recognition datasets. The results show that our model achieves competitive performance compared to the state-of-the-art approaches, and also demonstrates scalability, effectiveness, and robustness.
Citations: 3
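A hedged sketch of transformer-based fusion over modality tokens for activity recognition follows. The token-per-modality design, the model sizes, and the softmax "modality selection" head are my assumptions; MATN's multi-agent training procedure is not reproduced here.

```python
import torch
import torch.nn as nn


class ModalityFusionHAR(nn.Module):
    def __init__(self, modality_dims, d_model=64, num_classes=6):
        super().__init__()
        # One linear encoder per modality (e.g., accelerometer, gyroscope, skeleton).
        self.encoders = nn.ModuleList(nn.Linear(d, d_model) for d in modality_dims)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.select = nn.Linear(d_model, 1)            # scores how informative each modality is
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, modalities):
        # modalities: list of (B, dim_m) pooled per-modality features.
        tokens = torch.stack([enc(m) for enc, m in zip(self.encoders, modalities)], dim=1)
        fused = self.fusion(tokens)                    # (B, M, d_model) cross-modal attention
        weights = torch.softmax(self.select(fused), dim=1)
        pooled = (weights * fused).sum(dim=1)          # attention-weighted sum over modalities
        return self.head(pooled)


if __name__ == "__main__":
    model = ModalityFusionHAR([32, 48, 96])
    logits = model([torch.randn(4, 32), torch.randn(4, 48), torch.randn(4, 96)])
    print(logits.shape)   # torch.Size([4, 6])
```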
Can Adversarial Training benefit Trajectory Representation?: An Investigation on Robustness for Trajectory Similarity Computation
Quanliang Jing, Shuo Liu, Xinxin Fan, Jingwei Li, Di Yao, Baoli Wang, Jingping Bi
DOI: 10.1145/3511808.3557250 | Published: 2022-10-17
Abstract: Trajectory similarity computation, a fundamental problem for various downstream analytic tasks such as trajectory classification and clustering, has been extensively studied in recent years. However, inferring an accurate and robust similarity between two trajectories is difficult due to several trajectory characteristics in practice, e.g., non-uniform sampling rates, nonmalignant fluctuations, and noise points. To circumvent such challenges, in this paper we introduce the adversarial training idea into trajectory representation learning for the first time to enhance robustness and accuracy. Specifically, our proposed method AdvTraj2Vec has two novelties: i) it perturbs the weight parameters of the embedding layers to learn a robust model that infers an accurate pairwise similarity over every two trajectories; and ii) it employs the GAN mechanism to control the perturbation extent so that an appropriate trajectory representation can be learned for the similarity computation. Extensive experiments using two real-world trajectory datasets, Porto and Beijing, validate our proposed AdvTraj2Vec in terms of robustness and accuracy. The multi-facet results show that AdvTraj2Vec significantly outperforms the state-of-the-art methods under different distortions, such as trajectory-point addition, deletion, disturbance, and outlier injection.
Citations: 1
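Below is a minimal sketch of adversarially perturbing an embedding layer's weights (FGM-style) during representation learning, the core idea in point i) above. The epsilon, the cosine-similarity loss, and the single-perturbation scheme are assumptions; the paper additionally uses a GAN-based mechanism to control the perturbation extent, which is not shown.

```python
import torch
import torch.nn as nn

embed = nn.Embedding(10_000, 128)        # grid-cell / trajectory-point embeddings
encoder = nn.GRU(128, 128, batch_first=True)
opt = torch.optim.Adam(list(embed.parameters()) + list(encoder.parameters()), lr=1e-3)
epsilon = 1e-2

traj_a = torch.randint(0, 10_000, (32, 50))   # toy batch of paired similar trajectories
traj_b = torch.randint(0, 10_000, (32, 50))


def similarity_loss():
    _, ha = encoder(embed(traj_a))
    _, hb = encoder(embed(traj_b))
    return (1 - torch.cosine_similarity(ha[-1], hb[-1], dim=-1)).mean()


for step in range(10):
    # Clean pass: the gradient gives the worst-case direction for the embedding weights.
    loss = similarity_loss()
    opt.zero_grad()
    loss.backward()
    grad = embed.weight.grad.detach()
    delta = epsilon * grad / (grad.norm() + 1e-12)

    # Adversarial pass: perturb the embedding weights, accumulate the robust gradient, restore.
    with torch.no_grad():
        embed.weight.add_(delta)
    adv_loss = similarity_loss()
    adv_loss.backward()
    with torch.no_grad():
        embed.weight.sub_(delta)
    opt.step()
```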
Multi-Faceted Hierarchical Multi-Task Learning for Recommender Systems
Junning Liu, Xin-ju Li, Bo An, Zijie Xia, Xu Wang
DOI: 10.1145/3511808.3557140 | Published: 2022-10-17
Abstract: There have been many studies on improving the efficiency of shared learning in Multi-Task Learning (MTL). Previous works focused on the "micro" sharing perspective for a small number of tasks, while in Recommender Systems (RS) and many other AI applications, we often need to model a large number of tasks. For example, when using MTL to model various user behaviors in RS, if we differentiate new users and new items from old ones, the number of tasks will increase exponentially with multidimensional relations. This work proposes a Multi-Faceted Hierarchical MTL model (MFH) that exploits the multidimensional task relations in large-scale MTL with a nested hierarchical tree structure. MFH maximizes shared learning through multiple facets of sharing and improves performance with a heterogeneous task tower design. For the first time, MFH addresses the "macro" perspective of shared learning and defines a "switcher" structure to conceptualize the structures of macro shared learning. We evaluate MFH and SOTA models on a large industrial video platform with 10 billion samples and hundreds of millions of monthly active users. Results show that MFH outperforms SOTA MTL models significantly in both offline and online evaluations across all user groups, and is especially remarkable for new users, with an online increase of 9.1% in app time per user and 1.85% in next-day retention rate. MFH has currently been deployed in WeSee, Tencent News, QQ Little World and Tencent Video, several products of Tencent. MFH is especially beneficial to the cold-start problems in RS, where new users and new items often suffer from a "local overfitting" phenomenon that we first formalize in this paper.
Citations: 2
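As a heavily simplified illustration of nested hierarchical sharing, the sketch below uses a shared root layer, group-level shared layers (e.g., new vs. old users), and per-task towers that consume their group's representation. The two-level tree, the layer sizes, and the task names are my assumptions; MFH's switcher structure and heterogeneous tower design are not reproduced here.

```python
import torch
import torch.nn as nn

# Illustrative two-level task tree: user group -> behaviors modeled for that group.
TASK_TREE = {"new_user": ["click", "follow"], "old_user": ["click", "buy"]}


class HierarchicalMTL(nn.Module):
    def __init__(self, in_dim=64, hidden=64, tree=None):
        super().__init__()
        tree = tree or TASK_TREE
        self.root = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())            # shared by all tasks
        self.groups = nn.ModuleDict({g: nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
                                     for g in tree})                                # shared within a group
        self.towers = nn.ModuleDict({f"{g}/{t}": nn.Linear(hidden, 1)
                                     for g, tasks in tree.items() for t in tasks})  # per-task heads

    def forward(self, x):
        shared = self.root(x)
        group_repr = {g: layer(shared) for g, layer in self.groups.items()}
        return {name: tower(group_repr[name.split("/")[0]])
                for name, tower in self.towers.items()}


if __name__ == "__main__":
    outputs = HierarchicalMTL()(torch.randn(8, 64))
    print(sorted(outputs))   # ['new_user/click', 'new_user/follow', 'old_user/buy', 'old_user/click']
```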