Computer Networks: Latest Publications

Efficient load distribution in heterogeneous vehicular networks using hierarchical controllers
IF 4.4 | CAS Q2 | Computer Science
Computer Networks Pub Date: 2024-09-13 DOI: 10.1016/j.comnet.2024.110805
Abstract: Vehicle movement poses significant challenges in vehicular networks, often resulting in uneven traffic distribution. Fog computing (FC) addresses this by operating at the network edge, handling specific tasks locally instead of relying solely on cloud computing (CC) facilities. In some cases, however, FC needs additional resources and must delegate tasks to CC, which increases delay and response time. This work conducts a thorough examination of previous load balancing (LB) strategies, with a specific focus on software-defined networking (SDN) and machine learning (ML) based LB within the Internet of Vehicles (IoV). The insights derived from this review expedite the development of SDN controller-based LB solutions for IoV networks. The authors propose integrating a local SDN controller (LSDNC) within the FC tier to enable localized LB and address delay concerns, while keeping the relevant state visible to the main SDN controller (MSDNC). They formulate the concept mathematically, simulate the resulting model, and subject it to a comprehensive performance analysis. The simulation results show a significant reduction in delay, 125 ms lower than conventional software-defined vehicular networks (SDVN) when 200 onboard units (OBUs) are used, and the improvement grows as the number of OBUs increases. The model achieves the same maximum throughput as the previous model but delivers faster response times, since decisions are made locally without waiting for the main controller. (An illustrative sketch of the local-versus-main controller decision follows this entry.)
Citations: 0
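As a loose, hypothetical illustration of the hierarchy this abstract describes (a local fog-tier controller that balances load among nearby fog nodes and delegates to the cloud only when local capacity is exhausted, while keeping the main controller informed), here is a minimal Python sketch. The class names, capacities, and reporting callback are invented for illustration and are not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class FogNode:
    name: str
    capacity: int   # maximum concurrent tasks this fog node can serve
    load: int = 0   # tasks currently assigned

    def utilization(self) -> float:
        return self.load / self.capacity

class LocalSDNController:
    """Fog-tier controller: balances load across local fog nodes, escalates only when saturated."""

    def __init__(self, nodes, report_to_main):
        self.nodes = nodes
        self.report_to_main = report_to_main  # callback keeping the main controller (MSDNC) informed

    def assign(self, task_id: str) -> str:
        # Prefer the least-utilized fog node that still has headroom.
        candidates = [n for n in self.nodes if n.load < n.capacity]
        if candidates:
            target = min(candidates, key=FogNode.utilization)
            target.load += 1
            self.report_to_main({"task": task_id, "assigned_to": target.name})
            return f"{task_id} -> fog node {target.name}"
        # No local headroom left: delegate to the cloud through the main controller.
        self.report_to_main({"task": task_id, "assigned_to": "cloud"})
        return f"{task_id} -> escalated to cloud"

# Usage: two fog nodes; the "main controller" here just prints the updates it receives.
lsdnc = LocalSDNController(
    nodes=[FogNode("fog-1", capacity=2), FogNode("fog-2", capacity=2)],
    report_to_main=lambda update: print("MSDNC informed:", update),
)
for task in ["obu-task-1", "obu-task-2", "obu-task-3", "obu-task-4", "obu-task-5"]:
    print(lsdnc.assign(task))
```

The point of the sketch is only the division of labor: assignment decisions are taken locally, and the main controller receives state updates rather than being on the critical path.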
Survey of fault management techniques for edge-enabled distributed metaverse applications
IF 4.4 | CAS Q2 | Computer Science
Computer Networks Pub Date: 2024-09-13 DOI: 10.1016/j.comnet.2024.110803
Abstract: The metaverse, envisioned as a vast, distributed virtual world, relies on edge computing for low-latency data processing. However, ensuring fault tolerance – the system's ability to handle failures – is critical for a seamless user experience. This paper analyzes existing research on fault tolerance in edge computing over the past six years, specifically focusing on its applicability to the metaverse. We identify common fault types like node failures, communication disruptions, and security issues. The analysis then explores various fault management techniques including proactive monitoring, resource optimization, task scheduling, workload migration, redundancy for service continuity, machine learning for predictive maintenance, and consensus algorithms to guarantee data integrity. While these techniques hold promise, adaptations are necessary to address the metaverse's real-time interaction requirements and low-latency constraints. This paper analyzes existing research and identifies key areas for improvement, providing valuable research guidelines and insights to pave the way for the development of fault management techniques specifically tailored to the metaverse, ultimately contributing to a robust and secure virtual world.
Citations: 0
Securing the internet's backbone: A blockchain-based and incentive-driven architecture for DNS cache poisoning defense
IF 4.4 | CAS Q2 | Computer Science
Computer Networks Pub Date: 2024-09-12 DOI: 10.1016/j.comnet.2024.110777
Abstract: The Domain Name System (DNS) is the backbone of the Internet infrastructure, converting human-friendly domain names into machine-processable IP addresses. However, DNS remains vulnerable to various security threats, such as cache poisoning attacks, in which malicious attackers inject false information into DNS resolvers' caches. Although efforts have been made to harden DNS against such vulnerabilities, existing countermeasures often fall short in one or more areas: they may offer limited resistance to collusion attacks, introduce significant overhead, or require complex implementation that hinders widespread adoption. To address these challenges, this paper introduces TI-DNS+, a trusted and incentivized blockchain-based DNS resolution architecture for cache poisoning defense. TI-DNS+ introduces a Verification Cache that exploits the blockchain ledger's immutable nature to detect and correct forged DNS responses. The architecture also incorporates a multi-resolver Query Vote mechanism, enhancing the ledger's credibility by validating each record modification through a stake-weighted algorithm that selects resolvers as validators based on their stake proportion. To promote well-behaved participation, TI-DNS+ also implements a novel stake-based incentive mechanism that optimizes the generation and distribution of stake rewards, ensuring that incentives align with participants' contributions and achieving incentive compatibility, fairness, and efficiency. Moreover, TI-DNS+ is highly practicable, as it requires only resolver-side modifications to the current DNS. Finally, comprehensive prototyping and experimental evaluations demonstrate that the solution effectively mitigates DNS cache poisoning: compared to competitors, it improves attack resistance by 1-3 orders of magnitude while reducing resolution latency by 5% to 68%. (A sketch of stake-weighted validator selection follows this entry.)
Citations: 0
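The abstract mentions a stake-weighted algorithm that selects resolvers as validators according to their stake proportion. The sketch below shows one generic way to do weighted sampling without replacement in Python; it only illustrates that idea, not TI-DNS+'s actual selection or voting logic, and the resolver names and stake values are made up.

```python
import random

def select_validators(stakes, k, seed=None):
    """Pick k distinct resolvers, each drawn with probability proportional to its remaining stake."""
    rng = random.Random(seed)
    remaining = dict(stakes)
    chosen = []
    for _ in range(min(k, len(remaining))):
        total = sum(remaining.values())
        r = rng.uniform(0, total)
        acc = 0.0
        for resolver, stake in remaining.items():
            acc += stake
            if r <= acc:
                chosen.append(resolver)
                del remaining[resolver]
                break
    return chosen

# Hypothetical stake distribution across participating resolvers.
stakes = {"resolver-A": 50.0, "resolver-B": 30.0, "resolver-C": 15.0, "resolver-D": 5.0}
print(select_validators(stakes, k=2, seed=42))
```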
Task offloading strategies for mobile edge computing: A survey
IF 4.4 | CAS Q2 | Computer Science
Computer Networks Pub Date: 2024-09-12 DOI: 10.1016/j.comnet.2024.110791
Abstract: With the wide adoption of 5G technology and the rapid development of 6G technology, a variety of new applications have emerged. A multitude of compute-intensive and time-sensitive applications deployed on terminal equipment have placed increased demands on Internet delay and bandwidth. Mobile Edge Computing (MEC) can effectively mitigate the issues of long transmission times, high energy consumption, and data insecurity. Task offloading, as a key technology within MEC, has become a prominent research focus in this field. This paper presents a comprehensive review of the current research progress in MEC task offloading. Firstly, it introduces the fundamental concepts, application scenarios, and related technologies of MEC. Secondly, it categorizes offloading decisions into five aspects: reducing delay, minimizing energy consumption, balancing energy consumption and delay, enabling high-computing offloading, and addressing different application scenarios. It then critically analyzes and compares existing research efforts in these areas.
Citations: 0
Lifetime maximization of IoT-enabled smart grid applications using error control strategies
IF 4.4 | CAS Q2 | Computer Science
Computer Networks Pub Date: 2024-09-12 DOI: 10.1016/j.comnet.2024.110778
Abstract: Recently, with the advancement of Internet of Things (IoT) technology, IoT-enabled Smart Grid (SG) applications have gained tremendous popularity. Ensuring reliable communication in IoT-based SG applications is challenging due to the harsh channel environment often encountered in the power grid. Error Control (EC) techniques have emerged as a promising solution to enhance reliability; however, ensuring network reliability incurs substantial energy consumption. In this paper, we formulate a Mixed Integer Programming (MIP) model that accounts for the energy dissipation of EC techniques in order to maximize IoT network lifetime while ensuring the desired level of network reliability. We develop meta-heuristic approaches, namely Artificial Bee Colony (ABC) and Particle Swarm Optimization (PSO), to address the high computational complexity of large-scale IoT networks. Performance evaluations indicate that the EC-Node strategy, in which each IoT node employs the most energy-efficient EC technique, extends network lifetime by at least 8.9% compared to the EC-Net strategies, in which all IoT nodes employ the same EC method for communication. Moreover, the PSO algorithm reduces computation time by 77% while exhibiting only a 2.69% decrease in network lifetime compared to the optimal solution. (A generic PSO sketch follows this entry.)
Citations: 0
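ABC and PSO are standard meta-heuristics, so purely to illustrate the kind of search this paper applies to its lifetime-maximization MIP, here is a minimal, generic PSO loop in Python. The toy objective, bounds, and hyperparameters are placeholders and have nothing to do with the paper's actual formulation of EC energy dissipation or network lifetime.

```python
import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimization minimizing `objective` over a box-constrained space."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # each particle's best position so far
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest_pos, gbest_val = pbest[g][:], pbest_val[g]  # global best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest_pos[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest_pos, gbest_val = pos[i][:], val
    return gbest_pos, gbest_val

# Toy objective standing in for the (negated) network-lifetime objective under an EC strategy.
best, value = pso(lambda x: sum(xi ** 2 for xi in x), dim=3)
print(best, value)
```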
Joint path planning and power allocation of a cellular-connected UAV using apprenticeship learning via deep inverse reinforcement learning
IF 4.4 | CAS Q2 | Computer Science
Computer Networks Pub Date: 2024-09-12 DOI: 10.1016/j.comnet.2024.110789
Abstract: This paper investigates an interference-aware joint path planning and power allocation mechanism for a cellular-connected unmanned aerial vehicle (UAV) in a sparse suburban environment. The UAV's goal is to fly from an initial point to a destination point by moving along the cells while guaranteeing the required quality of service (QoS). In particular, the UAV aims to maximize its uplink throughput and minimize interference to the ground user equipment (UEs) connected to neighboring cellular base stations (BSs), considering both the shortest path and limitations on flight resources. Expert knowledge is used to experience the scenario and define the desired behavior for training the agent (i.e., the UAV). To solve the problem, an apprenticeship learning method is utilized via inverse reinforcement learning (IRL) based on both Q-learning and deep reinforcement learning (DRL). The performance of this method is compared to a learning-from-demonstration technique called behavioral cloning (BC), which uses a supervised learning approach. Simulation and numerical results show that the proposed approach can achieve expert-level performance. We also demonstrate that, unlike the BC technique, the performance of our proposed approach does not degrade in unseen situations. (A loose reinforcement-learning sketch follows this entry.)
Citations: 0
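The paper's method (apprenticeship learning via deep IRL) is far richer than anything that fits here, so the sketch below only illustrates the underlying reinforcement-learning framing: a tabular Q-learning agent moves across a toy grid of cells toward a destination, where each cell's hypothetical reward stands in for uplink throughput minus an interference penalty. The grid size, rewards, and hyperparameters are all invented.

```python
import random

# Toy 4x4 grid of cells; values are hypothetical (throughput - interference) rewards.
REWARD = [
    [0.1, 0.2, -0.5, 0.3],
    [0.2, -0.4, 0.4, 0.5],
    [0.3, 0.4, 0.5, -0.6],
    [-0.2, 0.5, 0.6, 1.0],   # bottom-right cell is the destination
]
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
GOAL = (3, 3)

def step(state, action):
    r = min(3, max(0, state[0] + action[0]))
    c = min(3, max(0, state[1] + action[1]))
    nxt = (r, c)
    return nxt, REWARD[r][c], nxt == GOAL

Q = {((r, c), a): 0.0 for r in range(4) for c in range(4) for a in range(4)}
alpha, gamma, eps = 0.1, 0.95, 0.2

for episode in range(2000):
    s, done = (0, 0), False
    while not done:
        a = random.randrange(4) if random.random() < eps else max(range(4), key=lambda x: Q[(s, x)])
        s2, reward, done = step(s, ACTIONS[a])
        best_next = max(Q[(s2, x)] for x in range(4))
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# Greedy path from the start cell after training.
s, path = (0, 0), [(0, 0)]
while s != GOAL and len(path) < 20:
    a = max(range(4), key=lambda x: Q[(s, x)])
    s, _, _ = step(s, ACTIONS[a])
    path.append(s)
print(path)
```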
Data signals for deep learning applications in Terahertz communications
IF 4.4 | CAS Q2 | Computer Science
Computer Networks Pub Date: 2024-09-12 DOI: 10.1016/j.comnet.2024.110800
Abstract: The Terahertz (THz) band (0.1–10 THz) is projected to enable broadband wireless communications of the future, and many envision deep learning as a solution to improve the performance of THz communication systems and networks. However, there are few available datasets of true THz signals that could enable testing and training of deep learning algorithms for the research community. In this paper, we provide an extensive dataset of 120,000 data frames for the research community. All signals were transmitted at 165 GHz but with varying bandwidths (5 GHz, 10 GHz, and 20 GHz), modulations (4PSK, 8PSK, 16QAM, and 64QAM), and transmit amplitudes (75 mV and 600 mV), resulting in twenty-four distinct bandwidth-modulation-power combinations each with 5,000 unique captures. The signals were captured after down conversion at an intermediate frequency of 10 GHz. This dataset enables the research community to experimentally explore solutions relating to ultrabroadband deep and machine learning applications. (A sketch enumerating these combinations follows this entry.)
Citations: 0
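As a small sketch, under the assumption that one simply wants to enumerate the dataset's twenty-four bandwidth/modulation/amplitude combinations (3 bandwidths × 4 modulations × 2 amplitudes, each with 5,000 captures, giving the 120,000 frames reported), the code below builds an integer label map of the kind a modulation-classification experiment might use. The abstract does not specify a file layout or loading API, so none is assumed here.

```python
from itertools import product

BANDWIDTHS_GHZ = [5, 10, 20]
MODULATIONS = ["4PSK", "8PSK", "16QAM", "64QAM"]
AMPLITUDES_MV = [75, 600]
FRAMES_PER_COMBO = 5000

combos = list(product(BANDWIDTHS_GHZ, MODULATIONS, AMPLITUDES_MV))
assert len(combos) == 24
assert len(combos) * FRAMES_PER_COMBO == 120_000  # matches the dataset size in the abstract

# Map each combination to a class label, e.g. for a classification experiment on the captures.
label_of = {combo: idx for idx, combo in enumerate(combos)}
print(label_of[(20, "64QAM", 600)])
```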
A fast malware detection model based on heterogeneous graph similarity search
IF 4.4 | CAS Q2 | Computer Science
Computer Networks Pub Date: 2024-09-12 DOI: 10.1016/j.comnet.2024.110799
Abstract: The Android operating system has long been vulnerable to malicious software. Existing malware detection methods often fail to identify ever-evolving malware and are slow in detection. To address this, we propose a new model for rapid Android malware detection, which constructs various Android entities and relationships into a heterogeneous graph. Firstly, to address the semantic fusion problem in high-order heterogeneous graphs that arises with the increase in the depth of the heterogeneous graph model, we introduce adaptive weights during node aggregation to absorb the local semantics of nodes. This allows more attention to be paid to the feature information of the node itself during the semantic aggregation stage, thereby avoiding semantic confusion. Secondly, to mitigate the high time costs associated with detecting unknown applications, we employ an incremental similarity search model. This model quickly measures the similarity between unknown applications and those within the sample, aggregating the weights of nodes based on similarity scores and semantic attention coefficients, thereby enabling rapid detection. Lastly, considering the high time and space complexity of calculating node similarity scores on large graphs, we design a NeuSim model based on an encoder–decoder structure. The encoder module embeds each path instance as a vector, while the decoder converts the vector into a scalar similarity score, significantly reducing the complexity of the calculation. Experiments demonstrate that this model can not only rapidly detect malware but also capture high-level semantic relationships of application software in complex malware networks by hierarchically aggregating information from neighbors and meta-paths of different orders. Moreover, this model achieved an AUC of 0.9356 and an F1 score of 0.9355, surpassing existing malware detection algorithms. Particularly in the detection of unknown application software, the NeuSim model can double the detection speed, with an average detection time of 105 ms. (A toy encoder-decoder sketch follows this entry.)
Citations: 0
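Purely to illustrate the encoder-decoder split the abstract describes (an encoder that embeds each meta-path instance as a vector, and a decoder that reduces a pair of embeddings to a scalar similarity score), here is a toy Python sketch using averaged random token embeddings and cosine similarity. It is not the NeuSim architecture; the vocabulary, embeddings, and example paths are invented.

```python
import math
import random

random.seed(0)
DIM = 16
# Hypothetical vocabulary of node/edge types along a meta-path in the heterogeneous graph.
VOCAB = ["app", "api", "permission", "library", "calls", "requests", "uses"]
EMB = {tok: [random.gauss(0, 1) for _ in range(DIM)] for tok in VOCAB}

def encode(path_instance):
    """Encoder stand-in: average the embeddings of the tokens along the path instance."""
    vecs = [EMB[tok] for tok in path_instance]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def decode(v1, v2):
    """Decoder stand-in: cosine similarity as the scalar similarity score."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return dot / norm if norm else 0.0

known_malware_path = ["app", "requests", "permission", "calls", "api"]
unknown_app_path = ["app", "requests", "permission", "uses", "library"]
print(decode(encode(known_malware_path), encode(unknown_app_path)))
```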
Truthful mechanism for joint resource allocation and task offloading in mobile edge computing
IF 4.4 | CAS Q2 | Computer Science
Computer Networks Pub Date: 2024-09-11 DOI: 10.1016/j.comnet.2024.110796
Abstract: In mobile edge computing (MEC), delay-sensitive tasks can be processed and analyzed in real time by offloading them to MEC servers. In an auction-based model, the objective is to maximize social welfare. However, the varying distances between mobile devices and access points lead to differences in energy consumption, and existing works have not considered maximizing social welfare and minimizing energy consumption jointly. Motivated by this, we address the problem of joint resource allocation and task offloading in MEC, with heterogeneous MEC servers providing multiple types of resources for mobile devices (MDs) to perform tasks remotely. We split the problem into two sub-problems: winner determination and offloading decision. The first sub-problem determines which bidders win the right to offload tasks so as to maximize social welfare; the second determines how to distribute the offloaded tasks among the MEC servers so as to minimize energy consumption. For the winner determination problem, we propose a truthful algorithm that drives the system into equilibrium and derive its approximation ratios for single and multiple MEC servers. For the offloading decision problem, we propose an approximation algorithm and show that it is a polynomial-time approximation scheme for a single MEC server. Experimental results show that the proposed mechanism finds high-quality solutions in changing mobile environments. (A greedy sketch of the winner-determination step follows this entry.)
Citations: 0
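To make the winner-determination sub-problem concrete, the sketch below is a standard greedy heuristic: bids are ranked by value per unit of requested resource and accepted while a single server's capacity remains. It only illustrates the flavor of the sub-problem; it is not the paper's truthful mechanism, it says nothing about payments, and the bids and capacity are made up.

```python
def winner_determination(bids, capacity):
    """bids: list of (device_id, bid_value, resource_demand), demands assumed positive.
    Greedy by value density under a single capacity constraint."""
    ranked = sorted(bids, key=lambda b: b[1] / b[2], reverse=True)
    winners, used, welfare = [], 0, 0.0
    for device, value, demand in ranked:
        if used + demand <= capacity:
            winners.append(device)
            used += demand
            welfare += value
    return winners, welfare

bids = [("md1", 10.0, 4), ("md2", 6.0, 2), ("md3", 9.0, 5), ("md4", 3.0, 1)]
print(winner_determination(bids, capacity=8))
```

In an actual auction, an allocation rule of this kind would still need to be paired with a carefully chosen payment rule (for example, critical-value payments) to obtain truthfulness, which is the part the paper focuses on.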
An Adversarial Machine Learning Based Approach for Privacy Preserving Face Recognition in Distributed Smart City Surveillance
IF 4.4 | CAS Q2 | Computer Science
Computer Networks Pub Date: 2024-09-11 DOI: 10.1016/j.comnet.2024.110798
Abstract: Smart cities rely heavily on surveillance cameras for urban management and security. However, the extensive use of these cameras also raises significant concerns regarding data privacy. Unauthorized access to facial data captured by these cameras, and the potential misuse of that data, poses serious threats to individuals' privacy. Current privacy-preservation solutions often compromise data usability through noise-based approaches and rely on vulnerable centralized data-handling settings. To address these privacy challenges, we propose a novel approach that combines Adversarial Machine Learning (AML) with Federated Learning (FL). Our approach uses a noise generator that perturbs surveillance data at the source, before it leaves the surveillance cameras. By training the Federated Learning model exclusively on these perturbed samples, we ensure that sensitive biometric features are never shared with centralized servers; such data remains on local devices (e.g., cameras), so data privacy is maintained. We performed a thorough real-world evaluation of the proposed method and achieved an accuracy of around 99.95% in standard machine learning settings. In distributed settings, we achieved an accuracy of around 96.24% using federated learning, demonstrating the practicality and effectiveness of the proposed solution. (A minimal sketch of these two ingredients follows this entry.)
Citations: 0
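Here is a minimal sketch of the two ingredients the abstract combines: perturbing each frame on the camera before it leaves the device, and aggregating only model weights at the server (FedAvg-style), so raw biometric data never travels to a central point. The noise model, the "training" update, the tensor shapes, and all constants are placeholders, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_at_source(frame: np.ndarray, epsilon: float = 0.1) -> np.ndarray:
    """Camera-side perturbation stand-in: add bounded noise before the frame leaves the device."""
    noise = rng.uniform(-epsilon, epsilon, size=frame.shape)
    return np.clip(frame + noise, 0.0, 1.0)

def local_update(weights: np.ndarray, frames: list) -> np.ndarray:
    """Training stand-in: nudge the weights toward a statistic of the (perturbed) local data."""
    local_mean = np.mean([f.mean() for f in frames])
    return weights + 0.01 * (local_mean - weights)

def fedavg(client_weights: list) -> np.ndarray:
    """Server aggregates model weights only; raw frames never leave the cameras."""
    return np.mean(client_weights, axis=0)

global_w = np.zeros(8)
for round_idx in range(3):
    updates = []
    for camera in range(4):
        frames = [perturb_at_source(rng.random((16, 16))) for _ in range(5)]
        updates.append(local_update(global_w, frames))
    global_w = fedavg(updates)
print(global_w[:3])
```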