IEEE Transactions on Green Communications and Networking: Latest Articles

RADAR: Robust DRL-Based Resource Allocation Against Adversarial Attacks in Intelligent O-RAN
IF 6.7 | CAS Zone 2 | Computer Science
IEEE Transactions on Green Communications and Networking. Pub Date: 2025-04-21. DOI: 10.1109/TGCN.2025.3562895
Yared Abera Ergu;Van-Linh Nguyen
{"title":"RADAR: Robust DRL-Based Resource Allocation Against Adversarial Attacks in Intelligent O-RAN","authors":"Yared Abera Ergu;Van-Linh Nguyen","doi":"10.1109/TGCN.2025.3562895","DOIUrl":"https://doi.org/10.1109/TGCN.2025.3562895","url":null,"abstract":"The advent of open radio access networks (O-RAN) has introduced intelligent, flexible, and multi-vendor network ecosystems. While O-RAN’s open interfaces and artificial intelligence (AI)-driven solutions offer improved performance, energy efficiency, and resource minimization for green networking, they also expose the system to new security vulnerabilities, particularly adversarial attacks. This paper presents a robust defense approach, termed RADAR, designed to secure deep reinforcement learning (DRL)-powered resource allocation mechanisms in O-RAN. RADAR is a multi-faceted defense framework that integrates adversarial input sanitization, proactive adversarial training, and adapted defensive distillation to counter policy infiltration attacks, gradient-based deceptive loss maximization, and signal perturbation injections into the O-CU via the O-DU in O-RAN. This study evaluates the effectiveness of RADAR not only against a novel attack variant—policy infiltration attack (PIA), which manipulates environmental parameters to disrupt allocation decisions, but also against well-known adversarial techniques such as the fast gradient sign method (FGSM) and projected gradient descent (PGD). Experimental results demonstrate that RADAR achieves significant recovery in user data rates across three network slices: 73.33% for eMBB, 64.71% for mMTC and 52.94% for uRLLC, outperforming the existing standalone approach. 
The findings highlight RADAR’s effectiveness in mitigating adversarial attack techniques, underscoring its potential to secure AI-driven core functions in intelligent O-RAN.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"9 4","pages":"2305-2318"},"PeriodicalIF":6.7,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145646093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
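FGSM, the best-known of the gradient-based attacks this abstract mentions, perturbs an input by a small step in the sign of the loss gradient. A minimal NumPy sketch against a toy logistic-regression input; this is the generic textbook attack, not the RADAR framework, and all names and parameter values below are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Perturb x by eps * sign(grad of binary cross-entropy loss w.r.t. x)."""
    p = sigmoid(w @ x + b)   # model confidence for class 1
    grad_x = (p - y) * w     # analytic BCE gradient w.r.t. the input
    return x + eps * np.sign(grad_x)

# Hypothetical model weights and input.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, 0.4, -0.3])
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=0.1)
# Each coordinate moves by eps in the direction that increases the loss,
# so the model's confidence in the true class drops.
```

Input sanitization and adversarial training, two of RADAR's components, both target exactly this kind of bounded-norm perturbation.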
Adaptive ML MISO Receiver: Conditional Fine-Tuning Without CSI
IF 6.7 | CAS Zone 2 | Computer Science
IEEE Transactions on Green Communications and Networking. Pub Date: 2025-04-15. DOI: 10.1109/TGCN.2025.3560652
Arhum Ahmad;Satyam Agarwal
{"title":"Adaptive ML MISO Receiver: Conditional Fine-Tuning Without CSI","authors":"Arhum Ahmad;Satyam Agarwal","doi":"10.1109/TGCN.2025.3560652","DOIUrl":"https://doi.org/10.1109/TGCN.2025.3560652","url":null,"abstract":"This paper introduces a novel machine learning-based receiver for symbol detection in a Multiple-Input Single-Output system, optimized for next-generation vehicular networks. The receiver operates without channel state information (CSI), leveraging an innovative feature selection strategy that enhances its adaptability to dynamic, real-world communication environments. Key components include Neural Adaptive Symbol Detection (NASD), which provides an initial detection framework, and the Context-Enhanced Symbol Detector (CESD), a fine-tuning mechanism that dynamically adjusts to varying signal conditions. These innovations equip the receiver with robustness against unpredictable vehicular communication challenges, such as rapid movement, Doppler effects, and multipath fading. The system is evaluated using testbed featuring a custom-built UAV to emulate complex vehicle dynamics. This setup enables rigorous testing under a variety of conditions, including static, maneuvering, and hovering scenarios. Experimental results demonstrate the receiver’s ability to sustain low bit error rates across a wide range of signal-to-noise ratios, significantly outperforming non-adaptive methods, especially in dynamic environments. 
The combination of NASD and CESD facilitates real-time adaptation without the need for CSI or extensive pre-training, establishing this approach as an efficient, low-complexity receiver solution for modern vehicular communication systems.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"9 4","pages":"2292-2304"},"PeriodicalIF":6.7,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145646087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
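For context on what a CSI-free detector must cope with: the classical baseline is nearest-constellation-point detection after a blind amplitude normalization. This NumPy sketch shows only that baseline for QPSK (the paper's NASD/CESD components are learned networks, not this rule); the gain of 2.0 and noise level are invented for the demo:

```python
import numpy as np

# Unit-energy QPSK constellation.
QPSK = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)

def detect(received):
    """Map each received sample to the index of the nearest QPSK symbol,
    after a blind rescale by the mean magnitude (no CSI used)."""
    r = received / np.mean(np.abs(received))
    return np.argmin(np.abs(r[:, None] - QPSK[None, :]), axis=1)

rng = np.random.default_rng(0)
tx = rng.integers(0, 4, size=100)                      # transmitted symbol indices
# Unknown channel gain of 2.0 plus light complex Gaussian noise.
rx = 2.0 * QPSK[tx] + 0.05 * (rng.normal(size=100) + 1j * rng.normal(size=100))
ser = np.mean(detect(rx) != tx)                        # symbol error rate
```

The hard part, which the paper's fine-tuning addresses, is that under Doppler and multipath this simple normalization breaks down and the decision regions must adapt on the fly.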
Joint Energy and Computation Workload Management for Geo-Distributed Data Centers
IF 6.7 | CAS Zone 2 | Computer Science
IEEE Transactions on Green Communications and Networking. Pub Date: 2025-04-14. DOI: 10.1109/TGCN.2025.3559505
Ran Wang;Rixin Wu;Linfeng Liu;Changyan Yi;Kun Zhu;Ping Wang;Dusit Niyato
{"title":"Joint Energy and Computation Workload Management for Geo-Distributed Data Centers","authors":"Ran Wang;Rixin Wu;Linfeng Liu;Changyan Yi;Kun Zhu;Ping Wang;Dusit Niyato","doi":"10.1109/TGCN.2025.3559505","DOIUrl":"https://doi.org/10.1109/TGCN.2025.3559505","url":null,"abstract":"The increasing demands of data computation and storage for cloud-based services motivate the development and deployment of large-scale data centers (DCs). The energy demand of these devices is rising rapidly and becoming a noticeable challenge for current power networks. The smart grid (SG) is deemed as the future power system paradigm enabling more affordable and sustainable energy supply, which can effectively relieve the load pressure from DCs. Moreover, with growing concerns regarding harmful emissions due to combustion of fossil fuels, the exploitation of renewable energy sources (RES) has attracted extensive attention, which can benefit SGs and DCs, as well as society at large. However, the geo-distributed property of DCs and SGs and the uncertain nature of RES production pose severe challenges to the optimal management of computation and energy resources in such a tripartite coupling system. Focusing on these issues, a joint energy and computation workload management framework is proposed for enabling a sustainable DC paradigm with distributed RES. Specifically, a three-layer game is formulated to model the iterations among entities including the energy market, data center operators (DCOs), and SGs. The market includes a certain amount of RES that must be dispatched. The SG offers the DCO an electricity selling price while simultaneously importing RES from the market at a buying price in order to maximize the benefit. The DCO allocates the workload to different DCs, aiming to minimize the costs of energy consumption and carbon emissions. The interactive processes between different entities are further decomposed into two coupling Stackelberg games. 
We obtain the equilibrium state of the game and prove its uniqueness and optimality. Simulation experiments are conducted to evaluate the performance of the joint energy and computation workload management scheme and show its superiority over counterparts in utilizing renewable energy and reducing emissions. Furthermore, the impacts of various parameters on the utility of the system are investigated carefully. The proposed approach and obtained results provide useful insights for helping the DCO developing rational management strategies.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"9 4","pages":"2115-2128"},"PeriodicalIF":6.7,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145646081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
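The Stackelberg structure described above (a price-setting leader anticipating a cost-minimizing follower) can be illustrated with a deliberately tiny pricing game. This is a generic textbook example, not the paper's three-layer model; the linear demand curve and all constants are made up:

```python
import numpy as np

# Toy Stackelberg game: leader (grid) picks a selling price p, follower
# (data-center operator) responds with demand d(p) = max(0, a - b*p).
a, b, c = 10.0, 2.0, 1.0         # demand intercept/slope; grid's buying price

def follower_demand(p):
    """Follower's best response to the posted price."""
    return max(0.0, a - b * p)

def leader_profit(p):
    """Leader's profit anticipating the follower's best response."""
    return (p - c) * follower_demand(p)

# Leader solves its bilevel problem by grid search over candidate prices.
prices = np.linspace(0.0, 5.0, 5001)
p_star = prices[np.argmax([leader_profit(p) for p in prices])]
# Analytically the equilibrium price is (a + b*c) / (2*b) = 3.0 here.
```

The paper's version couples two such games (market-SG and SG-DCO) and proves the resulting equilibrium unique and optimal.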
A Two-Stage Green Energy Dispatch Scheme for Microgrid Using Deep Reinforcement Learning
IF 6.7 | CAS Zone 2 | Computer Science
IEEE Transactions on Green Communications and Networking. Pub Date: 2025-04-11. DOI: 10.1109/TGCN.2025.3560143
Rui Luo;Weidong Gao;Xu Zhao;Kaisa Zhang;Xiangyu Chen;Yuan Guan;Siqi Liu;Jingwen Liu
{"title":"A Two-Stage Green Energy Dispatch Scheme for Microgrid Using Deep Reinforcement Learning","authors":"Rui Luo;Weidong Gao;Xu Zhao;Kaisa Zhang;Xiangyu Chen;Yuan Guan;Siqi Liu;Jingwen Liu","doi":"10.1109/TGCN.2025.3560143","DOIUrl":"https://doi.org/10.1109/TGCN.2025.3560143","url":null,"abstract":"The integration of renewable energy resources in microgrid productively contributes to reducing the emission of greenhouse gases, but inherently increases the complexity of energy management. Capable of rapid-response characteristic, the deep reinforcement learning (DRL) algorithm could be applied to provide real-time energy scheduling. However, due to the limitation of restricted training data and ignoring of the impact on the environment, most DRL-based schemes fail to get comprehensive solutions. To overcome this, we proposed a two-stage scheme, namely GAN-DDPG energy dispatch scheme, which utilizes the benefits of both the generative adversarial networks (GAN) and an enhanced deep deterministic policy gradient algorithm, namely CE-DDPG algorithm. In the first stage, a trained GAN is used to generate sufficient training data for the training process of the CE-DDPG algorithm. Then, the microgrid controller could invoke the trained CE-DDPG algorithm to obtain a real-time scheduling with efficient carbon emissions reductions. Different from the traditional DRL algorithm, a novel reward function is proposed in the CE-DDPG algorithm, promoting the scheduling of the energy storage system (ESS) with more correct actions. 
Numerical simulations demonstrated that the proposed GAN-DDPG scheme could reduce the cumulative cost up to 35% with less carbon emissions of 23% compared to existing schemes.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"9 4","pages":"2279-2291"},"PeriodicalIF":6.7,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145646091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
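The abstract credits much of CE-DDPG's behavior to a reshaped reward. The paper does not publish its exact form here, so the following is only a generic sketch of carbon-aware reward shaping with an ESS validity penalty; every weight and name is an assumption:

```python
def reward(energy_cost, carbon_kg, ess_action_valid,
           w_cost=1.0, w_carbon=0.5, penalty=10.0):
    """Hypothetical shaped reward: negative weighted cost and emissions,
    minus a fixed penalty when the ESS action is infeasible (e.g., the
    agent tries to charge an already-full battery)."""
    r = -(w_cost * energy_cost + w_carbon * carbon_kg)
    if not ess_action_valid:
        r -= penalty
    return r
```

Penalizing infeasible storage actions directly in the reward is a common way to steer a DDPG-style agent toward "more correct actions", in the abstract's phrasing, without hard constraints.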
IEEE Communications Society Information
IF 5.3 | CAS Zone 2 | Computer Science
IEEE Transactions on Green Communications and Networking. Pub Date: 2025-03-21. DOI: 10.1109/TGCN.2025.3570064
Vol. 9, no. 2, p. C3. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11008660
Citations: 0
IEEE Transactions on Green Communications and Networking
IF 5.3 | CAS Zone 2 | Computer Science
IEEE Transactions on Green Communications and Networking. Pub Date: 2025-03-21. DOI: 10.1109/TGCN.2025.3570062
Vol. 9, no. 2, p. C2. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11008659
Citations: 0
A Cross Q-Learning Assisted Resource Allocation for User-Centric Optical Wireless Communication Networks
IF 6.7 | CAS Zone 2 | Computer Science
IEEE Transactions on Green Communications and Networking. Pub Date: 2025-03-20. DOI: 10.1109/TGCN.2025.3553202
Simeng Feng;Nian Li;Kai Liu;Baolong Li;Chao Dong;Qihui Wu
{"title":"A Cross Q-Learning Assisted Resource Allocation for User-Centric Optical Wireless Communication Networks","authors":"Simeng Feng;Nian Li;Kai Liu;Baolong Li;Chao Dong;Qihui Wu","doi":"10.1109/TGCN.2025.3553202","DOIUrl":"https://doi.org/10.1109/TGCN.2025.3553202","url":null,"abstract":"The user-centric (UC) association in optical wireless communication (OWC) forms amorphous cells (A-Cells) by considering the dynamic distribution and load demand of user equipments (UEs). This philosophy offers advantages over the conventional network-centric (NC) association that purely relies on a pre-defined and fixed network configuration, in terms of alleviating undesired inter-cell interference (ICI) and achieving superior system performance. However, constructing the optimal A-Cells for a given OWC network, including determining the appropriate number of A-Cells associated to their contained UEs, is deeply integrated with the UEs’ distribution and transmission conditions. To address the intractable issue, in this paper, we conceive an adaptive UC-OWC network that relies on a feedback-guided iterative framework, which is capable of jointly optimizing A-Cells formation, modulation-mode assignment and power allocation strategies. For the sake of attaining the optimized throughput of this adaptive network, we initialize the UC association by the designed k-means based genetic algorithm (KGA), which can then be iteratively adjusted based on the throughput feedback obtained via our proposed multi-user cross Q-learning (MUCQ) resource allocation algorithm. 
Simulation results indicate that, compared to conventional counterparts, our adaptive UC-OWC network is able to significantly improve throughput performance and reduce outage probability.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"9 4","pages":"2264-2278"},"PeriodicalIF":6.7,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145646089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
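The building block behind any cross Q-learning variant is the tabular Q-learning Bellman backup. A minimal NumPy sketch of that single update (the multi-user cross-agent coordination that MUCQ adds is omitted, and the state/action semantics in the comments are invented for illustration):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One Q-learning backup:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

# Hypothetical toy problem: 3 states (e.g., cell-load levels) and
# 2 actions (e.g., transmit-power levels).
Q = np.zeros((3, 2))
Q = q_update(Q, s=0, a=1, r=1.0, s_next=2)
# Q[0, 1] becomes 0.1 * (1.0 + 0.9 * 0 - 0) = 0.1
```

Cross Q-learning variants typically decouple action selection from value estimation across agents or estimators to reduce the overestimation bias of the plain max operator above.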
Elastic Scaling of Resources for Energy-Efficient Container Cloud Using Reinforcement Learning
IF 6.7 | CAS Zone 2 | Computer Science
IEEE Transactions on Green Communications and Networking. Pub Date: 2025-03-18. DOI: 10.1109/TGCN.2025.3552594
Yanyu Shen;Chonglin Gu;Xin Chen;Xiaoyu Gao;Zaixing Sun;Hejiao Huang
{"title":"Elastic Scaling of Resources for Energy-Efficient Container Cloud Using Reinforcement Learning","authors":"Yanyu Shen;Chonglin Gu;Xin Chen;Xiaoyu Gao;Zaixing Sun;Hejiao Huang","doi":"10.1109/TGCN.2025.3552594","DOIUrl":"https://doi.org/10.1109/TGCN.2025.3552594","url":null,"abstract":"In this paper, we aim to save the total energy consumption of servers through elastic scaling of CPU resources in container cloud. To be practical, we propose an online scheduling method, which consists of three parts: container placement, vertical scaling and migration. 1) For container placement, we design an algorithm based on dynamic threshold, resource balancing and delayed running. When there are PMs (Physical Machines) turned on, the CPU threshold increases so that the containers can be placed onto fewest possible PMs. To make full use of multi-dimensional resources of PM, we put forward a resource balancing strategy. Since the number of CPU cores can be scaled dynamically in containers’ run time, the start time of containers can be delayed without violating deadlines. 2) For vertical scaling, a collaborative multi-agent reinforcement learning (MARL) algorithm is proposed to adjust the container’s CPU, so that the containers on the same PM can finish simultaneously if possible. Then, the PM can be turned off to save energy. 3) To further reduce total energy consumption, we consider migrating the containers from underloaded PMs and overloaded PMs. 
Experiment results show the superior performance of our method to that of the state-of-the-art.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"9 4","pages":"2249-2263"},"PeriodicalIF":6.7,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145646085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
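The placement idea in step 1, filling powered-on PMs up to a CPU threshold before opening a new machine, is a thresholded first-fit heuristic. A sketch under that reading; the paper's full algorithm also handles multi-dimensional balancing and delayed starts, and the capacity/threshold values here are invented:

```python
def place(containers, pm_capacity=16, threshold=0.9):
    """Greedy first-fit placement of CPU demands onto PMs; a new PM is
    powered on only when no running PM can host the container without
    exceeding threshold * capacity."""
    pms = []                       # used CPU per powered-on PM
    assignment = []                # PM index chosen for each container
    for cpu in containers:
        for i, used in enumerate(pms):
            if used + cpu <= threshold * pm_capacity:
                pms[i] = used + cpu
                assignment.append(i)
                break
        else:
            pms.append(cpu)        # no fit: power on a new PM
            assignment.append(len(pms) - 1)
    return pms, assignment

pms, assignment = place([8, 6, 4, 2, 10])
# Consolidates onto 3 PMs with loads [14, 6, 10].
```

Raising the threshold while machines are already on, as the abstract describes, biases the heuristic further toward consolidation, which is what lets idle PMs be switched off.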
Efficient Deep Reinforcement Learning-Based Resource Allocation for Cloud Native Wireless Network
IF 6.7 | CAS Zone 2 | Computer Science
IEEE Transactions on Green Communications and Networking. Pub Date: 2025-03-12. DOI: 10.1109/TGCN.2025.3550599
Lin Wang;Jiasheng Wu;Jingjing Zhang;Yue Gao
{"title":"Efficient Deep Reinforcement Learning-Based Resource Allocation for Cloud Native Wireless Network","authors":"Lin Wang;Jiasheng Wu;Jingjing Zhang;Yue Gao","doi":"10.1109/TGCN.2025.3550599","DOIUrl":"https://doi.org/10.1109/TGCN.2025.3550599","url":null,"abstract":"Cloud native technology has revolutionized 5G beyond and 6G communication networks, offering unprecedented levels of operational automation, flexibility, and adaptability. However, the vast array of cloud native services and applications presents a new challenge in resource allocation for dynamic cloud computing environments. To tackle this challenge, we investigate a cloud native wireless architecture that employs container-based virtualization to enable flexible service deployment. We then study two representative use cases: network slicing and multi-access edge computing. To improve resource allocation and maximize utilization efficiency in these scenarios, we propose two deep reinforcement learning-based algorithms that enhance resource allocation efficiency and network resource utilization by leveraging comprehensive observational data to guide and refine the allocation policies. We validate the effectiveness of our algorithms in a testbed developed using Free5gc. 
Our findings demonstrate significant improvements in network efficiency, underscoring the potential of our proposed techniques in unlocking the full potential of cloud native wireless networks.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"9 4","pages":"2236-2248"},"PeriodicalIF":6.7,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145646090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Graph Pointer Network Assisted Deep Reinforcement Learning for Virtualized Network Embedding
IF 6.7 | CAS Zone 2 | Computer Science
IEEE Transactions on Green Communications and Networking. Pub Date: 2025-03-05. DOI: 10.1109/TGCN.2025.3548140
Xinglong Pei;Shuhan Guo;Yuxiang Hu;Ziyong Li;Quanming Yao;Dan Li;Jinchuan Pei;Yongji Dong
{"title":"Graph Pointer Network Assisted Deep Reinforcement Learning for Virtualized Network Embedding","authors":"Xinglong Pei;Shuhan Guo;Yuxiang Hu;Ziyong Li;Quanming Yao;Dan Li;Jinchuan Pei;Yongji Dong","doi":"10.1109/TGCN.2025.3548140","DOIUrl":"https://doi.org/10.1109/TGCN.2025.3548140","url":null,"abstract":"Network Function Virtualization (NFV) improves the flexibility and scalability of network services and reduces operating costs. As the core of research on network virtualization, Virtual Network Embedding (VNE) aims to effectively deploy service requests on physical network components and allocate underlying physical resources. However, network services can be complex and diverse, which makes it difficult for existing embedding methods to effectively utilize the graph structure of services, tackle the complexity of dynamic networks, and provide effective embedding solutions. To this end, we propose GPRL, an online VNE method based on graph pointer network and Deep Reinforcement Learning (DRL). By combining the graph neural network and pointer network, we design a novel graph pointer network as the DRL agent. It employs the graph attention network to encode graph feature data and decodes to output the embedding policy via the pointer network architecture. Furthermore, the Proximal Policy Optimization (PPO) algorithm is used to effectively train the designed agent. 
The effectiveness and superiority of GPRL are verified by simulation experiments, and GPRL is shown to perform better than existing embedding methods.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"9 4","pages":"2222-2235"},"PeriodicalIF":6.7,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145646077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
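The pointer-network decoding the abstract describes boils down to scoring candidate physical nodes with an attention query and "pointing" at one via a masked softmax. A NumPy toy of that single step; this is not the GPRL agent (no graph attention encoder, no PPO), and all embeddings below are random stand-ins:

```python
import numpy as np

def pointer_step(query, node_embeddings, mask):
    """One decoding step: probability over candidate physical nodes for
    embedding the current virtual node; infeasible nodes are masked out."""
    scores = node_embeddings @ query            # dot-product attention scores
    scores = np.where(mask, scores, -np.inf)    # mask resource-infeasible nodes
    e = np.exp(scores - np.max(scores))         # stable softmax
    return e / e.sum()

rng = np.random.default_rng(1)
nodes = rng.normal(size=(5, 4))    # 5 physical nodes, 4-dim embeddings
query = rng.normal(size=4)         # decoder state for the current virtual node
mask = np.array([True, True, False, True, True])   # node 2 lacks resources
probs = pointer_step(query, nodes, mask)
```

During training a PPO agent would sample the placement from `probs`; at inference one would typically take the argmax.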