Smart Health: Latest Articles

Mobile app-based study of driving behaviors under the influence of cannabis
Smart Health Pub Date : 2025-03-26 DOI: 10.1016/j.smhl.2025.100558
Honglu Li , Bin Han , Cong Shi , Yan Wang , Tammy Chung , Yingying Chen
Cannabis use has become increasingly prevalent due to evolving legal and societal attitudes, raising concerns about its influence on public safety, particularly in driving. Existing studies mostly rely on simulators or specialized equipment, which do not capture the complexities of real-world driving and pose cost and scalability issues. In this paper, we investigate the effects of cannabis on driving behavior using participants' smartphones to gather data in natural settings. Our method focuses on three critical behaviors: weaving & swerving, wide turning, and hard braking. We propose a two-step segmentation algorithm for processing continuous motion sensor data and use threshold-based methods for efficient detection. A custom application autonomously records driving events during actual road scenarios. On-road experiments with 9 participants who consumed cannabis under controlled conditions reveal a correlation between cannabis use and altered driving behaviors, with significant effects emerging approximately 2–3 h after consumption.
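The segment-then-threshold idea can be illustrated with a minimal sketch for one of the three behaviors, hard braking. This is not the authors' implementation; the sampling rate, the −3 m/s² threshold, and the minimum event duration are illustrative assumptions.

```python
import numpy as np

def detect_hard_braking(accel_long, fs=50.0, threshold=-3.0, min_duration=0.3):
    """Flag hard-braking events in a longitudinal-acceleration trace (m/s^2).
    Threshold and duration values are illustrative, not from the paper."""
    below = (np.asarray(accel_long) < threshold).astype(np.int8)
    # Segmentation step: find contiguous runs of braking samples via edge detection.
    edges = np.flatnonzero(np.diff(np.concatenate(([0], below, [0]))))
    starts, ends = edges[::2], edges[1::2]       # run boundaries (end-exclusive)
    # Threshold step: keep runs long enough to be real braking, not sensor spikes.
    min_len = int(min_duration * fs)
    return [(s / fs, e / fs) for s, e in zip(starts, ends) if e - s >= min_len]
```

A real pipeline would first rotate phone-frame accelerometer data into the vehicle frame; that step is omitted here.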
Citations: 0
SNOMED CT ontology multi-relation classification by using knowledge embedding in neural network
Smart Health Pub Date : 2025-03-26 DOI: 10.1016/j.smhl.2025.100560
Bofan He, Jerry Q. Cheng, Huanying Gu
SNOMED CT is a widely recognized healthcare terminology designed to comprehensively represent clinical knowledge. Identifying missing or incorrect relationships between medical concepts is crucial for enhancing the scope and quality of this ontology, thereby improving healthcare analytics and decision support. In this study, we propose a novel multi-link prediction approach that utilizes knowledge graph embeddings and neural networks to infer missing relationships within the SNOMED CT knowledge graph. Using TransE, we train embeddings for (concept, relation, concept) triples and develop a multi-head classifier to predict relationship types based solely on concept pairs. With an embedding dimension of 200, a batch size of 128, and 10 epochs, we achieved the highest test accuracy of 91.96% in relationship prediction tasks. This study demonstrates an optimal balance between efficiency, generalization, and representational capacity. By expanding on existing methodologies, this work offers insights into practical applications for ontology enrichment and contributes to the ongoing advancement of predictive models in healthcare informatics. Furthermore, it highlights the potential scalability of the approach, providing a framework that can be extended to other knowledge graphs and domains.
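The TransE idea behind this work — a relation acts as a translation vector between head and tail embeddings — can be sketched as follows. The toy dimensions are arbitrary, and the relation prediction here is a nearest-translation stand-in for the paper's multi-head neural classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 5, 2, 8       # toy sizes, not the paper's
E = rng.normal(size=(n_entities, dim))       # concept (entity) embeddings
R = rng.normal(size=(n_relations, dim))      # relation embeddings

def transe_score(h, r, t):
    """TransE plausibility of a (head, relation, tail) triple:
    a smaller ||E[h] + R[r] - E[t]|| means a more plausible triple."""
    return float(np.linalg.norm(E[h] + R[r] - E[t]))

def predict_relation(h, t):
    """Predict the relation for a concept pair as the one whose translation
    vector best explains E[t] - E[h]."""
    return int(np.argmin([transe_score(h, r, t) for r in range(n_relations)]))
```

In the paper, embeddings trained this way are fed to a neural classifier rather than scored by nearest translation.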
Citations: 0
Exploring finetuned audio-LLM on heart murmur features
Smart Health Pub Date : 2025-03-26 DOI: 10.1016/j.smhl.2025.100557
Adrian Florea, Xilin Jiang, Nima Mesgarani, Xiaofan Jiang
Large language models (LLMs) for audio have excelled in recognizing and analyzing human speech, music, and environmental sounds. However, their potential for understanding other types of sounds, particularly biomedical sounds, remains largely underexplored despite significant scientific interest. In this study, we focus on diagnosing cardiovascular diseases using phonocardiograms, i.e., heart sounds. Most existing deep neural network (DNN) paradigms are restricted to heart murmur classification (healthy vs. unhealthy) and do not predict other acoustic features of the murmur such as grading, harshness, pitch, and quality, which are important in helping physicians diagnose the underlying heart conditions. We propose to finetune an audio LLM, Qwen2-Audio, on the PhysioNet CirCor DigiScope phonocardiogram (PCG) dataset and evaluate its performance in classifying 11 expert-labeled features. Additionally, we aim to achieve a more noise-robust and generalizable system by exploring a preprocessing segmentation algorithm using an audio representation model, SSAMBA. Our results indicate that the LLM-based model outperforms state-of-the-art methods on 10 of the 11 tasks. Moreover, the LLM successfully classifies long-tail features with limited training data, a task at which all previous methods have failed. These findings underscore the potential of audio LLMs as assistants to human cardiologists in enhancing heart disease diagnosis.
Citations: 0
Transforming Stroop task cognitive assessments with multimodal inverse reinforcement learning
Smart Health Pub Date : 2025-03-25 DOI: 10.1016/j.smhl.2025.100567
Ali Abbasi , Jiaqi Gong , Soroush Korivand
Stroop tasks, recognized for their cognitively demanding nature, hold promise for diagnosing and monitoring neurodegenerative diseases. Understanding how humans allocate attention and resolve interference in the Stroop test remains a challenge; yet addressing this gap could reveal key opportunities for early-stage detection. Traditional approaches overlook the interplay between overt behavior and underlying neural processes, limiting insights into the complex color-word associations at play. To tackle this, we propose a framework that applies Inverse Reinforcement Learning (IRL) to fuse electroencephalography (EEG) signals with eye-tracking data, bridging the gap between neural and behavioral markers of cognition. We designed a Stroop experiment featuring congruent and incongruent conditions to evaluate attention allocation under varying levels of interference. By framing gaze as actions guided by an internally derived reward, IRL uncovers hidden motivations behind scanning patterns, while EEG data, processed with advanced feature extraction, reveals task-specific neural dynamics under high conflict. We validate our approach by measuring Probability Mismatch, Target Fixation Probability-Area Under the Curve, Sequence Score, and MultiMatch metrics. Results show that the IRL-EEG model outperforms an IRL-Image baseline, demonstrating improved alignment with human scanpaths and heightened sensitivity to attentional shifts in incongruent trials. These findings highlight the value of integrating neural data into computational models of cognition and illuminate possibilities for early detection of neurodegenerative disorders, where subclinical deficits may first emerge. Our IRL-based integration of EEG and eye-tracking further supports personalized cognitive assessments and adaptive user interfaces.
Citations: 0
An adaptive multimodal fusion framework for smartphone-based medication adherence monitoring of Parkinson’s disease
Smart Health Pub Date : 2025-03-25 DOI: 10.1016/j.smhl.2025.100561
Chongxin Zhong , Jinyuan Jia , Huining Li
Ensuring medication adherence in Parkinson's disease (PD) patients is crucial to relieve patients' symptoms and to better customize regimens according to patients' clinical responses. However, traditional self-management approaches are often error-prone and have limited effectiveness in improving adherence. While smartphone-based solutions have been introduced to monitor various PD metrics, including medication adherence, these methods often rely on single-modality data or fail to fully leverage the advantages of multimodal integration. To address these issues, we present an adaptive multimodal fusion framework for smartphone-based monitoring of medication adherence in PD. Specifically, we segment and transform raw sensor data into spectrograms. Then, we integrate the multimodal data, quantifying the quality of each modality and performing gradient modulation based on each modality's contribution. Afterward, we monitor medication adherence in PD patients by detecting their medicine intake status. We evaluate the performance on a dataset from daily-life scenarios involving 455 patients. The results show that our framework achieves around 94% accuracy in medication adherence monitoring, indicating that it is a promising tool to facilitate medication adherence monitoring in PD patients' daily lives.
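The quality-aware fusion idea can be sketched in its simplest late-fusion form. Note the paper modulates gradients per modality during training; this sketch only shows the simpler inference-time idea of weighting each modality's predictions by a quality score, and all names and values are assumptions.

```python
import numpy as np

def fuse_logits(logits_by_modality, qualities):
    """Quality-weighted late fusion: scale each modality's class logits by a
    normalized quality score before summing (a simplified stand-in for the
    paper's gradient-modulation scheme, which acts during back-propagation)."""
    q = np.asarray(qualities, dtype=float)
    w = q / q.sum()                           # normalize quality scores to weights
    stacked = np.stack(logits_by_modality)    # shape (n_modalities, n_classes)
    return (w[:, None] * stacked).sum(axis=0)
```

A noisy modality thus contributes little to the fused decision, rather than being dropped outright.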
Citations: 0
Continuous prediction of user dropout in a mobile mental health intervention program: An exploratory machine learning approach
Smart Health Pub Date : 2025-03-25 DOI: 10.1016/j.smhl.2025.100565
Pinxiang Wang , Hanqi Chen , Zhouyu Li , Wenyao Xu , Yu-Ping Chang , Huining Li
Mental health interventions can help relieve symptoms such as anxiety and depression. A typical mental health intervention program lasts several months, and participants may lose interest over time and drop out before completing it. Accurately predicting user dropout is crucial for delivering timely measures to address user disengagement and reduce its adverse effects on treatment. We develop a temporal deep learning approach to accurately predict dropout, leveraging advanced data augmentation and feature engineering techniques. By integrating interaction metrics from user behavior logs and semantic features from user self-reflections over a nine-week intervention program, our approach effectively characterizes users' behavior patterns during mental health interventions. The results validate the efficacy of temporal models for continuous dropout prediction.
Citations: 0
Improving gastric lesion detection with synthetic images from diffusion models
Smart Health Pub Date : 2025-03-25 DOI: 10.1016/j.smhl.2025.100569
Yanhua Si , Yingyun Yang , Qilei Chen , Zinan Xiong , Yu Cao , Xinwen Fu , Benyuan Liu , Aiming Yang
In the application of deep learning to gastric cancer detection, the quality of the dataset is as important as, if not more important than, the design of the network architecture. However, obtaining labeled data, especially in fields such as medical imaging for gastric cancer detection, can be expensive and challenging. This scarcity is exacerbated by stringent privacy regulations and the need for annotations by specialists. Conventional methods of data augmentation fall short due to the complexities of medical imagery. In this paper, we explore the use of diffusion models to generate synthetic medical images for the detection of gastric cancer. We evaluate their capability to produce realistic images that can augment small datasets, potentially enhancing the accuracy and robustness of detection algorithms. By training diffusion models on existing gastric cancer data and producing new images, we aim to expand these datasets, thereby improving the training of deep learning models to achieve better precision and generalization in lesion detection. Our findings indicate that images generated by diffusion models significantly mitigate the issue of data scarcity, advancing the field of deep learning in medical imaging.
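The dataset-expansion step described above amounts to mixing diffusion-generated samples into the real training pool at some ratio. A minimal sketch, assuming the synthetic images already exist and that the mixing ratio is a hyperparameter to tune on a validation split:

```python
import random

def build_training_set(real, synthetic, synth_ratio=0.5, seed=0):
    """Augment a small real dataset with diffusion-generated samples.
    synth_ratio is the number of synthetic samples added, as a fraction of the
    real count; the default is an illustrative assumption, not from the paper."""
    rng = random.Random(seed)
    n_synth = int(len(real) * synth_ratio)
    picked = rng.sample(synthetic, min(n_synth, len(synthetic)))
    combined = list(real) + picked
    rng.shuffle(combined)    # shuffle so batches mix real and synthetic
    return combined
```

In practice one would also verify that synthetic lesions are anatomically plausible before adding them.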
Citations: 0
HealthQ: Unveiling questioning capabilities of LLM chains in healthcare conversations
Smart Health Pub Date : 2025-03-25 DOI: 10.1016/j.smhl.2025.100570
Ziyu Wang , Hao Li , Di Huang , Hye-Sung Kim , Chae-Won Shin , Amir M. Rahmani
Effective patient care in digital healthcare requires large language models (LLMs) that not only answer questions but also actively gather critical information through well-crafted inquiries. This paper introduces HealthQ, a novel framework for evaluating the questioning capabilities of LLM healthcare chains. By implementing advanced LLM chains, including Retrieval-Augmented Generation (RAG), Chain of Thought (CoT), and reflective chains, HealthQ assesses how effectively these chains elicit comprehensive and relevant patient information. To achieve this, we integrate an LLM judge to evaluate generated questions across metrics such as specificity, relevance, and usefulness, while aligning these evaluations with traditional Natural Language Processing (NLP) metrics like ROUGE and Named Entity Recognition (NER)-based set comparisons. We validate HealthQ using two custom datasets constructed from public medical datasets, ChatDoctor and MTS-Dialog, and demonstrate its robustness across multiple LLM judge models, including GPT-3.5, GPT-4, and Claude. Our contributions are threefold: we present the first systematic framework for assessing questioning capabilities in healthcare conversations, establish a model-agnostic evaluation methodology, and provide empirical evidence linking high-quality questions to improved patient information elicitation.
Citations: 0
Intuitive axial augmentation using polar-sine-based piecewise distortion for medical slice-wise segmentation
Smart Health Pub Date : 2025-03-24 DOI: 10.1016/j.smhl.2025.100556
Yiqin Zhang , Qingkui Chen , Chen Huang , Zhengjie Zhang , Meiling Chen , Zhibing Fu
Most data-driven models for medical image analysis rely on universal augmentations to improve accuracy. Experimental evidence has confirmed their effectiveness, but the unclear mechanism underlying them poses a barrier to widespread acceptance of and trust in such methods within the medical community. We revisit the unique characteristics of medical images compared with conventional digital images and propose a medical-specific augmentation algorithm that is more elastic and aligns well with radiology scanning procedures. The method performs a piecewise affine transform along rays whose angles are distorted sinusoidally as a function of radius in polar coordinates, thus simulating the uncertain posture of a person lying flat on the scanning table. Our method can vary the apparent visceral distribution without affecting the fundamental relative positions on the axial plane. Two non-adaptive algorithms, Meta-based Scan Table Removal and Similarity-Guided Parameter Search, are introduced to bolster the robustness of our augmentation method. In contrast to other methodologies, our method is notable for its intuitive design and ease of understanding for medical professionals, enhancing its applicability in clinical scenarios. Experiments show that our method improves accuracy on two modalities across multiple well-known segmentation frameworks without requiring more data samples. Our preview code is available at: https://github.com/MGAMZ/PSBPD.
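The geometric core — perturbing each pixel's polar angle by a sinusoid of its radius — can be sketched as a coordinate remap. This is a simplified, non-piecewise version of the paper's transform (see the linked repository for the real one); the amplitude and frequency values are illustrative.

```python
import numpy as np

def polar_sine_warp_coords(h, w, amplitude=0.05, freq=3.0):
    """Build a backward-sampling grid that perturbs the polar angle of each
    pixel by a sinusoid of its normalized radius, roughly simulating small
    rotational posture variation about the scan axis. Returns (src_y, src_x)."""
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0     # image center
    dy, dx = yy - cy, xx - cx
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)
    r_norm = r / r.max()                       # radius in [0, 1]
    theta2 = theta + amplitude * np.sin(2 * np.pi * freq * r_norm)
    return cy + r * np.sin(theta2), cx + r * np.cos(theta2)
```

The returned grid would be fed to an interpolating sampler (e.g. `scipy.ndimage.map_coordinates`) to produce the warped slice.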
Citations: 0
Quantum contrastive learning for human activity recognition
Smart Health Pub Date : 2025-03-24 DOI: 10.1016/j.smhl.2025.100574
Yanhui Ren , Di Wang , Lingling An , Shiwen Mao , Xuyu Wang
Deep learning techniques have been widely used in human activity recognition (HAR) applications. The major challenge lies in obtaining high-quality, large-scale labeled sensor datasets. However, unlike image or text datasets, HAR sensor datasets are non-intuitive and uninterpretable, making manual labeling extremely difficult. Self-supervised learning has emerged to address this problem, as it can learn from large-scale unlabeled datasets that are easier to collect. Nevertheless, self-supervised learning incurs increased computational cost and demands larger deep neural networks. Recently, quantum machine learning has attracted widespread attention due to its powerful computational capability and feature extraction ability. In this paper, we aim to address this classical hardware bottleneck using quantum machine learning techniques. We propose QCLHAR, a quantum contrastive learning framework for HAR, which combines quantum machine learning techniques with contrastive learning to learn better latent representations. We evaluate the feasibility of the proposed framework on six publicly available HAR datasets. The experimental results demonstrate the effectiveness of the framework for HAR, which can surpass or match the precision of classical contrastive learning with fewer parameters. This validates the effectiveness of our approach and demonstrates the significant potential of quantum technology in addressing the challenges associated with the scarcity of labeled sensory data.
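The contrastive objective underlying frameworks like this is typically the NT-Xent loss over two augmented views of each sample. A classical NumPy sketch follows; the paper's contribution is replacing the encoder with a quantum circuit, which this sketch does not model, and the temperature value is an assumption.

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss: for each embedding, its augmented view from
    the other batch is the positive, and all other embeddings are negatives."""
    z = np.concatenate([z1, z2])                        # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # unit-normalize
    sim = z @ z.T / temperature                         # scaled cosine similarity
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), targets]))
```

The loss is low when each sample's two views embed close together and away from other samples.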
Citations: 0