Frontiers in Robotics and AI: Latest Publications

A review of robotic and automated systems in meat processing.
IF 2.9
Frontiers in Robotics and AI Pub Date : 2025-05-23 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1578318
Yining Lyu, Fan Wu, Qingyu Wang, Guanyu Liu, Yingqi Zhang, Huanyu Jiang, Mingchuan Zhou
{"title":"A review of robotic and automated systems in meat processing.","authors":"Yining Lyu, Fan Wu, Qingyu Wang, Guanyu Liu, Yingqi Zhang, Huanyu Jiang, Mingchuan Zhou","doi":"10.3389/frobt.2025.1578318","DOIUrl":"10.3389/frobt.2025.1578318","url":null,"abstract":"<p><p>Tasks in the meat processing sector are physically challenging, repetitive, and prone to worker scarcity. Therefore, the imperative adoption of mechanization and automation within the domain of meat processing is underscored by its key role in mitigating labor-intensive processes while concurrently enhancing productivity, safety, and operator wellbeing. This review paper gives an overview of the current research for robotic and automated systems in meat processing. The modules of a robotic system are introduced and afterward, the robotic tasks are divided into three sections with the features of processing targets including livestock, poultry, and seafood. Furthermore, we analyze the technical details of whole meat processing, including skinning, gutting, abdomen cutting, and half-carcass cutting, and discuss these systems in performance and industrial feasibility. The review also refers to some commercialized products for automation in the meat processing industry. Finally, we conclude the review and discuss potential challenges for further robotization and automation in meat processing.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1578318"},"PeriodicalIF":2.9,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12141337/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144250298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Neurotechnology for enhancing human operation of robotic and semi-autonomous systems.
IF 2.9
Frontiers in Robotics and AI Pub Date : 2025-05-23 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1491494
William J Tyler, Anusha Adavikottu, Christian Lopez Blanco, Archana Mysore, Christopher Blais, Marco Santello, Avinash Unnikrishnan
{"title":"Neurotechnology for enhancing human operation of robotic and semi-autonomous systems.","authors":"William J Tyler, Anusha Adavikottu, Christian Lopez Blanco, Archana Mysore, Christopher Blais, Marco Santello, Avinash Unnikrishnan","doi":"10.3389/frobt.2025.1491494","DOIUrl":"10.3389/frobt.2025.1491494","url":null,"abstract":"<p><p>Human operators of remote and semi-autonomous systems must have a high level of executive function to safely and efficiently conduct operations. These operators face unique cognitive challenges when monitoring and controlling robotic machines, such as vehicles, drones, and construction equipment. The development of safe and experienced human operators of remote machines requires structured training and credentialing programs. This review critically evaluates the potential for incorporating neurotechnology into remote systems operator training and work to enhance human-machine interactions, performance, and safety. Recent evidence demonstrating that different noninvasive neuromodulation and neurofeedback methods can improve critical executive functions such as attention, learning, memory, and cognitive control is reviewed. We further describe how these approaches can be used to improve training outcomes, as well as teleoperator vigilance and decision-making. We also describe how neuromodulation can help remote operators during complex or high-risk tasks by mitigating impulsive decision-making and cognitive errors. While our review advocates for incorporating neurotechnology into remote operator training programs, continued research is required to evaluate the how these approaches will impact industrial safety and workforce readiness.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1491494"},"PeriodicalIF":2.9,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12141011/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144250299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Co-design methodology for rapid prototyping of modular robots in care settings.
IF 2.9
Frontiers in Robotics and AI Pub Date : 2025-05-22 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1581506
Alexandre Colle, Karen Donaldson, Mauro Dragone
{"title":"Co-design methodology for rapid prototyping of modular robots in care settings.","authors":"Alexandre Colle, Karen Donaldson, Mauro Dragone","doi":"10.3389/frobt.2025.1581506","DOIUrl":"10.3389/frobt.2025.1581506","url":null,"abstract":"<p><strong>Introduction: </strong>This paper introduces a structured co-design methodology for developing modular robotic solutions for the care sector. Despite the widespread adoption of co-design in robotics, existing frameworks often lack clear and systematic processes to effectively incorporate user requirements into tangible robotic designs.</p><p><strong>Method: </strong>To address this gap, the present work proposes an iterative, modular co-design methodology that captures, organises, and translates user insights into practical robotic modules. The methodology employs Design Research (DR) methods combined with Design for Additive Manufacturing (DfAM) principles, enabling rapid prototyping and iterative refinement based on continuous user feedback. The proposed approach was applied in the development of Robobrico, a modular robot created collaboratively with care home users.</p><p><strong>Results: </strong>Outcomes from this study demonstrate that this structured process effectively aligns robot functionality with user expectations, enhances adaptability, and facilitates practical integration of modular robotic platforms in real-world care environments.</p><p><strong>Discussion: </strong>This paper details the proposed methodology, the tools developed to support it, and key insights derived from its implementation.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1581506"},"PeriodicalIF":2.9,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12137090/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144235567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
How multimodal narrative and visual representations of human-like service robots shape attitudes and social connection.
IF 2.9
Frontiers in Robotics and AI Pub Date : 2025-05-22 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1568146
Neil Anthony Daruwala
{"title":"How multimodal narrative and visual representations of human-like service robots shape attitudes and social connection.","authors":"Neil Anthony Daruwala","doi":"10.3389/frobt.2025.1568146","DOIUrl":"10.3389/frobt.2025.1568146","url":null,"abstract":"<p><strong>Introduction: </strong>Public attitudes toward service robots are critical to their acceptance across various industries. Previous research suggests that human-like features and behaviours perceived as empathetic may reduce negative perceptions and enhance emotional engagement. However, there is limited empirical evidence on how structured multimodal interventions influence these responses.</p><p><strong>Methods: </strong>A partially mixed experimental design was employed, featuring one between-subjects factor (group: experimental vs. control) and one within-subjects factor (time: pre-intervention vs. post-intervention), applied only to the experimental group. Two hundred twenty-eight adults (aged 18-65) were randomly assigned to either the experimental or control condition. The intervention included images, video demonstrations of human-like service robots performing socially meaningful gestures, and a narrative vignette depicting human-robot interaction. The control group completed the same assessment measures without the intervention. Outcomes included negative attitudes toward robots (Negative Attitudes Toward Robots Scale, NARS), affect (Positive and Negative Affect Schedule, PANAS), and perceived interpersonal connection (Inclusion of Other in the Self scale, IOS).</p><p><strong>Results: </strong>The experimental group demonstrated a significant reduction in negative attitudes (p < 0.001, Cohen's d = 0.37), as well as lower negative affect and a greater perceived interpersonal connection with the robots (both p < 0.001). Age moderated baseline attitudes, with younger participants reporting more positive initial views; gender was not a significant factor.</p><p><strong>Discussion: </strong>These findings suggest that multimodal portrayals of human-like service robots can improve attitudes, affective responses, and interpersonal connection, offering practical insights for robot design, marketing, and public engagement strategies.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1568146"},"PeriodicalIF":2.9,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12137300/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144235568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
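For readers who want to reproduce the kind of pre/post analysis reported in the entry above, here is a minimal, illustrative sketch (not taken from the paper) of a paired t-test with a within-subjects Cohen's d on NARS scores. The function name and the effect-size convention (mean difference over the SD of the differences) are assumptions for illustration.

```python
# Illustrative only: a paired pre/post comparison with a within-subjects
# Cohen's d, in the spirit of the NARS results summarised above. The
# effect-size convention used here (d_z = mean difference / SD of the
# differences) is one common choice and not necessarily the authors'.
import numpy as np
from scipy import stats

def paired_effect(pre: np.ndarray, post: np.ndarray):
    """Return (t, p, d) for a within-subjects pre/post comparison."""
    t, p = stats.ttest_rel(pre, post)      # paired t-test
    diff = pre - post                      # positive diff = NARS dropped (attitudes improved)
    d = diff.mean() / diff.std(ddof=1)     # Cohen's d for paired samples (d_z)
    return t, p, d

# Usage with your own per-participant score arrays:
# t, p, d = paired_effect(nars_pre, nars_post)
```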
Learning to suppress tremors: a deep reinforcement learning-enabled soft exoskeleton for Parkinson's patients.
IF 2.9
Frontiers in Robotics and AI Pub Date : 2025-05-21 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1537470
Tamás Endrei, Sándor Földi, Ádám Makk, György Cserey
{"title":"Learning to suppress tremors: a deep reinforcement learning-enabled soft exoskeleton for Parkinson's patients.","authors":"Tamás Endrei, Sándor Földi, Ádám Makk, György Cserey","doi":"10.3389/frobt.2025.1537470","DOIUrl":"10.3389/frobt.2025.1537470","url":null,"abstract":"<p><strong>Introduction: </strong>Neurological tremors, prevalent among a large population, are one of the most rampant movement disorders. Biomechanical loading and exoskeletons show promise in enhancing patient well-being, but traditional control algorithms limit their efficacy in dynamic movements and personalized interventions. Furthermore, a pressing need exists for more comprehensive and robust validation methods to ensure the effectiveness and generalizability of proposed solutions.</p><p><strong>Methods: </strong>This paper proposes a physical simulation approach modeling multiple arm joints and tremor propagation. This study also introduces a novel adaptable reinforcement learning environment tailored for disorders with tremors. We present a deep reinforcement learning-based encoder-actor controller for Parkinson's tremors in various shoulder and elbow joint axes displayed in dynamic movements.</p><p><strong>Results: </strong>Our findings suggest that such a control strategy offers a viable solution for tremor suppression in real-world scenarios.</p><p><strong>Discussion: </strong>By overcoming the limitations of traditional control algorithms, this work takes a new step in adapting biomechanical loading into the everyday life of patients. This work also opens avenues for more adaptive and personalized interventions in managing movement disorders.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1537470"},"PeriodicalIF":2.9,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12133501/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144227231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
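To make the reinforcement-learning setup above more concrete, the sketch below shows a toy Gymnasium-style tremor-suppression environment: a single joint driven by a sinusoidal tremor torque, a counter-torque action, and a reward that penalises residual deviation. The paper's multi-joint physical simulation, tremor-propagation model, and encoder-actor controller are far richer; every quantity here (dynamics, gains, bounds, episode length) is an assumption for illustration.

```python
# Toy, hypothetical tremor-suppression environment (not the paper's simulator).
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class ToyTremorEnv(gym.Env):
    def __init__(self, tremor_hz: float = 5.0, dt: float = 0.01):
        super().__init__()
        self.dt, self.omega = dt, 2 * np.pi * tremor_hz
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(2,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.angle, self.velocity = 0.0, 0.0, 0.0
        return np.array([self.angle, self.velocity], dtype=np.float32), {}

    def step(self, action):
        tremor_torque = 0.5 * np.sin(self.omega * self.t)   # involuntary drive
        accel = tremor_torque + float(action[0])             # agent's suppression torque
        self.velocity += accel * self.dt
        self.angle += self.velocity * self.dt
        self.t += self.dt
        obs = np.array([self.angle, self.velocity], dtype=np.float32)
        reward = -abs(self.angle)                             # penalise residual tremor
        truncated = self.t >= 2.0                             # 2 s episodes
        return obs, reward, False, truncated, {}
```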
AcoustoBots: A swarm of robots for acoustophoretic multimodal interactions.
IF 2.9
Frontiers in Robotics and AI Pub Date : 2025-05-21 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1537101
Narsimlu Kemsaram, James Hardwick, Jincheng Wang, Bonot Gautam, Ceylan Besevli, Giorgos Christopoulos, Sourabh Dogra, Lei Gao, Akin Delibasi, Diego Martinez Plasencia, Orestis Georgiou, Marianna Obrist, Ryuji Hirayama, Sriram Subramanian
{"title":"AcoustoBots: A swarm of robots for acoustophoretic multimodal interactions.","authors":"Narsimlu Kemsaram, James Hardwick, Jincheng Wang, Bonot Gautam, Ceylan Besevli, Giorgos Christopoulos, Sourabh Dogra, Lei Gao, Akin Delibasi, Diego Martinez Plasencia, Orestis Georgiou, Marianna Obrist, Ryuji Hirayama, Sriram Subramanian","doi":"10.3389/frobt.2025.1537101","DOIUrl":"10.3389/frobt.2025.1537101","url":null,"abstract":"<p><strong>Introduction: </strong>Acoustophoresis has enabled novel interaction capabilities, such as levitation, volumetric displays, mid-air haptic feedback, and directional sound generation, to open new forms of multimodal interactions. However, its traditional implementation as a singular static unit limits its dynamic range and application versatility.</p><p><strong>Methods: </strong>This paper introduces \"AcoustoBots\" - a novel convergence of acoustophoresis with a movable and reconfigurable phased array of transducers for enhanced application versatility. We mount a phased array of transducers on a swarm of robots to harness the benefits of multiple mobile acoustophoretic units. This offers a more flexible and interactive platform that enables a swarm of acoustophoretic multimodal interactions. Our novel AcoustoBots design includes a hinge actuation system that controls the orientation of the mounted phased array of transducers to achieve high flexibility in a swarm of acoustophoretic multimodal interactions. In addition, we designed a BeadDispenserBot that can deliver particles to trapping locations, which automates the acoustic levitation interaction.</p><p><strong>Results: </strong>These attributes allow AcoustoBots to independently work for a common cause and interchange between modalities, allowing for novel augmentations (e.g., a swarm of haptics, audio, and levitation) and bilateral interactions with users in an expanded interaction area.</p><p><strong>Discussion: </strong>We detail our design considerations, challenges, and methodological approach to extend acoustophoretic central control in distributed settings. This work demonstrates a scalable acoustic control framework with two mobile robots, laying the groundwork for future deployment in larger robotic swarms. Finally, we characterize the performance of our AcoustoBots and explore the potential interactive scenarios they can enable.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1537101"},"PeriodicalIF":2.9,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12133503/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144227230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
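As background to the entry above, the sketch below shows the basic phased-array focusing computation that acoustophoretic systems of this kind build on: each transducer is driven with a phase that cancels its propagation delay to the focal point, so the emitted waves arrive in phase there. The array geometry and frequency are illustrative assumptions; the paper's hardware, hinge actuation, and trap signatures are not modelled.

```python
# Minimal phased-array focusing sketch (illustrative assumptions only).
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air
FREQUENCY = 40e3         # 40 kHz ultrasonic transducers (a common choice)

def focusing_phases(transducer_positions: np.ndarray, focal_point: np.ndarray) -> np.ndarray:
    """Phase (radians) per transducer so the emitted waves are in phase at the focus."""
    k = 2 * np.pi * FREQUENCY / SPEED_OF_SOUND              # wavenumber
    distances = np.linalg.norm(transducer_positions - focal_point, axis=1)
    return (-k * distances) % (2 * np.pi)

# Example: a 4x4 planar array with 10 mm pitch, focusing 10 cm above its centre.
pitch = 0.01
xs, ys = np.meshgrid(np.arange(4) * pitch, np.arange(4) * pitch)
positions = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(16)])
positions -= positions.mean(axis=0)                          # centre the array at the origin
focus = np.array([0.0, 0.0, 0.10])
print(np.round(focusing_phases(positions, focus), 2))
```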
ROSA: a knowledge-based solution for robot self-adaptation.
IF 2.9
Frontiers in Robotics and AI Pub Date : 2025-05-20 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1531743
Gustavo Rezende Silva, Juliane Päßler, S Lizeth Tapia Tarifa, Einar Broch Johnsen, Carlos Hernández Corbato
{"title":"ROSA: a knowledge-based solution for robot self-adaptation.","authors":"Gustavo Rezende Silva, Juliane Päßler, S Lizeth Tapia Tarifa, Einar Broch Johnsen, Carlos Hernández Corbato","doi":"10.3389/frobt.2025.1531743","DOIUrl":"10.3389/frobt.2025.1531743","url":null,"abstract":"<p><p>Autonomous robots must operate in diverse environments and handle multiple tasks despite uncertainties. This creates challenges in designing software architectures and task decision-making algorithms, as different contexts may require distinct task logic and architectural configurations. To address this, robotic systems can be designed as self-adaptive systems capable of adapting their task execution and software architecture at runtime based on their context. This paper introduces ROSA, a novel knowledge-based framework for RObot Self-Adaptation, which enables task-and-architecture co-adaptation (TACA) in robotic systems. ROSA achieves this by providing a knowledge model that captures all application-specific knowledge required for adaptation and by reasoning over this knowledge at runtime to determine when and how adaptation should occur. In addition to a conceptual framework, this work provides an open-source ROS 2-based reference implementation of ROSA and evaluates its feasibility and performance in an underwater robotics application. Experimental results highlight ROSA's advantages in reusability and development effort for designing self-adaptive robotic systems.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1531743"},"PeriodicalIF":2.9,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12131011/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144217213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
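To illustrate the general idea behind a knowledge-based adaptation loop like the one described above, here is a minimal, hypothetical sketch: application-specific adaptation rules are held in a declarative table and reasoned over at runtime to select both the task logic and the architectural configuration. All names, rules, and the context format are assumptions; the actual ROSA framework is ROS 2-based and uses a richer knowledge model.

```python
# Hypothetical knowledge-driven adaptation loop (not the ROSA API).
from dataclasses import dataclass

@dataclass(frozen=True)
class Configuration:
    task_plan: str      # which task logic to run
    components: tuple   # which software components to activate

# Knowledge model: context predicates mapped to desired configurations, in priority order.
KNOWLEDGE = [
    (lambda ctx: ctx["thruster_failed"],
     Configuration("return_to_dock", ("degraded_controller", "sonar_nav"))),
    (lambda ctx: ctx["water_visibility"] < 0.3,
     Configuration("inspect_pipeline", ("sonar_nav", "pipeline_tracker"))),
    (lambda ctx: True,  # default rule
     Configuration("inspect_pipeline", ("vision_nav", "pipeline_tracker"))),
]

def reason(ctx: dict) -> Configuration:
    """Return the first configuration whose condition holds for the current context."""
    for condition, config in KNOWLEDGE:
        if condition(ctx):
            return config
    raise RuntimeError("knowledge base has no applicable rule")

def adaptation_loop(monitor, reconfigure):
    """Monitor the context, reason over the knowledge base, reconfigure only on change."""
    current = None
    for ctx in monitor():            # e.g., yields sensor/diagnostic snapshots
        desired = reason(ctx)
        if desired != current:
            reconfigure(desired)     # apply task and architecture changes together
            current = desired
```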
Simultaneous text and gesture generation for social robots with small language models.
IF 2.9
Frontiers in Robotics and AI Pub Date : 2025-05-16 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1581024
Alessio Galatolo, Katie Winkle
{"title":"Simultaneous text and gesture generation for social robots with small language models.","authors":"Alessio Galatolo, Katie Winkle","doi":"10.3389/frobt.2025.1581024","DOIUrl":"10.3389/frobt.2025.1581024","url":null,"abstract":"<p><strong>Introduction: </strong>As social robots gain advanced communication capabilities, users increasingly expect coherent verbal and non-verbal behaviours. Recent work has shown that Large Language Models (LLMs) can support autonomous generation of such multimodal behaviours. However, current LLM-based approaches to non-verbal behaviour often involve multi-step reasoning with large, closed-source models-resulting in significant computational overhead and limiting their feasibility in low-resource or privacy-constrained environments.</p><p><strong>Methods: </strong>To address these limitations, we propose a novel method for simultaneous generation of text and gestures with minimal computational overhead compared to plain text generation. Our system does not produce low-level joint trajectories, but instead predicts high-level communicative intentions, which are mapped to platform-specific expressions. Central to our approach is the introduction of lightweight, robot-specific \"gesture heads\" derived from the LLM's architecture, requiring no pose-based datasets and enabling generalisability across platforms.</p><p><strong>Results: </strong>We evaluate our method on two distinct robot platforms: Furhat (facial expressions) and Pepper (bodily gestures). Experimental results demonstrate that our method maintains behavioural quality while introducing negligible computational and memory overhead. Furthermore, the gesture heads operate in parallel with the language generation component, ensuring scalability and responsiveness even on small or locally deployed models.</p><p><strong>Discussion: </strong>Our approach supports the use of Small Language Models for multimodal generation, offering an effective alternative to existing high-resource methods. By abstracting gesture generation and eliminating reliance on platform-specific motion data, we enable broader applicability in real-world, low-resource, and privacy-sensitive HRI settings.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1581024"},"PeriodicalIF":2.9,"publicationDate":"2025-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12122315/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144200515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
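The sketch below illustrates the "gesture head" idea described above in a hypothetical PyTorch form: a lightweight classification head reads the language model's hidden states and predicts a high-level communicative intent in parallel with text generation, which a platform-specific mapper then turns into a Furhat facial expression or a Pepper body gesture. Layer sizes, labels, and the mapping table are assumptions, not the authors' implementation.

```python
# Hypothetical gesture head over LM hidden states (illustrative only).
import torch
import torch.nn as nn

GESTURE_LABELS = ["none", "nod", "point", "smile", "wave"]

class GestureHead(nn.Module):
    def __init__(self, hidden_size: int, num_gestures: int = len(GESTURE_LABELS)):
        super().__init__()
        # Lightweight head: a single projection on top of the LM hidden states.
        self.proj = nn.Linear(hidden_size, num_gestures)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size) from the language model.
        pooled = hidden_states.mean(dim=1)   # pool over the sequence
        return self.proj(pooled)             # (batch, num_gestures) intent logits

def map_to_platform(gesture: str, platform: str) -> str:
    """Map an abstract intent to a platform-specific expression (illustrative table)."""
    table = {
        ("nod", "furhat"): "head_nod_animation",
        ("nod", "pepper"): "bow_motion",
        ("smile", "furhat"): "smile_au_blend",
    }
    return table.get((gesture, platform), "idle")

# Usage sketch, assuming a HuggingFace-style small LM forward pass:
# hidden = lm(input_ids, output_hidden_states=True).hidden_states[-1]
# logits = GestureHead(hidden.size(-1))(hidden)
# gesture = GESTURE_LABELS[logits.argmax(dim=-1)[0]]
# expression = map_to_platform(gesture, "furhat")
```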
Editorial: Advancements in AI-driven multimodal interfaces for robot-aided rehabilitation.
IF 2.9
Frontiers in Robotics and AI Pub Date : 2025-05-15 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1605418
Christian Tamantini, Kevin Patrice Langlois, David Rodriguez Cianca, Loredana Zollo
{"title":"Editorial: Advancements in AI-driven multimodal interfaces for robot-aided rehabilitation.","authors":"Christian Tamantini, Kevin Patrice Langlois, David Rodriguez Cianca, Loredana Zollo","doi":"10.3389/frobt.2025.1605418","DOIUrl":"10.3389/frobt.2025.1605418","url":null,"abstract":"","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1605418"},"PeriodicalIF":2.9,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12119299/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144183468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Editorial: Advances in modern intelligent surgery: from computer-aided diagnosis to medical robotics.
IF 2.9
Frontiers in Robotics and AI Pub Date : 2025-05-14 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1620551
Zhe Min, Rui Song, Changsheng Li, Jax Luo
{"title":"Editorial: Advances in modern intelligent surgery: from computer-aided diagnosis to medical robotics.","authors":"Zhe Min, Rui Song, Changsheng Li, Jax Luo","doi":"10.3389/frobt.2025.1620551","DOIUrl":"10.3389/frobt.2025.1620551","url":null,"abstract":"","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1620551"},"PeriodicalIF":2.9,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12117187/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144175371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0