2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids) — Latest Papers

Trajectory Generation and Compensation for External Forces with a Leg-wheeled Robot Designed for Human Passengers
2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids) Pub Date : 2022-11-28 DOI: 10.1109/Humanoids53995.2022.10000242
Youhei Kakiuchi, Yuta Kojio, Noriaki Imaoka, Daiki Kusuyama, Shimpei Sato, Yutaro Matsuura, Takeshi Ando, M. Inaba
In this paper, we propose a method to generate reference trajectories of the center of gravity (COG) and the zero moment point (ZMP) for hybrid locomotion combining walking and wheeled motion. The method extends reference-ZMP generation based on the linear inverted pendulum model (LIPM) from walking to wheeled locomotion, so that a single, integrated stabilizing control operates while switching between the two modes: walking and wheeled locomotion are treated in a unified manner, which makes hybrid locomotion simple to generate. To carry a passenger, the robot must also account for the external forces produced by the passenger's weight and movement; we compensate for these forces using a force/torque sensor mounted between the seat and the robot. With the proposed stabilization and force-compensation methods, we verified that the real robot can locomote in a hybrid manner, both walking and driving on wheels, and can drive on wheels while carrying a passenger.
Citations: 0
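The LIPM underlying the reference-trajectory generation above has a well-known closed-form solution for the COG given a constant ZMP; a minimal sketch (the constants and interface are illustrative, not taken from the paper):

```python
import math

def lipm_cog(x0, v0, p, z_c=0.8, g=9.81, t=0.1):
    """Propagate the COG of a linear inverted pendulum for time t.

    x0, v0 : initial COG position and velocity (1-D, m and m/s)
    p      : constant reference ZMP during this interval (m)
    z_c    : constant COG height (m)
    Returns (x, v) after time t.
    """
    Tc = math.sqrt(z_c / g)                       # pendulum time constant
    ch, sh = math.cosh(t / Tc), math.sinh(t / Tc)
    x = p + (x0 - p) * ch + Tc * v0 * sh
    v = (x0 - p) / Tc * sh + v0 * ch
    return x, v

# Placing the ZMP ahead of the COG accelerates the COG backward,
# away from the ZMP -- the basic lever used to shape COG trajectories.
x, v = lipm_cog(x0=0.0, v0=0.0, p=0.05, t=0.2)
```

Stitching such intervals together, with the ZMP reference held on the support region in each phase, yields the kind of reference COG/ZMP trajectory the paper generates for both walking and wheeled phases.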
Skeleton recognition-based motion generation and user emotion evaluation with in-home rehabilitation assistive humanoid robot
2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids) Pub Date : 2022-11-28 DOI: 10.1109/Humanoids53995.2022.10000079
Tamon Miyake, Yushi Wang, Gang Yan, S. Sugano
The shortage of nurses and the growing elderly population call for nursing robots that can carry out care tasks safely and intelligently. In this study, we develop a skeleton-recognition-based method for generating humanoid-robot motions for human range-of-motion training with dual 7-DOF arm manipulation. MediaPipe-based skeleton recognition is installed on the humanoid robot so that the human pose can be recognized even when the whole body is not visible to the camera. The 7-DOF arm is controlled to reach the detected 3D coordinates of the human's right shoulder. In the experiment, the robot stood at three positions: where participants could see the robot fully, where they could see it partially, and where they could not see it at all. At each position, the robot used one arm to reach the human's shoulder along three patterns of waypoints while the other hand supported the human's hand. The system successfully generated motions for all conditions except when the participant's back was turned to the robot; the results show that body parts were difficult to recognize when only a partial back view could be captured. For motion generation, the robot therefore needs to stand in front of or to the side of the person when reaching out for range-of-motion training. In addition, the upper waypoint appeared to have relatively high acceptance when participants could not see the robot fully (the condition in which the robot stood to the side of the person).
Citations: 1
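The reaching step above ultimately needs the detected shoulder coordinates expressed in the robot's base frame; a minimal sketch of that transform with a purely hypothetical hand-eye calibration matrix (the paper's MediaPipe pipeline and actual robot frames are not reproduced here):

```python
import numpy as np

def camera_to_base(p_cam, T_base_cam):
    """Map a 3-D point from the camera frame to the robot base frame."""
    p_h = np.append(p_cam, 1.0)            # homogeneous coordinates
    return (T_base_cam @ p_h)[:3]

# Hypothetical head-mounted camera pose: 1.2 m above the base, looking
# forward, with the camera's optical (z) axis along the base x axis.
T = np.array([[ 0.,  0.,  1., 0. ],
              [-1.,  0.,  0., 0. ],
              [ 0., -1.,  0., 1.2],
              [ 0.,  0.,  0., 1. ]])

shoulder_cam = np.array([0.1, 0.2, 1.5])   # detected right shoulder (m)
target = camera_to_base(shoulder_cam, T)   # reach target in base frame
```

The resulting `target` would then be handed to the arm's inverse kinematics as the reach goal for the range-of-motion training motion.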
Dynamic Bipedal Turning through Sim-to-Real Reinforcement Learning
2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids) Pub Date : 2022-11-28 DOI: 10.1109/Humanoids53995.2022.10000225
Fangzhou Yu, Ryan Batke, Jeremy Dao, J. Hurst, Kevin R. Green, Alan Fern
For legged robots to match the athletic capabilities of humans and animals, they must not only produce robust periodic walking and running but also seamlessly switch between nominal locomotion gaits and more specialized transient maneuvers. Despite recent advances in the control of bipedal robots, little attention has been paid to producing highly dynamic behaviors. Recent work using reinforcement learning to produce control policies for legged robots has demonstrated robust walking behaviors, but such learned policies have difficulty expressing many different behaviors in a single network. Inspired by conventional optimization-based control techniques for legged robots, this work applies a recurrent policy to execute four-step, 90° turns, trained on reference data generated from optimized single-rigid-body-model trajectories. We present a training framework that uses epilogue terminal rewards for learning specific behaviors from precomputed trajectory data, and demonstrate a successful transfer to hardware on the bipedal robot Cassie.
Citations: 1
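A reward of the kind described above — per-step tracking of a precomputed reference plus an "epilogue" terminal bonus — can be sketched as follows; the gains, state layout, and functional form are illustrative assumptions, not the authors' formulation:

```python
import math

def turn_reward(state, ref, done, target_yaw=math.pi / 2):
    """Per-step tracking reward plus a terminal bonus for a 90-degree turn.

    state, ref : dicts with 'pos' (x, y) and 'yaw' for the robot and for
                 the precomputed reference-trajectory sample at this step.
    done       : True on the last step of the episode.
    """
    pos_err = math.dist(state["pos"], ref["pos"])
    yaw_err = abs(state["yaw"] - ref["yaw"])
    r = math.exp(-5.0 * pos_err) + math.exp(-5.0 * yaw_err)   # tracking
    if done:
        # Epilogue terminal reward: how close the final heading
        # ends up to the commanded 90-degree turn.
        r += 10.0 * math.exp(-5.0 * abs(state["yaw"] - target_yaw))
    return r
```

Concentrating part of the reward at the episode's end lets the policy deviate from the reference mid-maneuver as long as the overall behavior (here, the completed turn) is achieved.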
Predicting full-arm grasping motions from anticipated tactile responses
2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids) Pub Date : 2022-11-28 DOI: 10.1109/Humanoids53995.2022.9999743
Vedant Dave, Elmar Rueckert
Tactile sensing provides significant information about the state of the environment for performing manipulation tasks. Depending on the physical properties of the object, manipulation movements can vary widely: for a grasping task, the movement of the arm and end effector depends on the point of contact on the object, especially when the object is non-homogeneous in hardness and/or has uneven geometry. In this paper, we propose Tactile Probabilistic Movement Primitives (TacProMPs) to learn the highly non-linear relationship between desired tactile responses and full-arm movement, conditioning solely on the tactile responses to infer complex manipulation skills. We model a joint trajectory of the full-arm joints together with tactile data, condition the model on the desired tactile response from a non-homogeneous object, and infer the full-arm motion (7-DOF Panda arm and 19-DOF gripper hand). We use a Gaussian mixture model of primitives to address the multimodality in the demonstrations, and show that measurement-noise adjustment must be taken into account because multiple systems work in collaboration. We validate the approach and show its robustness in two experiments. First, we consider an object with non-uniform hardness: grasping different parts of the object requires different motions and results in different tactile responses. Second, we grasp multiple objects at different locations. Our results show that TacProMPs can successfully model complex multimodal skills and generalise to new situations.
Citations: 0
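For Gaussian models, conditioning a movement primitive on a desired tactile response reduces to standard Gaussian conditioning on the joint distribution over [tactile; motion] features; a minimal sketch (the feature layout and noise value are illustrative, and the paper's mixture over primitives is omitted):

```python
import numpy as np

def condition_on_tactile(mu, Sigma, n_tac, y, noise=1e-4):
    """Condition a joint Gaussian over [tactile; motion] features on a
    desired tactile response y; return the motion mean and covariance."""
    mu_t, mu_m = mu[:n_tac], mu[n_tac:]
    S_tt = Sigma[:n_tac, :n_tac] + noise * np.eye(n_tac)  # measurement noise
    S_mt = Sigma[n_tac:, :n_tac]
    K = S_mt @ np.linalg.inv(S_tt)                        # conditioning gain
    mu_m_new = mu_m + K @ (y - mu_t)
    S_mm_new = Sigma[n_tac:, n_tac:] - K @ S_mt.T
    return mu_m_new, S_mm_new

# Toy joint model: one tactile feature correlated with one motion feature.
mu = np.zeros(2)
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
m_mean, m_cov = condition_on_tactile(mu, Sigma, n_tac=1, y=np.array([1.0]))
```

Observing a high tactile response shifts the predicted motion feature in the correlated direction and shrinks its uncertainty — the mechanism TacProMPs exploit, per mixture component, to infer full-arm motion from anticipated touch.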
A Multi-metric Modular Framework for Human-like Gait Analysis Based on a Recorded Set of Variable Gait Patterns
2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids) Pub Date : 2022-11-28 DOI: 10.1109/Humanoids53995.2022.10000194
Stephan Kapteijn, Wansoo Kim, L. Marchal-Crespo, L. Peternel
Walking is an essential part of almost all activities of daily living. We use different gait patterns in different situations, e.g., moving around the house, performing various sports, or compensating for an injury. How humans tailor their gait, however, remains a partially unknown process, and the influence of various performance metrics on the optimality and diversity of gait patterns can provide more insight. To analyse gait in terms of pattern diversity and performance metrics related to physical aspects such as joint torque, fatigue, and manipulability, we propose a multi-metric gait analysis framework that accounts for these parameters simultaneously. We used a recorded set of versatile gait patterns that are dynamically stable and physiologically feasible: 45 gait variations, varying in stride length, step height, and walking speed, were recorded in a motion-capture experiment. Results for the recorded dataset are presented for a baseline case (with all optimisation weights set to one), which serves as a first step for future research, in particular giving insight into specific aspects of gait such as joint loading, long-term performance, and the capacity to sustain ground reaction forces.
Citations: 0
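The baseline case with all optimisation weights set to one amounts to an unweighted sum of normalised metrics; a minimal sketch of such a multi-metric cost (metric names and values are illustrative, not from the paper's dataset):

```python
def gait_score(metrics, weights=None):
    """Combine normalised gait metrics (0 = best, 1 = worst) into a
    single cost; the baseline case sets every optimisation weight to 1."""
    weights = weights or {k: 1.0 for k in metrics}
    return sum(weights[k] * v for k, v in metrics.items())

m = {"joint_torque": 0.4, "fatigue": 0.2, "manipulability": 0.3}
baseline = gait_score(m)                                   # all weights 1
torque_focused = gait_score(m, {"joint_torque": 2.0,
                                "fatigue": 1.0,
                                "manipulability": 1.0})    # re-weighted
```

Varying the weights away from the baseline is what would let such a framework probe which metrics best explain the gait variations observed in the recorded set.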
Self-Contained Calibration of an Elastic Humanoid Upper Body Using Only a Head-Mounted RGB Camera
2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids) Pub Date : 2022-11-28 DOI: 10.1109/Humanoids53995.2022.10000184
Johannes Tenhumberg, Dominik Winkelbauer, Darius Burschka, B. Bäuml
When a humanoid robot performs a manipulation task, it first builds a model of the world using its visual sensors and then plans the motion of its body in this model. This requires precise calibration of the camera parameters and the kinematic tree. Besides the accuracy of the calibrated model, the calibration process should be fast and self-contained, i.e., no external measurement equipment should be used. We therefore extend our prior work on calibrating the elastic upper body of DLR's Agile Justin by now using only its internal head-mounted RGB camera. We use simple visual markers at the ends of the kinematic chain, plus one mounted on a pole in front of the robot, to obtain measurements for the whole kinematic tree. To ensure that the task-relevant Cartesian error at the end effectors is minimized, we introduce virtual noise into the fit of our imperfect robot model so that the pixel error has a higher weight when the marker is farther from the camera. This correction reduces the Cartesian error by more than 20%, resulting in a final accuracy of 3.9 mm on average and 9.1 mm in the worst case. In this way, we achieve the same precision as in our previous work [1], where an external Cartesian tracking system was used.
Citations: 3
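The "virtual noise" idea — weighting pixel errors more heavily for distant markers so that minimising the residual minimises Cartesian error at the markers — can be sketched as follows (the focal length and error values are illustrative, and the paper's full model fit is not reproduced):

```python
import numpy as np

def weighted_pixel_residuals(px_err, depth, focal=600.0):
    """Scale pixel reprojection errors by marker depth: for a pinhole
    camera, one pixel of error at depth z corresponds to roughly
    z / focal metres, so distant markers must count for more."""
    return px_err * (depth / focal)[:, None]

px_err = np.array([[2.0, 1.0],      # same pixel error...
                   [2.0, 1.0]])
depth = np.array([0.5, 2.0])        # ...at different marker depths (m)
residuals = weighted_pixel_residuals(px_err, depth)
```

Feeding such depth-scaled residuals to a least-squares optimiser makes it trade off measurements by their Cartesian, rather than pixel, significance — the effect the paper reports as a >20% reduction in Cartesian error.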
Seeking for a better Human-Prosthesis energetic gait efficiency by quantifying both propulsion power and instability control
2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids) Pub Date : 2022-11-28 DOI: 10.1109/Humanoids53995.2022.10000069
H. Pillet, X. Bonnet, Amandine Boos, Lucas Sedran, B. Watier
The present study aims to quantify propulsion and dynamic balance through biomechanical parameters derived from theoretical modeling and analysis of the gait of people using prosthetic devices. An experimental protocol combined motion capture with oxygen-consumption measurement during treadmill gait. The mechanical work produced and dissipated by the lower limbs and the evolution of a biomechanical indicator of balance were computed, and the metabolic cost of walking was estimated from oxygen consumption. To test the relevance of the chosen parameters, the experiments were performed on six able-bodied volunteers successively equipped with two prosthetic ankle-feet (elastic vs. rigid) mounted on a femoral prosthetic simulator. For each participant, the parameters were computed and compared in three configurations: (i) without prosthesis, (ii) with the rigid prosthetic ankle-foot, and (iii) with the elastic prosthetic ankle-foot. The results revealed an increase in energy consumption in both prosthetic configurations compared with the configuration without a prosthesis; however, no differences were observed between the elastic and rigid configurations. The analysis of mechanical work performed by each lower limb, which confirmed the energy delivered by the elastic foot during propulsion, did not by itself explain this discrepancy. The maintenance of balance, which appears to be more challenging during double support in the elastic configuration, could be involved in this counter-intuitive result. Finally, this preliminary study shows the importance of considering propulsion and balance objectives simultaneously during gait, as both require muscular actions that contribute to the energy expended by the prosthesis user.
Citations: 0
Learning from Unreliable Human Action Advice in Interactive Reinforcement Learning
2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids) Pub Date : 2022-11-28 DOI: 10.1109/Humanoids53995.2022.10000078
L. Scherf, Cigdem Turan, Dorothea Koert
Interactive Reinforcement Learning (IRL) uses human input to improve learning speed and enable learning in more complex environments. Human action advice is one of the input channels preferred by human users. However, many existing IRL approaches do not explicitly consider the possibility of inaccurate human action advice, and most approaches that do account for it compute trust in the advice independently of the state. This causes problems in practice, where human input may be inaccurate in some states while still being useful in others. To this end, we propose a novel algorithm that can handle state-dependent, unreliable human action advice in IRL. We combine three potential indicator signals for unreliable advice: consistency of advice, retrospective optimality of advice, and behavioral cues that hint at human uncertainty. We evaluate our method in a simulated gridworld and in robotic sorting tasks with 28 subjects. We show that our method outperforms a state-independent baseline and analyze occurrences of behavioral cues related to unreliable advice.
Citations: 1
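A state-dependent trust estimate combining the three indicator signals might look like the following sketch; the update rule, prior, and threshold are illustrative assumptions, not the authors' algorithm:

```python
from collections import defaultdict

class StateTrust:
    """Per-state trust in human action advice, updated from three
    indicator signals: consistency of advice, retrospective optimality,
    and behavioral cues hinting at human uncertainty."""

    def __init__(self, lr=0.2):
        self.trust = defaultdict(lambda: 0.5)   # neutral prior per state
        self.lr = lr

    def update(self, state, consistent, optimal, uncertain):
        # Combine the indicators (each in [0, 1]) into one reliability
        # estimate; uncertainty cues count against reliability.
        signal = (consistent + optimal + (1.0 - uncertain)) / 3.0
        self.trust[state] += self.lr * (signal - self.trust[state])

    def follow_advice(self, state, threshold=0.5):
        return self.trust[state] >= threshold
```

Because trust is keyed by state, advice that is consistently wrong in one region of the state space can be discounted there while remaining influential where it has proven useful.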
Humanoid Running based on 3D COG-ZMP Model and Resolved Centroidal Viscoelasticity Control
2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids) Pub Date : 2022-11-28 DOI: 10.1109/Humanoids53995.2022.10000210
Zewen He, Ko Yamamoto
Dynamic motion that includes a flight phase, especially running, is a challenging topic in humanoid robotics. Stability in the stance phase and accurate foot control in the flight phase are both essential for this kind of motion. This paper presents a humanoid running motion that applies resolved viscoelasticity control (RVC), including the centroidal viscoelasticity proposed in our previous report. Combining RVC with real-time trajectory modification achieves stable foot landing and, ultimately, stable running. The effectiveness of this control approach is validated in forward-dynamics simulations.
Citations: 1
Neural Symbol Grounding with Multi-Layer Attention for Robot Task Planning
2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids) Pub Date : 2022-11-28 DOI: 10.1109/Humanoids53995.2022.10000074
Pinxin Lv, Li Ning, Hao Jiang, Yushuang Huang, Jing Liu, Zhao Wang
High-level symbolic representations have proven to be an effective expression of planning problems and are widely used in robot task planning. However, grounding symbols in multimodal raw data from complex environments remains a significant challenge. In this paper, we put forward a Multi-layer Attention Network for Multimodal Symbol Grounding (maMSG Net) that effectively combines high-level symbolic representation with multimodal perception, improving the capability and accuracy of understanding complex environments and increasing the diversity of symbol definitions. We introduce both cross-modality and intra-modality attention in the network, which is demonstrated to improve the accuracy of symbol grounding. The maMSG Net takes multimodal raw data as input and estimates the values of state symbols defined in a given planning domain. We designed computer-simulated experiments to evaluate the effectiveness of the method and to verify its robustness against external interference.
Citations: 0
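At its core, the cross-modality attention mentioned above is scaled dot-product attention with queries drawn from one modality and keys/values from another (intra-modality attention uses a single modality for all three); a minimal NumPy sketch, with token counts and dimensions chosen purely for illustration:

```python
import numpy as np

def cross_attention(Q, K, V):
    """Scaled dot-product attention: each query row is answered by a
    softmax-weighted combination of the value rows."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)          # softmax over keys
    return w @ V

rng = np.random.default_rng(0)
vision = rng.normal(size=(4, 8))     # 4 visual feature tokens, dim 8
symbols = rng.normal(size=(3, 8))    # 3 symbol-query tokens, dim 8

# Cross-modality: symbol queries attend over visual keys/values.
grounded = cross_attention(symbols, vision, vision)
```

Stacking such layers, mixing cross- and intra-modality attention, is the general pattern a multi-layer attention grounding network follows before a final head predicts the truth values of the state symbols.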