Artificial Life and Robotics — Latest Articles

Video stabilization algorithm for field robots in uneven terrain
IF 0.9
Artificial Life and Robotics Pub Date: 2023-07-11 DOI: 10.1007/s10015-023-00883-x
Abhijeet Ravankar, Arpit Rawankar, Ankit A. Ravankar
Abstract: Field robots equipped with visual sensors have been used to automate several services. In many scenarios, these robots are tele-operated by a remote operator who controls the robot's motion based on a live video feed from its cameras. In other cases, such as surveillance and monitoring applications, the video recorded by the robot is later analyzed or inspected manually. Uneven terrain produces shaky video, as can a loose, vibrating mechanical frame on which the camera is mounted. Such jitters are undesirable for tele-operation and degrade the quality of service. In this paper, we present an algorithm that stabilizes a shaky video using only the camera information, applied to different areas of a vineyard based on the terrain profile. The algorithm works by tracking robust feature points across successive camera frames, smoothing the resulting trajectory, and generating the transformations needed to output a stabilized video. We have tested the algorithm on actual field robots on uneven agricultural terrain and found that it produces good results.
Artificial Life and Robotics 28(3): 502–508. Citations: 0
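The smoothing step described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: it assumes per-frame motion parameters [dx, dy, dθ] have already been estimated from the tracked feature points, and smooths the cumulative camera trajectory with a moving average to obtain per-frame corrections:

```python
import numpy as np

def smooth_trajectory(transforms, radius=5):
    """Turn raw per-frame motion into stabilizing corrections.

    transforms: (N, 3) array of per-frame [dx, dy, d_angle] estimated
    between successive frames from tracked feature points. The cumulative
    trajectory is smoothed with a moving average, and the difference is
    folded back into the per-frame transforms.
    """
    trajectory = np.cumsum(transforms, axis=0)   # camera path over time
    window = 2 * radius + 1
    kernel = np.ones(window) / window
    padded = np.pad(trajectory, ((radius, radius), (0, 0)), mode="edge")
    smoothed = np.vstack([
        np.convolve(padded[:, i], kernel, mode="valid")
        for i in range(trajectory.shape[1])
    ]).T
    # corrected per-frame transforms that follow the smoothed path
    return transforms + (smoothed - trajectory)
```

Warping each frame by its corrected transform (e.g. with an affine warp) then yields the stabilized output.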
Improvements of detection accuracy and its confidence of defective areas by YOLOv2 using a data set augmentation method
IF 0.9
Artificial Life and Robotics Pub Date: 2023-07-08 DOI: 10.1007/s10015-023-00885-9
Koki Arima, Fusaomi Nagata, Tatsuki Shimizu, Akimasa Otsuka, Hirohisa Kato, Keigo Watanabe, Maki K. Habib
Abstract: Recently, CNNs (Convolutional Neural Networks) and Grad-CAM (Gradient-weighted Class Activation Mapping) have been applied to many kinds of defect detection and position recognition for industrial products. However, training a CNN model requires a large amount of image data to achieve the desired generalization ability. In addition, it is not easy for Grad-CAM to clearly identify the defect area that is predicted as the basis of a classification result. Moreover, when deployed on an actual production line, the two calculation processes for the CNN and Grad-CAM have to be called sequentially for defect detection and position recognition, so processing time is a concern. In this paper, the authors apply YOLOv2 (You Only Look Once) to defect detection and its visualization so that both are processed at once. In general, a YOLOv2 model can be built with fewer training images; however, a complicated labeling process is required to prepare ground-truth data for training. A data set for training a YOLOv2 model is composed of image files and a corresponding ground-truth data file named gTruth, which holds the names of all the image files together with their labeled information, such as label names and box dimensions. Therefore, YOLOv2 requires data set augmentation not only for the images but also for the gTruth data. The target products dealt with in this paper are produced in high variety and small quantities, and defects occur only infrequently. Moreover, because the production line is fixed and indoors, the only valid image augmentation is the horizontal flip. In this paper, a data set augmentation method is proposed to efficiently generate training data for YOLOv2 even in such a production situation and, consequently, to enhance the performance of defect detection and its visualization. The effectiveness is shown through experiments.
Artificial Life and Robotics 28(3): 625–631. Citations: 0
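The gTruth-aware horizontal flip described above can be sketched as follows — a minimal illustration assuming boxes are stored as [x, y, w, h] pixel rectangles with a top-left origin (the real gTruth layout may differ):

```python
import numpy as np

def hflip_sample(image, boxes):
    """Flip an image left-right and remap its ground-truth boxes.

    boxes: list of [x, y, w, h] with (x, y) the top-left corner in
    pixels. After the flip, the new left edge of a box is the mirror
    of its old right edge: x' = W - x - w.
    """
    height, width = image.shape[:2]
    flipped = image[:, ::-1].copy()
    flipped_boxes = [[width - x - w, y, w, h] for x, y, w, h in boxes]
    return flipped, flipped_boxes
```

Applying this transform to both the image files and the corresponding gTruth rows keeps labels and pixels consistent after augmentation.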
Real-time monitoring of elderly people through computer vision
IF 0.9
Artificial Life and Robotics Pub Date: 2023-07-06 DOI: 10.1007/s10015-023-00882-y
Abhijeet Ravankar, Arpit Rawankar, Ankit A. Ravankar
Abstract: In recent years, many countries, including Japan, have faced a growing old-age population and a shortage of labor. This has increased the demand for automating tasks with robots and artificial intelligence in the agriculture, production, and healthcare sectors. With a growing old-age population, increasing numbers of people are expected to be admitted to old-age homes and rehabilitation centers in the coming years, where they receive proper care and attention. In such a scenario, it will become increasingly difficult to accurately monitor each patient, which calls for automated detection of patients' activities. To this end, this paper proposes to use computer vision for automatic detection of a patient's behavior. The proposed work first detects the pose of the patient with a convolutional neural network. Next, the coordinates of the different body parts are extracted. These coordinates are fed into a decision-generation layer, which uses the relationships between the coordinates to predict the person's actions. This paper focuses on the detection of important activities such as sudden falls, sitting, eating, sleeping, exercise, and computer usage. Whereas previous works in behavior detection focused on detecting a single activity, the proposed work can detect multiple activities in real time. We verify the proposed system through experiments in a real environment with actual sensors. The experimental results show that the proposed system can accurately detect the activities of a patient in the room. Critical scenarios such as a sudden fall are detected and an alarm is raised for immediate support. Moreover, the privacy of the patient is preserved through an ID-based method in which only the detected activities are chronologically stored in the database.
Artificial Life and Robotics 28(3): 496–501. Citations: 0
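The decision-generation idea — mapping body-part coordinates to an activity label — can be illustrated with a toy rule set. The rules and thresholds here are hypothetical, not the paper's:

```python
def classify_posture(keypoints):
    """Toy decision-generation layer over 2D pose keypoints.

    keypoints: dict of body part -> (x, y) in image coordinates,
    with y growing downward. Illustrative rules only.
    """
    xs = [p[0] for p in keypoints.values()]
    ys = [p[1] for p in keypoints.values()]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    if height < 0.5 * width:
        return "fall"        # body extent is nearly horizontal
    if abs(keypoints["hip"][1] - keypoints["knee"][1]) < 0.15 * height:
        return "sitting"     # hips have dropped to knee level
    return "standing"
```

A real system would layer temporal filtering on top of such per-frame decisions before raising a fall alarm.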
Development of an origami-based robot molting structure
IF 0.9
Artificial Life and Robotics Pub Date: 2023-07-05 DOI: 10.1007/s10015-023-00884-w
Aiko Miyamoto, Mitsuharu Matsumoto
Abstract: Inspired by the molting behavior of living organisms, this paper describes a molting robot structure with a self-repair function. In past robot self-repair methods, the strength after repair was usually lower than before. To realize a robot that can repeatedly repair its exterior while maintaining its quality, the replacement exterior that becomes the new outer skin is folded like origami and stored inside the robot. During repair, the outer exterior is replaced by extracting the replacement exterior from inside the robot. A prototype of the proposed molting structure was experimentally tested and its proper operation was confirmed. In addition, a honeycomb structure was combined with a bellows structure to improve the strength of the outer skin.
Artificial Life and Robotics 28(4): 645–651. Citations: 0
Fuzzy controller for AUV robots based on machine learning and genetic algorithm
IF 0.9
Artificial Life and Robotics Pub Date: 2023-07-03 DOI: 10.1007/s10015-023-00881-z
Toya Yamada, Hiroshi Kinjo, Kunihiko Nakazono, Naoki Oshiro, Eiho Uezato
Abstract: Marine robots play a crucial role in exploring and investigating underwater and seafloor environments, organisms, structures, and resources. In this study, we developed a control system for a small marine robot and conducted simulation experiments to evaluate its performance. The control system is based on fuzzy control, which resembles human control by defining rules, quantifying them through membership functions, and determining the appropriate manipulation level. Moreover, a genetic algorithm was employed to optimize the coefficients of the function used by the proposed controller in the defuzzification process to establish the operating parameters. In simulations with this control system, the marine robot successfully reached a desired position within a specified time frame.
Artificial Life and Robotics 28(3): 632–641. Citations: 0
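The combination of a fuzzy rule base with GA-tuned output coefficients can be sketched as follows. This is a toy 1D example with made-up membership functions and plant dynamics, not the paper's AUV model:

```python
import random

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def controller(error, gains):
    """Toy fuzzy controller: three rules (negative / zero / positive error),
    defuzzified as a membership-weighted sum of the GA-tuned coefficients."""
    mu = [tri(error, -2, -1, 0), tri(error, -1, 0, 1), tri(error, 0, 1, 2)]
    total = sum(mu) or 1.0
    return sum(m * g for m, g in zip(mu, gains)) / total

def tracking_cost(gains, x0=1.0, steps=30):
    """Accumulated |error| when the controller drives a toy integrator
    plant (x decreases by 0.5*u each step) toward the origin."""
    x, cost = x0, 0.0
    for _ in range(steps):
        x -= 0.5 * controller(x, gains)
        cost += abs(x)
    return cost

def evolve(fitness, pop_size=20, generations=40):
    """Minimal real-coded GA: elitist selection, averaging crossover,
    Gaussian mutation over the three output coefficients."""
    pop = [[random.uniform(-2, 2) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]
        children = [
            [(x + y) / 2 + random.gauss(0, 0.1)
             for x, y in zip(*random.sample(elite, 2))]
            for _ in range(pop_size - len(elite))
        ]
        pop = elite + children
    return min(pop, key=fitness)
```

Running `evolve(tracking_cost)` returns coefficients that drive the toy plant to the origin far more cheaply than the untuned (all-zero) controller.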
Pain scores estimation using surgical pleth index and long short-term memory neural networks
IF 0.9
Artificial Life and Robotics Pub Date: 2023-06-24 DOI: 10.1007/s10015-023-00880-0
Omar M. T. Abdel Deen, Wei-Horng Jean, Shou-Zen Fan, Maysam F. Abbod, Jiann-Shing Shieh
Abstract: Pain monitoring is crucial for providing proper healthcare to patients under general anesthesia (GA). In this study, the photoplethysmographic waveform amplitude (PPGA), heartbeat interval (HBI), and surgical pleth index (SPI) are used to predict pain scores during GA, with expert medical doctors' assessments (EMDAs) as the reference. The time-series features are fed into long short-term memory (LSTM) models with different hyperparameters, and performance is evaluated using the mean absolute error (MAE), standard deviation (SD), and correlation (Corr). Three models are compared. The first yielded an overall MAE, SD, and Corr of 6.9271 ± 1.913, 9.4635 ± 2.456, and 0.5955 ± 0.069, respectively; the second 3.418 ± 0.715, 3.847 ± 0.557, and 0.634 ± 0.068; and the third 3.4009 ± 0.648, 3.909 ± 0.548, and 0.6197 ± 0.0625. The second model was selected as the best on this basis, and 5-fold cross-validation was applied for verification, giving similar statistics: 4.722 ± 0.742, 3.922 ± 0.672, and 0.597 ± 0.053 for MAE, SD, and Corr, respectively. In conclusion, the SPI effectively predicted pain scores based on the EMDA: evaluation performance was good and the trend of the EMDA was replicated, which can be interpreted as a relationship between the SPI and the EMDA. However, further improvements in data consistency are needed to validate the results and obtain better performance, and additional signal features could be considered alongside the SPI.
Artificial Life and Robotics 28(3): 600–608. Citations: 0
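The three evaluation metrics are standard and can be reproduced in a few lines. This is a generic sketch, not the authors' code; SD is read here as the standard deviation of the absolute errors, one plausible interpretation of the paper's "value ± value" reporting:

```python
import numpy as np

def evaluate(pred, true):
    """Return MAE, SD, and Pearson correlation for a set of predictions.

    MAE and SD are the mean and standard deviation of the absolute
    errors; Corr is the Pearson correlation between the two series.
    """
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    err = np.abs(pred - true)
    return err.mean(), err.std(), np.corrcoef(pred, true)[0, 1]
```

Such a helper makes it straightforward to compare models and cross-validation folds on identical footing.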
MORI-A CPS: 3D printed soft actuators with 4D assembly simulation
IF 0.9
Artificial Life and Robotics Pub Date: 2023-06-17 DOI: 10.1007/s10015-023-00878-8
Shoma Abe, Jun Ogawa, Yosuke Watanabe, MD Nahin Islam Shiblee, Masaru Kawakami, Hidemitsu Furukawa
Abstract: Soft modular robotics combines soft materials with modular mechanisms. We are developing a vacuum-driven actuator module, MORI-A, which combines a 3D-printed flexible parallel cross structure with a cube-shaped hollow silicone body. The MORI-A module has five deformation modes: no deformation, uniform contraction, uniaxial contraction, flexion, and shear. By combining these modules, soft robots with a variety of deformation capabilities can be constructed. However, assembling MORI-A requires predicting the deformation from the posture and mode of the modules, which makes assembly difficult. To overcome this problem, this study constructs a system called "MORI-A CPS," which predicts the motion of a soft robot composed of MORI-A modules from cubes arranged in a virtual space. This paper evaluates how well the motion of virtual MORI-A modules, defined as combinations of swelling and shrinking voxels, approximates real-world motion, and then shows that the deformations of virtual soft robots constructed via MORI-A CPS are similar to those of real robots.
Artificial Life and Robotics 28(3): 609–617. Citations: 0
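The idea of predicting a robot's shape from the arrangement and modes of its modules can be illustrated with a deliberately tiny 1D toy. The contraction ratio is hypothetical, and the real CPS handles uniaxial contraction, flexion, and shear in 3D:

```python
def stack_height(modes, base=1.0, contraction=0.4):
    """Predict the height of a vertical stack of modules from their modes.

    Only two of the five modes are modeled here: 'none' keeps a module
    at its base height, 'contract' shortens it by a hypothetical
    contraction ratio. Summing per-module heights mimics, in 1D, how
    the CPS composes per-voxel deformations into a whole-robot shape.
    """
    return sum(base * (1 - contraction) if m == "contract" else base
               for m in modes)
```

Extending this composition to 3D voxels with per-mode deformation fields is essentially what the virtual-space prediction does.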
Route planning algorithm based on dynamic programming for electric vehicles delivering electric power to a region isolated from power grid
IF 0.9
Artificial Life and Robotics Pub Date: 2023-06-15 DOI: 10.1007/s10015-023-00879-7
Yu Zhang, Wenjing Cao, Hanqing Zhao, Shuang Gao
Abstract: In this study, we considered the problem of delivering electric power with electric vehicles (EVs) to multiple households located in a remote region or a region isolated by a disaster. Two optimization problems are formulated and compared; they yield the optimal routes that minimize, respectively, the overall traveling distance of the EVs and their overall electric power consumption. We assume that the number of households requiring power delivery and the number of EVs used are given constants. The households are divided into groups, each group is assigned to one EV, and each EV must return to its initial position after delivering power to all the households in its group. In the first (benchmark) method, the route that minimizes the overall traveling distance of all the EVs is determined by dynamic programming. However, owing to traffic congestion on the roads, the path that minimizes overall traveling distance does not necessarily minimize overall electric power consumption. The second, proposed method therefore minimizes the overall electric power consumption directly while accounting for traffic congestion: the power consumed during the travel of each EV is calculated as a function of the length of each road section and the nominal average vehicle speed on that section. A case study in which four EVs deliver electric power to eight households is conducted to validate the proposed method, and its results are compared with those of the distance-minimizing benchmark. The optimal solution of the proposed method reduces the overall electric power consumption of all the EVs by 236.5 kWh (9.4%) compared with the benchmark, so the proposed method is preferable for reducing the overall electric power consumption of EVs.
Artificial Life and Robotics 28(3): 583–590. Citations: 0
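Each EV's routing subproblem is a small round trip over its assigned households, for which Held–Karp is a standard dynamic-programming solution. The sketch below is illustrative, not the paper's formulation: the cost matrix may hold distances (benchmark) or energies derived from road length and nominal average speed (proposed method):

```python
from itertools import combinations

def held_karp(cost):
    """Exact minimum round-trip cost for one EV: start at node 0 (the
    depot), visit every household exactly once, and return to the depot.

    cost[i][j] can be a distance or an energy, e.g. road-section
    length times a consumption rate for the section's nominal speed.
    """
    n = len(cost)
    # dp[(visited, j)]: cheapest way to leave 0, visit `visited`, end at j
    dp = {(frozenset([j]), j): cost[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            s = frozenset(subset)
            for j in subset:
                dp[(s, j)] = min(
                    dp[(s - {j}, k)] + cost[k][j] for k in subset if k != j
                )
    full = frozenset(range(1, n))
    return min(dp[(full, j)] + cost[j][0] for j in range(1, n))
```

With two households per group (as in four EVs serving eight households), the subproblems are tiny; Held–Karp stays exact while scaling to somewhat larger groups.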
Correction to: Detecting deception using machine learning with facial expressions and pulse rate
IF 0.9
Artificial Life and Robotics Pub Date: 2023-06-13 DOI: 10.1007/s10015-023-00877-9
Kento Tsuchiya, Ryo Hatano, Hiroyuki Nishiyama
Artificial Life and Robotics 28(3): 643. Citations: 0
Engineering a data processing pipeline for an ultra-lightweight lensless fluorescence imaging device with neuronal-cluster resolution
IF 0.9
Artificial Life and Robotics Pub Date: 2023-06-12 DOI: 10.1007/s10015-023-00875-x
Zihao Yu, Mark Christian S. G. Guinto, Brian Godwin S. Lim, Renzo Roel P. Tan, Junichiro Yoshimoto, Kazushi Ikeda, Yasumi Ohta, Jun Ohta
Abstract: In working toward the goal of uncovering the inner workings of the brain, various imaging techniques have been the subject of research. Among the prominent technologies are devices that rely on the ability of transgenic animals to signal neuronal activity through fluorescent indicators. This paper investigates the utility of an original ultra-lightweight needle-type device for fluorescence neuroimaging. A generalizable data processing pipeline is proposed to compensate for the reduced image resolution of the lensless device. In particular, a modular solution centered on baseline-induced noise reduction and principal component analysis is designed as a stand-in for physical lenses in the aggregation and quasi-reconstruction of neuronal activity. Data-driven evidence backing the identification of regions of interest is then demonstrated, establishing the relative superiority of the method over neuroscience conventions within comparable contexts.
Artificial Life and Robotics 28(3): 483–495. Citations: 0
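The principal-component stage of such a pipeline can be sketched with an SVD over the (time, pixels) movie. This is a generic illustration, not the paper's implementation:

```python
import numpy as np

def decompose(frames, k=2):
    """PCA over a (time, pixels) fluorescence movie via SVD.

    Pixels whose intensities co-vary load onto the same component, so
    each spatial map acts as a rough region-of-interest mask — a
    computational stand-in for the aggregation a physical lens performs.
    Returns (temporal traces, spatial maps) for the top k components.
    """
    centered = frames - frames.mean(axis=0)      # per-pixel baseline removal
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    traces = u[:, :k] * s[:k]                    # temporal activity per component
    maps = vt[:k]                                # spatial weighting over pixels
    return traces, maps
```

In practice, the noise-reduction step would precede this decomposition so that baseline fluctuations do not dominate the leading components.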