Artificial Life and Robotics: Latest Articles

Integration of multiple dense point clouds based on estimated parameters in photogrammetry with QR code for reducing computation time
IF 0.8
Artificial Life and Robotics Pub Date : 2024-09-20 DOI: 10.1007/s10015-024-00966-3
Keita Nakamura, Keita Baba, Yutaka Watanobe, Toshihide Hanari, Taku Matsumoto, Takashi Imabuchi, Kuniaki Kawabata
This paper describes a method for integrating multiple dense point clouds via a shared landmark to generate a single, real-scale result in photogrammetry. Dense point clouds reconstructed by photogrammetry are difficult to integrate because each reconstruction has its own arbitrary scale. To solve this problem, the study places QR codes of known size, serving as shared landmarks, in the target environment and divides the environment based on the positions of the placed QR codes. Photogrammetry is then performed on each divided environment to obtain a dense point cloud for each. Finally, the authors propose scaling each dense point cloud according to the known QR code size and aligning the clouds into a single dense point cloud by partial-to-partial registration. To verify the method's effectiveness, the paper compares the results of applying all images to a single photogrammetry run with those of the proposed method in terms of accuracy and computation time, using both ideal images generated by simulation and images captured in real environments. The paper clarifies the relationship between the number of divided environments, the accuracy of the reconstruction result, and the computation time required.
Artificial Life and Robotics, 29(4): 546-556. Open access: https://link.springer.com/content/pdf/10.1007/s10015-024-00966-3.pdf
Citations: 0
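The scaling step described above (measuring the QR code's edge in each cloud's arbitrary units, then rescaling so it matches the known physical size) can be sketched in a few lines of numpy. The function name, the 0.20 m edge length, and the assumption that the four QR corners have already been located in the cloud are illustrative, not the paper's implementation:

```python
import numpy as np

def scale_to_real_size(cloud, qr_corners, qr_edge_m=0.20):
    """Rescale an arbitrary-scale photogrammetry cloud to metres, using a
    QR code of known physical edge length as the shared landmark.
    `qr_corners` holds the 4 detected corner points in cloud coordinates."""
    # edge lengths of the QR quad in the cloud's arbitrary units
    edges = np.linalg.norm(np.roll(qr_corners, -1, axis=0) - qr_corners, axis=1)
    scale = qr_edge_m / edges.mean()
    return cloud * scale

# toy example: a cloud in which the QR code measures 2.0 units per edge
qr = np.array([[0, 0, 0], [2, 0, 0], [2, 2, 0], [0, 2, 0]], float)
cloud = np.array([[4.0, 0.0, 0.0]])
scaled = scale_to_real_size(cloud, qr, qr_edge_m=0.20)  # 0.1 m per unit
```

After each divided environment's cloud is brought to real scale this way, the clouds share a common metric and can be aligned by partial-to-partial registration.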
Obstacle feature point detection method for real-time processing by monocular camera and line laser
IF 0.8
Artificial Life and Robotics Pub Date : 2024-09-19 DOI: 10.1007/s10015-024-00965-4
Hayato Mitsuhashi, Taku Itami
In this study, we propose a method for detecting feature points of obstacles in real time using a monocular camera and a line laser. Specifically, we propose an algorithm that detects the laser beam projected onto obstacles by binarization, converts pixel coordinates into distance by triangulation using a laser beam obliquely incident on the ground, and estimates the placement angle of the obstacle in real time. To the authors' knowledge, no previous method detects the placement angle of obstacles using a monocular camera and a laser. Furthermore, the proposed method calculates distance not from the camera itself but from the position obtained by moving the camera horizontally, parallel to the obstacle in front of it, so the placement angle is detected independently of where the obstacle is placed. As a result, it was confirmed that the placement angles of obstacles could be detected with a maximum error of 6° across six placement positions. We therefore succeeded in automatically detecting the placement angle of obstacles in real time, independent of the illumination of the measurement environment and the reflectance of the obstacle.
Artificial Life and Robotics, 29(4): 438-448. Open access: https://link.springer.com/content/pdf/10.1007/s10015-024-00965-4.pdf
Citations: 0
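The pixel-to-distance conversion relies on standard camera/laser triangulation. A minimal sketch under an assumed geometry (camera looking along +z, laser offset by a baseline along +x and tilted toward the optical axis; none of these parameters come from the paper's calibration):

```python
import math

def triangulate_depth(x_px, f_px, baseline_m, laser_angle_rad):
    """Depth of a laser-lit point for a camera/line-laser pair.
    The laser beam satisfies X = baseline - Z*tan(angle); the pinhole
    camera gives x_px = f_px * X / Z. Solving for Z yields the depth."""
    return baseline_m * f_px / (x_px + f_px * math.tan(laser_angle_rad))

# with a 0.1 m baseline, 500 px focal length, untilted beam:
# the stripe at pixel 50 corresponds to a point 1 m away
depth = triangulate_depth(50.0, 500.0, 0.1, 0.0)
```

With depths recovered along the laser stripe, the obstacle's placement angle follows from fitting a line to the lit points, which is the real-time estimation step the abstract describes.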
Research on English–Chinese machine translation shift based on word vector similarity
IF 0.8
Artificial Life and Robotics Pub Date : 2024-09-16 DOI: 10.1007/s10015-024-00964-5
Qingqing Ma
In English–Chinese machine translation shift, the handling of out-of-vocabulary (OOV) words has a large impact on translation quality. Targeting OOV words, this paper proposes a method based on word vector similarity: it computes word vector similarity with the Skip-gram model, replaces each OOV word in the source sentence with its most similar in-vocabulary word, and trains a Transformer model on the replaced corpus. When the original corpus was used for training, the bilingual evaluation understudy-4 (BLEU-4) scores of the Transformer model on NIST2006 and NIST2008 were 37.29 and 30.73, respectively. When word vector similarity was used for preprocessing and low-frequency OOV words were retained, the BLEU-4 scores improved to 37.36 and 30.78, respectively. Moreover, retaining low-frequency OOV words yielded better translation quality than removing them. The experimental results indicate that the English–Chinese machine translation shift method based on word vector similarity is reliable and applicable in practice.
Artificial Life and Robotics, 29(4): 585-589.
Citations: 0
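The core preprocessing step (replacing each OOV token with its nearest in-vocabulary neighbor by cosine similarity of Skip-gram-style vectors) can be sketched as follows; the data layout, names, and the `<unk>` fallback are assumptions for illustration:

```python
import numpy as np

def replace_oov(tokens, vocab, oov_vectors, vocab_vectors):
    """Replace out-of-vocabulary tokens with the most similar in-vocabulary
    word by cosine similarity; tokens with no vector fall back to <unk>."""
    out = []
    for t in tokens:
        if t in vocab:
            out.append(t)                      # in-vocabulary: keep as-is
        elif t in oov_vectors:
            v = oov_vectors[t]
            sims = vocab_vectors @ v / (
                np.linalg.norm(vocab_vectors, axis=1) * np.linalg.norm(v))
            out.append(vocab[int(np.argmax(sims))])
        else:
            out.append("<unk>")                # no vector available
    return out
```

The replaced corpus, rather than the original one, is then what the Transformer model is trained on.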
Management of power equipment inspection informationization through intelligent unmanned aerial vehicles
IF 0.8
Artificial Life and Robotics Pub Date : 2024-09-13 DOI: 10.1007/s10015-024-00963-6
Weizhi Lu, Qiang Li, Weijian Zhang, Lin Mei, Di Cai, Zepeng Li
With the adoption of intelligent unmanned aerial vehicles (UAVs) for power equipment inspection, managing the resulting inspection data through information technology is increasingly crucial. This paper collected insulator images, including standard and self-exploded insulators, during inspections by intelligent UAVs. An optimized you only look once version 5 (YOLOv5) model was then developed by incorporating the convolutional block attention module and the efficient intersection-over-union (EIoU) loss function, and its detection performance was analyzed. Among the model variants, YOLOv5s had the smallest size and the highest detection speed. The optimized YOLOv5 model showed significant improvements in both speed and accuracy for insulator detection, surpassing the other methods with a mean average precision of 93.81% at 145.64 frames per second. These results demonstrate the reliability and practical applicability of the improved YOLOv5 model.
Artificial Life and Robotics, 29(4): 579-584.
Citations: 0
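The EIoU loss mentioned above augments 1 - IoU with penalties on center distance and on width/height differences, each normalized by the smallest enclosing box. A minimal single-pair sketch of that general form (plain Python, not the authors' exact formulation or a batched implementation):

```python
def eiou_loss(box_a, box_b):
    """EIoU-style loss for two axis-aligned boxes given as (x1, y1, x2, y2).
    Assumes non-degenerate boxes (positive width and height)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection-over-union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)
    # smallest enclosing box, used to normalize the penalties
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    center = ((ax1 + ax2) / 2 - (bx1 + bx2) / 2) ** 2 \
           + ((ay1 + ay2) / 2 - (by1 + by2) / 2) ** 2
    dw = ((ax2 - ax1) - (bx2 - bx1)) ** 2
    dh = ((ay2 - ay1) - (by2 - by1)) ** 2
    return 1 - iou + center / (cw ** 2 + ch ** 2) + dw / cw ** 2 + dh / ch ** 2
```

Identical boxes give a loss of 0; the extra terms give non-overlapping or badly shaped predictions a gradient signal that plain IoU lacks.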
Regression analysis of facial thermal images for chronic stress estimation
IF 0.8
Artificial Life and Robotics Pub Date : 2024-09-11 DOI: 10.1007/s10015-024-00962-7
Miyu Kimura, Masahito Takano, Kent Nagumo, Akio Nozawa
In recent years, the focus on mental health care in Japan has increased, and more companies are addressing employee mental well-being. Chronic stress, stemming from sources such as work and interpersonal relationships, can severely affect happiness and health, creating demand for technology capable of measuring chronic stress on a daily basis. In this study, we explore facial thermal images (FTI) as a method for daily measurement of chronic stress. We conducted experiments over a 3-month period with healthy adult participants, routinely collecting chronic-stress data and capturing FTI. Independent component analysis (ICA) was applied to the FTI data to extract relevant features, and psychological questionnaires were administered to assess chronic stress levels. The questionnaire scores were aggregated by principal component analysis (PCA) into a single chronic-stress indicator, and multiple regression analysis (MRA) was used to model the relationship between the extracted FTI components and the chronic-stress scores. The results indicate a moderate to strong correlation between predicted and actual chronic-stress scores, suggesting the potential utility of FTI for estimating stress levels. Features around the upper lip and on the right half of the face showed significant associations with chronic stress. This study supports the feasibility of FTI as a non-invasive method for daily monitoring of chronic stress, although limitations such as variation in stress levels among participants and questionnaire administration frequency should be considered in future research.
Artificial Life and Robotics, 29(4): 510-518.
Citations: 0
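The aggregation-and-regression pipeline (PCA to collapse several questionnaire scores into one stress indicator, then ordinary least squares against image-derived components) can be sketched with numpy. Function names, shapes, and the use of the first principal component only are assumptions about the general approach, not the paper's exact procedure:

```python
import numpy as np

def first_pc_scores(X):
    """Collapse an (n_subjects, n_questionnaires) score matrix into one
    indicator per subject: project standardized scores onto the first
    principal axis obtained from an SVD."""
    Xc = (X - X.mean(0)) / X.std(0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[0]

def fit_mra(features, y):
    """Ordinary least-squares multiple regression of the stress indicator
    on extracted image components, with an intercept column."""
    A = np.c_[np.ones(len(features)), features]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef
```

In the study's terms, `features` would hold the ICA components extracted from the facial thermal images and `y` the PCA-aggregated questionnaire indicator.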
Prediction of COVID-19 cases using SIR and AR models: Tokyo-specific and nationwide application
IF 0.8
Artificial Life and Robotics Pub Date : 2024-09-02 DOI: 10.1007/s10015-024-00959-2
Tatsunori Seki, Tomoaki Sakurai, Satoshi Miyata, Keisuke Chujo, Toshiki Murata, Hiroyasu Inoue, Nobuyasu Ito
For fast-spreading infectious diseases such as COVID-19, the SIR model alone may not represent the number of infections because of distribution shifts. In this study, we use simulations based on the SIR model to verify the accuracy of predicting new positive cases while accounting for such shifts. Instead of modeling the overall number of new positive cases directly with the SIR model, the number of new positive cases in a specific region is simulated, the expansion ratio to the overall number is expressed with an AR model, and the two are multiplied to predict the overall total. In addition to the standard SIR parameters, we introduce parameters related to social variables, and all simulation parameters are estimated daily from data using approximate Bayesian computation (ABC). With this method, the mean absolute percentage error in predicting the number of positive cases for the peak of the eighth wave (2022/12/22-12/28) across Japan was 62.2% using data up to two months before the peak and 6.2% using data up to one month before. Our SIR-based simulations reproduced the number of new positive cases across Japan and produced reasonable predictions of the eighth-wave peak.
Artificial Life and Robotics, 29(4): 449-458. Open access: https://link.springer.com/content/pdf/10.1007/s10015-024-00959-2.pdf
Citations: 0
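The division of labor described above (an SIR simulation for a region's new positives, multiplied by an AR-modeled expansion ratio to reach the nationwide total) can be illustrated with a toy discrete-time version. The parameter values and the noise-free AR(1) ratio are assumptions for illustration, not the paper's ABC-estimated model:

```python
def simulate_sir(s, i, r, beta, gamma, steps):
    """Discrete-time SIR over population fractions; returns the number of
    new infections (new positive cases) at each step."""
    history = []
    for _ in range(steps):
        new_inf = beta * s * i       # S -> I transitions
        new_rec = gamma * i          # I -> R transitions
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append(new_inf)
    return history

def scale_with_ar(regional_cases, ratio0, phi):
    """Expand regional predictions to an overall total using an AR(1)
    region-to-total ratio (noise term omitted for clarity)."""
    out, ratio = [], ratio0
    for c in regional_cases:
        out.append(c * ratio)
        ratio = phi * ratio
    return out
```

Letting the ratio evolve over time is what absorbs the distribution shift that a single nationwide SIR fit cannot capture.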
A cognitive strategy for service robots in recognizing emotional attribute of objects
IF 0.8
Artificial Life and Robotics Pub Date : 2024-08-27 DOI: 10.1007/s10015-024-00960-9
Hao Wu, Jiaxuan Du, Qin Cheng, Qing Ma
As service robots advance, discovering the emotional needs of users is becoming increasingly important. Unlike work that focuses solely on human facial expression recognition or image sentiment recognition, we propose that objects in the environment also affect human emotions; candy, for example, can make people happy. Studying the impact of objects on human emotions is therefore crucial for service robots that aim to regulate human emotions and provide more satisfying services. In this work, we first define the emotional attribute of objects: the ability to improve people's moods. We then propose a strategy for recognizing this attribute. To this end, we construct the H–S object emotional attribute image dataset, which contains various objects labeled as pleasant or unpleasant. We also propose the YOLOv3-SESA object detection model: by incorporating the SESA attention module into YOLOv3, the model attends more strongly to target objects and achieves higher recognition accuracy for small objects in the environment. We obtain the correlation frequency between objects and emotion labels and convert it into an emotional-attribute probability value; objects whose value exceeds a predefined threshold are defined as having an emotional attribute. Our experiments validate the effectiveness of the approach, yielding a list of common objects that can please users. By leveraging knowledge of object emotional attributes, service robots can proactively offer emotionally appealing objects to humans, providing psychological comfort when they are depressed.
Artificial Life and Robotics, 29(4): 536-545.
Citations: 0
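The frequency-to-probability step (counting how often each detected object co-occurs with a pleasant label, then thresholding) reduces to a few lines. The 0.6 threshold and the (object, emotion) pair format are illustrative assumptions:

```python
from collections import Counter

def emotional_attribute(detections, threshold=0.6):
    """detections: iterable of (object_name, emotion) pairs, with emotion
    in {"pleasant", "unpleasant"}. Returns the set of objects whose
    pleasant-label frequency exceeds the threshold, i.e. objects judged
    to have the emotional attribute."""
    totals, pleasant = Counter(), Counter()
    for obj, emo in detections:
        totals[obj] += 1
        if emo == "pleasant":
            pleasant[obj] += 1
    return {o for o in totals if pleasant[o] / totals[o] > threshold}
```

In the paper's pipeline, the pairs would come from YOLOv3-SESA detections joined with the image's emotion label; the resulting set is the robot's list of mood-improving objects.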
Effects of invisible body and optic flow on experience of users voluntarily walking in a VR environment
IF 0.8
Artificial Life and Robotics Pub Date : 2024-08-14 DOI: 10.1007/s10015-024-00958-3
Asiri Weerashinghe, Hajime Kobayashi, Shusaku Nomura, Moto Kamiura, Tatsuji Takahashi, Yuta Nishiyama
Studies have demonstrated that multi-modal virtual reality (VR) systems can enhance the realism of virtual walking, but few studies have explored how body awareness is altered by the visual presentation of a virtual body and optic flow during locomotion in VR. This study investigated the impact of an invisible body and optic flow on the experience of users voluntarily walking in a camera-image VR environment. Participants wearing a head-mounted display performed six-step walking at their own timing. Three experimental conditions were run on three different days: visible body with optic flow (baseline), invisible body with optic flow, and invisible body without flow. We found that losing the visual body per se decreased the feeling of being there now. However, providing continuous optic flow maintained virtual presence equivalent to the baseline in terms of immersion and natural walking, in contrast to discontinuous flow. We discuss these results in relation to body awareness.
Artificial Life and Robotics, 29(4): 494-500. Open access: https://link.springer.com/content/pdf/10.1007/s10015-024-00958-3.pdf
Citations: 0
Revealing inputs causing web API performance latency using response-time-guided genetic algorithm fuzzing
IF 0.8
Artificial Life and Robotics Pub Date : 2024-08-02 DOI: 10.1007/s10015-024-00957-4
Ying-Tzu Huang, Shin-Jie Lee
Web APIs are integral to modern web development, enabling service integration and automation. Ensuring their performance as well as their functionality is critical, yet performance testing remains less explored because performance bugs are hard to detect. This paper presents a response-time-guided genetic algorithm (GA) fuzzing approach to uncover web API performance latency in a black-box setting. Unlike traditional random input generation, our method uses a GA to refine inputs through crossover and mutation, guided by a response-time-based fitness function. We propose two seed generation methods: pairwise combinatorial testing using Microsoft's Pairwise Independent Combinatorial Testing (PICT) tool, and randomly paired combinations. Experiments on five real-world web APIs show that our approach significantly outperforms classic random fuzzing, identifying inputs with response times 1.5 to 26.3 times longer. In addition, PICT-generated seeds outperformed randomly paired combinations on 2 of the 5 APIs. These findings highlight the potential of GA-based fuzzing to reveal web API performance latency and motivate further research in this area.
Artificial Life and Robotics, 29(4): 459-472.
Citations: 0
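The response-time-guided GA loop can be sketched as follows. Here `evaluate` stands in for actually timing an HTTP request against the API, inputs are flat parameter dicts, and the selection, crossover, and mutation operators are plausible stand-ins rather than the paper's exact design:

```python
import random

def ga_fuzz(evaluate, seeds, generations=20, pop_size=10, mut_rate=0.2):
    """Evolve API inputs toward longer response times.
    evaluate(params) -> seconds; slower responses are fitter."""
    pop = list(seeds)
    for _ in range(generations):
        # selection: keep the slowest half as parents
        scored = sorted(pop, key=evaluate, reverse=True)
        parents = scored[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            # uniform crossover over parameter keys
            child = {k: random.choice([a[k], b[k]]) for k in a}
            if random.random() < mut_rate:
                # mutation: reset one parameter to a value from the seed pool
                k = random.choice(list(child))
                child[k] = random.choice([s[k] for s in seeds])
            children.append(child)
        pop = parents + children
    return max(pop, key=evaluate)
```

The seed population `seeds` is where the two proposed generation methods plug in: PICT-derived pairwise combinations or randomly paired ones.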
Inhibition of convergence of mimicry rings by low learning abilities of predators
IF 0.8
Artificial Life and Robotics Pub Date : 2024-07-20 DOI: 10.1007/s10015-024-00956-5
Takashi Sato, Haruto Takaesu
Because Müllerian mimicry becomes more effective as the number of participating species grows, multiple mimicry rings tend to gradually converge into one large mimicry ring, which then expands. In nature, however, mimicry rings often do not converge into a single ring, and various factors are believed to drive their diversification. In this study, we conducted an evolutionary simulation experiment using a multi-agent system (MAS) based on the hypothesis that predators with low learning ability cannot learn the patterns of toxic prey and continue to prey on species exhibiting Müllerian mimicry; our aim was to investigate this inhibitory factor in the convergence of mimicry rings. The MAS comprises two agent types: PREY-agents and PREDATOR-agents. At each step, each PREDATOR-agent randomly encounters a PREY-agent and decides whether to prey on it based on its pattern, using the PREDATOR-agent's own feed-forward neural network (FFNN); the FFNN is also trained on the relationship between the PREY-agent's pattern and the presence or absence of venom. Each PREY-agent determines its own fitness from whether it was preyed upon, evolves genetically based on that fitness, and decodes its genes into a pattern generated with a modified L-system. The evolutionary simulations showed that convergence of the mimicry ring is inhibited when the number of hidden-layer neurons in the PREDATOR-agent's FFNN is small, that is, when the PREDATOR-agent's learning ability is low.
Artificial Life and Robotics, 29(3): 404-409.
Citations: 0
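The predator's decision model is a small feed-forward network whose hidden-layer width stands in for learning ability. A minimal numpy sketch of such a pattern-to-decision network; the architecture, initialization, and training loop are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyFFNN:
    """Pattern vector in, probability of 'prey on it' out.
    n_hidden is the capacity knob the study varies."""
    def __init__(self, n_in, n_hidden):
        self.w1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.w2 = rng.normal(0.0, 0.5, (n_hidden, 1))

    def forward(self, x):
        h = np.tanh(x @ self.w1)
        return 1.0 / (1.0 + np.exp(-(h @ self.w2)))

    def train(self, x, y, lr=0.5, epochs=800):
        """Gradient descent on binary cross-entropy: learn which
        patterns go with venom (y=1 means 'venomous, avoid')."""
        for _ in range(epochs):
            h = np.tanh(x @ self.w1)
            p = 1.0 / (1.0 + np.exp(-(h @ self.w2)))
            g = (p - y) / len(x)          # dL/dlogits for sigmoid + BCE
            grad_w2 = h.T @ g
            grad_w1 = x.T @ ((g @ self.w2.T) * (1.0 - h ** 2))
            self.w2 -= lr * grad_w2
            self.w1 -= lr * grad_w1
```

With `n_hidden` small, the network underfits the prey patterns, which is the mechanism the paper identifies as inhibiting mimicry-ring convergence.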