{"title":"Joint Semantic-geometric Mapping of Unstructured Environment for Autonomous Mobile Robotic Sprayers","authors":"Xubin Lin, Zerong Su, Zhihan Zhu, Pengfei Yuan, Haifei Zhu, Xuefeng Zhou","doi":"10.1002/rob.22553","DOIUrl":"https://doi.org/10.1002/rob.22553","url":null,"abstract":"<div>\u0000 \u0000 <p>Mobile robotic sprayers are expected to be employed in outdoor insecticide applications for mosquito control, epidemic prevention, and disinfection. To achieve this, a comprehensive 3D environmental model integrating both semantic and geometric information is indispensable for supporting mobile robotic sprayers in autonomous navigation, task planning, and adaptive spraying control. However, outdoor environments for insecticide spraying, such as public parks and gardens, are typically unstructured, dynamic and prone to sensor degradation, posing significant challenges to both LiDAR-only and camera-only perception and mapping approaches. In this paper, a visual-LiDAR fusion based joint semantic-geometric mapping framework is proposed, featuring a novel 2D-3D semantic perception module that is robust against complex segmentation conditions and sensor extrinsic drift. To this end, a Multi-scale Vague Boundary Augmented Dual Attention Network (MDANet), incorporating multi-scale 3D attention modules and vague boundary augmented attention modules, is proposed to tackle the image segmentation task involving dense vegetation with overlapping foliage and ambiguous boundaries. Additionally, a seed growth-based visual-LiDAR semantic data association method is proposed to resolve the issue of inaccurate pixel-to-point association in the presence of extrinsic drift, yielding more precise 3D semantic perception results. Furthermore, a semantic-aware SLAM system accounting for dynamic interference and pose estimation drift is presented. Extensive experimental evaluations on public datasets and self-recorded data are conducted. The segmentation results show that MDANet achieves a mean pixel accuracy (mPA) of 90.17%, outperforming competing methods in the vegetation-involved segmentation task. The proposed visual-LiDAR semantic data association method can tolerate a translational disturbance of up to 40 mm and a rotational disturbance of 0.18 rad without compromising 3D segmentation accuracy. Moreover, the evaluation of trajectory error, alongside ablation studies, validates the effectiveness and feasibility of the proposed mapping framework.</p>\u0000 </div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 6","pages":"2952-2967"},"PeriodicalIF":5.2,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144881214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and Development of an Intelligent Collision Avoidance Platform for a Teleoperated Dual Robotic Arm","authors":"Samrat Gautam, Ming Li, Daniel Roozbahani, Marjan Alizadeh, Heikki Handroos","doi":"10.1002/rob.22565","DOIUrl":"https://doi.org/10.1002/rob.22565","url":null,"abstract":"<p>The aim of this study was to develop collision avoidance control for a teleoperated dual robotic arm to minimize collisions in human-robot collaboration. Generally, in the absence of work cell mapping, collision checking in a teleoperated mobile robot is performed based on motor current or torque. However, this method cannot predict collisions that may occur during robot maneuvering. To overcome this problem, collision checking based on the description file model of the robot, and collision avoidance based on proximity sensors were designed for the robot's neighboring link and end-effector, respectively. The UR10 robotic arm was modeled in MATLAB. The reachable point in the workspace of the robotic arm and the Geomagic Touch haptic device was calculated using the Adaptive Neuro Fuzzy Inference System (ANFIS) method. Two sets of experiments with different scenarios were carried out to detect and avoid collision in the neighboring link and end-effector. The test results confirmed the effectiveness of the developed collision avoidance control performance in eliminating the risk of collision in the working environment of the teleoperated mobile robot. The approach presented in this study can be applied to almost any similar commercial robot as an independent or system-integrated package.x</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 7","pages":"3047-3062"},"PeriodicalIF":5.2,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22565","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145129204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Geometry-Aware 3D Point Cloud Learning for Precise Cutting-Point Detection in Unstructured Field Environments","authors":"Hongjun Wang, Gengming Zhang, Hao Cao, Kewei Hu, Quanchao Wang, Yuqin Deng, Junfeng Gao, Yunchao Tang","doi":"10.1002/rob.22567","DOIUrl":"https://doi.org/10.1002/rob.22567","url":null,"abstract":"<div>\u0000 \u0000 <p>In automated lychee harvesting, the complex geometric structures of branches, leaves, and clustered fruits pose significant challenges for robotic cutting point detection, where even minor positioning errors can lead to harvest damage and operational failures. This study introduces the Fcaf3d-lychee network model, specifically designed for precise lychee picking point localization. The data acquisition system utilizes Microsoft's Azure Kinect DK time-of-flight camera to capture point cloud data through multi-view stitching, enabling comprehensive spatial information capture. The proposed model enhances the Fully Convolutional Anchor-Free 3D Object Detection (Fcaf3d) architecture by incorporating a squeeze-and-excitation (SE) module, which leverages human visual attention mechanisms to improve feature extraction capabilities. Experimental results demonstrate the model's superior performance, achieving an <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 \u0000 <mrow>\u0000 <msub>\u0000 <mi>F</mi>\u0000 \u0000 <mn>1</mn>\u0000 </msub>\u0000 </mrow>\u0000 </mrow>\u0000 </semantics></math> score of 88.57% on the test data set, significantly outperforming existing approaches. Field tests in real orchard environments show robust performance under varying occlusion conditions, with detection accuracies of 0.932, 0.824, and 0.765 for unobstructed, partially obstructed, and severely obstructed scenarios, respectively. The model maintains localization errors within <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 \u0000 <mrow>\u0000 <mo>±</mo>\u0000 </mrow>\u0000 </mrow>\u0000 </semantics></math>1.5 cm in all directions, demonstrating exceptional precision for practical harvesting applications. This research advances the field of automated fruit harvesting by providing a reliable solution for accurate picking point detection, contributing to the development of more efficient agricultural robotics systems.</p>\u0000 </div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 7","pages":"3063-3076"},"PeriodicalIF":5.2,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145129202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of an Air Pulse Jet Robot for Efficient Dust Removal in Air-Cooled Condensers Under Harsh Industrial Conditions","authors":"Rui Xue, Guidong Zhang, Samson S. Yu, Bo Zhao","doi":"10.1002/rob.22572","DOIUrl":"https://doi.org/10.1002/rob.22572","url":null,"abstract":"<div>\u0000 \u0000 <p>The manual water-sprinkling method has traditionally been used to clean accumulated dust from air-cooled condensers. However, this method is impractical under severe conditions—such as a 60° slope, high altitude, and high temperatures–where manual operations are impossible. Additionally, high-pressure water flow can cause physical damage and corrosion to the air cooler fins. To address these challenges, in this work, an air pulse jet robot is designed and validated for dry cleaning of accumulated dust in air-cooled condensers under harsh industrial conditions. The robot was designed to optimize cleaning efficiency and speed while reducing energy consumption. To address steep slopes, a permanent magnet absorber based on a proposed magnetic model of a Y-shaped magnetic circuit (reducing leakage flux by 42% and achieving <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 \u0000 <mrow>\u0000 <mn>6</mn>\u0000 \u0000 <msup>\u0000 <mn>0</mn>\u0000 \u0000 <mo>∘</mo>\u0000 </msup>\u0000 </mrow>\u0000 </mrow>\u0000 </semantics></math> slope stability) was installed on the robot base. For effective dry cleaning with air pulse jetting, a Computational Fluid Dynamics (CFD) model was developed to optimize the cleaning airflow velocity (peak velocity 65 m/s). The robot has been deployed to a thermal power plant for a 30,281 m<sup>2</sup> air-cooled condenser under severe industrial conditions. Tests conducted under different conditions, including varying ash thicknesses, particle diameters, and heat exchanger thermal resistance, showed that the robot achieved a dust removal rate of 95.6<span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 \u0000 <mrow>\u0000 <mo>%</mo>\u0000 </mrow>\u0000 </mrow>\u0000 </semantics></math> and a cleaning rate of 125.06 m<sup>2</sup>/min, which is significantly faster than the manual watering method while eliminating water consumption and the risk of corrosion.</p>\u0000 </div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 7","pages":"3105-3120"},"PeriodicalIF":5.2,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145129205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Framework to Enable Eco-Cyber-Physical Systems for Robotics-Focused Digital Twins in Smart Farming","authors":"Fiorella Sibona, Nicholas Harrison, Nathan Wallace, Conrad Stevens, Salah Sukkarieh, David Raubenheimer, Stephen J. Simpson, Luciano A. González","doi":"10.1002/rob.22562","DOIUrl":"https://doi.org/10.1002/rob.22562","url":null,"abstract":"<p>This paper introduces the R-ECPS (Robotics-focused Eco-Cyber-Physical System) framework for the development of robotics-enabled digital twins, integrating human knowledge into the decision-making process. Our framework adapts ECPS and Digital Twin (DT) definitions to agricultural field robotics. We illustrate its application by envisioning an autonomous digital twin for smart livestock management. The presented preliminary implementation includes a digital testbed, an autonomous environmental parameter mapping algorithm, and a foraging model. Development choices are the result of interdisciplinary collaboration across field robotics, biology, and nutrition. Initial results from a proof-of-concept trial reveal both challenges and opportunities for improvement. Despite limitations in the use case early implementation, our framework provides a roadmap for future steps and aims at laying the foundation for advancing robotics-focused agricultural digital twins.</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 6","pages":"3016-3037"},"PeriodicalIF":5.2,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22562","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144881217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual Servoing With Grid-Based Directional Error Mapping for Robotic TBM Disc Cutter Replacement","authors":"Qiang Yang, Liang Du, Hao Chen, Sheng Bao, Zhengtao Hu, Jianjun Yuan","doi":"10.1002/rob.22561","DOIUrl":"https://doi.org/10.1002/rob.22561","url":null,"abstract":"<div>\u0000 \u0000 <p>Tunnel boring machines (TBM) need to replace disc cutters to ensure the efficiency of tunneling, which relies on intensive labor operations in harsh environments, highlighting the urgent need for robotic systems to substitute. Visual servoing is crucial for robots to grasp disc cutters with uncertainty. However, traditional methods face significant challenges in environments with unpredictable occlusions, contamination, and damage. Thus, we propose to develop a robust visual servo strategy for the harsh working environment in real TBMs. The major contribution of this strategy includes two parts. First, we propose an image-based desired vectors field made up of griddings of image. Second, we propose a direct and constant interaction matrix to map the camera velocity from the image-based desired vectors. These two parts increase the robustness of visual servoing for the vision-based controller, especially for working with a polluted environment and servoing uncertain states of the disc cutters. The experiments validated it is a stable, easy-employing vision controller for overcoming the difficulty in controlling cutter replacement robots in unstatic environment conditions, thus promoting the application of robotic technologies in more field situations.</p></div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 6","pages":"3003-3015"},"PeriodicalIF":5.2,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144881216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Obstacle Avoidance Control System for Cable Crane With Error Constraints","authors":"Shenghao Tong, Jinbao Zhao, Peng Zhou, Ke Zhang, Donggang Liu, Yanze Long","doi":"10.1002/rob.22568","DOIUrl":"https://doi.org/10.1002/rob.22568","url":null,"abstract":"<div>\u0000 \u0000 <p>To solve the problem of low efficiency of automatic obstacle avoidance in the operation of cable cranes, this paper proposes a dynamic sliding mode obstacle avoidance control method with an error constraint function. First, considering the geometric position relationship between the trolley and the obstacle, an obstacle avoidance trajectory with an autonomously planned path is constructed based on the sine function, which reduces the excessive requirements for the geometric parameters of the obstacle. Second, a dynamic sliding surface with an error constraint function is designed to control the system's tracking error within a preset range. Finally, the stability of the system is strictly proved by using Barbarat's lemma and Lyapunov's theorem. The simulation results and experiments show that compared with other methods, the controller proposed in this paper reduces the maximum load swing angle during operation, shortens the time required for the load to reach a stable state, and accurately limits the error tracking of the load obstacle avoidance trajectory to within 0.02 m. It also shows good obstacle avoidance performance for external disturbances in the experiment. Therefore, the controller proposed in this paper can complete the obstacle avoidance function of obstacles and achieve accurate positioning and rapid antisway effect of the load.</p>\u0000 </div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 7","pages":"3077-3092"},"PeriodicalIF":5.2,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145129203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on VSLAM Algorithm Based on Improved YOLO Algorithm and Multi-View Geometry","authors":"Tengwei Li, Linzheng Ye, Xijing Zhu, Shida Chuai, Jialong Wu, Wanqi Zhang, Wenlong Li","doi":"10.1002/rob.22569","DOIUrl":"https://doi.org/10.1002/rob.22569","url":null,"abstract":"<div>\u0000 \u0000 <p>Visual Simultaneous Localization and Mapping (VSLAM) uses camera sensors for environmental sensing and localization, widely applied in robotics, unmanned vehicles, and other sectors. Traditional VSLAMs typically assume static environments, but dynamic objects in such settings can cause feature point mismatches, significantly impairing system accuracy and robustness. Furthermore, existing dynamic VSLAMs suffer from issues like inadequate real-time performance. To tackle the challenges of dynamic environments, this paper adopts ORB-SLAM2 as the framework, integrates the YOLOv5 object detection module and a dynamic feature rejection module, and introduces a dynamic VSLAM system that leverages YOLO's object detection and motion geometry's depth fusion, termed YOLO Geometry Simultaneous Visual Localization and Mapping(YG-VSLAM). This paper's algorithm differs significantly from other dynamic algorithms, focusing on basic feature points for dynamic feature point identification and elimination. Initially, the algorithm's front-end extracts feature points from the input image. Concurrently, the target detection module identifies dynamic classes, delineating dynamic and static regions. Subsequently, a six-class region classification strategy is applied to further categorize these regions into more detailed categories, such as suspected dynamic and static classes. Finally, a multi-vision geometric method is employed to detect and eliminate feature points within each region. This paper conducts a comprehensive evaluation using the TUM data set, assessing both accuracy and real-time performance. The experimental outcomes demonstrate the algorithm's effectiveness and practicality.</p>\u0000 </div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 7","pages":"3093-3104"},"PeriodicalIF":5.2,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145129206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Localization of Tea Shoots for Robotic Plucking Using Binocular Stereo Vision","authors":"Leiying He, Qianyao Zhuang, Yatao Li, Zhenghao Zhong, Jianneng Chen, Chuanyu Wu","doi":"10.1002/rob.22559","DOIUrl":"https://doi.org/10.1002/rob.22559","url":null,"abstract":"<div>\u0000 \u0000 <p>Localization of tea shoots is essential for achieving intelligent plucking. However, accurately identifying the plucking point within the unstructured field environment remains challenging. This study proposes a method for three-dimensional (3D) localization of tea shoots utilizing binocular stereo vision for robotic plucking in such environments. Initially, tea shoot masks from each binocular image are extracted using the You Only Look Once segmentation network and paired by calculating image similarity through the combined use of Scale-Invariant Feature Transform features and color histograms. The Selective AD-Census-HSI stereo-matching algorithm was subsequently developed specifically to generate disparity maps for instance-segmented tea shoots. This approach also incorporated enhancements in the initial cost calculation and the cross-construction modules to improve the algorithm's performance. The point cloud is generated via triangulation to identify the plucking points using V-shaped template matching. Disparity evaluation results indicate that the proposed stereo-matching algorithm enhances accuracy compared with the original AD-Census, especially in scenarios with significant luminance contrast between the left and right views. Results from the indoor 3D localization experiment show that the average localization error of the tea shoot plucking point is 5.78 mm. Lastly, a robotic tea shoot plucking experiment conducted in the field achieved a success rate of 62%. These results demonstrate that the proposed tea shoot localization method satisfies the requirements for robotic tea plucking, providing a novel solution for intelligent harvesting of tea.</p>\u0000 </div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 6","pages":"2985-3002"},"PeriodicalIF":5.2,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144881215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bolster Spring Visual Servo Positioning Method Based on Depth Online Detection","authors":"Huanlong Liu, Zhiyu Nie, Yuqi Liu, Jingyu Xu, Hao Tian","doi":"10.1002/rob.22557","DOIUrl":"https://doi.org/10.1002/rob.22557","url":null,"abstract":"<div>\u0000 \u0000 <p>The intelligent assembly system for railway wagon bolster springs needs to realize the positioning and grabbing of bolster springs, and also has high requirements for grabbing efficiency. To solve the problem of low efficiency of traditional visual servo positioning methods, an image visual servo (IBVS) control method based on depth online detection is proposed to improve the efficiency of maintenance operations. Based on MobileNetv3 network architecture and ECA attention mechanism, a lightweight object detection ME-YOLO model is proposed to improve the real-time positioning efficiency of bolster springs. The training results show that compared with the original YOLOv5s model, the detection accuracy of ME-YOLO is slightly reduced, but the model size is reduced by 81% and the detection speed is increased by 1.7 times. Taking advantage of the real-time detection advantages of the depth camera, a visual servo control method based on depth online detection is proposed to speed up the convergence of the IBVS system. A bolster spring grasping robot experimental platform was used to conduct a visual servo bolster spring positioning comparison test. The results show that the proposed ME-YOLO detection model can meet the grabbing needs of the bolster spring assembly robot system based on IBVS, while reducing the system convergence times by about 35%. The proposed IBVS method based on deep online detection can also further improve system operation efficiency by 7%.</p>\u0000 </div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 6","pages":"2968-2984"},"PeriodicalIF":5.2,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144881339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}