High-precision road point cloud measurement using mobile LiDAR technology provides essential digital infrastructure for many industries, and research has focused primarily on high-precision automated semantic segmentation of road point clouds. However, existing deep learning networks achieve low segmentation accuracy when trained on the uneven and sparse point clouds captured by self-developed Mobile LiDAR Systems (MLS). This paper introduces a deep learning method that partitions the data according to the spatial positions of road scene point clouds and accounts for the sampling radius of regional point groups. A road point cloud dataset constructed with a self-developed MLS is used to train and test the semantic segmentation network. Exploiting the linear characteristics of local road point clouds, Principal Component Analysis (PCA) and threshold filtering are applied to separate the point cloud into ground and non-ground points. Different sampling strategies are then applied to each class of points before they are fed into the network for semantic segmentation. Experimental results show that the proposed method achieves an overall accuracy of 97.8% and a mean Intersection over Union (mIoU) of 0.81 on road point cloud segmentation, with per-class IoU values of 0.98 for roads, 0.98 for guardrails, 0.93 for signs, 0.96 for street lamps, and 0.56 for lane markings. These results indicate that the proposed method substantially improves the segmentation accuracy of uneven and sparse road point clouds captured by MLS and outperforms existing methods.
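
As a rough illustration of the ground/non-ground split described above, the following sketch computes PCA eigenvalues over local neighborhoods and applies a linearity threshold. It is a minimal sketch of one possible implementation, not the authors' exact pipeline; the neighborhood radius, threshold value, function name, and use of a KD-tree search are all assumptions.

```python
# Minimal sketch (not the authors' exact pipeline): PCA-based local linearity
# with threshold filtering to split a point cloud into ground / non-ground points.
# The radius, threshold, and KD-tree neighborhood search are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def split_ground_points(points, radius=0.5, linearity_threshold=0.7):
    """points: (N, 3) array of x, y, z coordinates (units assumed to be metres)."""
    tree = cKDTree(points)
    ground_mask = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=radius)          # local neighborhood
        if len(idx) < 3:
            continue                                       # too few neighbors for PCA
        neighbors = points[idx] - points[idx].mean(axis=0)
        cov = neighbors.T @ neighbors / len(idx)           # 3x3 covariance matrix
        eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # lambda1 >= lambda2 >= lambda3
        linearity = (eigvals[0] - eigvals[1]) / (eigvals[0] + 1e-12)
        ground_mask[i] = linearity > linearity_threshold   # strongly linear -> ground
    return points[ground_mask], points[~ground_mask]
```

Under this assumed scheme, the two subsets could then be resampled with different strategies (e.g., different sampling radii) before being fed to the segmentation network, as the abstract describes.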