3D scene reconstruction is a critical task in computer vision and graphics, and recent advances in 3D Gaussian Splatting (3DGS) have demonstrated impressive novel view synthesis (NVS) results. However, most 3DGS methods rely on multi-view images, which are not always available, particularly in outdoor environments. In this paper, we explore 3D scene reconstruction from single-view data alone, comprising an RGB image and a sparse point cloud from a LiDAR sensor. To address the challenges posed by the single reference view and the sparsity of LiDAR point clouds, we propose a voxel-based structured 3DGS framework enhanced with depth prediction. We introduce a novel depth-prior-guided voxel growing and pruning algorithm that leverages predicted depth maps to refine the scene structure and improve rendering quality. Furthermore, we design a virtual background fitting method with an adaptive voxel size to accommodate the sparse distribution of LiDAR points in outdoor scenes. Our approach surpasses existing methods, including Scaffold-GS, GaussianPro, 3DGS, Mip-Splatting, and UniDepth, in terms of PSNR, SSIM, LPIPS, and FID on the KITTI and Waymo datasets, demonstrating its effectiveness in single-view 3D reconstruction and NVS.
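To make the depth-prior-guided voxel growing and pruning idea concrete, the following is a minimal sketch, not the paper's actual algorithm: it back-projects a predicted depth map into world space, grows voxels where enough depth samples indicate geometry the grid misses, and prunes visible voxels that no depth sample supports. All names (`grow_and_prune`, `min_support`, the pinhole back-projection helper) are illustrative assumptions, and the real method operates on structured 3DGS anchors rather than a plain occupancy set.

```python
import numpy as np

def backproject_depth(depth, K, c2w):
    """Lift a predicted depth map to world-space 3D points (pinhole model)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)
    x = (u - K[0, 2]) / K[0, 0] * depth
    y = (v - K[1, 2]) / K[1, 1] * depth
    pts_cam = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts_cam @ c2w[:3, :3].T + c2w[:3, 3]    # camera -> world

def visible(voxel, K, c2w, voxel_size, h, w):
    """True if the voxel center projects inside the reference image."""
    center = (np.asarray(voxel, dtype=np.float64) + 0.5) * voxel_size
    p_cam = c2w[:3, :3].T @ (center - c2w[:3, 3])  # world -> camera
    if p_cam[2] <= 0:                              # behind the camera
        return False
    u = K[0, 0] * p_cam[0] / p_cam[2] + K[0, 2]
    v = K[1, 1] * p_cam[1] / p_cam[2] + K[1, 2]
    return 0 <= u < w and 0 <= v < h

def grow_and_prune(voxels, depth, K, c2w, voxel_size, min_support=4):
    """One depth-guided update of a set of integer (i, j, k) voxel keys:
    grow voxels where the predicted depth indicates missing geometry,
    prune visible voxels that no depth sample supports (likely floaters)."""
    h, w = depth.shape
    pts = backproject_depth(depth, K, c2w)
    keys, counts = np.unique(
        np.floor(pts / voxel_size).astype(np.int64), axis=0, return_counts=True
    )
    supported = {tuple(k) for k, c in zip(keys, counts) if c >= min_support}
    voxels = voxels | supported                    # grow
    return {v for v in voxels                      # prune
            if v in supported or not visible(v, K, c2w, voxel_size, h, w)}
```

Voxels outside the view frustum are deliberately left untouched, since the single depth prior carries no evidence about them; only voxels the reference camera can see are eligible for pruning.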