Prompt learning has achieved remarkable performance in various natural language understanding scenarios, as it intuitively bridges the gap between pre-training and fine-tuning. However, directly applying monolingual prompting methods to cross-lingual tasks introduces a discrepancy between source-language training and target-language inference, namely language bias in cross-lingual transfer. To address this gap, we propose a novel model, Cross-lingual Semantic Clustering Prompt (X-SCP). Specifically, in the prompt engineering stage, we design a language-agnostic prompt template and introduce a progressive code-switching approach to strengthen the alignment between source and target languages. In the answer engineering stage, we construct a unified multilingual answer space via semantic consistency-guided clustering and train a cluster-based verbalizer over this pre-clustered space. In this way, X-SCP alleviates language bias in both prompt engineering and answer engineering. Experimental results show that X-SCP outperforms strong baselines in zero-shot cross-lingual settings on both the XGLUE-NC and MLDoc document classification datasets.
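To make the progressive code-switching idea concrete, the following is a minimal sketch, not the paper's actual implementation: it assumes a small bilingual dictionary (here a toy English-German one) and a linear schedule that raises the switching ratio over training epochs; the dictionary, the schedule, and all function names are illustrative assumptions.

```python
import random

# Hypothetical toy bilingual dictionary; a real system would likely use
# larger dictionaries or word alignments (an assumption, not from the paper).
EN_DE_DICT = {
    "the": "die", "news": "nachrichten", "article": "artikel",
    "is": "ist", "about": "über", "sports": "sport",
}

def progressive_code_switch(tokens, dictionary, ratio, seed=None):
    """Replace a `ratio` fraction of the translatable source-language tokens
    with their target-language counterparts. Raising `ratio` during training
    moves prompts progressively from mostly source-language text toward
    mixed-language text, which is one plausible reading of the approach."""
    rng = random.Random(seed)
    switchable = [i for i, t in enumerate(tokens) if t.lower() in dictionary]
    k = round(len(switchable) * ratio)
    chosen = set(rng.sample(switchable, k))
    return [dictionary[t.lower()] if i in chosen else t
            for i, t in enumerate(tokens)]

def switch_ratio(epoch, total_epochs, max_ratio=0.8):
    """Illustrative linear curriculum: ratio grows with the epoch up to a cap."""
    return max_ratio * min(1.0, epoch / max(1, total_epochs - 1))

prompt = "The news article is about sports .".split()
for epoch in range(4):
    r = switch_ratio(epoch, total_epochs=4)
    print(epoch, round(r, 2), " ".join(
        progressive_code_switch(prompt, EN_DE_DICT, r, seed=epoch)))
```

Running the loop prints prompts that contain more target-language tokens at each epoch, mimicking a curriculum from source-language prompts to code-switched ones; the paper's actual schedule and alignment resources may differ.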