Multi-modal Aspect-Based Sentiment Analysis (MABSA) aims to identify the sentiment polarity of aspects by incorporating visual information alongside text. Image and text are modalities separated by significant gaps in both data form and semantic expression. Narrowing these modality gaps and fusing cross-modal features are two crucial challenges in MABSA. To address these issues, this paper introduces an aspect-enhanced alignment and fusion strategy with dual-layer contrastive learning for cross-modal fusion. Unlike traditional contrastive learning methods, our approach increases the number of negative samples, enabling the model to learn more discriminative features and better capture fine-grained cross-modal relationships. The proposed approach leverages overlapping aspect information as multi-modal pivots to first bridge the modality gaps and then integrate visual and textual information in the multi-modal feature space, thereby improving multi-modal sentiment analysis performance. We first introduce an aspect-guided modality alignment strategy that narrows the fundamental modality gaps between image and text using modality contrastive learning. We then design an aspect-oriented multi-modal fusion approach that promotes cross-modal feature fusion through symmetric cross-modal interaction. Extensive experiments demonstrate that the proposed approach outperforms state-of-the-art (SOTA) methods on three MABSA benchmark datasets. In-depth analysis further validates the effectiveness of the proposed multi-modal fusion approach for MABSA.
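
To make the contrastive objective mentioned above concrete, the following is a minimal sketch of an InfoNCE-style image-text contrastive loss in which extra negatives are appended to the usual in-batch negatives, in line with the idea of enlarging the negative set. The function and parameter names (info_nce_with_extra_negatives, temperature, extra_neg_emb) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of an image-text contrastive loss with an enlarged negative set;
# names are illustrative, not taken from the paper.
import torch
import torch.nn.functional as F


def info_nce_with_extra_negatives(text_emb, image_emb, extra_neg_emb, temperature=0.07):
    """InfoNCE-style loss aligning text and image embeddings.

    text_emb:      (B, D) aspect-aware text features
    image_emb:     (B, D) matching image features (positive pairs on the diagonal)
    extra_neg_emb: (M, D) additional negative image features that extend the
                   negative set beyond the in-batch negatives
    """
    text_emb = F.normalize(text_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    extra_neg_emb = F.normalize(extra_neg_emb, dim=-1)

    # Cosine similarities to in-batch images (B, B) and to extra negatives (B, M).
    logits_batch = text_emb @ image_emb.t() / temperature
    logits_extra = text_emb @ extra_neg_emb.t() / temperature
    logits = torch.cat([logits_batch, logits_extra], dim=1)  # (B, B + M)

    # The positive for text i is image i, i.e. column i of the concatenated logits.
    targets = torch.arange(text_emb.size(0), device=text_emb.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    B, M, D = 8, 16, 256
    loss = info_nce_with_extra_negatives(
        torch.randn(B, D), torch.randn(B, D), torch.randn(M, D)
    )
    print(loss.item())
```

Appending the extra negatives widens the denominator of the softmax, so each text feature must be discriminated against more distractor images than in standard in-batch contrastive training, which is one plausible way to realize the larger negative-sample pool described in the abstract.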