Aspect-based sentiment analysis (ABSA) aims to predict the sentiment polarity of a given aspect target within the input sentence. Existing methods employ neural networks with attention mechanisms to encode the input and capture aspect-context relationships, and Bidirectional Encoder Representations from Transformers (BERT) has become the standard contextual encoder in the textual domain. Researchers have also explored graph attention networks (GAT) to incorporate syntactic information into the task, achieving state-of-the-art results. However, these approaches typically exploit only the dependency tree structure and overlook the information carried by the dependency relation labels between words. This work proposes a hybrid model that combines contextual information from a post-trained BERT with syntactic information from a relational GAT (RGAT) for the ABSA task. By leveraging dependency relation information effectively, our approach improves ABSA performance in terms of accuracy and F1-score, as demonstrated by experiments on the SemEval-14 Restaurant and Laptop, MAMS, and ACL-14 Twitter datasets.
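To make the BERT-RGAT fusion concrete, the following PyTorch sketch shows one plausible form of a relation-aware graph attention layer applied on top of BERT token representations, where attention over dependency edges is biased by learned embeddings of the relation labels. The class name `RelationalGATLayer`, the additive relation bias, the residual update, and all tensor shapes are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (assumed design, not the authors' code): a relational GAT
# layer that fuses dependency-relation labels into attention over BERT states.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelationalGATLayer(nn.Module):
    """One relational graph attention layer: attention scores over dependency
    edges are biased by learned embeddings of the relation labels."""

    def __init__(self, hidden_dim: int, num_relations: int):
        super().__init__()
        self.q = nn.Linear(hidden_dim, hidden_dim)
        self.k = nn.Linear(hidden_dim, hidden_dim)
        self.v = nn.Linear(hidden_dim, hidden_dim)
        # One embedding per dependency relation label (e.g. nsubj, amod, ...)
        self.rel_emb = nn.Embedding(num_relations, hidden_dim)

    def forward(self, h, rel_ids, adj_mask):
        # h:        (batch, seq, hidden)  token states (e.g. from BERT)
        # rel_ids:  (batch, seq, seq)     relation label id for each edge
        # adj_mask: (batch, seq, seq)     1 where a dependency edge exists
        q, k, v = self.q(h), self.k(h), self.v(h)
        # Content score plus a relation-aware bias for each (i, j) pair
        scores = torch.einsum("bid,bjd->bij", q, k)
        rel_bias = torch.einsum("bid,bijd->bij", q, self.rel_emb(rel_ids))
        scores = (scores + rel_bias) / h.size(-1) ** 0.5
        scores = scores.masked_fill(adj_mask == 0, float("-inf"))
        attn = F.softmax(scores, dim=-1)
        attn = torch.nan_to_num(attn)  # rows with no edges become all-zero
        return h + torch.einsum("bij,bjd->bid", attn, v)  # residual update


# Toy usage with random stand-ins for BERT outputs and a parsed sentence
batch, seq, hidden, num_rel = 1, 5, 768, 40
layer = RelationalGATLayer(hidden, num_rel)
h = torch.randn(batch, seq, hidden)                  # BERT token states
rel_ids = torch.randint(num_rel, (batch, seq, seq))  # edge relation labels
adj = (torch.rand(batch, seq, seq) > 0.5).long()     # dependency adjacency
out = layer(h, rel_ids, adj)                         # (1, 5, 768)
```

In this sketch the relation labels enter as an additive bias on the attention logits; other fusions (e.g. concatenating relation embeddings with key vectors) are equally plausible readings of "relational GAT".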