Images acquired by optical imaging devices in low-light or back-lit environments typically suffer from poor visibility, and the attendant contrast and color distortions can degrade the performance of subsequent vision processing. To enhance the visibility of low-light images and mitigate this degradation, an attention-guided deep Retinex decomposition model, dubbed Ag-Retinex-Net, is proposed. Inspired by Retinex theory, Ag-Retinex-Net first decomposes the input low-light image into an illumination layer and a reflectance layer under an elaborate multi-term regularization, and then recomposes the two refined layers into the final enhanced image via attention-guided generative adversarial learning. The multi-term constraints in the decomposition module help regularize and extract the illumination and reflectance more faithfully, while the attention-guided generative adversarial learning in the recomposition module helps remove the remaining degradation. Experimental results show that the proposed Ag-Retinex-Net outperforms other Retinex-based methods in terms of both visual quality and several objective evaluation metrics.
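The abstract does not specify the network internals, so the following is a minimal PyTorch sketch of the two-stage decompose-then-recompose pipeline it describes: a decomposition network splits the image into reflectance and illumination (following the Retinex model I = R ∘ L), and a recomposition network refines and fuses the two layers under a spatial attention map. All module names, layer widths, and the attention formulation here are illustrative assumptions, not the paper's actual architecture; the adversarial discriminator and the multi-term regularization losses are likewise omitted.

```python
# Minimal sketch of a Retinex decompose-then-recompose pipeline.
# Hypothetical architecture: layer counts, widths, and attention design
# are assumptions for illustration, not the paper's specification.
import torch
import torch.nn as nn


class DecompositionNet(nn.Module):
    """Splits a low-light RGB image into reflectance (3 ch) and illumination (1 ch)."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 4, 3, padding=1),  # 3 reflectance + 1 illumination
        )

    def forward(self, x: torch.Tensor):
        out = torch.sigmoid(self.body(x))          # keep both layers in [0, 1]
        reflectance, illumination = out[:, :3], out[:, 3:]
        return reflectance, illumination


class AttentionRecompositionNet(nn.Module):
    """Refines the two layers and recomposes them under a spatial attention map."""

    def __init__(self, channels: int = 32):
        super().__init__()
        # Per-pixel attention derived from illumination, so darker regions
        # (low illumination) can receive stronger correction.
        self.attention = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid(),
        )
        # Refinement branch operating on both layers jointly.
        self.refine = nn.Sequential(
            nn.Conv2d(4, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, reflectance: torch.Tensor, illumination: torch.Tensor):
        attn = self.attention(illumination)
        refined = self.refine(torch.cat([reflectance, illumination], dim=1))
        # Blend the refined output with the naive Retinex recomposition
        # (reflectance * illumination), weighted by the attention map.
        return attn * refined + (1 - attn) * reflectance * illumination


if __name__ == "__main__":
    low_light = torch.rand(1, 3, 128, 128)
    decomp, recomp = DecompositionNet(), AttentionRecompositionNet()
    R, L = decomp(low_light)
    enhanced = recomp(R, L)
    print(enhanced.shape)  # torch.Size([1, 3, 128, 128])
```

In training, this generator would be optimized jointly against a discriminator (the adversarial loss) together with the decomposition regularizers, e.g. a reconstruction term enforcing R ∘ L ≈ I plus smoothness constraints on the illumination.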