The explainability of a recommendation system refers to its ability to explain the logic behind the system's decision to endorse or exclude an item. In industrial-grade recommendation systems, the high complexity of features, the presence of embedding layers, the existence of adversarial samples, and the requirements for explanation accuracy and efficiency pose significant challenges to current explanation methods. This paper proposes AdvLIME (Adversarial Local Interpretable Model-agnostic Explanation), a novel framework that leverages Generative Adversarial Networks (GANs) with embedding constraints to enhance explainability. The method uses adversarial samples as references for explaining recommendation decisions, generating these samples according to realistic distributions and ensuring that they satisfy the structural constraints of the embedding module. AdvLIME requires no modification to the existing model architecture and needs only a single training session to produce global explanations, making it well suited to industrial applications. This work makes two main contributions. First, it develops a model-agnostic global explanation method based on adversarial generation. Second, it introduces a model discrimination method that guarantees the generated samples adhere to the embedding constraints. We evaluate AdvLIME on the Behavior Sequence Transformer (BST) model using the MovieLens 20M dataset. The experimental results show that AdvLIME outperforms traditional methods such as LIME and DLIME, reducing the approximation error on real samples by 50% and demonstrating improved stability and accuracy.