Title: Transforming legal texts into computational logic: Enhancing next generation public sector automation through explainable AI decision support
Authors: Markus Bertl, Simon Price, Dirk Draheim
DOI: 10.1016/j.ijcce.2025.07.003
Journal: International Journal of Cognitive Computing in Engineering, Volume 7, Pages 40-57
Publication date: 2025-08-20
URL: https://www.sciencedirect.com/science/article/pii/S2666307425000336
Citations: 0
Abstract
This research presents a novel approach for translating legal texts into machine-executable computational logic to support the automation of public sector processes. Recognizing the high-stakes implications of artificial intelligence (AI) in legal domains, the proposed method emphasizes explainability by integrating explainable AI (XAI) techniques with natural language processing (NLP), employing scope-restricted pattern matching and grammatical parsing. The methodology involves several key steps: document structure inference from raw legal text, semantically neutral pre-processing, identification and resolution of internal and external references, contextualization of legal paragraphs, and rule extraction. The extracted rules are formalized as Prolog predicates and visualized as structured textual lists and graphical decision trees to enhance interpretability. To demonstrate the automatic extraction of explainable rules from legal text, we develop a Law-as-Code prototype and validate it through a real-world case study at the Austrian Ministry of Finance. The system successfully extracts executable rules from the Austrian Study Funding Act, confirming the feasibility and effectiveness of the proposed approach. This validation not only underscores the practical applicability of our method, but also highlights promising avenues for future research, particularly the integration of Generative AI and Large Language Models (LLMs) into the rule extraction pipeline, while preserving traceability and explainability.
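To make the abstract's pipeline concrete, here is a minimal, hypothetical sketch of the "scope-restricted pattern matching" step: a regular expression matched against a single invented eligibility sentence, rendered as a Prolog-style rule string. The sentence, the pattern, and the predicate names (`eligible/1`, `age/2`, `enrolled/2`) are illustrative assumptions, not the patterns or predicates actually used in the paper's Law-as-Code prototype or in the Austrian Study Funding Act.

```python
import re

# Invented example sentence standing in for a simplified legal provision.
SENTENCE = ("Students are eligible for funding if they are under 35 years of age "
            "and enrolled in a recognized study programme.")

# A scope-restricted pattern: it only fires on this one sentence shape,
# capturing the age threshold and the enrolment condition by name.
PATTERN = re.compile(
    r"(?P<subject>\w+) are (?P<outcome>eligible for \w+) if they are "
    r"under (?P<age>\d+) years of age and (?P<condition>enrolled in a [\w ]+)"
)

def extract_rule(sentence: str) -> str:
    """Match one eligibility pattern and render it as a Prolog rule string."""
    m = PATTERN.search(sentence)
    if not m:
        raise ValueError("no rule pattern matched")
    head = "eligible(Person)"
    body = (f"age(Person, A), A < {m.group('age')}, "
            f"enrolled(Person, recognized_programme)")
    return f"{head} :- {body}."

print(extract_rule(SENTENCE))
# eligible(Person) :- age(Person, A), A < 35, enrolled(Person, recognized_programme).
```

Because the extracted rule is an ordinary Prolog clause, it stays inspectable: the textual rule list and the decision-tree visualization the paper describes can both be derived from the same clause, which is what keeps the downstream decisions traceable and explainable.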