{"title":"Carbon-Efficient Software Design and Development: A Systematic Literature Review","authors":"Ornela Danushi, Stefano Forti, Jacopo Soldani","doi":"10.1145/3728638","DOIUrl":"https://doi.org/10.1145/3728638","url":null,"abstract":"The ICT sector, responsible for 2% of global carbon emissions, is under scrutiny calling for methodologies and tools to design and develop software in an environmentally sustainable-by-design manner. However, the software engineering solutions for designing and developing carbon-efficient software are currently scattered over multiple different pieces of literature, which makes it difficult to consult the body of knowledge on the topic. In this article, we precisely conduct a systematic literature review on state-of-the-art proposals for designing and developing carbon-efficient software. We identify and analyse 65 primary studies by classifying them through a taxonomy aimed at answering the 5W1H questions of carbon-efficient software design and development. We first provide a reasoned overview and discussion of the existing guidelines, reference models, measurement solutions and techniques for measuring, reducing, or minimising the carbon footprint of software. Ultimately, we identify open challenges and research gaps, offering insights for future work in this field.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"38 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143805855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ante-Hoc Methods for Interpretable Deep Models: A Survey","authors":"Antonio Di Marino, Vincenzo Bevilacqua, Angelo Ciaramella, Ivanoe De Falco, Giovanna Sannino","doi":"10.1145/3728637","DOIUrl":"https://doi.org/10.1145/3728637","url":null,"abstract":"The increasing use of black-box networks in high-risk contexts has led researchers to propose explainable methods to make these networks transparent. Most methods that allow us to understand the behavior of Deep Neural Networks (DNNs) are post-hoc approaches, implying that the explainability is questionable, as these methods do not clarify the internal behavior of a model. Thus, this demonstrates the difficulty of interpreting the internal behavior of deep models. This systematic literature review collects the ante-hoc methods that provide an understanding of the internal mechanisms of deep models and which can be helpful to researchers who need to use interpretability methods to clarify DNNs. This work provides the definitions of strong interpretability and weak interpretability, which will be used to describe the interpretability of the methods discussed in this paper. The results of this work are divided mainly into prototype-based methods, concept-based methods, and other interpretability methods for deep models.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"23 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143805569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient Compressing and Tuning Methods for Large Language Models: A Systematic Literature Review","authors":"Gun Il Kim, Sunga Hwang, Beakcheol Jang","doi":"10.1145/3728636","DOIUrl":"https://doi.org/10.1145/3728636","url":null,"abstract":"Efficient compression and tuning techniques have become indispensable in addressing the increasing computational and memory demands of large language models (LLMs). While these models have demonstrated exceptional performance across a wide range of natural language processing tasks, their growing size and resource requirements pose significant challenges to accessibility and sustainability. This survey systematically reviews state-of-the-art methods in model compression, including compression techniques such as knowledge distillation, low-rank approximation, parameter pruning, and quantization, as well as tuning techniques such as parameter-efficient fine-tuning and inference optimization. Compression techniques, though well-established in traditional deep learning, require updated methodologies tailored to the scale and dynamics of LLMs. Simultaneously, parameter-efficient fine-tuning, exemplified by techniques like Low-Rank Adaptation (LoRA) and query tuning, emerges as a promising solution for adapting models with minimal resource overhead. This study provides a detailed taxonomy of these methods, examining their practical applications, strengths, and limitations. Critical gaps are identified in scalability, and the integration of compression and tuning strategies, signaling the need for unified frameworks and hybrid approaches to maximize efficiency and performance. By addressing these challenges, this survey aims to guide researchers toward sustainable, efficient, and accessible LLM development, ensuring their broader applicability across diverse domains while mitigating resource constraints.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"4 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143805570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Diffusion-Based Visual Art Creation: A Survey and New Perspectives","authors":"Bingyuan Wang, Qifeng Chen, Zeyu Wang","doi":"10.1145/3728459","DOIUrl":"https://doi.org/10.1145/3728459","url":null,"abstract":"The integration of generative AI in visual art has revolutionized not only how visual content is created but also how AI interacts with and reflects the underlying domain knowledge. This survey explores the emerging realm of diffusion-based visual art creation, examining its development from both artistic and technical perspectives. We structure the survey into three phases, data feature and framework identification, detailed analyses using a structured coding process, and open-ended prospective outlooks. Our findings reveal how artistic requirements are transformed into technical challenges and highlight the design and application of diffusion-based methods within visual art creation. We also provide insights into future directions from technical and synergistic perspectives, suggesting that the confluence of generative AI and art has shifted the creative paradigm and opened up new possibilities. By summarizing the development and trends of this emerging interdisciplinary area, we aim to shed light on the mechanisms through which AI systems emulate and possibly, enhance human capacities in artistic perception and creativity.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"290 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143782800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tree-based Models for Vertical Federated Learning: A Survey","authors":"Bingchen Qian, Yuexiang Xie, Yaliang Li, Bolin Ding, Jingren Zhou","doi":"10.1145/3728314","DOIUrl":"https://doi.org/10.1145/3728314","url":null,"abstract":"Tree-based models have achieved great success in a wide range of real-world applications due to their effectiveness, robustness, and interpretability, which inspired people to apply them in vertical federated learning (VFL) scenarios in recent years. In this paper, we conduct a comprehensive study to give an overall picture of applying tree-based models in VFL, from the perspective of their communication and computation protocols. We categorize tree-based models in VFL into two types, <jats:italic>i.e.,</jats:italic> feature-gathering models and label-scattering models, and provide a detailed discussion regarding their characteristics, advantages, privacy protection mechanisms, and applications. This study also focuses on the implementation of tree-based models in VFL, summarizing several design principles for better satisfying various requirements from both academic research and industrial deployment. We conduct a series of experiments to provide empirical observations on the differences and advances of different types of tree-based models.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"3 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143782802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unified Empirical Evaluation and Comparison of Session-based Recommendation Algorithms","authors":"Qingbo Zhang, Xiangmin Zhou, Xiuzhen Zhang, Xiaochun Yang, Bin Wang, Xun Yi","doi":"10.1145/3728358","DOIUrl":"https://doi.org/10.1145/3728358","url":null,"abstract":"Recently, session-based recommendation systems (SBRSs) have become a highly explored area, and numerous methods have been proposed. The abundance of related work poses a challenge for newcomers in comprehending the current research landscape and burdens researchers during method validation. Offering a thorough research overview helps newcomers understand the current research. Additionally, comparing representative methods in a consistent environment allows researchers to streamline their workload by focusing on the top-performing methods. Existing theory-oriented review articles introduce the main techniques employed in SBRSs but lack a detailed exploration of their specific applications. The most recent neural method evaluated in existing experiment-driven review was published in 2019, and the latest state-of-the-art methods haven’t been included. To address these gaps, this paper offers a more thorough overview of SBRSs. Specifically, we first categorize and overview existing methods. Then, we introduce the main techniques and illustrate their applications. The performance of representative methods is validated under identical experimental conditions to ensure reliable comparative results. Our findings indicate that dataset characteristics significantly impact model performance, and attention mechanisms-based and gated neural networks (GNNs)-based models generally outperform others. Finally, we propose potential directions for future research in SBRSs.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"183 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143782801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Survey on Leveraging Uncertainty Estimation Towards Trustworthy Deep Neural Networks: The Case of Reject Option and Post-training Processing","authors":"Md Mehedi Hasan, Moloud Abdar, Abbas Khosravi, Uwe Aickelin, Pietro Lio, Ibrahim Hossain, Ashikur Rahman, Saeid Nahavandi","doi":"10.1145/3727633","DOIUrl":"https://doi.org/10.1145/3727633","url":null,"abstract":"Although neural networks (especially deep neural networks) have achieved <jats:italic>better-than-human</jats:italic> performance in many fields, their real-world deployment is still questionable due to the lack of awareness about the limitations in their knowledge. To incorporate such awareness in the machine learning model, prediction with reject option (also known as selective classification or classification with abstention) has been proposed in the literature. In this paper, we present a systematic review of the prediction with the reject option in the context of various neural networks. To the best of our knowledge, this is the first study focusing on this aspect of neural networks. Moreover, we discuss different novel loss functions related to the reject option and post-training processing (if any) of network output for generating suitable measurements for knowledge awareness of the model. Finally, we address the application of the rejection option in reducing the prediction time for real-time problems and present a comprehensive summary of the techniques related to the reject option in the context of a wide variety of neural networks. Our code is available on GitHub: https://github.com/MehediHasanTutul/Reject_option.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"62 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143766549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Primer on Pretrained Multilingual Language Models","authors":"Sumanth Doddapaneni, Gowtham Ramesh, Mitesh Khapra, Anoop Kunchukuttan, Pratyush Kumar","doi":"10.1145/3727339","DOIUrl":"https://doi.org/10.1145/3727339","url":null,"abstract":"Multilingual Language Models (MLLMs) such as mBERT, XLM, XLM-R, <jats:italic>etc.</jats:italic> have emerged as a viable option for bringing the power of pretraining to a large number of languages. Given their success in zero-shot transfer learning, there has emerged a large body of work in (i) building bigger MLLMs covering a large number of languages (ii) creating exhaustive benchmarks covering a wider variety of tasks and languages for evaluating MLLMs (iii) analysing the performance of MLLMs on monolingual, zero-shot cross-lingual and bilingual tasks (iv) understanding the universal language patterns (if any) learnt by MLLMs and (v) augmenting the (often) limited capacity of MLLMs to improve their performance on seen or even unseen languages. In this survey, we review the existing literature covering the above broad areas of research pertaining to MLLMs. Based on our survey, we recommend some promising directions of future research.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"58 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143744770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trojan Attacks and Countermeasures on Deep Neural Networks from Life-Cycle Perspective: A Review","authors":"Lingxin Jin, Xiangyu Wen, Wei Jiang, Jinyu Zhan, Xingzhi Zhou","doi":"10.1145/3727640","DOIUrl":"https://doi.org/10.1145/3727640","url":null,"abstract":"Deep Neural Networks (DNNs) have been widely deployed in security-critical artificial intelligence systems, such as autonomous driving and facial recognition systems. However, recent research has revealed their susceptibility to Trojan information maliciously injected by attackers. This vulnerability is caused, on the one hand, by the complex architecture and non-interpretability of DNNs. On the other hand, external open-source datasets, pre-trained models, and intelligent service platforms further exacerbate the threat of Trojan attacks. This article presents the first comprehensive survey of Trojan attacks against DNNs from a lifecycle perspective, including training, post-training, and inference (deployment) stages. Specifically, this article reformulates the relationships of Trojan attacks with poisoning attacks, adversarial example attacks, and bit-flip attacks. Then, research on Trojan attacks against newly emerged model architectures (e.g., vision transformers and spiking neural networks) and in other research fields is investigated. Moreover, this article also provides a comprehensive review of countermeasures (including detection and elimination) against Trojan attacks. Further, it evaluates the practical effectiveness of existing defense strategies against Trojan attacks at different lifecycle stages. Finally, we conclude the survey and provide constructive insights to advance research on Trojan attacks and countermeasures.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"12 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143736751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Comprehensive Review on Autonomous Navigation","authors":"Saeid Nahavandi, Roohallah Alizadehsani, Darius Nahavandi, Shady Mohamed, Navid Mohajer, Mohammad Rokonuzzaman, Ibrahim Hossain","doi":"10.1145/3727642","DOIUrl":"https://doi.org/10.1145/3727642","url":null,"abstract":"The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite achieving important milestones, several challenges are yet to be addressed. Aggregating the achievements of the robotic community as survey papers is vital to keep the track of current state-of-the-art and the challenges that must be tackled in the future. This paper tries to provide a comprehensive review of autonomous mobile robots covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The urge to present a survey paper is twofold. First, autonomous navigation field evolves fast so writing survey papers regularly is crucial to keep the research community well-aware of the current status of this field. Second, deep learning methods have revolutionized many fields including autonomous navigation. Therefore, it is necessary to give an appropriate treatment of the role of deep learning in autonomous navigation as well which is covered in this paper. Future works and research gaps will also be discussed.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"102 4 Pt 1 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143736613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}