A Generic Taxonomy for Steganography Methods
Steffen Wendzel, Luca Caviglione, Wojciech Mazurczyk, Aleksandra Mileva, Jana Dittmann, Christian Krätzer, Kevin Lamshöft, Claus Vielhauer, Laura Hartmann, Jörg Keller, Tom Neubert, Sebastian Zillien
ACM Computing Surveys (2025-04-09). DOI: https://doi.org/10.1145/3729165
Abstract: A unified understanding of terms is essential for every scientific discipline, and steganography is no exception. Because the field is divided into several domains (e.g., network and text steganography), it is crucial to provide a unified terminology as well as a taxonomy that is not limited to a few applications or areas. An early attempt towards a unified understanding of terms was made in 2015 with the introduction of a pattern-based taxonomy for network steganography. In 2021, the first work towards a pattern-based taxonomy for all domains of steganography was proposed. However, this initial attempt still faced several shortcomings, e.g., remaining inconsistencies and a lack of patterns for several steganography domains. As the consortium that published the previous studies on steganography patterns, we present the first comprehensive pattern-based taxonomy tailored to fit all known domains of steganography, including smaller and emerging areas such as filesystem, IoT/CPS, and AI/ML steganography. To make our contribution more effective and promote the use of the taxonomy to advance research, we also provide a unified description method together with a thorough tutorial on its utilization.

{"title":"Proof Scores: A Survey","authors":"Adrián Riesco, Kazuhiro Ogata, Masaki Nakamura, Daniel Gaina, Duong Dinh Tran, Kokichi Futatsugi","doi":"10.1145/3729166","DOIUrl":"https://doi.org/10.1145/3729166","url":null,"abstract":"Proof scores can be regarded as outlines of the formal verification of system properties. They have been historically used by the OBJ family of specification languages. The main advantage of proof scores is that they follow the same syntax as the specification language they are used in, so specifiers can easily adopt them and use as many features as the particular language provides. In this way, proof scores have been successfully used to prove properties of a large number of systems and protocols. However, proof scores also present a number of disadvantages that prevented a large audience from adopting them as proving mechanism. In this paper we present the theoretical foundations of proof scores; the different systems where they have been adopted and their latest developments; the classes of systems successfully verified using proof scores, including the main techniques used for it; the main reasons why they have not been widely adopted; and finally we discuss some directions of future work that might solve the problems discussed previously.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"24 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143805852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI-Generated Content (AIGC) for Various Data Modalities: A Survey","authors":"Lin Geng Foo, Hossein Rahmani, Jun Liu","doi":"10.1145/3728633","DOIUrl":"https://doi.org/10.1145/3728633","url":null,"abstract":"AI-generated content (AIGC) methods aim to produce text, images, videos, 3D assets, and other media using AI algorithms. Due to its wide range of applications and the potential of recent works, AIGC developments – especially in Machine Learning (ML) and Deep Learning (DL) – have been attracting significant attention, and this survey focuses on comprehensively reviewing such advancements in ML/DL. AIGC methods have been developed for various data modalities, such as image, video, text, 3D shape, 3D scene, 3D human avatar, 3D motion, and audio – each presenting unique characteristics and challenges. Furthermore, there have been significant developments in cross-modality AIGC methods, where generative methods receive conditioning input in one modality and produce outputs in another. Examples include going from various modalities to image, video, 3D, and audio. This paper provides a comprehensive review of AIGC methods across different data modalities, including both single-modality and cross-modality methods, highlighting the various challenges, representative works, and recent technical directions in each setting. We also survey the representative datasets throughout the modalities, and present comparative results for various modalities. Moreover, we discuss the typical applications of AIGC methods in various domains, challenges, and future research directions.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"227 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143797962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Question Answering: A Survey of Methods, Datasets, Evaluation, and Challenges
Byeong Su Kim, Jieun Kim, Deokwoo Lee, Beakcheol Jang
ACM Computing Surveys (2025-04-08). DOI: https://doi.org/10.1145/3728635
Abstract: Visual question answering (VQA) is a dynamic field of research that aims to generate textual answers from given visual and question information. It is a multimodal field that has garnered significant interest from the computer vision and natural language processing communities, and recent advances in these fields have yielded numerous achievements in VQA research. Achieving balanced learning that avoids bias towards either visual or question information is crucial; the primary challenge in VQA lies in eliminating noise while utilizing valuable and accurate information from the different modalities. Various research methodologies have been developed to address these issues. In this study, we classify these methods into three categories: joint embedding, attention mechanisms, and model-agnostic methods, and we analyze the advantages, disadvantages, and limitations of each approach. In addition, we trace the evolution of datasets in VQA research, categorizing them into three types: real-image, synthetic-image, and unbiased datasets. This study also provides an overview of evaluation metrics and relates them to future research directions. Finally, we discuss future research and application directions for VQA. We anticipate that this survey will offer useful perspectives and essential information to researchers and practitioners seeking to address visual questions effectively.

Similarity of Neural Network Models: A Survey of Functional and Representational Measures
Max Klabunde, Tobias Schumacher, Markus Strohmaier, Florian Lemmerich
ACM Computing Surveys (2025-04-08). DOI: https://doi.org/10.1145/3728458
Abstract: Measuring the similarity of neural networks to understand and improve their behavior has become an issue of great importance and research interest. In this survey, we provide a comprehensive overview of two complementary perspectives on measuring neural network similarity: (i) representational similarity, which considers how activations of intermediate layers differ, and (ii) functional similarity, which considers how models differ in their outputs. In addition to providing detailed descriptions of existing measures, we summarize and discuss results on the properties of and relationships between these measures, and point to open research problems. We hope our work lays a foundation for more systematic research on the properties and applicability of similarity measures for neural network models.

{"title":"Hybrids of Reinforcement Learning and Evolutionary Computation in Finance: A Survey","authors":"Sandarbh Yadav, Vadlamani Ravi, Shivaram Kalyanakrishnan","doi":"10.1145/3728634","DOIUrl":"https://doi.org/10.1145/3728634","url":null,"abstract":"Many sequential decision-making problems in finance like trading, portfolio optimisation, etc. have been modelled using reinforcement learning (RL) and evolutionary computation (EC). Recent studies on problems from various domains have shown that EC can be used to improve the performance of RL and vice versa. Over the years, researchers have proposed different ways of hybridising RL and EC for trading and portfolio optimisation. However, there is a lack of a thorough survey in this research area, which lies at the intersection of RL, EC, and finance. This paper surveys hybrid techniques combining EC and RL for financial applications and presents a novel taxonomy. Research gaps have been discovered in existing works and some open problems have been identified for future works. A detailed discussion about different design choices made in the existing literature is also included.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"60 1 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143805576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Carbon-Efficient Software Design and Development: A Systematic Literature Review","authors":"Ornela Danushi, Stefano Forti, Jacopo Soldani","doi":"10.1145/3728638","DOIUrl":"https://doi.org/10.1145/3728638","url":null,"abstract":"The ICT sector, responsible for 2% of global carbon emissions, is under scrutiny calling for methodologies and tools to design and develop software in an environmentally sustainable-by-design manner. However, the software engineering solutions for designing and developing carbon-efficient software are currently scattered over multiple different pieces of literature, which makes it difficult to consult the body of knowledge on the topic. In this article, we precisely conduct a systematic literature review on state-of-the-art proposals for designing and developing carbon-efficient software. We identify and analyse 65 primary studies by classifying them through a taxonomy aimed at answering the 5W1H questions of carbon-efficient software design and development. We first provide a reasoned overview and discussion of the existing guidelines, reference models, measurement solutions and techniques for measuring, reducing, or minimising the carbon footprint of software. Ultimately, we identify open challenges and research gaps, offering insights for future work in this field.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"38 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143805855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ante-Hoc Methods for Interpretable Deep Models: A Survey
Antonio Di Marino, Vincenzo Bevilacqua, Angelo Ciaramella, Ivanoe De Falco, Giovanna Sannino
ACM Computing Surveys (2025-04-08). DOI: https://doi.org/10.1145/3728637
Abstract: The increasing use of black-box networks in high-risk contexts has led researchers to propose explainable methods to make these networks transparent. Most methods that allow us to understand the behavior of Deep Neural Networks (DNNs) are post-hoc approaches, whose explanatory power is questionable because they do not clarify the internal behavior of a model; this illustrates the difficulty of interpreting the internal behavior of deep models. This systematic literature review collects the ante-hoc methods that provide an understanding of the internal mechanisms of deep models and that can be helpful to researchers who need interpretability methods to clarify DNNs. This work provides definitions of strong interpretability and weak interpretability, which are used to describe the interpretability of the methods discussed in this paper. The results of this work are divided mainly into prototype-based methods, concept-based methods, and other interpretability methods for deep models.

{"title":"Efficient Compressing and Tuning Methods for Large Language Models: A Systematic Literature Review","authors":"Gun Il Kim, Sunga Hwang, Beakcheol Jang","doi":"10.1145/3728636","DOIUrl":"https://doi.org/10.1145/3728636","url":null,"abstract":"Efficient compression and tuning techniques have become indispensable in addressing the increasing computational and memory demands of large language models (LLMs). While these models have demonstrated exceptional performance across a wide range of natural language processing tasks, their growing size and resource requirements pose significant challenges to accessibility and sustainability. This survey systematically reviews state-of-the-art methods in model compression, including compression techniques such as knowledge distillation, low-rank approximation, parameter pruning, and quantization, as well as tuning techniques such as parameter-efficient fine-tuning and inference optimization. Compression techniques, though well-established in traditional deep learning, require updated methodologies tailored to the scale and dynamics of LLMs. Simultaneously, parameter-efficient fine-tuning, exemplified by techniques like Low-Rank Adaptation (LoRA) and query tuning, emerges as a promising solution for adapting models with minimal resource overhead. This study provides a detailed taxonomy of these methods, examining their practical applications, strengths, and limitations. Critical gaps are identified in scalability, and the integration of compression and tuning strategies, signaling the need for unified frameworks and hybrid approaches to maximize efficiency and performance. By addressing these challenges, this survey aims to guide researchers toward sustainable, efficient, and accessible LLM development, ensuring their broader applicability across diverse domains while mitigating resource constraints.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"4 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143805570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Diffusion-Based Visual Art Creation: A Survey and New Perspectives","authors":"Bingyuan Wang, Qifeng Chen, Zeyu Wang","doi":"10.1145/3728459","DOIUrl":"https://doi.org/10.1145/3728459","url":null,"abstract":"The integration of generative AI in visual art has revolutionized not only how visual content is created but also how AI interacts with and reflects the underlying domain knowledge. This survey explores the emerging realm of diffusion-based visual art creation, examining its development from both artistic and technical perspectives. We structure the survey into three phases, data feature and framework identification, detailed analyses using a structured coding process, and open-ended prospective outlooks. Our findings reveal how artistic requirements are transformed into technical challenges and highlight the design and application of diffusion-based methods within visual art creation. We also provide insights into future directions from technical and synergistic perspectives, suggesting that the confluence of generative AI and art has shifted the creative paradigm and opened up new possibilities. By summarizing the development and trends of this emerging interdisciplinary area, we aim to shed light on the mechanisms through which AI systems emulate and possibly, enhance human capacities in artistic perception and creativity.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"290 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143782800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}