Milad Taleby Ahvanooey, Wojciech Mazurczyk, Zefan Wang, Jun Zhao
DOI: 10.1016/j.engappai.2025.111319
Journal: Engineering Applications of Artificial Intelligence, Vol. 159, Article 111319
Impact Factor: 7.5 · JCR: Q1 (Automation & Control Systems) · Region: 2 (Computer Science)
Publication Date: 2025-07-16 · Publication Type: Journal Article
URL: https://www.sciencedirect.com/science/article/pii/S0952197625013211
A novel framework for assessing determinant risk factors on cyber (dis)trust behaviors of netizens in deepfakes
Nowadays, Generative Artificial Intelligence (GenAI) tools and trainable agents can craft synthetic media (hereafter referred to as deepfakes) in the form of realistic texts, images, videos, and audio clips depicting events or things that never occurred in real life. These GenAI tools empower both marketers and malicious actors to create deepfakes, i.e., authorized as well as weaponized multimedia, allowing them to feature celebrities who never appeared in front of a camera or to craft seductive phishing scams. Although GenAI tools can reduce the cost of content creation, they also open new risky opportunities (e.g., deepfake phishing and cyberbullying) that negatively impact netizens' learning and (dis)trust behaviors in cyberspace. To address such risks, this study proposes a Multi-Criteria-Multi-Decision-Makers (MCMDM)-based Deepfake Risk Assessment Framework (DeepFakeR-MF) to evaluate the determinant factors that shape netizens' cyber (dis)trust behaviors toward deepfakes. Moreover, DeepFakeR-MF combines a novel optimized spherical fuzzy analytic hierarchy process method with a game theory-based MCMDM approach to prioritize and recommend alternative strategies that five management sectors (i.e., industrial enterprises, governmental organizations, media outlets, social non-profits, and educational institutes) can adopt to mitigate GenAI-associated risks. We then collect 100 experts' judgments through a questionnaire and prioritize the importance of the determinant factors according to their preferences. To validate the effect of the prioritized factors on the performance of DeepFakeR-MF, we conduct a sensitivity analysis using Monte Carlo statistical modeling. Finally, our results confirm that DeepFakeR-MF provides effective strategic alternatives for policymakers, educators, media professionals, engineers, and netizens, hopefully reducing the socio-economic risks of deepfakes.
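The prioritization-plus-sensitivity pipeline described in the abstract (a fuzzy AHP weighting step followed by Monte Carlo perturbation of the judgments) can be illustrated in a greatly simplified form. The sketch below uses a classical crisp AHP with the geometric-mean prioritization method rather than the authors' spherical fuzzy variant, and the 3-factor judgment matrix, noise level, and run count are hypothetical values chosen only for illustration.

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP priority vector via the geometric-mean method.

    `pairwise` is an n x n reciprocal comparison matrix (a_ij = 1 / a_ji),
    where a_ij expresses how much more important factor i is than factor j.
    """
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])
    return gm / gm.sum()

def monte_carlo_ranking_stability(pairwise, n_runs=10_000, noise=0.1, seed=0):
    """Perturb the expert judgments multiplicatively and count how often each
    factor ends up top-ranked -- a simple sensitivity check on the priorities."""
    rng = np.random.default_rng(seed)
    n = pairwise.shape[0]
    top_counts = np.zeros(n)
    for _ in range(n_runs):
        # Log-normal multiplicative noise on the upper triangle,
        # mirrored so the perturbed matrix stays reciprocal.
        perturbed = pairwise.copy()
        for i in range(n):
            for j in range(i + 1, n):
                factor = rng.lognormal(mean=0.0, sigma=noise)
                perturbed[i, j] = pairwise[i, j] * factor
                perturbed[j, i] = 1.0 / perturbed[i, j]
        w = ahp_weights(perturbed)
        top_counts[np.argmax(w)] += 1
    return top_counts / n_runs

# Hypothetical 3-factor judgment matrix (not from the paper's survey data).
A = np.array([
    [1.0,   3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])
weights = ahp_weights(A)
stability = monte_carlo_ranking_stability(A)
```

In the paper's setting, the comparison matrix would instead be aggregated from the 100 experts' spherical fuzzy judgments, and the Monte Carlo step would probe how robust the resulting factor ranking is to judgment uncertainty.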
Journal introduction:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.