Futures of Deepfake and society: Myths, metaphors, and future implications for a trustworthy digital future

Impact Factor: 3.8 | CAS Tier 3 (Management Science) | JCR Q1 (Economics)
Abdul Wahab
{"title":"Futures of Deepfake and society: Myths, metaphors, and future implications for a trustworthy digital future","authors":"Abdul Wahab","doi":"10.1016/j.futures.2025.103672","DOIUrl":null,"url":null,"abstract":"<div><div>The revolutionizing journey of technology has traveled so far from simple Artificial intelligence to advanced Machine learning using deep learning algorithms. These algorithms generate high-level realistic content, either audio or video, that is indistinguishable from human cognition, known as Deepfake. Deepfake technology jeopardized the quality of information and trust in society. Therefore, this study aims to explore and uncover the hidden myths, metaphors, worldviews, and societal systematic causes of deepfake technology and identify the potential risk reduction strategies from a strategic foresight perspective using Causal Layered Analysis (CLA). This was achieved by the qualitative review of statistical data conducted at the mass level and case studies related to the current and future implications of deepfake across various societal domains. Our findings reveal that Deepfake technology erodes trust in digital media, causing suicides due to psychological distress while presenting risks and opportunities. Based on CLA outcomes, future recommendations are formulated to minimize deepfake creation and the victimization process. Also, increased awareness, regulation, and education are essential to mitigate its negative impacts and harness its benefits. However, the technology also presents opportunities for positive applications in education and entertainment. The results underscore the need for enhanced media literacy and regulatory frameworks to address the challenges posed by deepfakes while harnessing their potential benefits. Potential Future research should focus on developing effective detection methods and public awareness campaigns.</div></div>","PeriodicalId":48239,"journal":{"name":"Futures","volume":"173 ","pages":"Article 103672"},"PeriodicalIF":3.8000,"publicationDate":"2025-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Futures","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S001632872500134X","RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ECONOMICS","Score":null,"Total":0}
Citations: 0

Abstract

Technology has advanced from simple artificial intelligence to machine learning systems built on deep learning algorithms. These algorithms generate highly realistic audio and video content, known as deepfakes, that human observers cannot reliably distinguish from authentic material. Deepfake technology jeopardizes the quality of information and trust in society. This study therefore explores the hidden myths, metaphors, worldviews, and systemic societal causes underlying deepfake technology and identifies potential risk-reduction strategies from a strategic-foresight perspective using Causal Layered Analysis (CLA). The analysis draws on a qualitative review of population-level statistical data and case studies concerning the current and future implications of deepfakes across societal domains. The findings show that deepfake technology erodes trust in digital media and has contributed to severe psychological distress, including suicides, while presenting both risks and opportunities. Based on the CLA outcomes, recommendations are formulated to minimize deepfake creation and victimization. Increased awareness, regulation, and education are essential to mitigate the negative impacts and harness the benefits; the technology also offers positive applications in education and entertainment. The results underscore the need for stronger media literacy and regulatory frameworks to address the challenges posed by deepfakes while realizing their potential benefits. Future research should focus on developing effective detection methods and public awareness campaigns.
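The study's core method, Causal Layered Analysis, examines an issue at four depths: the litany (surface headlines and data), systemic causes, worldview/discourse, and myth/metaphor. The paper applies CLA qualitatively and provides no code; the following is only a minimal Python sketch of that four-layer structure, populated with hypothetical entries loosely paraphrased from the abstract's findings.

```python
# Illustrative sketch only: shows the four CLA layers as a data structure.
# The entries below are hypothetical examples, not the paper's actual results.

from dataclasses import dataclass, field


@dataclass
class CLAAnalysis:
    """Causal Layered Analysis: one issue examined at four depths."""
    issue: str
    litany: list[str] = field(default_factory=list)           # surface headlines and statistics
    systemic_causes: list[str] = field(default_factory=list)  # social, economic, technical drivers
    worldview: list[str] = field(default_factory=list)        # discourses that legitimize the litany
    myth_metaphor: list[str] = field(default_factory=list)    # deep stories and guiding images

    def summary(self) -> str:
        lines = [f"CLA of: {self.issue}"]
        for layer in ("litany", "systemic_causes", "worldview", "myth_metaphor"):
            for item in getattr(self, layer):
                lines.append(f"  [{layer}] {item}")
        return "\n".join(lines)


# Hypothetical entries for the deepfake issue discussed in the abstract.
deepfake_cla = CLAAnalysis(
    issue="Deepfakes and trust in digital media",
    litany=["Viral fabricated videos; reported harassment and psychological distress"],
    systemic_causes=["Cheap generative models; weak regulation; low media literacy"],
    worldview=["'Seeing is believing' as the default standard of evidence"],
    myth_metaphor=["The camera never lies"],
)

if __name__ == "__main__":
    print(deepfake_cla.summary())
```

The sketch is meant only to make the layered structure concrete; the paper's own analysis moves from the litany down to myths and metaphors and then back up to derive its risk-reduction recommendations.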
Source journal: Futures
CiteScore: 6.00
Self-citation rate: 10.00%
Annual articles: 124
Journal description: Futures is an international, refereed, multidisciplinary journal concerned with medium- and long-term futures of cultures and societies, science and technology, economics and politics, environment and the planet, and individuals and humanity. Covering methods and practices of futures studies, the journal seeks to examine possible and alternative futures of all human endeavours. Futures seeks to promote divergent and pluralistic visions, ideas and opinions about the future. The editors do not necessarily agree with the views expressed in the pages of Futures.