Title: Mitigating Chatbots AI Data Privacy Violations in the Banking Sector: A Qualitative Grounded Theory Study
Author: John Giordani
Journal: European Journal of Applied Science, Engineering and Technology
DOI: 10.59324/ejaset.2024.2(4).02 (https://doi.org/10.59324/ejaset.2024.2(4).02)
Publication date: 2024-07-01
Publication type: Journal Article
Platform: Semantic Scholar
Citations: 0
Abstract
This study examines the impact of Artificial Intelligence (AI) data poisoning on data privacy violations in AI-enabled banking chatbots, employing a qualitative approach grounded in AI, data privacy, and cybersecurity theories. Using a qualitative grounded theory research approach, viewpoints were gathered from a group of IT professionals in the banking sector. The research uncovered the impact of AI data poisoning across different professional roles, ranging from direct breaches to indirect exposure. Key findings revealed a spectrum of mitigation strategies, from technical solutions to basic awareness, along with mixed responses regarding the impact on personally identifiable information (PII), underscoring the complexity of safeguarding customer data [1]. Despite potential limitations stemming from the rapidly evolving AI landscape, this study contributes valuable insights into effective strategies for mitigating AI data poisoning risks and enhancing the security of AI-enabled chatbots in banking. It highlights the critical importance of developing robust security measures to protect sensitive customer data against privacy violations.