Artificial Intelligence in the American Healthcare Industry: Looking Forward to 2030

F. Tewes
{"title":"美国医疗保健行业的人工智能:展望2030年","authors":"F. Tewes","doi":"10.52916/jmrs224089","DOIUrl":null,"url":null,"abstract":"Artificial intelligence (AI) has the potential to speed up the exponential growth of cutting-edge technology, much way the Internet did. Due to intense competition from the private sector, governments, and businesspeople around the world, the Internet has already reached its peak as an exponential technology. In contrast, artificial intelligence is still in its infancy, and people all over the world are unsure of how it will impact their lives in the future. Artificial intelligence, is a field of technology that enables robots and computer programmes to mimic human intellect by teaching a predetermined set of software rules to learn by repetitive learning from experience and slowly moving toward maximum performance. Although this intelligence is still developing, it has already demonstrated five different levels of independence. Utilized initially to resolve issues. Next, think about solutions. Third, respond to inquiries. Fourth, use data analytics to generate forecasts. Fifth, make tactical recommendations. Massive data sets and \"iterative algorithms,\" which use lookup tables and other data structures like stacks and queues to solve issues, make all of this possible. Iteration is a strategy where software rules are regularly adjusted to patterns in the data for a certain number of iterations. The artificial intelligence continuously makes small, incremental improvements that result in exponential growth, which enables the computer to become incredibly proficient at whatever it is trained to do. For each round of data processing, the artificial intelligence tests and measures its performance to develop new expertise. In order to address complicated problems, artificial intelligence aims to create computer systems that can mimic human behavior and exhibit human-like thought processes [1]. Artificial intelligence technology is being developed to give individualized medication in the field of healthcare. By 2030, six different artificial intelligence sectors will have considerably improved healthcare delivery through the utilization of larger, more accessible data sets. The first is machine learning. This area of artificial intelligence learns automatically and produces improved results based on identifying patterns in the data, gaining new insights, and enhancing the outcomes of whatever activity the system is intended to accomplish. It does this without being trained to learn a particular topic. Here are several instances of machine learning in the healthcare industry. The first is the IBM Watson Genomics, which aids in rapid disease diagnosis and identification by fusing cognitive computing with genome-based tumour sequencing. Second, a project called Nave Bayes allows for the prediction of diabetes years before an official diagnosis, before it results in harm to the kidneys, the heart, and the nerves. Third, employing two machine learning approaches termed classification and clustering to analyse the Indian Liver Patient Data (ILPD) set in order to predict liver illness before this organ that regulates metabolism becomes susceptible to chronic hepatitis, liver cancer, and cirrhosis [2]. Second, deep learning. Deep learning employs artificial intelligence to learn from data processing, much like machine learning does. 
Deep learning, on the other hand, makes use of synthetic neural networks that mimic human brain function to analyse data, identify relationships between the data, and provide outputs based on positive and negative reinforcement. For instance, in the fields of Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), deep learning aids in the processes of picture recognition and object detection. Deep learning algorithms for the early identification of Alzheimer's, diabetic retinopathy, and breast nodule ultrasound detection are three applications of this cutting-edge technology in the real world. Future developments in deep learning will make considerable improvements in pathology and radiology pictures [3]. Third, neural networks. The artificial intelligence system can now accept massive data sets, find patterns within the data, and respond to queries regarding the information processed because the computer learning process resembles a network of neurons in the human brain. Let's examine a few application examples that are now applicable to the healthcare sector. According to studies from John Hopkins University, surgical errors are a major contributor to medical malpractice claims since they happen more than 4,000 times a year in just the United States due to the human error of surgeons. Neural networks can be used in robot-assisted surgery to model and plan procedures, evaluate the abilities of the surgeon, and streamline surgical activities. In one study of 379 orthopaedic patients, it was discovered that robotic surgery using neural networks results in five times fewer complications than surgery performed by a single surgeon. Another application of neural networks is in visualising diagnostics, which was proven to physicians by Harvard University researchers who inserted an image of a gorilla to x-rays. Of the radiologists who saw the images, 83% did not recognise the gorilla. The Houston Medical Research Institute has created a breast cancer early detection programme that can analyse mammograms with 99 percent accuracy and offer diagnostic information 30 times faster than a human [4]. Cognitive computing is the fourth. Aims to replicate the way people and machines interact, showing how a computer may operate like the human brain when handling challenging tasks like text, speech, or image analysis. Large volumes of patient data have been analysed, with the majority of the research to date focusing on cancer, diabetes, and cardiovascular disease. Companies like Google, IBM, Facebook, and Apple have shown interest in this work. Cognitive computing made up the greatest component of the artificial market in 2020, with 39% of the total [5]. Hospitals made up 42% of the market for cognitive computing end users because of the rising demand for individualised medical data. IBM invested more than $1 billion on the development of the WATSON analytics platform ecosystem and collaboration with startups committed to creating various cloud and application-based systems for the healthcare business in 2014 because it predicted the demand for cognitive computing in this sector. Natural Language Processing (NLP) is the fifth. This area of artificial intelligence enables computers to comprehend and analyse spoken language. The initial phase of this pre-processing is to divide the data up into more manageable semantic units, which merely makes the information simpler for the NLP system to understand. 
Clinical trial development is experiencing exponential expansion in the healthcare sector thanks to NLP. First, the NLP uses speech-to-text dictation and structured data entry to extract clinical data at the point of care, reducing the need for manual assessment of complex clinical paperwork. Second, using NLP technology, healthcare professionals can automatically examine enormous amounts of unstructured clinical and patient data to select the most suitable patients for clinical trials, perhaps leading to an improvement in the patients' health [6]. Computer vision comes in sixth. Computer vision, an essential part of artificial intelligence, uses visual data as input to process photos and videos continuously in order to get better results faster and with higher quality than would be possible if the same job were done manually. Simply put, doctors can now diagnose their patients with diseases like cancer, diabetes, and cardiovascular disorders more quickly and at an earlier stage. Here are a few examples of real-world applications where computer vision technology is making notable strides. Mammogram images are analysed by visual systems that are intended to spot breast cancer at an early stage. Automated cell counting is another example from the real world that dramatically decreases human error and raises concerns about the accuracy of the results because they might differ greatly depending on the examiner's experience and degree of focus. A third application of computer vision in the real world is the quick and painless early-stage tumour detection enabled by artificial intelligence. Without a doubt, computer vision has the unfathomable potential to significantly enhance how healthcare is delivered. Other than for visual data analysis, clinicians can use this technology to enhance their training and skill development. Currently, Gramener is the top company offering medical facilities and research organisations computer vision solutions [7]. The usage of imperative rather than functional programming languages is one of the key difficulties in creating artificial intelligence software. As artificial intelligence starts to increase exponentially, developers employing imperative programming languages must assume that the machine is stupid and supply detailed instructions that are subject to a high level of maintenance and human error. In software with hundreds of thousands of lines of code, human error detection is challenging. Therefore, the substantial amount of ensuing maintenance may become ridiculously expensive, maintaining the high expenditures of research and development. As a result, software developers have contributed to the unreasonably high cost of medical care. Functional programming languages, on the other hand, demand that the developer use their problem-solving abilities as though the computer were a mathematician. As a result, compared to the number of lines of code needed by the programme to perform the same operation, mathematical functions are orders of magnitude shorter. In software with hundreds of thousands of lines of code, human error detection is challenging. 
Therefore, the substantial amount of ensuing maintenance may become ridiculously expensive, maintaining the high expenditures o","PeriodicalId":73820,"journal":{"name":"Journal of medical research and surgery","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2022-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artificial Intelligence in the American Healthcare Industry: Looking Forward to 2030\",\"authors\":\"F. Tewes\",\"doi\":\"10.52916/jmrs224089\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial intelligence (AI) has the potential to speed up the exponential growth of cutting-edge technology, much way the Internet did. Due to intense competition from the private sector, governments, and businesspeople around the world, the Internet has already reached its peak as an exponential technology. In contrast, artificial intelligence is still in its infancy, and people all over the world are unsure of how it will impact their lives in the future. Artificial intelligence, is a field of technology that enables robots and computer programmes to mimic human intellect by teaching a predetermined set of software rules to learn by repetitive learning from experience and slowly moving toward maximum performance. Although this intelligence is still developing, it has already demonstrated five different levels of independence. Utilized initially to resolve issues. Next, think about solutions. Third, respond to inquiries. Fourth, use data analytics to generate forecasts. Fifth, make tactical recommendations. Massive data sets and \\\"iterative algorithms,\\\" which use lookup tables and other data structures like stacks and queues to solve issues, make all of this possible. Iteration is a strategy where software rules are regularly adjusted to patterns in the data for a certain number of iterations. The artificial intelligence continuously makes small, incremental improvements that result in exponential growth, which enables the computer to become incredibly proficient at whatever it is trained to do. For each round of data processing, the artificial intelligence tests and measures its performance to develop new expertise. In order to address complicated problems, artificial intelligence aims to create computer systems that can mimic human behavior and exhibit human-like thought processes [1]. Artificial intelligence technology is being developed to give individualized medication in the field of healthcare. By 2030, six different artificial intelligence sectors will have considerably improved healthcare delivery through the utilization of larger, more accessible data sets. The first is machine learning. This area of artificial intelligence learns automatically and produces improved results based on identifying patterns in the data, gaining new insights, and enhancing the outcomes of whatever activity the system is intended to accomplish. It does this without being trained to learn a particular topic. Here are several instances of machine learning in the healthcare industry. The first is the IBM Watson Genomics, which aids in rapid disease diagnosis and identification by fusing cognitive computing with genome-based tumour sequencing. Second, a project called Nave Bayes allows for the prediction of diabetes years before an official diagnosis, before it results in harm to the kidneys, the heart, and the nerves. 
Third, employing two machine learning approaches termed classification and clustering to analyse the Indian Liver Patient Data (ILPD) set in order to predict liver illness before this organ that regulates metabolism becomes susceptible to chronic hepatitis, liver cancer, and cirrhosis [2]. Second, deep learning. Deep learning employs artificial intelligence to learn from data processing, much like machine learning does. Deep learning, on the other hand, makes use of synthetic neural networks that mimic human brain function to analyse data, identify relationships between the data, and provide outputs based on positive and negative reinforcement. For instance, in the fields of Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), deep learning aids in the processes of picture recognition and object detection. Deep learning algorithms for the early identification of Alzheimer's, diabetic retinopathy, and breast nodule ultrasound detection are three applications of this cutting-edge technology in the real world. Future developments in deep learning will make considerable improvements in pathology and radiology pictures [3]. Third, neural networks. The artificial intelligence system can now accept massive data sets, find patterns within the data, and respond to queries regarding the information processed because the computer learning process resembles a network of neurons in the human brain. Let's examine a few application examples that are now applicable to the healthcare sector. According to studies from John Hopkins University, surgical errors are a major contributor to medical malpractice claims since they happen more than 4,000 times a year in just the United States due to the human error of surgeons. Neural networks can be used in robot-assisted surgery to model and plan procedures, evaluate the abilities of the surgeon, and streamline surgical activities. In one study of 379 orthopaedic patients, it was discovered that robotic surgery using neural networks results in five times fewer complications than surgery performed by a single surgeon. Another application of neural networks is in visualising diagnostics, which was proven to physicians by Harvard University researchers who inserted an image of a gorilla to x-rays. Of the radiologists who saw the images, 83% did not recognise the gorilla. The Houston Medical Research Institute has created a breast cancer early detection programme that can analyse mammograms with 99 percent accuracy and offer diagnostic information 30 times faster than a human [4]. Cognitive computing is the fourth. Aims to replicate the way people and machines interact, showing how a computer may operate like the human brain when handling challenging tasks like text, speech, or image analysis. Large volumes of patient data have been analysed, with the majority of the research to date focusing on cancer, diabetes, and cardiovascular disease. Companies like Google, IBM, Facebook, and Apple have shown interest in this work. Cognitive computing made up the greatest component of the artificial market in 2020, with 39% of the total [5]. Hospitals made up 42% of the market for cognitive computing end users because of the rising demand for individualised medical data. IBM invested more than $1 billion on the development of the WATSON analytics platform ecosystem and collaboration with startups committed to creating various cloud and application-based systems for the healthcare business in 2014 because it predicted the demand for cognitive computing in this sector. 
Natural Language Processing (NLP) is the fifth. This area of artificial intelligence enables computers to comprehend and analyse spoken language. The initial phase of this pre-processing is to divide the data up into more manageable semantic units, which merely makes the information simpler for the NLP system to understand. Clinical trial development is experiencing exponential expansion in the healthcare sector thanks to NLP. First, the NLP uses speech-to-text dictation and structured data entry to extract clinical data at the point of care, reducing the need for manual assessment of complex clinical paperwork. Second, using NLP technology, healthcare professionals can automatically examine enormous amounts of unstructured clinical and patient data to select the most suitable patients for clinical trials, perhaps leading to an improvement in the patients' health [6]. Computer vision comes in sixth. Computer vision, an essential part of artificial intelligence, uses visual data as input to process photos and videos continuously in order to get better results faster and with higher quality than would be possible if the same job were done manually. Simply put, doctors can now diagnose their patients with diseases like cancer, diabetes, and cardiovascular disorders more quickly and at an earlier stage. Here are a few examples of real-world applications where computer vision technology is making notable strides. Mammogram images are analysed by visual systems that are intended to spot breast cancer at an early stage. Automated cell counting is another example from the real world that dramatically decreases human error and raises concerns about the accuracy of the results because they might differ greatly depending on the examiner's experience and degree of focus. A third application of computer vision in the real world is the quick and painless early-stage tumour detection enabled by artificial intelligence. Without a doubt, computer vision has the unfathomable potential to significantly enhance how healthcare is delivered. Other than for visual data analysis, clinicians can use this technology to enhance their training and skill development. Currently, Gramener is the top company offering medical facilities and research organisations computer vision solutions [7]. The usage of imperative rather than functional programming languages is one of the key difficulties in creating artificial intelligence software. As artificial intelligence starts to increase exponentially, developers employing imperative programming languages must assume that the machine is stupid and supply detailed instructions that are subject to a high level of maintenance and human error. In software with hundreds of thousands of lines of code, human error detection is challenging. Therefore, the substantial amount of ensuing maintenance may become ridiculously expensive, maintaining the high expenditures of research and development. As a result, software developers have contributed to the unreasonably high cost of medical care. Functional programming languages, on the other hand, demand that the developer use their problem-solving abilities as though the computer were a mathematician. As a result, compared to the number of lines of code needed by the programme to perform the same operation, mathematical functions are orders of magnitude shorter. In software with hundreds of thousands of lines of code, human error detection is challenging. 
Therefore, the substantial amount of ensuing maintenance may become ridiculously expensive, maintaining the high expenditures o\",\"PeriodicalId\":73820,\"journal\":{\"name\":\"Journal of medical research and surgery\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of medical research and surgery\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.52916/jmrs224089\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of medical research and surgery","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.52916/jmrs224089","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Artificial intelligence (AI) has the potential to accelerate the exponential growth of cutting-edge technology, much the way the Internet did. Driven by intense competition among private companies, governments, and entrepreneurs around the world, the Internet has already matured as an exponential technology. Artificial intelligence, by contrast, is still in its infancy, and people around the world remain unsure how it will affect their lives. Artificial intelligence is a field of technology that enables robots and computer programmes to mimic human intellect: a predetermined set of software rules learns repetitively from experience and gradually moves toward maximum performance. Although this intelligence is still developing, it has already demonstrated five levels of capability: first, solving problems; second, reasoning about solutions; third, answering questions; fourth, generating forecasts through data analytics; and fifth, making tactical recommendations. Massive data sets and "iterative algorithms", which use lookup tables and other data structures such as stacks and queues, make all of this possible. Iteration is a strategy in which software rules are repeatedly adjusted to patterns in the data over a set number of passes. The artificial intelligence continuously makes small, incremental improvements that compound into exponential growth, allowing the computer to become remarkably proficient at whatever it is trained to do. After each round of data processing, the artificial intelligence tests and measures its performance in order to develop new expertise.
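As a minimal sketch of that adjust-then-measure loop (not from the original article; the rule, data, and numbers are all hypothetical), the following Python fragment tunes a single software rule, a decision threshold, over repeated passes and keeps each small change only when measured performance improves:

```python
import random

random.seed(0)

# Hypothetical labelled data: (blood glucose level, has_diabetes) pairs.
data = [(random.gauss(110, 15), 0) for _ in range(200)] + \
       [(random.gauss(150, 15), 1) for _ in range(200)]

def accuracy(threshold):
    """Measure the current rule: 'diabetic if glucose > threshold'."""
    correct = sum((glucose > threshold) == bool(label) for glucose, label in data)
    return correct / len(data)

threshold = 100.0          # initial software rule
best = accuracy(threshold)

# Iteration: adjust the rule for a fixed number of passes, keeping improvements.
for _ in range(1000):
    candidate = threshold + random.uniform(-2, 2)   # small, incremental change
    score = accuracy(candidate)
    if score >= best:                               # test and measure each round
        threshold, best = candidate, score

print(f"learned threshold: {threshold:.1f}, accuracy: {best:.2%}")
```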
To address complicated problems, artificial intelligence aims to create computer systems that can mimic human behavior and exhibit human-like thought processes [1]. In healthcare, artificial intelligence technology is being developed to deliver individualized medicine. By 2030, six artificial intelligence sectors will have considerably improved healthcare delivery through the use of larger, more accessible data sets. The first is machine learning. This area of artificial intelligence learns automatically: it identifies patterns in the data, gains new insights, and improves the outcome of whatever task the system is intended to accomplish, all without being explicitly trained on a particular topic. Several instances of machine learning are already at work in the healthcare industry. The first is IBM Watson Genomics, which aids rapid disease diagnosis and identification by fusing cognitive computing with genome-based tumour sequencing. Second, a project built on Naïve Bayes allows diabetes to be predicted years before an official diagnosis, before it damages the kidneys, the heart, and the nerves. Third, two machine learning approaches, classification and clustering, have been applied to the Indian Liver Patient Dataset (ILPD) to predict liver illness before this organ, which regulates metabolism, becomes susceptible to chronic hepatitis, liver cancer, and cirrhosis [2].
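A hedged illustration of the Naïve Bayes approach described above, using scikit-learn's GaussianNB; the features, records, and split are invented, since the article does not describe the project's implementation:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Hypothetical patient records: [glucose, BMI, age]; label 1 = later diabetes diagnosis.
healthy = rng.normal([105, 26, 45], [12, 4, 12], size=(300, 3))
diabetic = rng.normal([145, 33, 55], [15, 5, 10], size=(300, 3))
X = np.vstack([healthy, diabetic])
y = np.array([0] * 300 + [1] * 300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Naïve Bayes assumes features are conditionally independent given the class,
# which keeps the model small and fast to train on clinical tabular data.
model = GaussianNB().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The classification-plus-clustering pipeline reported for ILPD could be sketched the same way, for example by pairing a classifier like this with scikit-learn's KMeans to group patients before prediction.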
Second, deep learning. Like machine learning, deep learning uses artificial intelligence to learn from data processing. Deep learning, however, relies on artificial neural networks that mimic the functioning of the human brain to analyse data, identify relationships within the data, and produce outputs shaped by positive and negative reinforcement. In Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), for instance, deep learning assists with image recognition and object detection. Deep learning algorithms for the early identification of Alzheimer's disease, for diabetic retinopathy, and for ultrasound detection of breast nodules are three real-world applications of this cutting-edge technology. Future developments in deep learning promise considerable improvements in pathology and radiology imaging [3].
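As a sketch of the kind of convolutional network used for such imaging tasks (a deliberately tiny architecture, assuming PyTorch; it is not the model from any study cited here):

```python
import torch
import torch.nn as nn

class TinyScanClassifier(nn.Module):
    """Minimal CNN: two conv/pool stages, then a linear head for binary output."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1-channel greyscale scan
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.head = nn.Linear(32 * 16 * 16, 2)           # e.g. nodule vs. no nodule

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1))

# Smoke test on a batch of four hypothetical 64x64 greyscale images.
model = TinyScanClassifier()
logits = model(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```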
Third, neural networks. Because the computer's learning process resembles the network of neurons in the human brain, an artificial intelligence system can now ingest massive data sets, find patterns within the data, and respond to queries about the information it has processed. Several applications are already relevant to the healthcare sector. According to studies from Johns Hopkins University, surgical errors are a major contributor to medical malpractice claims: in the United States alone they occur more than 4,000 times a year owing to surgeons' human error. Neural networks can be used in robot-assisted surgery to model and plan procedures, evaluate a surgeon's skills, and streamline surgical workflows. One study of 379 orthopaedic patients found that robotic surgery guided by neural networks produced five times fewer complications than surgery performed by a surgeon alone. Another application is in visual diagnostics, as Harvard University researchers demonstrated to physicians by inserting the image of a gorilla into x-rays: 83% of the radiologists who viewed the images failed to notice the gorilla. The Houston Medical Research Institute has created a breast cancer early detection programme that analyses mammograms with 99 percent accuracy and delivers diagnostic information 30 times faster than a human [4].

Fourth, cognitive computing. This field aims to replicate the way people and machines interact, showing how a computer can operate like the human brain when handling challenging tasks such as text, speech, or image analysis. Large volumes of patient data have been analysed, with most research to date focusing on cancer, diabetes, and cardiovascular disease; companies such as Google, IBM, Facebook, and Apple have shown interest in this work. Cognitive computing was the largest segment of the artificial intelligence market in 2020, at 39% of the total [5]. Hospitals made up 42% of the end-user market for cognitive computing because of rising demand for individualized medical data. Anticipating this demand, IBM invested more than $1 billion in 2014 to develop the Watson analytics platform ecosystem and to collaborate with startups building cloud- and application-based systems for the healthcare business.

Fifth, Natural Language Processing (NLP). This area of artificial intelligence enables computers to comprehend and analyse human language. The first phase of pre-processing divides the text into smaller, more manageable semantic units, which simply makes the information easier for the NLP system to work with. Thanks to NLP, clinical trial development is expanding rapidly in the healthcare sector. First, NLP uses speech-to-text dictation and structured data entry to extract clinical data at the point of care, reducing the need for manual review of complex clinical paperwork. Second, with NLP technology, healthcare professionals can automatically examine enormous amounts of unstructured clinical and patient data to select the most suitable patients for clinical trials, potentially improving those patients' health [6].
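A small sketch of that first pre-processing step, splitting a clinical note into sentence- and token-level semantic units (assuming spaCy and its en_core_web_sm model are installed; the note itself is invented):

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

note = ("Patient reports blurred vision for two weeks. "
        "Fasting glucose 162 mg/dL. Family history of type 2 diabetes.")

doc = nlp(note)

# Sentence segmentation: one manageable unit per clinical statement.
for sent in doc.sents:
    print("SENTENCE:", sent.text)

# Token-level units that downstream components (entity linking,
# trial-matching rules, and so on) can consume.
print("TOKENS:", [token.text for token in doc][:12])
```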
Sixth, computer vision. An essential part of artificial intelligence, computer vision takes visual data as input and processes photos and videos continuously, producing results faster and at higher quality than the same job done manually. Simply put, doctors can now diagnose diseases such as cancer, diabetes, and cardiovascular disorders more quickly and at an earlier stage. Real-world applications are already making notable strides. Mammogram images are analysed by vision systems designed to spot breast cancer at an early stage. Automated cell counting is another example: manual counts raise concerns about accuracy because results can differ greatly with the examiner's experience and degree of focus, and automation dramatically reduces that human error. A third application is quick, painless early-stage tumour detection enabled by artificial intelligence. Without a doubt, computer vision has enormous potential to improve how healthcare is delivered, and beyond visual data analysis, clinicians can use the technology to enhance their training and skill development. Currently, Gramener is a leading company offering computer vision solutions to medical facilities and research organisations [7].
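A toy version of the automated cell-counting idea (assuming OpenCV and a hypothetical micrograph file; real pipelines add denoising, watershed splitting of touching cells, and calibration):

```python
import cv2

# Hypothetical input: a greyscale micrograph with bright cells on a dark field.
image = cv2.imread("micrograph.png", cv2.IMREAD_GRAYSCALE)

# Otsu's method picks a global threshold separating cells from background.
_, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Each connected blob of foreground pixels is counted as one cell;
# label 0 is the background, so subtract it.
num_labels, _ = cv2.connectedComponents(binary)
print("estimated cell count:", num_labels - 1)
```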
One of the key difficulties in creating artificial intelligence software is the use of imperative rather than functional programming languages. As artificial intelligence begins to grow exponentially, developers working in imperative languages must assume the machine knows nothing and supply detailed instructions, which demand heavy maintenance and invite human error. In software with hundreds of thousands of lines of code, detecting human error is challenging, so the substantial ensuing maintenance can become extraordinarily expensive, sustaining high research and development expenditure; in this way, software development has contributed to the unreasonably high cost of medical care. Functional programming languages, by contrast, ask the developer to apply their problem-solving abilities as though the computer were a mathematician: mathematical functions are orders of magnitude shorter than the imperative code a programme needs to perform the same operation, which reduces both maintenance and research and development costs.
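To make the contrast concrete, here is the same task, the mean glucose reading of flagged patients, written both ways in Python (a hypothetical example; the article names no specific languages or codebases):

```python
from statistics import mean

readings = [("p1", 162, True), ("p2", 98, False), ("p3", 151, True)]

# Imperative style: step-by-step instructions and mutable state to maintain.
total = 0
count = 0
for patient_id, glucose, flagged in readings:
    if flagged:
        total += glucose
        count += 1
imperative_result = total / count if count else 0.0

# Functional style: declare *what* is wanted as a composition of expressions.
functional_result = mean(g for _, g, flagged in readings if flagged)

assert imperative_result == functional_result
print(functional_result)
```

Under this argument, the shorter declarative form is easier to verify, which is precisely the maintenance burden the abstract attributes to imperative artificial intelligence software.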