Leah Vines, Diana Sotelo, Allison Johnson, Evan Dennis, Peter Manza, Nora D. Volkow, Gene-Jack Wang
{"title":"Ketamine use disorder: preclinical, clinical, and neuroimaging evidence to support proposed mechanisms of actions","authors":"Leah Vines, Diana Sotelo, Allison Johnson, Evan Dennis, Peter Manza, Nora D. Volkow, Gene-Jack Wang","doi":"10.1016/j.imed.2022.03.001","DOIUrl":"10.1016/j.imed.2022.03.001","url":null,"abstract":"<div><p>Ketamine, a noncompetitive N-methyl-D-aspartate (NMDA) receptor antagonist, has been exclusively used as an anesthetic in medicine and has led to new insights into the pathophysiology of neuropsychiatric disorders. Clinical studies have shown that low subanesthetic doses of ketamine produce antidepressant effects for individuals with depression. However, its use as a treatment for psychiatric disorders has been limited due to its reinforcing effects and high potential for diversion and misuse. Preclinical studies have focused on understanding the molecular mechanisms underlying ketamine's antidepressant effects, but a precise mechanism has yet to be elucidated. Here we review different hypotheses for ketamine's mechanism of action, including the direct inhibition and disinhibition of NMDA receptors, α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (AMPAR) activation, and heightened activation of monoaminergic systems. The proposed mechanisms are not mutually exclusive, and their combined influence may underlie the observed structural and functional neural impairments. Long-term use of ketamine induces structural and functional brain impairments and neurodevelopmental effects in both rodents and humans. Its misuse has increased rapidly in the past 20 years, and it is one of the most commonly used addictive drugs in Asia. 
The proposed mechanisms of action and supporting neuroimaging data allow for the development of tools to identify ‘biotypes’ of ketamine use disorder (KUD) using machine learning approaches, which could inform intervention and treatment.</p></div>","PeriodicalId":73400,"journal":{"name":"Intelligent medicine","volume":"2 2","pages":"Pages 61-68"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9249268/pdf/nihms-1788545.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9733633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
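The abstract's closing point, that machine learning could group ketamine use disorder patients into "biotypes" from neuroimaging-derived features, can be sketched with a plain clustering routine. Everything below (the 2-D synthetic feature vectors, the choice of two clusters, the initialization) is a hypothetical illustration, not code or data from the paper:

```python
# Minimal k-means sketch: cluster synthetic "imaging feature" vectors into
# candidate biotypes. All numbers here are made up for illustration.

def kmeans(points, init_centers, iters=20):
    """Plain k-means: alternate nearest-center assignment and mean update."""
    centers = list(init_centers)
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            # Index of the nearest center by squared Euclidean distance.
            i = min(range(len(centers)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Each center moves to the mean of its assigned points.
        centers = [
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers, clusters

# Two well-separated synthetic groups standing in for two putative biotypes.
group_a = [(0.1 * j, 0.2 * j) for j in range(10)]
group_b = [(5 + 0.1 * j, 5 + 0.2 * j) for j in range(10)]
points = group_a + group_b
centers, clusters = kmeans(points, init_centers=[points[0], points[-1]])
print(sorted(len(cl) for cl in clusters))  # → [10, 10]
```

In practice the feature vectors would be high-dimensional imaging measures and the number of clusters itself a modeling decision; this sketch only shows the mechanics.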
Yash Amethiya, Prince Pipariya, Shlok Patel, Manan Shah
{"title":"Comparative analysis of breast cancer detection using machine learning and biosensors","authors":"Yash Amethiya, Prince Pipariya, Shlok Patel, Manan Shah","doi":"10.1016/j.imed.2021.08.004","DOIUrl":"10.1016/j.imed.2021.08.004","url":null,"abstract":"<div><p>Breast cancer is a widely occurring cancer in women worldwide and is associated with high mortality. The objective of this review was to investigate the application of multiple machine learning (ML) algorithms and biosensors for early breast cancer detection. Automation through biosensors and ML is needed to identify cancers from microscopic images. ML aims to facilitate self-learning in computers: rather than relying on explicit pre-programmed rules and models, it is based on identifying patterns in observed data and building models to predict outcomes. We have compared and analysed various types of algorithms such as fuzzy extreme learning machine – radial basis function (ELM-RBF), support vector machine (SVM), support vector regression (SVR), relevance vector machine (RVM), naive Bayes, k-nearest neighbours algorithm (K-NN), decision tree (DT), artificial neural network (ANN), back-propagation neural network (BPNN), and random forest across different databases including images digitized from fine needle aspirations of breast masses, scanned film mammography, breast infrared images, MR images, data collected by using blood analyses, and histopathology image samples. The results were compared on performance metrics such as accuracy, precision, and recall. Further, we examined biosensors, which determine the presence of a specific biological analyte by transforming cellular constituents such as proteins, DNA, or RNA into electrical signals that can be detected and analysed. 
Here, we have compared the detection of different types of analytes such as HER2, miRNA 21, miRNA 155, MCF-7 cells, DNA, BRCA1, BRCA2, human tears, and saliva by using different types of biosensors including FET, electrochemical, and sandwich electrochemical, among others. The differing specifications of these biosensors are also discussed, and the results are analysed on the basis of detection limit, linear range, and response time. Studies and related articles published from 2010 to 2021 were reviewed and analysed systematically. Biosensors and ML both have the potential to detect breast cancer quickly and effectively.</p></div>","PeriodicalId":73400,"journal":{"name":"Intelligent medicine","volume":"2 2","pages":"Pages 69-81"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667102621000887/pdfft?md5=56a9a0654a7f1385fb9e0985640b5a10&pid=1-s2.0-S2667102621000887-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45247393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
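The review above compares classifiers on accuracy, precision, and recall. Those three metrics can be sketched directly from binary predictions; the toy label vectors below (1 = malignant) are hypothetical, not data from any of the reviewed studies:

```python
# Accuracy, precision, and recall from paired true/predicted binary labels.

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,  # a.k.a. sensitivity
    }

# Hypothetical labels: 4 malignant, 6 benign; one miss and one false alarm.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
m = binary_metrics(y_true, y_pred)
print(m)  # → accuracy 0.8, precision 0.75, recall 0.75
```

The trade-off the review's comparison tables capture is visible even here: precision penalizes the false alarm, recall penalizes the miss.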
Jiaqi Lu, Ruiqing Liu, Yuejuan Zhang, Xianxiang Zhang, Longbo Zheng, Chao Zhang, Kaiming Zhang, Shuai Li, Yun Lu
{"title":"Development and application of a detection platform for colorectal cancer tumor sprouting pathological characteristics based on artificial intelligence","authors":"Jiaqi Lu, Ruiqing Liu, Yuejuan Zhang, Xianxiang Zhang, Longbo Zheng, Chao Zhang, Kaiming Zhang, Shuai Li, Yun Lu","doi":"10.1016/j.imed.2021.08.003","DOIUrl":"10.1016/j.imed.2021.08.003","url":null,"abstract":"<div><h3>Objective</h3><p>Tumor sprouting reflects independent risk factors for tumor malignancy and a poor clinical prognosis; however, manual identification of tumor sprouting is difficult and subject to significant inter-observer differences. This study used the Faster region-based convolutional neural network (Faster RCNN) model to build an artificial intelligence framework for recognizing colorectal cancer tumor sprouting in pathological sections, automatically identifying the budding area to assist in the clinical diagnosis and treatment of colorectal cancer.</p></div><div><h3>Methods</h3><p>We retrospectively collected 100 surgical pathological sections of colorectal cancer from January 2019 to October 2019. The pathologists used LabelImg software to identify tumor buds and count their numbers. Finally, 1,000 images were screened, containing approximately 3,226 tumor buds in total; the images were randomly divided into a training set and a test set at a ratio of 6:4. The manually identified buds in the 600 training images were used to train the Faster RCNN identification model. After the artificial intelligence detection platform was established, the 400 test-set images were used to evaluate the system's identification and prediction of the area and number of tumor buds. 
Finally, the performance of the artificial intelligence detection platform in determining the area and number of tumor sprouting sites in colorectal cancer pathological sections was evaluated by comparing its results with the pathologists' identifications, with the aim of providing an auxiliary diagnosis and suggesting appropriate treatment. The selected performance indicators included accuracy, precision, specificity, etc.; receiver operating characteristic (ROC) curves and the area under the curve (AUC) were used to quantify the performance of the system in automatically identifying tumor budding areas and numbers.</p></div><div><h3>Results</h3><p>The AUC of the receiver operating characteristic curve of the artificial intelligence detection and identification system was 0.96, the image diagnosis accuracy was 0.89, the precision was 0.855, the sensitivity was 0.94, the specificity was 0.83, and the negative predictive value was 0.933. Verification on the 400 test-set images showed that 356 images had the same positive budding area count as the manual annotation, and the difference between the positive area count and the manual detection count in the remaining images was less than 3. 
The accuracy of the detection system in recognizing tumor budding in pathological sections is comparable to that of pathologists; however, the AI model took significantly less time ((0.03±0.01) s) than pathologists ((13±5) s) to diagnose the sections.</p></div><div><h3>Conclusion</h3><p>This system can accurately and quickly identify the tumor sprouting area in the patholo","PeriodicalId":73400,"journal":{"name":"Intelligent medicine","volume":"2 2","pages":"Pages 82-87"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667102621000851/pdfft?md5=5abb742d385b1532c61d1ac77e31cfe7&pid=1-s2.0-S2667102621000851-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45656749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
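All the metrics this study reports (accuracy, precision, sensitivity, specificity, negative predictive value) derive from a single 2×2 confusion matrix. A minimal sketch of those definitions follows; the raw counts are hypothetical values chosen only to approximately reproduce the reported figures, not the study's actual confusion matrix:

```python
# Standard binary-detection metrics from raw confusion-matrix counts.

def detection_metrics(tp, fp, tn, fn):
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "precision": tp / (tp + fp),    # positive predictive value
        "sensitivity": tp / (tp + fn),  # recall / true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Illustrative counts; they yield roughly the paper's reported values
# (accuracy ≈ 0.89, precision ≈ 0.855, sensitivity 0.94, NPV ≈ 0.933).
m = detection_metrics(tp=94, fp=16, tn=83, fn=6)
```

Note that precision and NPV depend on the prevalence of positives in the test set, whereas sensitivity and specificity do not, which is why all five are worth reporting together.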
{"title":"Exploring the social, ethical, legal, and responsibility dimensions of artificial intelligence for health – a new column in Intelligent Medicine","authors":"Achim Rosemann , Xinqing Zhang","doi":"10.1016/j.imed.2021.12.002","DOIUrl":"10.1016/j.imed.2021.12.002","url":null,"abstract":"<div><p>This essay is the starting point of a new column in <em>Intelligent Medicine</em> that invites interdisciplinary perspectives on the social, ethical, legal, and responsibility aspects of the use of artificial intelligence (AI) in medicine and health care. Papers in this column will examine the practical, conceptual, and policy dimensions of the use of AI for health-related purposes from comparative and international perspectives. We invite contributions from around the world in all application areas of AI for health, including health care, health research, drug development, health care system management, as well as public health and public health surveillance. The column aims to provide a forum for reflective and critical scholarship that contributes to the ongoing academic and policy debates about the development, use, governance, and implications of AI in medical and health care settings.</p><p>To launch the column, we first provide an overview of recent approaches that have been developed to identify and address the effects and potential impacts of science and technology innovations on human societies and the environment. These include ethical, legal, and social implications/aspects (ELSI/A) research, responsible research and innovation (RRI), sustainability transitions research, and the use of international standard-setting instruments for responsible and open science issued by the United Nations Educational, Scientific, and Cultural Organization (UNESCO), the World Health Organization (WHO), and other international bodies. 
In Part Two of this essay, we discuss some of the central challenges that arise with regard to the integration of AI and big data analytics in medical and health care settings. This includes concerns regarding (i) the control, reliability, and trustworthiness of AI systems, (ii) privacy and surveillance, (iii) the impact of AI and automation on health care staff employment and the nature of clinical work, (iv) the effects of AI on health inequalities, justice, and access to medical care, and (v) challenges related to regulation and governance. We end the essay with a call for papers and a set of questions that could be relevant for future studies.</p></div>","PeriodicalId":73400,"journal":{"name":"Intelligent medicine","volume":"2 2","pages":"Pages 103-109"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667102621001212/pdfft?md5=56297c87bdfbcd2ce2a7cb46e1429fd1&pid=1-s2.0-S2667102621001212-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"54899840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wanling Huang, Yifan Xiang, Yahan Yang, Qing Tang, Guangjian Liu, Hong Yang, Erjiao Xu, Huitong Lin, Zhixing Zhang, Zhe Ma, Zhendong Li, Ruiyang Li, Anqi Yan, Haotian Lin, Zhu Wang, Chinese Association of Artificial Intelligence, Medical Artificial Intelligence Branch of the Guangdong Medical Association
{"title":"Expert recommendations on data collection and annotation of two dimensional ultrasound images in azoospermic males for evaluation of testicular spermatogenic function in intelligent medicine","authors":"Wanling Huang, Yifan Xiang, Yahan Yang, Qing Tang, Guangjian Liu, Hong Yang, Erjiao Xu, Huitong Lin, Zhixing Zhang, Zhe Ma, Zhendong Li, Ruiyang Li, Anqi Yan, Haotian Lin, Zhu Wang, Chinese Association of Artificial Intelligence, Medical Artificial Intelligence Branch of the Guangdong Medical Association","doi":"10.1016/j.imed.2021.09.002","DOIUrl":"https://doi.org/10.1016/j.imed.2021.09.002","url":null,"abstract":"<div><p>Testicular two-dimensional ultrasound is a testing modality that is often used to evaluate azoospermia and other related diseases. With the continuous development of deep learning in recent years, the combination of deep learning and testicular ultrasound appears unstoppable despite a lack of relevant standards. However, one of the major problems associated with the digitization of ultrasound images is the uneven quality of the data, and a standardized data source and acquisition process has not yet been developed. Such a standard could fill the current gap and establish acquisition criteria for ultrasound images of testes during the male reproductive period, including grayscale ultrasound, shear wave elastography, and contrast-enhanced ultrasound. 
By following these guidelines, the quality of testicular ultrasound images would be improved and standardized, laying a solid foundation for automated evaluation of the spermatogenic function of the whole testis in azoospermic males.</p></div>","PeriodicalId":73400,"journal":{"name":"Intelligent medicine","volume":"2 2","pages":"Pages 97-102"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667102621000875/pdfft?md5=7769c17e0ffec2fd129bab462b564ce4&pid=1-s2.0-S2667102621000875-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136977017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artificial intelligence for COVID-19: battling the pandemic with computational intelligence","authors":"Zhenxing Xu, Chang Su, Yunyu Xiao, Fei Wang","doi":"10.1016/j.imed.2021.09.001","DOIUrl":"10.1016/j.imed.2021.09.001","url":null,"abstract":"<div><p>The new coronavirus disease 2019 (COVID-19) has become a global pandemic, with over 180 million confirmed cases and nearly 4 million deaths as of June 2021, according to the World Health Organization. Since the initial report in December 2019, COVID-19 has demonstrated a high transmission rate (with an R<sub>0</sub> > 2), a diverse set of clinical characteristics (e.g., high hospital and intensive care unit admission rates, and multi-organ dysfunction in critically ill patients due to hyperinflammation, thrombosis, etc.), and a tremendous burden on health care systems around the world. To understand this serious and complex disease and to develop effective control, treatment, and prevention strategies, researchers from different disciplines have been making significant efforts in areas including epidemiology and public health, biology and genomic medicine, as well as clinical care and patient management. In recent years, artificial intelligence (AI) has been introduced into the healthcare field to aid clinical decision-making for disease diagnosis and treatment, such as detecting cancer based on medical images, and has achieved superior performance in multiple data-rich application scenarios. In the COVID-19 pandemic, AI techniques have also been used as a powerful tool to combat the disease. In this context, the goal of this study is to review existing studies on applications of AI techniques in combating the COVID-19 pandemic. Specifically, these efforts are summarized and grouped into the fields of epidemiology, therapeutics, clinical research, and social and behavioral studies. 
Potential challenges, directions, and open questions are discussed accordingly, which may provide new insights into addressing the COVID-19 pandemic and help researchers explore related topics in the post-pandemic era.</p></div>","PeriodicalId":73400,"journal":{"name":"Intelligent medicine","volume":"2 1","pages":"Pages 13-29"},"PeriodicalIF":0.0,"publicationDate":"2022-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8529224/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9502437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
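The abstract above cites a basic reproduction number R<sub>0</sub> > 2 for COVID-19. One of the epidemiological modeling tasks this kind of review covers can be sketched with a minimal discrete-time SIR model, where R<sub>0</sub> = β/γ drives early exponential growth; β, γ, and the population size below are illustrative values, not parameters fitted to COVID-19 data:

```python
# Discrete-time SIR sketch: S susceptible, I infected, R recovered.
# With beta = 0.5 and gamma = 0.2, R0 = beta/gamma = 2.5 (i.e., > 2).

def sir(population=1_000_000, infected0=100, beta=0.5, gamma=0.2, days=120):
    s, i, r = population - infected0, float(infected0), 0.0
    history = []
    for _ in range(days):
        new_inf = beta * s * i / population  # new infections this day
        new_rec = gamma * i                  # infections resolving this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append(i)
    return history

history = sir()
r0 = 0.5 / 0.2  # → 2.5
```

With R<sub>0</sub> above 1, daily infections first grow, peak as the susceptible pool depletes, and then decline, which is the qualitative behavior the sketch reproduces.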
{"title":"Mobile health technology: a novel tool in chronic disease management","authors":"Kaman Fan, Yi Zhao","doi":"10.1016/j.imed.2021.06.003","DOIUrl":"10.1016/j.imed.2021.06.003","url":null,"abstract":"<div><p>The successful control of chronic diseases mainly depends on how well patients manage their disease conditions with the aid of healthcare providers. Mobile health technology—also known as mHealth—supports healthcare practice by means of mobile devices such as smartphone applications, web-based technologies, telecommunications services, social media, and wearable technology, and is becoming increasingly popular. Many studies have evaluated the utility of mHealth as a tool to improve chronic disease management through monitoring and feedback, educational and lifestyle interventions, clinical decision support, medication adherence, risk screening, and rehabilitation support. The aim of this article is to summarize systematic reviews addressing the effect of mHealth on the outcome of patients with chronic diseases. We describe the current applications of various mHealth approaches, evaluate their effectiveness as well as limitations, and discuss potential challenges in their future development. The evidence to date indicates that none of the existing mHealth technologies are inferior to traditional care. Telehealth and web-based technologies are the most frequently reported interventions, with promising results including alleviation of disease-related symptoms, improved medication adherence, and decreased rates of rehospitalization and mortality. 
The new generation of mHealth devices based on various technologies are likely to provide more efficient and personalized healthcare programs for patients.</p></div>","PeriodicalId":73400,"journal":{"name":"Intelligent medicine","volume":"2 1","pages":"Pages 41-47"},"PeriodicalIF":0.0,"publicationDate":"2022-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.imed.2021.06.003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48102051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guang Jia, Xunan Huang, Sen Tao, Xianghuai Zhang, Yue Zhao, Hongcai Wang, Jie He, Jiaxue Hao, Bo Liu, Jiejing Zhou, Tanping Li, Xiaoling Zhang, Jinglong Gao
{"title":"Artificial intelligence-based medical image segmentation for 3D printing and naked eye 3D visualization","authors":"Guang Jia, Xunan Huang, Sen Tao, Xianghuai Zhang, Yue Zhao, Hongcai Wang, Jie He, Jiaxue Hao, Bo Liu, Jiejing Zhou, Tanping Li, Xiaoling Zhang, Jinglong Gao","doi":"10.1016/j.imed.2021.04.001","DOIUrl":"10.1016/j.imed.2021.04.001","url":null,"abstract":"<div><p>Image segmentation for 3D printing and 3D visualization has become an essential component in many fields of medical research, teaching, and clinical practice. Medical image segmentation requires sophisticated computerized quantification and visualization tools. Recently, with the development of artificial intelligence (AI) technology, tumors and organs can be quickly and accurately detected and automatically contoured from medical images. This paper introduces a platform-independent, multi-modality image registration, segmentation, and 3D visualization program, named artificial intelligence-based medical image segmentation for 3D printing and naked eye 3D visualization (AIMIS3D). With proper training, the YOLOv3 algorithm was used to recognize the prostate in T2-weighted MRI images. Prostate cancer and bladder cancer were segmented from MRI images using U-Net. CT images of osteosarcoma were loaded into the platform for the segmentation of the lumbar spine, osteosarcoma, vessels, and local nerves for 3D printing. Breast displacement during each radiation therapy session was quantitatively evaluated by automatically identifying the position of the 3D printed plastic breast bra. 
Brain vessels were segmented from multi-modality MRI images using model-based transfer learning for 3D printing and naked eye 3D visualization in the AIMIS3D platform.</p></div>","PeriodicalId":73400,"journal":{"name":"Intelligent medicine","volume":"2 1","pages":"Pages 48-53"},"PeriodicalIF":0.0,"publicationDate":"2022-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.imed.2021.04.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"112350185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
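Segmentations like those the U-Net and YOLOv3 models above produce are commonly scored by overlap with a manual contour, e.g. the Dice coefficient. The sketch below uses tiny hypothetical binary masks and is not code from the AIMIS3D platform:

```python
# Dice overlap of two flattened binary masks: 2|A∩B| / (|A| + |B|).
# A score of 1.0 means perfect agreement, 0.0 means no overlap.

def dice(mask_a, mask_b):
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2 * inter / size if size else 1.0

# Hypothetical 8-voxel masks: prediction misses one voxel, adds one.
ground_truth = [1, 1, 1, 1, 0, 0, 0, 0]
prediction   = [1, 1, 1, 0, 1, 0, 0, 0]
score = dice(ground_truth, prediction)
print(score)  # → 0.75
```

For real volumes the masks would be flattened 3D arrays; the formula is unchanged, which is why Dice is a convenient single-number check before sending a segmented structure to 3D printing.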