BJR Open | Pub Date: 2023-09-25 | eCollection Date: 2023-01-01 | DOI: 10.1259/bjro.20220020
Alex Ghorishi, Amaris Alayon, Tarek Ghaddar, Maya Kandah, Per K Amundson
{"title":"MR and CT angiography in the diagnosis of vasculitides.","authors":"Alex Ghorishi, Amaris Alayon, Tarek Ghaddar, Maya Kandah, Per K Amundson","doi":"10.1259/bjro.20220020","DOIUrl":"10.1259/bjro.20220020","url":null,"abstract":"<p><p>Vasculitides represent the wide-ranging series of complex inflammatory diseases that involve inflammation of blood vessel walls. These conditions are characterized according to the caliber of the predominantly involved vessels. The work-up of vasculitides often includes imaging to narrow a differential diagnosis and guide management. Findings from CT and MR angiography in conjunction with a thorough history and physical exam are of utmost importance in making an accurate diagnosis. Further, imaging can be used for follow-up, in order to monitor disease progression and response to treatment. This wide-ranging literature review serves as the primary resource for clinicians looking to diagnose and monitor the progression of rare vascular inflammatory conditions. This article provides a comprehensive summary of the main findings on imaging related to each of these vasculitides. For each of the named vasculitis conditions, a thorough overview of the diagnostic modalities and their respective findings is described. Many specific hallmarks of pathology are included in this review article.</p>","PeriodicalId":72419,"journal":{"name":"BJR open","volume":"5 1","pages":"20220020"},"PeriodicalIF":0.0,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10636354/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89720835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BJR Open | Pub Date: 2023-09-25 | eCollection Date: 2023-01-01 | DOI: 10.1259/bjro.20230041
Stephen W Duffy, Laszlo Tabar, Tony H H Chen, Amy M F Yen, Peter B Dean, Robert A Smith
{"title":"A plea for more careful scholarship in reviewing evidence: the case of mammographic screening.","authors":"Stephen W Duffy, Laszlo Tabar, Tony H H Chen, Amy M F Yen, Peter B Dean, Robert A Smith","doi":"10.1259/bjro.20230041","DOIUrl":"10.1259/bjro.20230041","url":null,"abstract":"<p><strong>Objectives: </strong>To identify issues of principle and practice giving rise to misunderstandings in reviewing evidence, to illustrate these by reference to the Nordic Cochrane Review (NCR) and its interpretation of two trials of mammographic screening, and to draw lessons for future reviewing of published results.</p><p><strong>Methods: </strong>A narrative review of the publications of the Nordic Cochrane Review of mammographic screening (NCR), the Swedish Two-County Trial (S2C) and the Canadian National Breast Screening Study 1 and 2 (CNBSS-1 and CNBSS-2).</p><p><strong>Results: </strong>The NCR concluded that the S2C was unreliable, despite the review's complaints being shown to be mistaken, by direct reference to the original primary publications of the S2C. Repeated concerns were expressed by others about potential subversion of randomisation in CNBSS-1 and CNBSS-2; however, the NCR continued to rely heavily on the results of these trials. Since 2022, however, eyewitness evidence of such subversion has been in the public domain.</p><p><strong>Conclusions: </strong>An over-reliance on nominal satisfaction of checklists of criteria in systematic reviewing can lead to erroneous conclusions. This occurred in the case of the NCR, which concluded that mammographic screening was ineffective or minimally effective. Broader and more even-handed reviews of the evidence show that screening confers a substantial reduction in breast cancer mortality.</p><p><strong>Advances in knowledge: </strong>Those carrying out systematic reviews should be aware of the dangers of over-reliance on checklists and guidelines. Readers of systematic reviews should be aware that a systematic review is just another study, with the capability that all studies have of coming to incorrect conclusions. When a review seems to overturn the current position, it is essential to revisit the publications of the primary research.</p>","PeriodicalId":72419,"journal":{"name":"BJR open","volume":"5 1","pages":"20230041"},"PeriodicalIF":0.0,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10630970/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71523463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BJR Open | Pub Date: 2023-08-29 | eCollection Date: 2023-01-01 | DOI: 10.1259/bjro.20230017
A J van Hasselt, J Pustjens, A D de Zwart, M Dal, A J de Vries, T M van Raaij
{"title":"Clinical impact of 99mTc-HDP SPECT/CT imaging as standard workup for foot and ankle osteoarthritis.","authors":"A J van Hasselt, J Pustjens, A D de Zwart, M Dal, A J de Vries, T M van Raaij","doi":"10.1259/bjro.20230017","DOIUrl":"10.1259/bjro.20230017","url":null,"abstract":"<p><strong>Objective: </strong>The primary aim of this study was to assess to what extent 99mTc-HDP Single photon emission computed tomography/computed tomography (SPECT/CT) will lead to change of diagnosis and treatment, in patients with suspected foot and ankle osteoarthritis (OA). Secondary aim was to assess the intraobserver variability.</p><p><strong>Methods: </strong>Retrospectively 107 patients, with suspected foot and/or ankle OA of which a SPECT/CT was made, were included for analysis. All the clinical and radiological data were randomized and blinded before being scored by one experienced orthopaedic surgeon. Firstly, based on the clinical data and conventional radiographs, a diagnosis and treatment plan was scored. Secondly, the observer accessed the SPECT/CT and could change the diagnosis and treatment plan. Additionally, the intraobserver reliability was determined by data of 18 patients that were added in twofold to the dataset, without awareness of the observer and by calculating the κ values.</p><p><strong>Results: </strong>The diagnosis changed in 53% (57/107) and treatment plans changed in 26% (28/107) of the patients. Intraobserver reliability for the conventional workup was <i>k</i> = 0.54 (moderate strength of agreement), compared to <i>k</i> = 0.66 (substantial strength of agreement) when SPECT/CT data were added.</p><p><strong>Conclusions: </strong>This study describes the influence of SPECT/CT on diagnosis and treatment plans in patients with suspected symptomatic OA. Also, it shows SPECT/CT leads to a higher intraobserver variability. We believe SPECT/CT has a promising role in the workup for foot and ankle OA.</p><p><strong>Advances in knowledge: </strong>In addition to what was found in complex foot and ankle cases, this study shows that in patients with non-complex foot and ankle problems, SPECT/CT has a substantial influence on the diagnosis (and subsequent treatment plan).</p>","PeriodicalId":72419,"journal":{"name":"BJR open","volume":" ","pages":"20230017"},"PeriodicalIF":0.0,"publicationDate":"2023-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10636335/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48609954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BJR Open | Pub Date: 2023-08-15 | eCollection Date: 2023-01-01 | DOI: 10.1259/bjro.20230008
Mitchell Wiebe, Christina Haston, Michael Lamey, Apurva Narayan, Rasika Rajapakshe
{"title":"The effect of spatial resolution on deep learning classification of lung cancer histopathology.","authors":"Mitchell Wiebe, Christina Haston, Michael Lamey, Apurva Narayan, Rasika Rajapakshe","doi":"10.1259/bjro.20230008","DOIUrl":"10.1259/bjro.20230008","url":null,"abstract":"<p><strong>Objective: </strong>The microscopic analysis of biopsied lung nodules represents the gold-standard for definitive diagnosis of lung cancer. Deep learning has achieved pathologist-level classification of non-small cell lung cancer histopathology images at high resolutions (0.5-2 µm/px), and recent studies have revealed tomography-histology relationships at lower spatial resolutions. Thus, we tested whether patterns for histological classification of lung cancer could be detected at spatial resolutions such as those offered by ultra-high-resolution CT.</p><p><strong>Methods: </strong>We investigated the performance of a deep convolutional neural network (inception-v3) to classify lung histopathology images at lower spatial resolutions than that of typical pathology. Models were trained on 2167 histopathology slides from The Cancer Genome Atlas to differentiate between lung cancer tissues (adenocarcinoma (LUAD) and squamous-cell carcinoma (LUSC)), and normal dense tissue. Slides were accessed at 2.5 × magnification (4 µm/px) and reduced resolutions of 8, 16, 32, 64, and 128 µm/px were simulated by applying digital low-pass filters.</p><p><strong>Results: </strong>The classifier achieved area under the curve ≥0.95 for all classes at spatial resolutions of 4-16 µm/px, and area under the curve ≥0.95 for differentiating normal tissue from the two cancer types at 128 µm/px.</p><p><strong>Conclusions: </strong>Features for tissue classification by deep learning exist at spatial resolutions below what is typically viewed by pathologists.</p><p><strong>Advances in knowledge: </strong>We demonstrated that a deep convolutional network could differentiate normal and cancerous lung tissue at spatial resolutions as low as 128 µm/px and LUAD, LUSC, and normal tissue as low as 16 µm/px. Our data, and results of tomography-histology studies, indicate that these patterns should also be detectable within tomographic data at these resolutions.</p>","PeriodicalId":72419,"journal":{"name":"BJR open","volume":" ","pages":"20230008"},"PeriodicalIF":0.0,"publicationDate":"2023-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10636338/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42229537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BJR Open | Pub Date: 2023-06-30 | eCollection Date: 2023-01-01 | DOI: 10.1259/bjro.20230033
Gemma Walsh, Nikolaos Stogiannos, Riaan van de Venter, Clare Rainey, Winnie Tam, Sonyia McFadden, Jonathan P McNulty, Nejc Mekis, Sarah Lewis, Tracy O'Regan, Amrita Kumar, Merel Huisman, Sotirios Bisdas, Elmar Kotter, Daniel Pinto Dos Santos, Cláudia Sá Dos Reis, Peter van Ooijen, Adrian P Brady, Christina Malamateniou
{"title":"Responsible AI practice and AI education are central to AI implementation: a rapid review for all medical imaging professionals in Europe.","authors":"Gemma Walsh, Nikolaos Stogiannos, Riaan van de Venter, Clare Rainey, Winnie Tam, Sonyia McFadden, Jonathan P McNulty, Nejc Mekis, Sarah Lewis, Tracy O'Regan, Amrita Kumar, Merel Huisman, Sotirios Bisdas, Elmar Kotter, Daniel Pinto Dos Santos, Cláudia Sá Dos Reis, Peter van Ooijen, Adrian P Brady, Christina Malamateniou","doi":"10.1259/bjro.20230033","DOIUrl":"10.1259/bjro.20230033","url":null,"abstract":"<p><p>Artificial intelligence (AI) has transitioned from the lab to the bedside, and it is increasingly being used in healthcare. Radiology and Radiography are on the frontline of AI implementation, because of the use of big data for medical imaging and diagnosis for different patient groups. Safe and effective AI implementation requires that responsible and ethical practices are upheld by all key stakeholders, that there is harmonious collaboration between different professional groups, and customised educational provisions for all involved. This paper outlines key principles of ethical and responsible AI, highlights recent educational initiatives for clinical practitioners and discusses the synergies between all medical imaging professionals as they prepare for the digital future in Europe. Responsible and ethical AI is vital to enhance a culture of safety and trust for healthcare professionals and patients alike. Educational and training provisions for medical imaging professionals on AI is central to the understanding of basic AI principles and applications and there are many offerings currently in Europe. Education can facilitate the transparency of AI tools, but more formalised, university-led training is needed to ensure the academic scrutiny, appropriate pedagogy, multidisciplinarity and customisation to the learners' unique needs are being adhered to. As radiographers and radiologists work together and with other professionals to understand and harness the benefits of AI in medical imaging, it becomes clear that they are faced with the same challenges and that they have the same needs. The digital future belongs to multidisciplinary teams that work seamlessly together, learn together, manage risk collectively and collaborate for the benefit of the patients they serve.</p>","PeriodicalId":72419,"journal":{"name":"BJR open","volume":" ","pages":"20230033"},"PeriodicalIF":0.0,"publicationDate":"2023-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10636340/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47931132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BJR Open | Pub Date: 2023-06-13 | eCollection Date: 2023-01-01 | DOI: 10.1259/bjro.20230029
Aisha Shaheen Hameed, Aneesa K Hameed
{"title":"Radiology and the medical student: do increased hours of teaching translate to more radiologists?","authors":"Aisha Shaheen Hameed, Aneesa K Hameed","doi":"10.1259/bjro.20230029","DOIUrl":"10.1259/bjro.20230029","url":null,"abstract":"","PeriodicalId":72419,"journal":{"name":"BJR open","volume":" ","pages":"20230029"},"PeriodicalIF":0.0,"publicationDate":"2023-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10636334/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46536598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BJR Open | Pub Date: 2023-06-06 | eCollection Date: 2023-01-01 | DOI: 10.1259/bjro.20220033
Patricia Logullo, Angela MacCarthy, Paula Dhiman, Shona Kirtley, Jie Ma, Garrett Bullock, Gary S Collins
{"title":"Artificial intelligence in lung cancer diagnostic imaging: a review of the reporting and conduct of research published 2018-2019.","authors":"Patricia Logullo, Angela MacCarthy, Paula Dhiman, Shona Kirtley, Jie Ma, Garrett Bullock, Gary S Collins","doi":"10.1259/bjro.20220033","DOIUrl":"10.1259/bjro.20220033","url":null,"abstract":"<p><strong>Objective: </strong>This study aimed to describe the methodologies used to develop and evaluate models that use artificial intelligence (AI) to analyse lung images in order to detect, segment (outline borders of), or classify pulmonary nodules as benign or malignant.</p><p><strong>Methods: </strong>In October 2019, we systematically searched the literature for original studies published between 2018 and 2019 that described prediction models using AI to evaluate human pulmonary nodules on diagnostic chest images. Two evaluators independently extracted information from studies, such as study aims, sample size, AI type, patient characteristics, and performance. We summarised data descriptively.</p><p><strong>Results: </strong>The review included 153 studies: 136 (89%) development-only studies, 12 (8%) development and validation, and 5 (3%) validation-only. CT scans were the most common type of image type used (83%), often acquired from public databases (58%). Eight studies (5%) compared model outputs with biopsy results. 41 studies (26.8%) reported patient characteristics. The models were based on different units of analysis, such as patients, images, nodules, or image slices or patches.</p><p><strong>Conclusion: </strong>The methods used to develop and evaluate prediction models using AI to detect, segment, or classify pulmonary nodules in medical imaging vary, are poorly reported, and therefore difficult to evaluate. Transparent and complete reporting of methods, results and code would fill the gaps in information we observed in the study publications.</p><p><strong>Advances in knowledge: </strong>We reviewed the methodology of AI models detecting nodules on lung images and found that the models were poorly reported and had no description of patient characteristics, with just a few comparing models' outputs with biopsies results. When lung biopsy is not available, lung-RADS could help standardise the comparisons between the human radiologist and the machine. The field of radiology should not give up principles from the diagnostic accuracy studies, such as the choice for the correct ground truth, just because AI is used. Clear and complete reporting of the reference standard used would help radiologists trust in the performance that AI models claim to have. This review presents clear recommendations about the essential methodological aspects of diagnostic models that should be incorporated in studies using AI to help detect or segmentate lung nodules. 
The manuscript also reinforces the need for more complete and transparent reporting, which can be helped using the recommended reporting guidelines.</p>","PeriodicalId":72419,"journal":{"name":"BJR open","volume":"5 1","pages":"20220033"},"PeriodicalIF":0.0,"publicationDate":"2023-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10301715/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9730154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BJR Open | Pub Date: 2023-05-17 | eCollection Date: 2023-01-01 | DOI: 10.1259/bjro.20220021
Andrew Lin, Konrad Pieszko, Caroline Park, Katarzyna Ignor, Michelle C Williams, Piotr Slomka, Damini Dey
{"title":"Artificial intelligence in cardiovascular imaging: enhancing image analysis and risk stratification.","authors":"Andrew Lin, Konrad Pieszko, Caroline Park, Katarzyna Ignor, Michelle C Williams, Piotr Slomka, Damini Dey","doi":"10.1259/bjro.20220021","DOIUrl":"10.1259/bjro.20220021","url":null,"abstract":"<p><p>In this review, we summarize state-of-the-art artificial intelligence applications for non-invasive cardiovascular imaging modalities including CT, MRI, echocardiography, and nuclear myocardial perfusion imaging.</p>","PeriodicalId":72419,"journal":{"name":"BJR open","volume":"5 1","pages":"20220021"},"PeriodicalIF":0.0,"publicationDate":"2023-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10311632/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10104101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BJR Open | Pub Date: 2023-05-16 | eCollection Date: 2023-01-01 | DOI: 10.1259/bjro.20230014
Daniel Liu, Neil C Binkley, Alberto Perez, John W Garrett, Ryan Zea, Ronald M Summers, Perry J Pickhardt
{"title":"CT image-based biomarkers acquired by AI-based algorithms for the opportunistic prediction of falls.","authors":"Daniel Liu, Neil C Binkley, Alberto Perez, John W Garrett, Ryan Zea, Ronald M Summers, Perry J Pickhardt","doi":"10.1259/bjro.20230014","DOIUrl":"10.1259/bjro.20230014","url":null,"abstract":"<p><strong>Objective: </strong>Evaluate whether biomarkers measured by automated artificial intelligence (AI)-based algorithms are suggestive of future fall risk.</p><p><strong>Methods: </strong>In this retrospective age- and sex-matched case-control study, 9029 total patients underwent initial abdominal CT for a variety of indications over a 20-year interval at one institution. 3535 case patients (mean age at initial CT, 66.5 ± 9.6 years; 63.4% female) who went on to fall (mean interval to fall, 6.5 years) and 5494 controls (mean age at initial CT, 66.7 ± 9.8 years; 63.4% females; mean follow-up interval, 6.6 years) were included. Falls were identified by electronic health record review. Validated and fully automated quantitative CT algorithms for skeletal muscle, adipose tissue, and trabecular bone attenuation at the level of L1 were applied to all scans. Uni- and multivariate assessment included hazard ratios (HRs) and area under the receiver operating characteristic (AUROC) curve.</p><p><strong>Results: </strong>Fall HRs (with 95% CI) for low muscle Hounsfield unit, high total adipose area, and low bone Hounsfield unit were 1.82 (1.65-2.00), 1.31 (1.19-1.44) and 1.91 (1.74-2.11), respectively, and the 10-year AUROC values for predicting falls were 0.619, 0.556, and 0.639, respectively. Combining all these CT biomarkers further improved the predictive value, including 10-year AUROC of 0.657.</p><p><strong>Conclusion: </strong>Automated abdominal CT-based opportunistic measures of muscle, fat, and bone offer a novel approach to risk stratification for future falls, potentially by identifying patients with osteosarcopenic obesity.</p><p><strong>Advances in knowledge: </strong>There are few well-established clinical tools to predict falls. We use novel AI-based body composition algorithms to leverage incidental CT data to help determine a patient's future fall risk.</p>","PeriodicalId":72419,"journal":{"name":"BJR open","volume":"1 1","pages":"20230014"},"PeriodicalIF":0.0,"publicationDate":"2023-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10636337/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41891258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BJR Open | Pub Date: 2023-05-16 | eCollection Date: 2023-01-01 | DOI: 10.1259/bjro.20220023
Abdalah Ismail, Talha Al-Zoubi, Issam El Naqa, Hina Saeed
{"title":"The role of artificial intelligence in hastening time to recruitment in clinical trials.","authors":"Abdalah Ismail, Talha Al-Zoubi, Issam El Naqa, Hina Saeed","doi":"10.1259/bjro.20220023","DOIUrl":"10.1259/bjro.20220023","url":null,"abstract":"<p><p>Novel and developing artificial intelligence (AI) systems can be integrated into healthcare settings in numerous ways. For example, in the case of automated image classification and natural language processing, AI systems are beginning to demonstrate near expert level performance in detecting abnormalities such as seizure activity. This paper, however, focuses on AI integration into clinical trials. During the clinical trial recruitment process, considerable labor and time is spent sifting through electronic health record and interviewing patients. With the advancement of deep learning techniques such as natural language processing, intricate electronic health record data can be efficiently processed. This provides utility to workflows such as recruitment for clinical trials. Studies are starting to show promise in shortening the time to recruitment and reducing workload for those involved in clinical trial design. Additionally, numerous guidelines are being constructed to encourage integration of AI into the healthcare setting with meaningful impact. The goal would be to improve the clinical trial process by reducing bias in patient composition, improving retention of participants, and lowering costs and labor.</p>","PeriodicalId":72419,"journal":{"name":"BJR open","volume":" ","pages":"20220023"},"PeriodicalIF":0.0,"publicationDate":"2023-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10636341/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43154960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}