{"title":"Novel high-stopping power scintillators for medical applications.","authors":"J Glodo, E van Loef, Y Wang, P Bhattacharya, L Soundara Pandian, U Shirwadkar, I Hubble, J Schott, M Muller","doi":"10.1117/12.3006480","DOIUrl":"10.1117/12.3006480","url":null,"abstract":"<p><p>Development of new scintillator materials is a continuous effort, which recently has been focused on materials with higher stopping power. Higher stopping power can be achieved if the compositions include elements such as Tl (Z=81) or Lu (Z=71), as the compounds gain higher densities and effective atomic numbers. In the context of medical imaging, this translates into higher detection efficiency (count rates) and therefore better image quality (statistics, thinner films) or lower irradiation doses to patients, in addition to lower cost. Many known scintillator hosts, commercial or in research stages, are alkali metal halides (Cs, K, Rb). Often these monovalent ions can be replaced with monovalent Tl. Since Tl has a higher atomic number than, for example, Cs (Z=55), this increases the stopping power of the modified compounds. A good example of an enhanced host is Ce doped Tl<sub>2</sub>LaCl<sub>5</sub> (5.2 g/cm<sup>3</sup>), which mirrors the less dense Ce doped K<sub>2</sub>LaCl<sub>5</sub> (2.89 g/cm<sup>3</sup>). Tl substitution also increased the luminosity to >60,000 ph/MeV, as it often leads to a reduction in the bandgap. Another example is the dual mode (gamma/neutron) Ce doped Cs<sub>2</sub>LiYCl<sub>6</sub> scintillator (density 3.31 g/cm<sup>3</sup>). Substitution creates Ce doped Tl<sub>2</sub>LiYCl<sub>6</sub> with a density of 4.5 g/cm<sup>3</sup>, with much better stopping power and 20% higher light yield. Binary Tl compounds are also of interest, although most are semiconductors. A notable example of a scintillator is TlCl double doped with Be and I. This scintillator offers fast Cherenkov emission combined with a scintillation signal for better energy resolution. 
Another family of interesting and dense compositions is based on Lu<sub>2</sub>O<sub>3</sub> ceramics. Lu<sub>2</sub>O<sub>3</sub> is one of the densest hosts available (9.2 g/cm<sup>3</sup>), offering high stopping power. Lu<sub>2</sub>O<sub>3</sub> doped with Eu<sup>3+</sup> is known to be a high luminosity scintillator; however, its emission is very slow (1-3 ms), which limits its utility. On the other hand, ultra-fast (1 ns) scintillation can be achieved with Yb<sup>3+</sup> doping, which can be used for timing or high count-rate applications. However, while fast, Yb<sup>3+</sup> doped Lu<sub>2</sub>O<sub>3</sub> has very low luminosity. Recently, we have shown middle-ground performance with Lu<sub>2</sub>O<sub>3</sub> doped with La<sup>3+</sup>. This composition generates scintillation with a 1,000 ns decay time and up to 20,000 ph/MeV luminosity. Moreover, the material demonstrates very good energy resolution.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12925 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11631204/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142815248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
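As a back-of-the-envelope illustration of the stopping-power argument in the abstract above, the effective atomic number of a compound can be estimated with the Mayneord power-law approximation. This is a generic textbook approximation sketched here for intuition, not a calculation from the paper; compositions are written as (Z, atoms per formula unit).

```python
# Effective atomic number via the Mayneord power-law approximation:
# Z_eff = (sum_i f_i * Z_i^2.94)^(1/2.94), where f_i is the electron
# fraction of element i. Illustrative only.

def z_eff(composition):
    """composition: list of (Z, atoms-per-formula-unit) tuples."""
    total_electrons = sum(n * z for z, n in composition)
    s = sum((n * z / total_electrons) * z ** 2.94 for z, n in composition)
    return s ** (1 / 2.94)

tl2lacl5 = [(81, 2), (57, 1), (17, 5)]   # Tl2LaCl5: Tl=81, La=57, Cl=17
k2lacl5 = [(19, 2), (57, 1), (17, 5)]    # K2LaCl5:  K=19

# Replacing K (Z=19) with Tl (Z=81) sharply raises Z_eff,
# consistent with the claimed gain in stopping power.
zt, zk = z_eff(tl2lacl5), z_eff(k2lacl5)
```

Running this, Tl<sub>2</sub>LaCl<sub>5</sub> comes out far higher than K<sub>2</sub>LaCl<sub>5</sub>, in line with the abstract's comparison.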
{"title":"End-to-end Deep Learning Restoration of GLCM Features from blurred and noisy images.","authors":"Yijie Yuan, J Webster Stayman, Grace J Gang","doi":"10.1117/12.3006205","DOIUrl":"10.1117/12.3006205","url":null,"abstract":"<p><p>Radiomics involves the quantitative analysis of medical images to provide useful information for a range of clinical applications, including disease diagnosis and treatment assessment. However, the generalizability of radiomics models is often challenged by undesirable variability in radiomics feature values introduced by different scanners and imaging conditions. To address this issue, we developed a novel dual-domain deep learning algorithm to recover ground truth feature values given known blur and noise in the image. The network consists of two U-Nets connected by a differentiable GLCM estimator. The first U-Net restores the image, and the second restores the GLCM. We evaluated the performance of the network on lung CT image patches in terms of both closeness of recovered feature values to the ground truth and accuracy of classification between normal and COVID lungs. Performance was compared with an image restoration-only method and an analytical method developed in previous work. The proposed network outperforms both methods, achieving GLCM with the lowest mean-absolute-error from ground truth. Recovered GLCM feature values from the proposed method are, on average, within 2.19% error of the ground truth. Classification performance using recovered features from the network closely matches the \"best case\" performance achieved using ground truth feature values. 
The deep learning method has been shown to be a promising tool for radiomics standardization, paving the way for more reliable and repeatable radiomics models.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12927 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11377019/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142141966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
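For readers unfamiliar with the feature family being restored above, a gray-level co-occurrence matrix (GLCM) counts how often pairs of gray levels occur at a fixed pixel offset, and features such as contrast are weighted sums over it. The sketch below is a plain, non-differentiable version for intuition only; the paper's estimator is differentiable so it can sit between the two U-Nets, and the 3x3 patch is a toy example.

```python
import numpy as np

# Plain GLCM for a single offset: count co-occurring gray-level pairs,
# then normalize to a joint probability table.

def glcm(img, levels, offset=(0, 1)):
    """Normalized co-occurrence matrix of gray-level pairs at a pixel offset."""
    dr, dc = offset
    rows, cols = img.shape
    m = np.zeros((levels, levels))
    for r in range(max(0, -dr), rows - max(0, dr)):
        for c in range(max(0, -dc), cols - max(0, dc)):
            m[img[r, c], img[r + dr, c + dc]] += 1
    return m / m.sum()

patch = np.array([[0, 0, 1],
                  [0, 1, 1],
                  [2, 2, 2]])
P = glcm(patch, levels=3)

# a typical GLCM feature: contrast = sum_ij P[i, j] * (i - j)^2
i, j = np.indices(P.shape)
contrast = (P * (i - j) ** 2).sum()
```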
{"title":"Motion-compensated 4DCT reconstruction from single-beat cardiac CT scans using convolutional networks.","authors":"Zhenyao Yan, Li Zhang, Quanzheng Li, Dufan Wu","doi":"10.1117/12.3005368","DOIUrl":"https://doi.org/10.1117/12.3005368","url":null,"abstract":"<p><p>We proposed a deep learning-based method for single-heartbeat 4D cardiac CT reconstruction, where a single cardiac cycle was split into multiple phases for reconstruction. First, we pre-reconstruct each phase using the projection data from itself and the neighboring phases. The pre-reconstructions are fed into a supervised registration network to generate the deformation fields between different phases. The deformation fields are trained so that they match the ground truth images from the corresponding phases. The deformation fields are then used in the FBP-and-warp method for motion-compensated reconstruction, where a subsequent network is used to remove residual artifacts. The proposed method was validated with simulation data from 40 4D cardiac CT scans and demonstrated improved RMSE and SSIM and less blurring compared to FBP and PICCS.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12925 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11555688/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142633813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
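The warping step at the heart of motion-compensated reconstruction can be sketched in a few lines: a dense deformation field maps each target-phase pixel to a location in a neighboring phase's pre-reconstruction, which is then resampled. The one-pixel shift field and nearest-neighbor resampling below are synthetic and purely illustrative; the paper's fields come from a supervised registration network, not from this toy.

```python
import numpy as np

# Backward warping with per-pixel displacements (nearest-neighbor sampling,
# edge-clamped). Real pipelines use differentiable bilinear/trilinear
# interpolation; this is the same idea at its simplest.

def warp(image, dy, dx):
    """Resample image at (r + dy, c + dx) for every output pixel (r, c)."""
    rows, cols = image.shape
    out = np.zeros_like(image)
    for r in range(rows):
        for c in range(cols):
            sr = min(max(int(round(r + dy[r, c])), 0), rows - 1)
            sc = min(max(int(round(c + dx[r, c])), 0), cols - 1)
            out[r, c] = image[sr, sc]
    return out

img = np.zeros((4, 4)); img[1:3, 2] = 1.0   # small vertical bar
dy = np.zeros((4, 4))
dx = np.ones((4, 4))                        # sample one pixel to the right
moved = warp(img, dy, dx)                   # the bar appears shifted left
```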
{"title":"Modeling human observer detection for varying data acquisition in undersampled MRI for two-alternative forced choice (2-AFC) and forced localization tasks.","authors":"Rehan Mehta, Tetsuya A Kawakita, Angel R Pineda","doi":"10.1117/12.3005839","DOIUrl":"10.1117/12.3005839","url":null,"abstract":"<p><p>Undersampling in the frequency domain (k-space) in MRI enables faster data acquisition. In this study, we used a fixed 1D undersampling factor of 5x with only 20% of the k-space collected. The fraction of fully acquired low k-space frequencies was varied from 0% (all aliasing) to 20% (all blurring). The images were reconstructed using a multi-coil SENSE algorithm. We used two-alternative forced choice (2-AFC) and forced localization tasks with a subtle signal to estimate human observer performance. The average human observer performance in the 2-AFC task remained fairly constant across all imaging conditions. The forced localization task performance improved from the 0% condition to the 2.5% condition and remained fairly constant for the remaining conditions, suggesting a decrease in task performance only in the pure aliasing situation. We modeled the average human performance using a sparse-difference of Gaussians (SDOG) Hotelling observer model. Because the blurring in the undersampling direction makes the mean signal asymmetric, we explored an adaptation for irregular signals that made the SDOG template asymmetric. To improve the observer performance, we also varied the number of SDOG channels from 3 to 4. We found that despite the asymmetry in the mean signal, both the symmetric and asymmetric models reasonably predicted the human performance in the 2-AFC experiments. However, the symmetric model performed slightly better. 
We also found that a symmetric SDOG model with 4 channels implemented using a spatial domain convolution and constrained to the possible signal locations reasonably modeled the forced localization human observer results.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12929 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11128320/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141155281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
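The 2-AFC mechanics can be summarized with a toy linear model observer: a template is applied to one signal-present and one signal-absent image per trial, and the trial is scored correct when the signal-present response is larger. The matched-filter template, image size, and noise level below are invented for illustration and are far simpler than the paper's SDOG channel model.

```python
import numpy as np

# Toy 2-AFC experiment with a linear (matched-filter) observer in white noise.
# Proportion correct approaches 1 as the signal gets stronger relative to noise.

rng = np.random.default_rng(0)
signal = np.zeros((8, 8))
signal[3:5, 3:5] = 1.0                      # subtle square signal
template = signal / np.linalg.norm(signal)  # matched filter

n_trials = 2000
correct = 0
for _ in range(n_trials):
    present = signal + rng.normal(0.0, 1.0, signal.shape)
    absent = rng.normal(0.0, 1.0, signal.shape)
    if (template * present).sum() > (template * absent).sum():
        correct += 1
pc = correct / n_trials   # proportion correct for this toy observer
```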
{"title":"Spatiotemporal Disentanglement of Arteriovenous Malformations in Digital Subtraction Angiography.","authors":"Kathleen Baur, Xin Xiong, Erickson Torio, Rose Du, Parikshit Juvekar, Reuben Dorent, Alexandra Golby, Sarah Frisken, Nazim Haouchine","doi":"10.1117/12.3006740","DOIUrl":"10.1117/12.3006740","url":null,"abstract":"<p><p>Although Digital Subtraction Angiography (DSA) is the most important imaging for visualizing cerebrovascular anatomy, its interpretation by clinicians remains difficult. This is particularly true when treating arteriovenous malformations (AVMs), where entangled vasculature connecting arteries and veins needs to be carefully identified. The presented method aims to enhance DSA image series by highlighting critical information via automatic classification of vessels using a combination of two learning models: An unsupervised machine learning method based on Independent Component Analysis that decomposes the phases of flow and a convolutional neural network that automatically delineates the vessels in image space. The proposed method was tested on clinical DSA images series and demonstrated efficient differentiation between arteries and veins that provides a viable solution to enhance visualizations for clinical use.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12926 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11330340/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142001526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
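The core idea above — that each pixel's time-intensity curve in a DSA series is a mixture of temporal flow sources — can be sketched without the full blind-source-separation machinery. Below, known arterial and venous curves are assumed and each pixel is unmixed by least squares; the paper instead learns the sources blindly with Independent Component Analysis. All curves and mixing weights here are synthetic.

```python
import numpy as np

# Decompose pixel time series into flow phases, assuming the temporal
# sources are known (a simplification of the ICA step in the paper).

t = np.arange(10, dtype=float)
arterial = np.exp(-0.5 * (t - 2.0) ** 2)   # early-peaking contrast curve
venous = np.exp(-0.5 * (t - 6.0) ** 2)     # late-peaking contrast curve
A = np.stack([arterial, venous], axis=1)   # (time, sources) mixing basis

# two synthetic pixels: one mostly artery, one mostly vein
truth = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
pixels = truth @ A.T                       # (pixels, time)

weights, *_ = np.linalg.lstsq(A, pixels.T, rcond=None)
labels = np.argmax(weights.T, axis=1)      # 0 = artery-like, 1 = vein-like
```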
{"title":"Cleaning and Harmonizing Medical Image Data for Reliable AI: Lessons Learned from Longitudinal Oral Cancer Natural History Study Data.","authors":"Zhiyun Xue, Tochi Oguguo, Kelly J Yu, Tseng-Cheng Chen, Chun-Hung Hua, Chung Jan Kang, Chih-Yen Chien, Ming-Hsui Tsai, Cheng-Ping Wang, Anil K Chaturvedi, Sameer Antani","doi":"10.1117/12.3005875","DOIUrl":"10.1117/12.3005875","url":null,"abstract":"<p><p>For deep learning-based machine learning, not only are large and sufficiently diverse data crucial, but their quality is equally important. However, in real-world applications, raw source data commonly contain incorrect, noisy, inconsistent, improperly formatted, and sometimes missing elements, particularly when the datasets are large and sourced from many sites. In this paper, we present our work towards preparing and making image data ready for the development of AI-driven approaches for studying various aspects of the natural history of oral cancer. Specifically, we focus on two aspects: 1) cleaning the image data; and 2) extracting the annotation information. Data cleaning includes removing duplicates, identifying missing data, correcting errors, standardizing data sets, and removing personal sensitive information, toward combining data sourced from different study sites. These steps are often collectively referred to as data harmonization. Annotation information extraction includes identifying crucial or valuable text manually entered by clinical providers in the image paths/names, and standardizing label text. Both are important for successful deep learning algorithm development and data analysis. Specifically, we provide details on the data under consideration, describe the challenges and issues we observed that motivated our work, and present the specific approaches and methods that we used to clean and standardize the image data and extract labelling information. 
Further, we discuss ways to increase the efficiency of the process and the lessons learned. Research ideas on automating the process with ML-driven techniques are also presented and discussed. Our intent in reporting and discussing such work in detail is to help provide insights into automating or, at a minimum, increasing the efficiency of these critical yet often under-reported processes.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12931 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11107840/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141077451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
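One of the cleaning steps listed above, removing exact duplicates, can be sketched with a content hash. The file names and bytes below are invented, and real harmonization pipelines also need near-duplicate detection and DICOM-metadata checks that a plain hash cannot provide.

```python
import hashlib

# Flag files whose bytes exactly repeat an earlier file, keeping the first
# occurrence. SHA-256 collisions are negligible for this purpose.

def find_duplicates(files):
    """files: dict of name -> bytes. Returns names repeating earlier content."""
    seen, dupes = {}, []
    for name, blob in files.items():
        digest = hashlib.sha256(blob).hexdigest()
        if digest in seen:
            dupes.append(name)       # duplicate of seen[digest]
        else:
            seen[digest] = name
    return dupes

files = {
    "siteA/img001.png": b"\x89PNGdata1",
    "siteB/img047.png": b"\x89PNGdata2",
    "siteB/img048.png": b"\x89PNGdata1",   # same bytes as siteA/img001.png
}
dupes = find_duplicates(files)
```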
{"title":"Adapting SAM to Histopathology Images for Tumor Bud Segmentation in Colorectal Cancer.","authors":"Ziyu Su, Wei Chen, Sony Annem, Usama Sajjad, Mostafa Rezapour, Wendy L Frankel, Metin N Gurcan, M Khalid Khan Niazi","doi":"10.1117/12.3006517","DOIUrl":"10.1117/12.3006517","url":null,"abstract":"<p><p>Colorectal cancer (CRC) is the third most common cancer in the United States. Tumor Budding (TB) detection and quantification are crucial yet labor-intensive steps in determining the CRC stage through the analysis of histopathology images. To help with this process, we adapt the Segment Anything Model (SAM) to CRC histopathology images to segment TBs using SAM-Adapter. In this approach, we automatically take task-specific prompts from CRC images and train the SAM model in a parameter-efficient way. We compare the predictions of our model with the predictions from a trained-from-scratch model using the annotations from a pathologist. As a result, our model achieves an intersection over union (IoU) of 0.65 and an instance-level Dice score of 0.75, which show promise in matching the pathologist's TB annotations. We believe our study offers a novel solution to identify TBs on H&E-stained histopathology images. 
Our study also demonstrates the value of adapting the foundation model for pathology image segmentation tasks.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12933 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11099868/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141066462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
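For reference, the two metrics reported above computed on binary masks; for a single pair of masks they are linked by Dice = 2·IoU/(1+IoU). The masks below are a toy example, not the paper's data.

```python
import numpy as np

# Intersection-over-union and Dice coefficient on boolean masks.

def iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return inter / np.logical_or(pred, gt).sum()

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())

pred = np.zeros((4, 4), dtype=bool); pred[0:2, 0:2] = True  # 4 pixels
gt = np.zeros((4, 4), dtype=bool); gt[0:2, 0:3] = True      # 6 pixels
# overlap is 4 pixels, union is 6: IoU = 4/6, Dice = 8/10
```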
{"title":"Hybrid spectral CT system with clinical rapid kVp-switching x-ray tube and dual-layer detector for improved iodine quantification.","authors":"Olivia F Sandvold, Roland Proksa, Heiner Daerr, Amy E Perkins, Kevin M Brown, Nadav Shapira, Thomas Koehler, J Webster Stayman, Grace J Gang, Ravindra M Manjeshwar, Peter B Noël","doi":"10.1117/12.3006451","DOIUrl":"10.1117/12.3006451","url":null,"abstract":"<p><p>Spectral computed tomography (CT) is a powerful diagnostic tool offering quantitative material decomposition results that enhance clinical imaging by providing physiologic and functional insights. Iodine, a widely used contrast agent, improves visualization in various clinical contexts. However, accurately detecting low-concentration iodine presents challenges for spectral CT systems, which is particularly crucial for conditions like pancreatic cancer assessment. In this study, we present preliminary results from our hybrid spectral CT instrumentation, which includes clinical-grade hardware (rapid kVp-switching x-ray tube, dual-layer detector). This combination expands spectral datasets from two to four channels, wherein we hypothesize improved quantification accuracy for low-dose and low-iodine-concentration cases. We modulate the system duty cycle to evaluate its impact on quantification noise and bias. We evaluate iodine quantification performance by comparing two hybrid weighting strategies alongside rapid kVp-switching. This evaluation is performed with a polyamide phantom containing seven iodine inserts ranging from 0.5 to 20 mg/mL. In comparison to alternative methodologies, the maximum separation configuration, incorporating data from both the 80 kVp, low photon energy detector layer and the 140 kVp, high photon energy detector layer, produces spectral images containing low quantitative noise and bias. 
This study presents initial evaluations on a hybrid spectral CT system, leveraging clinical hardware to demonstrate the potential for enhanced precision and sensitivity in spectral imaging. This research holds promise for advancing spectral CT imaging performance across diverse clinical scenarios.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12925 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11129556/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141157581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
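The hypothesized benefit of going from two to four spectral channels can be illustrated with a linear material-decomposition model: with more channels than basis materials, least squares averages down measurement noise in the iodine estimate. The attenuation coefficients, concentrations, and noise level below are invented for illustration and do not come from the paper's calibration.

```python
import numpy as np

# Monte Carlo comparison of iodine-estimate noise for a 2-channel vs a
# 4-channel linear spectral model. Rows = spectral channels, columns =
# (water, iodine) effective attenuation per unit amount (made-up values).

M2 = np.array([[0.20, 4.0],
               [0.15, 2.0]])                       # two channels
M4 = np.vstack([M2, [[0.22, 5.0], [0.14, 1.5]]])  # four channels

truth = np.array([30.0, 0.05])                    # water path, iodine amount

rng = np.random.default_rng(1)

def iodine_noise(M, n=2000, sigma=0.02):
    """Std of the iodine estimate under additive Gaussian channel noise."""
    errs = []
    for _ in range(n):
        y = M @ truth + rng.normal(0.0, sigma, M.shape[0])
        x, *_ = np.linalg.lstsq(M, y, rcond=None)
        errs.append(x[1] - truth[1])
    return float(np.std(errs))

noise2, noise4 = iodine_noise(M2), iodine_noise(M4)
```

Because adding rows can only increase M^T M, the least-squares variance of each parameter is non-increasing, so the four-channel estimate is the quieter one here.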
{"title":"Weakly-Supervised Detection of Bone Lesions in CT.","authors":"Tao Sheng, Tejas Sudharshan Mathai, Alexander Shieh, Ronald M Summers","doi":"10.1117/12.3008823","DOIUrl":"10.1117/12.3008823","url":null,"abstract":"<p><p>The skeletal region is one of the common sites of metastatic spread of breast and prostate cancer. CT is routinely used to measure the size of lesions in the bones. However, bone lesions can be difficult to spot due to the wide variations in their sizes, shapes, and appearances. Precise localization of such lesions would enable reliable tracking of interval changes (growth, shrinkage, or unchanged status). To that end, an automated technique to detect bone lesions is highly desirable. In this pilot work, we developed a pipeline to detect bone lesions (lytic, blastic, and mixed) in CT volumes via a proxy segmentation task. First, we used the bone lesions that were prospectively marked by radiologists in a few 2D slices of CT volumes and converted them into weak 3D segmentation masks. Then, we trained a 3D full-resolution nnUNet model using these weak 3D annotations to segment the lesions and thereby detect them. Our automated method detected bone lesions in CT with a precision of 96.7% and recall of 47.3% despite the use of incomplete and partial training data. 
To the best of our knowledge, we are the first to attempt the direct detection of bone lesions in CT via a proxy segmentation task.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12927 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11225794/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141556147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
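The reported operating point follows directly from detection counts. The counts below are invented to land near the quoted numbers for illustration; they are not taken from the paper.

```python
# Precision and recall from true positives, false positives, and false
# negatives. High precision with modest recall means few false alarms
# but many missed lesions.

def precision_recall(tp, fp, fn):
    return tp / (tp + fp), tp / (tp + fn)

# hypothetical counts: 29 detected lesions, 1 false alarm, 32 missed
p, r = precision_recall(tp=29, fp=1, fn=32)
```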
{"title":"Cell Spatial Analysis in Crohn's Disease: Unveiling Local Cell Arrangement Pattern with Graph-based Signatures.","authors":"Shunxing Bao, Sichen Zhu, Vasantha L Kolachala, Lucas W Remedios, Yeonjoo Hwang, Yutong Sun, Ruining Deng, Can Cui, Rendong Zhang, Yike Li, Jia Li, Joseph T Roland, Qi Liu, Ken S Lau, Subra Kugathasan, Peng Qiu, Keith T Wilson, Lori A Coburn, Bennett A Landman, Yuankai Huo","doi":"10.1117/12.3006675","DOIUrl":"10.1117/12.3006675","url":null,"abstract":"<p><p>Crohn's disease (CD) is a chronic and relapsing inflammatory condition that affects segments of the gastrointestinal tract. CD activity is determined by histological findings, particularly the density of neutrophils observed in Hematoxylin and Eosin (H&E)-stained images. However, understanding the broader morphometry and local cell arrangement beyond cell counting and tissue morphology remains challenging. To address this, we characterize six distinct cell types from H&E images and develop a novel approach for the local spatial signature of each cell. Specifically, we create a 10-cell neighborhood matrix, representing neighboring cell arrangements for each individual cell. Utilizing t-SNE for non-linear spatial projection in scatter-plot and Kernel Density Estimation contour-plot formats, our study examines patterns of differences in the cellular environment associated with the odds ratio of spatial patterns between active CD and control groups. This analysis is based on data collected at two research institutes. The findings reveal heterogeneous nearest-neighbor patterns, signifying distinct tendencies of cell clustering, with a particular focus on the rectum region. These variations underscore the impact of data heterogeneity on cell spatial arrangements in CD patients. Moreover, the spatial distribution disparities between the two research sites highlight the significance of collaborative efforts among healthcare organizations. 
All research analysis pipeline tools are available at https://github.com/MASILab/cellNN.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12933 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11415268/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142302981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
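The "10-cell neighborhood matrix" above can be sketched as a k-nearest-neighbor type count per cell: for each cell, find its 10 nearest neighbors and record how many belong to each cell type. The coordinates, cell count, and type labels below are synthetic; the paper derives its six cell types from H&E images before this step, and the released pipeline at the linked repository is the authoritative implementation.

```python
import numpy as np

# Per-cell local spatial signature: counts of cell types among the k
# nearest neighboring cells (brute-force distances; fine for small n).

def neighborhood_signature(coords, types, n_types, k=10):
    """Return an (n_cells, n_types) matrix of neighbor-type counts."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # a cell is not its own neighbor
    sig = np.zeros((n, n_types), dtype=int)
    for i in range(n):
        for j in np.argsort(d[i])[:k]:
            sig[i, types[j]] += 1
    return sig

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(30, 2))   # synthetic cell centroids
types = rng.integers(0, 6, size=30)          # six synthetic cell types
sig = neighborhood_signature(coords, types, n_types=6, k=10)
```

Each row of `sig` sums to k, so rows are directly comparable as local composition signatures before any t-SNE projection.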