{"title":"Performance Investigations of Two Channel Readout Configurations on the Cross-Strip Cadmium Zinc Telluride Detector","authors":"Emily Enlow;Yuli Wang;Greyson Shoop;Shiva Abbaszadeh","doi":"10.1109/TRPMS.2024.3411522","DOIUrl":"https://doi.org/10.1109/TRPMS.2024.3411522","url":null,"abstract":"In a detector system where the number of channels exceeds the number available on a single application-specific integrated circuit (ASIC), the channels must be distributed among multiple ASICs so as to achieve the lowest electronic noise and highest count rate. In this work, two board configurations were designed to experimentally assess which provides the more favorable performance. In the half-half configuration, contiguous channels from one edge to the center of the CZT detector are read by one ASIC, and the other half are read by the other ASIC. In the alternate configuration, the CZT channels are read by alternating ASICs. A lower electronic noise level, better FWHM energy resolution (5.35% <inline-formula> <tex-math>$\pm ~1.08$ </tex-math></inline-formula>% compared to 7.84% <inline-formula> <tex-math>$\pm ~0.98$ </tex-math></inline-formula>%), and a higher count rate were found for the anode electrode strips with the half-half configuration. 
Cross-talk between the ASICs and deadtime both contribute to the performance difference, and the total count rate of the half-half configuration is 18.1% higher than that of the alternate configuration.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 8","pages":"886-892"},"PeriodicalIF":4.6,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142587583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"XGenRecon: A New Perspective in Ultrasparse Volumetric CBCT Reconstruction Through Geometry-Controlled X-Ray Projection Generation","authors":"Chulong Zhang;Yaoqin Xie;Xiaokun Liang","doi":"10.1109/TRPMS.2024.3420742","DOIUrl":"https://doi.org/10.1109/TRPMS.2024.3420742","url":null,"abstract":"We propose a novel paradigm for cone-beam computed tomography (CBCT) reconstruction from ultrasparse X-ray projections, introducing a framework that generates auxiliary X-ray projections under controlled geometric parameters. This innovation overcomes the limitation of conventional methods, which are constrained to producing fixed-angle projections. Our approach is organized into three key modules: 1) the XGen module; 2) the X-Correction module; and 3) the CT-Correction module. Through the XGen module, we generate projections for any given geometric parameters to supplement the geometric information in the projection domain. The X-Correction module then applies geometric corrections to harmonize the generated projections. Finally, the CT-Correction module refines the reconstructed image, enhancing image quality in the image domain. We validated our model on several datasets, including a large-scale publicly available lung CT dataset (LIDC-IDRI, with 1018 patients); an extensive abdominal CT dataset (AbdomenCT-1K, with 1000 selected patients); and our proprietary pelvic CT dataset collated from a hospital (445 patients). Real walnut projection data were also incorporated for validation on genuine projections. 
Compared to the traditional projection generation methods and the state-of-the-art ultrasparse reconstruction techniques on 2-view and 10-view tasks, our method has demonstrated consistently superior performance across various tasks.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"9 1","pages":"95-106"},"PeriodicalIF":4.6,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142912533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MAG-Net: A Multiscale Adaptive Generation Network for PET Synthetic CT","authors":"Huabin Wang;Zongguang Li;Xianjun Han;Gong Zhang;Qiang Zhang;Dailei Zhang;Fei Liu","doi":"10.1109/TRPMS.2024.3418831","DOIUrl":"https://doi.org/10.1109/TRPMS.2024.3418831","url":null,"abstract":"In traditional positron emission tomography (PET)/computed tomography (CT) imaging, CT can be used to accurately display the anatomical structure of lesions. However, CT is not available in a standalone brain PET imaging system. Therefore, this article proposes a novel generation network (MAG-Net) for generating CT images with clear morphological details from PET. MAG-Net has three distinctive features: 1) a parallel multiscale adaptive module extracts robust PET features, improving the quality of the generated images at various resolutions; 2) a binarized contour mask module constrains the generation of the synthetic CT, guiding the model to focus on generating finer CT texture details; and 3) a pixel-level feature encoder reduces pixelwise differences and improves the accuracy of the generated CT by mapping the position information of CT tissues and structures to the corresponding bright and dark areas. Experimental results on the SCHERI dataset show that the structural similarity and PSNR of the generated images relative to real CT images reach 0.909 and 26.386, respectively. 
Visualization experiments show that the generated CT has clear texture details and a realistic morphological structure, bringing the standalone brain PET imaging system close to a PET/CT imaging system.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"9 1","pages":"83-94"},"PeriodicalIF":4.6,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142912395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Co-Learning Multimodality PET-CT Features via a Cascaded CNN-Transformer Network","authors":"Lei Bi;Xiaohang Fu;Qiufang Liu;Shaoli Song;David Dagan Feng;Michael Fulham;Jinman Kim","doi":"10.1109/TRPMS.2024.3417901","DOIUrl":"https://doi.org/10.1109/TRPMS.2024.3417901","url":null,"abstract":"<i>Background:</i> Automated segmentation of multimodality positron emission tomography-computed tomography (PET-CT) data is a major challenge in the development of computer-aided diagnosis systems (CADs). In this context, convolutional neural network (CNN)-based methods are considered the state-of-the-art. These CNN-based methods, however, have difficulty co-learning the complementary PET-CT image features and learning the global context when focusing solely on local patterns. <i>Methods:</i> We propose a cascaded CNN-transformer network (CCNN-TN) tailored for PET-CT image segmentation. We employed a transformer network (TN) because of its ability to establish global context via self-attention and image patch embedding. We extended the TN design by cascading multiple TNs and CNNs to learn the global and local contexts. We also introduced a hyper fusion branch that iteratively fuses the separately extracted complementary image features. We evaluated our approach against current state-of-the-art CNN methods on three datasets: two non-small cell lung cancer (NSCLC) datasets and one soft tissue sarcoma (STS) dataset. <i>Results:</i> Our CCNN-TN method achieved Dice similarity coefficient (DSC) scores of 72.25% (NSCLC), 67.11% (NSCLC), and 66.36% (STS) for tumor segmentation. Compared to the other methods, the DSC of our CCNN-TN was higher by 4.5%, 1.31%, and 3.44%, respectively. 
<i>Conclusion:</i> Our experimental results demonstrate that CCNN-TN, compared to existing methods, achieves more generalizable results across different datasets and consistent performance across various image fusion strategies and network backbones.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 7","pages":"814-825"},"PeriodicalIF":4.6,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142143751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward Sub-100 ps TOF-PET Systems Employing the FastIC ASIC With Analog SiPMs","authors":"A. Mariscal-Castilla;S. Gómez;R. Manera;J. M. Fernández-Tenllado;J. Mauricio;N. Kratochwil;J. Alozy;M. Piller;S. Portero;A. Sanuy;D. Guberman;J. J. Silva;E. Auffray;R. Ballabriga;G. Ariño-Estrada;M. Campbell;D. Gascón","doi":"10.1109/TRPMS.2024.3414578","DOIUrl":"https://doi.org/10.1109/TRPMS.2024.3414578","url":null,"abstract":"Time-of-flight positron emission tomography (TOF-PET) scanners demand electronics that are power-efficient, low-noise, cost-effective, and wide-bandwidth. Recent developments have demonstrated sub-100 ps time resolution, but at an elevated power consumption per channel that makes building a full scanner unfeasible. In this work, we evaluate the TOF-PET performance of the FastIC front-end using different scintillators and silicon photomultipliers (SiPMs). FastIC is an eight-channel application-specific integrated circuit developed in 65-nm CMOS, capable of measuring the energy and arrival time of a detected pulse at 12 mW per channel. Using Hamamatsu SiPMs (S13360-3050PE) coupled to LSO:Ce:0.2%Ca crystals of <inline-formula> <tex-math>$2\times 2\times 3$ </tex-math></inline-formula> mm<sup>3</sup> and LYSO:Ce:0.2%Ca crystals of <inline-formula> <tex-math>$3.13\times 3.13\times 20$ </tex-math></inline-formula> mm<sup>3</sup>, we measured a coincidence time resolution (CTR) of (<inline-formula> <tex-math>$95~\pm ~3$ </tex-math></inline-formula>) and (<inline-formula> <tex-math>$156~\pm ~4$ </tex-math></inline-formula>) ps full width at half maximum (FWHM), respectively. With Fondazione Bruno Kessler NUV-HD LF2 M0 SiPMs coupled to the same crystals, we obtained a CTR of (<inline-formula> <tex-math>$76~\pm ~2$ </tex-math></inline-formula>) and (<inline-formula> <tex-math>$127~\pm ~3$ </tex-math></inline-formula>) ps FWHM. 
We also employed the FastIC with a TlCl pure Cherenkov emitter, demonstrating time resolutions comparable to those achieved with high-power-consumption electronics. These findings show that the FastIC represents a cost-effective alternative that can significantly enhance the time resolution of current TOF-PET systems while maintaining low power consumption.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 7","pages":"718-733"},"PeriodicalIF":4.6,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10557761","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142143649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accurate Whole-Brain Segmentation for Bimodal PET/MR Images via a Cross-Attention Mechanism","authors":"Wenbo Li;Zhenxing Huang;Qiyang Zhang;Na Zhang;Wenjie Zhao;Yaping Wu;Jianmin Yuan;Yang Yang;Yan Zhang;Yongfeng Yang;Hairong Zheng;Dong Liang;Meiyun Wang;Zhanli Hu","doi":"10.1109/TRPMS.2024.3413862","DOIUrl":"https://doi.org/10.1109/TRPMS.2024.3413862","url":null,"abstract":"The PET/MRI system plays a significant role in the functional and anatomical quantification of the brain, providing accurate diagnostic data for a variety of brain disorders. However, most current methods for segmenting the brain are based on unimodal MRI and rarely combine structural and functional dual-modality information. Therefore, we aimed to employ deep-learning techniques to achieve automatic and accurate segmentation of the whole brain while incorporating both functional and anatomical information. To leverage the dual-modality information, a novel 3-D network with a cross-attention module was proposed to capture the correlation between dual-modality features and improve segmentation accuracy. Several deep-learning methods were employed as comparison baselines to evaluate model performance, with the Dice similarity coefficient (DSC), Jaccard index (JAC), recall, and precision serving as quantitative metrics. Experimental results demonstrated the advantages of our approach in whole-brain segmentation, achieving an 85.35% DSC, 77.22% JAC, 88.86% recall, and 84.81% precision, outperforming the comparative methods. In addition, consistency and correlation analyses based on the segmentation results also demonstrated the superior performance of our approach. 
In future work, we will try to apply our method to other multimodal tasks, such as PET/CT data analysis.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"9 1","pages":"47-56"},"PeriodicalIF":4.6,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142912440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PPFM: Image Denoising in Photon-Counting CT Using Single-Step Posterior Sampling Poisson Flow Generative Models","authors":"Dennis Hein;Staffan Holmin;Timothy Szczykutowicz;Jonathan S. Maltz;Mats Danielsson;Ge Wang;Mats Persson","doi":"10.1109/TRPMS.2024.3410092","DOIUrl":"https://doi.org/10.1109/TRPMS.2024.3410092","url":null,"abstract":"Diffusion and Poisson flow models have shown impressive performance in a wide range of generative tasks, including low-dose CT (LDCT) image denoising. However, one limitation in general, and for clinical applications in particular, is slow sampling. Due to their iterative nature, the number of function evaluations (NFEs) required is usually on the order of <inline-formula> <tex-math>$10-10^{3}$ </tex-math></inline-formula>, for both conditional and unconditional generation. In this article, we present the posterior sampling Poisson flow generative model (PPFM), a novel image denoising technique for low-dose and photon-counting CT that produces excellent image quality while keeping NFE = 1. By updating the training and sampling processes of Poisson flow generative models++ (PFGM++), we learn a conditional generator that defines a trajectory between the prior noise distribution and the posterior distribution of interest. We additionally hijack and regularize the sampling process to achieve NFE = 1. Our results shed light on the benefits of the PFGM++ framework compared to diffusion models. 
In addition, PPFM is shown to perform favorably compared to current state-of-the-art diffusion-style models with NFE = 1 and consistency models, as well as popular deep-learning and non-deep-learning-based image denoising techniques, on clinical LDCT images and on clinical images from a prototype photon-counting CT system.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 7","pages":"788-799"},"PeriodicalIF":4.6,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10554640","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142143752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"First Results of the 4D-PET Brain System","authors":"Andrea Gonzalez-Montoro;Santiago Jiménez-Serrano;Jorge Álamo;Julio Barberá;Alejandro Lucero;Neus Cucarella;Karel Díaz;Marta Freire;Antonio J. Gonzalez;Laura Moliner;Álvaro Mondejar;Constantino Morera-Ballester;John Prior;David Sánchez;Jose M. Benlloch","doi":"10.1109/TRPMS.2024.3412798","DOIUrl":"https://doi.org/10.1109/TRPMS.2024.3412798","url":null,"abstract":"Positron emission tomography (PET) imaging is the molecular technique of choice for studying many illnesses, including those related to the brain. Nevertheless, the use of PET scanners in neurology is limited by several factors, such as their restricted availability for brain imaging, owing to the high oncology demand for PET, and the low sensitivity and poor spatial resolution of standard PET scanners in the brain. To expand PET applications in neurology, brain-specific systems with increased clinical and physical sensitivity and higher spatial resolution are required. The present work reports on the design and development of a compact dedicated PET scanner suitable for human brain imaging. This article includes the description and experimental validation of the detector components and their implementation in a full-size system called 4D-PET. The detector has been designed to simultaneously provide photon depth of interaction (DOI) and time of flight (TOF) information. It is based on semi-monolithic LYSO modules optically coupled to silicon photomultipliers (SiPMs) and connected to a multiplexing readout. The analog output signals are fed to PETsys TOFPET2 application-specific integrated circuits (ASICs), enabling a scalable readout. 
The evaluation of the 4D-PET modules resulted in average detector resolutions of <inline-formula> <tex-math>$2.1\pm 1.0$ </tex-math></inline-formula> mm, <inline-formula> <tex-math>$3.4\pm 1.8$ </tex-math></inline-formula> mm, and <inline-formula> <tex-math>$386\pm 9$ </tex-math></inline-formula> ps for the y (transaxial) direction, DOI, and TOF coincidence time resolution (CTR), respectively. The preliminary 4D-PET imaging performance is reported through simulations and, for the first time, through real reconstructed images (acquired at La Fe Hospital, Valencia).","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 7","pages":"839-849"},"PeriodicalIF":4.6,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10554551","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142143720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance Evaluation of a Mobile Digital Tomosynthesis System Using a Moving CNT-Based Tube Array for Extremity Scans","authors":"Mikiko Ito;Dahea Han;Tae-Hyung Kim;Young-Tae Kim;Sungeun Lee;Jeongtae Soh;Young-Jun Jung;Byungkee Lee","doi":"10.1109/TRPMS.2024.3408870","DOIUrl":"https://doi.org/10.1109/TRPMS.2024.3408870","url":null,"abstract":"Digital tomosynthesis (DTS) can enhance diagnostic accuracy by providing 3-D volume images at a remarkably low X-ray dose. The aim of this study is to provide an initial assessment of the image quality and X-ray dose of a mobile DTS system employing a moving carbon-nanotube (CNT)-based digital X-ray source array and a fixed detector for extremity scans. This design allows the source-to-detector distance (SDD) to be reduced to only 400 mm, thereby enabling a compact and highly mobile system. We first measured the entrance surface dose (ESD), defined as the sum of the X-ray doses delivered by the individual projections, using a dosimeter placed at the center of the X-ray detector. The ESDs obtained for the hand, foot, and knee scan configurations were 0.15, 0.22, and 0.43 mGy, respectively, comparable to those of 2-D radiography exposures. To evaluate the reconstructed image quality, the in-plane modulation transfer function (MTF), <i>Z</i>-resolution, geometric distortion, and image homogeneity were assessed using wire, sphere, and PMMA phantoms. The reconstructed images of hand, ankle, and knee phantoms were evaluated qualitatively. 
The results of the evaluation demonstrate the successful development of the mobile DTS system proposed in this article.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 7","pages":"826-838"},"PeriodicalIF":4.6,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142143750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pyramid Convolutional Recurrent Network for Serial Medical Image Registration With Adaptive Motion Regularizations","authors":"Jiayi Lu;Renchao Jin;Enmin Song","doi":"10.1109/TRPMS.2024.3410021","DOIUrl":"https://doi.org/10.1109/TRPMS.2024.3410021","url":null,"abstract":"<i>Objective:</i> Serial medical image registration plays an important role in radiation therapy treatment planning. However, current deep learning-based deformable registration models suffer from excessive resource consumption and suboptimal precision. Moreover, a global regularization term may produce unrealistic deformations, owing to displacement field noise and the omission of intertissue sliding motion. <i>Methods:</i> This article proposes a patch-based pyramid convolutional recurrent neural network (pyramid CRNet) for serial medical image registration. Patch-wise training is employed to alleviate resource constraints. Incorporating spatiotemporal features across multiple scales allows the network to focus on finer details and improves accuracy. Moreover, two motion-adaptive techniques are introduced to provide anatomically plausible displacement fields. The first uses a guided filter to reduce noise and maintain motion continuity within organs. The second introduces a pixel-wise weight regularization term in the loss function to provide a tailored solution for distinctive tissue characteristics, especially sliding motion at organ boundaries. <i>Results:</i> Experiments were conducted on lung 4DCT images and cardiac cine MR images. Quantitative and qualitative results demonstrate that our method can align anatomical structures across multiple images in a physiologically sensible manner. 
<i>Conclusion:</i> The significance of this work lies in its potential to address pressing challenges in clinical applications, and further investigations could extend it to different modalities and dimensions.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 7","pages":"800-813"},"PeriodicalIF":4.6,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142143653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}