arXiv - EE - Image and Video Processing: Latest Articles

Adaptive Selection of Sampling-Reconstruction in Fourier Compressed Sensing
arXiv - EE - Image and Video Processing Pub Date: 2024-09-18 DOI: arxiv-2409.11738
Seongmin Hong, Jaehyeok Bae, Jongho Lee, Se Young Chun
{"title":"Adaptive Selection of Sampling-Reconstruction in Fourier Compressed Sensing","authors":"Seongmin Hong, Jaehyeok Bae, Jongho Lee, Se Young Chun","doi":"arxiv-2409.11738","DOIUrl":"https://doi.org/arxiv-2409.11738","url":null,"abstract":"Compressed sensing (CS) has emerged to overcome the inefficiency of Nyquist\u0000sampling. However, traditional optimization-based reconstruction is slow and\u0000can not yield an exact image in practice. Deep learning-based reconstruction\u0000has been a promising alternative to optimization-based reconstruction,\u0000outperforming it in accuracy and computation speed. Finding an efficient\u0000sampling method with deep learning-based reconstruction, especially for Fourier\u0000CS remains a challenge. Existing joint optimization of sampling-reconstruction\u0000works (H1) optimize the sampling mask but have low potential as it is not\u0000adaptive to each data point. Adaptive sampling (H2) has also disadvantages of\u0000difficult optimization and Pareto sub-optimality. Here, we propose a novel\u0000adaptive selection of sampling-reconstruction (H1.5) framework that selects the\u0000best sampling mask and reconstruction network for each input data. We provide\u0000theorems that our method has a higher potential than H1 and effectively solves\u0000the Pareto sub-optimality problem in sampling-reconstruction by using separate\u0000reconstruction networks for different sampling masks. To select the best\u0000sampling mask, we propose to quantify the high-frequency Bayesian uncertainty\u0000of the input, using a super-resolution space generation model. Our method\u0000outperforms joint optimization of sampling-reconstruction (H1) and adaptive\u0000sampling (H2) by achieving significant improvements on several Fourier CS\u0000problems.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142269820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
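To make the H1.5 selection idea concrete, here is a minimal, hypothetical PyTorch sketch: K sampling masks are paired with K dedicated reconstruction networks, and a per-input high-frequency score routes each image to one pair. The score is a crude stand-in for the paper's Bayesian uncertainty (which uses a super-resolution space generation model), and the masks, network, and binning rule are all illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

K = 3  # number of candidate (mask, network) pairs

class ReconNet(nn.Module):
    """Toy stand-in for one mask-specific reconstruction network."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, x):
        return self.body(x)

def high_freq_score(img):
    # Proxy for the paper's high-frequency Bayesian uncertainty:
    # mean magnitude of the high-frequency Fourier coefficients.
    spec = torch.fft.fftshift(torch.fft.fft2(img))
    h, w = spec.shape[-2:]
    lowpass = torch.zeros(h, w)
    lowpass[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 1.0
    return (spec.abs() * (1.0 - lowpass)).mean().item()

masks = [(torch.rand(64, 64) < p).float() for p in (0.1, 0.25, 0.5)]
nets = [ReconNet() for _ in range(K)]

def adaptive_reconstruct(img):
    # Route the input to one (mask, network) pair based on its score.
    k = min(int(high_freq_score(img) / 0.1), K - 1)  # toy binning rule
    undersampled = torch.fft.ifft2(torch.fft.fft2(img) * masks[k]).real
    return nets[k](undersampled[None, None])

print(adaptive_reconstruct(torch.rand(64, 64)).shape)  # (1, 1, 64, 64)
```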
Tumor aware recurrent inter-patient deformable image registration of computed tomography scans with lung cancer
arXiv - EE - Image and Video Processing Pub Date: 2024-09-18 DOI: arxiv-2409.11910
Jue Jiang, Chloe Min Seo Choi, Maria Thor, Joseph O. Deasy, Harini Veeraraghavan
{"title":"Tumor aware recurrent inter-patient deformable image registration of computed tomography scans with lung cancer","authors":"Jue Jiang, Chloe Min Seo Choi, Maria Thor, Joseph O. Deasy, Harini Veeraraghavan","doi":"arxiv-2409.11910","DOIUrl":"https://doi.org/arxiv-2409.11910","url":null,"abstract":"Background: Voxel-based analysis (VBA) for population level radiotherapy (RT)\u0000outcomes modeling requires topology preserving inter-patient deformable image\u0000registration (DIR) that preserves tumors on moving images while avoiding\u0000unrealistic deformations due to tumors occurring on fixed images. Purpose: We\u0000developed a tumor-aware recurrent registration (TRACER) deep learning (DL)\u0000method and evaluated its suitability for VBA. Methods: TRACER consists of\u0000encoder layers implemented with stacked 3D convolutional long short term memory\u0000network (3D-CLSTM) followed by decoder and spatial transform layers to compute\u0000dense deformation vector field (DVF). Multiple CLSTM steps are used to compute\u0000a progressive sequence of deformations. Input conditioning was applied by\u0000including tumor segmentations with 3D image pairs as input channels.\u0000Bidirectional tumor rigidity, image similarity, and deformation smoothness\u0000losses were used to optimize the network in an unsupervised manner. TRACER and\u0000multiple DL methods were trained with 204 3D CT image pairs from patients with\u0000lung cancers (LC) and evaluated using (a) Dataset I (N = 308 pairs) with DL\u0000segmented LCs, (b) Dataset II (N = 765 pairs) with manually delineated LCs, and\u0000(c) Dataset III with 42 LC patients treated with RT. Results: TRACER accurately\u0000aligned normal tissues. It best preserved tumors, blackindicated by the\u0000smallest tumor volume difference of 0.24%, 0.40%, and 0.13 % and mean square\u0000error in CT intensities of 0.005, 0.005, 0.004, computed between original and\u0000resampled moving image tumors, for Datasets I, II, and III, respectively. It\u0000resulted in the smallest planned RT tumor dose difference computed between\u0000original and resampled moving images of 0.01 Gy and 0.013 Gy when using a\u0000female and a male reference.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
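The abstract names three unsupervised training terms: image similarity, deformation smoothness, and tumor rigidity. The sketch below combines them into one objective; the MSE similarity, zero-deformation rigidity proxy, and loss weights are assumptions for illustration, not TRACER's exact losses.

```python
import torch
import torch.nn.functional as F

def smoothness_loss(dvf):            # dvf: (B, 3, D, H, W) deformation field
    # Penalize spatial gradients of the deformation vector field.
    dz = (dvf[:, :, 1:] - dvf[:, :, :-1]).pow(2).mean()
    dy = (dvf[:, :, :, 1:] - dvf[:, :, :, :-1]).pow(2).mean()
    dx = (dvf[:, :, :, :, 1:] - dvf[:, :, :, :, :-1]).pow(2).mean()
    return dx + dy + dz

def rigidity_loss(dvf, tumor_mask):  # tumor_mask: (B, 1, D, H, W) in {0, 1}
    # Discourage deformation inside the tumor so its volume is preserved
    # (a simple proxy for the paper's bidirectional tumor rigidity loss).
    return (dvf.pow(2) * tumor_mask).sum() / (tumor_mask.sum() + 1e-8)

def registration_loss(warped, fixed, dvf, tumor_mask,
                      w_sim=1.0, w_smooth=0.1, w_rigid=1.0):
    sim = F.mse_loss(warped, fixed)  # image similarity (MSE as a stand-in)
    return (w_sim * sim + w_smooth * smoothness_loss(dvf)
            + w_rigid * rigidity_loss(dvf, tumor_mask))

B, D, H, W = 1, 16, 32, 32           # toy shapes; tumor mask empty here
loss = registration_loss(torch.rand(B, 1, D, H, W), torch.rand(B, 1, D, H, W),
                         torch.randn(B, 3, D, H, W) * 0.01,
                         torch.zeros(B, 1, D, H, W))
print(loss)
```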
Cross-Organ and Cross-Scanner Adenocarcinoma Segmentation using Rein to Fine-tune Vision Foundation Models
arXiv - EE - Image and Video Processing Pub Date: 2024-09-18 DOI: arxiv-2409.11752
Pengzhou Cai, Xueyuan Zhang, Ze Zhao
{"title":"Cross-Organ and Cross-Scanner Adenocarcinoma Segmentation using Rein to Fine-tune Vision Foundation Models","authors":"Pengzhou Cai, Xueyuan Zhang, Ze Zhao","doi":"arxiv-2409.11752","DOIUrl":"https://doi.org/arxiv-2409.11752","url":null,"abstract":"In recent years, significant progress has been made in tumor segmentation\u0000within the field of digital pathology. However, variations in organs, tissue\u0000preparation methods, and image acquisition processes can lead to domain\u0000discrepancies among digital pathology images. To address this problem, in this\u0000paper, we use Rein, a fine-tuning method, to parametrically and efficiently\u0000fine-tune various vision foundation models (VFMs) for MICCAI 2024 Cross-Organ\u0000and Cross-Scanner Adenocarcinoma Segmentation (COSAS2024). The core of Rein\u0000consists of a set of learnable tokens, which are directly linked to instances,\u0000improving functionality at the instance level in each layer. In the data\u0000environment of the COSAS2024 Challenge, extensive experiments demonstrate that\u0000Rein fine-tuned the VFMs to achieve satisfactory results. Specifically, we used\u0000Rein to fine-tune ConvNeXt and DINOv2. Our team used the former to achieve\u0000scores of 0.7719 and 0.7557 on the preliminary test phase and final test phase\u0000in task1, respectively, while the latter achieved scores of 0.8848 and 0.8192\u0000on the preliminary test phase and final test phase in task2. Code is available\u0000at GitHub.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
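As a rough illustration of the Rein idea the abstract describes (learnable tokens that refine frozen foundation-model features at each layer), here is a speculative sketch; the token count, attention-style update, and residual form are assumptions rather than the official Rein implementation.

```python
import torch
import torch.nn as nn

class ReinAdapter(nn.Module):
    """Assumed form: learnable tokens refine frozen per-layer features."""
    def __init__(self, dim: int, n_tokens: int = 16):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(n_tokens, dim) * 0.02)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, dim) patch features from a frozen VFM layer.
        attn = torch.softmax(feats @ self.tokens.T, dim=-1)  # (B, N, n_tokens)
        delta = attn @ self.tokens                           # (B, N, dim)
        return feats + delta   # residual, instance-conditioned refinement

# Only adapter parameters would be trained; the VFM itself stays frozen.
feats = torch.randn(2, 196, 768)
print(ReinAdapter(768)(feats).shape)  # torch.Size([2, 196, 768])
```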
Hyperspectral Image Classification Based on Faster Residual Multi-branch Spiking Neural Network
arXiv - EE - Image and Video Processing Pub Date: 2024-09-18 DOI: arxiv-2409.11619
Yang Liu, Yahui Li, Rui Li, Liming Zhou, Lanxue Dang, Huiyu Mu, Qiang Ge
{"title":"Hyperspectral Image Classification Based on Faster Residual Multi-branch Spiking Neural Network","authors":"Yang Liu, Yahui Li, Rui Li, Liming Zhou, Lanxue Dang, Huiyu Mu, Qiang Ge","doi":"arxiv-2409.11619","DOIUrl":"https://doi.org/arxiv-2409.11619","url":null,"abstract":"Convolutional neural network (CNN) performs well in Hyperspectral Image (HSI)\u0000classification tasks, but its high energy consumption and complex network\u0000structure make it difficult to directly apply it to edge computing devices. At\u0000present, spiking neural networks (SNN) have developed rapidly in HSI\u0000classification tasks due to their low energy consumption and event driven\u0000characteristics. However, it usually requires a longer time step to achieve\u0000optimal accuracy. In response to the above problems, this paper builds a\u0000spiking neural network (SNN-SWMR) based on the leaky integrate-and-fire (LIF)\u0000neuron model for HSI classification tasks. The network uses the spiking width\u0000mixed residual (SWMR) module as the basic unit to perform feature extraction\u0000operations. The spiking width mixed residual module is composed of spiking\u0000mixed convolution (SMC), which can effectively extract spatial-spectral\u0000features. Secondly, this paper designs a simple and efficient arcsine\u0000approximate derivative (AAD), which solves the non-differentiable problem of\u0000spike firing by fitting the Dirac function. Through AAD, we can directly train\u0000supervised spike neural networks. Finally, this paper conducts comparative\u0000experiments with multiple advanced HSI classification algorithms based on\u0000spiking neural networks on six public hyperspectral data sets. Experimental\u0000results show that the AAD function has strong robustness and a good fitting\u0000effect. Meanwhile, compared with other algorithms, SNN-SWMR requires a time\u0000step reduction of about 84%, training time, and testing time reduction of about\u000063% and 70% at the same accuracy. This study solves the key problem of SNN\u0000based HSI classification algorithms, which has important practical significance\u0000for promoting the practical application of HSI classification algorithms in\u0000edge devices such as spaceborne and airborne devices.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
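The abstract's key trick is the arcsine approximate derivative (AAD), a smooth surrogate for the Dirac delta that makes spike firing trainable. The exact AAD formula is not given in the abstract, so the sketch below uses the derivative of an arcsine-based surrogate as an assumed stand-in inside a single LIF update.

```python
import math
import torch

class SpikeAAD(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0).float()          # Heaviside spike at threshold 0

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Assumed AAD-style surrogate: d/dv [arcsin(a*v) / pi], clamped
        # away from |a*v| = 1 so the derivative stays finite.
        a = 2.0
        x = torch.clamp(a * v, -0.99, 0.99)
        surrogate = a / (math.pi * torch.sqrt(1.0 - x * x))
        return grad_out * surrogate

def lif_step(v, x, tau=2.0, v_th=1.0):
    # One leaky integrate-and-fire update; spikes reset the membrane.
    v = v + (x - v) / tau
    spike = SpikeAAD.apply(v - v_th)
    return v * (1 - spike), spike

v = torch.zeros(4)
x = torch.rand(4, requires_grad=True)
v, s = lif_step(v, x)
s.sum().backward()                        # gradients flow through the AAD
print(x.grad)
```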
Few-Shot Domain Adaptation for Learned Image Compression
arXiv - EE - Image and Video Processing Pub Date: 2024-09-17 DOI: arxiv-2409.11111
Tianyu Zhang, Haotian Zhang, Yuqi Li, Li Li, Dong Liu
{"title":"Few-Shot Domain Adaptation for Learned Image Compression","authors":"Tianyu Zhang, Haotian Zhang, Yuqi Li, Li Li, Dong Liu","doi":"arxiv-2409.11111","DOIUrl":"https://doi.org/arxiv-2409.11111","url":null,"abstract":"Learned image compression (LIC) has achieved state-of-the-art rate-distortion\u0000performance, deemed promising for next-generation image compression techniques.\u0000However, pre-trained LIC models usually suffer from significant performance\u0000degradation when applied to out-of-training-domain images, implying their poor\u0000generalization capabilities. To tackle this problem, we propose a few-shot\u0000domain adaptation method for LIC by integrating plug-and-play adapters into\u0000pre-trained models. Drawing inspiration from the analogy between latent\u0000channels and frequency components, we examine domain gaps in LIC and observe\u0000that out-of-training-domain images disrupt pre-trained channel-wise\u0000decomposition. Consequently, we introduce a method for channel-wise\u0000re-allocation using convolution-based adapters and low-rank adapters, which are\u0000lightweight and compatible to mainstream LIC schemes. Extensive experiments\u0000across multiple domains and multiple representative LIC schemes demonstrate\u0000that our method significantly enhances pre-trained models, achieving comparable\u0000performance to H.266/VVC intra coding with merely 25 target-domain samples.\u0000Additionally, our method matches the performance of full-model finetune while\u0000transmitting fewer than $2%$ of the parameters.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142269824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
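A speculative sketch of the low-rank, channel-wise adapter idea: a 1x1 convolution pair re-mixes (re-allocates) latent channels as a residual on top of a frozen LIC layer, so only a tiny parameter set needs training and transmitting. The rank, placement, and zero initialization are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class LowRankChannelAdapter(nn.Module):
    def __init__(self, channels: int, rank: int = 4):
        super().__init__()
        self.down = nn.Conv2d(channels, rank, kernel_size=1, bias=False)
        self.up = nn.Conv2d(rank, channels, kernel_size=1, bias=False)
        nn.init.zeros_(self.up.weight)    # start as identity (zero residual)

    def forward(self, x):
        return x + self.up(self.down(x))  # residual channel re-allocation

frozen = nn.Conv2d(192, 192, 3, padding=1)   # stand-in for a LIC layer
for p in frozen.parameters():
    p.requires_grad = False

adapter = LowRankChannelAdapter(192, rank=4) # only these parameters train
y = adapter(frozen(torch.randn(1, 192, 16, 16)))
n_trainable = sum(p.numel() for p in adapter.parameters())
print(y.shape, n_trainable)                  # tiny fraction of the full model
```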
Multi-Cohort Framework with Cohort-Aware Attention and Adversarial Mutual-Information Minimization for Whole Slide Image Classification
arXiv - EE - Image and Video Processing Pub Date: 2024-09-17 DOI: arxiv-2409.11119
Sharon Peled, Yosef E. Maruvka, Moti Freiman
{"title":"Multi-Cohort Framework with Cohort-Aware Attention and Adversarial Mutual-Information Minimization for Whole Slide Image Classification","authors":"Sharon Peled, Yosef E. Maruvka, Moti Freiman","doi":"arxiv-2409.11119","DOIUrl":"https://doi.org/arxiv-2409.11119","url":null,"abstract":"Whole Slide Images (WSIs) are critical for various clinical applications,\u0000including histopathological analysis. However, current deep learning approaches\u0000in this field predominantly focus on individual tumor types, limiting model\u0000generalization and scalability. This relatively narrow focus ultimately stems\u0000from the inherent heterogeneity in histopathology and the diverse morphological\u0000and molecular characteristics of different tumors. To this end, we propose a\u0000novel approach for multi-cohort WSI analysis, designed to leverage the\u0000diversity of different tumor types. We introduce a Cohort-Aware Attention\u0000module, enabling the capture of both shared and tumor-specific pathological\u0000patterns, enhancing cross-tumor generalization. Furthermore, we construct an\u0000adversarial cohort regularization mechanism to minimize cohort-specific biases\u0000through mutual information minimization. Additionally, we develop a\u0000hierarchical sample balancing strategy to mitigate cohort imbalances and\u0000promote unbiased learning. Together, these form a cohesive framework for\u0000unbiased multi-cohort WSI analysis. Extensive experiments on a uniquely\u0000constructed multi-cancer dataset demonstrate significant improvements in\u0000generalization, providing a scalable solution for WSI classification across\u0000diverse cancer types. Our code for the experiments is publicly available at\u0000<link>.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
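One standard way to realize the adversarial cohort regularization the abstract describes is a gradient reversal layer: a cohort classifier trains normally while the reversed gradients push the feature encoder to discard cohort information. The sketch below assumes this mechanism and uses toy dimensions; the paper's actual mutual-information estimator may differ.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()

    @staticmethod
    def backward(ctx, g):
        return -ctx.lam * g, None        # flip gradients into the encoder

encoder = nn.Linear(512, 256)            # stand-in WSI feature encoder
cohort_head = nn.Linear(256, 4)          # predicts the source cohort (4 here)

feats = encoder(torch.randn(8, 512))
logits = cohort_head(GradReverse.apply(feats, 1.0))
loss = nn.functional.cross_entropy(logits, torch.randint(0, 4, (8,)))
loss.backward()   # cohort_head learns the cohorts; encoder unlearns them
```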
Unsupervised Hybrid framework for ANomaly Detection (HAND) -- applied to Screening Mammogram
arXiv - EE - Image and Video Processing Pub Date: 2024-09-17 DOI: arxiv-2409.11534
Zhemin Zhang, Bhavika Patel, Bhavik Patel, Imon Banerjee
{"title":"Unsupervised Hybrid framework for ANomaly Detection (HAND) -- applied to Screening Mammogram","authors":"Zhemin Zhang, Bhavika Patel, Bhavik Patel, Imon Banerjee","doi":"arxiv-2409.11534","DOIUrl":"https://doi.org/arxiv-2409.11534","url":null,"abstract":"Out-of-distribution (OOD) detection is crucial for enhancing the\u0000generalization of AI models used in mammogram screening. Given the challenge of\u0000limited prior knowledge about OOD samples in external datasets, unsupervised\u0000generative learning is a preferable solution which trains the model to discern\u0000the normal characteristics of in-distribution (ID) data. The hypothesis is that\u0000during inference, the model aims to reconstruct ID samples accurately, while\u0000OOD samples exhibit poorer reconstruction due to their divergence from\u0000normality. Inspired by state-of-the-art (SOTA) hybrid architectures combining\u0000CNNs and transformers, we developed a novel backbone - HAND, for detecting OOD\u0000from large-scale digital screening mammogram studies. To boost the learning\u0000efficiency, we incorporated synthetic OOD samples and a parallel discriminator\u0000in the latent space to distinguish between ID and OOD samples. Gradient\u0000reversal to the OOD reconstruction loss penalizes the model for learning OOD\u0000reconstructions. An anomaly score is computed by weighting the reconstruction\u0000and discriminator loss. On internal RSNA mammogram held-out test and external\u0000Mayo clinic hand-curated dataset, the proposed HAND model outperformed\u0000encoder-based and GAN-based baselines, and interestingly, it also outperformed\u0000the hybrid CNN+transformer baselines. Therefore, the proposed HAND pipeline\u0000offers an automated efficient computational solution for domain-specific\u0000quality checks in external screening mammograms, yielding actionable insights\u0000without direct exposure to the private medical imaging data.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142269818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
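The scoring rule described above, weighting reconstruction error against a discriminator's output, could look like the following sketch; the toy autoencoder, discriminator, and weight alpha are assumptions, not the HAND architecture.

```python
import torch
import torch.nn as nn

recon_net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 3, padding=1))   # toy reconstructor
disc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 1))  # toy ID/OOD head

def anomaly_score(x: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    rec = recon_net(x)
    rec_err = (rec - x).pow(2).mean(dim=(1, 2, 3))   # per-image recon MSE
    ood_logit = disc(x).squeeze(1)                   # higher = more OOD-like
    return alpha * rec_err + (1 - alpha) * torch.sigmoid(ood_logit)

scores = anomaly_score(torch.rand(4, 1, 64, 64))
print(scores)  # flag images above a validation-chosen threshold as OOD
```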
Using Physics Informed Generative Adversarial Networks to Model 3D porous media
arXiv - EE - Image and Video Processing Pub Date: 2024-09-17 DOI: arxiv-2409.11541
Zihan Ren, Sanjay Srinivasan
{"title":"Using Physics Informed Generative Adversarial Networks to Model 3D porous media","authors":"Zihan Ren, Sanjay Srinivasan","doi":"arxiv-2409.11541","DOIUrl":"https://doi.org/arxiv-2409.11541","url":null,"abstract":"Micro-CT scanning of rocks significantly enhances our understanding of\u0000pore-scale physics in porous media. With advancements in pore-scale simulation\u0000methods, such as pore network models, it is now possible to accurately simulate\u0000multiphase flow properties, including relative permeability, from CT-scanned\u0000rock samples. However, the limited number of CT-scanned samples and the\u0000challenge of connecting pore-scale networks to field-scale rock properties\u0000often make it difficult to use pore-scale simulated properties in realistic\u0000field-scale reservoir simulations. Deep learning approaches to create synthetic\u00003D rock structures allow us to simulate variations in CT rock structures, which\u0000can then be used to compute representative rock properties and flow functions.\u0000However, most current deep learning methods for 3D rock structure synthesis\u0000don't consider rock properties derived from well observations, lacking a direct\u0000link between pore-scale structures and field-scale data. We present a method to\u0000construct 3D rock structures constrained to observed rock properties using\u0000generative adversarial networks (GANs) with conditioning accomplished through a\u0000gradual Gaussian deformation process. We begin by pre-training a Wasserstein\u0000GAN to reconstruct 3D rock structures. Subsequently, we use a pore network\u0000model simulator to compute rock properties. The latent vectors for image\u0000generation in GAN are progressively altered using the Gaussian deformation\u0000approach to produce 3D rock structures constrained by well-derived conditioning\u0000data. This GAN and Gaussian deformation approach enables high-resolution\u0000synthetic image generation and reproduces user-defined rock properties such as\u0000porosity, permeability, and pore size distribution. Our research provides a\u0000novel way to link GAN-generated models to field-derived quantities.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
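The gradual Gaussian deformation step has a clean form: spherically interpolating a latent with fresh Gaussian noise keeps it marginally N(0, 1), so every accepted step stays on the GAN's prior. The accept-if-improved search loop, toy generator, and porosity stand-in below are illustrative assumptions, not the paper's pipeline.

```python
import math
import torch

def porosity(volume):
    # Stand-in for a pore-network-model-derived property: pore-voxel fraction.
    return (volume > 0.5).float().mean()

def gaussian_deform(z, u, t):
    # One gradual Gaussian deformation step: cos/sin interpolation preserves
    # the N(0, 1) statistics of the latent vector.
    return z * math.cos(t) + u * math.sin(t)

def condition_latent(generator, target_phi, steps=200, t=0.1):
    z = torch.randn(1, 128)
    best_err = abs(porosity(generator(z)) - target_phi)
    for _ in range(steps):
        cand = gaussian_deform(z, torch.randn_like(z), t)
        err = abs(porosity(generator(cand)) - target_phi)
        if err < best_err:            # keep only improving deformations
            z, best_err = cand, err
    return z

# Toy generator mapping any latent to a (32, 32, 32) volume;
# a pre-trained Wasserstein GAN generator would go here.
toy_generator = lambda z: torch.sigmoid(z.mean() + torch.randn(32, 32, 32))
z_star = condition_latent(toy_generator, target_phi=0.20)
print(porosity(toy_generator(z_star)))
```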
Multi-frequency Electrical Impedance Tomography Reconstruction with Multi-Branch Attention Image Prior
arXiv - EE - Image and Video Processing Pub Date: 2024-09-17 DOI: arxiv-2409.10794
Hao Fang, Zhe Liu, Yi Feng, Zhen Qiu, Pierre Bagnaninchi, Yunjie Yang
{"title":"Multi-frequency Electrical Impedance Tomography Reconstruction with Multi-Branch Attention Image Prior","authors":"Hao Fang, Zhe Liu, Yi Feng, Zhen Qiu, Pierre Bagnaninchi, Yunjie Yang","doi":"arxiv-2409.10794","DOIUrl":"https://doi.org/arxiv-2409.10794","url":null,"abstract":"Multi-frequency Electrical Impedance Tomography (mfEIT) is a promising\u0000biomedical imaging technique that estimates tissue conductivities across\u0000different frequencies. Current state-of-the-art (SOTA) algorithms, which rely\u0000on supervised learning and Multiple Measurement Vectors (MMV), require\u0000extensive training data, making them time-consuming, costly, and less practical\u0000for widespread applications. Moreover, the dependency on training data in\u0000supervised MMV methods can introduce erroneous conductivity contrasts across\u0000frequencies, posing significant concerns in biomedical applications. To address\u0000these challenges, we propose a novel unsupervised learning approach based on\u0000Multi-Branch Attention Image Prior (MAIP) for mfEIT reconstruction. Our method\u0000employs a carefully designed Multi-Branch Attention Network (MBA-Net) to\u0000represent multiple frequency-dependent conductivity images and simultaneously\u0000reconstructs mfEIT images by iteratively updating its parameters. By leveraging\u0000the implicit regularization capability of the MBA-Net, our algorithm can\u0000capture significant inter- and intra-frequency correlations, enabling robust\u0000mfEIT reconstruction without the need for training data. Through simulation and\u0000real-world experiments, our approach demonstrates performance comparable to, or\u0000better than, SOTA algorithms while exhibiting superior generalization\u0000capability. These results suggest that the MAIP-based method can be used to\u0000improve the reliability and applicability of mfEIT in various settings.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
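MAIP belongs to the family of untrained image priors: the network's weights, not the images, are the optimization variables, fitted directly to the measurements of one subject. The sketch below assumes a linearized forward operator (a random sensitivity matrix) and a toy multi-branch net with one branch per frequency; the real MBA-Net and EIT forward model are considerably more involved.

```python
import torch
import torch.nn as nn

n_freq, n_pix, n_meas = 4, 32 * 32, 208
J = torch.randn(n_meas, n_pix) * 0.01   # assumed linearized sensitivity matrix

class MultiBranchPrior(nn.Module):
    def __init__(self):
        super().__init__()
        self.seed = nn.Parameter(torch.randn(1, 64))       # fixed-input seed
        self.shared = nn.Linear(64, 256)                   # shared trunk
        self.branches = nn.ModuleList(
            nn.Linear(256, n_pix) for _ in range(n_freq))  # one per frequency

    def forward(self):
        h = torch.relu(self.shared(self.seed))
        return torch.cat([b(h) for b in self.branches])    # (n_freq, n_pix)

net = MultiBranchPrior()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
v_meas = torch.randn(n_freq, n_meas)     # measured boundary voltages (toy)

for _ in range(500):                     # reconstruction = fitting the weights
    opt.zero_grad()
    sigma = net()                        # current conductivity estimates
    loss = (sigma @ J.T - v_meas).pow(2).mean()
    loss.backward()
    opt.step()
```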
Multi-Domain Data Aggregation for Axon and Myelin Segmentation in Histology Images
arXiv - EE - Image and Video Processing Pub Date: 2024-09-17 DOI: arxiv-2409.11552
Armand Collin, Arthur Boschet, Mathieu Boudreau, Julien Cohen-Adad
{"title":"Multi-Domain Data Aggregation for Axon and Myelin Segmentation in Histology Images","authors":"Armand Collin, Arthur Boschet, Mathieu Boudreau, Julien Cohen-Adad","doi":"arxiv-2409.11552","DOIUrl":"https://doi.org/arxiv-2409.11552","url":null,"abstract":"Quantifying axon and myelin properties (e.g., axon diameter, myelin\u0000thickness, g-ratio) in histology images can provide useful information about\u0000microstructural changes caused by neurodegenerative diseases. Automatic tissue\u0000segmentation is an important tool for these datasets, as a single stained\u0000section can contain up to thousands of axons. Advances in deep learning have\u0000made this task quick and reliable with minimal overhead, but a deep learning\u0000model trained by one research group will hardly ever be usable by other groups\u0000due to differences in their histology training data. This is partly due to\u0000subject diversity (different body parts, species, genetics, pathologies) and\u0000also to the range of modern microscopy imaging techniques resulting in a wide\u0000variability of image features (i.e., contrast, resolution). There is a pressing\u0000need to make AI accessible to neuroscience researchers to facilitate and\u0000accelerate their workflow, but publicly available models are scarce and poorly\u0000maintained. Our approach is to aggregate data from multiple imaging modalities\u0000(bright field, electron microscopy, Raman spectroscopy) and species (mouse,\u0000rat, rabbit, human), to create an open-source, durable tool for axon and myelin\u0000segmentation. Our generalist model makes it easier for researchers to process\u0000their data and can be fine-tuned for better performance on specific domains. We\u0000study the benefits of different aggregation schemes. This multi-domain\u0000segmentation model performs better than single-modality dedicated learners\u0000(p=0.03077), generalizes better on out-of-distribution data and is easier to\u0000use and maintain. Importantly, we package the segmentation tool into a\u0000well-maintained open-source software ecosystem (see\u0000https://github.com/axondeepseg/axondeepseg).","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
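The aggregation strategy itself is straightforward to reproduce with standard tooling: concatenate per-domain datasets so each training batch mixes modalities and species. The sketch below uses placeholder datasets and labels; this is generic PyTorch, not the AxonDeepSeg codebase.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, Dataset

class HistologyDataset(Dataset):
    """Placeholder for one (modality, species) domain."""
    def __init__(self, n: int):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, i):
        image = torch.rand(1, 256, 256)           # grayscale patch
        mask = torch.randint(0, 3, (256, 256))    # 0=bg, 1=axon, 2=myelin
        return image, mask

domains = [HistologyDataset(100),   # e.g., transmission electron microscopy
           HistologyDataset(60),    # e.g., scanning electron microscopy
           HistologyDataset(40)]    # e.g., bright field

# A single generalist model sees all domains in every epoch.
loader = DataLoader(ConcatDataset(domains), batch_size=8, shuffle=True)
images, masks = next(iter(loader))
print(images.shape, masks.shape)    # (8, 1, 256, 256), (8, 256, 256)
```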