SAVGAN: Self-Attention Based Generation of Tumour on Chip Videos
Sandeep Manandhar, I. Veith, M. Parrini, Auguste Genovesio
2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), pp. 1-5, published 28 March 2022. DOI: 10.1109/ISBI52829.2022.9761518
Generation of videomicroscopy sequences will become increasingly important in order to train and evaluate dynamic image analysis methods. The latter are crucial to the study of dynamic biological processes such as tumour-immune cell interactions. However, current generative models developed in the context of natural image sequences employ either a single 3D (2D+time) convolutional neural network (CNN) based generator, which fails to capture long-range interactions, or two separate (spatial and temporal) generators, which are unable to faithfully reproduce the morphology of moving objects. Here, we propose a self-attention based generative model for videomicroscopy sequences that aims to take into account the full range of interactions within a spatio-temporal volume of 32 frames. To reduce the computational burden of such a strategy, we consider the Nyström approximation of the attention matrix. This approach leads to significant improvements in reproducing the structures and the proper motion of videomicroscopy sequences, as assessed by a range of existing and proposed quantitative metrics.
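To give a sense of what the Nyström approximation buys here: full self-attention over a 32-frame spatio-temporal volume requires an attention matrix that is quadratic in the number of tokens, whereas the Nyström scheme factors it through a small set of landmark tokens. The sketch below is a minimal, illustrative PyTorch implementation of such an approximation, not the paper's actual architecture; the function name, the segment-mean landmark selection, the landmark count, and the use of an exact pseudo-inverse (torch.linalg.pinv) are all assumptions made for illustration.

```python
# Minimal sketch of Nystrom-approximated self-attention over a flattened
# spatio-temporal token volume. Hyperparameters and landmark selection are
# illustrative assumptions, not the published SAVGAN configuration.
import torch
import torch.nn.functional as F

def nystrom_attention(q, k, v, num_landmarks=64):
    """q, k, v: (batch, tokens, dim) projections; tokens must be divisible by num_landmarks."""
    b, n, d = q.shape
    scale = d ** -0.5

    # Landmarks: mean of queries/keys over contiguous segments of tokens.
    q_land = q.reshape(b, num_landmarks, n // num_landmarks, d).mean(dim=2)
    k_land = k.reshape(b, num_landmarks, n // num_landmarks, d).mean(dim=2)

    # Three small softmax kernels replace the full n x n attention matrix.
    kernel_1 = F.softmax(q @ k_land.transpose(-1, -2) * scale, dim=-1)       # (b, n, m)
    kernel_2 = F.softmax(q_land @ k_land.transpose(-1, -2) * scale, dim=-1)  # (b, m, m)
    kernel_3 = F.softmax(q_land @ k.transpose(-1, -2) * scale, dim=-1)       # (b, m, n)

    # Nystrom reconstruction: softmax(QK^T) ~= kernel_1 @ pinv(kernel_2) @ kernel_3.
    return kernel_1 @ torch.linalg.pinv(kernel_2) @ (kernel_3 @ v)

# Example: 32 frames of 16x16 spatial tokens, embedding dimension 64 (hypothetical sizes).
tokens = 32 * 16 * 16
q = torch.randn(2, tokens, 64)
k = torch.randn(2, tokens, 64)
v = torch.randn(2, tokens, 64)
out = nystrom_attention(q, k, v)  # shape: (2, 8192, 64)
```

With this factorisation, memory and compute scale with the number of landmarks times the number of tokens rather than with the square of the token count, which is what makes attention over an entire 32-frame volume tractable.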