{"title":"Control+Shift: Generating Controllable Distribution Shifts","authors":"Roy Friedman, Rhea Chowers","doi":"arxiv-2409.07940","DOIUrl":null,"url":null,"abstract":"We propose a new method for generating realistic datasets with distribution\nshifts using any decoder-based generative model. Our approach systematically\ncreates datasets with varying intensities of distribution shifts, facilitating\na comprehensive analysis of model performance degradation. We then use these\ngenerated datasets to evaluate the performance of various commonly used\nnetworks and observe a consistent decline in performance with increasing shift\nintensity, even when the effect is almost perceptually unnoticeable to the\nhuman eye. We see this degradation even when using data augmentations. We also\nfind that enlarging the training dataset beyond a certain point has no effect\non the robustness and that stronger inductive biases increase robustness.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07940","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We propose a new method for generating realistic datasets with distribution shifts using any decoder-based generative model. Our approach systematically creates datasets with varying intensities of distribution shift, enabling a comprehensive analysis of how model performance degrades. Using these generated datasets, we evaluate several commonly used networks and observe a consistent decline in performance as shift intensity increases, even when the shift is nearly imperceptible to the human eye. This degradation persists even when data augmentations are used. We also find that enlarging the training dataset beyond a certain point does not improve robustness, while stronger inductive biases do.
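The core idea, generating datasets whose distance from the base distribution is controlled by a single intensity parameter, can be illustrated with a minimal sketch. The toy linear decoder, the chosen latent shift direction, and the function names below are all illustrative assumptions, not the paper's actual construction; any trained decoder (e.g. a VAE decoder or GAN generator) could stand in for `decode`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained decoder: maps 4-d latent vectors to 64-d
# "images". This linear-plus-tanh map is purely illustrative; the method
# is described as working with any decoder-based generative model.
W = rng.normal(size=(4, 64))

def decode(z):
    return np.tanh(z @ W)

def generate_shifted_dataset(n, shift_intensity, latent_dim=4):
    """Sample latents from the base distribution, translate them along a
    fixed direction scaled by `shift_intensity`, then decode. Larger
    intensities yield datasets further from the base distribution.
    (The choice of a fixed translation direction is an assumption made
    for this sketch.)"""
    direction = np.ones(latent_dim) / np.sqrt(latent_dim)
    z = rng.normal(size=(n, latent_dim))
    return decode(z + shift_intensity * direction)

# A dataset at intensity 0 is drawn from the base distribution itself;
# increasing the intensity moves the decoded samples away from it.
base = generate_shifted_dataset(1000, shift_intensity=0.0)
shifted = generate_shifted_dataset(1000, shift_intensity=2.0)
drift = np.linalg.norm(shifted.mean(axis=0) - base.mean(axis=0))
```

A classifier trained on the intensity-0 dataset could then be evaluated on datasets generated at progressively larger intensities, giving the kind of performance-versus-shift curve the abstract describes.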