Brain-Supervised Conditional Generative Modeling
Jun Ma; Tuukka Ruotsalo
DOI: 10.1109/THMS.2025.3537339
IEEE Transactions on Human-Machine Systems, vol. 55, no. 3, pp. 383–393
Published: 2025-03-21
Full text: https://ieeexplore.ieee.org/document/10937235/
Citation count: 0
Abstract
Current machine learning approaches for steering generative models rely on the availability of manual human input. We propose an alternative approach to supervising generative machine learning models by directly detecting task-relevant information from brain responses; humans are required only to perceive stimuli and react to them naturally. Brain responses of participants (N=30) were recorded via electroencephalography (EEG) while they viewed artificially generated images of faces and were instructed to look for a particular semantic feature, such as "smile" or "young". A supervised adversarial autoencoder was trained to disentangle semantic image features by using the EEG data as a supervision signal. The model was subsequently conditioned to generate images matching users' intentions without additional human input. The approach was evaluated in a validation study comparing brain-conditioned models to manually conditioned and randomly conditioned alternatives. Human assessors scored the saliency of images generated by the different models with respect to the target visual features (e.g., which face image is more "smiling" or more "young"). The results show that brain-supervised models perform comparably to models trained with manually curated labels, without requiring any manual input from humans.
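The paper's core idea is that event-related EEG responses to relevant stimuli (e.g., a P300-like positivity when a target face is seen) can replace manual labels as the supervision signal. The abstract does not include code, so the sketch below is only an illustration of that idea, not the authors' implementation: it simulates single-trial EEG epochs with a hypothetical P300-style deflection for target stimuli and derives binary relevance labels with a simple amplitude-threshold decoder. All names, the sampling rate, effect size, and the decoding window are assumptions for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                      # assumed sampling rate (Hz)
n_trials = 40
n_samples = fs                # 1-second epochs
t = np.arange(n_samples) / fs

def make_trial(is_target: int) -> np.ndarray:
    """Simulate one EEG epoch: noise, plus a P300-like positivity
    around 300 ms when the stimulus matches the target feature."""
    noise = rng.normal(0.0, 1.0, n_samples)
    if is_target:
        erp = 3.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
        return noise + erp
    return noise

# Ground-truth relevance (unknown to the decoder in a real setting).
labels_true = rng.integers(0, 2, n_trials)
epochs = np.stack([make_trial(y) for y in labels_true])

# Decode relevance from mean amplitude in a 250-350 ms window;
# the threshold is simply the grand mean of the trial scores.
win = (t >= 0.25) & (t <= 0.35)
scores = epochs[:, win].mean(axis=1)
labels_pred = (scores > scores.mean()).astype(int)

# These decoded labels would then stand in for manual annotations
# when training the supervised (adversarial) autoencoder.
accuracy = (labels_pred == labels_true).mean()
```

In the paper the decoded relevance signal supervises an adversarial autoencoder's latent space rather than a threshold rule; the sketch only shows how brain responses can yield labels without manual input.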
Journal description:
The scope of the IEEE Transactions on Human-Machine Systems covers the field of human–machine systems, including human-system and human-organizational interactions, cognitive ergonomics, system test and evaluation, and human information processing concerns in systems and organizations.