ACM Transactions on Applied Perception: Latest Publications

Design and Validation of a Virtual Reality Mental Rotation Test
Q4, Computer Science
ACM Transactions on Applied Perception · Pub Date: 2023-10-09 · DOI: 10.1145/3626238
Kristin A. Bartlett, Almudena Palacios-Ibáñez, Jorge Dorribo Camba
Abstract: Mental rotation, a common measure of spatial ability, has traditionally been assessed with paper-based instruments such as the Mental Rotation Test (MRT) or the Purdue Spatial Visualization Test: Rotations (PSVT:R). The fact that these instruments present 3D shapes in a 2D format devoid of natural cues like shading and perspective likely limits their ability to accurately assess the fundamental skill of mentally rotating 3D shapes. In this paper, we describe the Virtual Reality Mental Rotation Assessment (VRMRA), a virtual reality-based mental rotation assessment derived from the Revised PSVT:R and MRT. The VRMRA reimagines traditional mental rotation assessments in a room-scale virtual environment, using hand-tracking and elements of gamification to create an intuitive, engaging experience for test-takers. To validate the instrument, we compared response patterns in the VRMRA with patterns observed on the MRT and Revised PSVT:R. For the PSVT:R-type questions, items requiring rotation around two axes were significantly harder than items requiring rotation around a single axis in the VRMRA, which is not the case in the Revised PSVT:R. For the MRT-type questions in the VRMRA, a moderate negative correlation was found between the degree of rotation in the X direction and item difficulty. While the problem of occlusion was reduced, features of the shapes and distractors accounted for 50.6% of the variance in item difficulty. Results suggest that the VRMRA is likely a more accurate tool for assessing mental rotation ability than traditional instruments that present the stimuli through 2D media. Our findings also point to potential problems with the fundamental designs of the Revised PSVT:R and MRT question formats.
Cited: 0
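As an aside, the item-level analysis described above (difficulty versus rotation angle) can be illustrated with a minimal sketch. The response matrix and per-item rotation angles below are hypothetical stand-ins; the paper does not describe its scoring pipeline at this level of detail.

```python
# Sketch: item difficulty as proportion incorrect, correlated with X rotation.
# All data here are synthetic placeholders for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants, n_items = 60, 20
responses = rng.integers(0, 2, size=(n_participants, n_items))  # 1 = correct answer
x_rotation_deg = rng.choice([90, 180, 270], size=n_items)       # hypothetical X rotation per item

difficulty = 1.0 - responses.mean(axis=0)   # proportion of participants answering incorrectly
r, p = stats.pearsonr(x_rotation_deg, difficulty)
print(f"correlation between X rotation and item difficulty: r={r:.2f}, p={p:.3f}")
```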
The Haptic Intensity Order Illusion is Caused by Amplitude Changes
Q4, Computer Science
ACM Transactions on Applied Perception · Pub Date: 2023-10-02 · DOI: 10.1145/3626237
Ivan Makarov, Snorri Steinn Stefánsson Thors, Elvar Atli Aevarsson, Finnur Kári Pind Jörgensson, Nashmin Yeganyeh, Árni Kristjánsson, Runar Unnthorsson
Abstract: When two brief vibrotactile stimulations are applied sequentially to an observer's lower back, the stimulation is systematically mislocalized: if the second stimulation is of higher intensity than the first, observers tend to respond that the second stimulation was above the first, and vice versa when a weak stimulation follows a strong one. This haptic mislocalization effect has been called the intensity order illusion. In the original demonstration of the illusion, the frequency and amplitude of the stimulation were inextricably linked, so that changes in amplitude also produced changes in frequency. It was therefore unknown whether the illusion is caused by changes in frequency, amplitude, or both. To test this, we performed a multifactorial experiment using L5 actuators, which allow independent manipulation of frequency and amplitude. This approach enabled us to investigate the effects of stimulus amplitude, frequency, and location, and to assess potential interactions between these factors. We report four main findings: (1) we replicated the intensity order illusion with the L5 tactors; (2) the illusion mainly occurred in the upwards direction, that is, when a strong stimulation following a weaker one occurred above or in the same location as the first; (3) the illusion did not occur when similar stimulation patterns were applied in the horizontal direction; and (4) the illusion was due solely to changes in amplitude, while changes in frequency (100 Hz vs. 200 Hz) had no effect.
Cited: 1
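The key methodological point above is that the actuators decouple amplitude from frequency. A minimal sketch of drive signals with independently chosen amplitude and frequency follows; the duration, sample rate, and amplitude values are illustrative assumptions, not the paper's stimulus specification.

```python
# Sketch: sine bursts whose peak amplitude and frequency vary independently,
# in the spirit of the L5-actuator setup described above. Parameters are illustrative.
import numpy as np

def burst(freq_hz, amplitude, duration_s=0.1, fs=44100):
    """Return a sine burst with the given frequency and peak amplitude."""
    t = np.arange(int(duration_s * fs)) / fs
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

weak_100   = burst(100, 0.3)  # weak stimulation at 100 Hz
strong_100 = burst(100, 0.9)  # stronger stimulation, same frequency
strong_200 = burst(200, 0.9)  # same amplitude, doubled frequency (100 Hz vs. 200 Hz contrast)
```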
The Effect of Interocular Contrast Differences on the Appearance of Augmented Reality Imagery
IF 1.6 · Q4, Computer Science
ACM Transactions on Applied Perception · Pub Date: 2023-08-29 · DOI: 10.1145/3617684
Minqi Wang, Jian Ding, D. Levi, Emily Cooper
Abstract: Augmented reality (AR) devices seek to create compelling visual experiences that merge virtual imagery with the natural world. These devices often rely on wearable near-eye display systems that can optically overlay digital images onto the left and right eyes of the user separately. Ideally, the two eyes should be shown images with minimal radiometric differences (e.g., the same overall luminance, contrast, and color in both eyes), but achieving this binocular equality can be challenging in wearable systems with stringent demands on weight and size. Basic vision research has shown that imagery with radiometric differences between the eyes can elicit a spectrum of potentially detrimental perceptual effects, but it is not clear whether and how these findings apply to the experience of modern AR devices. In this work, we first develop a testing paradigm for assessing multiple aspects of visual appearance at once, and characterize five key perceptual factors when participants viewed stimuli with interocular contrast differences. In a second experiment, we simulate optical see-through AR imagery using conventional desktop LCD monitors and use the same paradigm to evaluate the multifaceted perceptual implications when the AR display luminance differs between the two eyes. We also include simulations of monocular AR systems (i.e., systems in which only one eye sees the displayed image). Our results suggest that interocular contrast differences can drive several potentially detrimental perceptual effects in binocular AR systems, such as binocular luster, rivalry, and spurious depth differences. In addition, monocular AR displays tend to exhibit more artifacts than binocular displays with a large contrast difference between the two eyes. A better understanding of the range and likelihood of these perceptual phenomena can help inform design choices that support a high-quality user experience in AR.
Cited: 0
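One plausible way to produce the kind of interocular contrast difference studied above is to attenuate one eye's image about its mean luminance. The sketch below assumes that scaling approach and placeholder image data; it is not the paper's actual stimulus-generation code.

```python
# Sketch: simulating an interocular contrast difference on a desktop display
# by scaling one eye's image contrast about its mean. Assumed approach, illustrative data.
import numpy as np

def scale_contrast(img, factor):
    """Scale image contrast about its mean luminance; img values in [0, 1]."""
    mean = img.mean()
    return np.clip(mean + factor * (img - mean), 0.0, 1.0)

rng = np.random.default_rng(1)
left_eye  = rng.random((480, 640))          # placeholder AR image for the left eye
right_eye = scale_contrast(left_eye, 0.5)   # right eye sees the same image at 50% contrast
```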
Calibrated Passability Perception in Virtual Reality Transfers to Augmented Reality
IF 1.6 · Q4, Computer Science
ACM Transactions on Applied Perception · Pub Date: 2023-08-05 · DOI: 10.1145/3613450
Holly C. Gagnon, Jeanine Stefanucci, Sarah H. Creem-Regehr, Bobby Bodenheimer
Abstract: As applications for virtual reality (VR) and augmented reality (AR) technology increase, it will be important to understand how users perceive their action capabilities in virtual environments. Feedback about actions may help to calibrate perception of action opportunities (affordances) so that action judgments in VR and AR mirror actors' real abilities. Previous work indicates that walking through a virtual doorway while wielding an object can calibrate the perception of one's passability through feedback from collisions. In the current study, we aimed to replicate this feedback-driven calibration using a different paradigm in VR, while also testing whether the calibration transfers to AR. Participants held a pole at 45 degrees and made passability judgments in AR (pretest phase). They then made passability judgments in VR and received feedback on those judgments by walking through a virtual doorway while holding the pole (calibration phase). Participants then returned to AR to make posttest passability judgments. Results indicate that feedback calibrated participants' judgments in VR. Moreover, this calibration transferred to the AR environment: after experiencing feedback in VR, passability judgments in both VR and AR became closer to the actor's actual ability, which could make training applications in these technologies more effective.
Cited: 0
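The pretest/calibration/posttest logic above lends itself to a simple error measure: how far judged passable widths sit from the actual passable width before and after feedback. The sketch below is a hypothetical illustration of that comparison; the numbers and the error definition are assumptions, not the study's data or analysis.

```python
# Sketch: calibration transfer as a reduction in judgment error from AR pretest
# to AR posttest. All values are invented for illustration.
import numpy as np

actual_width = 80.0                        # hypothetical width (cm) needed to pass with the pole
pretest  = np.array([95.0, 92.0, 98.0])    # judged passable widths, AR pretest
posttest = np.array([84.0, 82.0, 86.0])    # judged passable widths after VR feedback

pre_error  = np.abs(pretest - actual_width).mean()
post_error = np.abs(posttest - actual_width).mean()
print(f"mean judgment error: {pre_error:.1f} cm pretest -> {post_error:.1f} cm posttest")
```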
Changes in Navigation Over Time: A Comparison of Teleportation and Joystick-based Locomotion
IF 1.6 · Q4, Computer Science
ACM Transactions on Applied Perception · Pub Date: 2023-08-05 · DOI: 10.1145/3613902
Moloud Nasiri, John R. Porter, Kristopher Kohm, Andrew C. Robb
Abstract: Little research has studied how people's use of Virtual Reality (VR) changes as they gain experience with it. This paper reports the results of an experiment investigating how users' behavior with two locomotion methods, teleportation and joystick-based locomotion, changed over four weeks. Twenty novice VR users (no more than one hour of prior experience with any form of walking in VR) were recruited. They took home an Oculus Quest on loan for four weeks, using it on their own time, including an activity we provided. Results showed that the time required to complete the navigation task decreased faster for joystick-based locomotion. Spatial memory improved with time, particularly when using teleportation (which starts at a disadvantage relative to joystick-based locomotion). Overall cybersickness decreased slightly over time, although two dimensions of cybersickness (nausea and disorientation) increased notably over time with joystick-based navigation.
Cited: 0
Participatory Design of Virtual Humans for Mental Health Support Among North American Computer Science Students: Voice, Appearance, and the Similarity-attraction Effect
IF 1.6 · Q4, Computer Science
ACM Transactions on Applied Perception · Pub Date: 2023-08-05 · DOI: 10.1145/3613961
P. Feijóo-García, Chase Wrenn, J. Stuart, A. G. de Siqueira, Benjamin C. Lok
Abstract: Virtual humans (VHs) have the potential to support mental wellness among college computer science (CS) students. However, designing effective VHs for counseling purposes requires a clear understanding of students' demographics, backgrounds, and expectations. To this end, we conducted two user studies with 216 CS students from a major university in North America. In the first study, we explored how students co-designed VHs to support mental wellness conversations and found that the VHs' demographics, appearance, and voice closely resembled the characteristics of their designers. In the second study, we investigated how the interplay between the VH's appearance and voice affected the agent's effectiveness in promoting CS students' intentions toward gratitude journaling. Our findings suggest that the active participation of CS students in VH design leads to agents that closely resemble their designers, and that the interplay between the VH's appearance and voice affects the agent's effectiveness in promoting intentions toward mental wellness techniques.
Cited: 0
On Human-like Biases in Convolutional Neural Networks for the Perception of Slant from Texture
IF 1.6 · Q4, Computer Science
ACM Transactions on Applied Perception · Pub Date: 2023-08-05 · DOI: 10.1145/3613451
Yuanhao Wang, Qian Zhang, Celine Aubuchon, Jovan T. Kemp, F. Domini, J. Tompkin
Abstract: Depth estimation is fundamental to 3D perception, and humans are known to have biased estimates of depth. This study investigates whether convolutional neural networks (CNNs) exhibit similar biases when predicting the sign of curvature and depth of textured surfaces under different viewing conditions (field of view) and surface parameters (slant and texture irregularity). This hypothesis is motivated by the idea that texture gradients described by local neighborhoods, a cue identified in the human vision literature, are also representable within convolutional neural networks. To this end, we trained both unsupervised and supervised CNN models on renderings of slanted surfaces with random polka dot patterns and analyzed their internal latent representations. The results show that the unsupervised models have prediction biases similar to humans across all experiments, while the supervised CNN models do not exhibit similar biases. The latent spaces of the unsupervised models can be linearly separated into axes representing field of view and optical slant. For supervised models, this ability varies substantially with model architecture and the kind of supervision (continuous slant vs. sign of slant). Although this study says nothing about any shared mechanism, these findings suggest that unsupervised CNN models can share similar predictions with the human visual system. Code: github.com/brownvc/Slant-CNN-Biases
Cited: 0
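The claim that latent spaces "can be linearly separated" along factors like field of view is the kind of thing typically checked with a linear probe. Below is a minimal sketch of that idea using synthetic latent vectors; in the study itself, the vectors would come from the trained CNNs' internal representations (the paper's released code is at the URL above).

```python
# Sketch: a linear probe testing whether a factor (here, narrow vs. wide FOV)
# is linearly decodable from latent vectors. Latents and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
latents = rng.normal(size=(1000, 128))  # stand-in latent vectors (one per rendering)
# Construct a label that is (noisily) linearly encoded along one latent axis:
fov_label = (latents[:, 0] + 0.5 * rng.normal(size=1000)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(latents, fov_label, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# High held-out accuracy indicates the factor is linearly separable in the latent space.
print(f"linear probe accuracy: {probe.score(X_te, y_te):.2f}")
```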
Improving the Perception of Mid-Air Tactile Shapes with Spatio-Temporally-Modulated Tactile Pointers
IF 1.6 · Q4, Computer Science
ACM Transactions on Applied Perception · Pub Date: 2023-07-29 · DOI: 10.1145/3611388
Lendy Mulot, Thomas Howard, C. Pacchierotti, M. Marchal
Abstract: Ultrasound mid-air haptic (UMH) devices can remotely render vibrotactile shapes on the skin of unequipped users, e.g., to draw haptic icons or render virtual object shapes. Spatio-temporal modulation (STM), the state-of-the-art UMH shape rendering method, provides great freedom in shape design and produces the strongest stimuli possible with this technology. Yet STM shapes are often reported to be blurry, which complicates shape identification. Dynamic tactile pointers (DTP) were recently introduced to overcome this issue: by tracing a contour with an amplitude-modulated focal point, they significantly improve shape-identification accuracy over STM, but at the cost of much lower stimulus intensity. Building on this, we propose spatio-temporally-modulated tactile pointers (STP), a novel method for rendering clearer and sharper UMH shapes while at the same time producing strong vibrotactile sensations. We ran two human-participant experiments, which show that STP shapes are perceived as significantly stronger than DTP shapes, while shape-identification accuracy is significantly improved over STM and on par with that obtained with DTP. Our work has implications for effective shape rendering with UMH and provides insights that could inform future psychophysical investigation of vibrotactile shape perception in UMH.
Cited: 0
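To make the rendering-method distinction concrete: STM repeatedly sweeps a focal point along the shape's contour, and the pointer-based methods add intensity modulation on top of such a path. The sketch below generates a circular focal-point trajectory with a sinusoidal amplitude envelope; the repetition rate, modulation frequency, and waveform are illustrative assumptions, not the paper's parameters.

```python
# Sketch: a focal-point path for mid-air shape rendering. STM = sweep the contour
# at a fixed draw rate; adding an amplitude envelope along the path captures the
# pointer-style modulation discussed above. All parameter values are illustrative.
import numpy as np

fs = 10000                      # focal-point position updates per second
t = np.arange(fs) / fs          # one second of control samples
stm_rate = 100                  # contour repetitions per second (STM draw rate)
radius = 0.03                   # 3 cm circular contour above the array

theta = 2 * np.pi * stm_rate * t
x = radius * np.cos(theta)      # focal-point x position over time
y = radius * np.sin(theta)      # focal-point y position over time

am_rate = 200                                              # amplitude-modulation frequency (Hz)
intensity = 0.5 * (1 + np.sin(2 * np.pi * am_rate * t))    # modulated strength along the path
```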
Twin Identification over Viewpoint Change: A Deep Convolutional Neural Network Surpasses Humans
IF 1.6 · Q4, Computer Science
ACM Transactions on Applied Perception · Pub Date: 2023-07-20 · DOI: 10.1145/3609224
Connor J. Parde, Virginia E. Strehle, Vivekjyoti Banerjee, Ying Hu, Jacqueline G. Cavazos, Carlos D. Castillo, Alice J. O'Toole
Abstract: Deep convolutional neural networks (DCNNs) have achieved human-level accuracy in face identification (Phillips et al., 2018), though it is unclear how accurately they discriminate highly similar faces. Here, humans and a DCNN performed a challenging face-identity matching task that included identical twins. Participants (N = 87) viewed pairs of face images of three types: same-identity pairs, general imposters (different identities from similar demographic groups), and twin imposters (identical twin siblings). The task was to determine whether the pairs showed the same person or different people. Identity comparisons were tested in three viewpoint-disparity conditions: frontal to frontal, frontal to 45° profile, and frontal to 90° profile. Accuracy for discriminating matched-identity pairs from twin-imposter and general-imposter pairs was assessed in each viewpoint-disparity condition. Humans were more accurate for general-imposter pairs than twin-imposter pairs, and accuracy declined as the viewpoint disparity between the images in a pair increased. A DCNN trained for face identification (Ranjan et al., 2018) was tested on the same image pairs presented to humans. Machine performance mirrored the pattern of human accuracy, but with performance at or above all humans in all but one condition. Human and machine similarity scores were compared across all image-pair types. This item-level analysis showed that human and machine similarity ratings correlated significantly in six of nine image-pair types (range r = 0.38 to r = 0.63), suggesting general accord between human and DCNN perception of face similarity. These findings contribute to our understanding of DCNN performance in discriminating high-resemblance faces, demonstrate that the DCNN performs at or above human level, and suggest a degree of parity between the features used by humans and the DCNN.
Cited: 0
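The item-level analysis above compares two similarity measures per image pair: a human rating and a machine score. A common machine score is the cosine similarity between the DCNN's face descriptors for the two images, correlated against mean human ratings. The sketch below illustrates that computation with synthetic embeddings and ratings; in the study, descriptors would come from the face-identification DCNN (Ranjan et al., 2018).

```python
# Sketch: item-level human-machine agreement via cosine similarity of face
# descriptors and Pearson correlation with human ratings. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
emb_a = rng.normal(size=(50, 512))   # stand-in descriptors for image A of each pair
emb_b = rng.normal(size=(50, 512))   # stand-in descriptors for image B of each pair

cosine = np.sum(emb_a * emb_b, axis=1) / (
    np.linalg.norm(emb_a, axis=1) * np.linalg.norm(emb_b, axis=1))
human = rng.random(50)               # stand-in mean human similarity rating per pair

r, p = stats.pearsonr(cosine, human)
print(f"item-level human-machine correlation: r={r:.2f}, p={p:.3f}")
```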