Multisensory Research — Latest Articles

Perceived Audio-Visual Simultaneity Is Recalibrated by the Visual Intensity of the Preceding Trial
IF 1.6 · Q4 · Psychology
Multisensory Research Pub Date : 2024-04-30 DOI: 10.1163/22134808-bja10121
Ryan Horsfall, Neil Harrison, Georg Meyer, Sophie Wuerger
Abstract: A vital heuristic used when judging whether audio-visual signals arise from the same event is the temporal coincidence of the respective signals. Previous research has highlighted a process whereby the perception of simultaneity rapidly recalibrates to account for differences in the physical temporal offsets of stimuli. The current paper investigated whether rapid recalibration also occurs in response to differences in central arrival latencies, driven by visual-intensity-dependent processing times. In a behavioural experiment, observers completed a temporal-order judgement (TOJ), a simultaneity judgement (SJ), and a simple reaction-time (RT) task, responding to audio-visual trials that were preceded by other audio-visual trials containing either a bright or a dim visual stimulus. The point of subjective simultaneity shifted with the visual intensity of the preceding stimulus in the TOJ task, but not in the SJ task, while the RT data revealed no effect of preceding intensity. Our data therefore provide some evidence that the perception of simultaneity rapidly recalibrates based on stimulus intensity.
Citations: 0
Tactile Landmarks: the Relative Landmark Location Alters Spatial Distortions
IF 1.6 · Q4 · Psychology
Multisensory Research Pub Date : 2024-04-25 DOI: 10.1163/22134808-bja10122
Paula Soballa, Christian Frings, Simon Merz
Abstract: The influence of landmarks (that is, nearby non-target stimuli) on spatial perception has been shown in multiple ways, including altered target-localization variability near landmarks and systematic spatial distortions of target localizations. Previous studies have mostly been conducted in the visual modality using temporary, artificial landmarks, or in the tactile modality with persistent landmarks on the body. It is therefore unclear whether both landmark types produce the same spatial distortions, because they have never been investigated in the same modality. Addressing this, we used a novel tactile setup to present temporary, artificial landmarks on the forearm and systematically manipulated their location to be either close to a persistent landmark (wrist or elbow) or between both persistent landmarks at the middle of the forearm. Initial data (Exp. 1 and Exp. 2) suggested systematic differences between temporary landmarks based on their distance from the persistent landmark, possibly indicating different distortions for temporary and persistent landmarks. Subsequent control studies (Exp. 3 and Exp. 4) showed that this effect was driven by the relative landmark location within the target distribution. Specifically, landmarks in the middle of the target distribution led to systematic distortions of target localizations toward the landmark, whereas landmarks at the side led to distortions away from the landmark for nearby targets and toward the landmark at wider distances. Our results indicate that experimental results with temporary landmarks can be generalized to more natural settings with persistent landmarks, and further reveal that the relative landmark location leads to different patterns of spatial distortions.
Citations: 0
Four-Stroke Apparent Motion Can Effectively Induce Visual Self-Motion Perception: an Examination Using Expanding, Rotating, and Translating Motion
IF 1.6 · Q4 · Psychology
Multisensory Research Pub Date : 2024-04-24 DOI: 10.1163/22134808-bja10120
Shinji Nakamura
Abstract: The current investigation examined whether visual motion without continuous visual displacement can effectively induce self-motion perception (vection). Four-stroke apparent motion (4SAM) patterns were employed as visual inducers. The 4SAM pattern contained luminance-defined motion energy equivalent to a real motion pattern, and participants perceived unidirectional motion according to the motion energy but without displacement (the visual elements flickered on the spot). The experiments revealed that the 4SAM stimulus could effectively induce vection in the horizontal, expanding, or rotational directions, although its strength was significantly weaker than that induced by the real-motion stimulus. This result suggests that visual displacement is not essential, and that the luminance-defined motion energy and/or the resulting perceived motion of the visual inducer is sufficient for inducing visual self-motion perception. Conversely, when the 4SAM and real-motion patterns were presented simultaneously, self-motion perception was determined mainly by the real motion, suggesting that the real-motion stimulus is the predominant determinant of vection. These outcomes may be worth considering when examining the perceptual and neurological mechanisms underlying self-motion perception.
Citations: 0
The Multimodal Trust Effects of Face, Voice, and Sentence Content
IF 1.6 · Q4 · Psychology
Multisensory Research Pub Date : 2024-04-03 DOI: 10.1163/22134808-bja10119
Isar Syed, M. Baart, Jean Vroomen
Abstract: Trust is an aspect critical to human social interaction, and research has identified many cues that aid in the assessment of this social trait. Two of these cues are the pitch of the voice and the width-to-height ratio of the face (fWHR). Additionally, research has indicated that the content of a spoken sentence itself affects trustworthiness, a finding that has not yet been brought into multisensory research. The current research investigates previously developed theories on trust in relation to vocal pitch, fWHR, and sentence content in a multimodal setting. Twenty-six female participants were asked to judge the trustworthiness of a voice speaking a neutral or romantic sentence while seeing a face. The average pitch of the voice and the fWHR were varied systematically. Results indicate that the content of the spoken message was an important predictor of trustworthiness, extending into multimodality. Further, the mean pitch of the voice and the fWHR of the face appeared to be useful indicators in a multimodal setting, and these effects interacted with one another across modalities. The data demonstrate that trust in the voice is shaped by task-irrelevant visual stimuli. Future research is encouraged to clarify whether these findings remain consistent across genders, age groups, and languages.
Citations: 0
Addressing the Association Between Action Video Game Playing Experience and Visual Search in Naturalistic Multisensory Scenes
IF 1.6 · Q4 · Psychology
Multisensory Research Pub Date : 2024-02-13 DOI: 10.1163/22134808-bja10118
M. Hamzeloo, Daria Kvasova, Salvador Soto-Faraco
Abstract: Prior studies investigating the effects of routine action video game play have demonstrated improvements in a variety of cognitive processes, including attentional tasks. However, there is little evidence that the cognitive benefits of playing action video games generalize from simplified unisensory stimuli to multisensory scenes, a fundamental characteristic of natural, everyday environments. The present study addressed whether video game experience has an impact on crossmodal congruency effects when searching through such multisensory scenes. We compared the performance of action video game players (AVGPs) and non-video game players (NVGPs) on a visual search task for objects embedded in video clips of realistic scenes. We conducted two identical online experiments with gender-balanced samples, for a total of . Overall, the data replicated previous findings of search benefits when visual targets were accompanied by semantically congruent auditory events, compared to neutral or incongruent ones. However, AVGPs did not consistently outperform NVGPs in the overall search task, nor did they use multisensory cues more efficiently than NVGPs. Exploratory analyses with self-reported gender as a variable revealed a potential difference in response strategy between experienced male and female AVGPs when dealing with crossmodal cues. These findings suggest that generalizing the advantage of AVG experience to realistic, crossmodal situations should be done with caution and with gender-related issues in mind.
Citations: 0
Spatial Sensory References for Vestibular Self-Motion Perception
IF 1.6 · Q4 · Psychology
Multisensory Research Pub Date : 2023-12-20 DOI: 10.1163/22134808-bja10117
Silvia Zanchi, Luigi F Cuturi, Giulio Sandini, Monica Gori, Elisa R Ferrè
Abstract: While navigating through our surroundings, we constantly rely on inertial vestibular signals for self-motion, along with visual and acoustic spatial references from the environment. However, the interaction between inertial cues and environmental spatial references is not yet fully understood. Here we investigated whether vestibular self-motion sensitivity is influenced by sensory spatial references. Healthy participants were administered a Vestibular Self-Motion Detection Task in which they were asked to detect vestibular self-motion sensations induced by low-intensity Galvanic Vestibular Stimulation. Participants performed this detection task with or without an external visual or acoustic spatial reference placed directly in front of them. We computed d-prime (d′) as a measure of participants' vestibular sensitivity and the criterion as an index of their response bias. Results showed that the visual spatial reference increased sensitivity to detect vestibular self-motion, whereas the acoustic spatial reference did not influence self-motion sensitivity. Neither visual nor auditory spatial references caused changes in response bias. Environmental visual spatial references thus provide relevant information that enhances our ability to perceive inertial self-motion cues, suggesting a specific interaction between the visual and vestibular systems in self-motion perception.
Citations: 0
Cross-Modal Contributions to Episodic Memory for Voices
IF 1.6 · Q4 · Psychology
Multisensory Research Pub Date : 2023-12-20 DOI: 10.1163/22134808-bja10116
Joshua R Tatz, Zehra F Peynircioğlu
Abstract: Multisensory context often facilitates perception and memory. In fact, encoding items within a multisensory context can improve memory even on strictly unisensory tests (i.e., when the multisensory context is absent). Prior studies that have consistently found these multisensory facilitation effects have largely employed multisensory contexts in which the stimuli were meaningfully related to the items targeted for remembering (e.g., pairing canonical sounds and images). Other studies have used unrelated stimuli as multisensory context. A third possible type of multisensory context is one that is environmentally related simply because the stimuli are often encountered together in the real world. We predicted that encountering such a multisensory context would also enhance memory through cross-modal associations, that is, representations reflecting one's prior multisensory experience with that sort of stimuli in general. In two memory experiments, we used faces and voices of unfamiliar people, everyday stimuli whose perceptual features individuals have substantial experience integrating. We assigned participants to face- or voice-recognition groups and ensured that, during the study phase, half of the face or voice targets were also encountered with information in the other modality. Voices initially encoded along with faces were consistently remembered better, providing evidence that cross-modal associations could explain the observed multisensory facilitation.
Citations: 0
Stationary Haptic Stimuli Do not Produce Ocular Accommodation in Most Individuals
IF 1.6 · Q4 · Psychology
Multisensory Research Pub Date : 2023-11-28 DOI: 10.1163/22134808-bja10115
Lawrence R Stark, Kim Shiraishi, Tyler Sommerfeld
Abstract: This study aimed to determine the extent to which haptic stimuli can influence ocular accommodation, either alone or in combination with vision. Accommodation was measured objectively in 15 young adults as they read stationary cards containing Braille letters. The cards were presented at four distances in the range 20–50 cm. In the Touch condition, participants read by touch with their dominant hand in a dark room and afterward estimated card distance with their non-dominant hand. In the Vision condition, they read by sight binocularly, without touch, in a lighted room. In the Touch-with-Vision condition, they read by sight binocularly and with touch in a lighted room. Sensory modality had a significant overall effect on the slope of the accommodative stimulus–response function. The slope in the Touch condition was not significantly different from zero, even though depth perception from touch was accurate; nevertheless, one atypical participant had a moderate accommodative slope in the Touch condition. The accommodative slope in the Touch condition was significantly poorer than in the Vision condition, while the slopes in the Vision and Touch-with-Vision conditions did not differ significantly. For most individuals, haptic stimuli from stationary objects do not influence the accommodation response, alone or in combination with vision. Because these haptic stimuli provide accurate distance perception, the results question the general validity of Heath's model of proximal accommodation as driven by perceived distance. Instead, proximally induced accommodation relies on visual rather than touch stimuli.
Citations: 0
Reflections on Cross-Modal Correspondences: Current Understanding and Issues for Future Research
IF 1.8 · Q4 · Psychology
Multisensory Research Pub Date : 2023-11-10 DOI: 10.1163/22134808-bja10114
Kosuke Motoki, Lawrence E Marks, Carlos Velasco
Abstract: The past two decades have seen an explosion of research on cross-modal correspondences. Broadly speaking, this term encompasses associations between and among features, dimensions, or attributes across the senses. There has been increasing interest in this topic among researchers from multiple fields (psychology, neuroscience, music, art, environmental design, etc.) and, importantly, an increasing breadth of the topic's scope. This narrative review reflects on what cross-modal correspondences are, where they come from, and what underlies them. We suggest that cross-modal correspondences are usefully conceived as relative associations between different actual or imagined sensory stimuli, many of which are shared by most people. A taxonomy with four major kinds of associations (physiological, semantic, statistical, and affective) characterizes cross-modal correspondences, which involve both sensory dimensions (quantity/quality) and sensory features (lower perceptual/higher cognitive). Cross-modal correspondences may be understood (or measured) from two complementary perspectives: the phenomenal view (perceptual experiences of subjective matching) and the behavioural-response view (observable patterns of behavioural response to multiple sensory stimuli). Importantly, we reflect on remaining questions and standing issues that need to be addressed in order to develop an explanatory framework for cross-modal correspondences. Future research needs (a) to better understand when (and why) phenomenal and behavioural measures coincide and when they do not, and, ideally, (b) to determine whether different kinds of cross-modal correspondence (quantity/quality, lower perceptual/higher cognitive) rely on the same or different mechanisms.
Citations: 0