{"title":"Fine for others but not for me: The role of perspective in patients’ perception of artificial intelligence in online medical platforms","authors":"Matthias F.C. Hudecek , Eva Lermer , Susanne Gaube , Julia Cecil , Silke F. Heiss , Falk Batz","doi":"10.1016/j.chbah.2024.100046","DOIUrl":"10.1016/j.chbah.2024.100046","url":null,"abstract":"<div><p>In the near future, online medical platforms enabled by artificial intelligence (AI) technology will become increasingly more prevalent, allowing patients to use them directly without having to consult a human doctor. However, there is still little research from the patient's perspective on such AI-enabled tools. We, therefore, conducted a preregistered 2x3 between-subjects experiment (<em>N</em> = 266) to examine the influence of <em>perspective</em> (oneself vs. average person) and <em>source of advice</em> (AI vs. male physician vs. female physician) on the perception of a medical diagnosis and corresponding treatment recommendations. Results of robust ANOVAs showed a statistically significant interaction between the source of advice and perspective for all three dependent variables (i.e., evaluation of the diagnosis, evaluation of the treatment recommendation, and risk perception). People prefer the advice of human doctors to an AI when it comes to their own situation. In contrast, the participants made no differences between the sources of medical advice when it comes to assessing the situation of an average person. Our study contributes to a better understanding of the patient's perspective of modern digital health technology. As our findings suggest the perception of AI-enabled diagnostic tools is more critical when it comes to oneself, future research should examine the relevant factors that influence this perception.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100046"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000069/pdfft?md5=2fcb09cbbee613acb0eb286cb234004f&pid=1-s2.0-S2949882124000069-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139637127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User-driven prioritization of ethical principles for artificial intelligence systems","authors":"Yannick Fernholz , Tatiana Ermakova , B. Fabian , P. Buxmann","doi":"10.1016/j.chbah.2024.100055","DOIUrl":"10.1016/j.chbah.2024.100055","url":null,"abstract":"<div><p>Despite the progress of Artificial Intelligence (AI) and its contribution to the advancement of human society, the prioritization of ethical principles from the viewpoint of its users has not yet received much attention and empirical investigations. This is important to develop appropriate safeguards and increase the acceptance of AI-mediated technologies among all members of society.</p><p>In this research, we collected, integrated, and prioritized ethical principles for AI systems with respect to their relevance in different real-life application scenarios.</p><p>First, an overview of ethical principles for AI was systematically derived from various academic and non-academic sources. Our results clearly show that transparency, justice and fairness, non-maleficence, responsibility, and privacy are most frequently mentioned in this corpus of documents.</p><p>Next, an empirical survey to systematically identify users’ priorities was designed and conducted in the context of selected scenarios: AI-mediated recruitment (human resources), predictive policing, autonomous vehicles, and hospital robots.</p><p>We anticipate that the resulting ranking can serve as a valuable basis for formulating requirements for AI-mediated solutions and creating AI algorithms that prioritize user's needs. Our target audience includes everyone who will be affected by AI systems, e.g., policy makers, algorithm developers, and system managers as our ranking clearly depicts user's awareness regarding AI ethics.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100055"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S294988212400015X/pdfft?md5=911f54e1aba722dbdf8fcef066dde5e5&pid=1-s2.0-S294988212400015X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139889572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artificial empathy in healthcare chatbots: Does it feel authentic?","authors":"Lennart Seitz","doi":"10.1016/j.chbah.2024.100067","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100067","url":null,"abstract":"<div><p>Implementing empathy to healthcare chatbots is considered promising to create a sense of human warmth. However, existing research frequently overlooks the multidimensionality of empathy, leading to an insufficient understanding if artificial empathy is perceived similarly to interpersonal empathy. This paper argues that implementing experiential expressions of empathy may have unintended negative consequences as they might feel inauthentic. Instead, providing instrumental support could be more suitable for modeling artificial empathy as it aligns better with computer-like schemas towards chatbots. Two experimental studies using healthcare chatbots examine the effect of <em>empathetic</em> (feeling with), <em>sympathetic</em> (feeling for), and <em>behavioral-empathetic</em> (empathetic helping) vs. <em>non-empathetic</em> responses on perceived warmth, perceived authenticity, and their consequences on trust and using intentions. Results reveal that any kind of empathy (vs. no empathy) enhances perceived warmth resulting in higher trust and using intentions. As hypothesized, <em>empathetic,</em> and <em>sympathetic</em> responses reduce the chatbot's perceived authenticity suppressing this positive effect in both studies. A third study does not replicate this backfiring effect in human-human interactions. This research thus highlights that empathy does not equally apply to human-bot interactions. It further introduces the concept of ‘perceived authenticity’ and demonstrates that distinctively human attributes might backfire by feeling inauthentic in interactions with chatbots.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100067"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000276/pdfft?md5=0d321010e61e06e55e950fbc8ca81fa2&pid=1-s2.0-S2949882124000276-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140328259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How chatbots perceive sexting by adolescents","authors":"Tsameret Ricon","doi":"10.1016/j.chbah.2024.100068","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100068","url":null,"abstract":"<div><p>This study compares the perceptions and attitudes of two AI chatbots – Claude and ChatGPT – towards sexting by adolescents. Sexting, defined as sharing sexually explicit messages or images, is increasingly common among teenagers and has sparked ethical debates on consent, privacy, and potential harm. The study employs qualitative content analysis to investigate how AI systems address the complex issues related to sexting.</p><p>The chatbots were queried on Dec 2023 about the legitimacy of sexting in adolescent relationships, the non-consensual sharing of sexts, and privacy risks. Their responses were analyzed for themes related to the appropriateness, potential harm, and the specificity of recommendations the chatbots offered.</p><p>Key differences emerged in their ethical stances. Claude declined to render definitive value judgments, instead emphasizing consent, evaluating risks versus rewards, and seeking to prevent harm by providing concrete advice. ChatGPT was more abstract, stating that appropriateness depends on societal norms. While Claude provided a harm-centric framing of potential emotional, reputational, and legal consequences of activities such as nonconsensual “revenge porn,” ChatGPT used more tentative language. Finally, Claude offered actionable guidance aligned with research insights, while ChatGPT reiterated the need to respect consent without clearly outlining the next steps.</p><p>Overall, Claude demonstrated greater nuance in reasoning about ethical sexting issues, while ChatGPT showed greater subjectivity tied to societal standards.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100068"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000288/pdfft?md5=1fd0ec5bdb989f7d776a272841f738bd&pid=1-s2.0-S2949882124000288-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140332879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Atypical responses of job candidates in chatbot job interviews and their possible triggers","authors":"Helena Řepová, Pavel Král, Jan Zouhar","doi":"10.1016/j.chbah.2023.100038","DOIUrl":"https://doi.org/10.1016/j.chbah.2023.100038","url":null,"abstract":"<div><p>Recruiters have observed increased verbal abuse and other non-standard behavior in chatbot job interviews. However, current knowledge about such behavior, which we term atypical responses, is limited. The purpose of this research is to explore and classify the atypical responses of job candidates and explain what triggers these atypical responses in two studies. Study 1 identified atypical candidate responses in chatbot job interviews by applying content analysis to transcripts of authentic job interviews (N = 6583). A multi-stage process classifies atypical responses into six categories: testing the chatbot's capabilities, verbal abuse, testing the chatbot's reactions, further conversation, sex offers, and reverse discrimination. Study 2 tested the triggers of atypical reactions in fictitious chatbot job interviews. Several triggers proved to induce atypical reactions, e.g., lower company attractiveness leads to testing of the chatbot's capabilities and reactions, and additional stress and negative well-being induce responses containing insults.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100038"},"PeriodicalIF":0.0,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882123000385/pdfft?md5=5e2309c23cc5c7cdfa25699223ab9646&pid=1-s2.0-S2949882123000385-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138738980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How humanlike is enough?: Uncover the underlying mechanism of virtual influencer endorsement","authors":"Yanni Ma , Jingren Li","doi":"10.1016/j.chbah.2023.100037","DOIUrl":"10.1016/j.chbah.2023.100037","url":null,"abstract":"<div><p>Social media and computer-mediated communication technologies have given rise to the emergence of virtual influencers and created a new digital landscape for online interactions. Although an increasing number of virtual influencers - computer-generated agents are developing partnerships with organizations and brands to connect with social media users, there is a paucity of research exploring the mechanism underlying the endorsement of virtual influencers. With an online experiment (<em>N</em> = 320), this study investigated the effects of using virtual influencers in branding. Particularly, we examined how variations in humanlike appearances affect two-dimensional anthropomorphism and para-social interaction in the communication process. In general, results showed that respondents perceived higher levels of mindful anthropomorphism and stronger para-social interactions with virtual influencers that had a more humanlike appearance, leading to more favorable brand attitudes and higher purchase intentions. No significant difference in branding effects was found between a highly humanlike virtual influencer and a real human. Additionally, the branding effects were not different between using a moderately humanlike virtual influencer and a highly humanlike one or a real human endorser via mindless anthropomorphism. Findings provide both theoretical and practical insights into using virtual influencers in branding.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100037"},"PeriodicalIF":0.0,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882123000373/pdfft?md5=d431ea176f9cdc0038568bb5ab7d180a&pid=1-s2.0-S2949882123000373-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139019364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can robots do therapy?: Examining the efficacy of a CBT bot in comparison with other behavioral intervention technologies in alleviating mental health symptoms","authors":"Laura Eltahawy , Todd Essig , Nils Myszkowski , Leora Trub","doi":"10.1016/j.chbah.2023.100035","DOIUrl":"10.1016/j.chbah.2023.100035","url":null,"abstract":"<div><p>Artificial intelligence therapy bots are gaining traction in the psychotherapy marketplace. Yet, the only existing study examining the efficacy of a therapy bot lacks any meaningful controls for comparison in claiming its effectiveness to treat depression. The current study aims to examine the efficacy of Woebot against three control conditions, including ELIZA, a basic (non-“smart”) conversational bot, a journaling app, and a passive psychoeducation control group. In a sample of 65 young adults, a repeated measures ANOVA failed to detect differences in symptom reduction between active and passive groups. In follow-up analyses using paired samples t-tests, ELIZA users experienced mental health improvements with the largest effect sizes across all mental health outcomes, followed by daily journaling, then Woebot, and finally psychoeducation. Findings reveal that Woebot does not offer benefit above and beyond other self-help behavioral intervention technologies. They underscore that using a no-treatment control group study design to market clinical services should no longer be acceptable nor serve as an acceptable precursor to marketing a chatbot as functionally equivalent to psychotherapy. Doing so creates unnecessary risk for consumers of psychotherapy and undermines the clinical value of robotic therapeutics that could prove effective at addressing mental health problems through rigorous research.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100035"},"PeriodicalIF":0.0,"publicationDate":"2023-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S294988212300035X/pdfft?md5=2f5886d63cf05ac01ee83fabc35463cb&pid=1-s2.0-S294988212300035X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138611264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The role of anthropomorphic, xˆenocentric, intentional, and social (AXˆIS) robotics in human-robot interaction","authors":"Anshu Saxena Arora , Amit Arora , K. Sivakumar , Vasyl Taras","doi":"10.1016/j.chbah.2023.100036","DOIUrl":"https://doi.org/10.1016/j.chbah.2023.100036","url":null,"abstract":"<div><p>This research explores the socio-cognitive mechanisms of human intelligence through the lens of anthropomorphic, <span><math><mrow><mover><mi>x</mi><mo>ˆ</mo></mover></mrow></math></span>enocentric, intentional, and social (A<span><math><mrow><mover><mi>X</mi><mo>ˆ</mo></mover></mrow></math></span>IS) robotics. After delving into three pivotal A<span><math><mrow><mover><mi>X</mi><mo>ˆ</mo></mover></mrow></math></span>IS concepts – robotic anthropomorphism, intentionality, and sociality – the study examines their impact on robot likeability and successful human-robot interaction (HRI) implementation. The research introduces the concept of robotic <span><math><mrow><mover><mi>x</mi><mo>ˆ</mo></mover></mrow></math></span>enocentrism (represented by perceived inferiority and social aggrandizement) as a new global dimension in social robotics literature, positioning it as a higher-order concept that moderates the impact of pivotal independent variables on robot likeability. Analyzing a sample of 308 respondents in global cross-cultural teams, the study confirms that pivotal AXÍS robotics concepts foster positive robot likeability and successful HRI implementation for both industrial and social robots. Perceived inferiority negatively moderated the relationship between anthropomorphism and robot likeability, but it was a positive moderator between intentionality and robot likeability. However, social aggrandizement did not act as a significant boundary condition. Sociality remains unaffected by the moderating influence of <span><math><mrow><mover><mi>x</mi><mo>ˆ</mo></mover></mrow></math></span>enocentrism. The study concludes by outlining future research directions for A<span><math><mrow><mover><mi>X</mi><mo>ˆ</mo></mover></mrow></math></span>IS robotics.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100036"},"PeriodicalIF":0.0,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882123000361/pdfft?md5=06b2e254b6c38991e93a7b2a4c5fe749&pid=1-s2.0-S2949882123000361-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138570371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"“There Is something Rotten in Denmark”: Investigating the Deepfake persona perceptions and their Implications for human-centered AI","authors":"Ilkka Kaate , Joni Salminen , João M. Santos , Soon-Gyo Jung , Hind Almerekhi , Bernard J. Jansen","doi":"10.1016/j.chbah.2023.100031","DOIUrl":"https://doi.org/10.1016/j.chbah.2023.100031","url":null,"abstract":"<div><p>Although they often have a negative connotation due to their social risks, deepfakes have the potential to improve HCI, human-centered AI, and user experience (UX). To investigate the impact of deepfakes on persona UX, we conducted an experimental study with 46 users who used a deepfake persona and a human persona to carry out a design task. We collected think-aloud, observant notes, and survey data. The results of our mixed-method analysis indicate that if users observe glitches in the deepfake personas, these glitches have a detrimental effect on the persona UX and task performance; however, not all users identify glitches. Our quantitative analysis of survey data shows that there are differences in how (a) users perceive deepfakes, (b) users detect deepfake glitches, (c) deepfake glitches affect information comprehension, and (d) deepfake glitches affect task completion. Glitches have the most significant impact on authenticity, persona perception, and task perception variables but less impact on behavioral variables. The results imply that organizations implementing deepfake personas need to address perceptual challenges before the full potential of deepfake technology can be realized for persona creation.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100031"},"PeriodicalIF":0.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882123000312/pdfft?md5=dc48df221bd193fd7498297cbded5465&pid=1-s2.0-S2949882123000312-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138557625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ChatGPT in education: A blessing or a curse? A qualitative study exploring early adopters’ utilization and perceptions","authors":"Reza Hadi Mogavi , Chao Deng , Justin Juho Kim , Pengyuan Zhou , Young D. Kwon , Ahmed Hosny Saleh Metwally , Ahmed Tlili , Simone Bassanelli , Antonio Bucchiarone , Sujit Gujar , Lennart E. Nacke , Pan Hui","doi":"10.1016/j.chbah.2023.100027","DOIUrl":"https://doi.org/10.1016/j.chbah.2023.100027","url":null,"abstract":"<div><p>To foster the development of pedagogically potent and ethically sound AI-integrated learning landscapes, it is pivotal to critically explore the perceptions and experiences of the users immersed in these contexts. In this study, we perform a thorough qualitative content analysis across four key social media platforms. Our goal is to understand the user experience (UX) and views of early adopters of ChatGPT across different educational sectors. The results of our research show that ChatGPT is most commonly used in the domains of higher education, K-12 education, and practical skills training. In social media dialogues, the topics most frequently associated with ChatGPT are <em>productivity</em>, <em>efficiency</em>, and <em>ethics</em>. Early adopters' attitudes towards ChatGPT are multifaceted. On one hand, some users view it as a transformative tool capable of amplifying student self-efficacy and learning motivation. On the other hand, there is a degree of apprehension among concerned users. They worry about a potential overdependence on the AI system, which they fear might encourage superficial learning habits and erode students’ social and critical thinking skills. This dichotomy of opinions underscores the complexity of Human-AI Interaction in educational contexts. Our investigation adds depth to this ongoing discourse, providing crowd-sourced insights for educators and learners who are considering incorporating ChatGPT or similar generative AI tools into their pedagogical strategies.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100027"},"PeriodicalIF":0.0,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882123000270/pdfft?md5=e16714ccddd9036b5ccd2fd32a44df5f&pid=1-s2.0-S2949882123000270-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138448597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}