Yimu Huang, Lin Liu. Acta Psychologica, vol. 259, art. 105383. Published 2025-09-01 (Epub 2025-08-08). DOI: 10.1016/j.actpsy.2025.105383.
The impact of algorithm awareness on the acceptance of personalized social media content recommendation based on the technology acceptance model.
Background: Concerns about personalization bias and opaque algorithmic control raise questions about trust and user agency. Despite widespread adoption, users often lack awareness of how recommendations are generated. This study examines how logic comprehension, bias perception, and transparency recognition influence trust, perceived usefulness, and behavioural intention within an extended technology acceptance model framework.
Methods: A cross-sectional survey of 1200 users from Twitter (X), TikTok, YouTube, and Facebook was conducted, stratified by algorithm awareness levels (low = 400, moderate = 400, high = 400). Validated and custom-developed scales were used to assess TAM constructs and algorithm-specific perceptions. Structural equation modelling (SEM) was performed using Python 3.9 (Semopy, Statsmodels; Python Software Foundation, USA). Qualitative outcomes were derived from 40 semi-structured interviews coded thematically using NVivo 14 (QSR International, Australia).
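The Methods describe structural equation modelling in Python, with usefulness, ease of use, and trust mediating the awareness–intention link. Below is a minimal NumPy sketch of that mediation logic on synthetic data: the indirect effect is the product of the path from the predictor to the mediator (a) and the path from the mediator to the outcome controlling for the predictor (b). All variable names and effect sizes here are illustrative assumptions, not the study's data or the authors' Semopy model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1200  # same sample size as the survey

# Synthetic single-mediator chain (assumed for illustration):
# awareness -> usefulness -> intention, plus a direct path.
awareness = rng.normal(size=n)
usefulness = 0.6 * awareness + rng.normal(scale=0.8, size=n)
intention = 0.5 * usefulness + 0.2 * awareness + rng.normal(scale=0.8, size=n)

def ols(y, *xs):
    """Least-squares slopes of y on the given predictors (intercept dropped)."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols(usefulness, awareness)[0]             # awareness -> usefulness
b = ols(intention, awareness, usefulness)[1]  # usefulness -> intention, controlling awareness
indirect = a * b  # mediated effect of awareness on intention
```

With these simulated paths the product a × b recovers roughly 0.6 × 0.5 = 0.30, in the same range as the indirect effects (0.29–0.35) the abstract reports; a full SEM would estimate all three mediators jointly rather than one regression pair at a time.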
Results: Algorithm awareness correlated positively with perceived usefulness (r = 0.62), ease of use (r = 0.55), trust (r = 0.48), and behavioural intention (r = 0.50). Structural equation modelling indicated direct effects on usefulness (β = 0.63, p = 0.002), ease of use (β = 0.58, p = 0.004), and trust (β = 0.51, p = 0.010), which jointly mediated the relationship between awareness and behavioural intention (indirect effects = 0.29-0.35). Moderation analyses showed that digital literacy, prior experience, and privacy concern significantly altered these paths (ΔR² = 0.05-0.07). Model fit indices were excellent (CFI = 0.98, TLI = 0.97, RMSEA = 0.04, SRMR = 0.05). Interview themes revealed user resistance strategies, including platform-switching, manual curation, and contesting recommendation logic.
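The moderation result (ΔR² = 0.05-0.07) means that adding an interaction term, e.g. awareness × digital literacy, explains additional variance beyond the main effects. A minimal NumPy sketch of that computation on synthetic data; variable names and coefficients are assumptions for illustration, not the study's dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1200

# Assumed scenario: digital literacy moderates the awareness -> intention path.
awareness = rng.normal(size=n)
literacy = rng.normal(size=n)
intention = (0.5 * awareness + 0.3 * literacy
             + 0.25 * awareness * literacy       # the moderation effect
             + rng.normal(scale=0.8, size=n))

def r_squared(y, X):
    """R^2 of an OLS fit of y on X (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

base = r_squared(intention, np.column_stack([awareness, literacy]))
moderated = r_squared(intention, np.column_stack(
    [awareness, literacy, awareness * literacy]))
delta_r2 = moderated - base  # variance explained by the interaction alone
```

With these simulated coefficients, ΔR² lands near 0.06, inside the 0.05-0.07 band the abstract reports; in practice the predictors would be mean-centred before forming the product term to ease interpretation.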
Conclusion: Algorithm awareness enhances perceived usefulness but also intensifies scepticism, underscoring the need for transparent, user-controllable recommendation systems that sustain engagement while preserving trust. Altogether, these insights offer useful direction for making recommendation systems more transparent and better tuned to user needs.
Journal introduction:
Acta Psychologica publishes original articles and extended reviews on selected books in any area of experimental psychology. The focus of the Journal is on empirical studies and evaluative review articles that increase the theoretical understanding of human capabilities.