社交媒体引发的两极分化

IF 6.5 2区 管理学 Q1 INFORMATION SCIENCE & LIBRARY SCIENCE
Israr Qureshi, Babita Bhatt
{"title":"社交媒体引发的两极分化","authors":"Israr Qureshi,&nbsp;Babita Bhatt","doi":"10.1111/isj.12525","DOIUrl":null,"url":null,"abstract":"<p>In contemporary discourse, a discernible surge in socio-cultural fragmentation, political schism and right-wing hate speech has emerged, exacerbated by the proliferation of extremist ideologies and discriminatory rhetoric (Das &amp; Schroeder, <span>2021</span>; Ghasiya &amp; Sasahara, <span>2022</span>; Hameleers, <span>2022</span>; Risius et al., <span>2024</span>). This phenomenon is starkly evident in online harassment, the dissemination of misinformation and the normalisation of confrontational dialogue, indicating a pressing demand for the cultivation of inclusive digital environments. Over the past two decades, the evolution of social media platforms has significantly contributed to this trend by employing algorithmic curation and engendering personalised information bubbles that foster heightened polarisation and the segregation of content consumption. While these platforms offer societal benefits such as timely access to news, they concurrently erode trust and facilitate the dissemination of extreme viewpoints and conspiracy theories (Abdalla Mikhaeil &amp; Baskerville, <span>2024</span>). Consequently, they have led to cyberbalkanisation, amplifying societal divides along the faultlines of ethnicity, religion, ideologies and sexual orientation. Compounded by a decline in trust in both institutions and fellow citizens, this expansion of communication avenues has provided fertile ground for the proliferation of extreme opinions, accompanied by challenges such as the dissemination of misinformation and the propagation of toxic headlines. Thus, an imminent imperative exists for scholarly inquiry aimed at comprehending the theoretical foundations of social media-induced polarisation and devising effective interventions to mitigate its deleterious societal impacts.</p><p>In the context of contemporary democracies, public deliberation, which is fundamental for societal progress, faces formidable barriers such as escalating incivility, the propagation of misinformation and polarisation across political, environmental and social spectra (French et al., <span>2024</span>; Miller et al., <span>2024</span>; Weismueller et al., <span>2024</span>). Despite serving as hubs for diverse interactions, social media platforms concurrently foster echo chambers, potentially obstructing the possibility of bridging divides. The complex interplay between social media and polarisation remains a contentious subject, with divergent perspectives on its role in shaping online discourse (Qureshi et al., <span>2020</span>). However, the ramifications of social media extend far beyond political domains, influencing environmental activism, public health responses and business marketing strategies. Moreover, the algorithmic curation utilised by these platforms poses formidable challenges, as it may exacerbate echo chambers and impede the exchange of diverse viewpoints (cf. Miller et al., <span>2024</span>). These platforms play a pivotal role in shaping societal dynamics, impacting attitudes, behaviours and the trajectory of critical issues. Thus, a nuanced understanding and concerted efforts to address the multifaceted impact of social media on public discourse are imperative for fostering inclusive and well-informed deliberation in contemporary democracies (Figure 1).</p><p>Social media-induced polarisation (SMIP) manifests the interplay of social, communicative, individual and selection processes with technological mechanisms such as platform features, algorithmic curation, likes and other signalling tools (Qureshi et al., <span>2020</span>). Social processes, encompassing interactions and influences among individuals, play a substantial role in shaping polarisation dynamics. Factors such as herd behaviour, social dominance and social identity formation delineate group dynamics and ideological cleavages, mirroring societal hierarchies and sectarian discord. Additionally, right-wing authoritarianism exacerbates polarisation by fostering the dissemination of misinformation across various domains (Hameleers, <span>2022</span>). Individual processes, propelled by confirmation bias and motivated reasoning, culminate in selective exposure and biased assimilation of information, thereby reinforcing ideological schisms (Qureshi et al., <span>2020</span>). Theories such as moral panic and moral contagion elucidate the influence of online interactions on attitude polarisation and emotive discourse. Some users use social media features to remain anonymous but also signal identity. Thus, <i>idenonymity</i> (identity + anonymity) refers to the dual strategy adopted by social media users, mainly trolls and those with extreme viewpoints, who maintain anonymity through pseudonyms while simultaneously signalling aspects of their identity, such as ideological affiliations or political leanings, through political avatar or hashtags (cf. Jaidka et al., <span>2022</span>). Communication processes facilitated by social media platforms afford avenues for marginalised voices but also engender flaming and agenda-setting, shaping public opinion and exacerbating polarisation (Qureshi et al., <span>2020</span>). Selection processes, such as homophily, selective exposure, biased assimilation and false consensus effect, further perpetuate polarisation as users gravitate towards and disseminate attitude-consistent information, reinforcing ideological echo chambers (Qureshi et al., <span>2020</span>). <i>Herded anarchy</i> refers to the semblance of disorder and chaos on the surface but with underlying control and guidance by algorithmic curation that amplifies extreme views and shapes discourse within digital spaces.</p><p>Polarisation arising from the interplay of these processes underscores the proliferation of fake news and disinformation, further entrenching polarisation dynamics. The result is <i>regression to meanness</i>, a phenomenon where the dominance of extreme views on social media, fuelled by fake news and algorithmic curation, leads to the marginalisation of moderate voices. This results in an environment where toxic or aggressive voices become more prevalent over time, overshadowing more balanced perspectives. Ultimately, the dominance of extreme voices perpetuates a cycle of fear, anxiety and isolation among moderate individuals, underscoring the multifaceted nature of SMIP and the intricate interplay of social, psychological and technological factors. A nuanced understanding of these dynamics is pivotal for mitigating the deleterious effects of polarisation and fostering inclusive discourse in digital spaces.</p><p>The pervasive ubiquity of social media in everyday life has entrenched it as a pivotal platform for communication, information dissemination and community formation. Initially lauded with utopian aspirations, the evolution of social media has unveiled a dystopian reality typified by the proliferation of disinformation, misinformation and partisan narratives, culminating in heightened polarisation within digital spheres. To confront this issue and prefigure a more equitable future (Bhatt et al., <span>2024</span>), a comprehensive approach involving stakeholders such as users, platform developers, policymakers and advertisers is imperative. Strategies aimed at mitigating polarisation should encompass the promotion of media literacy, the advocacy for algorithmic transparency, the diversification of content recommendations, community moderation, the cultivation of civil discourse, cross-sector collaboration and a re-evaluation of the affordances of social media platforms. The papers included in this special issue shed light on some of these mechanisms.</p><p>The seven papers comprising this special issue provide valuable insights into the expansive domain of SMIP. While not exhaustive, these articles offer a glimpse into the potential avenues of inquiry within this field and establish foundational benchmarks for future exploration. Demonstrating excellence in Information Systems (IS) research, these papers explore specific facets of SMIP, showcasing a diversity of theoretical perspectives, epistemological approaches and methodological frameworks applicable to SMIP studies. These papers provide a foundation for new research explorations to advance the SMIP research agenda.</p><p>Miller et al. (<span>2024</span>) studied the complex aspects of social media engagement, especially how user biases and the likelihood of paying attention affect responses to disinformation. Their research illuminates the intricate relationships between political alignment, truth bias, communicative suspicion and media literacy within social media contexts, contributing significantly to disinformation studies. Notably, the study adopts an experimental approach, considering concepts such as amplification cycles, persuasion, polarisation and aversion, thus enriching Elaboration Likelihood Model (ELM) literature. Their findings demonstrate that political alignment moderates the impact of suspicion on truth bias and engagement. As alignment increases, suspicion's effect on truth bias shifts positively, while truth bias's effect on engagement becomes pronounced. Integrating new media literacy theory into ELM underscores users' ability to discern disinformation from factual content. Surprisingly, critical consuming media literacy, when viewed as a stable trait rather than an intervention, correlates positively with disinformation engagement, challenging assumptions about media literacy's efficacy in mitigating truth bias. Moreover, the study suggests that critical consuming media literacy might erroneously empower users, potentially leading to increased disinformation engagement, particularly among politically biased individuals. These findings highlight the need for educational efforts promoting scepticism across political affiliations to combat disinformation effectively. Future research could investigate social media metrics and user perceptions to enhance the understanding of engagement dynamics and realism in online environments.</p><p>Risius et al. (<span>2024</span>) conducted a sociotechnical investigation into online extremism, arguing for the essential integration of societal and technological perspectives in crafting more effective regulatory policies. Through a systematic review of 222 articles, they aim to map the current research landscape, identify gaps and propose future research trajectories. Their research identifies two primary research streams. The first stream focuses on understanding online extremism, particularly examining how digital technologies have transformed it compared with traditional forms. The authors highlight a gap in comprehending the amplifying effect of internet technologies on extremism and advocate for inquiries into how online extremism differs from conventional manifestations, including its impact on extremist groups' strategies and structures. The second stream concentrates on countering online extremism, stressing the need for a nuanced understanding to develop effective counterstrategies. They caution against simply replicating traditional measures and emphasise the unique challenges of online extremism, such as its broad reach and potential for radicalising a wider audience. Utilising a sociotechnical lens, the authors advocate for analysing the interaction between social and technical elements to grasp online extremism fully. They underscore the importance of addressing both individual and societal impacts of digital technologies, including considerations of user privacy and platform characteristics. Thus, they make a strong case for continuing research to understand online extremism better. They stress the importance of having many different places where research is published and considering real-world risks when making rules and policies to control online extremism.</p><p>Weismueller et al. (<span>2024</span>) offer insights into how misinformation and extreme political content affect social media use, emotional reactions and political polarisation. They show how different types of content lead to different emotional responses depending on a person's political beliefs. They discovered that misinformation and extreme political content get shared more often on social media than accurate or less extreme political information, confirming what others have found before. Moreover, the study reveals that exposure to misinformation and extreme partisan content often evokes more intense negative emotional responses than encounters with accurate or less extreme content. These emotional reactions significantly contribute to polarisation among social media users. Additionally, the research demonstrates that individuals with stronger political ideologies tend to experience heightened negative emotions in response to extreme partisan content, highlighting the influence of personal beliefs on user interaction with social media content. Theoretically, the study advances the understanding of how different types of information influence user behaviour, emphasising falsehood, partisanship, negative emotions and political polarisation. However, the study presents mixed findings regarding the role of political ideology in moderating emotional responses to misinformation, suggesting a complex interplay between personal beliefs and content reactions. The findings highlight how important it is for social media platforms to check information from political elites and extreme partisan sources carefully. Moreover, educating users on critically engaging with political content on social media is crucial. Future research should investigate specific content characteristics that exacerbate the sharing and polarisation effects of misinformation and extreme partisan content.</p><p>Abdalla Mikhaeil and Baskerville (<span>2024</span>) explore how online conspiracy theories have become more extreme, leading to radical beliefs, especially with the help of social media features. They carefully examine how these theories become more extreme online, introducing a theoretical model that considers social identity, digital platform features and online community dynamics. Central to their framework is the concept of affordances. The authors emphasise the substantial influence of digital environments in shaping and amplifying these theories, highlighting the significance of understanding online radicalisation and its societal implications. Their theoretical framework, rooted in affordances, delineates the progression from first-order affordances to second-order affordances for escalation, emphasising shared social identity and ongoing resource commitment. The study underscores the role of social media platforms like 4chan and TikTok in fostering the growth of conspiracy theories, contrasting them with older platforms such as Twitter, YouTube and Facebook. Moreover, the authors advocate for interdisciplinary approaches to develop de-escalation strategies and enhance social media governance to mitigate the spread and impact of conspiracy theories. Additionally, they stress the applicability of their findings to various contexts, including lone-wolf terrorism and events like the U.S. Capitol riot. The research highlights social identity as a crucial factor in conspiracy theory radicalisation, suggesting avenues for future research to explore similar identity-driven phenomena and develop de-escalation strategies.</p><p>Wang et al. (<span>2024</span>) explore how social media interactions impact people's behaviour and opinions. They emphasise how the design of social media platforms plays a crucial role in shaping social norms and behaviours. The study specifically looks at how interactions on social media, such as using features like the ‘friend function,’ can affect the division of opinions in user reviews. To study this, they use a quasi-experimental design, propensity score matching (PSM) and difference-in-differences (DID). The findings reveal that the utilisation of the friend function is linked to less polarised reviews, with a more pronounced effect observed in positive reviews than negative ones. Moreover, the analysis suggests that highly engaged users are less affected by the friend function, indicating a nuanced relationship between engagement level and social influence. Theoretically, the research challenges the notion that social influence exacerbates opinion polarisation, demonstrating its potential as a mitigating factor. It distinguishes between the normative influence of online friends and informational influence, shedding light on the mechanisms underlying polarisation reduction. Furthermore, the study highlights the practical implications for social media platform designers, advocating for deliberate design strategies to cultivate a more socially oriented normative environment and reduce polarisation. Future research avenues include investigating the effects of social influence on review polarity through field experiments, analysing review content and exploring the impact of different types of online friendships. Overall, Wang et al.'s study enriches our understanding of SMIP, offering valuable insights for both theoretical development and practical application in platform design and social interaction dynamics.</p><p>Zhu et al. (<span>2024</span>) discuss the role of accountability mobilisation in combating misinformation and mitigating SMIP. They investigate the intricate interplay between cultural dynamics, notably guanxi, and the effectiveness of such interventions, advocating for culturally sensitive strategies across diverse social media landscapes. The study scrutinises the efficacy of accountability mobilisation to foster the prosocial punishment of misinformation disseminators on social media, particularly within China's guanxi culture. The authors elucidate how societal divisions increasingly align along an ‘Us versus Them’ axis, posing governance challenges, eroding institutional trust and jeopardising democratic systems. In response, the Chinese government enacted regulations in 2017 to empower social media users as misinformation monitors, a move especially relevant within the context of guanxi culture, where traditional prosocial punishment mechanisms are less prevalent due to associated personal costs. Employing a Vignette Survey Experiment (VSE) on WeChat users and analysing data through a random regression model, the study unveils that accountability mobilisation significantly amplifies prosocial punishment among bystanders, potentially disrupting the SMIP pathway of misinformation. However, the moderating influence of guanxi culture dampens this effect, as individuals are less inclined to apply prosocial punishment, mainly when misinformation spreaders belong to their guanxi network. The research underscores the practical utility of regulations assigning specific accountability to individuals, such as chat group administrators, to enhance their willingness to engage in prosocial punishment. Yet, the impact of guanxi necessitates nuanced approaches across varied cultural milieus. It highlights the imperative for further inquiry into the relationship between misinformation and polarisation, examining diverse misinformation types and cultural contexts.</p><p>French et al. (<span>2024</span>) present a comprehensive examination of ‘disinformation’, emphasising its pivotal role in societal polarisation and proposing strategies for its mitigation. Introducing the innovative Typology of Disinformation Intentionality and Impact (DII), this paper offers a framework to classify disinformation threats based on their intentionality and impact, specifically emphasising virality and polarisation. Grounded in Information Manipulation Theory (IMT), the research elucidates how disinformation is crafted and disseminated, making substantial contributions to understanding social media-induced polarisation. Employing decision theory and risk management principles, the study advocates for proactive approaches to evaluate and categorise disinformation risks, departing from traditional reactive tactics. The DII typology classifies disinformation based on spreaders' belief in its truthfulness (intentionality) and its potential to go viral and induce polarisation (impact). This results in a matrix with four quadrants identifying different types of disinformation spreaders. A case study of disinformation campaigns during the US presidential elections illustrates the practical application of the DII typology, providing insights into the nature and scale of disinformation issues. The study proposes specific mitigation strategies for each category within the DII typology, including monitoring and responding to low-impact disinformation, raising awareness, managing high-impact campaigns and discouraging dissemination of high-impact, intentional disinformation. Furthermore, the authors advocate for establishing a Disinformation Management Officer (DMO) role within organisations. This role entails ongoing monitoring, assessment and response to disinformation threats, aligning strategies with the DII typology to combat disinformation effectively.</p><p>In conclusion, the articles featured in this special issue significantly advance our understanding of SMIP. Through various theoretical frameworks, empirical evidence and practical implications, they offer valuable insights into the intricate challenges posed by polarisation within digital contexts. Despite the perceived advantages of social media platforms, this collection of research brings to light the urgent concerns arising from increased social media driven by misinformation and polarisation, the important issues of our time that are often sidelined in public policy discourse. By exploring various facets of SMIP, these articles deepen our understanding of the phenomenon and its profound consequences, fostering a collective awareness of social media's role in perpetuating socio-cultural polarisation. The relevance of this special issue is evident as it addresses both theoretical complexities and methodological obstacles, while also suggesting potential solutions that could stimulate broader discussions on socio-cultural polarisation induced by social media. This body of work is poised to benefit scholars across diverse disciplines, policymakers and organisational leaders by equipping them with valuable insights and tools to navigate the complexities of polarisation in digital spaces. Ultimately, we anticipate that the insights gleaned from this editorial and the articles within this special issue will empower stakeholders with the conceptual and empirical resources necessary to foster inclusive and cohesive digital environments, thus mitigating the adverse impacts of polarisation induced by misinformation, disinformation, fake news, deepfakes, conspiracy theories and herded anarchy. Such polarisation often manifests itself in echo chambers, filter bubbles, cyberbalkanisation, splinternet, ghettoisation and <i>regression to meanness</i>, emphasising the critical importance of addressing these issues to promote informed discourse and societal cohesion.</p>","PeriodicalId":48049,"journal":{"name":"Information Systems Journal","volume":"34 4","pages":"1425-1431"},"PeriodicalIF":6.5000,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/isj.12525","citationCount":"0","resultStr":"{\"title\":\"Social media-induced polarisation\",\"authors\":\"Israr Qureshi,&nbsp;Babita Bhatt\",\"doi\":\"10.1111/isj.12525\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>In contemporary discourse, a discernible surge in socio-cultural fragmentation, political schism and right-wing hate speech has emerged, exacerbated by the proliferation of extremist ideologies and discriminatory rhetoric (Das &amp; Schroeder, <span>2021</span>; Ghasiya &amp; Sasahara, <span>2022</span>; Hameleers, <span>2022</span>; Risius et al., <span>2024</span>). This phenomenon is starkly evident in online harassment, the dissemination of misinformation and the normalisation of confrontational dialogue, indicating a pressing demand for the cultivation of inclusive digital environments. Over the past two decades, the evolution of social media platforms has significantly contributed to this trend by employing algorithmic curation and engendering personalised information bubbles that foster heightened polarisation and the segregation of content consumption. While these platforms offer societal benefits such as timely access to news, they concurrently erode trust and facilitate the dissemination of extreme viewpoints and conspiracy theories (Abdalla Mikhaeil &amp; Baskerville, <span>2024</span>). Consequently, they have led to cyberbalkanisation, amplifying societal divides along the faultlines of ethnicity, religion, ideologies and sexual orientation. Compounded by a decline in trust in both institutions and fellow citizens, this expansion of communication avenues has provided fertile ground for the proliferation of extreme opinions, accompanied by challenges such as the dissemination of misinformation and the propagation of toxic headlines. Thus, an imminent imperative exists for scholarly inquiry aimed at comprehending the theoretical foundations of social media-induced polarisation and devising effective interventions to mitigate its deleterious societal impacts.</p><p>In the context of contemporary democracies, public deliberation, which is fundamental for societal progress, faces formidable barriers such as escalating incivility, the propagation of misinformation and polarisation across political, environmental and social spectra (French et al., <span>2024</span>; Miller et al., <span>2024</span>; Weismueller et al., <span>2024</span>). Despite serving as hubs for diverse interactions, social media platforms concurrently foster echo chambers, potentially obstructing the possibility of bridging divides. The complex interplay between social media and polarisation remains a contentious subject, with divergent perspectives on its role in shaping online discourse (Qureshi et al., <span>2020</span>). However, the ramifications of social media extend far beyond political domains, influencing environmental activism, public health responses and business marketing strategies. Moreover, the algorithmic curation utilised by these platforms poses formidable challenges, as it may exacerbate echo chambers and impede the exchange of diverse viewpoints (cf. Miller et al., <span>2024</span>). These platforms play a pivotal role in shaping societal dynamics, impacting attitudes, behaviours and the trajectory of critical issues. Thus, a nuanced understanding and concerted efforts to address the multifaceted impact of social media on public discourse are imperative for fostering inclusive and well-informed deliberation in contemporary democracies (Figure 1).</p><p>Social media-induced polarisation (SMIP) manifests the interplay of social, communicative, individual and selection processes with technological mechanisms such as platform features, algorithmic curation, likes and other signalling tools (Qureshi et al., <span>2020</span>). Social processes, encompassing interactions and influences among individuals, play a substantial role in shaping polarisation dynamics. Factors such as herd behaviour, social dominance and social identity formation delineate group dynamics and ideological cleavages, mirroring societal hierarchies and sectarian discord. Additionally, right-wing authoritarianism exacerbates polarisation by fostering the dissemination of misinformation across various domains (Hameleers, <span>2022</span>). Individual processes, propelled by confirmation bias and motivated reasoning, culminate in selective exposure and biased assimilation of information, thereby reinforcing ideological schisms (Qureshi et al., <span>2020</span>). Theories such as moral panic and moral contagion elucidate the influence of online interactions on attitude polarisation and emotive discourse. Some users use social media features to remain anonymous but also signal identity. Thus, <i>idenonymity</i> (identity + anonymity) refers to the dual strategy adopted by social media users, mainly trolls and those with extreme viewpoints, who maintain anonymity through pseudonyms while simultaneously signalling aspects of their identity, such as ideological affiliations or political leanings, through political avatar or hashtags (cf. Jaidka et al., <span>2022</span>). Communication processes facilitated by social media platforms afford avenues for marginalised voices but also engender flaming and agenda-setting, shaping public opinion and exacerbating polarisation (Qureshi et al., <span>2020</span>). Selection processes, such as homophily, selective exposure, biased assimilation and false consensus effect, further perpetuate polarisation as users gravitate towards and disseminate attitude-consistent information, reinforcing ideological echo chambers (Qureshi et al., <span>2020</span>). <i>Herded anarchy</i> refers to the semblance of disorder and chaos on the surface but with underlying control and guidance by algorithmic curation that amplifies extreme views and shapes discourse within digital spaces.</p><p>Polarisation arising from the interplay of these processes underscores the proliferation of fake news and disinformation, further entrenching polarisation dynamics. The result is <i>regression to meanness</i>, a phenomenon where the dominance of extreme views on social media, fuelled by fake news and algorithmic curation, leads to the marginalisation of moderate voices. This results in an environment where toxic or aggressive voices become more prevalent over time, overshadowing more balanced perspectives. Ultimately, the dominance of extreme voices perpetuates a cycle of fear, anxiety and isolation among moderate individuals, underscoring the multifaceted nature of SMIP and the intricate interplay of social, psychological and technological factors. A nuanced understanding of these dynamics is pivotal for mitigating the deleterious effects of polarisation and fostering inclusive discourse in digital spaces.</p><p>The pervasive ubiquity of social media in everyday life has entrenched it as a pivotal platform for communication, information dissemination and community formation. Initially lauded with utopian aspirations, the evolution of social media has unveiled a dystopian reality typified by the proliferation of disinformation, misinformation and partisan narratives, culminating in heightened polarisation within digital spheres. To confront this issue and prefigure a more equitable future (Bhatt et al., <span>2024</span>), a comprehensive approach involving stakeholders such as users, platform developers, policymakers and advertisers is imperative. Strategies aimed at mitigating polarisation should encompass the promotion of media literacy, the advocacy for algorithmic transparency, the diversification of content recommendations, community moderation, the cultivation of civil discourse, cross-sector collaboration and a re-evaluation of the affordances of social media platforms. The papers included in this special issue shed light on some of these mechanisms.</p><p>The seven papers comprising this special issue provide valuable insights into the expansive domain of SMIP. While not exhaustive, these articles offer a glimpse into the potential avenues of inquiry within this field and establish foundational benchmarks for future exploration. Demonstrating excellence in Information Systems (IS) research, these papers explore specific facets of SMIP, showcasing a diversity of theoretical perspectives, epistemological approaches and methodological frameworks applicable to SMIP studies. These papers provide a foundation for new research explorations to advance the SMIP research agenda.</p><p>Miller et al. (<span>2024</span>) studied the complex aspects of social media engagement, especially how user biases and the likelihood of paying attention affect responses to disinformation. Their research illuminates the intricate relationships between political alignment, truth bias, communicative suspicion and media literacy within social media contexts, contributing significantly to disinformation studies. Notably, the study adopts an experimental approach, considering concepts such as amplification cycles, persuasion, polarisation and aversion, thus enriching Elaboration Likelihood Model (ELM) literature. Their findings demonstrate that political alignment moderates the impact of suspicion on truth bias and engagement. As alignment increases, suspicion's effect on truth bias shifts positively, while truth bias's effect on engagement becomes pronounced. Integrating new media literacy theory into ELM underscores users' ability to discern disinformation from factual content. Surprisingly, critical consuming media literacy, when viewed as a stable trait rather than an intervention, correlates positively with disinformation engagement, challenging assumptions about media literacy's efficacy in mitigating truth bias. Moreover, the study suggests that critical consuming media literacy might erroneously empower users, potentially leading to increased disinformation engagement, particularly among politically biased individuals. These findings highlight the need for educational efforts promoting scepticism across political affiliations to combat disinformation effectively. Future research could investigate social media metrics and user perceptions to enhance the understanding of engagement dynamics and realism in online environments.</p><p>Risius et al. (<span>2024</span>) conducted a sociotechnical investigation into online extremism, arguing for the essential integration of societal and technological perspectives in crafting more effective regulatory policies. Through a systematic review of 222 articles, they aim to map the current research landscape, identify gaps and propose future research trajectories. Their research identifies two primary research streams. The first stream focuses on understanding online extremism, particularly examining how digital technologies have transformed it compared with traditional forms. The authors highlight a gap in comprehending the amplifying effect of internet technologies on extremism and advocate for inquiries into how online extremism differs from conventional manifestations, including its impact on extremist groups' strategies and structures. The second stream concentrates on countering online extremism, stressing the need for a nuanced understanding to develop effective counterstrategies. They caution against simply replicating traditional measures and emphasise the unique challenges of online extremism, such as its broad reach and potential for radicalising a wider audience. Utilising a sociotechnical lens, the authors advocate for analysing the interaction between social and technical elements to grasp online extremism fully. They underscore the importance of addressing both individual and societal impacts of digital technologies, including considerations of user privacy and platform characteristics. Thus, they make a strong case for continuing research to understand online extremism better. They stress the importance of having many different places where research is published and considering real-world risks when making rules and policies to control online extremism.</p><p>Weismueller et al. (<span>2024</span>) offer insights into how misinformation and extreme political content affect social media use, emotional reactions and political polarisation. They show how different types of content lead to different emotional responses depending on a person's political beliefs. They discovered that misinformation and extreme political content get shared more often on social media than accurate or less extreme political information, confirming what others have found before. Moreover, the study reveals that exposure to misinformation and extreme partisan content often evokes more intense negative emotional responses than encounters with accurate or less extreme content. These emotional reactions significantly contribute to polarisation among social media users. Additionally, the research demonstrates that individuals with stronger political ideologies tend to experience heightened negative emotions in response to extreme partisan content, highlighting the influence of personal beliefs on user interaction with social media content. Theoretically, the study advances the understanding of how different types of information influence user behaviour, emphasising falsehood, partisanship, negative emotions and political polarisation. However, the study presents mixed findings regarding the role of political ideology in moderating emotional responses to misinformation, suggesting a complex interplay between personal beliefs and content reactions. The findings highlight how important it is for social media platforms to check information from political elites and extreme partisan sources carefully. Moreover, educating users on critically engaging with political content on social media is crucial. Future research should investigate specific content characteristics that exacerbate the sharing and polarisation effects of misinformation and extreme partisan content.</p><p>Abdalla Mikhaeil and Baskerville (<span>2024</span>) explore how online conspiracy theories have become more extreme, leading to radical beliefs, especially with the help of social media features. They carefully examine how these theories become more extreme online, introducing a theoretical model that considers social identity, digital platform features and online community dynamics. Central to their framework is the concept of affordances. The authors emphasise the substantial influence of digital environments in shaping and amplifying these theories, highlighting the significance of understanding online radicalisation and its societal implications. Their theoretical framework, rooted in affordances, delineates the progression from first-order affordances to second-order affordances for escalation, emphasising shared social identity and ongoing resource commitment. The study underscores the role of social media platforms like 4chan and TikTok in fostering the growth of conspiracy theories, contrasting them with older platforms such as Twitter, YouTube and Facebook. Moreover, the authors advocate for interdisciplinary approaches to develop de-escalation strategies and enhance social media governance to mitigate the spread and impact of conspiracy theories. Additionally, they stress the applicability of their findings to various contexts, including lone-wolf terrorism and events like the U.S. Capitol riot. The research highlights social identity as a crucial factor in conspiracy theory radicalisation, suggesting avenues for future research to explore similar identity-driven phenomena and develop de-escalation strategies.</p><p>Wang et al. (<span>2024</span>) explore how social media interactions impact people's behaviour and opinions. They emphasise how the design of social media platforms plays a crucial role in shaping social norms and behaviours. The study specifically looks at how interactions on social media, such as using features like the ‘friend function,’ can affect the division of opinions in user reviews. To study this, they use a quasi-experimental design, propensity score matching (PSM) and difference-in-differences (DID). The findings reveal that the utilisation of the friend function is linked to less polarised reviews, with a more pronounced effect observed in positive reviews than negative ones. Moreover, the analysis suggests that highly engaged users are less affected by the friend function, indicating a nuanced relationship between engagement level and social influence. Theoretically, the research challenges the notion that social influence exacerbates opinion polarisation, demonstrating its potential as a mitigating factor. It distinguishes between the normative influence of online friends and informational influence, shedding light on the mechanisms underlying polarisation reduction. Furthermore, the study highlights the practical implications for social media platform designers, advocating for deliberate design strategies to cultivate a more socially oriented normative environment and reduce polarisation. Future research avenues include investigating the effects of social influence on review polarity through field experiments, analysing review content and exploring the impact of different types of online friendships. Overall, Wang et al.'s study enriches our understanding of SMIP, offering valuable insights for both theoretical development and practical application in platform design and social interaction dynamics.</p><p>Zhu et al. (<span>2024</span>) discuss the role of accountability mobilisation in combating misinformation and mitigating SMIP. They investigate the intricate interplay between cultural dynamics, notably guanxi, and the effectiveness of such interventions, advocating for culturally sensitive strategies across diverse social media landscapes. The study scrutinises the efficacy of accountability mobilisation to foster the prosocial punishment of misinformation disseminators on social media, particularly within China's guanxi culture. The authors elucidate how societal divisions increasingly align along an ‘Us versus Them’ axis, posing governance challenges, eroding institutional trust and jeopardising democratic systems. In response, the Chinese government enacted regulations in 2017 to empower social media users as misinformation monitors, a move especially relevant within the context of guanxi culture, where traditional prosocial punishment mechanisms are less prevalent due to associated personal costs. Employing a Vignette Survey Experiment (VSE) on WeChat users and analysing data through a random regression model, the study unveils that accountability mobilisation significantly amplifies prosocial punishment among bystanders, potentially disrupting the SMIP pathway of misinformation. However, the moderating influence of guanxi culture dampens this effect, as individuals are less inclined to apply prosocial punishment, mainly when misinformation spreaders belong to their guanxi network. The research underscores the practical utility of regulations assigning specific accountability to individuals, such as chat group administrators, to enhance their willingness to engage in prosocial punishment. Yet, the impact of guanxi necessitates nuanced approaches across varied cultural milieus. It highlights the imperative for further inquiry into the relationship between misinformation and polarisation, examining diverse misinformation types and cultural contexts.</p><p>French et al. (<span>2024</span>) present a comprehensive examination of ‘disinformation’, emphasising its pivotal role in societal polarisation and proposing strategies for its mitigation. Introducing the innovative Typology of Disinformation Intentionality and Impact (DII), this paper offers a framework to classify disinformation threats based on their intentionality and impact, specifically emphasising virality and polarisation. Grounded in Information Manipulation Theory (IMT), the research elucidates how disinformation is crafted and disseminated, making substantial contributions to understanding social media-induced polarisation. Employing decision theory and risk management principles, the study advocates for proactive approaches to evaluate and categorise disinformation risks, departing from traditional reactive tactics. The DII typology classifies disinformation based on spreaders' belief in its truthfulness (intentionality) and its potential to go viral and induce polarisation (impact). This results in a matrix with four quadrants identifying different types of disinformation spreaders. A case study of disinformation campaigns during the US presidential elections illustrates the practical application of the DII typology, providing insights into the nature and scale of disinformation issues. The study proposes specific mitigation strategies for each category within the DII typology, including monitoring and responding to low-impact disinformation, raising awareness, managing high-impact campaigns and discouraging dissemination of high-impact, intentional disinformation. Furthermore, the authors advocate for establishing a Disinformation Management Officer (DMO) role within organisations. This role entails ongoing monitoring, assessment and response to disinformation threats, aligning strategies with the DII typology to combat disinformation effectively.</p><p>In conclusion, the articles featured in this special issue significantly advance our understanding of SMIP. Through various theoretical frameworks, empirical evidence and practical implications, they offer valuable insights into the intricate challenges posed by polarisation within digital contexts. Despite the perceived advantages of social media platforms, this collection of research brings to light the urgent concerns arising from increased social media driven by misinformation and polarisation, the important issues of our time that are often sidelined in public policy discourse. By exploring various facets of SMIP, these articles deepen our understanding of the phenomenon and its profound consequences, fostering a collective awareness of social media's role in perpetuating socio-cultural polarisation. The relevance of this special issue is evident as it addresses both theoretical complexities and methodological obstacles, while also suggesting potential solutions that could stimulate broader discussions on socio-cultural polarisation induced by social media. This body of work is poised to benefit scholars across diverse disciplines, policymakers and organisational leaders by equipping them with valuable insights and tools to navigate the complexities of polarisation in digital spaces. Ultimately, we anticipate that the insights gleaned from this editorial and the articles within this special issue will empower stakeholders with the conceptual and empirical resources necessary to foster inclusive and cohesive digital environments, thus mitigating the adverse impacts of polarisation induced by misinformation, disinformation, fake news, deepfakes, conspiracy theories and herded anarchy. Such polarisation often manifests itself in echo chambers, filter bubbles, cyberbalkanisation, splinternet, ghettoisation and <i>regression to meanness</i>, emphasising the critical importance of addressing these issues to promote informed discourse and societal cohesion.</p>\",\"PeriodicalId\":48049,\"journal\":{\"name\":\"Information Systems Journal\",\"volume\":\"34 4\",\"pages\":\"1425-1431\"},\"PeriodicalIF\":6.5000,\"publicationDate\":\"2024-04-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/isj.12525\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Systems Journal\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/isj.12525\",\"RegionNum\":2,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"INFORMATION SCIENCE & LIBRARY SCIENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Systems Journal","FirstCategoryId":"91","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/isj.12525","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
引用次数: 0

摘要

(2024)对网络极端主义进行了社会技术调查,认为在制定更有效的监管政策时,必须将社会和技术视角结合起来。通过对 222 篇文章的系统性回顾,他们旨在绘制当前的研究图景,找出差距并提出未来的研究轨迹。他们的研究确定了两个主要研究方向。第一个研究方向侧重于了解网络极端主义,特别是研究与传统形式相比,数字技术是如何改变极端主义的。作者强调了在理解互联网技术对极端主义的放大效应方面存在的差距,并主张探究网络极端主义与传统表现形式的不同之处,包括其对极端主义团体战略和结构的影响。第二种观点集中于打击网络极端主义,强调需要细致入微的理解才能制定有效的反击策略。他们告诫不要简单地复制传统措施,并强调了网络极端主义的独特挑战,如其广泛的影响力和激化更多受众的潜力。作者从社会技术的角度出发,主张分析社会和技术因素之间的相互作用,以全面把握网络极端主义。他们强调了解决数字技术对个人和社会影响的重要性,包括对用户隐私和平台特性的考虑。因此,他们强烈建议继续开展研究,以更好地了解网络极端主义。Weismueller 等人(2024 年)深入探讨了错误信息和极端政治内容如何影响社交媒体的使用、情绪反应和政治极化。他们展示了不同类型的内容如何根据个人的政治信仰导致不同的情绪反应。他们发现,与准确或不那么极端的政治信息相比,错误信息和极端政治内容在社交媒体上被分享的频率更高,这证实了其他人之前的发现。此外,研究还发现,与接触准确或不那么极端的内容相比,接触错误信息和极端党派内容往往会引起更强烈的负面情绪反应。这些情绪反应极大地助长了社交媒体用户之间的两极分化。此外,研究还表明,政治意识形态较强的人在遇到极端党派内容时往往会产生更强烈的负面情绪反应,这凸显了个人信仰对用户与社交媒体内容互动的影响。从理论上讲,本研究推进了对不同类型信息如何影响用户行为的理解,强调了虚假、党派、负面情绪和政治极化。然而,关于政治意识形态在调节对错误信息的情绪反应方面的作用,研究结果不一,表明个人信仰与内容反应之间存在复杂的相互作用。研究结果凸显了社交媒体平台仔细检查来自政治精英和极端党派来源的信息的重要性。此外,教育用户以批判的眼光看待社交媒体上的政治内容也至关重要。Abdalla Mikhaeil 和 Baskerville(2024 年)探讨了网络阴谋论如何变得更加极端,导致激进信仰,尤其是在社交媒体功能的帮助下。他们仔细研究了这些理论如何在网上变得更加极端,并引入了一个考虑社会身份、数字平台特征和网络社区动态的理论模型。其框架的核心是 "承受力 "概念。作者强调了数字环境在塑造和放大这些理论方面的重大影响,强调了理解网络激进化及其社会影响的重要性。他们的理论框架植根于 "可承受性",划分了从一阶 "可承受性 "到二阶 "可承受性 "的升级过程,强调了共同的社会身份和持续的资源承诺。研究强调了 4chan 和 TikTok 等社交媒体平台在促进阴谋论发展方面的作用,并将其与 Twitter、YouTube 和 Facebook 等老平台进行了对比。此外,作者主张采用跨学科方法制定降级策略并加强社交媒体管理,以减轻阴谋论的传播和影响。 这项研究运用决策理论和风险管理原则,主张采用积极主动的方法来评估虚假信息风险并对其进行分类,而不是采用传统的被动应对策略。DII 类型学根据传播者对虚假信息真实性的信念(意向性)及其病毒式传播和引发两极分化的潜力(影响力)对虚假信息进行分类。这就形成了一个矩阵,其中的四个象限可识别不同类型的虚假信息传播者。对美国总统大选期间虚假信息活动的案例研究说明了 DII 类型学的实际应用,提供了对虚假信息问题的性质和规模的见解。该研究针对 DII 类型学中的每个类别提出了具体的缓解策略,包括监测和应对影响较小的虚假信息、提高认识、管理影响较大的活动以及阻止传播影响较大的蓄意虚假信息。此外,作者还主张在组织内部设立虚假信息管理官(DMO)一职。这一角色需要持续监测、评估和应对虚假信息威胁,根据 DII 类型调整战略,以有效打击虚假信息。通过各种理论框架、经验证据和实际影响,这些文章对数字环境下两极分化带来的错综复杂的挑战提出了宝贵的见解。尽管社交媒体平台被认为具有优势,但这组研究报告揭示了由错误信息和两极分化驱动的社交媒体增长所带来的紧迫问题,这些问题是我们这个时代的重要问题,但在公共政策讨论中却常常被搁置一边。通过探讨 SMIP 的各个方面,这些文章加深了我们对这一现象及其深远影响的理解,促进了对社交媒体在社会文化两极分化中的作用的集体认识。本特刊的相关性是显而易见的,因为它既解决了理论上的复杂性和方法上的障碍,又提出了潜在的解决方案,可以激发人们对社交媒体引发的社会文化两极分化进行更广泛的讨论。这些研究成果将为不同学科的学者、政策制定者和组织领导者提供有价值的见解和工具,帮助他们应对数字空间中复杂的两极分化问题。最终,我们预计,从这篇社论和本特刊中的文章中获得的见解将为利益相关者提供必要的概念和经验资源,以促进包容和有凝聚力的数字环境,从而减轻由错误信息、虚假信息、假新闻、深度伪造、阴谋论和无政府状态所引发的两极分化的不利影响。这种两极分化往往表现为回音室、过滤泡沫、网络巴尔干化、互联网分裂、贫民窟化和向卑鄙倒退,这强调了解决这些问题对于促进知情讨论和社会凝聚力的极端重要性。
本文章由计算机程序翻译,如有差异,请以英文原文为准。

Social media-induced polarisation

Social media-induced polarisation

In contemporary discourse, a discernible surge in socio-cultural fragmentation, political schism and right-wing hate speech has emerged, exacerbated by the proliferation of extremist ideologies and discriminatory rhetoric (Das & Schroeder, 2021; Ghasiya & Sasahara, 2022; Hameleers, 2022; Risius et al., 2024). This phenomenon is starkly evident in online harassment, the dissemination of misinformation and the normalisation of confrontational dialogue, indicating a pressing demand for the cultivation of inclusive digital environments. Over the past two decades, the evolution of social media platforms has significantly contributed to this trend by employing algorithmic curation and engendering personalised information bubbles that foster heightened polarisation and the segregation of content consumption. While these platforms offer societal benefits such as timely access to news, they concurrently erode trust and facilitate the dissemination of extreme viewpoints and conspiracy theories (Abdalla Mikhaeil & Baskerville, 2024). Consequently, they have led to cyberbalkanisation, amplifying societal divides along the faultlines of ethnicity, religion, ideologies and sexual orientation. Compounded by a decline in trust in both institutions and fellow citizens, this expansion of communication avenues has provided fertile ground for the proliferation of extreme opinions, accompanied by challenges such as the dissemination of misinformation and the propagation of toxic headlines. Thus, an imminent imperative exists for scholarly inquiry aimed at comprehending the theoretical foundations of social media-induced polarisation and devising effective interventions to mitigate its deleterious societal impacts.

In the context of contemporary democracies, public deliberation, which is fundamental for societal progress, faces formidable barriers such as escalating incivility, the propagation of misinformation and polarisation across political, environmental and social spectra (French et al., 2024; Miller et al., 2024; Weismueller et al., 2024). Despite serving as hubs for diverse interactions, social media platforms concurrently foster echo chambers, potentially obstructing the possibility of bridging divides. The complex interplay between social media and polarisation remains a contentious subject, with divergent perspectives on its role in shaping online discourse (Qureshi et al., 2020). However, the ramifications of social media extend far beyond political domains, influencing environmental activism, public health responses and business marketing strategies. Moreover, the algorithmic curation utilised by these platforms poses formidable challenges, as it may exacerbate echo chambers and impede the exchange of diverse viewpoints (cf. Miller et al., 2024). These platforms play a pivotal role in shaping societal dynamics, impacting attitudes, behaviours and the trajectory of critical issues. Thus, a nuanced understanding and concerted efforts to address the multifaceted impact of social media on public discourse are imperative for fostering inclusive and well-informed deliberation in contemporary democracies (Figure 1).

Social media-induced polarisation (SMIP) manifests the interplay of social, communicative, individual and selection processes with technological mechanisms such as platform features, algorithmic curation, likes and other signalling tools (Qureshi et al., 2020). Social processes, encompassing interactions and influences among individuals, play a substantial role in shaping polarisation dynamics. Factors such as herd behaviour, social dominance and social identity formation delineate group dynamics and ideological cleavages, mirroring societal hierarchies and sectarian discord. Additionally, right-wing authoritarianism exacerbates polarisation by fostering the dissemination of misinformation across various domains (Hameleers, 2022). Individual processes, propelled by confirmation bias and motivated reasoning, culminate in selective exposure and biased assimilation of information, thereby reinforcing ideological schisms (Qureshi et al., 2020). Theories such as moral panic and moral contagion elucidate the influence of online interactions on attitude polarisation and emotive discourse. Some users use social media features to remain anonymous but also signal identity. Thus, idenonymity (identity + anonymity) refers to the dual strategy adopted by social media users, mainly trolls and those with extreme viewpoints, who maintain anonymity through pseudonyms while simultaneously signalling aspects of their identity, such as ideological affiliations or political leanings, through political avatar or hashtags (cf. Jaidka et al., 2022). Communication processes facilitated by social media platforms afford avenues for marginalised voices but also engender flaming and agenda-setting, shaping public opinion and exacerbating polarisation (Qureshi et al., 2020). Selection processes, such as homophily, selective exposure, biased assimilation and false consensus effect, further perpetuate polarisation as users gravitate towards and disseminate attitude-consistent information, reinforcing ideological echo chambers (Qureshi et al., 2020). Herded anarchy refers to the semblance of disorder and chaos on the surface but with underlying control and guidance by algorithmic curation that amplifies extreme views and shapes discourse within digital spaces.

Polarisation arising from the interplay of these processes underscores the proliferation of fake news and disinformation, further entrenching polarisation dynamics. The result is regression to meanness, a phenomenon where the dominance of extreme views on social media, fuelled by fake news and algorithmic curation, leads to the marginalisation of moderate voices. This results in an environment where toxic or aggressive voices become more prevalent over time, overshadowing more balanced perspectives. Ultimately, the dominance of extreme voices perpetuates a cycle of fear, anxiety and isolation among moderate individuals, underscoring the multifaceted nature of SMIP and the intricate interplay of social, psychological and technological factors. A nuanced understanding of these dynamics is pivotal for mitigating the deleterious effects of polarisation and fostering inclusive discourse in digital spaces.

The pervasive ubiquity of social media in everyday life has entrenched it as a pivotal platform for communication, information dissemination and community formation. Initially lauded with utopian aspirations, the evolution of social media has unveiled a dystopian reality typified by the proliferation of disinformation, misinformation and partisan narratives, culminating in heightened polarisation within digital spheres. To confront this issue and prefigure a more equitable future (Bhatt et al., 2024), a comprehensive approach involving stakeholders such as users, platform developers, policymakers and advertisers is imperative. Strategies aimed at mitigating polarisation should encompass the promotion of media literacy, the advocacy for algorithmic transparency, the diversification of content recommendations, community moderation, the cultivation of civil discourse, cross-sector collaboration and a re-evaluation of the affordances of social media platforms. The papers included in this special issue shed light on some of these mechanisms.

The seven papers comprising this special issue provide valuable insights into the expansive domain of SMIP. While not exhaustive, these articles offer a glimpse into the potential avenues of inquiry within this field and establish foundational benchmarks for future exploration. Demonstrating excellence in Information Systems (IS) research, these papers explore specific facets of SMIP, showcasing a diversity of theoretical perspectives, epistemological approaches and methodological frameworks applicable to SMIP studies. These papers provide a foundation for new research explorations to advance the SMIP research agenda.

Miller et al. (2024) studied the complex aspects of social media engagement, especially how user biases and the likelihood of paying attention affect responses to disinformation. Their research illuminates the intricate relationships between political alignment, truth bias, communicative suspicion and media literacy within social media contexts, contributing significantly to disinformation studies. Notably, the study adopts an experimental approach, considering concepts such as amplification cycles, persuasion, polarisation and aversion, thus enriching Elaboration Likelihood Model (ELM) literature. Their findings demonstrate that political alignment moderates the impact of suspicion on truth bias and engagement. As alignment increases, suspicion's effect on truth bias shifts positively, while truth bias's effect on engagement becomes pronounced. Integrating new media literacy theory into ELM underscores users' ability to discern disinformation from factual content. Surprisingly, critical consuming media literacy, when viewed as a stable trait rather than an intervention, correlates positively with disinformation engagement, challenging assumptions about media literacy's efficacy in mitigating truth bias. Moreover, the study suggests that critical consuming media literacy might erroneously empower users, potentially leading to increased disinformation engagement, particularly among politically biased individuals. These findings highlight the need for educational efforts promoting scepticism across political affiliations to combat disinformation effectively. Future research could investigate social media metrics and user perceptions to enhance the understanding of engagement dynamics and realism in online environments.

Risius et al. (2024) conducted a sociotechnical investigation into online extremism, arguing for the essential integration of societal and technological perspectives in crafting more effective regulatory policies. Through a systematic review of 222 articles, they aim to map the current research landscape, identify gaps and propose future research trajectories. Their research identifies two primary research streams. The first stream focuses on understanding online extremism, particularly examining how digital technologies have transformed it compared with traditional forms. The authors highlight a gap in comprehending the amplifying effect of internet technologies on extremism and advocate for inquiries into how online extremism differs from conventional manifestations, including its impact on extremist groups' strategies and structures. The second stream concentrates on countering online extremism, stressing the need for a nuanced understanding to develop effective counterstrategies. They caution against simply replicating traditional measures and emphasise the unique challenges of online extremism, such as its broad reach and potential for radicalising a wider audience. Utilising a sociotechnical lens, the authors advocate for analysing the interaction between social and technical elements to grasp online extremism fully. They underscore the importance of addressing both individual and societal impacts of digital technologies, including considerations of user privacy and platform characteristics. Thus, they make a strong case for continuing research to understand online extremism better. They stress the importance of having many different places where research is published and considering real-world risks when making rules and policies to control online extremism.

Weismueller et al. (2024) offer insights into how misinformation and extreme political content affect social media use, emotional reactions and political polarisation. They show how different types of content lead to different emotional responses depending on a person's political beliefs. They discovered that misinformation and extreme political content get shared more often on social media than accurate or less extreme political information, confirming what others have found before. Moreover, the study reveals that exposure to misinformation and extreme partisan content often evokes more intense negative emotional responses than encounters with accurate or less extreme content. These emotional reactions significantly contribute to polarisation among social media users. Additionally, the research demonstrates that individuals with stronger political ideologies tend to experience heightened negative emotions in response to extreme partisan content, highlighting the influence of personal beliefs on user interaction with social media content. Theoretically, the study advances the understanding of how different types of information influence user behaviour, emphasising falsehood, partisanship, negative emotions and political polarisation. However, the study presents mixed findings regarding the role of political ideology in moderating emotional responses to misinformation, suggesting a complex interplay between personal beliefs and content reactions. The findings highlight how important it is for social media platforms to check information from political elites and extreme partisan sources carefully. Moreover, educating users on critically engaging with political content on social media is crucial. Future research should investigate specific content characteristics that exacerbate the sharing and polarisation effects of misinformation and extreme partisan content.

Abdalla Mikhaeil and Baskerville (2024) explore how online conspiracy theories have become more extreme, leading to radical beliefs, especially with the help of social media features. They carefully examine how these theories become more extreme online, introducing a theoretical model that considers social identity, digital platform features and online community dynamics. Central to their framework is the concept of affordances. The authors emphasise the substantial influence of digital environments in shaping and amplifying these theories, highlighting the significance of understanding online radicalisation and its societal implications. Their theoretical framework, rooted in affordances, delineates the progression from first-order affordances to second-order affordances for escalation, emphasising shared social identity and ongoing resource commitment. The study underscores the role of social media platforms like 4chan and TikTok in fostering the growth of conspiracy theories, contrasting them with older platforms such as Twitter, YouTube and Facebook. Moreover, the authors advocate for interdisciplinary approaches to develop de-escalation strategies and enhance social media governance to mitigate the spread and impact of conspiracy theories. Additionally, they stress the applicability of their findings to various contexts, including lone-wolf terrorism and events like the U.S. Capitol riot. The research highlights social identity as a crucial factor in conspiracy theory radicalisation, suggesting avenues for future research to explore similar identity-driven phenomena and develop de-escalation strategies.

Wang et al. (2024) explore how social media interactions impact people's behaviour and opinions. They emphasise how the design of social media platforms plays a crucial role in shaping social norms and behaviours. The study specifically looks at how interactions on social media, such as using features like the ‘friend function,’ can affect the division of opinions in user reviews. To study this, they use a quasi-experimental design, propensity score matching (PSM) and difference-in-differences (DID). The findings reveal that the utilisation of the friend function is linked to less polarised reviews, with a more pronounced effect observed in positive reviews than negative ones. Moreover, the analysis suggests that highly engaged users are less affected by the friend function, indicating a nuanced relationship between engagement level and social influence. Theoretically, the research challenges the notion that social influence exacerbates opinion polarisation, demonstrating its potential as a mitigating factor. It distinguishes between the normative influence of online friends and informational influence, shedding light on the mechanisms underlying polarisation reduction. Furthermore, the study highlights the practical implications for social media platform designers, advocating for deliberate design strategies to cultivate a more socially oriented normative environment and reduce polarisation. Future research avenues include investigating the effects of social influence on review polarity through field experiments, analysing review content and exploring the impact of different types of online friendships. Overall, Wang et al.'s study enriches our understanding of SMIP, offering valuable insights for both theoretical development and practical application in platform design and social interaction dynamics.

Zhu et al. (2024) discuss the role of accountability mobilisation in combating misinformation and mitigating SMIP. They investigate the intricate interplay between cultural dynamics, notably guanxi, and the effectiveness of such interventions, advocating for culturally sensitive strategies across diverse social media landscapes. The study scrutinises the efficacy of accountability mobilisation to foster the prosocial punishment of misinformation disseminators on social media, particularly within China's guanxi culture. The authors elucidate how societal divisions increasingly align along an ‘Us versus Them’ axis, posing governance challenges, eroding institutional trust and jeopardising democratic systems. In response, the Chinese government enacted regulations in 2017 to empower social media users as misinformation monitors, a move especially relevant within the context of guanxi culture, where traditional prosocial punishment mechanisms are less prevalent due to associated personal costs. Employing a Vignette Survey Experiment (VSE) on WeChat users and analysing data through a random regression model, the study unveils that accountability mobilisation significantly amplifies prosocial punishment among bystanders, potentially disrupting the SMIP pathway of misinformation. However, the moderating influence of guanxi culture dampens this effect, as individuals are less inclined to apply prosocial punishment, mainly when misinformation spreaders belong to their guanxi network. The research underscores the practical utility of regulations assigning specific accountability to individuals, such as chat group administrators, to enhance their willingness to engage in prosocial punishment. Yet, the impact of guanxi necessitates nuanced approaches across varied cultural milieus. It highlights the imperative for further inquiry into the relationship between misinformation and polarisation, examining diverse misinformation types and cultural contexts.

French et al. (2024) present a comprehensive examination of ‘disinformation’, emphasising its pivotal role in societal polarisation and proposing strategies for its mitigation. Introducing the innovative Typology of Disinformation Intentionality and Impact (DII), this paper offers a framework to classify disinformation threats based on their intentionality and impact, specifically emphasising virality and polarisation. Grounded in Information Manipulation Theory (IMT), the research elucidates how disinformation is crafted and disseminated, making substantial contributions to understanding social media-induced polarisation. Employing decision theory and risk management principles, the study advocates for proactive approaches to evaluate and categorise disinformation risks, departing from traditional reactive tactics. The DII typology classifies disinformation based on spreaders' belief in its truthfulness (intentionality) and its potential to go viral and induce polarisation (impact). This results in a matrix with four quadrants identifying different types of disinformation spreaders. A case study of disinformation campaigns during the US presidential elections illustrates the practical application of the DII typology, providing insights into the nature and scale of disinformation issues. The study proposes specific mitigation strategies for each category within the DII typology, including monitoring and responding to low-impact disinformation, raising awareness, managing high-impact campaigns and discouraging dissemination of high-impact, intentional disinformation. Furthermore, the authors advocate for establishing a Disinformation Management Officer (DMO) role within organisations. This role entails ongoing monitoring, assessment and response to disinformation threats, aligning strategies with the DII typology to combat disinformation effectively.

In conclusion, the articles featured in this special issue significantly advance our understanding of SMIP. Through various theoretical frameworks, empirical evidence and practical implications, they offer valuable insights into the intricate challenges posed by polarisation within digital contexts. Despite the perceived advantages of social media platforms, this collection of research brings to light the urgent concerns arising from increased social media driven by misinformation and polarisation, the important issues of our time that are often sidelined in public policy discourse. By exploring various facets of SMIP, these articles deepen our understanding of the phenomenon and its profound consequences, fostering a collective awareness of social media's role in perpetuating socio-cultural polarisation. The relevance of this special issue is evident as it addresses both theoretical complexities and methodological obstacles, while also suggesting potential solutions that could stimulate broader discussions on socio-cultural polarisation induced by social media. This body of work is poised to benefit scholars across diverse disciplines, policymakers and organisational leaders by equipping them with valuable insights and tools to navigate the complexities of polarisation in digital spaces. Ultimately, we anticipate that the insights gleaned from this editorial and the articles within this special issue will empower stakeholders with the conceptual and empirical resources necessary to foster inclusive and cohesive digital environments, thus mitigating the adverse impacts of polarisation induced by misinformation, disinformation, fake news, deepfakes, conspiracy theories and herded anarchy. Such polarisation often manifests itself in echo chambers, filter bubbles, cyberbalkanisation, splinternet, ghettoisation and regression to meanness, emphasising the critical importance of addressing these issues to promote informed discourse and societal cohesion.

求助全文
通过发布文献求助,成功后即可免费获取论文全文。 去求助
来源期刊
Information Systems Journal
Information Systems Journal INFORMATION SCIENCE & LIBRARY SCIENCE-
CiteScore
14.60
自引率
7.80%
发文量
44
期刊介绍: The Information Systems Journal (ISJ) is an international journal promoting the study of, and interest in, information systems. Articles are welcome on research, practice, experience, current issues and debates. The ISJ encourages submissions that reflect the wide and interdisciplinary nature of the subject and articles that integrate technological disciplines with social, contextual and management issues, based on research using appropriate research methods.The ISJ has particularly built its reputation by publishing qualitative research and it continues to welcome such papers. Quantitative research papers are also welcome but they need to emphasise the context of the research and the theoretical and practical implications of their findings.The ISJ does not publish purely technical papers.
×
引用
GB/T 7714-2015
复制
MLA
复制
APA
复制
导出至
BibTeX EndNote RefMan NoteFirst NoteExpress
×
提示
您的信息不完整,为了账户安全,请先补充。
现在去补充
×
提示
您因"违规操作"
具体请查看互助需知
我知道了
×
提示
确定
请完成安全验证×
copy
已复制链接
快去分享给好友吧!
我知道了
右上角分享
点击右上角分享
0
联系我们:info@booksci.cn Book学术提供免费学术资源搜索服务,方便国内外学者检索中英文文献。致力于提供最便捷和优质的服务体验。 Copyright © 2023 布克学术 All rights reserved.
京ICP备2023020795号-1
ghs 京公网安备 11010802042870号
Book学术文献互助
Book学术文献互助群
群 号:481959085
Book学术官方微信