Pickled eggs: Generative AI as research assistant or co-author?

Robert M. Davison, Sven Laumer, Monideepa Tarafdar, Louie H. M. Wong
{"title":"Pickled eggs: Generative AI as research assistant or co-author?","authors":"Robert M. Davison,&nbsp;Sven Laumer,&nbsp;Monideepa Tarafdar,&nbsp;Louie H. M. Wong","doi":"10.1111/isj.12455","DOIUrl":null,"url":null,"abstract":"<p>Over the last year, much has been written about Artificial Intelligence (AI) chatbots and their potential impact across the spectrum of research contexts. At the time of writing (March 2023), a Google search for the single term ‘ChatGPT’ indicates approximately 416 million hits, while Google Scholar suggests 4710 articles, a number that is likely to rise fast. Generative AI is clearly here to stay, though there are already calls to curtail its use and some countries appear to be banning it. Interestingly, we already see the roll-out of generative AI <i>detection</i> software with tools like GPTZero. The plagiarism detection tool, iThenticate, is likely to follow up and embed generative AI detection as well. In order to reflect on this rapidly developing phenomenon, we conducted a sentiment and topic analysis of Twitter tweets about ChatGPT in academia. We analysed 12 400 tweets over a 3-month period from late 2022 to early 2023. 3680 tweets display a positive sentiment, 2612 display a negative one and 6109 display a neutral view. This pattern already exhibits a shift from the earlier reactions which were predominantly negative. Nevertheless, the topics discussed in these tweets are similar and can be subsumed within two major streams: research and publication.</p><p>Our focus in this editorial is on the practicalities of applying generative AI in the academic research community. The stakeholders of relevance here are authors, reviewers, editors and readers (scholars and practitioners) of ISJ, and also its publisher. All of these stakeholders have a commitment to and interest in the undertaking and publishing of academic research. Indeed, research and publication are the twin pillars on which a scholarly journal stands. Emerging articles comment on the strengths, weaknesses, opportunities and threats of generative AI in the context of research (see for instance Dwivedi et al., <span>2023</span>, for an extensive account). In academia, there is an ongoing debate on the various implications that the technology may have.</p><p>The title of this editorial, viz.: “pickled eggs”, should have given some pause for thought. As it happened the collaborators of one of the co-authors of this editorial used a generative AI assistant (Microsoft Office 365's transcription function) to convert an audio file (of an interview with a research subject) to text. In that interview, reference was made to “pico-x”, the name of a software. The generative AI assistant duly rendered this as “pickled eggs”, presumably the closest term in its vocabulary. Reading through the transcript, numerous errors appeared: personal, product or place names were routinely misspelled, but so too were more mundane expressions. Similar errors occurred in Chinese where an interviewee used the English letters EDI (Electronic Data Interchange) in the middle of what was otherwise a Chinese sentence. The generative AI assistant, which was set up to transcribe Chinese audio to Chinese text, correctly transcribed all the Chinese words into characters, but made an error with EDI, which it also converted to characters that have nothing to do with the meaning of EDI at all. It was neither able to understand what the letters meant nor to insert them into the transcribed Chinese sentence. 
To exemplify, the following is what happened. The interviewee said “啊,你是做EDI的,你呢?” meaning “Oh, you are responsible for EDI, how about you?”. The AI assistant transcribed this as “啊,你是做一点爱的话,你呢?” meaning “Oh, you are making a little love, how about you? The sound (in Mandarin) of the characters that it inserted ‘一点爱’ is ‘yi dian ai’, which means ‘a little love’, but is not so far from the phonetics of the English letters EDI. Contextual sensitivity is clearly important, and abbreviations (or words in an unexpected language) may flummox a generative AI tool that has not been carefully trained in the idiosyncrasies of a particular domain. Thus, we need to educate people on how to use these tools effectively, and certainly not to rely on them blindly. Just as it is dangerous to accept (without review) all the spelling corrections that a spell checker recommends, so it is dangerous to assume that an AI tool will always create accurate and sensitive content without any need for human checking.</p><p>Another co-author of this editorial recently had a discussion with one of their PhD students who used ChatGTP. The PhD student used it to better understand the concept of ‘intercoder reliability’, a concept that he was using in his research and was able to converse with ChatGTP on this matter to better understand how to use it, what different forms it has, etc. He concluded that he learnt a lot about this concept by asking questions to and getting explanations from ChatGPT. This indicates that ChatGTP has the potential to change the way we search for information (as Google did previously) and how we make sense of it. It will support us as we are able to ask questions about it as one could have done with a mentor, teacher, etc. It will make possible what Apple first suggested in its Knowledge Navigator concept.1</p><p>How should academic publishers deal with ChatGPT and what will the impact on academic writing in general be? As journal editors, our interest in tools that create, as well as those that detect, is related to the contribution that they may make in helping researchers to undertake better research, to write up more cogent accounts of their research and indeed to present that research in ways that are intelligible to the audience. “An AI chatbot will likely perform these activities much more effectively and efficiently than a human, discerning patterns that elude the human brain, and so conceivably creating a better quality or more useful contribution to knowledge than could the human author …[and] ‘improve’ the linguistic fidelity of the final article, levelling the playing field for non-native speaker authors” (Dwivedi et al., <span>2023</span>). This will trigger a discussion on the importance of the medium of ‘text’ for academic communication. ChatGTP demonstrates that more people will be able to write good-quality text. Interestingly, the ACM SIGCHI has included a Grammarly licence in its membership benefits, encouraging researchers to utilise AI in their writing.</p><p>We also see that an AI programme may help authors to assimilate and analyse a large literature base, to transcribe, translate and extract quotes from interviews, to suggest draft text that synthesises that literature or those quotations. Is this just a shifting baseline (Davison &amp; Tarafdar, <span>2018</span>) in terms of what's possible, or is it something more insidious, a shift from more mundane activities to those that we used to imagine require more trenchant human intelligence? 
As AI programmes undertake more routine activities, does that mean that they could be credited as co-authors (or even sole authors) of a scholarly paper? Should they be acknowledged? Our professional copyeditors are not (credited as) our co-authors (and they may not even be acknowledged). Spelling and grammar checking tools are never mentioned. So, if ChatGPT helps authors to polish their writing then this action does not seem to qualify as worthy of acknowledgement. However, other tools (e.g., SPSS, PLS, AMOS, etc.) are usually mentioned explicitly, and so if we relied on ChatGPT for data analysis, or significant writing tasks, it would be appropriate to mention this, perhaps in the methods section of an article. Or is the use of ChatGPT a limitation that should be recognised? These are questions whose answers we hope will become apparent as the scholarly community collectively works its way through the use of generative AI tools such as ChatGPT.</p><p>The tricky question is going to be ‘how much reliance on a generative AI tool like ChatGPT is reasonable?’. For instance, in journalism there is considerable concern (Bell, <span>2023</span>) that “a platform that can mimic humans' writing with no commitment to the truth is a gift for those who benefit from disinformation”. If a newsroom considers it unethical to publish lies, then any application of ChatGPT to create content must be accompanied by a lot of human-based fact checking and editing. The same applies in academia: academic publishers will not want to (run the risk of) publish(ing) factually incorrect, let alone libellous or discriminatory, material. It is not surprising that the academic publishers are taking this seriously: Wiley (the publisher of the ISJ) is developing detection technology so as to screen submissions at the front end, though it is unclear if this will trigger an outright rejection (i.e., without the editor even knowing about it) or simply an advisory message to an editor along the lines of current plagiarism detection programmes like iThenticate. We assume that iThenticate will embed AI-generated content detection functionality into its product sooner or later, since its parent organisation, Turnitin, already does so.</p><p>Generative AI is simply the latest technology to emerge that may have an impact on how we work. The emergence of generative AI tools like ChatGPT is representative of the shifting baseline in technology that happens continually (cf. Davison &amp; Tarafdar, <span>2018</span>). Much of the early discourse about ChatGPT has focussed on the dark side, and yet there is a legitimate bright side. We should never simply ban the technology. What we need to do is to educate potential users of the technology about its impact, its applicability and the ethical issues surrounding its use in business, society and academia in particular. We live in a world where algorithms are insidious. We need to help people to use them responsibly, even as we develop ways to detect if they are used irresponsibly.</p><p>While generative AI is a legitimate topic for discussion and interest to academics, its limitations are notable. For instance, ChatGPT's inability to formulate interesting research questions means that human intelligence is required to determine what is both interesting (given practical and theoretical constraints), important and feasible. The nature of ChatGPT and similar tools is that they are only as good as their training allows them to be. 
This means that they are inherently biased by their trainers and the training materials.</p><p>Despite these limitations, AI tools potentially have value as research assistants. They can help us to develop familiarity with a research domain, assimilate and synthesise literature, analyse data and transcribe text, although errors may occur. They can help us polish our text, which is especially beneficial for non-native users of a language, thereby promoting diversity and inclusiveness in our discipline (Davison, <span>2021</span>). Authors may find AI tools beneficial when they are writing their revision notes, explaining more carefully how they responded to reviewer suggestions (Techatassanasoontorn &amp; Davison, <span>2022</span>).</p><p>Journal editors have a responsibility to establish and enforce policies that govern generative AI tool use (cf. Tarafdar &amp; Davison, <span>2021</span>), while also reinforcing and enshrining the cultural values of the journal (Davison &amp; Tarafdar, <span>2022</span>). AI tools must be subject to human oversight and control: ultimately, it is the authors who are responsible and accountable for the text that they submit.</p><p>In this issue of the ISJ, we present seven papers. In the first paper, Morquin et al. (<span>2023</span>) address the issue of resolving misfits between organisational processes and enterprise systems (Org-ES misfits). These misfits are recognised as one of the primary factors contributing to the failure of enterprise systems after their initial deployment in organisations. The authors propose a pragmatic and actionable method for the diagnosis and resolution of Org-ES misfits in pluralistic organisations where diverging individual and collective perceptions of processes are particularly pronounced. The approach builds on theoretical concepts of affordances, affordance actualization, user participation, and change agentry. An action research study in a university hospital in France was conducted to demonstrate the feasibility, effectiveness and boundary conditions of the method. The findings suggest that the proposed method effectively diagnoses misfits and optimises the resources required for their resolution through efficient management of user participation.</p><p>In the second paper, McCarthy et al. (<span>2023</span>) highlight the impact of context on controlee appraisals and responses to IS project control activities. The authors argue that in complex environments, controlees may respond in different ways depending on the salience of personal, professional, project and organisational contexts in their appraisal. The authors also contribute new insights into controlee ‘coping routes’ as consecutive cognitive and behavioural efforts at managing disruptions caused by control misalignments. A triadic model of control enactment (controller-controlee-other) is further presented which suggests that in certain situations, controlee behaviours are shaped not only by control activities but also by the coping strategies or routes of other controlees.</p><p>In the third paper, Bartelheimer et al. (<span>2023</span>) explore the triggers, occurrences and consequences of workarounds for bottom-up process innovation, drawing on data from a multiple case study. The authors investigate nine workarounds that were developed in three cases and find that when workarounds are observed by others, they may have consequences that reach far beyond their creators. 
These consequences include innovations to organisational routines, the way IT artefacts are used and even the way the organisation itself is structured. Workarounds are not always noticed, let alone steered, at the managerial level, which may lead to organisational drift (Ciborra et al., <span>2000</span>).</p><p>In the fourth paper, Engert et al. (<span>2023</span>) delve into the challenge of sustaining complementor engagement in digital platform ecosystems. Prior research has identified factors and strategies that attract complementors to a particular platform; however, interactions between the platform owner and complementor leading to high or low engagement remain to be understood. To that end, the authors turn to the engagement concept from service research, drawing from actor and stakeholder engagement to contextualise complementor engagement in digital platform research. They define complementor engagement as a state-based, partly volitional resource contribution in a digital platform ecosystem and conduct an embedded case study of two digital platform ecosystems in the enterprise software industry. They identify five antecedents that lead to three engagement behaviours and uncover how antecedent changes implicate them in subsequent stages. They then reveal four engagement trajectories, shedding light on dynamic complementor engagement and how platform governance can change those dynamics. Ultimately, the study provides an integrated understanding and empirical evidence of dynamic complementor engagement in digital platform ecosystems.</p><p>In the fifth paper, Tan et al. (<span>2023</span>) explore the world of online medical consultations (OMCs), where both patient satisfaction and gratitude are the crucial outcomes for physicians' knowledge sharing. Drawing on the affect theory of social exchange, the authors attempt to distinguish patient satisfaction from gratitude in OMCs and elucidate the relationship between the knowledge-sharing process and outcomes using data from a well-known online health platform. The research findings indicate that patient gratitude is associated with a more favourable service evaluation than satisfaction in OMCs. Physicians' informational support and emotional support have different effects on patient satisfaction and patient gratitude. Moreover, professional seniority and disease severity positively and negatively moderate the relationship between emotional support and patient gratitude, respectively. A survey-based experiment is also adopted to validate the research model with self-reported perceptual measures. This study contributes to the literature on patient gratitude, online healthcare service evaluation, knowledge sharing, and the affect theory of social exchange.</p><p>In the sixth paper, Hassandoust and Johnston (<span>2023</span>) investigate organisations that aim to establish a robust security culture to enhance safety and efficiency. A strong security culture guides employees in risk identification and appropriate actions while shaping communication and organisational responses. However, many organisations lack a mature security culture, leaving them vulnerable due to inactive socio-cultural norms. While there are extant frameworks that can aid organisations in forming and/or maturing a security culture, these frameworks largely overlook the supportive competencies essential for success. This research develops a Security Culture Model by drawing on high-reliability theory and practices of high-reliability organisations (HROs). 
The model explores how HROs' supportive competencies and Information Security practices influence security cultures. Organisational mindfulness, structure, and top management involvement play vital roles in fostering an effective security culture. These findings emphasise the significance of supportive competencies in establishing resilient security cultures within organisations.</p><p>In the seventh paper, Strunk and Strich (<span>2023</span>) address the gap of building professional holding environments in virtual work contexts. Through a qualitative exploration of worker interactions across six online crowdsourcing communities, the authors analyse how crowd workers craft their jobs and enhance their work experiences despite their precarious working conditions. Introducing the concept of professional holding environments for online communities, the authors outline an extended model for job crafting and illustrate how crowd worker develop mechanisms for of work improvement. Their findings uncover how building professional holding environments reduces workers precariousness and helps workers to draw from a collective experience in crowd work.</p>","PeriodicalId":48049,"journal":{"name":"Information Systems Journal","volume":"33 5","pages":"989-994"},"PeriodicalIF":6.5000,"publicationDate":"2023-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/isj.12455","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Systems Journal","FirstCategoryId":"91","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/isj.12455","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
引用次数: 2

Abstract

Over the last year, much has been written about Artificial Intelligence (AI) chatbots and their potential impact across the spectrum of research contexts. At the time of writing (March 2023), a Google search for the single term ‘ChatGPT’ returns approximately 416 million hits, while Google Scholar lists 4710 articles, a number that is likely to rise fast. Generative AI is clearly here to stay, though there are already calls to curtail its use and some countries appear to be banning it. Interestingly, we already see the roll-out of generative AI detection software with tools like GPTZero. The plagiarism detection tool iThenticate is likely to follow suit and embed generative AI detection as well. To reflect on this rapidly developing phenomenon, we conducted a sentiment and topic analysis of tweets about ChatGPT in academia. We analysed 12 400 tweets over a 3-month period from late 2022 to early 2023: 3680 displayed a positive sentiment, 2612 a negative one and 6109 a neutral one. This pattern already exhibits a shift from the earlier reactions, which were predominantly negative. Nevertheless, the topics discussed in these tweets are similar and can be subsumed within two major streams: research and publication.
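
How the tweets were classified is not described here; as a purely illustrative sketch (not the method behind the figures above), one could bucket tweets into the same three sentiment classes with NLTK's VADER analyser, using the conventional ±0.05 cut-offs on its compound score:

```python
# Illustrative only: bucket tweets into positive/negative/neutral sentiment
# with NLTK's VADER analyser. The example tweets and thresholds are
# assumptions; the editorial does not state which tool produced its counts.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download
analyser = SentimentIntensityAnalyzer()

def classify(tweet: str) -> str:
    """Label a tweet by VADER's compound score (conventional ±0.05 cut-offs)."""
    score = analyser.polarity_scores(tweet)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

tweets = [
    "ChatGPT drafted my literature review outline in minutes!",
    "Generative AI will flood journals with plausible nonsense.",
    "Our department is discussing a policy on AI-assisted writing.",
]
counts = {"positive": 0, "negative": 0, "neutral": 0}
for t in tweets:
    counts[classify(t)] += 1
print(counts)  # tallies for the three buckets
```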

Our focus in this editorial is on the practicalities of applying generative AI in the academic research community. The stakeholders of relevance here are authors, reviewers, editors and readers (scholars and practitioners) of ISJ, and also its publisher. All of these stakeholders have a commitment to and interest in the undertaking and publishing of academic research. Indeed, research and publication are the twin pillars on which a scholarly journal stands. Emerging articles comment on the strengths, weaknesses, opportunities and threats of generative AI in the context of research (see for instance Dwivedi et al., 2023, for an extensive account). In academia, there is an ongoing debate on the various implications that the technology may have.

The title of this editorial, viz. “pickled eggs”, should have given some pause for thought. As it happened, the collaborators of one of the co-authors of this editorial used a generative AI assistant (Microsoft Office 365's transcription function) to convert an audio file (of an interview with a research subject) to text. In that interview, reference was made to “pico-x”, the name of a software application. The generative AI assistant duly rendered this as “pickled eggs”, presumably the closest term in its vocabulary. Reading through the transcript, they found numerous errors: personal, product or place names were routinely misspelled, but so too were more mundane expressions. Similar errors occurred in Chinese, where an interviewee used the English letters EDI (Electronic Data Interchange) in the middle of what was otherwise a Chinese sentence. The generative AI assistant, which was set up to transcribe Chinese audio to Chinese text, correctly transcribed all the Chinese words into characters, but erred with EDI, converting it to characters that have nothing to do with the meaning of EDI at all. It was able neither to understand what the letters meant nor to insert them into the transcribed Chinese sentence. To exemplify, the following is what happened. The interviewee said “啊,你是做EDI的,你呢?”, meaning “Oh, you are responsible for EDI, how about you?”. The AI assistant transcribed this as “啊,你是做一点爱的话,你呢?”, meaning “Oh, you are making a little love, how about you?”. The Mandarin sound of the inserted characters ‘一点爱’ is ‘yi dian ai’, which means ‘a little love’ but is not so far from the phonetics of the English letters EDI. Contextual sensitivity is clearly important, and abbreviations (or words in an unexpected language) may flummox a generative AI tool that has not been carefully trained in the idiosyncrasies of a particular domain. Thus, we need to educate people on how to use these tools effectively, and certainly not to rely on them blindly. Just as it is dangerous to accept (without review) all the spelling corrections that a spell checker recommends, so it is dangerous to assume that an AI tool will always create accurate and sensitive content without any need for human checking.
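
A simple guard against this failure mode is to flag a transcript for human review whenever a domain term known from the interview context fails to appear in the machine output. The glossary and helper below are hypothetical, for illustration only:

```python
# Hypothetical sketch: flag ASR output for human review when a domain term
# recorded in the interviewer's notes (e.g. "EDI", "pico-x") is missing from
# the machine transcript, hinting at a mis-transcription like the one above.
DOMAIN_GLOSSARY = {"EDI", "pico-x"}  # terms the transcriber is known to mangle

def needs_review(context_notes: str, transcript: str) -> bool:
    """True if a glossary term mentioned in the notes is absent from the
    transcript, suggesting the ASR tool may have garbled it."""
    notes, text = context_notes.lower(), transcript.lower()
    return any(term.lower() in notes and term.lower() not in text
               for term in DOMAIN_GLOSSARY)

# The notes record that EDI was discussed, but the ASR rendered it as 一点爱:
print(needs_review("interview about EDI adoption", "啊,你是做一点爱的话,你呢?"))  # True
```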

Another co-author of this editorial recently had a discussion with one of their PhD students who used ChatGPT. The PhD student used it to better understand the concept of ‘intercoder reliability’, a concept that he was applying in his research, and was able to converse with ChatGPT on this matter to better understand how to use it, what different forms it takes, etc. He concluded that he learnt a lot about this concept by asking questions of, and getting explanations from, ChatGPT. This indicates that ChatGPT has the potential to change the way we search for information (as Google did previously) and how we make sense of it. It can support us by allowing us to ask questions about a topic, much as one would with a mentor or teacher. It will make possible what Apple first suggested in its Knowledge Navigator concept.¹

How should academic publishers deal with ChatGPT, and what will the impact on academic writing in general be? As journal editors, our interest in tools that create, as well as those that detect, relates to the contribution they may make in helping researchers to undertake better research, to write up more cogent accounts of their research and indeed to present that research in ways that are intelligible to the audience. “An AI chatbot will likely perform these activities much more effectively and efficiently than a human, discerning patterns that elude the human brain, and so conceivably creating a better quality or more useful contribution to knowledge than could the human author …[and] ‘improve’ the linguistic fidelity of the final article, levelling the playing field for non-native speaker authors” (Dwivedi et al., 2023). This will trigger a discussion on the importance of the medium of ‘text’ for academic communication. ChatGPT demonstrates that more people will be able to write good-quality text. Interestingly, ACM SIGCHI has included a Grammarly licence in its membership benefits, encouraging researchers to utilise AI in their writing.

We also see that an AI programme may help authors to assimilate and analyse a large literature base; to transcribe, translate and extract quotes from interviews; and to suggest draft text that synthesises that literature or those quotations. Is this just a shifting baseline (Davison & Tarafdar, 2018) in terms of what's possible, or is it something more insidious: a shift from more mundane activities to those that we used to imagine required more trenchant human intelligence? As AI programmes undertake more routine activities, does that mean that they could be credited as co-authors (or even sole authors) of a scholarly paper? Should they be acknowledged? Our professional copyeditors are not (credited as) our co-authors (and they may not even be acknowledged). Spelling and grammar checking tools are never mentioned. So, if ChatGPT helps authors to polish their writing, then this action does not seem to qualify as worthy of acknowledgement. However, other tools (e.g., SPSS, PLS, AMOS) are usually mentioned explicitly, and so if we relied on ChatGPT for data analysis or significant writing tasks, it would be appropriate to mention this, perhaps in the methods section of an article. Or is the use of ChatGPT a limitation that should be recognised? These are questions whose answers we hope will become apparent as the scholarly community collectively works its way through the use of generative AI tools such as ChatGPT.

The tricky question is going to be ‘how much reliance on a generative AI tool like ChatGPT is reasonable?’. For instance, in journalism there is considerable concern (Bell, 2023) that “a platform that can mimic humans' writing with no commitment to the truth is a gift for those who benefit from disinformation”. If a newsroom considers it unethical to publish lies, then any application of ChatGPT to create content must be accompanied by substantial human fact-checking and editing. The same applies in academia: academic publishers will not want to (run the risk of) publish(ing) factually incorrect, let alone libellous or discriminatory, material. It is not surprising that academic publishers are taking this seriously: Wiley (the publisher of the ISJ) is developing detection technology so as to screen submissions at the front end, though it is unclear whether this will trigger an outright rejection (i.e., without the editor even knowing about it) or simply an advisory message to an editor, along the lines of current plagiarism detection programmes like iThenticate. We assume that iThenticate will embed AI-generated content detection functionality into its product sooner or later, since its parent organisation, Turnitin, already does so.
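
How such detectors work is not disclosed here; one commonly cited signal, which GPTZero itself describes using alongside ‘burstiness’, is perplexity: machine-generated prose tends to be more statistically predictable than human writing. The following sketch (assuming the Hugging Face transformers library and a small GPT-2 model, and in no way reflecting Wiley's or Turnitin's actual technology) computes that signal:

```python
# Illustrative sketch of a perplexity signal for AI-text detection.
# Low perplexity merely *suggests* machine-generated text; real detectors
# combine many signals, and this is not any vendor's actual method.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2 (exp of the mean token loss)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

sample = "Generative AI is clearly here to stay, though there are calls to curtail its use."
print(f"perplexity: {perplexity(sample):.1f}")
```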

Generative AI is simply the latest technology to emerge that may have an impact on how we work. The emergence of generative AI tools like ChatGPT is representative of the shifting baseline in technology that happens continually (cf. Davison & Tarafdar, 2018). Much of the early discourse about ChatGPT has focussed on the dark side, and yet there is a legitimate bright side. We should never simply ban the technology. What we need to do is to educate potential users of the technology about its impact, its applicability and the ethical issues surrounding its use in business, society and academia in particular. We live in a world where algorithms are insidious. We need to help people to use them responsibly, even as we develop ways to detect if they are used irresponsibly.

While generative AI is a legitimate topic for discussion and of interest to academics, its limitations are notable. For instance, ChatGPT's inability to formulate interesting research questions means that human intelligence is required to determine what is at once interesting (given practical and theoretical constraints), important and feasible. The nature of ChatGPT and similar tools is that they are only as good as their training allows them to be. This means that they are inherently biased by their trainers and the training materials.

Despite these limitations, AI tools potentially have value as research assistants. They can help us to develop familiarity with a research domain, assimilate and synthesise literature, analyse data and transcribe text, although errors may occur. They can help us polish our text, which is especially beneficial for non-native users of a language, thereby promoting diversity and inclusiveness in our discipline (Davison, 2021). Authors may find AI tools beneficial when they are writing their revision notes, explaining more carefully how they responded to reviewer suggestions (Techatassanasoontorn & Davison, 2022).

Journal editors have a responsibility to establish and enforce policies that govern generative AI tool use (cf. Tarafdar & Davison, 2021), while also reinforcing and enshrining the cultural values of the journal (Davison & Tarafdar, 2022). AI tools must be subject to human oversight and control: ultimately, it is the authors who are responsible and accountable for the text that they submit.

In this issue of the ISJ, we present seven papers. In the first paper, Morquin et al. (2023) address the issue of resolving misfits between organisational processes and enterprise systems (Org-ES misfits). These misfits are recognised as one of the primary factors contributing to the failure of enterprise systems after their initial deployment in organisations. The authors propose a pragmatic and actionable method for the diagnosis and resolution of Org-ES misfits in pluralistic organisations where diverging individual and collective perceptions of processes are particularly pronounced. The approach builds on theoretical concepts of affordances, affordance actualisation, user participation, and change agentry. An action research study in a university hospital in France was conducted to demonstrate the feasibility, effectiveness and boundary conditions of the method. The findings suggest that the proposed method effectively diagnoses misfits and optimises the resources required for their resolution through efficient management of user participation.

In the second paper, McCarthy et al. (2023) highlight the impact of context on controlee appraisals and responses to IS project control activities. The authors argue that in complex environments, controlees may respond in different ways depending on the salience of personal, professional, project and organisational contexts in their appraisal. The authors also contribute new insights into controlee ‘coping routes’ as consecutive cognitive and behavioural efforts at managing disruptions caused by control misalignments. A triadic model of control enactment (controller-controlee-other) is further presented which suggests that in certain situations, controlee behaviours are shaped not only by control activities but also by the coping strategies or routes of other controlees.

In the third paper, Bartelheimer et al. (2023) explore the triggers, occurrences and consequences of workarounds for bottom-up process innovation, drawing on data from a multiple case study. The authors investigate nine workarounds that were developed in three cases and find that when workarounds are observed by others, they may have consequences that reach far beyond their creators. These consequences include innovations to organisational routines, the way IT artefacts are used and even the way the organisation itself is structured. Workarounds are not always noticed, let alone steered, at the managerial level, which may lead to organisational drift (Ciborra et al., 2000).

In the fourth paper, Engert et al. (2023) delve into the challenge of sustaining complementor engagement in digital platform ecosystems. Prior research has identified factors and strategies that attract complementors to a particular platform; however, the interactions between platform owner and complementors that lead to high or low engagement remain poorly understood. To that end, the authors turn to the engagement concept from service research, drawing from actor and stakeholder engagement to contextualise complementor engagement in digital platform research. They define complementor engagement as a state-based, partly volitional resource contribution in a digital platform ecosystem and conduct an embedded case study of two digital platform ecosystems in the enterprise software industry. They identify five antecedents that lead to three engagement behaviours and uncover how changes in these antecedents implicate the engagement behaviours in subsequent stages. They then reveal four engagement trajectories, shedding light on dynamic complementor engagement and how platform governance can change those dynamics. Ultimately, the study provides an integrated understanding and empirical evidence of dynamic complementor engagement in digital platform ecosystems.

In the fifth paper, Tan et al. (2023) explore the world of online medical consultations (OMCs), where both patient satisfaction and gratitude are the crucial outcomes for physicians' knowledge sharing. Drawing on the affect theory of social exchange, the authors attempt to distinguish patient satisfaction from gratitude in OMCs and elucidate the relationship between the knowledge-sharing process and outcomes using data from a well-known online health platform. The research findings indicate that patient gratitude is associated with a more favourable service evaluation than satisfaction in OMCs. Physicians' informational support and emotional support have different effects on patient satisfaction and patient gratitude. Moreover, professional seniority and disease severity positively and negatively moderate the relationship between emotional support and patient gratitude, respectively. A survey-based experiment is also adopted to validate the research model with self-reported perceptual measures. This study contributes to the literature on patient gratitude, online healthcare service evaluation, knowledge sharing, and the affect theory of social exchange.

In the sixth paper, Hassandoust and Johnston (2023) investigate organisations that aim to establish a robust security culture to enhance safety and efficiency. A strong security culture guides employees in risk identification and appropriate actions while shaping communication and organisational responses. However, many organisations lack a mature security culture, leaving them vulnerable due to inactive socio-cultural norms. While there are extant frameworks that can aid organisations in forming and/or maturing a security culture, these frameworks largely overlook the supportive competencies essential for success. This research develops a Security Culture Model by drawing on high-reliability theory and practices of high-reliability organisations (HROs). The model explores how HROs' supportive competencies and Information Security practices influence security cultures. Organisational mindfulness, structure, and top management involvement play vital roles in fostering an effective security culture. These findings emphasise the significance of supportive competencies in establishing resilient security cultures within organisations.

In the seventh paper, Strunk and Strich (2023) address the gap of building professional holding environments in virtual work contexts. Through a qualitative exploration of worker interactions across six online crowdsourcing communities, the authors analyse how crowd workers craft their jobs and enhance their work experiences despite their precarious working conditions. Introducing the concept of professional holding environments for online communities, the authors outline an extended model of job crafting and illustrate how crowd workers develop mechanisms for work improvement. Their findings uncover how building professional holding environments reduces workers' precariousness and helps workers to draw from a collective experience in crowd work.
